Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

ASoC: codecs: lpass: add support for v2.5 rx macro

Merge series from Srinivas Kandagatla <srinivas.kandagatla@linaro.org>:

This patchset adds support for reading the codec version and also adds
support for the v2.5 codec version in the rx macro.

LPASS versions 2.5 and up have changes in some of the rx blocks that
are required to get the headset functioning correctly.

Tested on SM8450, X13s, and x1e80100 CRD.

These changes also fix issues with sm8450, sm8550, sm8660 and x1e80100.

+3899 -1956
+11 -24
Documentation/admin-guide/LSM/tomoyo.rst
··· 9 9 10 10 LiveCD-based tutorials are available at 11 11 12 - http://tomoyo.sourceforge.jp/1.8/ubuntu12.04-live.html 13 - http://tomoyo.sourceforge.jp/1.8/centos6-live.html 12 + https://tomoyo.sourceforge.net/1.8/ubuntu12.04-live.html 13 + https://tomoyo.sourceforge.net/1.8/centos6-live.html 14 14 15 15 Though these tutorials use non-LSM version of TOMOYO, they are useful for you 16 16 to know what TOMOYO is. ··· 21 21 Build the kernel with ``CONFIG_SECURITY_TOMOYO=y`` and pass ``security=tomoyo`` on 22 22 kernel's command line. 23 23 24 - Please see http://tomoyo.osdn.jp/2.5/ for details. 24 + Please see https://tomoyo.sourceforge.net/2.6/ for details. 25 25 26 26 Where is documentation? 27 27 ======================= 28 28 29 29 User <-> Kernel interface documentation is available at 30 - https://tomoyo.osdn.jp/2.5/policy-specification/index.html . 30 + https://tomoyo.sourceforge.net/2.6/policy-specification/index.html . 31 31 32 32 Materials we prepared for seminars and symposiums are available at 33 - https://osdn.jp/projects/tomoyo/docs/?category_id=532&language_id=1 . 33 + https://sourceforge.net/projects/tomoyo/files/docs/ . 34 34 Below lists are chosen from three aspects. 35 35 36 36 What is TOMOYO? 37 37 TOMOYO Linux Overview 38 - https://osdn.jp/projects/tomoyo/docs/lca2009-takeda.pdf 38 + https://sourceforge.net/projects/tomoyo/files/docs/lca2009-takeda.pdf 39 39 TOMOYO Linux: pragmatic and manageable security for Linux 40 - https://osdn.jp/projects/tomoyo/docs/freedomhectaipei-tomoyo.pdf 40 + https://sourceforge.net/projects/tomoyo/files/docs/freedomhectaipei-tomoyo.pdf 41 41 TOMOYO Linux: A Practical Method to Understand and Protect Your Own Linux Box 42 - https://osdn.jp/projects/tomoyo/docs/PacSec2007-en-no-demo.pdf 42 + https://sourceforge.net/projects/tomoyo/files/docs/PacSec2007-en-no-demo.pdf 43 43 44 44 What can TOMOYO do? 
45 45 Deep inside TOMOYO Linux 46 - https://osdn.jp/projects/tomoyo/docs/lca2009-kumaneko.pdf 46 + https://sourceforge.net/projects/tomoyo/files/docs/lca2009-kumaneko.pdf 47 47 The role of "pathname based access control" in security. 48 - https://osdn.jp/projects/tomoyo/docs/lfj2008-bof.pdf 48 + https://sourceforge.net/projects/tomoyo/files/docs/lfj2008-bof.pdf 49 49 50 50 History of TOMOYO? 51 51 Realities of Mainlining 52 - https://osdn.jp/projects/tomoyo/docs/lfj2008.pdf 53 - 54 - What is future plan? 55 - ==================== 56 - 57 - We believe that inode based security and name based security are complementary 58 - and both should be used together. But unfortunately, so far, we cannot enable 59 - multiple LSM modules at the same time. We feel sorry that you have to give up 60 - SELinux/SMACK/AppArmor etc. when you want to use TOMOYO. 61 - 62 - We hope that LSM becomes stackable in future. Meanwhile, you can use non-LSM 63 - version of TOMOYO, available at http://tomoyo.osdn.jp/1.8/ . 64 - LSM version of TOMOYO is a subset of non-LSM version of TOMOYO. We are planning 65 - to port non-LSM version's functionalities to LSM versions. 52 + https://sourceforge.net/projects/tomoyo/files/docs/lfj2008.pdf
+2 -2
Documentation/admin-guide/mm/transhuge.rst
··· 467 467 instead falls back to using huge pages with lower orders or 468 468 small pages even though the allocation was successful. 469 469 470 - anon_swpout 470 + swpout 471 471 is incremented every time a huge page is swapped out in one 472 472 piece without splitting. 473 473 474 - anon_swpout_fallback 474 + swpout_fallback 475 475 is incremented if a huge page has to be split before swapout. 476 476 Usually because failed to allocate some continuous swap space 477 477 for the huge page.
+2 -2
Documentation/cdrom/cdrom-standard.rst
··· 217 217 int (*media_changed)(struct cdrom_device_info *, int); 218 218 int (*tray_move)(struct cdrom_device_info *, int); 219 219 int (*lock_door)(struct cdrom_device_info *, int); 220 - int (*select_speed)(struct cdrom_device_info *, int); 220 + int (*select_speed)(struct cdrom_device_info *, unsigned long); 221 221 int (*get_last_session) (struct cdrom_device_info *, 222 222 struct cdrom_multisession *); 223 223 int (*get_mcn)(struct cdrom_device_info *, struct cdrom_mcn *); ··· 396 396 397 397 :: 398 398 399 - int select_speed(struct cdrom_device_info *cdi, int speed) 399 + int select_speed(struct cdrom_device_info *cdi, unsigned long speed) 400 400 401 401 Some CD-ROM drives are capable of changing their head-speed. There 402 402 are several reasons for changing the speed of a CD-ROM drive. Badly
+1 -2
Documentation/devicetree/bindings/arm/stm32/st,mlahb.yaml
··· 54 54 55 55 examples: 56 56 - | 57 - mlahb: ahb@38000000 { 57 + ahb { 58 58 compatible = "st,mlahb", "simple-bus"; 59 59 #address-cells = <1>; 60 60 #size-cells = <1>; 61 - reg = <0x10000000 0x40000>; 62 61 ranges; 63 62 dma-ranges = <0x00000000 0x38000000 0x10000>, 64 63 <0x10000000 0x10000000 0x60000>,
+3 -3
Documentation/devicetree/bindings/arm/sunxi.yaml
··· 57 57 - const: allwinner,sun8i-v3s 58 58 59 59 - description: Anbernic RG35XX (2024) 60 - - items: 60 + items: 61 61 - const: anbernic,rg35xx-2024 62 62 - const: allwinner,sun50i-h700 63 63 64 64 - description: Anbernic RG35XX Plus 65 - - items: 65 + items: 66 66 - const: anbernic,rg35xx-plus 67 67 - const: allwinner,sun50i-h700 68 68 69 69 - description: Anbernic RG35XX H 70 - - items: 70 + items: 71 71 - const: anbernic,rg35xx-h 72 72 - const: allwinner,sun50i-h700 73 73
+14 -5
Documentation/devicetree/bindings/input/elan,ekth6915.yaml
··· 18 18 19 19 properties: 20 20 compatible: 21 - enum: 22 - - elan,ekth6915 23 - - ilitek,ili2901 21 + oneOf: 22 + - items: 23 + - enum: 24 + - elan,ekth5015m 25 + - const: elan,ekth6915 26 + - const: elan,ekth6915 24 27 25 28 reg: 26 29 const: 0x10 ··· 35 32 36 33 reset-gpios: 37 34 description: Reset GPIO; not all touchscreens using eKTH6915 hook this up. 35 + 36 + no-reset-on-power-off: 37 + type: boolean 38 + description: 39 + Reset line is wired so that it can (and should) be left deasserted when 40 + the power supply is off. 38 41 39 42 vcc33-supply: 40 43 description: The 3.3V supply to the touchscreen. ··· 67 58 #address-cells = <1>; 68 59 #size-cells = <0>; 69 60 70 - ap_ts: touchscreen@10 { 71 - compatible = "elan,ekth6915"; 61 + touchscreen@10 { 62 + compatible = "elan,ekth5015m", "elan,ekth6915"; 72 63 reg = <0x10>; 73 64 74 65 interrupt-parent = <&tlmm>;
+66
Documentation/devicetree/bindings/input/ilitek,ili2901.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/input/ilitek,ili2901.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Ilitek ILI2901 touchscreen controller 8 + 9 + maintainers: 10 + - Jiri Kosina <jkosina@suse.com> 11 + 12 + description: 13 + Supports the Ilitek ILI2901 touchscreen controller. 14 + This touchscreen controller uses the i2c-hid protocol with a reset GPIO. 15 + 16 + allOf: 17 + - $ref: /schemas/input/touchscreen/touchscreen.yaml# 18 + 19 + properties: 20 + compatible: 21 + enum: 22 + - ilitek,ili2901 23 + 24 + reg: 25 + maxItems: 1 26 + 27 + interrupts: 28 + maxItems: 1 29 + 30 + panel: true 31 + 32 + reset-gpios: 33 + maxItems: 1 34 + 35 + vcc33-supply: true 36 + 37 + vccio-supply: true 38 + 39 + required: 40 + - compatible 41 + - reg 42 + - interrupts 43 + - vcc33-supply 44 + 45 + additionalProperties: false 46 + 47 + examples: 48 + - | 49 + #include <dt-bindings/gpio/gpio.h> 50 + #include <dt-bindings/interrupt-controller/irq.h> 51 + 52 + i2c { 53 + #address-cells = <1>; 54 + #size-cells = <0>; 55 + 56 + touchscreen@41 { 57 + compatible = "ilitek,ili2901"; 58 + reg = <0x41>; 59 + 60 + interrupt-parent = <&tlmm>; 61 + interrupts = <9 IRQ_TYPE_LEVEL_LOW>; 62 + 63 + reset-gpios = <&tlmm 8 GPIO_ACTIVE_LOW>; 64 + vcc33-supply = <&pp3300_ts>; 65 + }; 66 + };
+11 -1
Documentation/kbuild/kconfig-language.rst
··· 150 150 That will limit the usefulness but on the other hand avoid 151 151 the illegal configurations all over. 152 152 153 + If "select" <symbol> is followed by "if" <expr>, <symbol> will be 154 + selected by the logical AND of the value of the current menu symbol 155 + and <expr>. This means, the lower limit can be downgraded due to the 156 + presence of "if" <expr>. This behavior may seem weird, but we rely on 157 + it. (The future of this behavior is undecided.) 158 + 153 159 - weak reverse dependencies: "imply" <symbol> ["if" <expr>] 154 160 155 161 This is similar to "select" as it enforces a lower limit on another ··· 190 184 ability to hook into a secondary subsystem while allowing the user to 191 185 configure that subsystem out without also having to unset these drivers. 192 186 193 - Note: If the combination of FOO=y and BAR=m causes a link error, 187 + Note: If the combination of FOO=y and BAZ=m causes a link error, 194 188 you can guard the function call with IS_REACHABLE():: 195 189 196 190 foo_init() ··· 207 201 tristate "foo" 208 202 imply BAR 209 203 imply BAZ 204 + 205 + Note: If "imply" <symbol> is followed by "if" <expr>, the default of <symbol> 206 + will be the logical AND of the value of the current menu symbol and <expr>. 207 + (The future of this behavior is undecided.) 210 208 211 209 - limiting menu display: "visible if" <expr> 212 210
+13 -18
Documentation/networking/af_xdp.rst
··· 329 329 sxdp_shared_umem_fd field as you registered the UMEM on that 330 330 socket. These two sockets will now share one and the same UMEM. 331 331 332 - In this case, it is possible to use the NIC's packet steering 333 - capabilities to steer the packets to the right queue. This is not 334 - possible in the previous example as there is only one queue shared 335 - among sockets, so the NIC cannot do this steering as it can only steer 336 - between queues. 332 + There is no need to supply an XDP program like the one in the previous 333 + case where sockets were bound to the same queue id and 334 + device. Instead, use the NIC's packet steering capabilities to steer 335 + the packets to the right queue. In the previous example, there is only 336 + one queue shared among sockets, so the NIC cannot do this steering. It 337 + can only steer between queues. 337 338 338 - In libxdp (or libbpf prior to version 1.0), you need to use the 339 - xsk_socket__create_shared() API as it takes a reference to a FILL ring 340 - and a COMPLETION ring that will be created for you and bound to the 341 - shared UMEM. You can use this function for all the sockets you create, 342 - or you can use it for the second and following ones and use 343 - xsk_socket__create() for the first one. Both methods yield the same 344 - result. 339 + In libbpf, you need to use the xsk_socket__create_shared() API as it 340 + takes a reference to a FILL ring and a COMPLETION ring that will be 341 + created for you and bound to the shared UMEM. You can use this 342 + function for all the sockets you create, or you can use it for the 343 + second and following ones and use xsk_socket__create() for the first 344 + one. Both methods yield the same result. 345 345 346 346 Note that a UMEM can be shared between sockets on the same queue id 347 347 and device, as well as between queues on the same device and between 348 - devices at the same time. 
It is also possible to redirect to any 349 - socket as long as it is bound to the same umem with XDP_SHARED_UMEM. 348 + devices at the same time. 350 349 351 350 XDP_USE_NEED_WAKEUP bind flag 352 351 ----------------------------- ··· 821 822 to the same queue id Y. In zero-copy mode, you should use the 822 823 switch, or other distribution mechanism, in your NIC to direct 823 824 traffic to the correct queue id and socket. 824 - 825 - Note that if you are using the XDP_SHARED_UMEM option, it is 826 - possible to switch traffic between any socket bound to the same 827 - umem. 828 825 829 826 Q: My packets are sometimes corrupted. What is wrong? 830 827
+1 -1
Documentation/userspace-api/media/v4l/dev-subdev.rst
··· 582 582 Devices generating the streams may allow enabling and disabling some of the 583 583 routes or have a fixed routing configuration. If the routes can be disabled, not 584 584 declaring the routes (or declaring them without 585 - ``VIDIOC_SUBDEV_STREAM_FL_ACTIVE`` flag set) in ``VIDIOC_SUBDEV_S_ROUTING`` will 585 + ``V4L2_SUBDEV_STREAM_FL_ACTIVE`` flag set) in ``VIDIOC_SUBDEV_S_ROUTING`` will 586 586 disable the routes. ``VIDIOC_SUBDEV_S_ROUTING`` will still return such routes 587 587 back to the user in the routes array, with the ``V4L2_SUBDEV_STREAM_FL_ACTIVE`` 588 588 flag unset.
+1 -3
MAINTAINERS
··· 1107 1107 S: Supported 1108 1108 F: Documentation/admin-guide/pm/amd-pstate.rst 1109 1109 F: drivers/cpufreq/amd-pstate* 1110 - F: include/linux/amd-pstate.h 1111 1110 F: tools/power/x86/amd_pstate_tracer/amd_pstate_trace.py 1112 1111 1113 1112 AMD PTDMA DRIVER ··· 15237 15238 F: include/linux/most.h 15238 15239 15239 15240 MOTORCOMM PHY DRIVER 15240 - M: Peter Geis <pgwipeout@gmail.com> 15241 15241 M: Frank <Frank.Sae@motor-comm.com> 15242 15242 L: netdev@vger.kernel.org 15243 15243 S: Maintained ··· 22677 22679 L: tomoyo-dev@lists.osdn.me (subscribers-only, for developers in Japanese) 22678 22680 L: tomoyo-users@lists.osdn.me (subscribers-only, for users in Japanese) 22679 22681 S: Maintained 22680 - W: https://tomoyo.osdn.jp/ 22682 + W: https://tomoyo.sourceforge.net/ 22681 22683 F: security/tomoyo/ 22682 22684 22683 22685 TOPSTAR LAPTOP EXTRAS DRIVER
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 10 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc2 5 + EXTRAVERSION = -rc3 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+3 -3
arch/arm64/include/asm/el2_setup.h
··· 146 146 /* Coprocessor traps */ 147 147 .macro __init_el2_cptr 148 148 __check_hvhe .LnVHE_\@, x1 149 - mov x0, #(CPACR_EL1_FPEN_EL1EN | CPACR_EL1_FPEN_EL0EN) 149 + mov x0, #CPACR_ELx_FPEN 150 150 msr cpacr_el1, x0 151 151 b .Lskip_set_cptr_\@ 152 152 .LnVHE_\@: ··· 277 277 278 278 // (h)VHE case 279 279 mrs x0, cpacr_el1 // Disable SVE traps 280 - orr x0, x0, #(CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN) 280 + orr x0, x0, #CPACR_ELx_ZEN 281 281 msr cpacr_el1, x0 282 282 b .Lskip_set_cptr_\@ 283 283 ··· 298 298 299 299 // (h)VHE case 300 300 mrs x0, cpacr_el1 // Disable SME traps 301 - orr x0, x0, #(CPACR_EL1_SMEN_EL0EN | CPACR_EL1_SMEN_EL1EN) 301 + orr x0, x0, #CPACR_ELx_SMEN 302 302 msr cpacr_el1, x0 303 303 b .Lskip_set_cptr_sme_\@ 304 304
+16 -20
arch/arm64/include/asm/io.h
··· 153 153 * emit the large TLP from the CPU. 154 154 */ 155 155 156 - static inline void __const_memcpy_toio_aligned32(volatile u32 __iomem *to, 157 - const u32 *from, size_t count) 156 + static __always_inline void 157 + __const_memcpy_toio_aligned32(volatile u32 __iomem *to, const u32 *from, 158 + size_t count) 158 159 { 159 160 switch (count) { 160 161 case 8: ··· 197 196 198 197 void __iowrite32_copy_full(void __iomem *to, const void *from, size_t count); 199 198 200 - static inline void __const_iowrite32_copy(void __iomem *to, const void *from, 201 - size_t count) 199 + static __always_inline void 200 + __iowrite32_copy(void __iomem *to, const void *from, size_t count) 202 201 { 203 - if (count == 8 || count == 4 || count == 2 || count == 1) { 202 + if (__builtin_constant_p(count) && 203 + (count == 8 || count == 4 || count == 2 || count == 1)) { 204 204 __const_memcpy_toio_aligned32(to, from, count); 205 205 dgh(); 206 206 } else { 207 207 __iowrite32_copy_full(to, from, count); 208 208 } 209 209 } 210 + #define __iowrite32_copy __iowrite32_copy 210 211 211 - #define __iowrite32_copy(to, from, count) \ 212 - (__builtin_constant_p(count) ? 
\ 213 - __const_iowrite32_copy(to, from, count) : \ 214 - __iowrite32_copy_full(to, from, count)) 215 - 216 - static inline void __const_memcpy_toio_aligned64(volatile u64 __iomem *to, 217 - const u64 *from, size_t count) 212 + static __always_inline void 213 + __const_memcpy_toio_aligned64(volatile u64 __iomem *to, const u64 *from, 214 + size_t count) 218 215 { 219 216 switch (count) { 220 217 case 8: ··· 254 255 255 256 void __iowrite64_copy_full(void __iomem *to, const void *from, size_t count); 256 257 257 - static inline void __const_iowrite64_copy(void __iomem *to, const void *from, 258 - size_t count) 258 + static __always_inline void 259 + __iowrite64_copy(void __iomem *to, const void *from, size_t count) 259 260 { 260 - if (count == 8 || count == 4 || count == 2 || count == 1) { 261 + if (__builtin_constant_p(count) && 262 + (count == 8 || count == 4 || count == 2 || count == 1)) { 261 263 __const_memcpy_toio_aligned64(to, from, count); 262 264 dgh(); 263 265 } else { 264 266 __iowrite64_copy_full(to, from, count); 265 267 } 266 268 } 267 - 268 - #define __iowrite64_copy(to, from, count) \ 269 - (__builtin_constant_p(count) ? \ 270 - __const_iowrite64_copy(to, from, count) : \ 271 - __iowrite64_copy_full(to, from, count)) 269 + #define __iowrite64_copy __iowrite64_copy 272 270 273 271 /* 274 272 * I/O memory mapping functions.
+6
arch/arm64/include/asm/kvm_arm.h
··· 305 305 GENMASK(19, 14) | \ 306 306 BIT(11)) 307 307 308 + #define CPTR_VHE_EL2_RES0 (GENMASK(63, 32) | \ 309 + GENMASK(27, 26) | \ 310 + GENMASK(23, 22) | \ 311 + GENMASK(19, 18) | \ 312 + GENMASK(15, 0)) 313 + 308 314 /* Hyp Debug Configuration Register bits */ 309 315 #define MDCR_EL2_E2TB_MASK (UL(0x3)) 310 316 #define MDCR_EL2_E2TB_SHIFT (UL(24))
+66 -5
arch/arm64/include/asm/kvm_emulate.h
··· 557 557 vcpu_set_flag((v), e); \ 558 558 } while (0) 559 559 560 + #define __build_check_all_or_none(r, bits) \ 561 + BUILD_BUG_ON(((r) & (bits)) && ((r) & (bits)) != (bits)) 562 + 563 + #define __cpacr_to_cptr_clr(clr, set) \ 564 + ({ \ 565 + u64 cptr = 0; \ 566 + \ 567 + if ((set) & CPACR_ELx_FPEN) \ 568 + cptr |= CPTR_EL2_TFP; \ 569 + if ((set) & CPACR_ELx_ZEN) \ 570 + cptr |= CPTR_EL2_TZ; \ 571 + if ((set) & CPACR_ELx_SMEN) \ 572 + cptr |= CPTR_EL2_TSM; \ 573 + if ((clr) & CPACR_ELx_TTA) \ 574 + cptr |= CPTR_EL2_TTA; \ 575 + if ((clr) & CPTR_EL2_TAM) \ 576 + cptr |= CPTR_EL2_TAM; \ 577 + if ((clr) & CPTR_EL2_TCPAC) \ 578 + cptr |= CPTR_EL2_TCPAC; \ 579 + \ 580 + cptr; \ 581 + }) 582 + 583 + #define __cpacr_to_cptr_set(clr, set) \ 584 + ({ \ 585 + u64 cptr = 0; \ 586 + \ 587 + if ((clr) & CPACR_ELx_FPEN) \ 588 + cptr |= CPTR_EL2_TFP; \ 589 + if ((clr) & CPACR_ELx_ZEN) \ 590 + cptr |= CPTR_EL2_TZ; \ 591 + if ((clr) & CPACR_ELx_SMEN) \ 592 + cptr |= CPTR_EL2_TSM; \ 593 + if ((set) & CPACR_ELx_TTA) \ 594 + cptr |= CPTR_EL2_TTA; \ 595 + if ((set) & CPTR_EL2_TAM) \ 596 + cptr |= CPTR_EL2_TAM; \ 597 + if ((set) & CPTR_EL2_TCPAC) \ 598 + cptr |= CPTR_EL2_TCPAC; \ 599 + \ 600 + cptr; \ 601 + }) 602 + 603 + #define cpacr_clear_set(clr, set) \ 604 + do { \ 605 + BUILD_BUG_ON((set) & CPTR_VHE_EL2_RES0); \ 606 + BUILD_BUG_ON((clr) & CPACR_ELx_E0POE); \ 607 + __build_check_all_or_none((clr), CPACR_ELx_FPEN); \ 608 + __build_check_all_or_none((set), CPACR_ELx_FPEN); \ 609 + __build_check_all_or_none((clr), CPACR_ELx_ZEN); \ 610 + __build_check_all_or_none((set), CPACR_ELx_ZEN); \ 611 + __build_check_all_or_none((clr), CPACR_ELx_SMEN); \ 612 + __build_check_all_or_none((set), CPACR_ELx_SMEN); \ 613 + \ 614 + if (has_vhe() || has_hvhe()) \ 615 + sysreg_clear_set(cpacr_el1, clr, set); \ 616 + else \ 617 + sysreg_clear_set(cptr_el2, \ 618 + __cpacr_to_cptr_clr(clr, set), \ 619 + __cpacr_to_cptr_set(clr, set));\ 620 + } while (0) 621 + 560 622 static __always_inline void 
kvm_write_cptr_el2(u64 val) 561 623 { 562 624 if (has_vhe() || has_hvhe()) ··· 632 570 u64 val; 633 571 634 572 if (has_vhe()) { 635 - val = (CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN | 636 - CPACR_EL1_ZEN_EL1EN); 573 + val = (CPACR_ELx_FPEN | CPACR_EL1_ZEN_EL1EN); 637 574 if (cpus_have_final_cap(ARM64_SME)) 638 575 val |= CPACR_EL1_SMEN_EL1EN; 639 576 } else if (has_hvhe()) { 640 - val = (CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN); 577 + val = CPACR_ELx_FPEN; 641 578 642 579 if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs()) 643 - val |= CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN; 580 + val |= CPACR_ELx_ZEN; 644 581 if (cpus_have_final_cap(ARM64_SME)) 645 - val |= CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN; 582 + val |= CPACR_ELx_SMEN; 646 583 } else { 647 584 val = CPTR_NVHE_EL2_RES1; 648 585
+24 -1
arch/arm64/include/asm/kvm_host.h
··· 76 76 DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); 77 77 78 78 extern unsigned int __ro_after_init kvm_sve_max_vl; 79 + extern unsigned int __ro_after_init kvm_host_sve_max_vl; 79 80 int __init kvm_arm_init_sve(void); 80 81 81 82 u32 __attribute_const__ kvm_target_cpu(void); ··· 522 521 u64 *vncr_array; 523 522 }; 524 523 524 + struct cpu_sve_state { 525 + __u64 zcr_el1; 526 + 527 + /* 528 + * Ordering is important since __sve_save_state/__sve_restore_state 529 + * relies on it. 530 + */ 531 + __u32 fpsr; 532 + __u32 fpcr; 533 + 534 + /* Must be SVE_VQ_BYTES (128 bit) aligned. */ 535 + __u8 sve_regs[]; 536 + }; 537 + 525 538 /* 526 539 * This structure is instantiated on a per-CPU basis, and contains 527 540 * data that is: ··· 549 534 */ 550 535 struct kvm_host_data { 551 536 struct kvm_cpu_context host_ctxt; 552 - struct user_fpsimd_state *fpsimd_state; /* hyp VA */ 537 + 538 + /* 539 + * All pointers in this union are hyp VA. 540 + * sve_state is only used in pKVM and if system_supports_sve(). 541 + */ 542 + union { 543 + struct user_fpsimd_state *fpsimd_state; 544 + struct cpu_sve_state *sve_state; 545 + }; 553 546 554 547 /* Ownership of the FP regs */ 555 548 enum {
+3 -1
arch/arm64/include/asm/kvm_hyp.h
··· 111 111 112 112 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs); 113 113 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs); 114 - void __sve_restore_state(void *sve_pffr, u32 *fpsr); 114 + void __sve_save_state(void *sve_pffr, u32 *fpsr, int save_ffr); 115 + void __sve_restore_state(void *sve_pffr, u32 *fpsr, int restore_ffr); 115 116 116 117 u64 __guest_enter(struct kvm_vcpu *vcpu); 117 118 ··· 143 142 144 143 extern unsigned long kvm_nvhe_sym(__icache_flags); 145 144 extern unsigned int kvm_nvhe_sym(kvm_arm_vmid_bits); 145 + extern unsigned int kvm_nvhe_sym(kvm_host_sve_max_vl); 146 146 147 147 #endif /* __ARM64_KVM_HYP_H__ */
+9
arch/arm64/include/asm/kvm_pkvm.h
··· 128 128 return (2 * KVM_FFA_MBOX_NR_PAGES) + DIV_ROUND_UP(desc_max, PAGE_SIZE); 129 129 } 130 130 131 + static inline size_t pkvm_host_sve_state_size(void) 132 + { 133 + if (!system_supports_sve()) 134 + return 0; 135 + 136 + return size_add(sizeof(struct cpu_sve_state), 137 + SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl))); 138 + } 139 + 131 140 #endif /* __ARM64_KVM_PKVM_H__ */
+3
arch/arm64/kernel/armv8_deprecated.c
··· 462 462 for (int i = 0; i < ARRAY_SIZE(insn_emulations); i++) { 463 463 struct insn_emulation *insn = insn_emulations[i]; 464 464 bool enable = READ_ONCE(insn->current_mode) == INSN_HW; 465 + if (insn->status == INSN_UNAVAILABLE) 466 + continue; 467 + 465 468 if (insn->set_hw_mode && insn->set_hw_mode(enable)) { 466 469 pr_warn("CPU[%u] cannot support the emulation of %s", 467 470 cpu, insn->name);
+76
arch/arm64/kvm/arm.c
··· 1931 1931 return size ? get_order(size) : 0; 1932 1932 } 1933 1933 1934 + static size_t pkvm_host_sve_state_order(void) 1935 + { 1936 + return get_order(pkvm_host_sve_state_size()); 1937 + } 1938 + 1934 1939 /* A lookup table holding the hypervisor VA for each vector slot */ 1935 1940 static void *hyp_spectre_vector_selector[BP_HARDEN_EL2_SLOTS]; 1936 1941 ··· 2315 2310 2316 2311 static void __init teardown_hyp_mode(void) 2317 2312 { 2313 + bool free_sve = system_supports_sve() && is_protected_kvm_enabled(); 2318 2314 int cpu; 2319 2315 2320 2316 free_hyp_pgds(); 2321 2317 for_each_possible_cpu(cpu) { 2322 2318 free_page(per_cpu(kvm_arm_hyp_stack_page, cpu)); 2323 2319 free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order()); 2320 + 2321 + if (free_sve) { 2322 + struct cpu_sve_state *sve_state; 2323 + 2324 + sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state; 2325 + free_pages((unsigned long) sve_state, pkvm_host_sve_state_order()); 2326 + } 2324 2327 } 2325 2328 } 2326 2329 ··· 2409 2396 free_hyp_pgds(); 2410 2397 2411 2398 return 0; 2399 + } 2400 + 2401 + static int init_pkvm_host_sve_state(void) 2402 + { 2403 + int cpu; 2404 + 2405 + if (!system_supports_sve()) 2406 + return 0; 2407 + 2408 + /* Allocate pages for host sve state in protected mode. */ 2409 + for_each_possible_cpu(cpu) { 2410 + struct page *page = alloc_pages(GFP_KERNEL, pkvm_host_sve_state_order()); 2411 + 2412 + if (!page) 2413 + return -ENOMEM; 2414 + 2415 + per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state = page_address(page); 2416 + } 2417 + 2418 + /* 2419 + * Don't map the pages in hyp since these are only used in protected 2420 + * mode, which will (re)create its own mapping when initialized. 2421 + */ 2422 + 2423 + return 0; 2424 + } 2425 + 2426 + /* 2427 + * Finalizes the initialization of hyp mode, once everything else is initialized 2428 + * and the initialziation process cannot fail. 
2429 + */ 2430 + static void finalize_init_hyp_mode(void) 2431 + { 2432 + int cpu; 2433 + 2434 + if (system_supports_sve() && is_protected_kvm_enabled()) { 2435 + for_each_possible_cpu(cpu) { 2436 + struct cpu_sve_state *sve_state; 2437 + 2438 + sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state; 2439 + per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state = 2440 + kern_hyp_va(sve_state); 2441 + } 2442 + } else { 2443 + for_each_possible_cpu(cpu) { 2444 + struct user_fpsimd_state *fpsimd_state; 2445 + 2446 + fpsimd_state = &per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->host_ctxt.fp_regs; 2447 + per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->fpsimd_state = 2448 + kern_hyp_va(fpsimd_state); 2449 + } 2450 + } 2412 2451 } 2413 2452 2414 2453 static void pkvm_hyp_init_ptrauth(void) ··· 2631 2566 goto out_err; 2632 2567 } 2633 2568 2569 + err = init_pkvm_host_sve_state(); 2570 + if (err) 2571 + goto out_err; 2572 + 2634 2573 err = kvm_hyp_init_protection(hyp_va_bits); 2635 2574 if (err) { 2636 2575 kvm_err("Failed to init hyp memory protection\n"); ··· 2798 2729 err = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE); 2799 2730 if (err) 2800 2731 goto out_subs; 2732 + 2733 + /* 2734 + * This should be called after initialization is done and failure isn't 2735 + * possible anymore. 2736 + */ 2737 + if (!in_hyp_mode) 2738 + finalize_init_hyp_mode(); 2801 2739 2802 2740 kvm_arm_initialised = true; 2803 2741
+11 -10
arch/arm64/kvm/emulate-nested.c
··· 2181 2181 if (forward_traps(vcpu, HCR_NV)) 2182 2182 return; 2183 2183 2184 + spsr = vcpu_read_sys_reg(vcpu, SPSR_EL2); 2185 + spsr = kvm_check_illegal_exception_return(vcpu, spsr); 2186 + 2184 2187 /* Check for an ERETAx */ 2185 2188 esr = kvm_vcpu_get_esr(vcpu); 2186 2189 if (esr_iss_is_eretax(esr) && !kvm_auth_eretax(vcpu, &elr)) { 2187 2190 /* 2188 - * Oh no, ERETAx failed to authenticate. If we have 2189 - * FPACCOMBINE, deliver an exception right away. If we 2190 - * don't, then let the mangled ELR value trickle down the 2191 + * Oh no, ERETAx failed to authenticate. 2192 + * 2193 + * If we have FPACCOMBINE and we don't have a pending 2194 + * Illegal Execution State exception (which has priority 2195 + * over FPAC), deliver an exception right away. 2196 + * 2197 + * Otherwise, let the mangled ELR value trickle down the 2191 2198 * ERET handling, and the guest will have a little surprise. 2192 2199 */ 2193 - if (kvm_has_pauth(vcpu->kvm, FPACCOMBINE)) { 2200 + if (kvm_has_pauth(vcpu->kvm, FPACCOMBINE) && !(spsr & PSR_IL_BIT)) { 2194 2201 esr &= ESR_ELx_ERET_ISS_ERETA; 2195 2202 esr |= FIELD_PREP(ESR_ELx_EC_MASK, ESR_ELx_EC_FPAC); 2196 2203 kvm_inject_nested_sync(vcpu, esr); ··· 2208 2201 preempt_disable(); 2209 2202 kvm_arch_vcpu_put(vcpu); 2210 2203 2211 - spsr = __vcpu_sys_reg(vcpu, SPSR_EL2); 2212 - spsr = kvm_check_illegal_exception_return(vcpu, spsr); 2213 2204 if (!esr_iss_is_eretax(esr)) 2214 2205 elr = __vcpu_sys_reg(vcpu, ELR_EL2); 2215 2206 2216 2207 trace_kvm_nested_eret(vcpu, elr, spsr); 2217 2208 2218 - /* 2219 - * Note that the current exception level is always the virtual EL2, 2220 - * since we set HCR_EL2.NV bit only when entering the virtual EL2. 2221 - */ 2222 2209 *vcpu_pc(vcpu) = elr; 2223 2210 *vcpu_cpsr(vcpu) = spsr; 2224 2211
+8 -3
arch/arm64/kvm/fpsimd.c
··· 90 90 fpsimd_save_and_flush_cpu_state(); 91 91 } 92 92 } 93 + 94 + /* 95 + * If normal guests gain SME support, maintain this behavior for pKVM 96 + * guests, which don't support SME. 97 + */ 98 + WARN_ON(is_protected_kvm_enabled() && system_supports_sme() && 99 + read_sysreg_s(SYS_SVCR)); 93 100 } 94 101 95 102 /* ··· 168 161 if (has_vhe() && system_supports_sme()) { 169 162 /* Also restore EL0 state seen on entry */ 170 163 if (vcpu_get_flag(vcpu, HOST_SME_ENABLED)) 171 - sysreg_clear_set(CPACR_EL1, 0, 172 - CPACR_EL1_SMEN_EL0EN | 173 - CPACR_EL1_SMEN_EL1EN); 164 + sysreg_clear_set(CPACR_EL1, 0, CPACR_ELx_SMEN); 174 165 else 175 166 sysreg_clear_set(CPACR_EL1, 176 167 CPACR_EL1_SMEN_EL0EN,
+2 -1
arch/arm64/kvm/guest.c
··· 251 251 case PSR_AA32_MODE_SVC: 252 252 case PSR_AA32_MODE_ABT: 253 253 case PSR_AA32_MODE_UND: 254 + case PSR_AA32_MODE_SYS: 254 255 if (!vcpu_el1_is_32bit(vcpu)) 255 256 return -EINVAL; 256 257 break; ··· 277 276 if (*vcpu_cpsr(vcpu) & PSR_MODE32_BIT) { 278 277 int i, nr_reg; 279 278 280 - switch (*vcpu_cpsr(vcpu)) { 279 + switch (*vcpu_cpsr(vcpu) & PSR_AA32_MODE_MASK) { 281 280 /* 282 281 * Either we are dealing with user mode, and only the 283 282 * first 15 registers (+ PC) must be narrowed to 32bit.
+16 -2
arch/arm64/kvm/hyp/aarch32.c
··· 50 50 u32 cpsr_cond; 51 51 int cond; 52 52 53 - /* Top two bits non-zero? Unconditional. */ 54 - if (kvm_vcpu_get_esr(vcpu) >> 30) 53 + /* 54 + * These are the exception classes that could fire with a 55 + * conditional instruction. 56 + */ 57 + switch (kvm_vcpu_trap_get_class(vcpu)) { 58 + case ESR_ELx_EC_CP15_32: 59 + case ESR_ELx_EC_CP15_64: 60 + case ESR_ELx_EC_CP14_MR: 61 + case ESR_ELx_EC_CP14_LS: 62 + case ESR_ELx_EC_FP_ASIMD: 63 + case ESR_ELx_EC_CP10_ID: 64 + case ESR_ELx_EC_CP14_64: 65 + case ESR_ELx_EC_SVC32: 66 + break; 67 + default: 55 68 return true; 69 + } 56 70 57 71 /* Is condition field valid? */ 58 72 cond = kvm_vcpu_get_condition(vcpu);
+6
arch/arm64/kvm/hyp/fpsimd.S
··· 25 25 sve_load 0, x1, x2, 3 26 26 ret 27 27 SYM_FUNC_END(__sve_restore_state) 28 + 29 + SYM_FUNC_START(__sve_save_state) 30 + mov x2, #1 31 + sve_save 0, x1, x2, 3 32 + ret 33 + SYM_FUNC_END(__sve_save_state)
+20 -16
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 316 316 { 317 317 sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); 318 318 __sve_restore_state(vcpu_sve_pffr(vcpu), 319 - &vcpu->arch.ctxt.fp_regs.fpsr); 319 + &vcpu->arch.ctxt.fp_regs.fpsr, 320 + true); 320 321 write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR); 321 322 } 323 + 324 + static inline void __hyp_sve_save_host(void) 325 + { 326 + struct cpu_sve_state *sve_state = *host_data_ptr(sve_state); 327 + 328 + sve_state->zcr_el1 = read_sysreg_el1(SYS_ZCR); 329 + write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2); 330 + __sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl), 331 + &sve_state->fpsr, 332 + true); 333 + } 334 + 335 + static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu); 322 336 323 337 /* 324 338 * We trap the first access to the FP/SIMD to save the host context and ··· 344 330 { 345 331 bool sve_guest; 346 332 u8 esr_ec; 347 - u64 reg; 348 333 349 334 if (!system_supports_fpsimd()) 350 335 return false; ··· 366 353 /* Valid trap. Switch the context: */ 367 354 368 355 /* First disable enough traps to allow us to update the registers */ 369 - if (has_vhe() || has_hvhe()) { 370 - reg = CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN; 371 - if (sve_guest) 372 - reg |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN; 373 - 374 - sysreg_clear_set(cpacr_el1, 0, reg); 375 - } else { 376 - reg = CPTR_EL2_TFP; 377 - if (sve_guest) 378 - reg |= CPTR_EL2_TZ; 379 - 380 - sysreg_clear_set(cptr_el2, reg, 0); 381 - } 356 + if (sve_guest || (is_protected_kvm_enabled() && system_supports_sve())) 357 + cpacr_clear_set(0, CPACR_ELx_FPEN | CPACR_ELx_ZEN); 358 + else 359 + cpacr_clear_set(0, CPACR_ELx_FPEN); 382 360 isb(); 383 361 384 362 /* Write out the host state if it's in the registers */ 385 363 if (host_owns_fp_regs()) 386 - __fpsimd_save_state(*host_data_ptr(fpsimd_state)); 364 + kvm_hyp_save_fpsimd_host(vcpu); 387 365 388 366 /* Restore the guest state */ 389 367 if (sve_guest)
-1
arch/arm64/kvm/hyp/include/nvhe/pkvm.h
··· 59 59 } 60 60 61 61 void pkvm_hyp_vm_table_init(void *tbl); 62 - void pkvm_host_fpsimd_state_init(void); 63 62 64 63 int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva, 65 64 unsigned long pgd_hva);
+76 -8
arch/arm64/kvm/hyp/nvhe/hyp-main.c
··· 23 23 24 24 void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt); 25 25 26 + static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu) 27 + { 28 + __vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR); 29 + /* 30 + * On saving/restoring guest sve state, always use the maximum VL for 31 + * the guest. The layout of the data when saving the sve state depends 32 + * on the VL, so use a consistent (i.e., the maximum) guest VL. 33 + */ 34 + sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); 35 + __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, true); 36 + write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2); 37 + } 38 + 39 + static void __hyp_sve_restore_host(void) 40 + { 41 + struct cpu_sve_state *sve_state = *host_data_ptr(sve_state); 42 + 43 + /* 44 + * On saving/restoring host sve state, always use the maximum VL for 45 + * the host. The layout of the data when saving the sve state depends 46 + * on the VL, so use a consistent (i.e., the maximum) host VL. 47 + * 48 + * Setting ZCR_EL2 to ZCR_ELx_LEN_MASK sets the effective length 49 + * supported by the system (or limited at EL3). 
50 + */ 51 + write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2); 52 + __sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl), 53 + &sve_state->fpsr, 54 + true); 55 + write_sysreg_el1(sve_state->zcr_el1, SYS_ZCR); 56 + } 57 + 58 + static void fpsimd_sve_flush(void) 59 + { 60 + *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED; 61 + } 62 + 63 + static void fpsimd_sve_sync(struct kvm_vcpu *vcpu) 64 + { 65 + if (!guest_owns_fp_regs()) 66 + return; 67 + 68 + cpacr_clear_set(0, CPACR_ELx_FPEN | CPACR_ELx_ZEN); 69 + isb(); 70 + 71 + if (vcpu_has_sve(vcpu)) 72 + __hyp_sve_save_guest(vcpu); 73 + else 74 + __fpsimd_save_state(&vcpu->arch.ctxt.fp_regs); 75 + 76 + if (system_supports_sve()) 77 + __hyp_sve_restore_host(); 78 + else 79 + __fpsimd_restore_state(*host_data_ptr(fpsimd_state)); 80 + 81 + *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED; 82 + } 83 + 26 84 static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu) 27 85 { 28 86 struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu; 29 87 88 + fpsimd_sve_flush(); 89 + 30 90 hyp_vcpu->vcpu.arch.ctxt = host_vcpu->arch.ctxt; 31 91 32 92 hyp_vcpu->vcpu.arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state); 33 - hyp_vcpu->vcpu.arch.sve_max_vl = host_vcpu->arch.sve_max_vl; 93 + /* Limit guest vector length to the maximum supported by the host. 
*/ 94 + hyp_vcpu->vcpu.arch.sve_max_vl = min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl); 34 95 35 96 hyp_vcpu->vcpu.arch.hw_mmu = host_vcpu->arch.hw_mmu; 36 97 37 98 hyp_vcpu->vcpu.arch.hcr_el2 = host_vcpu->arch.hcr_el2; 38 99 hyp_vcpu->vcpu.arch.mdcr_el2 = host_vcpu->arch.mdcr_el2; 39 - hyp_vcpu->vcpu.arch.cptr_el2 = host_vcpu->arch.cptr_el2; 40 100 41 101 hyp_vcpu->vcpu.arch.iflags = host_vcpu->arch.iflags; 42 102 ··· 114 54 struct vgic_v3_cpu_if *host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3; 115 55 unsigned int i; 116 56 57 + fpsimd_sve_sync(&hyp_vcpu->vcpu); 58 + 117 59 host_vcpu->arch.ctxt = hyp_vcpu->vcpu.arch.ctxt; 118 60 119 61 host_vcpu->arch.hcr_el2 = hyp_vcpu->vcpu.arch.hcr_el2; 120 - host_vcpu->arch.cptr_el2 = hyp_vcpu->vcpu.arch.cptr_el2; 121 62 122 63 host_vcpu->arch.fault = hyp_vcpu->vcpu.arch.fault; 123 64 ··· 139 78 if (unlikely(is_protected_kvm_enabled())) { 140 79 struct pkvm_hyp_vcpu *hyp_vcpu; 141 80 struct kvm *host_kvm; 81 + 82 + /* 83 + * KVM (and pKVM) doesn't support SME guests for now, and 84 + * ensures that SME features aren't enabled in pstate when 85 + * loading a vcpu. Therefore, if SME features enabled the host 86 + * is misbehaving. 87 + */ 88 + if (unlikely(system_supports_sme() && read_sysreg_s(SYS_SVCR))) { 89 + ret = -EINVAL; 90 + goto out; 91 + } 142 92 143 93 host_kvm = kern_hyp_va(host_vcpu->kvm); 144 94 hyp_vcpu = pkvm_load_hyp_vcpu(host_kvm->arch.pkvm.handle, ··· 477 405 handle_host_smc(host_ctxt); 478 406 break; 479 407 case ESR_ELx_EC_SVE: 480 - if (has_hvhe()) 481 - sysreg_clear_set(cpacr_el1, 0, (CPACR_EL1_ZEN_EL1EN | 482 - CPACR_EL1_ZEN_EL0EN)); 483 - else 484 - sysreg_clear_set(cptr_el2, CPTR_EL2_TZ, 0); 408 + cpacr_clear_set(0, CPACR_ELx_ZEN); 485 409 isb(); 486 410 sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2); 487 411 break;
+5 -12
arch/arm64/kvm/hyp/nvhe/pkvm.c
··· 18 18 /* Used by kvm_get_vttbr(). */ 19 19 unsigned int kvm_arm_vmid_bits; 20 20 21 + unsigned int kvm_host_sve_max_vl; 22 + 21 23 /* 22 24 * Set trap register values based on features in ID_AA64PFR0. 23 25 */ ··· 65 63 /* Trap SVE */ 66 64 if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), feature_ids)) { 67 65 if (has_hvhe()) 68 - cptr_clear |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN; 66 + cptr_clear |= CPACR_ELx_ZEN; 69 67 else 70 68 cptr_set |= CPTR_EL2_TZ; 71 69 } ··· 247 245 { 248 246 WARN_ON(vm_table); 249 247 vm_table = tbl; 250 - } 251 - 252 - void pkvm_host_fpsimd_state_init(void) 253 - { 254 - unsigned long i; 255 - 256 - for (i = 0; i < hyp_nr_cpus; i++) { 257 - struct kvm_host_data *host_data = per_cpu_ptr(&kvm_host_data, i); 258 - 259 - host_data->fpsimd_state = &host_data->host_ctxt.fp_regs; 260 - } 261 248 } 262 249 263 250 /* ··· 576 585 577 586 if (ret) 578 587 unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu)); 588 + 589 + hyp_vcpu->vcpu.arch.cptr_el2 = kvm_get_reset_cptr_el2(&hyp_vcpu->vcpu); 579 590 580 591 return ret; 581 592 }
+24 -1
arch/arm64/kvm/hyp/nvhe/setup.c
··· 67 67 return 0; 68 68 } 69 69 70 + static int pkvm_create_host_sve_mappings(void) 71 + { 72 + void *start, *end; 73 + int ret, i; 74 + 75 + if (!system_supports_sve()) 76 + return 0; 77 + 78 + for (i = 0; i < hyp_nr_cpus; i++) { 79 + struct kvm_host_data *host_data = per_cpu_ptr(&kvm_host_data, i); 80 + struct cpu_sve_state *sve_state = host_data->sve_state; 81 + 82 + start = kern_hyp_va(sve_state); 83 + end = start + PAGE_ALIGN(pkvm_host_sve_state_size()); 84 + ret = pkvm_create_mappings(start, end, PAGE_HYP); 85 + if (ret) 86 + return ret; 87 + } 88 + 89 + return 0; 90 + } 91 + 70 92 static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size, 71 93 unsigned long *per_cpu_base, 72 94 u32 hyp_va_bits) ··· 146 124 if (ret) 147 125 return ret; 148 126 } 127 + 128 + pkvm_create_host_sve_mappings(); 149 129 150 130 /* 151 131 * Map the host sections RO in the hypervisor, but transfer the ··· 324 300 goto out; 325 301 326 302 pkvm_hyp_vm_table_init(vm_table_base); 327 - pkvm_host_fpsimd_state_init(); 328 303 out: 329 304 /* 330 305 * We tail-called to here from handle___pkvm_init() and will not return,
+21 -3
arch/arm64/kvm/hyp/nvhe/switch.c
··· 48 48 val |= has_hvhe() ? CPACR_EL1_TTA : CPTR_EL2_TTA; 49 49 if (cpus_have_final_cap(ARM64_SME)) { 50 50 if (has_hvhe()) 51 - val &= ~(CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN); 51 + val &= ~CPACR_ELx_SMEN; 52 52 else 53 53 val |= CPTR_EL2_TSM; 54 54 } 55 55 56 56 if (!guest_owns_fp_regs()) { 57 57 if (has_hvhe()) 58 - val &= ~(CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN | 59 - CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN); 58 + val &= ~(CPACR_ELx_FPEN | CPACR_ELx_ZEN); 60 59 else 61 60 val |= CPTR_EL2_TFP | CPTR_EL2_TZ; 62 61 ··· 179 180 */ 180 181 return (kvm_hyp_handle_sysreg(vcpu, exit_code) || 181 182 kvm_handle_pvm_sysreg(vcpu, exit_code)); 183 + } 184 + 185 + static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu) 186 + { 187 + /* 188 + * Non-protected kvm relies on the host restoring its sve state. 189 + * Protected kvm restores the host's sve state as not to reveal that 190 + * fpsimd was used by a guest nor leak upper sve bits. 191 + */ 192 + if (unlikely(is_protected_kvm_enabled() && system_supports_sve())) { 193 + __hyp_sve_save_host(); 194 + 195 + /* Re-enable SVE traps if not supported for the guest vcpu. */ 196 + if (!vcpu_has_sve(vcpu)) 197 + cpacr_clear_set(CPACR_ELx_ZEN, 0); 198 + 199 + } else { 200 + __fpsimd_save_state(*host_data_ptr(fpsimd_state)); 201 + } 182 202 } 183 203 184 204 static const exit_handler_fn hyp_exit_handlers[] = {
+8 -4
arch/arm64/kvm/hyp/vhe/switch.c
··· 93 93 94 94 val = read_sysreg(cpacr_el1); 95 95 val |= CPACR_ELx_TTA; 96 - val &= ~(CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN | 97 - CPACR_EL1_SMEN_EL0EN | CPACR_EL1_SMEN_EL1EN); 96 + val &= ~(CPACR_ELx_ZEN | CPACR_ELx_SMEN); 98 97 99 98 /* 100 99 * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to ··· 108 109 109 110 if (guest_owns_fp_regs()) { 110 111 if (vcpu_has_sve(vcpu)) 111 - val |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN; 112 + val |= CPACR_ELx_ZEN; 112 113 } else { 113 - val &= ~(CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN); 114 + val &= ~CPACR_ELx_FPEN; 114 115 __activate_traps_fpsimd32(vcpu); 115 116 } 116 117 ··· 259 260 write_sysreg_el2(elr, SYS_ELR); 260 261 261 262 return true; 263 + } 264 + 265 + static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu) 266 + { 267 + __fpsimd_save_state(*host_data_ptr(fpsimd_state)); 262 268 } 263 269 264 270 static const exit_handler_fn hyp_exit_handlers[] = {
+4 -2
arch/arm64/kvm/nested.c
··· 58 58 break; 59 59 60 60 case SYS_ID_AA64PFR1_EL1: 61 - /* Only support SSBS */ 62 - val &= NV_FTR(PFR1, SSBS); 61 + /* Only support BTI, SSBS, CSV2_frac */ 62 + val &= (NV_FTR(PFR1, BT) | 63 + NV_FTR(PFR1, SSBS) | 64 + NV_FTR(PFR1, CSV2_frac)); 63 65 break; 64 66 65 67 case SYS_ID_AA64MMFR0_EL1:
+3
arch/arm64/kvm/reset.c
··· 32 32 33 33 /* Maximum phys_shift supported for any VM on this host */ 34 34 static u32 __ro_after_init kvm_ipa_limit; 35 + unsigned int __ro_after_init kvm_host_sve_max_vl; 35 36 36 37 /* 37 38 * ARMv8 Reset Values ··· 52 51 { 53 52 if (system_supports_sve()) { 54 53 kvm_sve_max_vl = sve_max_virtualisable_vl(); 54 + kvm_host_sve_max_vl = sve_max_vl(); 55 + kvm_nvhe_sym(kvm_host_sve_max_vl) = kvm_host_sve_max_vl; 55 56 56 57 /* 57 58 * The get_sve_reg()/set_sve_reg() ioctl interface will need
+2 -2
arch/arm64/mm/contpte.c
··· 376 376 * clearing access/dirty for the whole block. 377 377 */ 378 378 unsigned long start = addr; 379 - unsigned long end = start + nr; 379 + unsigned long end = start + nr * PAGE_SIZE; 380 380 381 381 if (pte_cont(__ptep_get(ptep + nr - 1))) 382 382 end = ALIGN(end, CONT_PTE_SIZE); ··· 386 386 ptep = contpte_align_down(ptep); 387 387 } 388 388 389 - __clear_young_dirty_ptes(vma, start, ptep, end - start, flags); 389 + __clear_young_dirty_ptes(vma, start, ptep, (end - start) / PAGE_SIZE, flags); 390 390 } 391 391 EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes); 392 392
+2 -2
arch/loongarch/boot/dts/loongson-2k0500-ref.dts
··· 44 44 &gmac0 { 45 45 status = "okay"; 46 46 47 - phy-mode = "rgmii"; 47 + phy-mode = "rgmii-id"; 48 48 bus_id = <0x0>; 49 49 }; 50 50 51 51 &gmac1 { 52 52 status = "okay"; 53 53 54 - phy-mode = "rgmii"; 54 + phy-mode = "rgmii-id"; 55 55 bus_id = <0x1>; 56 56 }; 57 57
+2 -2
arch/loongarch/boot/dts/loongson-2k1000-ref.dts
··· 43 43 &gmac0 { 44 44 status = "okay"; 45 45 46 - phy-mode = "rgmii"; 46 + phy-mode = "rgmii-id"; 47 47 phy-handle = <&phy0>; 48 48 mdio { 49 49 compatible = "snps,dwmac-mdio"; ··· 58 58 &gmac1 { 59 59 status = "okay"; 60 60 61 - phy-mode = "rgmii"; 61 + phy-mode = "rgmii-id"; 62 62 phy-handle = <&phy1>; 63 63 mdio { 64 64 compatible = "snps,dwmac-mdio";
+1 -1
arch/loongarch/boot/dts/loongson-2k2000-ref.dts
··· 92 92 &gmac2 { 93 93 status = "okay"; 94 94 95 - phy-mode = "rgmii"; 95 + phy-mode = "rgmii-id"; 96 96 phy-handle = <&phy2>; 97 97 mdio { 98 98 compatible = "snps,dwmac-mdio";
+1
arch/loongarch/include/asm/numa.h
··· 56 56 static inline void early_numa_add_cpu(int cpuid, s16 node) { } 57 57 static inline void numa_add_cpu(unsigned int cpu) { } 58 58 static inline void numa_remove_cpu(unsigned int cpu) { } 59 + static inline void set_cpuid_to_node(int cpuid, s16 node) { } 59 60 60 61 static inline int early_cpu_to_node(int cpu) 61 62 {
+1 -1
arch/loongarch/include/asm/stackframe.h
··· 42 42 .macro JUMP_VIRT_ADDR temp1 temp2 43 43 li.d \temp1, CACHE_BASE 44 44 pcaddi \temp2, 0 45 - or \temp1, \temp1, \temp2 45 + bstrins.d \temp1, \temp2, (DMW_PABITS - 1), 0 46 46 jirl zero, \temp1, 0xc 47 47 .endm 48 48
+1 -1
arch/loongarch/kernel/head.S
··· 22 22 _head: 23 23 .word MZ_MAGIC /* "MZ", MS-DOS header */ 24 24 .org 0x8 25 - .dword kernel_entry /* Kernel entry point */ 25 + .dword _kernel_entry /* Kernel entry point (physical address) */ 26 26 .dword _kernel_asize /* Kernel image effective size */ 27 27 .quad PHYS_LINK_KADDR /* Kernel image load offset from start of RAM */ 28 28 .org 0x38 /* 0x20 ~ 0x37 reserved */
+2 -4
arch/loongarch/kernel/setup.c
··· 282 282 return; 283 283 284 284 /* Prefer to use built-in dtb, checking its legality first. */ 285 - if (!fdt_check_header(__dtb_start)) 285 + if (IS_ENABLED(CONFIG_BUILTIN_DTB) && !fdt_check_header(__dtb_start)) 286 286 fdt_pointer = __dtb_start; 287 287 else 288 288 fdt_pointer = efi_fdt_pointer(); /* Fallback to firmware dtb */ ··· 351 351 arch_reserve_vmcore(); 352 352 arch_reserve_crashkernel(); 353 353 354 - #ifdef CONFIG_ACPI_TABLE_UPGRADE 355 - acpi_table_upgrade(); 356 - #endif 357 354 #ifdef CONFIG_ACPI 355 + acpi_table_upgrade(); 358 356 acpi_gbl_use_default_register_widths = false; 359 357 acpi_boot_table_init(); 360 358 #endif
+4 -1
arch/loongarch/kernel/smp.c
··· 273 273 274 274 if (cpuid == loongson_sysconf.boot_cpu_id) { 275 275 cpu = 0; 276 - numa_add_cpu(cpu); 277 276 } else { 278 277 cpu = cpumask_next_zero(-1, cpu_present_mask); 279 278 } ··· 282 283 set_cpu_present(cpu, true); 283 284 __cpu_number_map[cpuid] = cpu; 284 285 __cpu_logical_map[cpu] = cpuid; 286 + 287 + early_numa_add_cpu(cpu, 0); 288 + set_cpuid_to_node(cpuid, 0); 285 289 } 286 290 287 291 loongson_sysconf.nr_cpus = num_processors; ··· 470 468 set_cpu_possible(0, true); 471 469 set_cpu_online(0, true); 472 470 set_my_cpu_offset(per_cpu_offset(0)); 471 + numa_add_cpu(0); 473 472 474 473 rr_node = first_node(node_online_map); 475 474 for_each_possible_cpu(cpu) {
+6 -4
arch/loongarch/kernel/vmlinux.lds.S
··· 6 6 7 7 #define PAGE_SIZE _PAGE_SIZE 8 8 #define RO_EXCEPTION_TABLE_ALIGN 4 9 + #define PHYSADDR_MASK 0xffffffffffff /* 48-bit */ 9 10 10 11 /* 11 12 * Put .bss..swapper_pg_dir as the first thing in .bss. This will ··· 143 142 144 143 #ifdef CONFIG_EFI_STUB 145 144 /* header symbols */ 146 - _kernel_asize = _end - _text; 147 - _kernel_fsize = _edata - _text; 148 - _kernel_vsize = _end - __initdata_begin; 149 - _kernel_rsize = _edata - __initdata_begin; 145 + _kernel_entry = ABSOLUTE(kernel_entry & PHYSADDR_MASK); 146 + _kernel_asize = ABSOLUTE(_end - _text); 147 + _kernel_fsize = ABSOLUTE(_edata - _text); 148 + _kernel_vsize = ABSOLUTE(_end - __initdata_begin); 149 + _kernel_rsize = ABSOLUTE(_edata - __initdata_begin); 150 150 #endif 151 151 152 152 .gptab.sdata : {
+4 -3
arch/riscv/kvm/aia_device.c
··· 237 237 238 238 static u32 aia_imsic_hart_index(struct kvm_aia *aia, gpa_t addr) 239 239 { 240 - u32 hart, group = 0; 240 + u32 hart = 0, group = 0; 241 241 242 - hart = (addr >> (aia->nr_guest_bits + IMSIC_MMIO_PAGE_SHIFT)) & 243 - GENMASK_ULL(aia->nr_hart_bits - 1, 0); 242 + if (aia->nr_hart_bits) 243 + hart = (addr >> (aia->nr_guest_bits + IMSIC_MMIO_PAGE_SHIFT)) & 244 + GENMASK_ULL(aia->nr_hart_bits - 1, 0); 244 245 if (aia->nr_group_bits) 245 246 group = (addr >> aia->nr_group_shift) & 246 247 GENMASK_ULL(aia->nr_group_bits - 1, 0);
+2 -2
arch/riscv/kvm/vcpu_onereg.c
··· 724 724 switch (reg_subtype) { 725 725 case KVM_REG_RISCV_ISA_SINGLE: 726 726 return riscv_vcpu_set_isa_ext_single(vcpu, reg_num, reg_val); 727 - case KVM_REG_RISCV_SBI_MULTI_EN: 727 + case KVM_REG_RISCV_ISA_MULTI_EN: 728 728 return riscv_vcpu_set_isa_ext_multi(vcpu, reg_num, reg_val, true); 729 - case KVM_REG_RISCV_SBI_MULTI_DIS: 729 + case KVM_REG_RISCV_ISA_MULTI_DIS: 730 730 return riscv_vcpu_set_isa_ext_multi(vcpu, reg_num, reg_val, false); 731 731 default: 732 732 return -ENOENT;
+2 -2
arch/riscv/mm/fault.c
··· 293 293 if (unlikely(access_error(cause, vma))) { 294 294 vma_end_read(vma); 295 295 count_vm_vma_lock_event(VMA_LOCK_SUCCESS); 296 - tsk->thread.bad_cause = SEGV_ACCERR; 297 - bad_area_nosemaphore(regs, code, addr); 296 + tsk->thread.bad_cause = cause; 297 + bad_area_nosemaphore(regs, SEGV_ACCERR, addr); 298 298 return; 299 299 } 300 300
+11 -10
arch/riscv/mm/init.c
··· 250 250 kernel_map.va_pa_offset = PAGE_OFFSET - phys_ram_base; 251 251 252 252 /* 253 - * memblock allocator is not aware of the fact that last 4K bytes of 254 - * the addressable memory can not be mapped because of IS_ERR_VALUE 255 - * macro. Make sure that last 4k bytes are not usable by memblock 256 - * if end of dram is equal to maximum addressable memory. For 64-bit 257 - * kernel, this problem can't happen here as the end of the virtual 258 - * address space is occupied by the kernel mapping then this check must 259 - * be done as soon as the kernel mapping base address is determined. 253 + * Reserve physical address space that would be mapped to virtual 254 + * addresses greater than (void *)(-PAGE_SIZE) because: 255 + * - This memory would overlap with ERR_PTR 256 + * - This memory belongs to high memory, which is not supported 257 + * 258 + * This is not applicable to 64-bit kernel, because virtual addresses 259 + * after (void *)(-PAGE_SIZE) are not linearly mapped: they are 260 + * occupied by kernel mapping. Also it is unrealistic for high memory 261 + * to exist on 64-bit platforms. 260 262 */ 261 263 if (!IS_ENABLED(CONFIG_64BIT)) { 262 - max_mapped_addr = __pa(~(ulong)0); 263 - if (max_mapped_addr == (phys_ram_end - 1)) 264 - memblock_set_current_limit(max_mapped_addr - 4096); 264 + max_mapped_addr = __va_to_pa_nodebug(-PAGE_SIZE); 265 + memblock_reserve(max_mapped_addr, (phys_addr_t)-max_mapped_addr); 265 266 } 266 267 267 268 min_low_pfn = PFN_UP(phys_ram_base);
+30 -24
arch/s390/kernel/crash_dump.c
··· 451 451 /* 452 452 * Initialize ELF header (new kernel) 453 453 */ 454 - static void *ehdr_init(Elf64_Ehdr *ehdr, int mem_chunk_cnt) 454 + static void *ehdr_init(Elf64_Ehdr *ehdr, int phdr_count) 455 455 { 456 456 memset(ehdr, 0, sizeof(*ehdr)); 457 457 memcpy(ehdr->e_ident, ELFMAG, SELFMAG); ··· 465 465 ehdr->e_phoff = sizeof(Elf64_Ehdr); 466 466 ehdr->e_ehsize = sizeof(Elf64_Ehdr); 467 467 ehdr->e_phentsize = sizeof(Elf64_Phdr); 468 - /* 469 - * Number of memory chunk PT_LOAD program headers plus one kernel 470 - * image PT_LOAD program header plus one PT_NOTE program header. 471 - */ 472 - ehdr->e_phnum = mem_chunk_cnt + 1 + 1; 468 + /* Number of PT_LOAD program headers plus PT_NOTE program header */ 469 + ehdr->e_phnum = phdr_count + 1; 473 470 return ehdr + 1; 474 471 } 475 472 ··· 500 503 /* 501 504 * Initialize ELF loads (new kernel) 502 505 */ 503 - static void loads_init(Elf64_Phdr *phdr) 506 + static void loads_init(Elf64_Phdr *phdr, bool os_info_has_vm) 504 507 { 505 - unsigned long old_identity_base = os_info_old_value(OS_INFO_IDENTITY_BASE); 508 + unsigned long old_identity_base = 0; 506 509 phys_addr_t start, end; 507 510 u64 idx; 508 511 512 + if (os_info_has_vm) 513 + old_identity_base = os_info_old_value(OS_INFO_IDENTITY_BASE); 509 514 for_each_physmem_range(idx, &oldmem_type, &start, &end) { 510 515 phdr->p_type = PT_LOAD; 511 516 phdr->p_vaddr = old_identity_base + start; ··· 519 520 phdr->p_align = PAGE_SIZE; 520 521 phdr++; 521 522 } 523 + } 524 + 525 + static bool os_info_has_vm(void) 526 + { 527 + return os_info_old_value(OS_INFO_KASLR_OFFSET); 522 528 } 523 529 524 530 /* ··· 570 566 return ptr; 571 567 } 572 568 573 - static size_t get_elfcorehdr_size(int mem_chunk_cnt) 569 + static size_t get_elfcorehdr_size(int phdr_count) 574 570 { 575 571 size_t size; 576 572 ··· 585 581 size += nt_vmcoreinfo_size(); 586 582 /* nt_final */ 587 583 size += sizeof(Elf64_Nhdr); 588 - /* PT_LOAD type program header for kernel text region */ 589 - size 
+= sizeof(Elf64_Phdr); 590 584 /* PT_LOADS */ 591 - size += mem_chunk_cnt * sizeof(Elf64_Phdr); 585 + size += phdr_count * sizeof(Elf64_Phdr); 592 586 593 587 return size; 594 588 } ··· 597 595 int elfcorehdr_alloc(unsigned long long *addr, unsigned long long *size) 598 596 { 599 597 Elf64_Phdr *phdr_notes, *phdr_loads, *phdr_text; 598 + int mem_chunk_cnt, phdr_text_cnt; 600 599 size_t alloc_size; 601 - int mem_chunk_cnt; 602 600 void *ptr, *hdr; 603 601 u64 hdr_off; 604 602 ··· 617 615 } 618 616 619 617 mem_chunk_cnt = get_mem_chunk_cnt(); 618 + phdr_text_cnt = os_info_has_vm() ? 1 : 0; 620 619 621 - alloc_size = get_elfcorehdr_size(mem_chunk_cnt); 620 + alloc_size = get_elfcorehdr_size(mem_chunk_cnt + phdr_text_cnt); 622 621 623 622 hdr = kzalloc(alloc_size, GFP_KERNEL); 624 623 625 - /* Without elfcorehdr /proc/vmcore cannot be created. Thus creating 624 + /* 625 + * Without elfcorehdr /proc/vmcore cannot be created. Thus creating 626 626 * a dump with this crash kernel will fail. Panic now to allow other 627 627 * dump mechanisms to take over. 
628 628 */ ··· 632 628 panic("s390 kdump allocating elfcorehdr failed"); 633 629 634 630 /* Init elf header */ 635 - ptr = ehdr_init(hdr, mem_chunk_cnt); 631 + phdr_notes = ehdr_init(hdr, mem_chunk_cnt + phdr_text_cnt); 636 632 /* Init program headers */ 637 - phdr_notes = ptr; 638 - ptr = PTR_ADD(ptr, sizeof(Elf64_Phdr)); 639 - phdr_text = ptr; 640 - ptr = PTR_ADD(ptr, sizeof(Elf64_Phdr)); 641 - phdr_loads = ptr; 642 - ptr = PTR_ADD(ptr, sizeof(Elf64_Phdr) * mem_chunk_cnt); 633 + if (phdr_text_cnt) { 634 + phdr_text = phdr_notes + 1; 635 + phdr_loads = phdr_text + 1; 636 + } else { 637 + phdr_loads = phdr_notes + 1; 638 + } 639 + ptr = PTR_ADD(phdr_loads, sizeof(Elf64_Phdr) * mem_chunk_cnt); 643 640 /* Init notes */ 644 641 hdr_off = PTR_DIFF(ptr, hdr); 645 642 ptr = notes_init(phdr_notes, ptr, ((unsigned long) hdr) + hdr_off); 646 643 /* Init kernel text program header */ 647 - text_init(phdr_text); 644 + if (phdr_text_cnt) 645 + text_init(phdr_text); 648 646 /* Init loads */ 649 - loads_init(phdr_loads); 647 + loads_init(phdr_loads, phdr_text_cnt); 650 648 /* Finalize program headers */ 651 649 hdr_off = PTR_DIFF(ptr, hdr); 652 650 *addr = (unsigned long long) hdr;
+1
arch/x86/include/asm/kvm_host.h
··· 2154 2154 2155 2155 int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code, 2156 2156 void *insn, int insn_len); 2157 + void kvm_mmu_print_sptes(struct kvm_vcpu *vcpu, gpa_t gpa, const char *msg); 2157 2158 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva); 2158 2159 void kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, 2159 2160 u64 addr, unsigned long roots);
+1 -1
arch/x86/include/asm/vmxfeatures.h
··· 77 77 #define VMX_FEATURE_ENCLS_EXITING ( 2*32+ 15) /* "" VM-Exit on ENCLS (leaf dependent) */ 78 78 #define VMX_FEATURE_RDSEED_EXITING ( 2*32+ 16) /* "" VM-Exit on RDSEED */ 79 79 #define VMX_FEATURE_PAGE_MOD_LOGGING ( 2*32+ 17) /* "pml" Log dirty pages into buffer */ 80 - #define VMX_FEATURE_EPT_VIOLATION_VE ( 2*32+ 18) /* "" Conditionally reflect EPT violations as #VE exceptions */ 80 + #define VMX_FEATURE_EPT_VIOLATION_VE ( 2*32+ 18) /* Conditionally reflect EPT violations as #VE exceptions */ 81 81 #define VMX_FEATURE_PT_CONCEAL_VMX ( 2*32+ 19) /* "" Suppress VMX indicators in Processor Trace */ 82 82 #define VMX_FEATURE_XSAVES ( 2*32+ 20) /* "" Enable XSAVES and XRSTORS in guest */ 83 83 #define VMX_FEATURE_MODE_BASED_EPT_EXEC ( 2*32+ 22) /* "ept_mode_based_exec" Enable separate EPT EXEC bits for supervisor vs. user */
+8 -1
arch/x86/kernel/amd_nb.c
··· 215 215 216 216 int amd_smn_read(u16 node, u32 address, u32 *value) 217 217 { 218 - return __amd_smn_rw(node, address, value, false); 218 + int err = __amd_smn_rw(node, address, value, false); 219 + 220 + if (PCI_POSSIBLE_ERROR(*value)) { 221 + err = -ENODEV; 222 + *value = 0; 223 + } 224 + 225 + return err; 219 226 } 220 227 EXPORT_SYMBOL_GPL(amd_smn_read); 221 228
+9 -2
arch/x86/kernel/machine_kexec_64.c
··· 295 295 void machine_kexec(struct kimage *image) 296 296 { 297 297 unsigned long page_list[PAGES_NR]; 298 - void *control_page; 298 + unsigned int host_mem_enc_active; 299 299 int save_ftrace_enabled; 300 + void *control_page; 301 + 302 + /* 303 + * This must be done before load_segments() since if call depth tracking 304 + * is used then GS must be valid to make any function calls. 305 + */ 306 + host_mem_enc_active = cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT); 300 307 301 308 #ifdef CONFIG_KEXEC_JUMP 302 309 if (image->preserve_context) ··· 365 358 (unsigned long)page_list, 366 359 image->start, 367 360 image->preserve_context, 368 - cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)); 361 + host_mem_enc_active); 369 362 370 363 #ifdef CONFIG_KEXEC_JUMP 371 364 if (image->preserve_context)
+7 -4
arch/x86/kvm/Kconfig
··· 44 44 select KVM_VFIO 45 45 select HAVE_KVM_PM_NOTIFIER if PM 46 46 select KVM_GENERIC_HARDWARE_ENABLING 47 + select KVM_WERROR if WERROR 47 48 help 48 49 Support hosting fully virtualized guest machines using hardware 49 50 virtualization extensions. You will need a fairly recent ··· 67 66 # FRAME_WARN, i.e. KVM_WERROR=y with KASAN=y requires special tuning. 68 67 # Building KVM with -Werror and KASAN is still doable via enabling 69 68 # the kernel-wide WERROR=y. 70 - depends on KVM && EXPERT && !KASAN 69 + depends on KVM && ((EXPERT && !KASAN) || WERROR) 71 70 help 72 71 Add -Werror to the build flags for KVM. 73 72 ··· 98 97 99 98 config KVM_INTEL_PROVE_VE 100 99 bool "Check that guests do not receive #VE exceptions" 101 - default KVM_PROVE_MMU || DEBUG_KERNEL 102 - depends on KVM_INTEL 100 + depends on KVM_INTEL && EXPERT 103 101 help 104 - 105 102 Checks that KVM's page table management code will not incorrectly 106 103 let guests receive a virtualization exception. Virtualization 107 104 exceptions will be trapped by the hypervisor rather than injected 108 105 in the guest. 106 + 107 + Note: some CPUs appear to generate spurious EPT Violations #VEs 108 + that trigger KVM's WARN, in particular with eptad=0 and/or nested 109 + virtualization. 109 110 110 111 If unsure, say N. 111 112
+21 -18
arch/x86/kvm/lapic.c
··· 59 59 #define MAX_APIC_VECTOR 256 60 60 #define APIC_VECTORS_PER_REG 32 61 61 62 - static bool lapic_timer_advance_dynamic __read_mostly; 62 + /* 63 + * Enable local APIC timer advancement (tscdeadline mode only) with adaptive 64 + * tuning. When enabled, KVM programs the host timer event to fire early, i.e. 65 + * before the deadline expires, to account for the delay between taking the 66 + * VM-Exit (to inject the guest event) and the subsequent VM-Enter to resume 67 + * the guest, i.e. so that the interrupt arrives in the guest with minimal 68 + * latency relative to the deadline programmed by the guest. 69 + */ 70 + static bool lapic_timer_advance __read_mostly = true; 71 + module_param(lapic_timer_advance, bool, 0444); 72 + 63 73 #define LAPIC_TIMER_ADVANCE_ADJUST_MIN 100 /* clock cycles */ 64 74 #define LAPIC_TIMER_ADVANCE_ADJUST_MAX 10000 /* clock cycles */ 65 75 #define LAPIC_TIMER_ADVANCE_NS_INIT 1000 ··· 1864 1854 guest_tsc = kvm_read_l1_tsc(vcpu, rdtsc()); 1865 1855 trace_kvm_wait_lapic_expire(vcpu->vcpu_id, guest_tsc - tsc_deadline); 1866 1856 1867 - if (lapic_timer_advance_dynamic) { 1868 - adjust_lapic_timer_advance(vcpu, guest_tsc - tsc_deadline); 1869 - /* 1870 - * If the timer fired early, reread the TSC to account for the 1871 - * overhead of the above adjustment to avoid waiting longer 1872 - * than is necessary. 1873 - */ 1874 - if (guest_tsc < tsc_deadline) 1875 - guest_tsc = kvm_read_l1_tsc(vcpu, rdtsc()); 1876 - } 1857 + adjust_lapic_timer_advance(vcpu, guest_tsc - tsc_deadline); 1858 + 1859 + /* 1860 + * If the timer fired early, reread the TSC to account for the overhead 1861 + * of the above adjustment to avoid waiting longer than is necessary. 
1862 + */ 1863 + if (guest_tsc < tsc_deadline) 1864 + guest_tsc = kvm_read_l1_tsc(vcpu, rdtsc()); 1877 1865 1878 1866 if (guest_tsc < tsc_deadline) 1879 1867 __wait_lapic_expire(vcpu, tsc_deadline - guest_tsc); ··· 2820 2812 return HRTIMER_NORESTART; 2821 2813 } 2822 2814 2823 - int kvm_create_lapic(struct kvm_vcpu *vcpu, int timer_advance_ns) 2815 + int kvm_create_lapic(struct kvm_vcpu *vcpu) 2824 2816 { 2825 2817 struct kvm_lapic *apic; 2826 2818 ··· 2853 2845 hrtimer_init(&apic->lapic_timer.timer, CLOCK_MONOTONIC, 2854 2846 HRTIMER_MODE_ABS_HARD); 2855 2847 apic->lapic_timer.timer.function = apic_timer_fn; 2856 - if (timer_advance_ns == -1) { 2848 + if (lapic_timer_advance) 2857 2849 apic->lapic_timer.timer_advance_ns = LAPIC_TIMER_ADVANCE_NS_INIT; 2858 - lapic_timer_advance_dynamic = true; 2859 - } else { 2860 - apic->lapic_timer.timer_advance_ns = timer_advance_ns; 2861 - lapic_timer_advance_dynamic = false; 2862 - } 2863 2850 2864 2851 /* 2865 2852 * Stuff the APIC ENABLE bit in lieu of temporarily incrementing
+1 -1
arch/x86/kvm/lapic.h
··· 85 85 86 86 struct dest_map; 87 87 88 - int kvm_create_lapic(struct kvm_vcpu *vcpu, int timer_advance_ns); 88 + int kvm_create_lapic(struct kvm_vcpu *vcpu); 89 89 void kvm_free_lapic(struct kvm_vcpu *vcpu); 90 90 91 91 int kvm_apic_has_interrupt(struct kvm_vcpu *vcpu);
+36 -12
arch/x86/kvm/mmu/mmu.c
··· 336 336 #ifdef CONFIG_X86_64 337 337 static void __set_spte(u64 *sptep, u64 spte) 338 338 { 339 + KVM_MMU_WARN_ON(is_ept_ve_possible(spte)); 339 340 WRITE_ONCE(*sptep, spte); 340 341 } 341 342 342 343 static void __update_clear_spte_fast(u64 *sptep, u64 spte) 343 344 { 345 + KVM_MMU_WARN_ON(is_ept_ve_possible(spte)); 344 346 WRITE_ONCE(*sptep, spte); 345 347 } 346 348 347 349 static u64 __update_clear_spte_slow(u64 *sptep, u64 spte) 348 350 { 351 + KVM_MMU_WARN_ON(is_ept_ve_possible(spte)); 349 352 return xchg(sptep, spte); 350 353 } 351 354 ··· 4104 4101 return leaf; 4105 4102 } 4106 4103 4104 + static int get_sptes_lockless(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, 4105 + int *root_level) 4106 + { 4107 + int leaf; 4108 + 4109 + walk_shadow_page_lockless_begin(vcpu); 4110 + 4111 + if (is_tdp_mmu_active(vcpu)) 4112 + leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes, root_level); 4113 + else 4114 + leaf = get_walk(vcpu, addr, sptes, root_level); 4115 + 4116 + walk_shadow_page_lockless_end(vcpu); 4117 + return leaf; 4118 + } 4119 + 4107 4120 /* return true if reserved bit(s) are detected on a valid, non-MMIO SPTE. 
*/ 4108 4121 static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep) 4109 4122 { ··· 4128 4109 int root, leaf, level; 4129 4110 bool reserved = false; 4130 4111 4131 - walk_shadow_page_lockless_begin(vcpu); 4132 - 4133 - if (is_tdp_mmu_active(vcpu)) 4134 - leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes, &root); 4135 - else 4136 - leaf = get_walk(vcpu, addr, sptes, &root); 4137 - 4138 - walk_shadow_page_lockless_end(vcpu); 4139 - 4112 + leaf = get_sptes_lockless(vcpu, addr, sptes, &root); 4140 4113 if (unlikely(leaf < 0)) { 4141 4114 *sptep = 0ull; 4142 4115 return reserved; ··· 4410 4399 if (!kvm_apicv_activated(vcpu->kvm)) 4411 4400 return RET_PF_EMULATE; 4412 4401 } 4413 - 4414 - fault->mmu_seq = vcpu->kvm->mmu_invalidate_seq; 4415 - smp_rmb(); 4416 4402 4417 4403 /* 4418 4404 * Check for a relevant mmu_notifier invalidation event before getting ··· 5928 5920 insn_len); 5929 5921 } 5930 5922 EXPORT_SYMBOL_GPL(kvm_mmu_page_fault); 5923 + 5924 + void kvm_mmu_print_sptes(struct kvm_vcpu *vcpu, gpa_t gpa, const char *msg) 5925 + { 5926 + u64 sptes[PT64_ROOT_MAX_LEVEL + 1]; 5927 + int root_level, leaf, level; 5928 + 5929 + leaf = get_sptes_lockless(vcpu, gpa, sptes, &root_level); 5930 + if (unlikely(leaf < 0)) 5931 + return; 5932 + 5933 + pr_err("%s %llx", msg, gpa); 5934 + for (level = root_level; level >= leaf; level--) 5935 + pr_cont(", spte[%d] = 0x%llx", level, sptes[level]); 5936 + pr_cont("\n"); 5937 + } 5938 + EXPORT_SYMBOL_GPL(kvm_mmu_print_sptes); 5931 5939 5932 5940 static void __kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, 5933 5941 u64 addr, hpa_t root_hpa)
+9
arch/x86/kvm/mmu/spte.h
··· 3 3 #ifndef KVM_X86_MMU_SPTE_H 4 4 #define KVM_X86_MMU_SPTE_H 5 5 6 + #include <asm/vmx.h> 7 + 6 8 #include "mmu.h" 7 9 #include "mmu_internal.h" 8 10 ··· 276 274 static inline bool is_shadow_present_pte(u64 pte) 277 275 { 278 276 return !!(pte & SPTE_MMU_PRESENT_MASK); 277 + } 278 + 279 + static inline bool is_ept_ve_possible(u64 spte) 280 + { 281 + return (shadow_present_mask & VMX_EPT_SUPPRESS_VE_BIT) && 282 + !(spte & VMX_EPT_SUPPRESS_VE_BIT) && 283 + (spte & VMX_EPT_RWX_MASK) != VMX_EPT_MISCONFIG_WX_VALUE; 279 284 } 280 285 281 286 /*
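The new `is_ept_ve_possible()` predicate added to spte.h is self-contained enough to model in userspace. Below is a minimal sketch: the bit values follow the Intel SDM EPT encoding, and `shadow_present_mask` here is a stand-in for the module-level mask KVM computes at setup (with the suppress-VE bit set when EPT #VE is in use). A #VE is possible only when suppress-VE is in play, the SPTE itself does not set the suppress bit, and the SPTE is not the write+execute misconfig pattern KVM uses for MMIO.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VMX_EPT_SUPPRESS_VE_BIT    (1ULL << 63)
#define VMX_EPT_RWX_MASK           0x7ULL
#define VMX_EPT_MISCONFIG_WX_VALUE 0x6ULL /* writable + executable, not readable */

/* Stand-in for KVM's setup-time mask; suppress-VE bit set as when
 * EPT violation #VE is enabled. */
static uint64_t shadow_present_mask = VMX_EPT_SUPPRESS_VE_BIT;

static bool is_ept_ve_possible(uint64_t spte)
{
	return (shadow_present_mask & VMX_EPT_SUPPRESS_VE_BIT) &&
	       !(spte & VMX_EPT_SUPPRESS_VE_BIT) &&
	       (spte & VMX_EPT_RWX_MASK) != VMX_EPT_MISCONFIG_WX_VALUE;
}
```

This is exactly the condition the `KVM_MMU_WARN_ON()` calls in the SPTE write paths assert against: a fully non-present SPTE (all zeroes) would trip it, which is why the TDP MMU hunk below writes `SHADOW_NONPRESENT_VALUE` instead of `0`.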
+2
arch/x86/kvm/mmu/tdp_iter.h
··· 21 21 22 22 static inline u64 kvm_tdp_mmu_write_spte_atomic(tdp_ptep_t sptep, u64 new_spte) 23 23 { 24 + KVM_MMU_WARN_ON(is_ept_ve_possible(new_spte)); 24 25 return xchg(rcu_dereference(sptep), new_spte); 25 26 } 26 27 27 28 static inline void __kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte) 28 29 { 30 + KVM_MMU_WARN_ON(is_ept_ve_possible(new_spte)); 29 31 WRITE_ONCE(*rcu_dereference(sptep), new_spte); 30 32 } 31 33
+1 -1
arch/x86/kvm/mmu/tdp_mmu.c
··· 626 626 * SPTEs. 627 627 */ 628 628 handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte, 629 - 0, iter->level, true); 629 + SHADOW_NONPRESENT_VALUE, iter->level, true); 630 630 631 631 return 0; 632 632 }
+14 -5
arch/x86/kvm/svm/sev.c
··· 779 779 */ 780 780 fpstate_set_confidential(&vcpu->arch.guest_fpu); 781 781 vcpu->arch.guest_state_protected = true; 782 + 783 + /* 784 + * SEV-ES guest mandates LBR Virtualization to be _always_ ON. Enable it 785 + * only after setting guest_state_protected because KVM_SET_MSRS allows 786 + * dynamic toggling of LBRV (for performance reason) on write access to 787 + * MSR_IA32_DEBUGCTLMSR when guest_state_protected is not set. 788 + */ 789 + svm_enable_lbrv(vcpu); 782 790 return 0; 783 791 } 784 792 ··· 2414 2406 if (!boot_cpu_has(X86_FEATURE_SEV_ES)) 2415 2407 goto out; 2416 2408 2409 + if (!lbrv) { 2410 + WARN_ONCE(!boot_cpu_has(X86_FEATURE_LBRV), 2411 + "LBRV must be present for SEV-ES support"); 2412 + goto out; 2413 + } 2414 + 2417 2415 /* Has the system been allocated ASIDs for SEV-ES? */ 2418 2416 if (min_sev_asid == 1) 2419 2417 goto out; ··· 3230 3216 struct kvm_vcpu *vcpu = &svm->vcpu; 3231 3217 3232 3218 svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ES_ENABLE; 3233 - svm->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK; 3234 3219 3235 3220 /* 3236 3221 * An SEV-ES guest requires a VMSA area that is a separate from the ··· 3281 3268 /* Clear intercepts on selected MSRs */ 3282 3269 set_msr_interception(vcpu, svm->msrpm, MSR_EFER, 1, 1); 3283 3270 set_msr_interception(vcpu, svm->msrpm, MSR_IA32_CR_PAT, 1, 1); 3284 - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 1, 1); 3285 - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 1, 1); 3286 - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1); 3287 - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1); 3288 3271 } 3289 3272 3290 3273 void sev_init_vmcb(struct vcpu_svm *svm)
+51 -18
arch/x86/kvm/svm/svm.c
··· 99 99 { .index = MSR_IA32_SPEC_CTRL, .always = false }, 100 100 { .index = MSR_IA32_PRED_CMD, .always = false }, 101 101 { .index = MSR_IA32_FLUSH_CMD, .always = false }, 102 + { .index = MSR_IA32_DEBUGCTLMSR, .always = false }, 102 103 { .index = MSR_IA32_LASTBRANCHFROMIP, .always = false }, 103 104 { .index = MSR_IA32_LASTBRANCHTOIP, .always = false }, 104 105 { .index = MSR_IA32_LASTINTFROMIP, .always = false }, ··· 216 215 module_param(vgif, int, 0444); 217 216 218 217 /* enable/disable LBR virtualization */ 219 - static int lbrv = true; 218 + int lbrv = true; 220 219 module_param(lbrv, int, 0444); 221 220 222 221 static int tsc_scaling = true; ··· 991 990 vmcb_mark_dirty(to_vmcb, VMCB_LBR); 992 991 } 993 992 994 - static void svm_enable_lbrv(struct kvm_vcpu *vcpu) 993 + void svm_enable_lbrv(struct kvm_vcpu *vcpu) 995 994 { 996 995 struct vcpu_svm *svm = to_svm(vcpu); 997 996 ··· 1001 1000 set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1); 1002 1001 set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1); 1003 1002 1003 + if (sev_es_guest(vcpu->kvm)) 1004 + set_msr_interception(vcpu, svm->msrpm, MSR_IA32_DEBUGCTLMSR, 1, 1); 1005 + 1004 1006 /* Move the LBR msrs to the vmcb02 so that the guest can see them. 
*/ 1005 1007 if (is_guest_mode(vcpu)) 1006 1008 svm_copy_lbrs(svm->vmcb, svm->vmcb01.ptr); ··· 1012 1008 static void svm_disable_lbrv(struct kvm_vcpu *vcpu) 1013 1009 { 1014 1010 struct vcpu_svm *svm = to_svm(vcpu); 1011 + 1012 + KVM_BUG_ON(sev_es_guest(vcpu->kvm), vcpu->kvm); 1015 1013 1016 1014 svm->vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK; 1017 1015 set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 0, 0); ··· 2828 2822 return 0; 2829 2823 } 2830 2824 2825 + static bool 2826 + sev_es_prevent_msr_access(struct kvm_vcpu *vcpu, struct msr_data *msr_info) 2827 + { 2828 + return sev_es_guest(vcpu->kvm) && 2829 + vcpu->arch.guest_state_protected && 2830 + svm_msrpm_offset(msr_info->index) != MSR_INVALID && 2831 + !msr_write_intercepted(vcpu, msr_info->index); 2832 + } 2833 + 2831 2834 static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) 2832 2835 { 2833 2836 struct vcpu_svm *svm = to_svm(vcpu); 2837 + 2838 + if (sev_es_prevent_msr_access(vcpu, msr_info)) { 2839 + msr_info->data = 0; 2840 + return -EINVAL; 2841 + } 2834 2842 2835 2843 switch (msr_info->index) { 2836 2844 case MSR_AMD64_TSC_RATIO: ··· 2996 2976 2997 2977 u32 ecx = msr->index; 2998 2978 u64 data = msr->data; 2979 + 2980 + if (sev_es_prevent_msr_access(vcpu, msr)) 2981 + return -EINVAL; 2982 + 2999 2983 switch (ecx) { 3000 2984 case MSR_AMD64_TSC_RATIO: 3001 2985 ··· 3870 3846 struct vcpu_svm *svm = to_svm(vcpu); 3871 3847 3872 3848 /* 3873 - * KVM should never request an NMI window when vNMI is enabled, as KVM 3874 - * allows at most one to-be-injected NMI and one pending NMI, i.e. if 3875 - * two NMIs arrive simultaneously, KVM will inject one and set 3876 - * V_NMI_PENDING for the other. WARN, but continue with the standard 3877 - * single-step approach to try and salvage the pending NMI. 3849 + * If NMIs are outright masked, i.e. 
the vCPU is already handling an 3850 + * NMI, and KVM has not yet intercepted an IRET, then there is nothing 3851 + * more to do at this time as KVM has already enabled IRET intercepts. 3852 + * If KVM has already intercepted IRET, then single-step over the IRET, 3853 + * as NMIs aren't architecturally unmasked until the IRET completes. 3854 + * 3855 + * If vNMI is enabled, KVM should never request an NMI window if NMIs 3856 + * are masked, as KVM allows at most one to-be-injected NMI and one 3857 + * pending NMI. If two NMIs arrive simultaneously, KVM will inject one 3858 + * NMI and set V_NMI_PENDING for the other, but if and only if NMIs are 3859 + * unmasked. KVM _will_ request an NMI window in some situations, e.g. 3860 + * if the vCPU is in an STI shadow or if GIF=0, KVM can't immediately 3861 + * inject the NMI. In those situations, KVM needs to single-step over 3862 + * the STI shadow or intercept STGI. 3878 3863 */ 3879 - WARN_ON_ONCE(is_vnmi_enabled(svm)); 3864 + if (svm_get_nmi_mask(vcpu)) { 3865 + WARN_ON_ONCE(is_vnmi_enabled(svm)); 3880 3866 3881 - if (svm_get_nmi_mask(vcpu) && !svm->awaiting_iret_completion) 3882 - return; /* IRET will cause a vm exit */ 3867 + if (!svm->awaiting_iret_completion) 3868 + return; /* IRET will cause a vm exit */ 3869 + } 3883 3870 3884 3871 /* 3885 3872 * SEV-ES guests are responsible for signaling when a vCPU is ready to ··· 5300 5265 5301 5266 nrips = nrips && boot_cpu_has(X86_FEATURE_NRIPS); 5302 5267 5268 + if (lbrv) { 5269 + if (!boot_cpu_has(X86_FEATURE_LBRV)) 5270 + lbrv = false; 5271 + else 5272 + pr_info("LBR virtualization supported\n"); 5273 + } 5303 5274 /* 5304 5275 * Note, SEV setup consumes npt_enabled and enable_mmio_caching (which 5305 5276 * may be modified by svm_adjust_mmio_mask()), as well as nrips. 
··· 5357 5316 if (!vnmi) { 5358 5317 svm_x86_ops.is_vnmi_pending = NULL; 5359 5318 svm_x86_ops.set_vnmi_pending = NULL; 5360 - } 5361 - 5362 - 5363 - if (lbrv) { 5364 - if (!boot_cpu_has(X86_FEATURE_LBRV)) 5365 - lbrv = false; 5366 - else 5367 - pr_info("LBR virtualization supported\n"); 5368 5319 } 5369 5320 5370 5321 if (!enable_pmu)
+3 -1
arch/x86/kvm/svm/svm.h
··· 30 30 #define IOPM_SIZE PAGE_SIZE * 3 31 31 #define MSRPM_SIZE PAGE_SIZE * 2 32 32 33 - #define MAX_DIRECT_ACCESS_MSRS 47 33 + #define MAX_DIRECT_ACCESS_MSRS 48 34 34 #define MSRPM_OFFSETS 32 35 35 extern u32 msrpm_offsets[MSRPM_OFFSETS] __read_mostly; 36 36 extern bool npt_enabled; ··· 39 39 extern bool intercept_smi; 40 40 extern bool x2avic_enabled; 41 41 extern bool vnmi; 42 + extern int lbrv; 42 43 43 44 /* 44 45 * Clean bits in VMCB. ··· 553 552 void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, u32 *msrpm); 554 553 void svm_vcpu_free_msrpm(u32 *msrpm); 555 554 void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb); 555 + void svm_enable_lbrv(struct kvm_vcpu *vcpu); 556 556 void svm_update_lbrv(struct kvm_vcpu *vcpu); 557 557 558 558 int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);
+5
arch/x86/kvm/vmx/nested.c
··· 2242 2242 vmcs_write64(EPT_POINTER, 2243 2243 construct_eptp(&vmx->vcpu, 0, PT64_ROOT_4LEVEL)); 2244 2244 2245 + if (vmx->ve_info) 2246 + vmcs_write64(VE_INFORMATION_ADDRESS, __pa(vmx->ve_info)); 2247 + 2245 2248 /* All VMFUNCs are currently emulated through L0 vmexits. */ 2246 2249 if (cpu_has_vmx_vmfunc()) 2247 2250 vmcs_write64(VM_FUNCTION_CONTROL, 0); ··· 6232 6229 return true; 6233 6230 else if (is_alignment_check(intr_info) && 6234 6231 !vmx_guest_inject_ac(vcpu)) 6232 + return true; 6233 + else if (is_ve_fault(intr_info)) 6235 6234 return true; 6236 6235 return false; 6237 6236 case EXIT_REASON_EXTERNAL_INTERRUPT:
+9 -2
arch/x86/kvm/vmx/vmx.c
··· 5218 5218 if (is_invalid_opcode(intr_info)) 5219 5219 return handle_ud(vcpu); 5220 5220 5221 - if (KVM_BUG_ON(is_ve_fault(intr_info), vcpu->kvm)) 5222 - return -EIO; 5221 + if (WARN_ON_ONCE(is_ve_fault(intr_info))) { 5222 + struct vmx_ve_information *ve_info = vmx->ve_info; 5223 + 5224 + WARN_ONCE(ve_info->exit_reason != EXIT_REASON_EPT_VIOLATION, 5225 + "Unexpected #VE on VM-Exit reason 0x%x", ve_info->exit_reason); 5226 + dump_vmcs(vcpu); 5227 + kvm_mmu_print_sptes(vcpu, ve_info->guest_physical_address, "#VE"); 5228 + return 1; 5229 + } 5223 5230 5224 5231 error_code = 0; 5225 5232 if (intr_info & INTR_INFO_DELIVER_CODE_MASK)
+1 -10
arch/x86/kvm/x86.c
··· 164 164 static u32 __read_mostly tsc_tolerance_ppm = 250; 165 165 module_param(tsc_tolerance_ppm, uint, 0644); 166 166 167 - /* 168 - * lapic timer advance (tscdeadline mode only) in nanoseconds. '-1' enables 169 - * adaptive tuning starting from default advancement of 1000ns. '0' disables 170 - * advancement entirely. Any other value is used as-is and disables adaptive 171 - * tuning, i.e. allows privileged userspace to set an exact advancement time. 172 - */ 173 - static int __read_mostly lapic_timer_advance_ns = -1; 174 - module_param(lapic_timer_advance_ns, int, 0644); 175 - 176 167 static bool __read_mostly vector_hashing = true; 177 168 module_param(vector_hashing, bool, 0444); 178 169 ··· 12160 12169 if (r < 0) 12161 12170 return r; 12162 12171 12163 - r = kvm_create_lapic(vcpu, lapic_timer_advance_ns); 12172 + r = kvm_create_lapic(vcpu); 12164 12173 if (r < 0) 12165 12174 goto fail_mmu_destroy; 12166 12175
+2 -2
drivers/acpi/ac.c
··· 145 145 dev_name(&adev->dev), event, 146 146 (u32) ac->state); 147 147 acpi_notifier_call_chain(adev, event, (u32) ac->state); 148 - kobject_uevent(&ac->charger->dev.kobj, KOBJ_CHANGE); 148 + power_supply_changed(ac->charger); 149 149 } 150 150 } 151 151 ··· 268 268 if (acpi_ac_get_state(ac)) 269 269 return 0; 270 270 if (old_state != ac->state) 271 - kobject_uevent(&ac->charger->dev.kobj, KOBJ_CHANGE); 271 + power_supply_changed(ac->charger); 272 272 273 273 return 0; 274 274 }
+1 -1
drivers/acpi/apei/einj-core.c
··· 909 909 if (einj_initialized) 910 910 platform_driver_unregister(&einj_driver); 911 911 912 - platform_device_del(einj_dev); 912 + platform_device_unregister(einj_dev); 913 913 } 914 914 915 915 module_init(einj_init);
+7 -2
drivers/acpi/ec.c
··· 1333 1333 if (ec->busy_polling || bits > 8) 1334 1334 acpi_ec_burst_enable(ec); 1335 1335 1336 - for (i = 0; i < bytes; ++i, ++address, ++value) 1336 + for (i = 0; i < bytes; ++i, ++address, ++value) { 1337 1337 result = (function == ACPI_READ) ? 1338 1338 acpi_ec_read(ec, address, value) : 1339 1339 acpi_ec_write(ec, address, *value); 1340 + if (result < 0) 1341 + break; 1342 + } 1340 1343 1341 1344 if (ec->busy_polling || bits > 8) 1342 1345 acpi_ec_burst_disable(ec); ··· 1351 1348 return AE_NOT_FOUND; 1352 1349 case -ETIME: 1353 1350 return AE_TIME; 1354 - default: 1351 + case 0: 1355 1352 return AE_OK; 1353 + default: 1354 + return AE_ERROR; 1356 1355 } 1357 1356 } 1358 1357
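The ec.c change tightens the error translation in two ways: the loop now stops at the first failed byte, and only an explicit `0` maps to `AE_OK`, so an unrecognized errno can no longer fall through the old `default:` into success. A sketch of the resulting mapping (the `AE_*` numeric values here are illustrative stand-ins for the ACPICA definitions):

```c
#include <assert.h>
#include <errno.h>

typedef unsigned int acpi_status;
#define AE_OK        0x0000U
#define AE_ERROR     0x0001U
#define AE_NOT_FOUND 0x0005U
#define AE_TIME      0x0011U

/* Hypothetical helper mirroring the patched switch: recognized errnos
 * get specific ACPI statuses, success must be explicit, and anything
 * else is now a hard AE_ERROR. */
static acpi_status ec_result_to_status(int result)
{
	switch (result) {
	case -ENODEV:
		return AE_NOT_FOUND;
	case -ETIME:
		return AE_TIME;
	case 0:
		return AE_OK;
	default:
		return AE_ERROR; /* previously fell through to AE_OK */
	}
}
```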
+2 -2
drivers/acpi/sbs.c
··· 610 610 if (sbs->charger_exists) { 611 611 acpi_ac_get_present(sbs); 612 612 if (sbs->charger_present != saved_charger_state) 613 - kobject_uevent(&sbs->charger->dev.kobj, KOBJ_CHANGE); 613 + power_supply_changed(sbs->charger); 614 614 } 615 615 616 616 if (sbs->manager_present) { ··· 622 622 acpi_battery_read(bat); 623 623 if (saved_battery_state == bat->present) 624 624 continue; 625 - kobject_uevent(&bat->bat->dev.kobj, KOBJ_CHANGE); 625 + power_supply_changed(bat->bat); 626 626 } 627 627 } 628 628 }
+6 -3
drivers/ata/pata_macio.c
··· 915 915 .sg_tablesize = MAX_DCMDS, 916 916 /* We may not need that strict one */ 917 917 .dma_boundary = ATA_DMA_BOUNDARY, 918 - /* Not sure what the real max is but we know it's less than 64K, let's 919 - * use 64K minus 256 918 + /* 919 + * The SCSI core requires the segment size to cover at least a page, so 920 + * for 64K page size kernels this must be at least 64K. However the 921 + * hardware can't handle 64K, so pata_macio_qc_prep() will split large 922 + * requests. 920 923 */ 921 - .max_segment_size = MAX_DBDMA_SEG, 924 + .max_segment_size = SZ_64K, 922 925 .device_configure = pata_macio_device_configure, 923 926 .sdev_groups = ata_common_sdev_groups, 924 927 .can_queue = ATA_DEF_QUEUE,
+2 -2
drivers/block/null_blk/main.c
··· 1824 1824 dev->queue_mode = NULL_Q_MQ; 1825 1825 } 1826 1826 1827 - dev->blocksize = round_down(dev->blocksize, 512); 1828 - dev->blocksize = clamp_t(unsigned int, dev->blocksize, 512, 4096); 1827 + if (blk_validate_block_size(dev->blocksize)) 1828 + return -EINVAL; 1829 1829 1830 1830 if (dev->use_per_node_hctx) { 1831 1831 if (dev->submit_queues != nr_online_nodes)
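The null_blk change replaces the silent round-down/clamp with the same rejection logic the block core applies elsewhere: `blk_validate_block_size()` accepts only a power-of-two size between 512 bytes and the page size. A userspace sketch, assuming a fixed 4 KiB page:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define SECTOR_MIN        512UL
#define PAGE_SIZE_ASSUMED 4096UL /* assumption: 4 KiB pages */

static bool is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* Mirrors the shape of the kernel's blk_validate_block_size():
 * 0 on success, -EINVAL for anything out of range or misaligned. */
static int validate_block_size(unsigned long bsize)
{
	if (bsize < SECTOR_MIN || bsize > PAGE_SIZE_ASSUMED ||
	    !is_power_of_2(bsize))
		return -EINVAL;
	return 0;
}
```

With the old code, a user-supplied 1000-byte block size was quietly turned into 512; with this check it is rejected outright.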
+1 -1
drivers/char/tpm/tpm.h
··· 28 28 #include <linux/tpm_eventlog.h> 29 29 30 30 #ifdef CONFIG_X86 31 - #include <asm/intel-family.h> 31 + #include <asm/cpu_device_id.h> 32 32 #endif 33 33 34 34 #define TPM_MINOR 224 /* officially assigned */
+2 -1
drivers/char/tpm/tpm_tis_core.c
··· 1020 1020 interrupt = 0; 1021 1021 1022 1022 tpm_tis_write32(priv, reg, ~TPM_GLOBAL_INT_ENABLE & interrupt); 1023 - flush_work(&priv->free_irq_work); 1023 + if (priv->free_irq_work.func) 1024 + flush_work(&priv->free_irq_work); 1024 1025 1025 1026 tpm_tis_clkrun_enable(chip, false); 1026 1027
+1 -1
drivers/char/tpm/tpm_tis_core.h
··· 210 210 static inline bool is_bsw(void) 211 211 { 212 212 #ifdef CONFIG_X86 213 - return ((boot_cpu_data.x86_model == INTEL_FAM6_ATOM_AIRMONT) ? 1 : 0); 213 + return (boot_cpu_data.x86_vfm == INTEL_ATOM_AIRMONT) ? 1 : 0; 214 214 #else 215 215 return false; 216 216 #endif
-8
drivers/clk/sifive/sifive-prci.c
··· 4 4 * Copyright (C) 2020 Zong Li 5 5 */ 6 6 7 - #include <linux/clkdev.h> 8 7 #include <linux/delay.h> 9 8 #include <linux/io.h> 10 9 #include <linux/module.h> ··· 532 533 r = devm_clk_hw_register(dev, &pic->hw); 533 534 if (r) { 534 535 dev_warn(dev, "Failed to register clock %s: %d\n", 535 - init.name, r); 536 - return r; 537 - } 538 - 539 - r = clk_hw_register_clkdev(&pic->hw, pic->name, dev_name(dev)); 540 - if (r) { 541 - dev_warn(dev, "Failed to register clkdev for %s: %d\n", 542 536 init.name, r); 543 537 return r; 544 538 }
+2 -1
drivers/cpufreq/amd-pstate-ut.c
··· 26 26 #include <linux/module.h> 27 27 #include <linux/moduleparam.h> 28 28 #include <linux/fs.h> 29 - #include <linux/amd-pstate.h> 30 29 31 30 #include <acpi/cppc_acpi.h> 31 + 32 + #include "amd-pstate.h" 32 33 33 34 /* 34 35 * Abbreviations:
+34 -2
drivers/cpufreq/amd-pstate.c
··· 36 36 #include <linux/delay.h> 37 37 #include <linux/uaccess.h> 38 38 #include <linux/static_call.h> 39 - #include <linux/amd-pstate.h> 40 39 #include <linux/topology.h> 41 40 42 41 #include <acpi/processor.h> ··· 45 46 #include <asm/processor.h> 46 47 #include <asm/cpufeature.h> 47 48 #include <asm/cpu_device_id.h> 49 + 50 + #include "amd-pstate.h" 48 51 #include "amd-pstate-trace.h" 49 52 50 53 #define AMD_PSTATE_TRANSITION_LATENCY 20000 51 54 #define AMD_PSTATE_TRANSITION_DELAY 1000 52 55 #define CPPC_HIGHEST_PERF_PERFORMANCE 196 53 56 #define CPPC_HIGHEST_PERF_DEFAULT 166 57 + 58 + #define AMD_CPPC_EPP_PERFORMANCE 0x00 59 + #define AMD_CPPC_EPP_BALANCE_PERFORMANCE 0x80 60 + #define AMD_CPPC_EPP_BALANCE_POWERSAVE 0xBF 61 + #define AMD_CPPC_EPP_POWERSAVE 0xFF 62 + 63 + /* 64 + * enum amd_pstate_mode - driver working mode of amd pstate 65 + */ 66 + enum amd_pstate_mode { 67 + AMD_PSTATE_UNDEFINED = 0, 68 + AMD_PSTATE_DISABLE, 69 + AMD_PSTATE_PASSIVE, 70 + AMD_PSTATE_ACTIVE, 71 + AMD_PSTATE_GUIDED, 72 + AMD_PSTATE_MAX, 73 + }; 74 + 75 + static const char * const amd_pstate_mode_string[] = { 76 + [AMD_PSTATE_UNDEFINED] = "undefined", 77 + [AMD_PSTATE_DISABLE] = "disable", 78 + [AMD_PSTATE_PASSIVE] = "passive", 79 + [AMD_PSTATE_ACTIVE] = "active", 80 + [AMD_PSTATE_GUIDED] = "guided", 81 + NULL, 82 + }; 83 + 84 + struct quirk_entry { 85 + u32 nominal_freq; 86 + u32 lowest_freq; 87 + }; 54 88 55 89 /* 56 90 * TODO: We need more time to fine tune processors with shared memory solution ··· 701 669 if (state) 702 670 policy->cpuinfo.max_freq = cpudata->max_freq; 703 671 else 704 - policy->cpuinfo.max_freq = cpudata->nominal_freq; 672 + policy->cpuinfo.max_freq = cpudata->nominal_freq * 1000; 705 673 706 674 policy->max = policy->cpuinfo.max_freq; 707 675
+2 -1
drivers/cpufreq/intel_pstate.c
··· 1153 1153 static void __intel_pstate_update_max_freq(struct cpudata *cpudata, 1154 1154 struct cpufreq_policy *policy) 1155 1155 { 1156 - intel_pstate_get_hwp_cap(cpudata); 1156 + if (hwp_active) 1157 + intel_pstate_get_hwp_cap(cpudata); 1157 1158 1158 1159 policy->cpuinfo.max_freq = READ_ONCE(global.no_turbo) ? 1159 1160 cpudata->pstate.max_freq : cpudata->pstate.turbo_freq;
+9 -9
drivers/cxl/core/region.c
··· 2352 2352 struct device *dev; 2353 2353 int rc; 2354 2354 2355 - switch (mode) { 2356 - case CXL_DECODER_RAM: 2357 - case CXL_DECODER_PMEM: 2358 - break; 2359 - default: 2360 - dev_err(&cxlrd->cxlsd.cxld.dev, "unsupported mode %d\n", mode); 2361 - return ERR_PTR(-EINVAL); 2362 - } 2363 - 2364 2355 cxlr = cxl_region_alloc(cxlrd, id); 2365 2356 if (IS_ERR(cxlr)) 2366 2357 return cxlr; ··· 2405 2414 enum cxl_decoder_mode mode, int id) 2406 2415 { 2407 2416 int rc; 2417 + 2418 + switch (mode) { 2419 + case CXL_DECODER_RAM: 2420 + case CXL_DECODER_PMEM: 2421 + break; 2422 + default: 2423 + dev_err(&cxlrd->cxlsd.cxld.dev, "unsupported mode %d\n", mode); 2424 + return ERR_PTR(-EINVAL); 2425 + } 2408 2426 2409 2427 rc = memregion_alloc(GFP_KERNEL); 2410 2428 if (rc < 0)
+5 -3
drivers/edac/amd64_edac.c
··· 81 81 amd64_warn("%s: error reading F%dx%03x.\n", 82 82 func, PCI_FUNC(pdev->devfn), offset); 83 83 84 - return err; 84 + return pcibios_err_to_errno(err); 85 85 } 86 86 87 87 int __amd64_write_pci_cfg_dword(struct pci_dev *pdev, int offset, ··· 94 94 amd64_warn("%s: error writing to F%dx%03x.\n", 95 95 func, PCI_FUNC(pdev->devfn), offset); 96 96 97 - return err; 97 + return pcibios_err_to_errno(err); 98 98 } 99 99 100 100 /* ··· 1025 1025 } 1026 1026 1027 1027 ret = pci_read_config_dword(pdev, REG_LOCAL_NODE_TYPE_MAP, &tmp); 1028 - if (ret) 1028 + if (ret) { 1029 + ret = pcibios_err_to_errno(ret); 1029 1030 goto out; 1031 + } 1030 1032 1031 1033 gpu_node_map.node_count = FIELD_GET(LNTM_NODE_COUNT, tmp); 1032 1034 gpu_node_map.base_node_id = FIELD_GET(LNTM_BASE_NODE_ID, tmp);
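Both EDAC fixes in this series address the same bug class: `pci_read_config_*()` returns positive PCIBIOS_* codes, which read as "no error" (or a nonsense positive value) to callers expecting negative errnos. A sketch of the conversion the patches route results through, with constants and mappings as found in the kernel's pci.h; treat the exact value-to-errno pairs as illustrative:

```c
#include <assert.h>
#include <errno.h>

#define PCIBIOS_SUCCESSFUL          0x00
#define PCIBIOS_FUNC_NOT_SUPPORTED  0x81
#define PCIBIOS_BAD_VENDOR_ID       0x83
#define PCIBIOS_DEVICE_NOT_FOUND    0x86
#define PCIBIOS_BAD_REGISTER_NUMBER 0x87
#define PCIBIOS_SET_FAILED          0x88
#define PCIBIOS_BUFFER_TOO_SMALL    0x89

/* Sketch of pcibios_err_to_errno(): positive PCIBIOS codes become
 * negative errnos; zero and already-negative values pass through. */
static int pcibios_err_to_errno_sketch(int err)
{
	if (err <= PCIBIOS_SUCCESSFUL)
		return err;

	switch (err) {
	case PCIBIOS_FUNC_NOT_SUPPORTED:
		return -ENOENT;
	case PCIBIOS_BAD_VENDOR_ID:
		return -ENOTTY;
	case PCIBIOS_DEVICE_NOT_FOUND:
		return -ENODEV;
	case PCIBIOS_BAD_REGISTER_NUMBER:
		return -EFAULT;
	case PCIBIOS_SET_FAILED:
		return -EIO;
	case PCIBIOS_BUFFER_TOO_SMALL:
		return -ENOSPC;
	}
	return -ERANGE;
}
```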
+2 -2
drivers/edac/igen6_edac.c
··· 800 800 801 801 rc = pci_read_config_word(imc->pdev, ERRCMD_OFFSET, &errcmd); 802 802 if (rc) 803 - return rc; 803 + return pcibios_err_to_errno(rc); 804 804 805 805 if (enable) 806 806 errcmd |= ERRCMD_CE | ERRSTS_UE; ··· 809 809 810 810 rc = pci_write_config_word(imc->pdev, ERRCMD_OFFSET, errcmd); 811 811 if (rc) 812 - return rc; 812 + return pcibios_err_to_errno(rc); 813 813 814 814 return 0; 815 815 }
+4 -4
drivers/firmware/efi/efi-pstore.c
··· 136 136 &size, record->buf); 137 137 if (status != EFI_SUCCESS) { 138 138 kfree(record->buf); 139 - return -EIO; 139 + return efi_status_to_err(status); 140 140 } 141 141 142 142 /* ··· 189 189 return 0; 190 190 191 191 if (status != EFI_SUCCESS) 192 - return -EIO; 192 + return efi_status_to_err(status); 193 193 194 194 /* skip variables that don't concern us */ 195 195 if (efi_guidcmp(guid, LINUX_EFI_CRASH_GUID)) ··· 227 227 record->size, record->psi->buf, 228 228 true); 229 229 efivar_unlock(); 230 - return status == EFI_SUCCESS ? 0 : -EIO; 230 + return efi_status_to_err(status); 231 231 }; 232 232 233 233 static int efi_pstore_erase(struct pstore_record *record) ··· 238 238 PSTORE_EFI_ATTRIBUTES, 0, NULL); 239 239 240 240 if (status != EFI_SUCCESS && status != EFI_NOT_FOUND) 241 - return -EIO; 241 + return efi_status_to_err(status); 242 242 return 0; 243 243 } 244 244
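efi-pstore now propagates the real firmware failure through `efi_status_to_err()` instead of collapsing everything into `-EIO` (note that erase still treats `EFI_NOT_FOUND` as success before translating). A sketch of a subset of that translation; the status encoding assumes a 64-bit `efi_status_t` with the high bit marking errors, per the UEFI spec:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

typedef uint64_t efi_status_t;

#define EFI_ERR(n)             ((n) | (1ULL << 63)) /* assumption: 64-bit status */
#define EFI_SUCCESS            0
#define EFI_INVALID_PARAMETER  EFI_ERR(2)
#define EFI_DEVICE_ERROR       EFI_ERR(7)
#define EFI_OUT_OF_RESOURCES   EFI_ERR(9)
#define EFI_NOT_FOUND          EFI_ERR(14)
#define EFI_SECURITY_VIOLATION EFI_ERR(26)

/* Sketch of the kernel's efi_status_to_err() mapping (subset). */
static int efi_status_to_err_sketch(efi_status_t status)
{
	switch (status) {
	case EFI_SUCCESS:
		return 0;
	case EFI_INVALID_PARAMETER:
		return -EINVAL;
	case EFI_DEVICE_ERROR:
		return -EIO;
	case EFI_OUT_OF_RESOURCES:
		return -ENOSPC;
	case EFI_NOT_FOUND:
		return -ENOENT;
	case EFI_SECURITY_VIOLATION:
		return -EACCES;
	default:
		return -EINVAL;
	}
}
```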
+1 -1
drivers/firmware/efi/libstub/loongarch.c
··· 41 41 unsigned long __weak kernel_entry_address(unsigned long kernel_addr, 42 42 efi_loaded_image_t *image) 43 43 { 44 - return *(unsigned long *)(kernel_addr + 8) - VMLINUX_LOAD_ADDRESS + kernel_addr; 44 + return *(unsigned long *)(kernel_addr + 8) - PHYSADDR(VMLINUX_LOAD_ADDRESS) + kernel_addr; 45 45 } 46 46 47 47 efi_status_t efi_boot_kernel(void *handle, efi_loaded_image_t *image,
+1
drivers/firmware/efi/libstub/zboot.lds
··· 41 41 } 42 42 43 43 /DISCARD/ : { 44 + *(.discard .discard.*) 44 45 *(.modinfo .init.modinfo) 45 46 } 46 47 }
+6 -7
drivers/firmware/efi/runtime-wrappers.c
··· 213 213 * Calls the appropriate efi_runtime_service() with the appropriate 214 214 * arguments. 215 215 */ 216 - static void efi_call_rts(struct work_struct *work) 216 + static void __nocfi efi_call_rts(struct work_struct *work) 217 217 { 218 218 const union efi_rts_args *args = efi_rts_work.args; 219 219 efi_status_t status = EFI_NOT_FOUND; ··· 435 435 return status; 436 436 } 437 437 438 - static efi_status_t 438 + static efi_status_t __nocfi 439 439 virt_efi_set_variable_nb(efi_char16_t *name, efi_guid_t *vendor, u32 attr, 440 440 unsigned long data_size, void *data) 441 441 { ··· 469 469 return status; 470 470 } 471 471 472 - static efi_status_t 472 + static efi_status_t __nocfi 473 473 virt_efi_query_variable_info_nb(u32 attr, u64 *storage_space, 474 474 u64 *remaining_space, u64 *max_variable_size) 475 475 { ··· 499 499 return status; 500 500 } 501 501 502 - static void virt_efi_reset_system(int reset_type, 503 - efi_status_t status, 504 - unsigned long data_size, 505 - efi_char16_t *data) 502 + static void __nocfi 503 + virt_efi_reset_system(int reset_type, efi_status_t status, 504 + unsigned long data_size, efi_char16_t *data) 506 505 { 507 506 if (down_trylock(&efi_runtime_lock)) { 508 507 pr_warn("failed to invoke the reset_system() runtime service:\n"
+1 -1
drivers/gpio/Kconfig
··· 1576 1576 are "output only" GPIOs. 1577 1577 1578 1578 config GPIO_TQMX86 1579 - tristate "TQ-Systems QTMX86 GPIO" 1579 + tristate "TQ-Systems TQMx86 GPIO" 1580 1580 depends on MFD_TQMX86 || COMPILE_TEST 1581 1581 depends on HAS_IOPORT_MAP 1582 1582 select GPIOLIB_IRQCHIP
+1
drivers/gpio/gpio-gw-pld.c
··· 130 130 }; 131 131 module_i2c_driver(gw_pld_driver); 132 132 133 + MODULE_DESCRIPTION("Gateworks I2C PLD GPIO expander"); 133 134 MODULE_LICENSE("GPL"); 134 135 MODULE_AUTHOR("Linus Walleij <linus.walleij@linaro.org>");
+1
drivers/gpio/gpio-mc33880.c
··· 168 168 module_exit(mc33880_exit); 169 169 170 170 MODULE_AUTHOR("Mocean Laboratories <info@mocean-labs.com>"); 171 + MODULE_DESCRIPTION("MC33880 high-side/low-side switch GPIO driver"); 171 172 MODULE_LICENSE("GPL v2"); 172 173
+1
drivers/gpio/gpio-pcf857x.c
··· 438 438 } 439 439 module_exit(pcf857x_exit); 440 440 441 + MODULE_DESCRIPTION("Driver for pcf857x, pca857x, and pca967x I2C GPIO expanders"); 441 442 MODULE_LICENSE("GPL"); 442 443 MODULE_AUTHOR("David Brownell");
+1
drivers/gpio/gpio-pl061.c
··· 438 438 }; 439 439 module_amba_driver(pl061_gpio_driver); 440 440 441 + MODULE_DESCRIPTION("Driver for the ARM PrimeCell(tm) General Purpose Input/Output (PL061)"); 441 442 MODULE_LICENSE("GPL v2");
+80 -30
drivers/gpio/gpio-tqmx86.c
··· 6 6 * Vadim V.Vlasov <vvlasov@dev.rtsoft.ru> 7 7 */ 8 8 9 + #include <linux/bitmap.h> 9 10 #include <linux/bitops.h> 10 11 #include <linux/errno.h> 11 12 #include <linux/gpio/driver.h> ··· 29 28 #define TQMX86_GPIIC 3 /* GPI Interrupt Configuration Register */ 30 29 #define TQMX86_GPIIS 4 /* GPI Interrupt Status Register */ 31 30 31 + #define TQMX86_GPII_NONE 0 32 32 #define TQMX86_GPII_FALLING BIT(0) 33 33 #define TQMX86_GPII_RISING BIT(1) 34 + /* Stored in irq_type as a trigger type, but not actually valid as a register 35 + * value, so the name doesn't use "GPII" 36 + */ 37 + #define TQMX86_INT_BOTH (BIT(0) | BIT(1)) 34 38 #define TQMX86_GPII_MASK (BIT(0) | BIT(1)) 35 39 #define TQMX86_GPII_BITS 2 40 + /* Stored in irq_type with GPII bits */ 41 + #define TQMX86_INT_UNMASKED BIT(2) 36 42 37 43 struct tqmx86_gpio_data { 38 44 struct gpio_chip chip; 39 45 void __iomem *io_base; 40 46 int irq; 47 + /* Lock must be held for accessing output and irq_type fields */ 41 48 raw_spinlock_t spinlock; 49 + DECLARE_BITMAP(output, TQMX86_NGPIO); 42 50 u8 irq_type[TQMX86_NGPI]; 43 51 }; 44 52 ··· 74 64 { 75 65 struct tqmx86_gpio_data *gpio = gpiochip_get_data(chip); 76 66 unsigned long flags; 77 - u8 val; 78 67 79 68 raw_spin_lock_irqsave(&gpio->spinlock, flags); 80 - val = tqmx86_gpio_read(gpio, TQMX86_GPIOD); 81 - if (value) 82 - val |= BIT(offset); 83 - else 84 - val &= ~BIT(offset); 85 - tqmx86_gpio_write(gpio, val, TQMX86_GPIOD); 69 + __assign_bit(offset, gpio->output, value); 70 + tqmx86_gpio_write(gpio, bitmap_get_value8(gpio->output, 0), TQMX86_GPIOD); 86 71 raw_spin_unlock_irqrestore(&gpio->spinlock, flags); 87 72 } 88 73 ··· 112 107 return GPIO_LINE_DIRECTION_OUT; 113 108 } 114 109 110 + static void tqmx86_gpio_irq_config(struct tqmx86_gpio_data *gpio, int offset) 111 + __must_hold(&gpio->spinlock) 112 + { 113 + u8 type = TQMX86_GPII_NONE, gpiic; 114 + 115 + if (gpio->irq_type[offset] & TQMX86_INT_UNMASKED) { 116 + type = gpio->irq_type[offset] & TQMX86_GPII_MASK; 
117 + 118 + if (type == TQMX86_INT_BOTH) 119 + type = tqmx86_gpio_get(&gpio->chip, offset + TQMX86_NGPO) 120 + ? TQMX86_GPII_FALLING 121 + : TQMX86_GPII_RISING; 122 + } 123 + 124 + gpiic = tqmx86_gpio_read(gpio, TQMX86_GPIIC); 125 + gpiic &= ~(TQMX86_GPII_MASK << (offset * TQMX86_GPII_BITS)); 126 + gpiic |= type << (offset * TQMX86_GPII_BITS); 127 + tqmx86_gpio_write(gpio, gpiic, TQMX86_GPIIC); 128 + } 129 + 115 130 static void tqmx86_gpio_irq_mask(struct irq_data *data) 116 131 { 117 132 unsigned int offset = (data->hwirq - TQMX86_NGPO); 118 133 struct tqmx86_gpio_data *gpio = gpiochip_get_data( 119 134 irq_data_get_irq_chip_data(data)); 120 135 unsigned long flags; 121 - u8 gpiic, mask; 122 - 123 - mask = TQMX86_GPII_MASK << (offset * TQMX86_GPII_BITS); 124 136 125 137 raw_spin_lock_irqsave(&gpio->spinlock, flags); 126 - gpiic = tqmx86_gpio_read(gpio, TQMX86_GPIIC); 127 - gpiic &= ~mask; 128 - tqmx86_gpio_write(gpio, gpiic, TQMX86_GPIIC); 138 + gpio->irq_type[offset] &= ~TQMX86_INT_UNMASKED; 139 + tqmx86_gpio_irq_config(gpio, offset); 129 140 raw_spin_unlock_irqrestore(&gpio->spinlock, flags); 141 + 130 142 gpiochip_disable_irq(&gpio->chip, irqd_to_hwirq(data)); 131 143 } 132 144 ··· 153 131 struct tqmx86_gpio_data *gpio = gpiochip_get_data( 154 132 irq_data_get_irq_chip_data(data)); 155 133 unsigned long flags; 156 - u8 gpiic, mask; 157 - 158 - mask = TQMX86_GPII_MASK << (offset * TQMX86_GPII_BITS); 159 134 160 135 gpiochip_enable_irq(&gpio->chip, irqd_to_hwirq(data)); 136 + 161 137 raw_spin_lock_irqsave(&gpio->spinlock, flags); 162 - gpiic = tqmx86_gpio_read(gpio, TQMX86_GPIIC); 163 - gpiic &= ~mask; 164 - gpiic |= gpio->irq_type[offset] << (offset * TQMX86_GPII_BITS); 165 - tqmx86_gpio_write(gpio, gpiic, TQMX86_GPIIC); 138 + gpio->irq_type[offset] |= TQMX86_INT_UNMASKED; 139 + tqmx86_gpio_irq_config(gpio, offset); 166 140 raw_spin_unlock_irqrestore(&gpio->spinlock, flags); 167 141 } 168 142 ··· 169 151 unsigned int offset = (data->hwirq - TQMX86_NGPO); 170 152 
unsigned int edge_type = type & IRQF_TRIGGER_MASK; 171 153 unsigned long flags; 172 - u8 new_type, gpiic; 154 + u8 new_type; 173 155 174 156 switch (edge_type) { 175 157 case IRQ_TYPE_EDGE_RISING: ··· 179 161 new_type = TQMX86_GPII_FALLING; 180 162 break; 181 163 case IRQ_TYPE_EDGE_BOTH: 182 - new_type = TQMX86_GPII_FALLING | TQMX86_GPII_RISING; 164 + new_type = TQMX86_INT_BOTH; 183 165 break; 184 166 default: 185 167 return -EINVAL; /* not supported */ 186 168 } 187 169 188 - gpio->irq_type[offset] = new_type; 189 - 190 170 raw_spin_lock_irqsave(&gpio->spinlock, flags); 191 - gpiic = tqmx86_gpio_read(gpio, TQMX86_GPIIC); 192 - gpiic &= ~((TQMX86_GPII_MASK) << (offset * TQMX86_GPII_BITS)); 193 - gpiic |= new_type << (offset * TQMX86_GPII_BITS); 194 - tqmx86_gpio_write(gpio, gpiic, TQMX86_GPIIC); 171 + gpio->irq_type[offset] &= ~TQMX86_GPII_MASK; 172 + gpio->irq_type[offset] |= new_type; 173 + tqmx86_gpio_irq_config(gpio, offset); 195 174 raw_spin_unlock_irqrestore(&gpio->spinlock, flags); 196 175 197 176 return 0; ··· 199 184 struct gpio_chip *chip = irq_desc_get_handler_data(desc); 200 185 struct tqmx86_gpio_data *gpio = gpiochip_get_data(chip); 201 186 struct irq_chip *irq_chip = irq_desc_get_chip(desc); 202 - unsigned long irq_bits; 203 - int i = 0; 187 + unsigned long irq_bits, flags; 188 + int i; 204 189 u8 irq_status; 205 190 206 191 chained_irq_enter(irq_chip, desc); ··· 209 194 tqmx86_gpio_write(gpio, irq_status, TQMX86_GPIIS); 210 195 211 196 irq_bits = irq_status; 197 + 198 + raw_spin_lock_irqsave(&gpio->spinlock, flags); 199 + for_each_set_bit(i, &irq_bits, TQMX86_NGPI) { 200 + /* 201 + * Edge-both triggers are implemented by flipping the edge 202 + * trigger after each interrupt, as the controller only supports 203 + * either rising or falling edge triggers, but not both. 204 + * 205 + * Internally, the TQMx86 GPIO controller has separate status 206 + * registers for rising and falling edge interrupts. 
GPIIC 207 + * configures which bits from which register are visible in the 208 + * interrupt status register GPIIS and defines what triggers the 209 + * parent IRQ line. Writing to GPIIS always clears both rising 210 + * and falling interrupt flags internally, regardless of the 211 + * currently configured trigger. 212 + * 213 + * In consequence, we can cleanly implement the edge-both 214 + * trigger in software by first clearing the interrupt and then 215 + * setting the new trigger based on the current GPIO input in 216 + * tqmx86_gpio_irq_config() - even if an edge arrives between 217 + * reading the input and setting the trigger, we will have a new 218 + * interrupt pending. 219 + */ 220 + if ((gpio->irq_type[i] & TQMX86_GPII_MASK) == TQMX86_INT_BOTH) 221 + tqmx86_gpio_irq_config(gpio, i); 222 + } 223 + raw_spin_unlock_irqrestore(&gpio->spinlock, flags); 224 + 212 225 for_each_set_bit(i, &irq_bits, TQMX86_NGPI) 213 226 generic_handle_domain_irq(gpio->chip.irq.domain, 214 227 i + TQMX86_NGPO); ··· 319 276 gpio->io_base = io_base; 320 277 321 278 tqmx86_gpio_write(gpio, (u8)~TQMX86_DIR_INPUT_MASK, TQMX86_GPIODD); 279 + 280 + /* 281 + * Reading the previous output state is not possible with TQMx86 hardware. 282 + * Initialize all outputs to 0 to have a defined state that matches the 283 + * shadow register. 284 + */ 285 + tqmx86_gpio_write(gpio, 0, TQMX86_GPIOD); 322 286 323 287 chip = &gpio->chip; 324 288 chip->label = "gpio-tqmx86";
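The edge-both emulation described in the tqmx86 comments above reduces to one small decision: after servicing or reconfiguring an interrupt, arm the controller for whichever single edge can occur next given the current pin level. A sketch using the driver's register values:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TQMX86_GPII_FALLING 0x1 /* BIT(0) */
#define TQMX86_GPII_RISING  0x2 /* BIT(1) */

/* If the input currently reads high, the next observable event is a
 * falling edge, and vice versa. In the driver this runs under the
 * spinlock so the level read and the re-arm are atomic w.r.t. the
 * interrupt handler. */
static uint8_t tqmx86_next_edge(bool input_high)
{
	return input_high ? TQMX86_GPII_FALLING : TQMX86_GPII_RISING;
}
```

The race window between reading the level and writing the new trigger is benign, as the comment in the handler explains: writing GPIIS clears both internal edge flags, so an edge landing in that window simply raises a fresh interrupt under the newly armed trigger.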
+49 -42
drivers/gpu/drm/amd/include/pptable.h
··· 477 477 } ATOM_PPLIB_STATE_V2; 478 478 479 479 typedef struct _StateArray{ 480 - //how many states we have 481 - UCHAR ucNumEntries; 482 - 483 - ATOM_PPLIB_STATE_V2 states[1]; 480 + //how many states we have 481 + UCHAR ucNumEntries; 482 + 483 + ATOM_PPLIB_STATE_V2 states[] /* __counted_by(ucNumEntries) */; 484 484 }StateArray; 485 485 486 486 487 487 typedef struct _ClockInfoArray{ 488 - //how many clock levels we have 489 - UCHAR ucNumEntries; 490 - 491 - //sizeof(ATOM_PPLIB_CLOCK_INFO) 492 - UCHAR ucEntrySize; 493 - 494 - UCHAR clockInfo[]; 488 + //how many clock levels we have 489 + UCHAR ucNumEntries; 490 + 491 + //sizeof(ATOM_PPLIB_CLOCK_INFO) 492 + UCHAR ucEntrySize; 493 + 494 + UCHAR clockInfo[]; 495 495 }ClockInfoArray; 496 496 497 497 typedef struct _NonClockInfoArray{ 498 + //how many non-clock levels we have. normally should be same as number of states 499 + UCHAR ucNumEntries; 500 + //sizeof(ATOM_PPLIB_NONCLOCK_INFO) 501 + UCHAR ucEntrySize; 498 502 499 - //how many non-clock levels we have. normally should be same as number of states 500 - UCHAR ucNumEntries; 501 - //sizeof(ATOM_PPLIB_NONCLOCK_INFO) 502 - UCHAR ucEntrySize; 503 - 504 - ATOM_PPLIB_NONCLOCK_INFO nonClockInfo[]; 503 + ATOM_PPLIB_NONCLOCK_INFO nonClockInfo[] __counted_by(ucNumEntries); 505 504 }NonClockInfoArray; 506 505 507 506 typedef struct _ATOM_PPLIB_Clock_Voltage_Dependency_Record ··· 512 513 513 514 typedef struct _ATOM_PPLIB_Clock_Voltage_Dependency_Table 514 515 { 515 - UCHAR ucNumEntries; // Number of entries. 516 - ATOM_PPLIB_Clock_Voltage_Dependency_Record entries[1]; // Dynamically allocate entries. 516 + // Number of entries. 517 + UCHAR ucNumEntries; 518 + // Dynamically allocate entries. 
519 + ATOM_PPLIB_Clock_Voltage_Dependency_Record entries[] __counted_by(ucNumEntries); 517 520 }ATOM_PPLIB_Clock_Voltage_Dependency_Table; 518 521 519 522 typedef struct _ATOM_PPLIB_Clock_Voltage_Limit_Record ··· 530 529 531 530 typedef struct _ATOM_PPLIB_Clock_Voltage_Limit_Table 532 531 { 533 - UCHAR ucNumEntries; // Number of entries. 534 - ATOM_PPLIB_Clock_Voltage_Limit_Record entries[1]; // Dynamically allocate entries. 532 + // Number of entries. 533 + UCHAR ucNumEntries; 534 + // Dynamically allocate entries. 535 + ATOM_PPLIB_Clock_Voltage_Limit_Record entries[] __counted_by(ucNumEntries); 535 536 }ATOM_PPLIB_Clock_Voltage_Limit_Table; 536 537 537 538 union _ATOM_PPLIB_CAC_Leakage_Record ··· 556 553 557 554 typedef struct _ATOM_PPLIB_CAC_Leakage_Table 558 555 { 559 - UCHAR ucNumEntries; // Number of entries. 560 - ATOM_PPLIB_CAC_Leakage_Record entries[1]; // Dynamically allocate entries. 556 + // Number of entries. 557 + UCHAR ucNumEntries; 558 + // Dynamically allocate entries. 559 + ATOM_PPLIB_CAC_Leakage_Record entries[] __counted_by(ucNumEntries); 561 560 }ATOM_PPLIB_CAC_Leakage_Table; 562 561 563 562 typedef struct _ATOM_PPLIB_PhaseSheddingLimits_Record ··· 573 568 574 569 typedef struct _ATOM_PPLIB_PhaseSheddingLimits_Table 575 570 { 576 - UCHAR ucNumEntries; // Number of entries. 577 - ATOM_PPLIB_PhaseSheddingLimits_Record entries[1]; // Dynamically allocate entries. 571 + // Number of entries. 572 + UCHAR ucNumEntries; 573 + // Dynamically allocate entries. 
574 + ATOM_PPLIB_PhaseSheddingLimits_Record entries[] __counted_by(ucNumEntries); 578 575 }ATOM_PPLIB_PhaseSheddingLimits_Table; 579 576 580 577 typedef struct _VCEClockInfo{ ··· 587 580 }VCEClockInfo; 588 581 589 582 typedef struct _VCEClockInfoArray{ 590 - UCHAR ucNumEntries; 591 - VCEClockInfo entries[1]; 583 + UCHAR ucNumEntries; 584 + VCEClockInfo entries[] __counted_by(ucNumEntries); 592 585 }VCEClockInfoArray; 593 586 594 587 typedef struct _ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record ··· 599 592 600 593 typedef struct _ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table 601 594 { 602 - UCHAR numEntries; 603 - ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record entries[1]; 595 + UCHAR numEntries; 596 + ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record entries[] __counted_by(numEntries); 604 597 }ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table; 605 598 606 599 typedef struct _ATOM_PPLIB_VCE_State_Record ··· 611 604 612 605 typedef struct _ATOM_PPLIB_VCE_State_Table 613 606 { 614 - UCHAR numEntries; 615 - ATOM_PPLIB_VCE_State_Record entries[1]; 607 + UCHAR numEntries; 608 + ATOM_PPLIB_VCE_State_Record entries[] __counted_by(numEntries); 616 609 }ATOM_PPLIB_VCE_State_Table; 617 610 618 611 ··· 633 626 }UVDClockInfo; 634 627 635 628 typedef struct _UVDClockInfoArray{ 636 - UCHAR ucNumEntries; 637 - UVDClockInfo entries[1]; 629 + UCHAR ucNumEntries; 630 + UVDClockInfo entries[] __counted_by(ucNumEntries); 638 631 }UVDClockInfoArray; 639 632 640 633 typedef struct _ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record ··· 645 638 646 639 typedef struct _ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table 647 640 { 648 - UCHAR numEntries; 649 - ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record entries[1]; 641 + UCHAR numEntries; 642 + ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record entries[] __counted_by(numEntries); 650 643 }ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table; 651 644 652 645 typedef struct _ATOM_PPLIB_UVD_Table ··· 664 657 }ATOM_PPLIB_SAMClk_Voltage_Limit_Record; 665 658 666 659 typedef struct 
_ATOM_PPLIB_SAMClk_Voltage_Limit_Table{ 667 - UCHAR numEntries; 668 - ATOM_PPLIB_SAMClk_Voltage_Limit_Record entries[]; 660 + UCHAR numEntries; 661 + ATOM_PPLIB_SAMClk_Voltage_Limit_Record entries[] __counted_by(numEntries); 669 662 }ATOM_PPLIB_SAMClk_Voltage_Limit_Table; 670 663 671 664 typedef struct _ATOM_PPLIB_SAMU_Table ··· 682 675 }ATOM_PPLIB_ACPClk_Voltage_Limit_Record; 683 676 684 677 typedef struct _ATOM_PPLIB_ACPClk_Voltage_Limit_Table{ 685 - UCHAR numEntries; 686 - ATOM_PPLIB_ACPClk_Voltage_Limit_Record entries[1]; 678 + UCHAR numEntries; 679 + ATOM_PPLIB_ACPClk_Voltage_Limit_Record entries[] __counted_by(numEntries); 687 680 }ATOM_PPLIB_ACPClk_Voltage_Limit_Table; 688 681 689 682 typedef struct _ATOM_PPLIB_ACP_Table ··· 750 743 } ATOM_PPLIB_VQ_Budgeting_Record; 751 744 752 745 typedef struct ATOM_PPLIB_VQ_Budgeting_Table { 753 - UCHAR revid; 754 - UCHAR numEntries; 755 - ATOM_PPLIB_VQ_Budgeting_Record entries[1]; 746 + UCHAR revid; 747 + UCHAR numEntries; 748 + ATOM_PPLIB_VQ_Budgeting_Record entries[] __counted_by(numEntries); 756 749 } ATOM_PPLIB_VQ_Budgeting_Table; 757 750 758 751 #pragma pack()
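The pptable.h hunks above convert one-element trailing arrays into C99 flexible array members annotated with `__counted_by()`, which ties the array to its count field so the compiler and fortified accessors can bounds-check it. A self-contained sketch of the pattern, with a fallback macro for compilers lacking the attribute (struct and helper names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

/* __counted_by() ties a flexible array member to its length field for
 * compiler bounds checking; define it away where unsupported. */
#ifndef __counted_by
#define __counted_by(member)
#endif

struct entry_table {
	unsigned char num_entries;
	unsigned int entries[] __counted_by(num_entries);
};

/* Allocate the header plus exactly n trailing entries. */
static struct entry_table *alloc_table(unsigned char n)
{
	struct entry_table *t = malloc(sizeof(*t) + n * sizeof(t->entries[0]));
	if (t)
		t->num_entries = n;
	return t;
}
```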
+11 -9
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
··· 226 226 struct amdgpu_device *adev = smu->adev; 227 227 int ret = 0; 228 228 229 - if (!en && adev->in_s4) { 230 - /* Adds a GFX reset as workaround just before sending the 231 - * MP1_UNLOAD message to prevent GC/RLC/PMFW from entering 232 - * an invalid state. 233 - */ 234 - ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_GfxDeviceDriverReset, 235 - SMU_RESET_MODE_2, NULL); 236 - if (ret) 237 - return ret; 229 + if (!en && !adev->in_s0ix) { 230 + if (adev->in_s4) { 231 + /* Adds a GFX reset as workaround just before sending the 232 + * MP1_UNLOAD message to prevent GC/RLC/PMFW from entering 233 + * an invalid state. 234 + */ 235 + ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_GfxDeviceDriverReset, 236 + SMU_RESET_MODE_2, NULL); 237 + if (ret) 238 + return ret; 239 + } 238 240 239 241 ret = smu_cmn_send_smc_msg(smu, SMU_MSG_PrepareMp1ForUnload, NULL); 240 242 }
-5
drivers/gpu/drm/arm/display/komeda/komeda_color_mgmt.c
··· 72 72 u32 segment_width; 73 73 }; 74 74 75 - struct gamma_curve_segment { 76 - u32 start; 77 - u32 end; 78 - }; 79 - 80 75 static struct gamma_curve_sector sector_tbl[] = { 81 76 { 0, 4, 4 }, 82 77 { 16, 4, 4 },
+3 -1
drivers/gpu/drm/panel/panel-sitronix-st7789v.c
··· 643 643 if (ret) 644 644 return dev_err_probe(dev, ret, "Failed to get backlight\n"); 645 645 646 - of_drm_get_panel_orientation(spi->dev.of_node, &ctx->orientation); 646 + ret = of_drm_get_panel_orientation(spi->dev.of_node, &ctx->orientation); 647 + if (ret) 648 + return dev_err_probe(&spi->dev, ret, "Failed to get orientation\n"); 647 649 648 650 drm_panel_add(&ctx->panel); 649 651
+6 -13
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 746 746 dev->vram_size = pci_resource_len(pdev, 2); 747 747 748 748 drm_info(&dev->drm, 749 - "Register MMIO at 0x%pa size is %llu kiB\n", 749 + "Register MMIO at 0x%pa size is %llu KiB\n", 750 750 &rmmio_start, (uint64_t)rmmio_size / 1024); 751 751 dev->rmmio = devm_ioremap(dev->drm.dev, 752 752 rmmio_start, ··· 765 765 fifo_size = pci_resource_len(pdev, 2); 766 766 767 767 drm_info(&dev->drm, 768 - "FIFO at %pa size is %llu kiB\n", 768 + "FIFO at %pa size is %llu KiB\n", 769 769 &fifo_start, (uint64_t)fifo_size / 1024); 770 770 dev->fifo_mem = devm_memremap(dev->drm.dev, 771 771 fifo_start, ··· 790 790 * SVGA_REG_VRAM_SIZE. 791 791 */ 792 792 drm_info(&dev->drm, 793 - "VRAM at %pa size is %llu kiB\n", 793 + "VRAM at %pa size is %llu KiB\n", 794 794 &dev->vram_start, (uint64_t)dev->vram_size / 1024); 795 795 796 796 return 0; ··· 960 960 vmw_read(dev_priv, 961 961 SVGA_REG_SUGGESTED_GBOBJECT_MEM_SIZE_KB); 962 962 963 - /* 964 - * Workaround for low memory 2D VMs to compensate for the 965 - * allocation taken by fbdev 966 - */ 967 - if (!(dev_priv->capabilities & SVGA_CAP_3D)) 968 - mem_size *= 3; 969 - 970 963 dev_priv->max_mob_pages = mem_size * 1024 / PAGE_SIZE; 971 964 dev_priv->max_primary_mem = 972 965 vmw_read(dev_priv, SVGA_REG_MAX_PRIMARY_MEM); ··· 984 991 dev_priv->max_primary_mem = dev_priv->vram_size; 985 992 } 986 993 drm_info(&dev_priv->drm, 987 - "Legacy memory limits: VRAM = %llu kB, FIFO = %llu kB, surface = %u kB\n", 994 + "Legacy memory limits: VRAM = %llu KiB, FIFO = %llu KiB, surface = %u KiB\n", 988 995 (u64)dev_priv->vram_size / 1024, 989 996 (u64)dev_priv->fifo_mem_size / 1024, 990 997 dev_priv->memory_size / 1024); 991 998 992 999 drm_info(&dev_priv->drm, 993 - "MOB limits: max mob size = %u kB, max mob pages = %u\n", 1000 + "MOB limits: max mob size = %u KiB, max mob pages = %u\n", 994 1001 dev_priv->max_mob_size / 1024, dev_priv->max_mob_pages); 995 1002 996 1003 ret = vmw_dma_masks(dev_priv); ··· 1008 1015 
(unsigned)dev_priv->max_gmr_pages); 1009 1016 } 1010 1017 drm_info(&dev_priv->drm, 1011 - "Maximum display memory size is %llu kiB\n", 1018 + "Maximum display memory size is %llu KiB\n", 1012 1019 (uint64_t)dev_priv->max_primary_mem / 1024); 1013 1020 1014 1021 /* Need mmio memory to check for fifo pitchlock cap. */
-3
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 1043 1043 int vmw_kms_write_svga(struct vmw_private *vmw_priv, 1044 1044 unsigned width, unsigned height, unsigned pitch, 1045 1045 unsigned bpp, unsigned depth); 1046 - bool vmw_kms_validate_mode_vram(struct vmw_private *dev_priv, 1047 - uint32_t pitch, 1048 - uint32_t height); 1049 1046 int vmw_kms_present(struct vmw_private *dev_priv, 1050 1047 struct drm_file *file_priv, 1051 1048 struct vmw_framebuffer *vfb,
+2 -2
drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
··· 94 94 } else 95 95 new_max_pages = gman->max_gmr_pages * 2; 96 96 if (new_max_pages > gman->max_gmr_pages && new_max_pages >= gman->used_gmr_pages) { 97 - DRM_WARN("vmwgfx: increasing guest mob limits to %u kB.\n", 97 + DRM_WARN("vmwgfx: increasing guest mob limits to %u KiB.\n", 98 98 ((new_max_pages) << (PAGE_SHIFT - 10))); 99 99 100 100 gman->max_gmr_pages = new_max_pages; 101 101 } else { 102 102 char buf[256]; 103 103 snprintf(buf, sizeof(buf), 104 - "vmwgfx, error: guest graphics is out of memory (mob limit at: %ukB).\n", 104 + "vmwgfx, error: guest graphics is out of memory (mob limit at: %u KiB).\n", 105 105 ((gman->max_gmr_pages) << (PAGE_SHIFT - 10))); 106 106 vmw_host_printf(buf); 107 107 DRM_WARN("%s", buf);
+10 -18
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 224 224 new_image = vmw_du_cursor_plane_acquire_image(new_vps); 225 225 226 226 changed = false; 227 - if (old_image && new_image) 227 + if (old_image && new_image && old_image != new_image) 228 228 changed = memcmp(old_image, new_image, size) != 0; 229 229 230 230 return changed; ··· 2171 2171 return 0; 2172 2172 } 2173 2173 2174 + static 2174 2175 bool vmw_kms_validate_mode_vram(struct vmw_private *dev_priv, 2175 - uint32_t pitch, 2176 - uint32_t height) 2176 + u64 pitch, 2177 + u64 height) 2177 2178 { 2178 - return ((u64) pitch * (u64) height) < (u64) 2179 - ((dev_priv->active_display_unit == vmw_du_screen_target) ? 2180 - dev_priv->max_primary_mem : dev_priv->vram_size); 2179 + return (pitch * height) < (u64)dev_priv->vram_size; 2181 2180 } 2182 2181 2183 2182 /** ··· 2872 2873 enum drm_mode_status vmw_connector_mode_valid(struct drm_connector *connector, 2873 2874 struct drm_display_mode *mode) 2874 2875 { 2876 + enum drm_mode_status ret; 2875 2877 struct drm_device *dev = connector->dev; 2876 2878 struct vmw_private *dev_priv = vmw_priv(dev); 2877 - u32 max_width = dev_priv->texture_max_width; 2878 - u32 max_height = dev_priv->texture_max_height; 2879 2879 u32 assumed_cpp = 4; 2880 2880 2881 2881 if (dev_priv->assume_16bpp) 2882 2882 assumed_cpp = 2; 2883 2883 2884 - if (dev_priv->active_display_unit == vmw_du_screen_target) { 2885 - max_width = min(dev_priv->stdu_max_width, max_width); 2886 - max_height = min(dev_priv->stdu_max_height, max_height); 2887 - } 2888 - 2889 - if (max_width < mode->hdisplay) 2890 - return MODE_BAD_HVALUE; 2891 - 2892 - if (max_height < mode->vdisplay) 2893 - return MODE_BAD_VVALUE; 2884 + ret = drm_mode_validate_size(mode, dev_priv->texture_max_width, 2885 + dev_priv->texture_max_height); 2886 + if (ret != MODE_OK) 2887 + return ret; 2894 2888 2895 2889 if (!vmw_kms_validate_mode_vram(dev_priv, 2896 2890 mode->hdisplay * assumed_cpp,
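One motivation for widening `vmw_kms_validate_mode_vram()`'s parameters to u64, sketched below: if the pitch and height ever reach the multiplication as 32-bit values, the product wraps for large virtual modes and an oversized mode can pass the VRAM check. This is an illustrative reconstruction of the bug class, not the driver's exact call path:

```c
#include <assert.h>
#include <stdint.h>

/* Buggy variant: the 32-bit product wraps before the widening cast. */
static int mode_fits_u32(uint32_t pitch, uint32_t height, uint64_t vram)
{
	return (uint64_t)(pitch * height) < vram;
}

/* Fixed variant: promote to u64 first, so the comparison is exact. */
static int mode_fits_u64(uint64_t pitch, uint64_t height, uint64_t vram)
{
	return pitch * height < vram;
}
```

With pitch 131072 and height 65536 the 32-bit product is 2^33 mod 2^32 = 0, so the buggy check accepts a mode that needs 8 GiB against a 1 GiB budget.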
+53 -7
drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
··· 43 43 #define vmw_connector_to_stdu(x) \ 44 44 container_of(x, struct vmw_screen_target_display_unit, base.connector) 45 45 46 - 46 + /* 47 + * Some renderers such as llvmpipe will align the width and height of their 48 + * buffers to match their tile size. We need to keep this in mind when exposing 49 + * modes to userspace so that this possible over-allocation will not exceed 50 + * graphics memory. 64x64 pixels seems to be a reasonable upper bound for the 51 + * tile size of current renderers. 52 + */ 53 + #define GPU_TILE_SIZE 64 47 54 48 55 enum stdu_content_type { 49 56 SAME_AS_DISPLAY = 0, ··· 90 83 struct vmw_stdu_update { 91 84 SVGA3dCmdHeader header; 92 85 SVGA3dCmdUpdateGBScreenTarget body; 93 - }; 94 - 95 - struct vmw_stdu_dma { 96 - SVGA3dCmdHeader header; 97 - SVGA3dCmdSurfaceDMA body; 98 86 }; 99 87 100 88 struct vmw_stdu_surface_copy { ··· 416 414 { 417 415 struct vmw_private *dev_priv; 418 416 struct vmw_screen_target_display_unit *stdu; 417 + struct drm_crtc_state *new_crtc_state; 419 418 int ret; 420 419 421 420 if (!crtc) { ··· 426 423 427 424 stdu = vmw_crtc_to_stdu(crtc); 428 425 dev_priv = vmw_priv(crtc->dev); 426 + new_crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 429 427 430 428 if (dev_priv->vkms_enabled) 431 429 drm_crtc_vblank_off(crtc); ··· 437 433 DRM_ERROR("Failed to blank CRTC\n"); 438 434 439 435 (void) vmw_stdu_update_st(dev_priv, stdu); 436 + 437 + /* Don't destroy the Screen Target if we are only setting the 438 + * display as inactive 439 + */ 440 + if (new_crtc_state->enable && 441 + !new_crtc_state->active && 442 + !new_crtc_state->mode_changed) 443 + return; 440 444 441 445 ret = vmw_stdu_destroy_st(dev_priv, stdu); 442 446 if (ret) ··· 841 829 vmw_stdu_destroy(vmw_connector_to_stdu(connector)); 842 830 } 843 831 832 + static enum drm_mode_status 833 + vmw_stdu_connector_mode_valid(struct drm_connector *connector, 834 + struct drm_display_mode *mode) 835 + { 836 + enum drm_mode_status ret; 837 + struct 
drm_device *dev = connector->dev; 838 + struct vmw_private *dev_priv = vmw_priv(dev); 839 + u64 assumed_cpp = dev_priv->assume_16bpp ? 2 : 4; 840 + /* Align width and height to account for GPU tile over-alignment */ 841 + u64 required_mem = ALIGN(mode->hdisplay, GPU_TILE_SIZE) * 842 + ALIGN(mode->vdisplay, GPU_TILE_SIZE) * 843 + assumed_cpp; 844 + required_mem = ALIGN(required_mem, PAGE_SIZE); 844 845 846 + ret = drm_mode_validate_size(mode, dev_priv->stdu_max_width, 847 + dev_priv->stdu_max_height); 848 + if (ret != MODE_OK) 849 + return ret; 850 + 851 + ret = drm_mode_validate_size(mode, dev_priv->texture_max_width, 852 + dev_priv->texture_max_height); 853 + if (ret != MODE_OK) 854 + return ret; 855 + 856 + if (required_mem > dev_priv->max_primary_mem) 857 + return MODE_MEM; 858 + 859 + if (required_mem > dev_priv->max_mob_pages * PAGE_SIZE) 860 + return MODE_MEM; 861 + 862 + if (required_mem > dev_priv->max_mob_size) 863 + return MODE_MEM; 864 + 865 + return MODE_OK; 866 + } 845 867 846 868 static const struct drm_connector_funcs vmw_stdu_connector_funcs = { 847 869 .dpms = vmw_du_connector_dpms, ··· 891 845 static const struct 892 846 drm_connector_helper_funcs vmw_stdu_connector_helper_funcs = { 893 847 .get_modes = vmw_connector_get_modes, 894 - .mode_valid = vmw_connector_mode_valid 848 + .mode_valid = vmw_stdu_connector_mode_valid 895 849 }; 896 850 897 851
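The worst-case memory computation in `vmw_stdu_connector_mode_valid()` can be reproduced standalone: round width and height up to the assumed 64-pixel tile size, multiply by bytes per pixel, then page-align the result. A sketch under those assumptions (the `ALIGN`/`PAGE_SIZE` definitions here stand in for the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define GPU_TILE_SIZE 64
#define PAGE_SIZE 4096ULL
#define ALIGN(x, a) (((x) + (a) - 1) / (a) * (a))

/* Worst-case framebuffer size for a mode, assuming the renderer
 * (e.g. llvmpipe) rounds width and height up to a 64x64 tile and the
 * allocation is page-aligned. */
static uint64_t stdu_required_mem(uint64_t hdisplay, uint64_t vdisplay,
				  uint64_t cpp)
{
	uint64_t mem = ALIGN(hdisplay, GPU_TILE_SIZE) *
		       ALIGN(vdisplay, GPU_TILE_SIZE) * cpp;
	return ALIGN(mem, PAGE_SIZE);
}
```

For a 1920x1080 mode at 4 bytes per pixel the height rounds up to 1088, so the mode is charged 1920 * 1088 * 4 = 8355840 bytes rather than the nominal 8294400.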
+1
drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
··· 1749 1749 if (!xe_gt_is_media_type(gt)) { 1750 1750 pf_release_vf_config_ggtt(gt, config); 1751 1751 pf_release_vf_config_lmem(gt, config); 1752 + pf_update_vf_lmtt(gt_to_xe(gt), vfid); 1752 1753 } 1753 1754 pf_release_config_ctxs(gt, config); 1754 1755 pf_release_config_dbs(gt, config);
+2 -2
drivers/hid/hid-asus.c
··· 1204 1204 } 1205 1205 1206 1206 /* match many more n-key devices */ 1207 - if (drvdata->quirks & QUIRK_ROG_NKEY_KEYBOARD) { 1208 - for (int i = 0; i < *rsize + 1; i++) { 1207 + if (drvdata->quirks & QUIRK_ROG_NKEY_KEYBOARD && *rsize > 15) { 1208 + for (int i = 0; i < *rsize - 15; i++) { 1209 1209 /* offset to the count from 0x5a report part always 14 */ 1210 1210 if (rdesc[i] == 0x85 && rdesc[i + 1] == 0x5a && 1211 1211 rdesc[i + 14] == 0x95 && rdesc[i + 15] == 0x05) {
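The hid-asus hunk tightens the scan bound because the match looks ahead up to `rdesc[i + 15]`: the loop must stop at `size - 15` and be skipped entirely for short descriptors. A standalone sketch of the bounded scan (function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Scan a report descriptor for the 0x85 0x5a report-ID pattern without
 * reading past the end. The count byte sits at a fixed offset of 14
 * from the report ID, so the loop bound is size - 15, guarded against
 * unsigned underflow when size <= 15. Returns the offset of the count
 * item, or -1 if not found. */
static int find_5a_count_offset(const uint8_t *rdesc, unsigned int size)
{
	if (size <= 15)
		return -1;
	for (unsigned int i = 0; i < size - 15; i++) {
		if (rdesc[i] == 0x85 && rdesc[i + 1] == 0x5a &&
		    rdesc[i + 14] == 0x95 && rdesc[i + 15] == 0x05)
			return (int)(i + 14);
	}
	return -1;
}
```

The `size <= 15` guard matters because `*rsize` is unsigned: without it, `size - 15` wraps to a huge value for tiny descriptors, which is exactly the out-of-bounds read the patch prevents.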
-1
drivers/hid/hid-core.c
··· 1448 1448 hid_warn(hid, 1449 1449 "%s() called with too large value %d (n: %d)! (%s)\n", 1450 1450 __func__, value, n, current->comm); 1451 - WARN_ON(1); 1452 1451 value &= m; 1453 1452 } 1454 1453 }
+2
drivers/hid/hid-debug.c
··· 3366 3366 [KEY_CAMERA_ACCESS_ENABLE] = "CameraAccessEnable", 3367 3367 [KEY_CAMERA_ACCESS_DISABLE] = "CameraAccessDisable", 3368 3368 [KEY_CAMERA_ACCESS_TOGGLE] = "CameraAccessToggle", 3369 + [KEY_ACCESSIBILITY] = "Accessibility", 3370 + [KEY_DO_NOT_DISTURB] = "DoNotDisturb", 3369 3371 [KEY_DICTATE] = "Dictate", 3370 3372 [KEY_MICMUTE] = "MicrophoneMute", 3371 3373 [KEY_BRIGHTNESS_MIN] = "BrightnessMin",
+2
drivers/hid/hid-ids.h
··· 423 423 #define I2C_DEVICE_ID_HP_SPECTRE_X360_13_AW0020NG 0x29DF 424 424 #define I2C_DEVICE_ID_ASUS_TP420IA_TOUCHSCREEN 0x2BC8 425 425 #define I2C_DEVICE_ID_ASUS_GV301RA_TOUCHSCREEN 0x2C82 426 + #define I2C_DEVICE_ID_ASUS_UX3402_TOUCHSCREEN 0x2F2C 427 + #define I2C_DEVICE_ID_ASUS_UX6404_TOUCHSCREEN 0x4116 426 428 #define USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN 0x2544 427 429 #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706 428 430 #define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN 0x261A
+13
drivers/hid/hid-input.c
··· 377 377 HID_BATTERY_QUIRK_IGNORE }, 378 378 { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_GV301RA_TOUCHSCREEN), 379 379 HID_BATTERY_QUIRK_IGNORE }, 380 + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_UX3402_TOUCHSCREEN), 381 + HID_BATTERY_QUIRK_IGNORE }, 382 + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_UX6404_TOUCHSCREEN), 383 + HID_BATTERY_QUIRK_IGNORE }, 380 384 { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN), 381 385 HID_BATTERY_QUIRK_IGNORE }, 382 386 { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN), ··· 837 833 break; 838 834 } 839 835 836 + if ((usage->hid & 0xf0) == 0x90) { /* SystemControl*/ 837 + switch (usage->hid & 0xf) { 838 + case 0xb: map_key_clear(KEY_DO_NOT_DISTURB); break; 839 + default: goto ignore; 840 + } 841 + break; 842 + } 843 + 840 844 if ((usage->hid & 0xf0) == 0xa0) { /* SystemControl */ 841 845 switch (usage->hid & 0xf) { 842 846 case 0x9: map_key_clear(KEY_MICMUTE); break; 847 + case 0xa: map_key_clear(KEY_ACCESSIBILITY); break; 843 848 default: goto ignore; 844 849 } 845 850 break;
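The hid-input hunk dispatches System Control usages by splitting the usage's low byte into a high "group" nibble (`& 0xf0`) and a selector nibble (`& 0xf`): 0x9b maps to the new KEY_DO_NOT_DISTURB and 0xaa to KEY_ACCESSIBILITY alongside the existing 0xa9 mic-mute mapping. A sketch of the dispatch, using placeholder key codes rather than the real input.h values:

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder key codes, illustrative only. */
enum { KEY_NONE, KEY_DND, KEY_MUTE_MIC, KEY_ACC };

/* Split the usage's low byte into group (0xf0) and selector (0xf)
 * nibbles, mirroring the two switch blocks in the hunk. */
static int map_system_usage(uint32_t hid)
{
	switch (hid & 0xf0) {
	case 0x90:
		return (hid & 0xf) == 0xb ? KEY_DND : KEY_NONE;
	case 0xa0:
		switch (hid & 0xf) {
		case 0x9: return KEY_MUTE_MIC;
		case 0xa: return KEY_ACC;
		}
		return KEY_NONE;
	}
	return KEY_NONE;
}
```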
+3 -1
drivers/hid/hid-logitech-dj.c
··· 1284 1284 */ 1285 1285 msleep(50); 1286 1286 1287 - if (retval) 1287 + if (retval) { 1288 + kfree(dj_report); 1288 1289 return retval; 1290 + } 1289 1291 } 1290 1292 1291 1293 /*
+1
drivers/hid/hid-logitech-hidpp.c
··· 27 27 #include "usbhid/usbhid.h" 28 28 #include "hid-ids.h" 29 29 30 + MODULE_DESCRIPTION("Support for Logitech devices relying on the HID++ specification"); 30 31 MODULE_LICENSE("GPL"); 31 32 MODULE_AUTHOR("Benjamin Tissoires <benjamin.tissoires@gmail.com>"); 32 33 MODULE_AUTHOR("Nestor Lopez Casado <nlopezcasad@logitech.com>");
+4 -2
drivers/hid/hid-nintendo.c
··· 2725 2725 ret = joycon_power_supply_create(ctlr); 2726 2726 if (ret) { 2727 2727 hid_err(hdev, "Failed to create power_supply; ret=%d\n", ret); 2728 - goto err_close; 2728 + goto err_ida; 2729 2729 } 2730 2730 2731 2731 ret = joycon_input_create(ctlr); 2732 2732 if (ret) { 2733 2733 hid_err(hdev, "Failed to create input device; ret=%d\n", ret); 2734 - goto err_close; 2734 + goto err_ida; 2735 2735 } 2736 2736 2737 2737 ctlr->ctlr_state = JOYCON_CTLR_STATE_READ; ··· 2739 2739 hid_dbg(hdev, "probe - success\n"); 2740 2740 return 0; 2741 2741 2742 + err_ida: 2743 + ida_free(&nintendo_player_id_allocator, ctlr->player_id); 2742 2744 err_close: 2743 2745 hid_hw_close(hdev); 2744 2746 err_stop:
+3 -1
drivers/hid/hid-nvidia-shield.c
··· 283 283 return haptics; 284 284 285 285 input_set_capability(haptics, EV_FF, FF_RUMBLE); 286 - input_ff_create_memless(haptics, NULL, play_effect); 286 + ret = input_ff_create_memless(haptics, NULL, play_effect); 287 + if (ret) 288 + goto err; 287 289 288 290 ret = input_register_device(haptics); 289 291 if (ret)
+47 -12
drivers/hid/i2c-hid/i2c-hid-of-elan.c
··· 31 31 struct regulator *vcc33; 32 32 struct regulator *vccio; 33 33 struct gpio_desc *reset_gpio; 34 + bool no_reset_on_power_off; 34 35 const struct elan_i2c_hid_chip_data *chip_data; 35 36 }; 36 37 ··· 41 40 container_of(ops, struct i2c_hid_of_elan, ops); 42 41 int ret; 43 42 43 + gpiod_set_value_cansleep(ihid_elan->reset_gpio, 1); 44 + 44 45 if (ihid_elan->vcc33) { 45 46 ret = regulator_enable(ihid_elan->vcc33); 46 47 if (ret) 47 - return ret; 48 + goto err_deassert_reset; 48 49 } 49 50 50 51 ret = regulator_enable(ihid_elan->vccio); 51 - if (ret) { 52 - regulator_disable(ihid_elan->vcc33); 53 - return ret; 54 - } 52 + if (ret) 53 + goto err_disable_vcc33; 55 54 56 55 if (ihid_elan->chip_data->post_power_delay_ms) 57 56 msleep(ihid_elan->chip_data->post_power_delay_ms); ··· 61 60 msleep(ihid_elan->chip_data->post_gpio_reset_on_delay_ms); 62 61 63 62 return 0; 63 + 64 + err_disable_vcc33: 65 + if (ihid_elan->vcc33) 66 + regulator_disable(ihid_elan->vcc33); 67 + err_deassert_reset: 68 + if (ihid_elan->no_reset_on_power_off) 69 + gpiod_set_value_cansleep(ihid_elan->reset_gpio, 0); 70 + 71 + return ret; 64 72 } 65 73 66 74 static void elan_i2c_hid_power_down(struct i2chid_ops *ops) ··· 77 67 struct i2c_hid_of_elan *ihid_elan = 78 68 container_of(ops, struct i2c_hid_of_elan, ops); 79 69 80 - gpiod_set_value_cansleep(ihid_elan->reset_gpio, 1); 70 + /* 71 + * Do not assert reset when the hardware allows for it to remain 72 + * deasserted regardless of the state of the (shared) power supply to 73 + * avoid wasting power when the supply is left on. 
74 + */ 75 + if (!ihid_elan->no_reset_on_power_off) 76 + gpiod_set_value_cansleep(ihid_elan->reset_gpio, 1); 77 + 81 78 if (ihid_elan->chip_data->post_gpio_reset_off_delay_ms) 82 79 msleep(ihid_elan->chip_data->post_gpio_reset_off_delay_ms); 83 80 ··· 96 79 static int i2c_hid_of_elan_probe(struct i2c_client *client) 97 80 { 98 81 struct i2c_hid_of_elan *ihid_elan; 82 + int ret; 99 83 100 84 ihid_elan = devm_kzalloc(&client->dev, sizeof(*ihid_elan), GFP_KERNEL); 101 85 if (!ihid_elan) ··· 111 93 if (IS_ERR(ihid_elan->reset_gpio)) 112 94 return PTR_ERR(ihid_elan->reset_gpio); 113 95 96 + ihid_elan->no_reset_on_power_off = of_property_read_bool(client->dev.of_node, 97 + "no-reset-on-power-off"); 98 + 114 99 ihid_elan->vccio = devm_regulator_get(&client->dev, "vccio"); 115 - if (IS_ERR(ihid_elan->vccio)) 116 - return PTR_ERR(ihid_elan->vccio); 100 + if (IS_ERR(ihid_elan->vccio)) { 101 + ret = PTR_ERR(ihid_elan->vccio); 102 + goto err_deassert_reset; 103 + } 117 104 118 105 ihid_elan->chip_data = device_get_match_data(&client->dev); 119 106 120 107 if (ihid_elan->chip_data->main_supply_name) { 121 108 ihid_elan->vcc33 = devm_regulator_get(&client->dev, 122 109 ihid_elan->chip_data->main_supply_name); 123 - if (IS_ERR(ihid_elan->vcc33)) 124 - return PTR_ERR(ihid_elan->vcc33); 110 + if (IS_ERR(ihid_elan->vcc33)) { 111 + ret = PTR_ERR(ihid_elan->vcc33); 112 + goto err_deassert_reset; 113 + } 125 114 } 126 115 127 - return i2c_hid_core_probe(client, &ihid_elan->ops, 128 - ihid_elan->chip_data->hid_descriptor_address, 0); 116 + ret = i2c_hid_core_probe(client, &ihid_elan->ops, 117 + ihid_elan->chip_data->hid_descriptor_address, 0); 118 + if (ret) 119 + goto err_deassert_reset; 120 + 121 + return 0; 122 + 123 + err_deassert_reset: 124 + if (ihid_elan->no_reset_on_power_off) 125 + gpiod_set_value_cansleep(ihid_elan->reset_gpio, 0); 126 + 127 + return ret; 129 128 } 130 129 131 130 static const struct elan_i2c_hid_chip_data elan_ekth6915_chip_data = {
+44 -35
drivers/hid/intel-ish-hid/ishtp/loader.c
··· 84 84 static int loader_xfer_cmd(struct ishtp_device *dev, void *req, int req_len, 85 85 void *resp, int resp_len) 86 86 { 87 - struct loader_msg_header *req_hdr = req; 88 - struct loader_msg_header *resp_hdr = resp; 87 + union loader_msg_header req_hdr; 88 + union loader_msg_header resp_hdr; 89 89 struct device *devc = dev->devc; 90 90 int rv; 91 91 ··· 93 93 dev->fw_loader_rx_size = resp_len; 94 94 95 95 rv = loader_write_message(dev, req, req_len); 96 + req_hdr.val32 = le32_to_cpup(req); 97 + 96 98 if (rv < 0) { 97 - dev_err(devc, "write cmd %u failed:%d\n", req_hdr->command, rv); 99 + dev_err(devc, "write cmd %u failed:%d\n", req_hdr.command, rv); 98 100 return rv; 99 101 } 100 102 101 103 /* Wait the ACK */ 102 104 wait_event_interruptible_timeout(dev->wait_loader_recvd_msg, dev->fw_loader_received, 103 105 ISHTP_LOADER_TIMEOUT); 106 + resp_hdr.val32 = le32_to_cpup(resp); 104 107 dev->fw_loader_rx_size = 0; 105 108 dev->fw_loader_rx_buf = NULL; 106 109 if (!dev->fw_loader_received) { 107 - dev_err(devc, "wait response of cmd %u timeout\n", req_hdr->command); 110 + dev_err(devc, "wait response of cmd %u timeout\n", req_hdr.command); 108 111 return -ETIMEDOUT; 109 112 } 110 113 111 - if (!resp_hdr->is_response) { 112 - dev_err(devc, "not a response for %u\n", req_hdr->command); 114 + if (!resp_hdr.is_response) { 115 + dev_err(devc, "not a response for %u\n", req_hdr.command); 113 116 return -EBADMSG; 114 117 } 115 118 116 - if (req_hdr->command != resp_hdr->command) { 117 - dev_err(devc, "unexpected cmd response %u:%u\n", req_hdr->command, 118 - resp_hdr->command); 119 + if (req_hdr.command != resp_hdr.command) { 120 + dev_err(devc, "unexpected cmd response %u:%u\n", req_hdr.command, 121 + resp_hdr.command); 119 122 return -EBADMSG; 120 123 } 121 124 122 - if (resp_hdr->status) { 123 - dev_err(devc, "cmd %u failed %u\n", req_hdr->command, resp_hdr->status); 125 + if (resp_hdr.status) { 126 + dev_err(devc, "cmd %u failed %u\n", req_hdr.command, 
resp_hdr.status); 124 127 return -EIO; 125 128 } 126 129 ··· 141 138 struct loader_xfer_dma_fragment *fragment, 142 139 void **dma_bufs, u32 fragment_size) 143 140 { 141 + dma_addr_t dma_addr; 144 142 int i; 145 143 146 144 for (i = 0; i < FRAGMENT_MAX_NUM; i++) { 147 145 if (dma_bufs[i]) { 148 - dma_free_coherent(dev->devc, fragment_size, dma_bufs[i], 149 - fragment->fragment_tbl[i].ddr_adrs); 146 + dma_addr = le64_to_cpu(fragment->fragment_tbl[i].ddr_adrs); 147 + dma_free_coherent(dev->devc, fragment_size, dma_bufs[i], dma_addr); 150 148 dma_bufs[i] = NULL; 151 149 } 152 150 } ··· 160 156 * @fragment: The ISHTP firmware fragment descriptor 161 157 * @dma_bufs: The array of DMA fragment buffers 162 158 * @fragment_size: The size of a single DMA fragment 159 + * @fragment_count: Number of fragments 163 160 * 164 161 * Return: 0 on success, negative error code on failure 165 162 */ 166 163 static int prepare_dma_bufs(struct ishtp_device *dev, 167 164 const struct firmware *ish_fw, 168 165 struct loader_xfer_dma_fragment *fragment, 169 - void **dma_bufs, u32 fragment_size) 166 + void **dma_bufs, u32 fragment_size, u32 fragment_count) 170 167 { 168 + dma_addr_t dma_addr; 171 169 u32 offset = 0; 170 + u32 length; 172 171 int i; 173 172 174 - for (i = 0; i < fragment->fragment_cnt && offset < ish_fw->size; i++) { 175 - dma_bufs[i] = dma_alloc_coherent(dev->devc, fragment_size, 176 - &fragment->fragment_tbl[i].ddr_adrs, GFP_KERNEL); 173 + for (i = 0; i < fragment_count && offset < ish_fw->size; i++) { 174 + dma_bufs[i] = dma_alloc_coherent(dev->devc, fragment_size, &dma_addr, GFP_KERNEL); 177 175 if (!dma_bufs[i]) 178 176 return -ENOMEM; 179 177 180 - fragment->fragment_tbl[i].length = clamp(ish_fw->size - offset, 0, fragment_size); 181 - fragment->fragment_tbl[i].fw_off = offset; 182 - memcpy(dma_bufs[i], ish_fw->data + offset, fragment->fragment_tbl[i].length); 178 + fragment->fragment_tbl[i].ddr_adrs = cpu_to_le64(dma_addr); 179 + length = clamp(ish_fw->size - offset, 
0, fragment_size); 180 + fragment->fragment_tbl[i].length = cpu_to_le32(length); 181 + fragment->fragment_tbl[i].fw_off = cpu_to_le32(offset); 182 + memcpy(dma_bufs[i], ish_fw->data + offset, length); 183 183 clflush_cache_range(dma_bufs[i], fragment_size); 184 184 185 - offset += fragment->fragment_tbl[i].length; 185 + offset += length; 186 186 } 187 187 188 188 return 0; ··· 214 206 { 215 207 DEFINE_RAW_FLEX(struct loader_xfer_dma_fragment, fragment, fragment_tbl, FRAGMENT_MAX_NUM); 216 208 struct ishtp_device *dev = container_of(work, struct ishtp_device, work_fw_loader); 217 - struct loader_xfer_query query = { 218 - .header.command = LOADER_CMD_XFER_QUERY, 219 - }; 220 - struct loader_start start = { 221 - .header.command = LOADER_CMD_START, 222 - }; 209 + union loader_msg_header query_hdr = { .command = LOADER_CMD_XFER_QUERY, }; 210 + union loader_msg_header start_hdr = { .command = LOADER_CMD_START, }; 211 + union loader_msg_header fragment_hdr = { .command = LOADER_CMD_XFER_FRAGMENT, }; 212 + struct loader_xfer_query query = { .header = cpu_to_le32(query_hdr.val32), }; 213 + struct loader_start start = { .header = cpu_to_le32(start_hdr.val32), }; 223 214 union loader_recv_message recv_msg; 224 215 char *filename = dev->driver_data->fw_filename; 225 216 const struct firmware *ish_fw; 226 217 void *dma_bufs[FRAGMENT_MAX_NUM] = {}; 227 218 u32 fragment_size; 219 + u32 fragment_count; 228 220 int retry = ISHTP_LOADER_RETRY_TIMES; 229 221 int rv; 230 222 ··· 234 226 return; 235 227 } 236 228 237 - fragment->fragment.header.command = LOADER_CMD_XFER_FRAGMENT; 238 - fragment->fragment.xfer_mode = LOADER_XFER_MODE_DMA; 239 - fragment->fragment.is_last = 1; 240 - fragment->fragment.size = ish_fw->size; 229 + fragment->fragment.header = cpu_to_le32(fragment_hdr.val32); 230 + fragment->fragment.xfer_mode = cpu_to_le32(LOADER_XFER_MODE_DMA); 231 + fragment->fragment.is_last = cpu_to_le32(1); 232 + fragment->fragment.size = cpu_to_le32(ish_fw->size); 241 233 /* 
Calculate the size of a single DMA fragment */ 242 234 fragment_size = PFN_ALIGN(DIV_ROUND_UP(ish_fw->size, FRAGMENT_MAX_NUM)); 243 235 /* Calculate the count of DMA fragments */ 244 - fragment->fragment_cnt = DIV_ROUND_UP(ish_fw->size, fragment_size); 236 + fragment_count = DIV_ROUND_UP(ish_fw->size, fragment_size); 237 + fragment->fragment_cnt = cpu_to_le32(fragment_count); 245 238 246 - rv = prepare_dma_bufs(dev, ish_fw, fragment, dma_bufs, fragment_size); 239 + rv = prepare_dma_bufs(dev, ish_fw, fragment, dma_bufs, fragment_size, fragment_count); 247 240 if (rv) { 248 241 dev_err(dev->devc, "prepare DMA buffer failed.\n"); 249 242 goto out; 250 243 } 251 244 252 245 do { 253 - query.image_size = ish_fw->size; 246 + query.image_size = cpu_to_le32(ish_fw->size); 254 247 rv = loader_xfer_cmd(dev, &query, sizeof(query), recv_msg.raw_data, 255 248 sizeof(struct loader_xfer_query_ack)); 256 249 if (rv) ··· 264 255 recv_msg.query_ack.version_build); 265 256 266 257 rv = loader_xfer_cmd(dev, fragment, 267 - struct_size(fragment, fragment_tbl, fragment->fragment_cnt), 258 + struct_size(fragment, fragment_tbl, fragment_count), 268 259 recv_msg.raw_data, sizeof(struct loader_xfer_fragment_ack)); 269 260 if (rv) 270 261 continue; /* try again if failed */
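The loader.c/loader.h changes replace a bitfield struct that was used directly on the wire with a plain `__le32` converted via `le32_to_cpup()`: the on-wire layout is fixed little-endian, so field extraction must not depend on the host's bitfield ordering. The layout (command in bits 0-6, is_response bit 7, status in bits 24-31) can be expressed portably with explicit shifts; helper names below are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Pack the loader header fields into the host-order u32 that would then
 * be converted with cpu_to_le32() before hitting the wire. */
static uint32_t pack_header(uint32_t command, uint32_t is_response,
			    uint32_t status)
{
	return (command & 0x7f) | ((is_response & 1) << 7) |
	       ((status & 0xff) << 24);
}

/* Field extraction from a header already converted with le32_to_cpup(). */
static uint32_t hdr_command(uint32_t val32) { return val32 & 0x7f; }
static uint32_t hdr_status(uint32_t val32)  { return (val32 >> 24) & 0xff; }
```

Unlike the original `__le32` bitfields, these masks mean the same bits on big- and little-endian hosts, which is why the patch routes every access through the 32-bit value.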
+18 -13
drivers/hid/intel-ish-hid/ishtp/loader.h
··· 30 30 #define LOADER_XFER_MODE_DMA BIT(0) 31 31 32 32 /** 33 - * struct loader_msg_header - ISHTP firmware loader message header 33 + * union loader_msg_header - ISHTP firmware loader message header 34 34 * @command: Command type 35 35 * @is_response: Indicates if the message is a response 36 36 * @has_next: Indicates if there is a next message 37 37 * @reserved: Reserved for future use 38 38 * @status: Status of the message 39 + * @val32: entire header as a 32-bit value 39 40 */ 40 - struct loader_msg_header { 41 - __le32 command:7; 42 - __le32 is_response:1; 43 - __le32 has_next:1; 44 - __le32 reserved:15; 45 - __le32 status:8; 41 + union loader_msg_header { 42 + struct { 43 + __u32 command:7; 44 + __u32 is_response:1; 45 + __u32 has_next:1; 46 + __u32 reserved:15; 47 + __u32 status:8; 48 + }; 49 + __u32 val32; 46 50 }; 47 51 48 52 /** ··· 55 51 * @image_size: Size of the image 56 52 */ 57 53 struct loader_xfer_query { 58 - struct loader_msg_header header; 54 + __le32 header; 59 55 __le32 image_size; 60 56 }; 61 57 ··· 107 103 * @capability: Loader capability 108 104 */ 109 105 struct loader_xfer_query_ack { 110 - struct loader_msg_header header; 106 + __le32 header; 111 107 __le16 version_major; 112 108 __le16 version_minor; 113 109 __le16 version_hotfix; ··· 126 122 * @is_last: Is last 127 123 */ 128 124 struct loader_xfer_fragment { 129 - struct loader_msg_header header; 125 + __le32 header; 130 126 __le32 xfer_mode; 131 127 __le32 offset; 132 128 __le32 size; ··· 138 134 * @header: Header of the message 139 135 */ 140 136 struct loader_xfer_fragment_ack { 141 - struct loader_msg_header header; 137 + __le32 header; 142 138 }; 143 139 144 140 /** ··· 174 170 * @header: Header of the message 175 171 */ 176 172 struct loader_start { 177 - struct loader_msg_header header; 173 + __le32 header; 178 174 }; 179 175 180 176 /** ··· 182 178 * @header: Header of the message 183 179 */ 184 180 struct loader_start_ack { 185 - struct loader_msg_header header; 181 + 
__le32 header; 186 182 }; 187 183 188 184 union loader_recv_message { 185 + __le32 header; 189 186 struct loader_xfer_query_ack query_ack; 190 187 struct loader_xfer_fragment_ack fragment_ack; 191 188 struct loader_start_ack start_ack;
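The loader.h change above replaces a bitfield struct used directly in message layouts with a union that overlays the bitfields on a single `val32`, so callers can build the header once and store it as one `__le32` (converted with a single `cpu_to_le32()`). A minimal userspace sketch of that union pattern, with `uint32_t` standing in for the kernel types; note that bitfield layout is implementation-defined, which is exactly why the on-wire value is funneled through `val32`:

```c
/* Userspace sketch of the union-header pattern from loader.h.
 * Bitfield layout is implementation-defined; the kernel relies on a
 * known ABI and converts the whole word with cpu_to_le32() once. */
#include <assert.h>
#include <stdint.h>

union loader_msg_header {
	struct {
		uint32_t command:7;
		uint32_t is_response:1;
		uint32_t has_next:1;
		uint32_t reserved:15;
		uint32_t status:8;
	};
	uint32_t val32;
};

/* Build a header, then treat it as a plain 32-bit word. */
static uint32_t make_header(uint32_t command)
{
	union loader_msg_header hdr = { .command = command };

	return hdr.val32; /* kernel code would apply cpu_to_le32() here */
}
```

On a typical little-endian toolchain the 7-bit `command` field lands in the low bits of `val32`, which is what lets the patch swap `struct loader_msg_header header` for a bare `__le32 header` in every message struct.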
+5 -6
drivers/i2c/busses/i2c-synquacer.c
··· 138 138 int irq; 139 139 struct device *dev; 140 140 void __iomem *base; 141 - struct clk *pclk; 142 141 u32 pclkrate; 143 142 u32 speed_khz; 144 143 u32 timeout_ms; ··· 534 535 static int synquacer_i2c_probe(struct platform_device *pdev) 535 536 { 536 537 struct synquacer_i2c *i2c; 538 + struct clk *pclk; 537 539 u32 bus_speed; 538 540 int ret; 539 541 ··· 550 550 device_property_read_u32(&pdev->dev, "socionext,pclk-rate", 551 551 &i2c->pclkrate); 552 552 553 - i2c->pclk = devm_clk_get_enabled(&pdev->dev, "pclk"); 554 - if (IS_ERR(i2c->pclk)) 555 - return dev_err_probe(&pdev->dev, PTR_ERR(i2c->pclk), 553 + pclk = devm_clk_get_enabled(&pdev->dev, "pclk"); 554 + if (IS_ERR(pclk)) 555 + return dev_err_probe(&pdev->dev, PTR_ERR(pclk), 556 556 "failed to get and enable clock\n"); 557 557 558 - dev_dbg(&pdev->dev, "clock source %p\n", i2c->pclk); 559 - i2c->pclkrate = clk_get_rate(i2c->pclk); 558 + i2c->pclkrate = clk_get_rate(pclk); 560 559 561 560 if (i2c->pclkrate < SYNQUACER_I2C_MIN_CLK_RATE || 562 561 i2c->pclkrate > SYNQUACER_I2C_MAX_CLK_RATE)
+5 -14
drivers/input/touchscreen/silead.c
··· 71 71 struct regulator_bulk_data regulators[2]; 72 72 char fw_name[64]; 73 73 struct touchscreen_properties prop; 74 - u32 max_fingers; 75 74 u32 chip_id; 76 75 struct input_mt_pos pos[SILEAD_MAX_FINGERS]; 77 76 int slots[SILEAD_MAX_FINGERS]; ··· 135 136 touchscreen_parse_properties(data->input, true, &data->prop); 136 137 silead_apply_efi_fw_min_max(data); 137 138 138 - input_mt_init_slots(data->input, data->max_fingers, 139 + input_mt_init_slots(data->input, SILEAD_MAX_FINGERS, 139 140 INPUT_MT_DIRECT | INPUT_MT_DROP_UNUSED | 140 141 INPUT_MT_TRACK); 141 142 ··· 255 256 return; 256 257 } 257 258 258 - if (buf[0] > data->max_fingers) { 259 + if (buf[0] > SILEAD_MAX_FINGERS) { 259 260 dev_warn(dev, "More touches reported then supported %d > %d\n", 260 - buf[0], data->max_fingers); 261 - buf[0] = data->max_fingers; 261 + buf[0], SILEAD_MAX_FINGERS); 262 + buf[0] = SILEAD_MAX_FINGERS; 262 263 } 263 264 264 265 if (silead_ts_handle_pen_data(data, buf)) ··· 314 315 315 316 static int silead_ts_init(struct i2c_client *client) 316 317 { 317 - struct silead_ts_data *data = i2c_get_clientdata(client); 318 318 int error; 319 319 320 320 error = i2c_smbus_write_byte_data(client, SILEAD_REG_RESET, ··· 323 325 usleep_range(SILEAD_CMD_SLEEP_MIN, SILEAD_CMD_SLEEP_MAX); 324 326 325 327 error = i2c_smbus_write_byte_data(client, SILEAD_REG_TOUCH_NR, 326 - data->max_fingers); 328 + SILEAD_MAX_FINGERS); 327 329 if (error) 328 330 goto i2c_write_err; 329 331 usleep_range(SILEAD_CMD_SLEEP_MIN, SILEAD_CMD_SLEEP_MAX); ··· 588 590 struct device *dev = &client->dev; 589 591 const char *str; 590 592 int error; 591 - 592 - error = device_property_read_u32(dev, "silead,max-fingers", 593 - &data->max_fingers); 594 - if (error) { 595 - dev_dbg(dev, "Max fingers read error %d\n", error); 596 - data->max_fingers = 5; /* Most devices handle up-to 5 fingers */ 597 - } 598 593 599 594 error = device_property_read_string(dev, "firmware-name", &str); 600 595 if (!error)
+2 -1
drivers/iommu/amd/amd_iommu.h
··· 129 129 static inline bool amd_iommu_gt_ppr_supported(void) 130 130 { 131 131 return (check_feature(FEATURE_GT) && 132 - check_feature(FEATURE_PPR)); 132 + check_feature(FEATURE_PPR) && 133 + check_feature(FEATURE_EPHSUP)); 133 134 } 134 135 135 136 static inline u64 iommu_virt_to_phys(void *vaddr)
+9
drivers/iommu/amd/init.c
··· 1626 1626 } 1627 1627 } 1628 1628 1629 + static void __init free_sysfs(struct amd_iommu *iommu) 1630 + { 1631 + if (iommu->iommu.dev) { 1632 + iommu_device_unregister(&iommu->iommu); 1633 + iommu_device_sysfs_remove(&iommu->iommu); 1634 + } 1635 + } 1636 + 1629 1637 static void __init free_iommu_one(struct amd_iommu *iommu) 1630 1638 { 1639 + free_sysfs(iommu); 1631 1640 free_cwwb_sem(iommu); 1632 1641 free_command_buffer(iommu); 1633 1642 free_event_buffer(iommu);
+24 -24
drivers/iommu/amd/iommu.c
··· 2032 2032 struct protection_domain *domain) 2033 2033 { 2034 2034 struct amd_iommu *iommu = get_amd_iommu_from_dev_data(dev_data); 2035 - struct pci_dev *pdev; 2036 2035 int ret = 0; 2037 2036 2038 2037 /* Update data structures */ ··· 2046 2047 domain->dev_iommu[iommu->index] += 1; 2047 2048 domain->dev_cnt += 1; 2048 2049 2049 - pdev = dev_is_pci(dev_data->dev) ? to_pci_dev(dev_data->dev) : NULL; 2050 + /* Setup GCR3 table */ 2050 2051 if (pdom_is_sva_capable(domain)) { 2051 2052 ret = init_gcr3_table(dev_data, domain); 2052 2053 if (ret) 2053 2054 return ret; 2054 - 2055 - if (pdev) { 2056 - pdev_enable_caps(pdev); 2057 - 2058 - /* 2059 - * Device can continue to function even if IOPF 2060 - * enablement failed. Hence in error path just 2061 - * disable device PRI support. 2062 - */ 2063 - if (amd_iommu_iopf_add_device(iommu, dev_data)) 2064 - pdev_disable_cap_pri(pdev); 2065 - } 2066 - } else if (pdev) { 2067 - pdev_enable_cap_ats(pdev); 2068 2055 } 2069 - 2070 - /* Update device table */ 2071 - amd_iommu_dev_update_dte(dev_data, true); 2072 2056 2073 2057 return ret; 2074 2058 } ··· 2145 2163 2146 2164 do_detach(dev_data); 2147 2165 2166 + out: 2167 + spin_unlock(&dev_data->lock); 2168 + 2169 + spin_unlock_irqrestore(&domain->lock, flags); 2170 + 2148 2171 /* Remove IOPF handler */ 2149 2172 if (ppr) 2150 2173 amd_iommu_iopf_remove_device(iommu, dev_data); ··· 2157 2170 if (dev_is_pci(dev)) 2158 2171 pdev_disable_caps(to_pci_dev(dev)); 2159 2172 2160 - out: 2161 - spin_unlock(&dev_data->lock); 2162 - 2163 - spin_unlock_irqrestore(&domain->lock, flags); 2164 2173 } 2165 2174 2166 2175 static struct iommu_device *amd_iommu_probe_device(struct device *dev) ··· 2468 2485 struct iommu_dev_data *dev_data = dev_iommu_priv_get(dev); 2469 2486 struct protection_domain *domain = to_pdomain(dom); 2470 2487 struct amd_iommu *iommu = get_amd_iommu_from_dev(dev); 2488 + struct pci_dev *pdev; 2471 2489 int ret; 2472 2490 2473 2491 /* ··· 2501 2517 } 2502 2518 #endif 2503 
2519 2504 - iommu_completion_wait(iommu); 2520 + pdev = dev_is_pci(dev_data->dev) ? to_pci_dev(dev_data->dev) : NULL; 2521 + if (pdev && pdom_is_sva_capable(domain)) { 2522 + pdev_enable_caps(pdev); 2523 + 2524 + /* 2525 + * Device can continue to function even if IOPF 2526 + * enablement failed. Hence in error path just 2527 + * disable device PRI support. 2528 + */ 2529 + if (amd_iommu_iopf_add_device(iommu, dev_data)) 2530 + pdev_disable_cap_pri(pdev); 2531 + } else if (pdev) { 2532 + pdev_enable_cap_ats(pdev); 2533 + } 2534 + 2535 + /* Update device table */ 2536 + amd_iommu_dev_update_dte(dev_data, true); 2505 2537 2506 2538 return ret; 2507 2539 }
+5 -20
drivers/iommu/amd/ppr.c
··· 222 222 if (iommu->iopf_queue) 223 223 return ret; 224 224 225 - snprintf(iommu->iopfq_name, sizeof(iommu->iopfq_name), 226 - "amdiommu-%#x-iopfq", 225 + snprintf(iommu->iopfq_name, sizeof(iommu->iopfq_name), "amdvi-%#x", 227 226 PCI_SEG_DEVID_TO_SBDF(iommu->pci_seg->id, iommu->devid)); 228 227 229 228 iommu->iopf_queue = iopf_queue_alloc(iommu->iopfq_name); ··· 248 249 int amd_iommu_iopf_add_device(struct amd_iommu *iommu, 249 250 struct iommu_dev_data *dev_data) 250 251 { 251 - unsigned long flags; 252 252 int ret = 0; 253 253 254 254 if (!dev_data->pri_enabled) 255 255 return ret; 256 256 257 - raw_spin_lock_irqsave(&iommu->lock, flags); 258 - 259 - if (!iommu->iopf_queue) { 260 - ret = -EINVAL; 261 - goto out_unlock; 262 - } 257 + if (!iommu->iopf_queue) 258 + return -EINVAL; 263 259 264 260 ret = iopf_queue_add_device(iommu->iopf_queue, dev_data->dev); 265 261 if (ret) 266 - goto out_unlock; 262 + return ret; 267 263 268 264 dev_data->ppr = true; 269 - 270 - out_unlock: 271 - raw_spin_unlock_irqrestore(&iommu->lock, flags); 272 - return ret; 265 + return 0; 273 266 } 274 267 275 268 /* Its assumed that caller has verified that device was added to iopf queue */ 276 269 void amd_iommu_iopf_remove_device(struct amd_iommu *iommu, 277 270 struct iommu_dev_data *dev_data) 278 271 { 279 - unsigned long flags; 280 - 281 - raw_spin_lock_irqsave(&iommu->lock, flags); 282 - 283 272 iopf_queue_remove_device(iommu->iopf_queue, dev_data->dev); 284 273 dev_data->ppr = false; 285 - 286 - raw_spin_unlock_irqrestore(&iommu->lock, flags); 287 274 }
+4 -4
drivers/iommu/dma-iommu.c
··· 686 686 687 687 /* Check the domain allows at least some access to the device... */ 688 688 if (map) { 689 - dma_addr_t base = dma_range_map_min(map); 690 - if (base > domain->geometry.aperture_end || 689 + if (dma_range_map_min(map) > domain->geometry.aperture_end || 691 690 dma_range_map_max(map) < domain->geometry.aperture_start) { 692 691 pr_warn("specified DMA range outside IOMMU capability\n"); 693 692 return -EFAULT; 694 693 } 695 - /* ...then finally give it a kicking to make sure it fits */ 696 - base_pfn = max(base, domain->geometry.aperture_start) >> order; 697 694 } 695 + /* ...then finally give it a kicking to make sure it fits */ 696 + base_pfn = max_t(unsigned long, base_pfn, 697 + domain->geometry.aperture_start >> order); 698 698 699 699 /* start_pfn is always nonzero for an already-initialised domain */ 700 700 mutex_lock(&cookie->mutex);
+12 -32
drivers/irqchip/irq-gic-v3-its.c
··· 1846 1846 { 1847 1847 struct its_device *its_dev = irq_data_get_irq_chip_data(d); 1848 1848 u32 event = its_get_event_id(d); 1849 - int ret = 0; 1850 1849 1851 1850 if (!info->map) 1852 1851 return -EINVAL; 1853 - 1854 - raw_spin_lock(&its_dev->event_map.vlpi_lock); 1855 1852 1856 1853 if (!its_dev->event_map.vm) { 1857 1854 struct its_vlpi_map *maps; 1858 1855 1859 1856 maps = kcalloc(its_dev->event_map.nr_lpis, sizeof(*maps), 1860 1857 GFP_ATOMIC); 1861 - if (!maps) { 1862 - ret = -ENOMEM; 1863 - goto out; 1864 - } 1858 + if (!maps) 1859 + return -ENOMEM; 1865 1860 1866 1861 its_dev->event_map.vm = info->map->vm; 1867 1862 its_dev->event_map.vlpi_maps = maps; 1868 1863 } else if (its_dev->event_map.vm != info->map->vm) { 1869 - ret = -EINVAL; 1870 - goto out; 1864 + return -EINVAL; 1871 1865 } 1872 1866 1873 1867 /* Get our private copy of the mapping information */ ··· 1893 1899 its_dev->event_map.nr_vlpis++; 1894 1900 } 1895 1901 1896 - out: 1897 - raw_spin_unlock(&its_dev->event_map.vlpi_lock); 1898 - return ret; 1902 + return 0; 1899 1903 } 1900 1904 1901 1905 static int its_vlpi_get(struct irq_data *d, struct its_cmd_info *info) 1902 1906 { 1903 1907 struct its_device *its_dev = irq_data_get_irq_chip_data(d); 1904 1908 struct its_vlpi_map *map; 1905 - int ret = 0; 1906 - 1907 - raw_spin_lock(&its_dev->event_map.vlpi_lock); 1908 1909 1909 1910 map = get_vlpi_map(d); 1910 1911 1911 - if (!its_dev->event_map.vm || !map) { 1912 - ret = -EINVAL; 1913 - goto out; 1914 - } 1912 + if (!its_dev->event_map.vm || !map) 1913 + return -EINVAL; 1915 1914 1916 1915 /* Copy our mapping information to the incoming request */ 1917 1916 *info->map = *map; 1918 1917 1919 - out: 1920 - raw_spin_unlock(&its_dev->event_map.vlpi_lock); 1921 - return ret; 1918 + return 0; 1922 1919 } 1923 1920 1924 1921 static int its_vlpi_unmap(struct irq_data *d) 1925 1922 { 1926 1923 struct its_device *its_dev = irq_data_get_irq_chip_data(d); 1927 1924 u32 event = its_get_event_id(d); 1928 - 
int ret = 0; 1929 1925 1930 - raw_spin_lock(&its_dev->event_map.vlpi_lock); 1931 - 1932 - if (!its_dev->event_map.vm || !irqd_is_forwarded_to_vcpu(d)) { 1933 - ret = -EINVAL; 1934 - goto out; 1935 - } 1926 + if (!its_dev->event_map.vm || !irqd_is_forwarded_to_vcpu(d)) 1927 + return -EINVAL; 1936 1928 1937 1929 /* Drop the virtual mapping */ 1938 1930 its_send_discard(its_dev, event); ··· 1942 1962 kfree(its_dev->event_map.vlpi_maps); 1943 1963 } 1944 1964 1945 - out: 1946 - raw_spin_unlock(&its_dev->event_map.vlpi_lock); 1947 - return ret; 1965 + return 0; 1948 1966 } 1949 1967 1950 1968 static int its_vlpi_prop_update(struct irq_data *d, struct its_cmd_info *info) ··· 1969 1991 /* Need a v4 ITS */ 1970 1992 if (!is_v4(its_dev->its)) 1971 1993 return -EINVAL; 1994 + 1995 + guard(raw_spinlock_irq)(&its_dev->event_map.vlpi_lock); 1972 1996 1973 1997 /* Unmap request? */ 1974 1998 if (!info)
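The irq-gic-v3-its refactor above removes the per-function `raw_spin_lock`/`raw_spin_unlock` pairs and their `out:` labels, hoisting the lock into the caller with `guard(raw_spinlock_irq)(...)`, which releases the lock automatically on every return path. The kernel's `guard()` is built on the compiler's `cleanup` attribute; a hedged userspace sketch of the same idea (illustrative names, a plain flag standing in for the raw spinlock):

```c
/* Sketch of scope-based locking like the kernel's guard(): the
 * GCC/Clang cleanup attribute releases the lock on every return path.
 * Names are illustrative; an int flag stands in for the spinlock. */
#include <assert.h>

static int map_lock; /* 0 = unlocked, 1 = held */
static int nr_vlpis;

static void unlock_cleanup(int **lock)
{
	**lock = 0;
}

#define scoped_guard(lock) \
	assert(*(lock) == 0); *(lock) = 1; \
	int *__guard __attribute__((cleanup(unlock_cleanup))) = (lock)

static int vlpi_map(int valid)
{
	scoped_guard(&map_lock); /* dropped automatically on any return */

	if (!valid)
		return -1; /* early return: cleanup still runs */

	nr_vlpis++;
	return 0;
}
```

This is why the patch can turn each `ret = -EINVAL; goto out;` into a bare `return -EINVAL;` without leaking the lock.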
+7 -2
drivers/irqchip/irq-riscv-intc.c
··· 253 253 static int __init riscv_intc_acpi_init(union acpi_subtable_headers *header, 254 254 const unsigned long end) 255 255 { 256 - struct fwnode_handle *fn; 257 256 struct acpi_madt_rintc *rintc; 257 + struct fwnode_handle *fn; 258 + int rc; 258 259 259 260 rintc = (struct acpi_madt_rintc *)header; 260 261 ··· 274 273 return -ENOMEM; 275 274 } 276 275 277 - return riscv_intc_init_common(fn, &riscv_intc_chip); 276 + rc = riscv_intc_init_common(fn, &riscv_intc_chip); 277 + if (rc) 278 + irq_domain_free_fwnode(fn); 279 + 280 + return rc; 278 281 } 279 282 280 283 IRQCHIP_ACPI_DECLARE(riscv_intc, ACPI_MADT_TYPE_RINTC, NULL,
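The irq-riscv-intc fix above frees the fwnode when `riscv_intc_init_common()` fails, because on that error path nothing else has taken ownership of it yet. The shape of the fix can be sketched in plain C (hypothetical names, `malloc` standing in for the fwnode allocation):

```c
/* Sketch of the "free on failure, transfer ownership on success"
 * pattern applied in riscv_intc_acpi_init(); names are stand-ins. */
#include <assert.h>
#include <stdlib.h>

static int init_common(void *handle, int fail)
{
	(void)handle;
	return fail ? -1 : 0; /* stand-in for riscv_intc_init_common() */
}

static int acpi_init(int fail, void **registered)
{
	void *fn = malloc(16); /* stand-in for the fwnode allocation */
	int rc;

	if (!fn)
		return -1;

	rc = init_common(fn, fail);
	if (rc) {
		free(fn); /* failure: nothing owns fn yet, release it here */
		return rc;
	}

	*registered = fn; /* success: ownership passes to the irq domain */
	return 0;
}
```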
+17 -17
drivers/irqchip/irq-sifive-plic.c
··· 85 85 struct plic_priv *priv; 86 86 }; 87 87 static int plic_parent_irq __ro_after_init; 88 - static bool plic_cpuhp_setup_done __ro_after_init; 88 + static bool plic_global_setup_done __ro_after_init; 89 89 static DEFINE_PER_CPU(struct plic_handler, plic_handlers); 90 90 91 91 static int plic_irq_set_type(struct irq_data *d, unsigned int type); ··· 487 487 unsigned long plic_quirks = 0; 488 488 struct plic_handler *handler; 489 489 u32 nr_irqs, parent_hwirq; 490 - struct irq_domain *domain; 491 490 struct plic_priv *priv; 492 491 irq_hw_number_t hwirq; 493 - bool cpuhp_setup; 494 492 495 493 if (is_of_node(dev->fwnode)) { 496 494 const struct of_device_id *id; ··· 547 549 continue; 548 550 } 549 551 550 - /* Find parent domain and register chained handler */ 551 - domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(), DOMAIN_BUS_ANY); 552 - if (!plic_parent_irq && domain) { 553 - plic_parent_irq = irq_create_mapping(domain, RV_IRQ_EXT); 554 - if (plic_parent_irq) 555 - irq_set_chained_handler(plic_parent_irq, plic_handle_irq); 556 - } 557 - 558 552 /* 559 553 * When running in M-mode we need to ignore the S-mode handler. 560 554 * Here we assume it always comes later, but that might be a ··· 587 597 goto fail_cleanup_contexts; 588 598 589 599 /* 590 - * We can have multiple PLIC instances so setup cpuhp state 600 + * We can have multiple PLIC instances so setup global state 591 601 * and register syscore operations only once after context 592 602 * handlers of all online CPUs are initialized. 
593 603 */ 594 - if (!plic_cpuhp_setup_done) { 595 - cpuhp_setup = true; 604 + if (!plic_global_setup_done) { 605 + struct irq_domain *domain; 606 + bool global_setup = true; 607 + 596 608 for_each_online_cpu(cpu) { 597 609 handler = per_cpu_ptr(&plic_handlers, cpu); 598 610 if (!handler->present) { 599 - cpuhp_setup = false; 611 + global_setup = false; 600 612 break; 601 613 } 602 614 } 603 - if (cpuhp_setup) { 615 + 616 + if (global_setup) { 617 + /* Find parent domain and register chained handler */ 618 + domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(), DOMAIN_BUS_ANY); 619 + if (domain) 620 + plic_parent_irq = irq_create_mapping(domain, RV_IRQ_EXT); 621 + if (plic_parent_irq) 622 + irq_set_chained_handler(plic_parent_irq, plic_handle_irq); 623 + 604 624 cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING, 605 625 "irqchip/sifive/plic:starting", 606 626 plic_starting_cpu, plic_dying_cpu); 607 627 register_syscore_ops(&plic_irq_syscore_ops); 608 - plic_cpuhp_setup_done = true; 628 + plic_global_setup_done = true; 609 629 } 610 630 } 611 631
+3 -3
drivers/media/pci/intel/ipu6/ipu6-isys-queue.c
··· 301 301 out_requeue: 302 302 if (bl && bl->nbufs) 303 303 ipu6_isys_buffer_list_queue(bl, 304 - (IPU6_ISYS_BUFFER_LIST_FL_INCOMING | 305 - error) ? 304 + IPU6_ISYS_BUFFER_LIST_FL_INCOMING | 305 + (error ? 306 306 IPU6_ISYS_BUFFER_LIST_FL_SET_STATE : 307 - 0, error ? VB2_BUF_STATE_ERROR : 307 + 0), error ? VB2_BUF_STATE_ERROR : 308 308 VB2_BUF_STATE_QUEUED); 309 309 flush_firmware_streamon_fail(stream); 310 310
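The one-line ipu6-isys-queue fix above is an operator-precedence correction: the conditional operator binds more loosely than `|`, so `FL_INCOMING | error ? FL_SET_STATE : 0` parses as `(FL_INCOMING | error) ? FL_SET_STATE : 0` and silently drops the `INCOMING` flag. A standalone sketch of the two parses (flag values here are illustrative, not the real IPU6 constants):

```c
/* Demonstrates the precedence bug fixed in ipu6-isys-queue.c:
 * '?:' binds more loosely than '|', so the unparenthesized form
 * uses (INCOMING | error) as the condition instead of as flags. */
#include <assert.h>

#define FL_INCOMING  0x1 /* illustrative values, not the IPU6 flags */
#define FL_SET_STATE 0x2

static unsigned int buggy_flags(int error)
{
	return FL_INCOMING | error ? FL_SET_STATE : 0;
}

static unsigned int fixed_flags(int error)
{
	return FL_INCOMING | (error ? FL_SET_STATE : 0);
}
```

With `error == 0`, the buggy form yields `FL_SET_STATE` alone (the nonzero `FL_INCOMING` makes the condition true), while the fixed form yields `FL_INCOMING` as intended.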
+43 -28
drivers/media/pci/intel/ipu6/ipu6-isys.c
··· 678 678 container_of(asc, struct sensor_async_sd, asc); 679 679 int ret; 680 680 681 + if (s_asd->csi2.port >= isys->pdata->ipdata->csi2.nports) { 682 + dev_err(&isys->adev->auxdev.dev, "invalid csi2 port %u\n", 683 + s_asd->csi2.port); 684 + return -EINVAL; 685 + } 686 + 681 687 ret = ipu_bridge_instantiate_vcm(sd->dev); 682 688 if (ret) { 683 689 dev_err(&isys->adev->auxdev.dev, "instantiate vcm failed\n"); ··· 931 925 .resume = isys_resume, 932 926 }; 933 927 934 - static void isys_remove(struct auxiliary_device *auxdev) 928 + static void free_fw_msg_bufs(struct ipu6_isys *isys) 935 929 { 936 - struct ipu6_bus_device *adev = auxdev_to_adev(auxdev); 937 - struct ipu6_isys *isys = dev_get_drvdata(&auxdev->dev); 938 - struct ipu6_device *isp = adev->isp; 930 + struct device *dev = &isys->adev->auxdev.dev; 939 931 struct isys_fw_msgs *fwmsg, *safe; 940 - unsigned int i; 941 932 942 933 list_for_each_entry_safe(fwmsg, safe, &isys->framebuflist, head) 943 - dma_free_attrs(&auxdev->dev, sizeof(struct isys_fw_msgs), 944 - fwmsg, fwmsg->dma_addr, 0); 934 + dma_free_attrs(dev, sizeof(struct isys_fw_msgs), fwmsg, 935 + fwmsg->dma_addr, 0); 945 936 946 937 list_for_each_entry_safe(fwmsg, safe, &isys->framebuflist_fw, head) 947 - dma_free_attrs(&auxdev->dev, sizeof(struct isys_fw_msgs), 948 - fwmsg, fwmsg->dma_addr, 0); 949 - 950 - isys_unregister_devices(isys); 951 - isys_notifier_cleanup(isys); 952 - 953 - cpu_latency_qos_remove_request(&isys->pm_qos); 954 - 955 - if (!isp->secure_mode) { 956 - ipu6_cpd_free_pkg_dir(adev); 957 - ipu6_buttress_unmap_fw_image(adev, &adev->fw_sgt); 958 - release_firmware(adev->fw); 959 - } 960 - 961 - for (i = 0; i < IPU6_ISYS_MAX_STREAMS; i++) 962 - mutex_destroy(&isys->streams[i].mutex); 963 - 964 - isys_iwake_watermark_cleanup(isys); 965 - mutex_destroy(&isys->stream_mutex); 966 - mutex_destroy(&isys->mutex); 938 + dma_free_attrs(dev, sizeof(struct isys_fw_msgs), fwmsg, 939 + fwmsg->dma_addr, 0); 967 940 } 968 941 969 942 static int 
alloc_fw_msg_bufs(struct ipu6_isys *isys, int amount) ··· 1125 1140 1126 1141 ret = isys_register_devices(isys); 1127 1142 if (ret) 1128 - goto out_remove_pkg_dir_shared_buffer; 1143 + goto free_fw_msg_bufs; 1129 1144 1130 1145 ipu6_mmu_hw_cleanup(adev->mmu); 1131 1146 1132 1147 return 0; 1133 1148 1149 + free_fw_msg_bufs: 1150 + free_fw_msg_bufs(isys); 1134 1151 out_remove_pkg_dir_shared_buffer: 1135 1152 if (!isp->secure_mode) 1136 1153 ipu6_cpd_free_pkg_dir(adev); ··· 1152 1165 ipu6_mmu_hw_cleanup(adev->mmu); 1153 1166 1154 1167 return ret; 1168 + } 1169 + 1170 + static void isys_remove(struct auxiliary_device *auxdev) 1171 + { 1172 + struct ipu6_bus_device *adev = auxdev_to_adev(auxdev); 1173 + struct ipu6_isys *isys = dev_get_drvdata(&auxdev->dev); 1174 + struct ipu6_device *isp = adev->isp; 1175 + unsigned int i; 1176 + 1177 + free_fw_msg_bufs(isys); 1178 + 1179 + isys_unregister_devices(isys); 1180 + isys_notifier_cleanup(isys); 1181 + 1182 + cpu_latency_qos_remove_request(&isys->pm_qos); 1183 + 1184 + if (!isp->secure_mode) { 1185 + ipu6_cpd_free_pkg_dir(adev); 1186 + ipu6_buttress_unmap_fw_image(adev, &adev->fw_sgt); 1187 + release_firmware(adev->fw); 1188 + } 1189 + 1190 + for (i = 0; i < IPU6_ISYS_MAX_STREAMS; i++) 1191 + mutex_destroy(&isys->streams[i].mutex); 1192 + 1193 + isys_iwake_watermark_cleanup(isys); 1194 + mutex_destroy(&isys->stream_mutex); 1195 + mutex_destroy(&isys->mutex); 1155 1196 } 1156 1197 1157 1198 struct fwmsg {
+1 -4
drivers/media/pci/intel/ipu6/ipu6.c
··· 285 285 #define IPU6_ISYS_CSI2_NPORTS 4 286 286 #define IPU6SE_ISYS_CSI2_NPORTS 4 287 287 #define IPU6_TGL_ISYS_CSI2_NPORTS 8 288 - #define IPU6EP_MTL_ISYS_CSI2_NPORTS 4 288 + #define IPU6EP_MTL_ISYS_CSI2_NPORTS 6 289 289 290 290 static void ipu6_internal_pdata_init(struct ipu6_device *isp) 291 291 { ··· 726 726 727 727 pm_runtime_forbid(&pdev->dev); 728 728 pm_runtime_get_noresume(&pdev->dev); 729 - 730 - pci_release_regions(pdev); 731 - pci_disable_device(pdev); 732 729 733 730 release_firmware(isp->cpd_fw); 734 731
+4 -1
drivers/media/pci/intel/ivsc/mei_csi.c
··· 677 677 return -ENODEV; 678 678 679 679 ret = ipu_bridge_init(&ipu->dev, ipu_bridge_parse_ssdb); 680 + put_device(&ipu->dev); 680 681 if (ret < 0) 681 682 return ret; 682 - if (WARN_ON(!dev_fwnode(dev))) 683 + if (!dev_fwnode(dev)) { 684 + dev_err(dev, "mei-csi probed without device fwnode!\n"); 683 685 return -ENXIO; 686 + } 684 687 685 688 csi = devm_kzalloc(dev, sizeof(struct mei_csi), GFP_KERNEL); 686 689 if (!csi)
+4 -3
drivers/media/pci/mgb4/mgb4_core.c
··· 642 642 struct mgb4_dev *mgbdev = pci_get_drvdata(pdev); 643 643 int i; 644 644 645 - #ifdef CONFIG_DEBUG_FS 646 - debugfs_remove_recursive(mgbdev->debugfs); 647 - #endif 648 645 #if IS_REACHABLE(CONFIG_HWMON) 649 646 hwmon_device_unregister(mgbdev->hwmon_dev); 650 647 #endif ··· 655 658 for (i = 0; i < MGB4_VIN_DEVICES; i++) 656 659 if (mgbdev->vin[i]) 657 660 mgb4_vin_free(mgbdev->vin[i]); 661 + 662 + #ifdef CONFIG_DEBUG_FS 663 + debugfs_remove_recursive(mgbdev->debugfs); 664 + #endif 658 665 659 666 device_remove_groups(&mgbdev->pdev->dev, mgb4_pci_groups); 660 667 free_spi(mgbdev);
+31 -13
drivers/net/ethernet/intel/ice/ice.h
··· 409 409 struct ice_tc_cfg tc_cfg; 410 410 struct bpf_prog *xdp_prog; 411 411 struct ice_tx_ring **xdp_rings; /* XDP ring array */ 412 - unsigned long *af_xdp_zc_qps; /* tracks AF_XDP ZC enabled qps */ 413 412 u16 num_xdp_txq; /* Used XDP queues */ 414 413 u8 xdp_mapping_mode; /* ICE_MAP_MODE_[CONTIG|SCATTER] */ 415 414 ··· 746 747 } 747 748 748 749 /** 750 + * ice_get_xp_from_qid - get ZC XSK buffer pool bound to a queue ID 751 + * @vsi: pointer to VSI 752 + * @qid: index of a queue to look at XSK buff pool presence 753 + * 754 + * Return: A pointer to xsk_buff_pool structure if there is a buffer pool 755 + * attached and configured as zero-copy, NULL otherwise. 756 + */ 757 + static inline struct xsk_buff_pool *ice_get_xp_from_qid(struct ice_vsi *vsi, 758 + u16 qid) 759 + { 760 + struct xsk_buff_pool *pool = xsk_get_pool_from_qid(vsi->netdev, qid); 761 + 762 + if (!ice_is_xdp_ena_vsi(vsi)) 763 + return NULL; 764 + 765 + return (pool && pool->dev) ? pool : NULL; 766 + } 767 + 768 + /** 749 769 * ice_xsk_pool - get XSK buffer pool bound to a ring 750 770 * @ring: Rx ring to use 751 771 * ··· 776 758 struct ice_vsi *vsi = ring->vsi; 777 759 u16 qid = ring->q_index; 778 760 779 - if (!ice_is_xdp_ena_vsi(vsi) || !test_bit(qid, vsi->af_xdp_zc_qps)) 780 - return NULL; 781 - 782 - return xsk_get_pool_from_qid(vsi->netdev, qid); 761 + return ice_get_xp_from_qid(vsi, qid); 783 762 } 784 763 785 764 /** ··· 801 786 if (!ring) 802 787 return; 803 788 804 - if (!ice_is_xdp_ena_vsi(vsi) || !test_bit(qid, vsi->af_xdp_zc_qps)) { 805 - ring->xsk_pool = NULL; 806 - return; 807 - } 808 - 809 - ring->xsk_pool = xsk_get_pool_from_qid(vsi->netdev, qid); 789 + ring->xsk_pool = ice_get_xp_from_qid(vsi, qid); 810 790 } 811 791 812 792 /** ··· 930 920 int ice_down_up(struct ice_vsi *vsi); 931 921 int ice_vsi_cfg_lan(struct ice_vsi *vsi); 932 922 struct ice_vsi *ice_lb_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi); 923 + 924 + enum ice_xdp_cfg { 925 + ICE_XDP_CFG_FULL, /* Fully 
apply new config in .ndo_bpf() */ 926 + ICE_XDP_CFG_PART, /* Save/use part of config in VSI rebuild */ 927 + }; 928 + 933 929 int ice_vsi_determine_xdp_res(struct ice_vsi *vsi); 934 - int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog); 935 - int ice_destroy_xdp_rings(struct ice_vsi *vsi); 930 + int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog, 931 + enum ice_xdp_cfg cfg_type); 932 + int ice_destroy_xdp_rings(struct ice_vsi *vsi, enum ice_xdp_cfg cfg_type); 933 + void ice_map_xdp_rings(struct ice_vsi *vsi); 936 934 int 937 935 ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, 938 936 u32 flags);
+3
drivers/net/ethernet/intel/ice/ice_base.c
··· 842 842 } 843 843 rx_rings_rem -= rx_rings_per_v; 844 844 } 845 + 846 + if (ice_is_xdp_ena_vsi(vsi)) 847 + ice_map_xdp_rings(vsi); 845 848 } 846 849 847 850 /**
+11 -18
drivers/net/ethernet/intel/ice/ice_lib.c
··· 114 114 if (!vsi->q_vectors) 115 115 goto err_vectors; 116 116 117 - vsi->af_xdp_zc_qps = bitmap_zalloc(max_t(int, vsi->alloc_txq, vsi->alloc_rxq), GFP_KERNEL); 118 - if (!vsi->af_xdp_zc_qps) 119 - goto err_zc_qps; 120 - 121 117 return 0; 122 118 123 - err_zc_qps: 124 - devm_kfree(dev, vsi->q_vectors); 125 119 err_vectors: 126 120 devm_kfree(dev, vsi->rxq_map); 127 121 err_rxq_map: ··· 303 309 304 310 dev = ice_pf_to_dev(pf); 305 311 306 - bitmap_free(vsi->af_xdp_zc_qps); 307 - vsi->af_xdp_zc_qps = NULL; 308 312 /* free the ring and vector containers */ 309 313 devm_kfree(dev, vsi->q_vectors); 310 314 vsi->q_vectors = NULL; ··· 2274 2282 if (ret) 2275 2283 goto unroll_vector_base; 2276 2284 2285 + if (ice_is_xdp_ena_vsi(vsi)) { 2286 + ret = ice_vsi_determine_xdp_res(vsi); 2287 + if (ret) 2288 + goto unroll_vector_base; 2289 + ret = ice_prepare_xdp_rings(vsi, vsi->xdp_prog, 2290 + ICE_XDP_CFG_PART); 2291 + if (ret) 2292 + goto unroll_vector_base; 2293 + } 2294 + 2277 2295 ice_vsi_map_rings_to_vectors(vsi); 2278 2296 2279 2297 /* Associate q_vector rings to napi */ 2280 2298 ice_vsi_set_napi_queues(vsi); 2281 2299 2282 2300 vsi->stat_offsets_loaded = false; 2283 - 2284 - if (ice_is_xdp_ena_vsi(vsi)) { 2285 - ret = ice_vsi_determine_xdp_res(vsi); 2286 - if (ret) 2287 - goto unroll_vector_base; 2288 - ret = ice_prepare_xdp_rings(vsi, vsi->xdp_prog); 2289 - if (ret) 2290 - goto unroll_vector_base; 2291 - } 2292 2301 2293 2302 /* ICE_VSI_CTRL does not need RSS so skip RSS processing */ 2294 2303 if (vsi->type != ICE_VSI_CTRL) ··· 2430 2437 /* return value check can be skipped here, it always returns 2431 2438 * 0 if reset is in progress 2432 2439 */ 2433 - ice_destroy_xdp_rings(vsi); 2440 + ice_destroy_xdp_rings(vsi, ICE_XDP_CFG_PART); 2434 2441 2435 2442 ice_vsi_clear_rings(vsi); 2436 2443 ice_vsi_free_q_vectors(vsi);
+82 -62
drivers/net/ethernet/intel/ice/ice_main.c
··· 2707 2707 bpf_prog_put(old_prog); 2708 2708 } 2709 2709 2710 - /** 2711 - * ice_prepare_xdp_rings - Allocate, configure and setup Tx rings for XDP 2712 - * @vsi: VSI to bring up Tx rings used by XDP 2713 - * @prog: bpf program that will be assigned to VSI 2714 - * 2715 - * Return 0 on success and negative value on error 2716 - */ 2717 - int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog) 2710 + static struct ice_tx_ring *ice_xdp_ring_from_qid(struct ice_vsi *vsi, int qid) 2718 2711 { 2719 - u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 }; 2720 - int xdp_rings_rem = vsi->num_xdp_txq; 2721 - struct ice_pf *pf = vsi->back; 2722 - struct ice_qs_cfg xdp_qs_cfg = { 2723 - .qs_mutex = &pf->avail_q_mutex, 2724 - .pf_map = pf->avail_txqs, 2725 - .pf_map_size = pf->max_pf_txqs, 2726 - .q_count = vsi->num_xdp_txq, 2727 - .scatter_count = ICE_MAX_SCATTER_TXQS, 2728 - .vsi_map = vsi->txq_map, 2729 - .vsi_map_offset = vsi->alloc_txq, 2730 - .mapping_mode = ICE_VSI_MAP_CONTIG 2731 - }; 2732 - struct device *dev; 2733 - int i, v_idx; 2734 - int status; 2735 - 2736 - dev = ice_pf_to_dev(pf); 2737 - vsi->xdp_rings = devm_kcalloc(dev, vsi->num_xdp_txq, 2738 - sizeof(*vsi->xdp_rings), GFP_KERNEL); 2739 - if (!vsi->xdp_rings) 2740 - return -ENOMEM; 2741 - 2742 - vsi->xdp_mapping_mode = xdp_qs_cfg.mapping_mode; 2743 - if (__ice_vsi_get_qs(&xdp_qs_cfg)) 2744 - goto err_map_xdp; 2712 + struct ice_q_vector *q_vector; 2713 + struct ice_tx_ring *ring; 2745 2714 2746 2715 if (static_key_enabled(&ice_xdp_locking_key)) 2747 - netdev_warn(vsi->netdev, 2748 - "Could not allocate one XDP Tx ring per CPU, XDP_TX/XDP_REDIRECT actions will be slower\n"); 2716 + return vsi->xdp_rings[qid % vsi->num_xdp_txq]; 2749 2717 2750 - if (ice_xdp_alloc_setup_rings(vsi)) 2751 - goto clear_xdp_rings; 2718 + q_vector = vsi->rx_rings[qid]->q_vector; 2719 + ice_for_each_tx_ring(ring, q_vector->tx) 2720 + if (ice_ring_is_xdp(ring)) 2721 + return ring; 2722 + 2723 + return NULL; 2724 + } 2725 + 2726 
+ /** 2727 + * ice_map_xdp_rings - Map XDP rings to interrupt vectors 2728 + * @vsi: the VSI with XDP rings being configured 2729 + * 2730 + * Map XDP rings to interrupt vectors and perform the configuration steps 2731 + * dependent on the mapping. 2732 + */ 2733 + void ice_map_xdp_rings(struct ice_vsi *vsi) 2734 + { 2735 + int xdp_rings_rem = vsi->num_xdp_txq; 2736 + int v_idx, q_idx; 2752 2737 2753 2738 /* follow the logic from ice_vsi_map_rings_to_vectors */ 2754 2739 ice_for_each_q_vector(vsi, v_idx) { ··· 2754 2769 xdp_rings_rem -= xdp_rings_per_v; 2755 2770 } 2756 2771 2757 - ice_for_each_rxq(vsi, i) { 2758 - if (static_key_enabled(&ice_xdp_locking_key)) { 2759 - vsi->rx_rings[i]->xdp_ring = vsi->xdp_rings[i % vsi->num_xdp_txq]; 2760 - } else { 2761 - struct ice_q_vector *q_vector = vsi->rx_rings[i]->q_vector; 2762 - struct ice_tx_ring *ring; 2763 - 2764 - ice_for_each_tx_ring(ring, q_vector->tx) { 2765 - if (ice_ring_is_xdp(ring)) { 2766 - vsi->rx_rings[i]->xdp_ring = ring; 2767 - break; 2768 - } 2769 - } 2770 - } 2771 - ice_tx_xsk_pool(vsi, i); 2772 + ice_for_each_rxq(vsi, q_idx) { 2773 + vsi->rx_rings[q_idx]->xdp_ring = ice_xdp_ring_from_qid(vsi, 2774 + q_idx); 2775 + ice_tx_xsk_pool(vsi, q_idx); 2772 2776 } 2777 + } 2778 + 2779 + /** 2780 + * ice_prepare_xdp_rings - Allocate, configure and setup Tx rings for XDP 2781 + * @vsi: VSI to bring up Tx rings used by XDP 2782 + * @prog: bpf program that will be assigned to VSI 2783 + * @cfg_type: create from scratch or restore the existing configuration 2784 + * 2785 + * Return 0 on success and negative value on error 2786 + */ 2787 + int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog, 2788 + enum ice_xdp_cfg cfg_type) 2789 + { 2790 + u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 }; 2791 + struct ice_pf *pf = vsi->back; 2792 + struct ice_qs_cfg xdp_qs_cfg = { 2793 + .qs_mutex = &pf->avail_q_mutex, 2794 + .pf_map = pf->avail_txqs, 2795 + .pf_map_size = pf->max_pf_txqs, 2796 + .q_count = 
vsi->num_xdp_txq, 2797 + .scatter_count = ICE_MAX_SCATTER_TXQS, 2798 + .vsi_map = vsi->txq_map, 2799 + .vsi_map_offset = vsi->alloc_txq, 2800 + .mapping_mode = ICE_VSI_MAP_CONTIG 2801 + }; 2802 + struct device *dev; 2803 + int status, i; 2804 + 2805 + dev = ice_pf_to_dev(pf); 2806 + vsi->xdp_rings = devm_kcalloc(dev, vsi->num_xdp_txq, 2807 + sizeof(*vsi->xdp_rings), GFP_KERNEL); 2808 + if (!vsi->xdp_rings) 2809 + return -ENOMEM; 2810 + 2811 + vsi->xdp_mapping_mode = xdp_qs_cfg.mapping_mode; 2812 + if (__ice_vsi_get_qs(&xdp_qs_cfg)) 2813 + goto err_map_xdp; 2814 + 2815 + if (static_key_enabled(&ice_xdp_locking_key)) 2816 + netdev_warn(vsi->netdev, 2817 + "Could not allocate one XDP Tx ring per CPU, XDP_TX/XDP_REDIRECT actions will be slower\n"); 2818 + 2819 + if (ice_xdp_alloc_setup_rings(vsi)) 2820 + goto clear_xdp_rings; 2773 2821 2774 2822 /* omit the scheduler update if in reset path; XDP queues will be 2775 2823 * taken into account at the end of ice_vsi_rebuild, where 2776 2824 * ice_cfg_vsi_lan is being called 2777 2825 */ 2778 - if (ice_is_reset_in_progress(pf->state)) 2826 + if (cfg_type == ICE_XDP_CFG_PART) 2779 2827 return 0; 2828 + 2829 + ice_map_xdp_rings(vsi); 2780 2830 2781 2831 /* tell the Tx scheduler that right now we have 2782 2832 * additional queues ··· 2862 2842 /** 2863 2843 * ice_destroy_xdp_rings - undo the configuration made by ice_prepare_xdp_rings 2864 2844 * @vsi: VSI to remove XDP rings 2845 + * @cfg_type: disable XDP permanently or allow it to be restored later 2865 2846 * 2866 2847 * Detach XDP rings from irq vectors, clean up the PF bitmap and free 2867 2848 * resources 2868 2849 */ 2869 - int ice_destroy_xdp_rings(struct ice_vsi *vsi) 2850 + int ice_destroy_xdp_rings(struct ice_vsi *vsi, enum ice_xdp_cfg cfg_type) 2870 2851 { 2871 2852 u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 }; 2872 2853 struct ice_pf *pf = vsi->back; 2873 2854 int i, v_idx; 2874 2855 2875 2856 /* q_vectors are freed in reset path so there's no point in detaching 
2876 - * rings; in case of rebuild being triggered not from reset bits 2877 - * in pf->state won't be set, so additionally check first q_vector 2878 - * against NULL 2857 + * rings 2879 2858 */ 2880 - if (ice_is_reset_in_progress(pf->state) || !vsi->q_vectors[0]) 2859 + if (cfg_type == ICE_XDP_CFG_PART) 2881 2860 goto free_qmap; 2882 2861 2883 2862 ice_for_each_q_vector(vsi, v_idx) { ··· 2917 2898 if (static_key_enabled(&ice_xdp_locking_key)) 2918 2899 static_branch_dec(&ice_xdp_locking_key); 2919 2900 2920 - if (ice_is_reset_in_progress(pf->state) || !vsi->q_vectors[0]) 2901 + if (cfg_type == ICE_XDP_CFG_PART) 2921 2902 return 0; 2922 2903 2923 2904 ice_vsi_assign_bpf_prog(vsi, NULL); ··· 3028 3009 if (xdp_ring_err) { 3029 3010 NL_SET_ERR_MSG_MOD(extack, "Not enough Tx resources for XDP"); 3030 3011 } else { 3031 - xdp_ring_err = ice_prepare_xdp_rings(vsi, prog); 3012 + xdp_ring_err = ice_prepare_xdp_rings(vsi, prog, 3013 + ICE_XDP_CFG_FULL); 3032 3014 if (xdp_ring_err) 3033 3015 NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Tx resources failed"); 3034 3016 } ··· 3040 3020 NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Rx resources failed"); 3041 3021 } else if (ice_is_xdp_ena_vsi(vsi) && !prog) { 3042 3022 xdp_features_clear_redirect_target(vsi->netdev); 3043 - xdp_ring_err = ice_destroy_xdp_rings(vsi); 3023 + xdp_ring_err = ice_destroy_xdp_rings(vsi, ICE_XDP_CFG_FULL); 3044 3024 if (xdp_ring_err) 3045 3025 NL_SET_ERR_MSG_MOD(extack, "Freeing XDP Tx resources failed"); 3046 3026 /* reallocate Rx queues that were used for zero-copy */
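The ice_main.c hunks above swap the implicit `ice_is_reset_in_progress()` / `q_vectors[0]` checks for an explicit `enum ice_xdp_cfg` argument passed by the caller. A minimal userspace sketch of that control-flow idea (the enum and helper names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>

/* Hedged sketch (illustrative names, not the driver's): the series
 * replaces "infer a partial XDP teardown from reset state" with an
 * explicit configuration-type argument chosen by the caller, so the
 * teardown path no longer guesses.
 */
enum xdp_cfg { XDP_CFG_FULL, XDP_CFG_PART };

/* In a partial (reset-path) config the irq-vector detach is skipped,
 * because the q_vectors are torn down elsewhere during the rebuild.
 */
static bool teardown_detaches_vectors(enum xdp_cfg cfg_type)
{
    return cfg_type != XDP_CFG_PART;
}
```

Making the mode explicit at the call sites (`ICE_XDP_CFG_FULL` vs `ICE_XDP_CFG_PART` in the patch) removes the fragile "am I in reset?" inference.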
+108 -8
drivers/net/ethernet/intel/ice/ice_nvm.c
··· 374 374 * 375 375 * Read the specified word from the copy of the Shadow RAM found in the 376 376 * specified NVM module. 377 + * 378 + * Note that the Shadow RAM copy is always located after the CSS header, and 379 + * is aligned to 64-byte (32-word) offsets. 377 380 */ 378 381 static int 379 382 ice_read_nvm_sr_copy(struct ice_hw *hw, enum ice_bank_select bank, u32 offset, u16 *data) 380 383 { 381 - return ice_read_nvm_module(hw, bank, ICE_NVM_SR_COPY_WORD_OFFSET + offset, data); 384 + u32 sr_copy; 385 + 386 + switch (bank) { 387 + case ICE_ACTIVE_FLASH_BANK: 388 + sr_copy = roundup(hw->flash.banks.active_css_hdr_len, 32); 389 + break; 390 + case ICE_INACTIVE_FLASH_BANK: 391 + sr_copy = roundup(hw->flash.banks.inactive_css_hdr_len, 32); 392 + break; 393 + } 394 + 395 + return ice_read_nvm_module(hw, bank, sr_copy + offset, data); 382 396 } 383 397 384 398 /** ··· 454 440 ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len, 455 441 u16 module_type) 456 442 { 457 - u16 pfa_len, pfa_ptr; 458 - u16 next_tlv; 443 + u16 pfa_len, pfa_ptr, next_tlv, max_tlv; 459 444 int status; 460 445 461 446 status = ice_read_sr_word(hw, ICE_SR_PFA_PTR, &pfa_ptr); ··· 467 454 ice_debug(hw, ICE_DBG_INIT, "Failed to read PFA length.\n"); 468 455 return status; 469 456 } 457 + 458 + /* The Preserved Fields Area contains a sequence of Type-Length-Value 459 + * structures which define its contents. The PFA length includes all 460 + * of the TLVs, plus the initial length word itself, *and* one final 461 + * word at the end after all of the TLVs. 462 + */ 463 + if (check_add_overflow(pfa_ptr, pfa_len - 1, &max_tlv)) { 464 + dev_warn(ice_hw_to_dev(hw), "PFA starts at offset %u. PFA length of %u caused 16-bit arithmetic overflow.\n", 465 + pfa_ptr, pfa_len); 466 + return -EINVAL; 467 + } 468 + 470 469 /* Starting with first TLV after PFA length, iterate through the list 471 470 * of TLVs to find the requested one. 
472 471 */ 473 472 next_tlv = pfa_ptr + 1; 474 - while (next_tlv < pfa_ptr + pfa_len) { 473 + while (next_tlv < max_tlv) { 475 474 u16 tlv_sub_module_type; 476 475 u16 tlv_len; 477 476 ··· 507 482 } 508 483 return -EINVAL; 509 484 } 510 - /* Check next TLV, i.e. current TLV pointer + length + 2 words 511 - * (for current TLV's type and length) 512 - */ 513 - next_tlv = next_tlv + tlv_len + 2; 485 + 486 + if (check_add_overflow(next_tlv, 2, &next_tlv) || 487 + check_add_overflow(next_tlv, tlv_len, &next_tlv)) { 488 + dev_warn(ice_hw_to_dev(hw), "TLV of type %u and length 0x%04x caused 16-bit arithmetic overflow. The PFA starts at 0x%04x and has length of 0x%04x\n", 489 + tlv_sub_module_type, tlv_len, pfa_ptr, pfa_len); 490 + return -EINVAL; 491 + } 514 492 } 515 493 /* Module does not exist */ 516 494 return -ENOENT; ··· 1038 1010 } 1039 1011 1040 1012 /** 1013 + * ice_get_nvm_css_hdr_len - Read the CSS header length from the NVM CSS header 1014 + * @hw: pointer to the HW struct 1015 + * @bank: whether to read from the active or inactive flash bank 1016 + * @hdr_len: storage for header length in words 1017 + * 1018 + * Read the CSS header length from the NVM CSS header and add the Authentication 1019 + * header size, and then convert to words. 1020 + * 1021 + * Return: zero on success, or a negative error code on failure. 
1022 + */ 1023 + static int 1024 + ice_get_nvm_css_hdr_len(struct ice_hw *hw, enum ice_bank_select bank, 1025 + u32 *hdr_len) 1026 + { 1027 + u16 hdr_len_l, hdr_len_h; 1028 + u32 hdr_len_dword; 1029 + int status; 1030 + 1031 + status = ice_read_nvm_module(hw, bank, ICE_NVM_CSS_HDR_LEN_L, 1032 + &hdr_len_l); 1033 + if (status) 1034 + return status; 1035 + 1036 + status = ice_read_nvm_module(hw, bank, ICE_NVM_CSS_HDR_LEN_H, 1037 + &hdr_len_h); 1038 + if (status) 1039 + return status; 1040 + 1041 + /* CSS header length is in DWORD, so convert to words and add 1042 + * authentication header size 1043 + */ 1044 + hdr_len_dword = hdr_len_h << 16 | hdr_len_l; 1045 + *hdr_len = (hdr_len_dword * 2) + ICE_NVM_AUTH_HEADER_LEN; 1046 + 1047 + return 0; 1048 + } 1049 + 1050 + /** 1051 + * ice_determine_css_hdr_len - Discover CSS header length for the device 1052 + * @hw: pointer to the HW struct 1053 + * 1054 + * Determine the size of the CSS header at the start of the NVM module. This 1055 + * is useful for locating the Shadow RAM copy in the NVM, as the Shadow RAM is 1056 + * always located just after the CSS header. 1057 + * 1058 + * Return: zero on success, or a negative error code on failure. 
1059 + */ 1060 + static int ice_determine_css_hdr_len(struct ice_hw *hw) 1061 + { 1062 + struct ice_bank_info *banks = &hw->flash.banks; 1063 + int status; 1064 + 1065 + status = ice_get_nvm_css_hdr_len(hw, ICE_ACTIVE_FLASH_BANK, 1066 + &banks->active_css_hdr_len); 1067 + if (status) 1068 + return status; 1069 + 1070 + status = ice_get_nvm_css_hdr_len(hw, ICE_INACTIVE_FLASH_BANK, 1071 + &banks->inactive_css_hdr_len); 1072 + if (status) 1073 + return status; 1074 + 1075 + return 0; 1076 + } 1077 + 1078 + /** 1041 1079 * ice_init_nvm - initializes NVM setting 1042 1080 * @hw: pointer to the HW struct 1043 1081 * ··· 1146 1052 status = ice_determine_active_flash_banks(hw); 1147 1053 if (status) { 1148 1054 ice_debug(hw, ICE_DBG_NVM, "Failed to determine active flash banks.\n"); 1055 + return status; 1056 + } 1057 + 1058 + status = ice_determine_css_hdr_len(hw); 1059 + if (status) { 1060 + ice_debug(hw, ICE_DBG_NVM, "Failed to determine Shadow RAM copy offsets.\n"); 1149 1061 return status; 1150 1062 } 1151 1063
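The PFA walk above guards its 16-bit offset arithmetic with `check_add_overflow()`. A userspace sketch of the same TLV-advance step, using the compiler builtin that the kernel macro wraps (the helper name is ours):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hedged sketch of the overflow-safe TLV advance: all offsets are
 * 16-bit words, so "next_tlv + 2 + tlv_len" can silently wrap.
 * __builtin_add_overflow (GCC/Clang) mirrors check_add_overflow().
 */
static bool next_tlv_offset(uint16_t cur, uint16_t tlv_len, uint16_t *next)
{
    uint16_t tmp;

    /* +2 words for the current TLV's type and length fields */
    if (__builtin_add_overflow(cur, (uint16_t)2, &tmp))
        return false;           /* would wrap past 0xffff */
    if (__builtin_add_overflow(tmp, tlv_len, next))
        return false;
    return true;                /* *next is the following TLV header */
}
```

A malformed `tlv_len` near the top of the 16-bit range now fails cleanly instead of wrapping the cursor back into already-scanned flash.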
+6 -8
drivers/net/ethernet/intel/ice/ice_type.h
··· 482 482 u32 orom_size; /* Size of OROM bank */ 483 483 u32 netlist_ptr; /* Pointer to 1st Netlist bank */ 484 484 u32 netlist_size; /* Size of Netlist bank */ 485 + u32 active_css_hdr_len; /* Active CSS header length */ 486 + u32 inactive_css_hdr_len; /* Inactive CSS header length */ 485 487 enum ice_flash_bank nvm_bank; /* Active NVM bank */ 486 488 enum ice_flash_bank orom_bank; /* Active OROM bank */ 487 489 enum ice_flash_bank netlist_bank; /* Active Netlist bank */ ··· 1089 1087 #define ICE_SR_SECTOR_SIZE_IN_WORDS 0x800 1090 1088 1091 1089 /* CSS Header words */ 1090 + #define ICE_NVM_CSS_HDR_LEN_L 0x02 1091 + #define ICE_NVM_CSS_HDR_LEN_H 0x03 1092 1092 #define ICE_NVM_CSS_SREV_L 0x14 1093 1093 #define ICE_NVM_CSS_SREV_H 0x15 1094 1094 1095 - /* Length of CSS header section in words */ 1096 - #define ICE_CSS_HEADER_LENGTH 330 1097 - 1098 - /* Offset of Shadow RAM copy in the NVM bank area. */ 1099 - #define ICE_NVM_SR_COPY_WORD_OFFSET roundup(ICE_CSS_HEADER_LENGTH, 32) 1100 - 1101 - /* Size in bytes of Option ROM trailer */ 1102 - #define ICE_NVM_OROM_TRAILER_LENGTH (2 * ICE_CSS_HEADER_LENGTH) 1095 + /* Length of Authentication header section in words */ 1096 + #define ICE_NVM_AUTH_HEADER_LEN 0x08 1103 1097 1104 1098 /* The Link Topology Netlist section is stored as a series of words. It is 1105 1099 * stored in the NVM as a TLV, with the first two words containing the type
+6 -7
drivers/net/ethernet/intel/ice/ice_xsk.c
··· 269 269 if (!pool) 270 270 return -EINVAL; 271 271 272 - clear_bit(qid, vsi->af_xdp_zc_qps); 273 272 xsk_pool_dma_unmap(pool, ICE_RX_DMA_ATTR); 274 273 275 274 return 0; ··· 298 299 ICE_RX_DMA_ATTR); 299 300 if (err) 300 301 return err; 301 - 302 - set_bit(qid, vsi->af_xdp_zc_qps); 303 302 304 303 return 0; 305 304 } ··· 346 349 int ice_realloc_zc_buf(struct ice_vsi *vsi, bool zc) 347 350 { 348 351 struct ice_rx_ring *rx_ring; 349 - unsigned long q; 352 + uint i; 350 353 351 - for_each_set_bit(q, vsi->af_xdp_zc_qps, 352 - max_t(int, vsi->alloc_txq, vsi->alloc_rxq)) { 353 - rx_ring = vsi->rx_rings[q]; 354 + ice_for_each_rxq(vsi, i) { 355 + rx_ring = vsi->rx_rings[i]; 356 + if (!rx_ring->xsk_pool) 357 + continue; 358 + 354 359 if (ice_realloc_rx_xdp_bufs(rx_ring, zc)) 355 360 return -ENOMEM; 356 361 }
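The ice_xsk.c hunks drop the `af_xdp_zc_qps` bitmap in favour of testing `rx_ring->xsk_pool` directly. A tiny sketch of that "pointer as single source of truth" walk (the struct and demo are ours):

```c
#include <assert.h>
#include <stddef.h>

/* Hedged sketch: a ring is a zero-copy ring exactly when its
 * xsk_pool pointer is set, so the realloc path just walks the rings
 * and skips the NULL ones -- no separate bitmap to keep in sync.
 */
struct rx_ring { void *xsk_pool; };

static int count_zc_rings(const struct rx_ring *rings, int n)
{
    int i, cnt = 0;

    for (i = 0; i < n; i++)
        if (rings[i].xsk_pool)  /* only ZC rings need the realloc */
            cnt++;
    return cnt;
}

static int demo(void)
{
    static char pool;
    struct rx_ring rings[3] = { { NULL }, { &pool }, { NULL } };

    return count_zc_rings(rings, 3);
}
```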
+7 -2
drivers/net/ethernet/intel/igc/igc_ethtool.c
··· 1629 1629 struct igc_hw *hw = &adapter->hw; 1630 1630 u32 eeer; 1631 1631 1632 + linkmode_set_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT, 1633 + edata->supported); 1634 + linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, 1635 + edata->supported); 1636 + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, 1637 + edata->supported); 1638 + 1632 1639 if (hw->dev_spec._base.eee_enable) 1633 1640 mii_eee_cap1_mod_linkmode_t(edata->advertised, 1634 1641 adapter->eee_advert); 1635 - 1636 - *edata = adapter->eee; 1637 1642 1638 1643 eeer = rd32(IGC_EEER); 1639 1644
+4
drivers/net/ethernet/intel/igc/igc_main.c
··· 12 12 #include <linux/bpf_trace.h> 13 13 #include <net/xdp_sock_drv.h> 14 14 #include <linux/pci.h> 15 + #include <linux/mdio.h> 15 16 16 17 #include <net/ipv6.h> 17 18 ··· 4976 4975 /* start the watchdog. */ 4977 4976 hw->mac.get_link_status = true; 4978 4977 schedule_work(&adapter->watchdog_task); 4978 + 4979 + adapter->eee_advert = MDIO_EEE_100TX | MDIO_EEE_1000T | 4980 + MDIO_EEE_2_5GT; 4979 4981 } 4980 4982 4981 4983 /**
+22 -11
drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
··· 2519 2519 * - when available free entries are less. 2520 2520 * Lower priority ones out of avaialble free entries are always 2521 2521 * chosen when 'high vs low' question arises. 2522 + * 2523 + * For a VF base MCAM match rule is set by its PF. And all the 2524 + * further MCAM rules installed by VF on its own are 2525 + * concatenated with the base rule set by its PF. Hence PF entries 2526 + * should be at lower priority compared to VF entries. Otherwise 2527 + * base rule is hit always and rules installed by VF will be of 2528 + * no use. Hence if the request is from PF then allocate low 2529 + * priority entries. 2522 2530 */ 2531 + if (!(pcifunc & RVU_PFVF_FUNC_MASK)) 2532 + goto lprio_alloc; 2523 2533 2524 2534 /* Get the search range for priority allocation request */ 2525 2535 if (req->priority) { ··· 2537 2527 &start, &end, &reverse); 2538 2528 goto alloc; 2539 2529 } 2540 - 2541 - /* For a VF base MCAM match rule is set by its PF. And all the 2542 - * further MCAM rules installed by VF on its own are 2543 - * concatenated with the base rule set by its PF. Hence PF entries 2544 - * should be at lower priority compared to VF entries. Otherwise 2545 - * base rule is hit always and rules installed by VF will be of 2546 - * no use. Hence if the request is from PF and NOT a priority 2547 - * allocation request then allocate low priority entries. 2548 - */ 2549 - if (!(pcifunc & RVU_PFVF_FUNC_MASK)) 2550 - goto lprio_alloc; 2551 2530 2552 2531 /* Find out the search range for non-priority allocation request 2553 2532 * ··· 2567 2568 reverse = true; 2568 2569 start = 0; 2569 2570 end = mcam->bmap_entries; 2571 + /* Ensure PF requests are always at bottom and if PF requests 2572 + * for higher/lower priority entry wrt reference entry then 2573 + * honour that criteria and start search for entries from bottom 2574 + * and not in mid zone. 
2575 + */ 2576 + if (!(pcifunc & RVU_PFVF_FUNC_MASK) && 2577 + req->priority == NPC_MCAM_HIGHER_PRIO) 2578 + end = req->ref_entry; 2579 + 2580 + if (!(pcifunc & RVU_PFVF_FUNC_MASK) && 2581 + req->priority == NPC_MCAM_LOWER_PRIO) 2582 + start = req->ref_entry; 2570 2583 } 2571 2584 2572 2585 alloc:
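The rvu_npc.c change routes all PF allocations through the low-priority path and, when the PF states a HIGHER/LOWER preference, clamps the bottom-zone search window around `ref_entry`. A hedged sketch of how such a window could be computed (this is our reading of the patch; the struct and names are ours):

```c
#include <assert.h>
#include <stdbool.h>

/* Hedged sketch: PF-installed base rules must land at lower priority
 * (bottom of the MCAM) than VF rules, so PF requests always search
 * the bottom zone; a HIGHER/LOWER preference relative to a reference
 * entry only narrows that window rather than moving it mid-map.
 */
enum prio { PRIO_NONE, PRIO_HIGHER, PRIO_LOWER };

struct window { int start, end; bool reverse; };

static struct window pf_lprio_window(int bmap_entries, enum prio p,
                                     int ref_entry)
{
    struct window w = { 0, bmap_entries, true }; /* scan from bottom */

    if (p == PRIO_HIGHER)
        w.end = ref_entry;      /* only entries above the reference */
    else if (p == PRIO_LOWER)
        w.start = ref_entry;    /* only entries below the reference */
    return w;
}
```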
+71 -35
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 1131 1131 { 1132 1132 const struct mtk_soc_data *soc = eth->soc; 1133 1133 dma_addr_t phy_ring_tail; 1134 - int cnt = MTK_QDMA_RING_SIZE; 1134 + int cnt = soc->tx.fq_dma_size; 1135 1135 dma_addr_t dma_addr; 1136 - int i; 1136 + int i, j, len; 1137 1137 1138 1138 if (MTK_HAS_CAPS(eth->soc->caps, MTK_SRAM)) 1139 1139 eth->scratch_ring = eth->sram_base; ··· 1142 1142 cnt * soc->tx.desc_size, 1143 1143 &eth->phy_scratch_ring, 1144 1144 GFP_KERNEL); 1145 + 1145 1146 if (unlikely(!eth->scratch_ring)) 1146 - return -ENOMEM; 1147 - 1148 - eth->scratch_head = kcalloc(cnt, MTK_QDMA_PAGE_SIZE, GFP_KERNEL); 1149 - if (unlikely(!eth->scratch_head)) 1150 - return -ENOMEM; 1151 - 1152 - dma_addr = dma_map_single(eth->dma_dev, 1153 - eth->scratch_head, cnt * MTK_QDMA_PAGE_SIZE, 1154 - DMA_FROM_DEVICE); 1155 - if (unlikely(dma_mapping_error(eth->dma_dev, dma_addr))) 1156 1147 return -ENOMEM; 1157 1148 1158 1149 phy_ring_tail = eth->phy_scratch_ring + soc->tx.desc_size * (cnt - 1); 1159 1150 1160 - for (i = 0; i < cnt; i++) { 1161 - dma_addr_t addr = dma_addr + i * MTK_QDMA_PAGE_SIZE; 1162 - struct mtk_tx_dma_v2 *txd; 1151 + for (j = 0; j < DIV_ROUND_UP(soc->tx.fq_dma_size, MTK_FQ_DMA_LENGTH); j++) { 1152 + len = min_t(int, cnt - j * MTK_FQ_DMA_LENGTH, MTK_FQ_DMA_LENGTH); 1153 + eth->scratch_head[j] = kcalloc(len, MTK_QDMA_PAGE_SIZE, GFP_KERNEL); 1163 1154 1164 - txd = eth->scratch_ring + i * soc->tx.desc_size; 1165 - txd->txd1 = addr; 1166 - if (i < cnt - 1) 1167 - txd->txd2 = eth->phy_scratch_ring + 1168 - (i + 1) * soc->tx.desc_size; 1155 + if (unlikely(!eth->scratch_head[j])) 1156 + return -ENOMEM; 1169 1157 1170 - txd->txd3 = TX_DMA_PLEN0(MTK_QDMA_PAGE_SIZE); 1171 - if (MTK_HAS_CAPS(soc->caps, MTK_36BIT_DMA)) 1172 - txd->txd3 |= TX_DMA_PREP_ADDR64(addr); 1173 - txd->txd4 = 0; 1174 - if (mtk_is_netsys_v2_or_greater(eth)) { 1175 - txd->txd5 = 0; 1176 - txd->txd6 = 0; 1177 - txd->txd7 = 0; 1178 - txd->txd8 = 0; 1158 + dma_addr = dma_map_single(eth->dma_dev, 1159 + 
eth->scratch_head[j], len * MTK_QDMA_PAGE_SIZE, 1160 + DMA_FROM_DEVICE); 1161 + 1162 + if (unlikely(dma_mapping_error(eth->dma_dev, dma_addr))) 1163 + return -ENOMEM; 1164 + 1165 + for (i = 0; i < cnt; i++) { 1166 + struct mtk_tx_dma_v2 *txd; 1167 + 1168 + txd = eth->scratch_ring + (j * MTK_FQ_DMA_LENGTH + i) * soc->tx.desc_size; 1169 + txd->txd1 = dma_addr + i * MTK_QDMA_PAGE_SIZE; 1170 + if (j * MTK_FQ_DMA_LENGTH + i < cnt) 1171 + txd->txd2 = eth->phy_scratch_ring + 1172 + (j * MTK_FQ_DMA_LENGTH + i + 1) * soc->tx.desc_size; 1173 + 1174 + txd->txd3 = TX_DMA_PLEN0(MTK_QDMA_PAGE_SIZE); 1175 + if (MTK_HAS_CAPS(soc->caps, MTK_36BIT_DMA)) 1176 + txd->txd3 |= TX_DMA_PREP_ADDR64(dma_addr + i * MTK_QDMA_PAGE_SIZE); 1177 + 1178 + txd->txd4 = 0; 1179 + if (mtk_is_netsys_v2_or_greater(eth)) { 1180 + txd->txd5 = 0; 1181 + txd->txd6 = 0; 1182 + txd->txd7 = 0; 1183 + txd->txd8 = 0; 1184 + } 1179 1185 } 1180 1186 } 1181 1187 ··· 2463 2457 if (MTK_HAS_CAPS(soc->caps, MTK_QDMA)) 2464 2458 ring_size = MTK_QDMA_RING_SIZE; 2465 2459 else 2466 - ring_size = MTK_DMA_SIZE; 2460 + ring_size = soc->tx.dma_size; 2467 2461 2468 2462 ring->buf = kcalloc(ring_size, sizeof(*ring->buf), 2469 2463 GFP_KERNEL); ··· 2471 2465 goto no_tx_mem; 2472 2466 2473 2467 if (MTK_HAS_CAPS(soc->caps, MTK_SRAM)) { 2474 - ring->dma = eth->sram_base + ring_size * sz; 2475 - ring->phys = eth->phy_scratch_ring + ring_size * (dma_addr_t)sz; 2468 + ring->dma = eth->sram_base + soc->tx.fq_dma_size * sz; 2469 + ring->phys = eth->phy_scratch_ring + soc->tx.fq_dma_size * (dma_addr_t)sz; 2476 2470 } else { 2477 2471 ring->dma = dma_alloc_coherent(eth->dma_dev, ring_size * sz, 2478 2472 &ring->phys, GFP_KERNEL); ··· 2594 2588 static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag) 2595 2589 { 2596 2590 const struct mtk_reg_map *reg_map = eth->soc->reg_map; 2591 + const struct mtk_soc_data *soc = eth->soc; 2597 2592 struct mtk_rx_ring *ring; 2598 2593 int rx_data_len, rx_dma_size, tx_ring_size; 2599 2594 
int i; ··· 2602 2595 if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) 2603 2596 tx_ring_size = MTK_QDMA_RING_SIZE; 2604 2597 else 2605 - tx_ring_size = MTK_DMA_SIZE; 2598 + tx_ring_size = soc->tx.dma_size; 2606 2599 2607 2600 if (rx_flag == MTK_RX_FLAGS_QDMA) { 2608 2601 if (ring_no) ··· 2617 2610 rx_dma_size = MTK_HW_LRO_DMA_SIZE; 2618 2611 } else { 2619 2612 rx_data_len = ETH_DATA_LEN; 2620 - rx_dma_size = MTK_DMA_SIZE; 2613 + rx_dma_size = soc->rx.dma_size; 2621 2614 } 2622 2615 2623 2616 ring->frag_size = mtk_max_frag_size(rx_data_len); ··· 3146 3139 mtk_rx_clean(eth, &eth->rx_ring[i], false); 3147 3140 } 3148 3141 3149 - kfree(eth->scratch_head); 3142 + for (i = 0; i < DIV_ROUND_UP(soc->tx.fq_dma_size, MTK_FQ_DMA_LENGTH); i++) { 3143 + kfree(eth->scratch_head[i]); 3144 + eth->scratch_head[i] = NULL; 3145 + } 3150 3146 } 3151 3147 3152 3148 static bool mtk_hw_reset_check(struct mtk_eth *eth) ··· 5062 5052 .desc_size = sizeof(struct mtk_tx_dma), 5063 5053 .dma_max_len = MTK_TX_DMA_BUF_LEN, 5064 5054 .dma_len_offset = 16, 5055 + .dma_size = MTK_DMA_SIZE(2K), 5056 + .fq_dma_size = MTK_DMA_SIZE(2K), 5065 5057 }, 5066 5058 .rx = { 5067 5059 .desc_size = sizeof(struct mtk_rx_dma), 5068 5060 .irq_done_mask = MTK_RX_DONE_INT, 5069 5061 .dma_l4_valid = RX_DMA_L4_VALID, 5062 + .dma_size = MTK_DMA_SIZE(2K), 5070 5063 .dma_max_len = MTK_TX_DMA_BUF_LEN, 5071 5064 .dma_len_offset = 16, 5072 5065 }, ··· 5089 5076 .desc_size = sizeof(struct mtk_tx_dma), 5090 5077 .dma_max_len = MTK_TX_DMA_BUF_LEN, 5091 5078 .dma_len_offset = 16, 5079 + .dma_size = MTK_DMA_SIZE(2K), 5080 + .fq_dma_size = MTK_DMA_SIZE(2K), 5092 5081 }, 5093 5082 .rx = { 5094 5083 .desc_size = sizeof(struct mtk_rx_dma), 5095 5084 .irq_done_mask = MTK_RX_DONE_INT, 5096 5085 .dma_l4_valid = RX_DMA_L4_VALID, 5086 + .dma_size = MTK_DMA_SIZE(2K), 5097 5087 .dma_max_len = MTK_TX_DMA_BUF_LEN, 5098 5088 .dma_len_offset = 16, 5099 5089 }, ··· 5118 5102 .desc_size = sizeof(struct mtk_tx_dma), 5119 5103 .dma_max_len = 
MTK_TX_DMA_BUF_LEN, 5120 5104 .dma_len_offset = 16, 5105 + .dma_size = MTK_DMA_SIZE(2K), 5106 + .fq_dma_size = MTK_DMA_SIZE(2K), 5121 5107 }, 5122 5108 .rx = { 5123 5109 .desc_size = sizeof(struct mtk_rx_dma), 5124 5110 .irq_done_mask = MTK_RX_DONE_INT, 5125 5111 .dma_l4_valid = RX_DMA_L4_VALID, 5112 + .dma_size = MTK_DMA_SIZE(2K), 5126 5113 .dma_max_len = MTK_TX_DMA_BUF_LEN, 5127 5114 .dma_len_offset = 16, 5128 5115 }, ··· 5146 5127 .desc_size = sizeof(struct mtk_tx_dma), 5147 5128 .dma_max_len = MTK_TX_DMA_BUF_LEN, 5148 5129 .dma_len_offset = 16, 5130 + .dma_size = MTK_DMA_SIZE(2K), 5131 + .fq_dma_size = MTK_DMA_SIZE(2K), 5149 5132 }, 5150 5133 .rx = { 5151 5134 .desc_size = sizeof(struct mtk_rx_dma), 5152 5135 .irq_done_mask = MTK_RX_DONE_INT, 5153 5136 .dma_l4_valid = RX_DMA_L4_VALID, 5137 + .dma_size = MTK_DMA_SIZE(2K), 5154 5138 .dma_max_len = MTK_TX_DMA_BUF_LEN, 5155 5139 .dma_len_offset = 16, 5156 5140 }, ··· 5172 5150 .desc_size = sizeof(struct mtk_tx_dma), 5173 5151 .dma_max_len = MTK_TX_DMA_BUF_LEN, 5174 5152 .dma_len_offset = 16, 5153 + .dma_size = MTK_DMA_SIZE(2K), 5154 + .fq_dma_size = MTK_DMA_SIZE(2K), 5175 5155 }, 5176 5156 .rx = { 5177 5157 .desc_size = sizeof(struct mtk_rx_dma), 5178 5158 .irq_done_mask = MTK_RX_DONE_INT, 5179 5159 .dma_l4_valid = RX_DMA_L4_VALID, 5160 + .dma_size = MTK_DMA_SIZE(2K), 5180 5161 .dma_max_len = MTK_TX_DMA_BUF_LEN, 5181 5162 .dma_len_offset = 16, 5182 5163 }, ··· 5201 5176 .desc_size = sizeof(struct mtk_tx_dma_v2), 5202 5177 .dma_max_len = MTK_TX_DMA_BUF_LEN_V2, 5203 5178 .dma_len_offset = 8, 5179 + .dma_size = MTK_DMA_SIZE(2K), 5180 + .fq_dma_size = MTK_DMA_SIZE(2K), 5204 5181 }, 5205 5182 .rx = { 5206 5183 .desc_size = sizeof(struct mtk_rx_dma), ··· 5210 5183 .dma_l4_valid = RX_DMA_L4_VALID_V2, 5211 5184 .dma_max_len = MTK_TX_DMA_BUF_LEN, 5212 5185 .dma_len_offset = 16, 5186 + .dma_size = MTK_DMA_SIZE(2K), 5213 5187 }, 5214 5188 }; 5215 5189 ··· 5230 5202 .desc_size = sizeof(struct mtk_tx_dma_v2), 5231 5203 
.dma_max_len = MTK_TX_DMA_BUF_LEN_V2, 5232 5204 .dma_len_offset = 8, 5205 + .dma_size = MTK_DMA_SIZE(2K), 5206 + .fq_dma_size = MTK_DMA_SIZE(2K), 5233 5207 }, 5234 5208 .rx = { 5235 5209 .desc_size = sizeof(struct mtk_rx_dma), ··· 5239 5209 .dma_l4_valid = RX_DMA_L4_VALID_V2, 5240 5210 .dma_max_len = MTK_TX_DMA_BUF_LEN, 5241 5211 .dma_len_offset = 16, 5212 + .dma_size = MTK_DMA_SIZE(2K), 5242 5213 }, 5243 5214 }; 5244 5215 ··· 5259 5228 .desc_size = sizeof(struct mtk_tx_dma_v2), 5260 5229 .dma_max_len = MTK_TX_DMA_BUF_LEN_V2, 5261 5230 .dma_len_offset = 8, 5231 + .dma_size = MTK_DMA_SIZE(2K), 5232 + .fq_dma_size = MTK_DMA_SIZE(4K), 5262 5233 }, 5263 5234 .rx = { 5264 5235 .desc_size = sizeof(struct mtk_rx_dma_v2), ··· 5268 5235 .dma_l4_valid = RX_DMA_L4_VALID_V2, 5269 5236 .dma_max_len = MTK_TX_DMA_BUF_LEN_V2, 5270 5237 .dma_len_offset = 8, 5238 + .dma_size = MTK_DMA_SIZE(2K), 5271 5239 }, 5272 5240 }; 5273 5241 ··· 5283 5249 .desc_size = sizeof(struct mtk_tx_dma), 5284 5250 .dma_max_len = MTK_TX_DMA_BUF_LEN, 5285 5251 .dma_len_offset = 16, 5252 + .dma_size = MTK_DMA_SIZE(2K), 5286 5253 }, 5287 5254 .rx = { 5288 5255 .desc_size = sizeof(struct mtk_rx_dma), ··· 5291 5256 .dma_l4_valid = RX_DMA_L4_VALID_PDMA, 5292 5257 .dma_max_len = MTK_TX_DMA_BUF_LEN, 5293 5258 .dma_len_offset = 16, 5259 + .dma_size = MTK_DMA_SIZE(2K), 5294 5260 }, 5295 5261 }; 5296 5262
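`mtk_init_fq_dma()` above now carves the forward-queue scratch area into `MTK_FQ_DMA_LENGTH`-entry chunks instead of one huge allocation, with entry `(j, i)` mapping to global descriptor index `j * MTK_FQ_DMA_LENGTH + i`. A hedged sketch of just the index math (the chunk constant mirrors the patch; the helper functions are ours):

```c
#include <assert.h>

/* Hedged sketch of the chunked scratch allocation: cnt descriptors
 * are split across DIV_ROUND_UP(cnt, CHUNK) allocations, and the
 * final chunk may be partial.
 */
#define CHUNK 2048  /* MTK_FQ_DMA_LENGTH in the patch */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static int chunks_needed(int cnt)
{
    return DIV_ROUND_UP(cnt, CHUNK);
}

/* Length of chunk j, clamped for the final partial chunk. */
static int chunk_len(int cnt, int j)
{
    int len = cnt - j * CHUNK;

    return len > CHUNK ? CHUNK : len;
}
```

Splitting the allocation keeps each `kcalloc()` request small enough to succeed under memory fragmentation, which matters once `fq_dma_size` grows to 4K entries on newer SoCs.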
+7 -2
drivers/net/ethernet/mediatek/mtk_eth_soc.h
··· 32 32 #define MTK_TX_DMA_BUF_LEN 0x3fff 33 33 #define MTK_TX_DMA_BUF_LEN_V2 0xffff 34 34 #define MTK_QDMA_RING_SIZE 2048 35 - #define MTK_DMA_SIZE 512 35 + #define MTK_DMA_SIZE(x) (SZ_##x) 36 + #define MTK_FQ_DMA_HEAD 32 37 + #define MTK_FQ_DMA_LENGTH 2048 36 38 #define MTK_RX_ETH_HLEN (ETH_HLEN + ETH_FCS_LEN) 37 39 #define MTK_RX_HLEN (NET_SKB_PAD + MTK_RX_ETH_HLEN + NET_IP_ALIGN) 38 40 #define MTK_DMA_DUMMY_DESC 0xffffffff ··· 1178 1176 u32 desc_size; 1179 1177 u32 dma_max_len; 1180 1178 u32 dma_len_offset; 1179 + u32 dma_size; 1180 + u32 fq_dma_size; 1181 1181 } tx; 1182 1182 struct { 1183 1183 u32 desc_size; ··· 1187 1183 u32 dma_l4_valid; 1188 1184 u32 dma_max_len; 1189 1185 u32 dma_len_offset; 1186 + u32 dma_size; 1190 1187 } rx; 1191 1188 }; 1192 1189 ··· 1269 1264 struct napi_struct rx_napi; 1270 1265 void *scratch_ring; 1271 1266 dma_addr_t phy_scratch_ring; 1272 - void *scratch_head; 1267 + void *scratch_head[MTK_FQ_DMA_HEAD]; 1273 1268 struct clk *clks[MTK_CLK_MAX]; 1274 1269 1275 1270 struct mii_bus *mii_bus;
+4
drivers/net/ethernet/mellanox/mlx5/core/fw.c
··· 373 373 do { 374 374 if (mlx5_get_nic_state(dev) == MLX5_INITIAL_SEG_NIC_INTERFACE_DISABLED) 375 375 break; 376 + if (pci_channel_offline(dev->pdev)) { 377 + mlx5_core_err(dev, "PCI channel offline, stop waiting for NIC IFC\n"); 378 + return -EACCES; 379 + } 376 380 377 381 cond_resched(); 378 382 } while (!time_after(jiffies, end));
+8
drivers/net/ethernet/mellanox/mlx5/core/health.c
··· 248 248 do { 249 249 if (mlx5_get_nic_state(dev) == MLX5_INITIAL_SEG_NIC_INTERFACE_DISABLED) 250 250 break; 251 + if (pci_channel_offline(dev->pdev)) { 252 + mlx5_core_err(dev, "PCI channel offline, stop waiting for NIC IFC\n"); 253 + goto unlock; 254 + } 251 255 252 256 msleep(20); 253 257 } while (!time_after(jiffies, end)); ··· 320 316 if (test_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state)) { 321 317 mlx5_core_warn(dev, "device is being removed, stop waiting for PCI\n"); 322 318 return -ENODEV; 319 + } 320 + if (pci_channel_offline(dev->pdev)) { 321 + mlx5_core_err(dev, "PCI channel offline, stop waiting for PCI\n"); 322 + return -EACCES; 323 323 } 324 324 msleep(100); 325 325 }
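The mlx5 hunks add the same guard to several wait loops: bail out as soon as the PCI channel goes offline (surprise removal / AER) instead of sleeping out the full timeout. A generic userspace sketch of the pattern (all names are ours):

```c
#include <assert.h>
#include <stdbool.h>

/* Hedged sketch: every loop that polls device state should also
 * check for a dead PCI channel each iteration, so teardown is not
 * stalled for the whole timeout budget.
 */
enum poll_res { POLL_READY, POLL_OFFLINE, POLL_TIMEOUT };

static enum poll_res wait_for_ready(bool (*ready)(void),
                                    bool (*offline)(void), int budget)
{
    while (budget-- > 0) {
        if (ready())
            return POLL_READY;
        if (offline())          /* the check added by the patch */
            return POLL_OFFLINE;
        /* real code sleeps here (msleep()/cond_resched()) */
    }
    return POLL_TIMEOUT;
}

static bool never_ready(void)  { return false; }
static bool chan_offline(void) { return true; }

static enum poll_res demo(void)
{
    return wait_for_ready(never_ready, chan_offline, 1000);
}
```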
+6 -2
drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
··· 88 88 &dest, 1); 89 89 if (IS_ERR(lag_definer->rules[idx])) { 90 90 err = PTR_ERR(lag_definer->rules[idx]); 91 - while (i--) 92 - while (j--) 91 + do { 92 + while (j--) { 93 + idx = i * ldev->buckets + j; 93 94 mlx5_del_flow_rules(lag_definer->rules[idx]); 95 + } 96 + j = ldev->buckets; 97 + } while (i--); 94 98 goto destroy_fg; 95 99 } 96 100 }
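The port_sel.c fix above rewrites the unwind loop so the bucket index is reset for every row and the flat index is recomputed per rule; the original `while (i--) while (j--)` exhausted `j` once and then did nothing for the remaining rows. A runnable sketch demonstrating the corrected `do { while (j--) ... } while (i--)` shape (the sizes and demo are ours):

```c
#include <assert.h>

/* Hedged sketch: when creation fails at row i, bucket j, every rule
 * with flat index p * BUCKETS + q created so far must be deleted --
 * the full rows 0..i-1 plus the partial row i.
 */
#define PORTS   4
#define BUCKETS 3

static int destroy_previous(int i, int j, int created[PORTS * BUCKETS])
{
    int freed = 0;

    do {
        while (j--) {
            int idx = i * BUCKETS + j;  /* recompute, as in the fix */

            created[idx] = 0;   /* mlx5_del_flow_rules() in the driver */
            freed++;
        }
        j = BUCKETS;            /* reset bucket index for the next row */
    } while (i--);
    return freed;
}

static int demo(void)
{
    int created[PORTS * BUCKETS] = { 0 };

    /* failure happened at i = 2, j = 1: rules with flat index 0..6
     * (two full rows plus one rule of row 2) were already created
     */
    return destroy_previous(2, 1, created);
}
```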
+4
drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
··· 74 74 ret = -EBUSY; 75 75 goto pci_unlock; 76 76 } 77 + if (pci_channel_offline(dev->pdev)) { 78 + ret = -EACCES; 79 + goto pci_unlock; 80 + } 77 81 78 82 /* Check if semaphore is already locked */ 79 83 ret = vsc_read(dev, VSC_SEMAPHORE_OFFSET, &lock_val);
+3
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 1298 1298 1299 1299 if (!err) 1300 1300 mlx5_function_disable(dev, boot); 1301 + else 1302 + mlx5_stop_health_poll(dev, boot); 1303 + 1301 1304 return err; 1302 1305 } 1303 1306
+1
drivers/net/ethernet/pensando/ionic/ionic_txrx.c
··· 586 586 netdev_dbg(netdev, "tx ionic_xdp_post_frame err %d\n", err); 587 587 goto out_xdp_abort; 588 588 } 589 + buf_info->page = NULL; 589 590 stats->xdp_tx++; 590 591 591 592 /* the Tx completion will free the buffers */
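The one-line ionic fix above is an ownership hand-off: once the page is posted to Tx, the Rx `buf_info` must forget it. A sketch of the pattern (types and demo are ours):

```c
#include <assert.h>
#include <stddef.h>

/* Hedged sketch of the double-free fix: after the page is posted to
 * Tx (whose completion path frees it), the Rx side NULLs its copy of
 * the pointer so Rx cleanup cannot free the same page again.
 */
struct buf_info { void *page; };

static void post_frame(struct buf_info *bi)
{
    /* ...hand bi->page to the Tx ring here... */
    bi->page = NULL;   /* ownership moved to Tx */
}

static int rx_cleanup_frees(const struct buf_info *bi)
{
    return bi->page != NULL;   /* 1 would mean a double free */
}

static int demo(void)
{
    static char page[16];
    struct buf_info bi = { page };

    post_frame(&bi);
    return rx_cleanup_frees(&bi);
}
```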
+97 -7
drivers/net/phy/micrel.c
··· 866 866 { 867 867 int ret; 868 868 869 + /* Chip can be powered down by the bootstrap code. */ 870 + ret = phy_read(phydev, MII_BMCR); 871 + if (ret < 0) 872 + return ret; 873 + if (ret & BMCR_PDOWN) { 874 + ret = phy_write(phydev, MII_BMCR, ret & ~BMCR_PDOWN); 875 + if (ret < 0) 876 + return ret; 877 + usleep_range(1000, 2000); 878 + } 879 + 869 880 ret = phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_DEVID1, 0xB61A); 870 881 if (ret) 871 882 return ret; ··· 1950 1939 {0x1c, 0x20, 0xeeee}, 1951 1940 }; 1952 1941 1953 - static int ksz9477_config_init(struct phy_device *phydev) 1942 + static int ksz9477_phy_errata(struct phy_device *phydev) 1954 1943 { 1955 1944 int err; 1956 1945 int i; ··· 1978 1967 return err; 1979 1968 } 1980 1969 1970 + err = genphy_restart_aneg(phydev); 1971 + if (err) 1972 + return err; 1973 + 1974 + return err; 1975 + } 1976 + 1977 + static int ksz9477_config_init(struct phy_device *phydev) 1978 + { 1979 + int err; 1980 + 1981 + /* Only KSZ9897 family of switches needs this fix. */ 1982 + if ((phydev->phy_id & 0xf) == 1) { 1983 + err = ksz9477_phy_errata(phydev); 1984 + if (err) 1985 + return err; 1986 + } 1987 + 1981 1988 /* According to KSZ9477 Errata DS80000754C (Module 4) all EEE modes 1982 1989 * in this switch shall be regarded as broken. 
1983 1990 */ 1984 1991 if (phydev->dev_flags & MICREL_NO_EEE) 1985 1992 phydev->eee_broken_modes = -1; 1986 - 1987 - err = genphy_restart_aneg(phydev); 1988 - if (err) 1989 - return err; 1990 1993 1991 1994 return kszphy_config_init(phydev); 1992 1995 } ··· 2097 2072 usleep_range(1000, 2000); 2098 2073 2099 2074 ret = kszphy_config_reset(phydev); 2075 + if (ret) 2076 + return ret; 2077 + 2078 + /* Enable PHY Interrupts */ 2079 + if (phy_interrupt_is_valid(phydev)) { 2080 + phydev->interrupts = PHY_INTERRUPT_ENABLED; 2081 + if (phydev->drv->config_intr) 2082 + phydev->drv->config_intr(phydev); 2083 + } 2084 + 2085 + return 0; 2086 + } 2087 + 2088 + static int ksz9477_resume(struct phy_device *phydev) 2089 + { 2090 + int ret; 2091 + 2092 + /* No need to initialize registers if not powered down. */ 2093 + ret = phy_read(phydev, MII_BMCR); 2094 + if (ret < 0) 2095 + return ret; 2096 + if (!(ret & BMCR_PDOWN)) 2097 + return 0; 2098 + 2099 + genphy_resume(phydev); 2100 + 2101 + /* After switching from power-down to normal mode, an internal global 2102 + * reset is automatically generated. Wait a minimum of 1 ms before 2103 + * read/write access to the PHY registers. 2104 + */ 2105 + usleep_range(1000, 2000); 2106 + 2107 + /* Only KSZ9897 family of switches needs this fix. */ 2108 + if ((phydev->phy_id & 0xf) == 1) { 2109 + ret = ksz9477_phy_errata(phydev); 2110 + if (ret) 2111 + return ret; 2112 + } 2113 + 2114 + /* Enable PHY Interrupts */ 2115 + if (phy_interrupt_is_valid(phydev)) { 2116 + phydev->interrupts = PHY_INTERRUPT_ENABLED; 2117 + if (phydev->drv->config_intr) 2118 + phydev->drv->config_intr(phydev); 2119 + } 2120 + 2121 + return 0; 2122 + } 2123 + 2124 + static int ksz8061_resume(struct phy_device *phydev) 2125 + { 2126 + int ret; 2127 + 2128 + /* This function can be called twice when the Ethernet device is on. 
*/ 2129 + ret = phy_read(phydev, MII_BMCR); 2130 + if (ret < 0) 2131 + return ret; 2132 + if (!(ret & BMCR_PDOWN)) 2133 + return 0; 2134 + 2135 + genphy_resume(phydev); 2136 + usleep_range(1000, 2000); 2137 + 2138 + /* Re-program the value after chip is reset. */ 2139 + ret = phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_DEVID1, 0xB61A); 2100 2140 if (ret) 2101 2141 return ret; 2102 2142 ··· 5429 5339 .config_intr = kszphy_config_intr, 5430 5340 .handle_interrupt = kszphy_handle_interrupt, 5431 5341 .suspend = kszphy_suspend, 5432 - .resume = kszphy_resume, 5342 + .resume = ksz8061_resume, 5433 5343 }, { 5434 5344 .phy_id = PHY_ID_KSZ9021, 5435 5345 .phy_id_mask = 0x000ffffe, ··· 5583 5493 .config_intr = kszphy_config_intr, 5584 5494 .handle_interrupt = kszphy_handle_interrupt, 5585 5495 .suspend = genphy_suspend, 5586 - .resume = genphy_resume, 5496 + .resume = ksz9477_resume, 5587 5497 .get_features = ksz9477_get_features, 5588 5498 } }; 5589 5499
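The micrel.c resume paths above gate reinitialization on two cheap checks: is the PHY actually in power-down, and is it a chip that needs the errata. A sketch of both predicates (the `BMCR_PDOWN` value matches mii.h; the low-nibble family test is taken from the patch, the helper names are ours):

```c
#include <assert.h>
#include <stdbool.h>

/* Hedged sketch of the resume-path checks: rerun the errata/init
 * sequence only when the PHY is really coming out of power-down,
 * since resume can be called while the chip is already up.
 */
#define BMCR_PDOWN 0x0800   /* power-down bit in the MII BMCR */

static bool needs_reinit(int bmcr)
{
    return (bmcr & BMCR_PDOWN) != 0;
}

/* The patch applies the errata only to the KSZ9897 family,
 * identified by the low nibble of the PHY ID.
 */
static bool is_ksz9897_family(unsigned int phy_id)
{
    return (phy_id & 0xf) == 1;
}
```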
+20 -22
drivers/net/virtio_net.c
··· 2686 2686 { 2687 2687 struct scatterlist *sgs[5], hdr, stat; 2688 2688 u32 out_num = 0, tmp, in_num = 0; 2689 + bool ok; 2689 2690 int ret; 2690 2691 2691 2692 /* Caller should know better */ ··· 2732 2731 } 2733 2732 2734 2733 unlock: 2734 + ok = vi->ctrl->status == VIRTIO_NET_OK; 2735 2735 mutex_unlock(&vi->cvq_lock); 2736 - return vi->ctrl->status == VIRTIO_NET_OK; 2736 + return ok; 2737 2737 } 2738 2738 2739 2739 static bool virtnet_send_command(struct virtnet_info *vi, u8 class, u8 cmd, ··· 4259 4257 struct virtio_net_ctrl_coal_rx *coal_rx __free(kfree) = NULL; 4260 4258 bool rx_ctrl_dim_on = !!ec->use_adaptive_rx_coalesce; 4261 4259 struct scatterlist sgs_rx; 4262 - int ret = 0; 4263 4260 int i; 4264 4261 4265 4262 if (rx_ctrl_dim_on && !virtio_has_feature(vi->vdev, VIRTIO_NET_F_VQ_NOTF_COAL)) ··· 4268 4267 ec->rx_max_coalesced_frames != vi->intr_coal_rx.max_packets)) 4269 4268 return -EINVAL; 4270 4269 4271 - /* Acquire all queues dim_locks */ 4272 - for (i = 0; i < vi->max_queue_pairs; i++) 4273 - mutex_lock(&vi->rq[i].dim_lock); 4274 - 4275 4270 if (rx_ctrl_dim_on && !vi->rx_dim_enabled) { 4276 4271 vi->rx_dim_enabled = true; 4277 - for (i = 0; i < vi->max_queue_pairs; i++) 4272 + for (i = 0; i < vi->max_queue_pairs; i++) { 4273 + mutex_lock(&vi->rq[i].dim_lock); 4278 4274 vi->rq[i].dim_enabled = true; 4279 - goto unlock; 4275 + mutex_unlock(&vi->rq[i].dim_lock); 4276 + } 4277 + return 0; 4280 4278 } 4281 4279 4282 4280 coal_rx = kzalloc(sizeof(*coal_rx), GFP_KERNEL); 4283 - if (!coal_rx) { 4284 - ret = -ENOMEM; 4285 - goto unlock; 4286 - } 4281 + if (!coal_rx) 4282 + return -ENOMEM; 4287 4283 4288 4284 if (!rx_ctrl_dim_on && vi->rx_dim_enabled) { 4289 4285 vi->rx_dim_enabled = false; 4290 - for (i = 0; i < vi->max_queue_pairs; i++) 4286 + for (i = 0; i < vi->max_queue_pairs; i++) { 4287 + mutex_lock(&vi->rq[i].dim_lock); 4291 4288 vi->rq[i].dim_enabled = false; 4289 + mutex_unlock(&vi->rq[i].dim_lock); 4290 + } 4292 4291 } 4293 4292 4294 4293 /* Since 
the per-queue coalescing params can be set, ··· 4301 4300 4302 4301 if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL, 4303 4302 VIRTIO_NET_CTRL_NOTF_COAL_RX_SET, 4304 - &sgs_rx)) { 4305 - ret = -EINVAL; 4306 - goto unlock; 4307 - } 4303 + &sgs_rx)) 4304 + return -EINVAL; 4308 4305 4309 4306 vi->intr_coal_rx.max_usecs = ec->rx_coalesce_usecs; 4310 4307 vi->intr_coal_rx.max_packets = ec->rx_max_coalesced_frames; 4311 4308 for (i = 0; i < vi->max_queue_pairs; i++) { 4309 + mutex_lock(&vi->rq[i].dim_lock); 4312 4310 vi->rq[i].intr_coal.max_usecs = ec->rx_coalesce_usecs; 4313 4311 vi->rq[i].intr_coal.max_packets = ec->rx_max_coalesced_frames; 4314 - } 4315 - unlock: 4316 - for (i = vi->max_queue_pairs - 1; i >= 0; i--) 4317 4312 mutex_unlock(&vi->rq[i].dim_lock); 4313 + } 4318 4314 4319 - return ret; 4315 + return 0; 4320 4316 } 4321 4317 4322 4318 static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi, ··· 4415 4417 if (err) 4416 4418 pr_debug("%s: Failed to send dim parameters on rxq%d\n", 4417 4419 dev->name, qnum); 4418 - dim->state = DIM_START_MEASURE; 4419 4420 } 4420 4421 out: 4422 + dim->state = DIM_START_MEASURE; 4421 4423 mutex_unlock(&rq->dim_lock); 4422 4424 } 4423 4425
+1 -1
drivers/net/vmxnet3/vmxnet3_drv.c
··· 2034 2034 rq->data_ring.base, 2035 2035 rq->data_ring.basePA); 2036 2036 rq->data_ring.base = NULL; 2037 - rq->data_ring.desc_size = 0; 2038 2037 } 2038 + rq->data_ring.desc_size = 0; 2039 2039 } 2040 2040 } 2041 2041
+4 -4
drivers/net/vxlan/vxlan_core.c
··· 1446 1446 struct vxlan_fdb *f; 1447 1447 u32 ifindex = 0; 1448 1448 1449 + /* Ignore packets from invalid src-address */ 1450 + if (!is_valid_ether_addr(src_mac)) 1451 + return true; 1452 + 1449 1453 #if IS_ENABLED(CONFIG_IPV6) 1450 1454 if (src_ip->sa.sa_family == AF_INET6 && 1451 1455 (ipv6_addr_type(&src_ip->sin6.sin6_addr) & IPV6_ADDR_LINKLOCAL)) ··· 1618 1614 1619 1615 /* Ignore packet loops (and multicast echo) */ 1620 1616 if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr)) 1621 - return false; 1622 - 1623 - /* Ignore packets from invalid src-address */ 1624 - if (!is_valid_ether_addr(eth_hdr(skb)->h_source)) 1625 1617 return false; 1626 1618 1627 1619 /* Get address from the outer IP header */
+1
drivers/net/wireless/ath/ath10k/Kconfig
··· 45 45 depends on ATH10K 46 46 depends on ARCH_QCOM || COMPILE_TEST 47 47 depends on QCOM_SMEM 48 + depends on QCOM_RPROC_COMMON || QCOM_RPROC_COMMON=n 48 49 select QCOM_SCM 49 50 select QCOM_QMI_HELPERS 50 51 help
+1 -1
drivers/net/wireless/ath/ath11k/core.c
··· 604 604 .coldboot_cal_ftm = true, 605 605 .cbcal_restart_fw = false, 606 606 .fw_mem_mode = 0, 607 - .num_vdevs = 16 + 1, 607 + .num_vdevs = 3, 608 608 .num_peers = 512, 609 609 .supports_suspend = false, 610 610 .hal_desc_sz = sizeof(struct hal_rx_desc_qcn9074),
+25 -13
drivers/net/wireless/ath/ath11k/mac.c
··· 7988 7988 struct ath11k_base *ab = ar->ab; 7989 7989 struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif); 7990 7990 int ret; 7991 - struct cur_regulatory_info *reg_info; 7992 - enum ieee80211_ap_reg_power power_type; 7993 7991 7994 7992 mutex_lock(&ar->conf_mutex); 7995 7993 ··· 7998 8000 if (ath11k_wmi_supports_6ghz_cc_ext(ar) && 7999 8001 ctx->def.chan->band == NL80211_BAND_6GHZ && 8000 8002 arvif->vdev_type == WMI_VDEV_TYPE_STA) { 8001 - reg_info = &ab->reg_info_store[ar->pdev_idx]; 8002 - power_type = vif->bss_conf.power_type; 8003 - 8004 - ath11k_dbg(ab, ATH11K_DBG_MAC, "chanctx power type %d\n", power_type); 8005 - 8006 - if (power_type == IEEE80211_REG_UNSET_AP) { 8007 - ret = -EINVAL; 8008 - goto out; 8009 - } 8010 - 8011 - ath11k_reg_handle_chan_list(ab, reg_info, power_type); 8012 8003 arvif->chanctx = *ctx; 8013 8004 ath11k_mac_parse_tx_pwr_env(ar, vif, ctx); 8014 8005 } ··· 9613 9626 struct ath11k *ar = hw->priv; 9614 9627 struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif); 9615 9628 struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); 9629 + enum ieee80211_ap_reg_power power_type; 9630 + struct cur_regulatory_info *reg_info; 9616 9631 struct ath11k_peer *peer; 9617 9632 int ret = 0; 9618 9633 ··· 9693 9704 if (ret) 9694 9705 ath11k_warn(ar->ab, "Unable to authorize peer %pM vdev %d: %d\n", 9695 9706 sta->addr, arvif->vdev_id, ret); 9707 + } 9708 + 9709 + if (!ret && 9710 + ath11k_wmi_supports_6ghz_cc_ext(ar) && 9711 + arvif->vdev_type == WMI_VDEV_TYPE_STA && 9712 + arvif->chanctx.def.chan && 9713 + arvif->chanctx.def.chan->band == NL80211_BAND_6GHZ) { 9714 + reg_info = &ar->ab->reg_info_store[ar->pdev_idx]; 9715 + power_type = vif->bss_conf.power_type; 9716 + 9717 + if (power_type == IEEE80211_REG_UNSET_AP) { 9718 + ath11k_warn(ar->ab, "invalid power type %d\n", 9719 + power_type); 9720 + ret = -EINVAL; 9721 + } else { 9722 + ret = ath11k_reg_handle_chan_list(ar->ab, 9723 + reg_info, 9724 + power_type); 9725 + if (ret) 9726 + ath11k_warn(ar->ab, 
9727 + "failed to handle chan list with power type %d\n", 9728 + power_type); 9729 + } 9696 9730 } 9697 9731 } else if (old_state == IEEE80211_STA_AUTHORIZED && 9698 9732 new_state == IEEE80211_STA_ASSOC) {
+17 -8
drivers/net/wireless/ath/ath11k/pcic.c
··· 561 561 { 562 562 int i, j, n, ret, num_vectors = 0; 563 563 u32 user_base_data = 0, base_vector = 0; 564 + struct ath11k_ext_irq_grp *irq_grp; 564 565 unsigned long irq_flags; 565 566 566 567 ret = ath11k_pcic_get_user_msi_assignment(ab, "DP", &num_vectors, ··· 575 574 irq_flags |= IRQF_NOBALANCING; 576 575 577 576 for (i = 0; i < ATH11K_EXT_IRQ_GRP_NUM_MAX; i++) { 578 - struct ath11k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i]; 577 + irq_grp = &ab->ext_irq_grp[i]; 579 578 u32 num_irq = 0; 580 579 581 580 irq_grp->ab = ab; 582 581 irq_grp->grp_id = i; 583 582 irq_grp->napi_ndev = alloc_netdev_dummy(0); 584 - if (!irq_grp->napi_ndev) 585 - return -ENOMEM; 583 + if (!irq_grp->napi_ndev) { 584 + ret = -ENOMEM; 585 + goto fail_allocate; 586 + } 586 587 587 588 netif_napi_add(irq_grp->napi_ndev, &irq_grp->napi, 588 589 ath11k_pcic_ext_grp_napi_poll); ··· 609 606 int irq = ath11k_pcic_get_msi_irq(ab, vector); 610 607 611 608 if (irq < 0) { 612 - for (n = 0; n <= i; n++) { 613 - irq_grp = &ab->ext_irq_grp[n]; 614 - free_netdev(irq_grp->napi_ndev); 615 - } 616 - return irq; 609 + ret = irq; 610 + goto fail_irq; 617 611 } 618 612 619 613 ab->irq_num[irq_idx] = irq; ··· 635 635 } 636 636 637 637 return 0; 638 + fail_irq: 639 + /* i ->napi_ndev was properly allocated. Free it also */ 640 + i += 1; 641 + fail_allocate: 642 + for (n = 0; n < i; n++) { 643 + irq_grp = &ab->ext_irq_grp[n]; 644 + free_netdev(irq_grp->napi_ndev); 645 + } 646 + return ret; 638 647 } 639 648 640 649 int ath11k_pcic_config_irq(struct ath11k_base *ab)
+1 -1
drivers/net/wireless/intel/iwlwifi/iwl-drv.c
··· 1815 1815 err_fw: 1816 1816 #ifdef CONFIG_IWLWIFI_DEBUGFS 1817 1817 debugfs_remove_recursive(drv->dbgfs_drv); 1818 - iwl_dbg_tlv_free(drv->trans); 1819 1818 #endif 1819 + iwl_dbg_tlv_free(drv->trans); 1820 1820 kfree(drv); 1821 1821 err: 1822 1822 return ERR_PTR(ret);
+13 -3
drivers/net/wireless/intel/iwlwifi/mvm/d3.c
··· 595 595 void *_data) 596 596 { 597 597 struct wowlan_key_gtk_type_iter *data = _data; 598 + __le32 *cipher = NULL; 599 + 600 + if (key->keyidx == 4 || key->keyidx == 5) 601 + cipher = &data->kek_kck_cmd->igtk_cipher; 602 + if (key->keyidx == 6 || key->keyidx == 7) 603 + cipher = &data->kek_kck_cmd->bigtk_cipher; 598 604 599 605 switch (key->cipher) { 600 606 default: ··· 612 606 return; 613 607 case WLAN_CIPHER_SUITE_BIP_GMAC_256: 614 608 case WLAN_CIPHER_SUITE_BIP_GMAC_128: 615 - data->kek_kck_cmd->igtk_cipher = cpu_to_le32(STA_KEY_FLG_GCMP); 609 + if (cipher) 610 + *cipher = cpu_to_le32(STA_KEY_FLG_GCMP); 616 611 return; 617 612 case WLAN_CIPHER_SUITE_AES_CMAC: 618 - data->kek_kck_cmd->igtk_cipher = cpu_to_le32(STA_KEY_FLG_CCM); 613 + case WLAN_CIPHER_SUITE_BIP_CMAC_256: 614 + if (cipher) 615 + *cipher = cpu_to_le32(STA_KEY_FLG_CCM); 619 616 return; 620 617 case WLAN_CIPHER_SUITE_CCMP: 621 618 if (!sta) ··· 2350 2341 2351 2342 out: 2352 2343 if (iwl_fw_lookup_notif_ver(mvm->fw, LONG_GROUP, 2353 - WOWLAN_GET_STATUSES, 0) < 10) { 2344 + WOWLAN_GET_STATUSES, 2345 + IWL_FW_CMD_VER_UNKNOWN) < 10) { 2354 2346 mvmvif->seqno_valid = true; 2355 2347 /* +0x10 because the set API expects next-to-use, not last-used */ 2356 2348 mvmvif->seqno = status->non_qos_seq_ctr + 0x10;
+9
drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
··· 1617 1617 &beacon_cmd.tim_size, 1618 1618 beacon->data, beacon->len); 1619 1619 1620 + if (iwl_fw_lookup_cmd_ver(mvm->fw, 1621 + BEACON_TEMPLATE_CMD, 0) >= 14) { 1622 + u32 offset = iwl_mvm_find_ie_offset(beacon->data, 1623 + WLAN_EID_S1G_TWT, 1624 + beacon->len); 1625 + 1626 + beacon_cmd.btwt_offset = cpu_to_le32(offset); 1627 + } 1628 + 1620 1629 iwl_mvm_mac_ctxt_send_beacon_cmd(mvm, beacon, &beacon_cmd, 1621 1630 sizeof(beacon_cmd)); 1622 1631 }
+2 -12
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
··· 94 94 { 95 95 struct iwl_rx_packet *pkt = rxb_addr(rxb); 96 96 struct iwl_mfu_assert_dump_notif *mfu_dump_notif = (void *)pkt->data; 97 - __le32 *dump_data = mfu_dump_notif->data; 98 - int n_words = le32_to_cpu(mfu_dump_notif->data_size) / sizeof(__le32); 99 - int i; 100 97 101 98 if (mfu_dump_notif->index_num == 0) 102 99 IWL_INFO(mvm, "MFUART assert id 0x%x occurred\n", 103 100 le32_to_cpu(mfu_dump_notif->assert_id)); 104 - 105 - for (i = 0; i < n_words; i++) 106 - IWL_DEBUG_INFO(mvm, 107 - "MFUART assert dump, dword %u: 0x%08x\n", 108 - le16_to_cpu(mfu_dump_notif->index_num) * 109 - n_words + i, 110 - le32_to_cpu(dump_data[i])); 111 101 } 112 102 113 103 static bool iwl_alive_fn(struct iwl_notif_wait_data *notif_wait, ··· 885 895 int ret; 886 896 u16 len = 0; 887 897 u32 n_subbands; 888 - u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw, cmd_id, 889 - IWL_FW_CMD_VER_UNKNOWN); 898 + u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw, cmd_id, 3); 899 + 890 900 if (cmd_ver >= 7) { 891 901 len = sizeof(cmd.v7); 892 902 n_subbands = IWL_NUM_SUB_BANDS_V2;
+1 -1
drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
··· 873 873 } 874 874 } 875 875 876 - static u32 iwl_mvm_find_ie_offset(u8 *beacon, u8 eid, u32 frame_size) 876 + u32 iwl_mvm_find_ie_offset(u8 *beacon, u8 eid, u32 frame_size) 877 877 { 878 878 struct ieee80211_mgmt *mgmt = (void *)beacon; 879 879 const u8 *ie;
+38 -1
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 1128 1128 RCU_INIT_POINTER(mvmvif->deflink.probe_resp_data, NULL); 1129 1129 } 1130 1130 1131 + static void iwl_mvm_cleanup_sta_iterator(void *data, struct ieee80211_sta *sta) 1132 + { 1133 + struct iwl_mvm *mvm = data; 1134 + struct iwl_mvm_sta *mvm_sta; 1135 + struct ieee80211_vif *vif; 1136 + int link_id; 1137 + 1138 + mvm_sta = iwl_mvm_sta_from_mac80211(sta); 1139 + vif = mvm_sta->vif; 1140 + 1141 + if (!sta->valid_links) 1142 + return; 1143 + 1144 + for (link_id = 0; link_id < ARRAY_SIZE((sta)->link); link_id++) { 1145 + struct iwl_mvm_link_sta *mvm_link_sta; 1146 + 1147 + mvm_link_sta = 1148 + rcu_dereference_check(mvm_sta->link[link_id], 1149 + lockdep_is_held(&mvm->mutex)); 1150 + if (mvm_link_sta && !(vif->active_links & BIT(link_id))) { 1151 + /* 1152 + * We have a link STA but the link is inactive in 1153 + * mac80211. This will happen if we failed to 1154 + * deactivate the link but mac80211 roll back the 1155 + * deactivation of the link. 1156 + * Delete the stale data to avoid issues later on. 1157 + */ 1158 + iwl_mvm_mld_free_sta_link(mvm, mvm_sta, mvm_link_sta, 1159 + link_id, false); 1160 + } 1161 + } 1162 + } 1163 + 1131 1164 static void iwl_mvm_restart_cleanup(struct iwl_mvm *mvm) 1132 1165 { 1133 1166 iwl_mvm_stop_device(mvm); ··· 1182 1149 * gone down during the HW restart 1183 1150 */ 1184 1151 ieee80211_iterate_interfaces(mvm->hw, 0, iwl_mvm_cleanup_iterator, mvm); 1152 + 1153 + /* cleanup stations as links may be gone after restart */ 1154 + ieee80211_iterate_stations_atomic(mvm->hw, 1155 + iwl_mvm_cleanup_sta_iterator, mvm); 1185 1156 1186 1157 mvm->p2p_device_vif = NULL; 1187 1158 ··· 6385 6348 .len[0] = sizeof(cmd), 6386 6349 .data[1] = data, 6387 6350 .len[1] = size, 6388 - .flags = sync ? 0 : CMD_ASYNC, 6351 + .flags = CMD_SEND_IN_RFKILL | (sync ? 0 : CMD_ASYNC), 6389 6352 }; 6390 6353 int ret; 6391 6354
-2
drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
··· 75 75 goto out_free_bf; 76 76 77 77 iwl_mvm_tcm_add_vif(mvm, vif); 78 - INIT_DELAYED_WORK(&mvmvif->csa_work, 79 - iwl_mvm_channel_switch_disconnect_wk); 80 78 81 79 if (vif->type == NL80211_IFTYPE_MONITOR) { 82 80 mvm->monitor_on = true;
+7 -6
drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c
··· 515 515 return iwl_mvm_mld_send_sta_cmd(mvm, &cmd); 516 516 } 517 517 518 - static void iwl_mvm_mld_free_sta_link(struct iwl_mvm *mvm, 519 - struct iwl_mvm_sta *mvm_sta, 520 - struct iwl_mvm_link_sta *mvm_sta_link, 521 - unsigned int link_id, 522 - bool is_in_fw) 518 + void iwl_mvm_mld_free_sta_link(struct iwl_mvm *mvm, 519 + struct iwl_mvm_sta *mvm_sta, 520 + struct iwl_mvm_link_sta *mvm_sta_link, 521 + unsigned int link_id, 522 + bool is_in_fw) 523 523 { 524 524 RCU_INIT_POINTER(mvm->fw_id_to_mac_id[mvm_sta_link->sta_id], 525 525 is_in_fw ? ERR_PTR(-EINVAL) : NULL); ··· 1014 1014 1015 1015 cmd.modify.tid = cpu_to_le32(data->tid); 1016 1016 1017 - ret = iwl_mvm_send_cmd_pdu(mvm, cmd_id, 0, sizeof(cmd), &cmd); 1017 + ret = iwl_mvm_send_cmd_pdu(mvm, cmd_id, CMD_SEND_IN_RFKILL, 1018 + sizeof(cmd), &cmd); 1018 1019 data->sta_mask = new_sta_mask; 1019 1020 if (ret) 1020 1021 return ret;
+1
drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
··· 1758 1758 void iwl_mvm_get_sync_time(struct iwl_mvm *mvm, int clock_type, u32 *gp2, 1759 1759 u64 *boottime, ktime_t *realtime); 1760 1760 u32 iwl_mvm_get_systime(struct iwl_mvm *mvm); 1761 + u32 iwl_mvm_find_ie_offset(u8 *beacon, u8 eid, u32 frame_size); 1761 1762 1762 1763 /* Tx / Host Commands */ 1763 1764 int __must_check iwl_mvm_send_cmd(struct iwl_mvm *mvm,
+2 -7
drivers/net/wireless/intel/iwlwifi/mvm/rs.h
··· 122 122 123 123 #define LINK_QUAL_AGG_FRAME_LIMIT_DEF (63) 124 124 #define LINK_QUAL_AGG_FRAME_LIMIT_MAX (63) 125 - /* 126 - * FIXME - various places in firmware API still use u8, 127 - * e.g. LQ command and SCD config command. 128 - * This should be 256 instead. 129 - */ 130 - #define LINK_QUAL_AGG_FRAME_LIMIT_GEN2_DEF (255) 131 - #define LINK_QUAL_AGG_FRAME_LIMIT_GEN2_MAX (255) 125 + #define LINK_QUAL_AGG_FRAME_LIMIT_GEN2_DEF (64) 126 + #define LINK_QUAL_AGG_FRAME_LIMIT_GEN2_MAX (64) 132 127 #define LINK_QUAL_AGG_FRAME_LIMIT_MIN (0) 133 128 134 129 #define LQ_SIZE 2 /* 2 mode tables: "Active" and "Search" */
+4 -1
drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
··· 2450 2450 * 2451 2451 * We mark it as mac header, for upper layers to know where 2452 2452 * all radio tap header ends. 2453 + * 2454 + * Since data doesn't move data while putting data on skb and that is 2455 + * the only way we use, data + len is the next place that hdr would be put 2453 2456 */ 2454 - skb_reset_mac_header(skb); 2457 + skb_set_mac_header(skb, skb->len); 2455 2458 2456 2459 /* 2457 2460 * Override the nss from the rx_vec since the rate_n_flags has
+8 -4
drivers/net/wireless/intel/iwlwifi/mvm/scan.c
··· 1313 1313 if (IWL_MVM_ADWELL_MAX_BUDGET) 1314 1314 cmd->v7.adwell_max_budget = 1315 1315 cpu_to_le16(IWL_MVM_ADWELL_MAX_BUDGET); 1316 - else if (params->ssids && params->ssids[0].ssid_len) 1316 + else if (params->n_ssids && params->ssids[0].ssid_len) 1317 1317 cmd->v7.adwell_max_budget = 1318 1318 cpu_to_le16(IWL_SCAN_ADWELL_MAX_BUDGET_DIRECTED_SCAN); 1319 1319 else ··· 1418 1418 if (IWL_MVM_ADWELL_MAX_BUDGET) 1419 1419 general_params->adwell_max_budget = 1420 1420 cpu_to_le16(IWL_MVM_ADWELL_MAX_BUDGET); 1421 - else if (params->ssids && params->ssids[0].ssid_len) 1421 + else if (params->n_ssids && params->ssids[0].ssid_len) 1422 1422 general_params->adwell_max_budget = 1423 1423 cpu_to_le16(IWL_SCAN_ADWELL_MAX_BUDGET_DIRECTED_SCAN); 1424 1424 else ··· 1730 1730 break; 1731 1731 } 1732 1732 1733 - if (k == idex_b && idex_b < SCAN_BSSID_MAX_SIZE) { 1733 + if (k == idex_b && idex_b < SCAN_BSSID_MAX_SIZE && 1734 + !WARN_ONCE(!is_valid_ether_addr(scan_6ghz_params[j].bssid), 1735 + "scan: invalid BSSID at index %u, index_b=%u\n", 1736 + j, idex_b)) { 1734 1737 memcpy(&pp->bssid_array[idex_b++], 1735 1738 scan_6ghz_params[j].bssid, ETH_ALEN); 1736 1739 } ··· 3322 3319 3323 3320 ret = iwl_mvm_send_cmd_pdu(mvm, 3324 3321 WIDE_ID(IWL_ALWAYS_LONG_GROUP, SCAN_ABORT_UMAC), 3325 - 0, sizeof(cmd), &cmd); 3322 + CMD_SEND_IN_RFKILL, sizeof(cmd), &cmd); 3326 3323 if (!ret) 3327 3324 mvm->scan_uid_status[uid] = type << IWL_MVM_SCAN_STOPPING_SHIFT; 3328 3325 3326 + IWL_DEBUG_SCAN(mvm, "Scan abort: ret=%d\n", ret); 3329 3327 return ret; 3330 3328 } 3331 3329
+8 -4
drivers/net/wireless/intel/iwlwifi/mvm/sta.c
··· 2848 2848 .action = start ? cpu_to_le32(IWL_RX_BAID_ACTION_ADD) : 2849 2849 cpu_to_le32(IWL_RX_BAID_ACTION_REMOVE), 2850 2850 }; 2851 - u32 cmd_id = WIDE_ID(DATA_PATH_GROUP, RX_BAID_ALLOCATION_CONFIG_CMD); 2851 + struct iwl_host_cmd hcmd = { 2852 + .id = WIDE_ID(DATA_PATH_GROUP, RX_BAID_ALLOCATION_CONFIG_CMD), 2853 + .flags = CMD_SEND_IN_RFKILL, 2854 + .len[0] = sizeof(cmd), 2855 + .data[0] = &cmd, 2856 + }; 2852 2857 int ret; 2853 2858 2854 2859 BUILD_BUG_ON(sizeof(struct iwl_rx_baid_cfg_resp) != sizeof(baid)); ··· 2865 2860 cmd.alloc.ssn = cpu_to_le16(ssn); 2866 2861 cmd.alloc.win_size = cpu_to_le16(buf_size); 2867 2862 baid = -EIO; 2868 - } else if (iwl_fw_lookup_cmd_ver(mvm->fw, cmd_id, 1) == 1) { 2863 + } else if (iwl_fw_lookup_cmd_ver(mvm->fw, hcmd.id, 1) == 1) { 2869 2864 cmd.remove_v1.baid = cpu_to_le32(baid); 2870 2865 BUILD_BUG_ON(sizeof(cmd.remove_v1) > sizeof(cmd.remove)); 2871 2866 } else { ··· 2874 2869 cmd.remove.tid = cpu_to_le32(tid); 2875 2870 } 2876 2871 2877 - ret = iwl_mvm_send_cmd_pdu_status(mvm, cmd_id, sizeof(cmd), 2878 - &cmd, &baid); 2872 + ret = iwl_mvm_send_cmd_status(mvm, &hcmd, &baid); 2879 2873 if (ret) 2880 2874 return ret; 2881 2875
+5
drivers/net/wireless/intel/iwlwifi/mvm/sta.h
··· 662 662 struct ieee80211_sta *sta); 663 663 int iwl_mvm_mld_rm_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif, 664 664 struct ieee80211_sta *sta); 665 + void iwl_mvm_mld_free_sta_link(struct iwl_mvm *mvm, 666 + struct iwl_mvm_sta *mvm_sta, 667 + struct iwl_mvm_link_sta *mvm_sta_link, 668 + unsigned int link_id, 669 + bool is_in_fw); 665 670 int iwl_mvm_mld_rm_sta_id(struct iwl_mvm *mvm, u8 sta_id); 666 671 int iwl_mvm_mld_update_sta_links(struct iwl_mvm *mvm, 667 672 struct ieee80211_vif *vif,
+4
drivers/net/wireless/mediatek/mt76/mt7615/main.c
··· 1326 1326 #endif /* CONFIG_PM */ 1327 1327 1328 1328 const struct ieee80211_ops mt7615_ops = { 1329 + .add_chanctx = ieee80211_emulate_add_chanctx, 1330 + .remove_chanctx = ieee80211_emulate_remove_chanctx, 1331 + .change_chanctx = ieee80211_emulate_change_chanctx, 1332 + .switch_vif_chanctx = ieee80211_emulate_switch_vif_chanctx, 1329 1333 .tx = mt7615_tx, 1330 1334 .start = mt7615_start, 1331 1335 .stop = mt7615_stop,
+24 -17
drivers/net/wireless/microchip/wilc1000/cfg80211.c
··· 237 237 struct wilc_vif *vif; 238 238 u32 channelnum; 239 239 int result; 240 + int srcu_idx; 240 241 241 - rcu_read_lock(); 242 + srcu_idx = srcu_read_lock(&wl->srcu); 242 243 vif = wilc_get_wl_to_vif(wl); 243 244 if (IS_ERR(vif)) { 244 - rcu_read_unlock(); 245 + srcu_read_unlock(&wl->srcu, srcu_idx); 245 246 return PTR_ERR(vif); 246 247 } 247 248 ··· 253 252 if (result) 254 253 netdev_err(vif->ndev, "Error in setting channel\n"); 255 254 256 - rcu_read_unlock(); 255 + srcu_read_unlock(&wl->srcu, srcu_idx); 257 256 return result; 258 257 } 259 258 ··· 806 805 struct wilc *wl = wiphy_priv(wiphy); 807 806 struct wilc_vif *vif; 808 807 struct wilc_priv *priv; 808 + int srcu_idx; 809 809 810 - rcu_read_lock(); 810 + srcu_idx = srcu_read_lock(&wl->srcu); 811 811 vif = wilc_get_wl_to_vif(wl); 812 812 if (IS_ERR(vif)) 813 813 goto out; ··· 863 861 netdev_err(priv->dev, "Error in setting WIPHY PARAMS\n"); 864 862 865 863 out: 866 - rcu_read_unlock(); 864 + srcu_read_unlock(&wl->srcu, srcu_idx); 867 865 return ret; 868 866 } 869 867 ··· 1539 1537 1540 1538 if (type == NL80211_IFTYPE_MONITOR) { 1541 1539 struct net_device *ndev; 1540 + int srcu_idx; 1542 1541 1543 - rcu_read_lock(); 1542 + srcu_idx = srcu_read_lock(&wl->srcu); 1544 1543 vif = wilc_get_vif_from_type(wl, WILC_AP_MODE); 1545 1544 if (!vif) { 1546 1545 vif = wilc_get_vif_from_type(wl, WILC_GO_MODE); 1547 1546 if (!vif) { 1548 - rcu_read_unlock(); 1547 + srcu_read_unlock(&wl->srcu, srcu_idx); 1549 1548 goto validate_interface; 1550 1549 } 1551 1550 } 1552 1551 1553 1552 if (vif->monitor_flag) { 1554 - rcu_read_unlock(); 1553 + srcu_read_unlock(&wl->srcu, srcu_idx); 1555 1554 goto validate_interface; 1556 1555 } 1557 1556 ··· 1560 1557 if (ndev) { 1561 1558 vif->monitor_flag = 1; 1562 1559 } else { 1563 - rcu_read_unlock(); 1560 + srcu_read_unlock(&wl->srcu, srcu_idx); 1564 1561 return ERR_PTR(-EINVAL); 1565 1562 } 1566 1563 1567 1564 wdev = &vif->priv.wdev; 1568 - rcu_read_unlock(); 1565 + 
srcu_read_unlock(&wl->srcu, srcu_idx); 1569 1566 return wdev; 1570 1567 } 1571 1568 ··· 1613 1610 list_del_rcu(&vif->list); 1614 1611 wl->vif_num--; 1615 1612 mutex_unlock(&wl->vif_mutex); 1616 - synchronize_rcu(); 1613 + synchronize_srcu(&wl->srcu); 1617 1614 return 0; 1618 1615 } 1619 1616 ··· 1638 1635 { 1639 1636 struct wilc *wl = wiphy_priv(wiphy); 1640 1637 struct wilc_vif *vif; 1638 + int srcu_idx; 1641 1639 1642 - rcu_read_lock(); 1640 + srcu_idx = srcu_read_lock(&wl->srcu); 1643 1641 vif = wilc_get_wl_to_vif(wl); 1644 1642 if (IS_ERR(vif)) { 1645 - rcu_read_unlock(); 1643 + srcu_read_unlock(&wl->srcu, srcu_idx); 1646 1644 return; 1647 1645 } 1648 1646 1649 1647 netdev_info(vif->ndev, "cfg set wake up = %d\n", enabled); 1650 1648 wilc_set_wowlan_trigger(vif, enabled); 1651 - rcu_read_unlock(); 1649 + srcu_read_unlock(&wl->srcu, srcu_idx); 1652 1650 } 1653 1651 1654 1652 static int set_tx_power(struct wiphy *wiphy, struct wireless_dev *wdev, 1655 1653 enum nl80211_tx_power_setting type, int mbm) 1656 1654 { 1657 1655 int ret; 1656 + int srcu_idx; 1658 1657 s32 tx_power = MBM_TO_DBM(mbm); 1659 1658 struct wilc *wl = wiphy_priv(wiphy); 1660 1659 struct wilc_vif *vif; ··· 1664 1659 if (!wl->initialized) 1665 1660 return -EIO; 1666 1661 1667 - rcu_read_lock(); 1662 + srcu_idx = srcu_read_lock(&wl->srcu); 1668 1663 vif = wilc_get_wl_to_vif(wl); 1669 1664 if (IS_ERR(vif)) { 1670 - rcu_read_unlock(); 1665 + srcu_read_unlock(&wl->srcu, srcu_idx); 1671 1666 return -EINVAL; 1672 1667 } 1673 1668 ··· 1679 1674 ret = wilc_set_tx_power(vif, tx_power); 1680 1675 if (ret) 1681 1676 netdev_err(vif->ndev, "Failed to set tx power\n"); 1682 - rcu_read_unlock(); 1677 + srcu_read_unlock(&wl->srcu, srcu_idx); 1683 1678 1684 1679 return ret; 1685 1680 } ··· 1762 1757 init_completion(&wl->cfg_event); 1763 1758 init_completion(&wl->sync_event); 1764 1759 init_completion(&wl->txq_thread_started); 1760 + init_srcu_struct(&wl->srcu); 1765 1761 } 1766 1762 1767 1763 void 
wlan_deinit_locks(struct wilc *wilc) ··· 1773 1767 mutex_destroy(&wilc->txq_add_to_head_cs); 1774 1768 mutex_destroy(&wilc->vif_mutex); 1775 1769 mutex_destroy(&wilc->deinit_lock); 1770 + cleanup_srcu_struct(&wilc->srcu); 1776 1771 } 1777 1772 1778 1773 int wilc_cfg80211_init(struct wilc **wilc, struct device *dev, int io_type,
+10 -7
drivers/net/wireless/microchip/wilc1000/hif.c
··· 1570 1570 struct host_if_drv *hif_drv; 1571 1571 struct host_if_msg *msg; 1572 1572 struct wilc_vif *vif; 1573 + int srcu_idx; 1573 1574 int result; 1574 1575 int id; 1575 1576 1576 1577 id = get_unaligned_le32(&buffer[length - 4]); 1577 - rcu_read_lock(); 1578 + srcu_idx = srcu_read_lock(&wilc->srcu); 1578 1579 vif = wilc_get_vif_from_idx(wilc, id); 1579 1580 if (!vif) 1580 1581 goto out; ··· 1594 1593 msg->body.net_info.rssi = buffer[8]; 1595 1594 msg->body.net_info.mgmt = kmemdup(&buffer[9], 1596 1595 msg->body.net_info.frame_len, 1597 - GFP_ATOMIC); 1596 + GFP_KERNEL); 1598 1597 if (!msg->body.net_info.mgmt) { 1599 1598 kfree(msg); 1600 1599 goto out; ··· 1607 1606 kfree(msg); 1608 1607 } 1609 1608 out: 1610 - rcu_read_unlock(); 1609 + srcu_read_unlock(&wilc->srcu, srcu_idx); 1611 1610 } 1612 1611 1613 1612 void wilc_gnrl_async_info_received(struct wilc *wilc, u8 *buffer, u32 length) ··· 1615 1614 struct host_if_drv *hif_drv; 1616 1615 struct host_if_msg *msg; 1617 1616 struct wilc_vif *vif; 1617 + int srcu_idx; 1618 1618 int result; 1619 1619 int id; 1620 1620 1621 1621 mutex_lock(&wilc->deinit_lock); 1622 1622 1623 1623 id = get_unaligned_le32(&buffer[length - 4]); 1624 - rcu_read_lock(); 1624 + srcu_idx = srcu_read_lock(&wilc->srcu); 1625 1625 vif = wilc_get_vif_from_idx(wilc, id); 1626 1626 if (!vif) 1627 1627 goto out; ··· 1649 1647 kfree(msg); 1650 1648 } 1651 1649 out: 1652 - rcu_read_unlock(); 1650 + srcu_read_unlock(&wilc->srcu, srcu_idx); 1653 1651 mutex_unlock(&wilc->deinit_lock); 1654 1652 } 1655 1653 ··· 1657 1655 { 1658 1656 struct host_if_drv *hif_drv; 1659 1657 struct wilc_vif *vif; 1658 + int srcu_idx; 1660 1659 int result; 1661 1660 int id; 1662 1661 1663 1662 id = get_unaligned_le32(&buffer[length - 4]); 1664 - rcu_read_lock(); 1663 + srcu_idx = srcu_read_lock(&wilc->srcu); 1665 1664 vif = wilc_get_vif_from_idx(wilc, id); 1666 1665 if (!vif) 1667 1666 goto out; ··· 1687 1684 } 1688 1685 } 1689 1686 out: 1690 - rcu_read_unlock(); 1687 + 
srcu_read_unlock(&wilc->srcu, srcu_idx); 1691 1688 } 1692 1689 1693 1690 int wilc_remain_on_channel(struct wilc_vif *vif, u64 cookie, u16 chan,
+25 -18
drivers/net/wireless/microchip/wilc1000/netdev.c
··· 127 127 128 128 int wilc_wlan_get_num_conn_ifcs(struct wilc *wilc) 129 129 { 130 + int srcu_idx; 130 131 u8 ret_val = 0; 131 132 struct wilc_vif *vif; 132 133 133 - rcu_read_lock(); 134 + srcu_idx = srcu_read_lock(&wilc->srcu); 134 135 wilc_for_each_vif(wilc, vif) { 135 136 if (!is_zero_ether_addr(vif->bssid)) 136 137 ret_val++; 137 138 } 138 - rcu_read_unlock(); 139 + srcu_read_unlock(&wilc->srcu, srcu_idx); 139 140 return ret_val; 140 141 } 141 142 142 143 static void wilc_wake_tx_queues(struct wilc *wl) 143 144 { 145 + int srcu_idx; 144 146 struct wilc_vif *ifc; 145 147 146 - rcu_read_lock(); 148 + srcu_idx = srcu_read_lock(&wl->srcu); 147 149 wilc_for_each_vif(wl, ifc) { 148 150 if (ifc->mac_opened && netif_queue_stopped(ifc->ndev)) 149 151 netif_wake_queue(ifc->ndev); 150 152 } 151 - rcu_read_unlock(); 153 + srcu_read_unlock(&wl->srcu, srcu_idx); 152 154 } 153 155 154 156 static int wilc_txq_task(void *vp) ··· 655 653 struct sockaddr *addr = (struct sockaddr *)p; 656 654 unsigned char mac_addr[ETH_ALEN]; 657 655 struct wilc_vif *tmp_vif; 656 + int srcu_idx; 658 657 659 658 if (!is_valid_ether_addr(addr->sa_data)) 660 659 return -EADDRNOTAVAIL; ··· 667 664 668 665 /* Verify MAC Address is not already in use: */ 669 666 670 - rcu_read_lock(); 667 + srcu_idx = srcu_read_lock(&wilc->srcu); 671 668 wilc_for_each_vif(wilc, tmp_vif) { 672 669 wilc_get_mac_address(tmp_vif, mac_addr); 673 670 if (ether_addr_equal(addr->sa_data, mac_addr)) { 674 671 if (vif != tmp_vif) { 675 - rcu_read_unlock(); 672 + srcu_read_unlock(&wilc->srcu, srcu_idx); 676 673 return -EADDRNOTAVAIL; 677 674 } 678 - rcu_read_unlock(); 675 + srcu_read_unlock(&wilc->srcu, srcu_idx); 679 676 return 0; 680 677 } 681 678 } 682 - rcu_read_unlock(); 679 + srcu_read_unlock(&wilc->srcu, srcu_idx); 683 680 684 681 result = wilc_set_mac_address(vif, (u8 *)addr->sa_data); 685 682 if (result) ··· 767 764 wilc_tx_complete); 768 765 769 766 if (queue_count > FLOW_CONTROL_UPPER_THRESHOLD) { 767 + int srcu_idx; 
770 768 struct wilc_vif *vif; 771 769 772 - rcu_read_lock(); 770 + srcu_idx = srcu_read_lock(&wilc->srcu); 773 771 wilc_for_each_vif(wilc, vif) { 774 772 if (vif->mac_opened) 775 773 netif_stop_queue(vif->ndev); 776 774 } 777 - rcu_read_unlock(); 775 + srcu_read_unlock(&wilc->srcu, srcu_idx); 778 776 } 779 777 780 778 return NETDEV_TX_OK; ··· 819 815 unsigned int frame_len = 0; 820 816 struct wilc_vif *vif; 821 817 struct sk_buff *skb; 818 + int srcu_idx; 822 819 int stats; 823 820 824 821 if (!wilc) 825 822 return; 826 823 827 - rcu_read_lock(); 824 + srcu_idx = srcu_read_lock(&wilc->srcu); 828 825 wilc_netdev = get_if_handler(wilc, buff); 829 826 if (!wilc_netdev) 830 827 goto out; ··· 853 848 netdev_dbg(wilc_netdev, "netif_rx ret value is: %d\n", stats); 854 849 } 855 850 out: 856 - rcu_read_unlock(); 851 + srcu_read_unlock(&wilc->srcu, srcu_idx); 857 852 } 858 853 859 854 void wilc_wfi_mgmt_rx(struct wilc *wilc, u8 *buff, u32 size, bool is_auth) 860 855 { 856 + int srcu_idx; 861 857 struct wilc_vif *vif; 862 858 863 - rcu_read_lock(); 859 + srcu_idx = srcu_read_lock(&wilc->srcu); 864 860 wilc_for_each_vif(wilc, vif) { 865 861 struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)buff; 866 862 u16 type = le16_to_cpup((__le16 *)buff); ··· 882 876 if (vif->monitor_flag) 883 877 wilc_wfi_monitor_rx(wilc->monitor_dev, buff, size); 884 878 } 885 - rcu_read_unlock(); 879 + srcu_read_unlock(&wilc->srcu, srcu_idx); 886 880 } 887 881 888 882 static const struct net_device_ops wilc_netdev_ops = { ··· 912 906 list_del_rcu(&vif->list); 913 907 wilc->vif_num--; 914 908 mutex_unlock(&wilc->vif_mutex); 915 - synchronize_rcu(); 909 + synchronize_srcu(&wilc->srcu); 916 910 if (vif->ndev) 917 911 unregister_netdev(vif->ndev); 918 912 } ··· 931 925 { 932 926 int idx = 0; 933 927 struct wilc_vif *vif; 928 + int srcu_idx; 934 929 935 - rcu_read_lock(); 930 + srcu_idx = srcu_read_lock(&wl->srcu); 936 931 wilc_for_each_vif(wl, vif) { 937 932 if (vif->idx == 0) 938 933 idx = 1; 939 
934 else 940 935 idx = 0; 941 936 } 942 - rcu_read_unlock(); 937 + srcu_read_unlock(&wl->srcu, srcu_idx); 943 938 return idx; 944 939 } 945 940 ··· 990 983 list_add_tail_rcu(&vif->list, &wl->vif_list); 991 984 wl->vif_num += 1; 992 985 mutex_unlock(&wl->vif_mutex); 993 - synchronize_rcu(); 986 + synchronize_srcu(&wl->srcu); 994 987 995 988 return vif; 996 989
+10 -2
drivers/net/wireless/microchip/wilc1000/netdev.h
··· 32 32 33 33 #define wilc_for_each_vif(w, v) \ 34 34 struct wilc *_w = w; \ 35 - list_for_each_entry_rcu(v, &_w->vif_list, list, \ 36 - rcu_read_lock_held()) 35 + list_for_each_entry_srcu(v, &_w->vif_list, list, \ 36 + srcu_read_lock_held(&_w->srcu)) 37 37 38 38 struct wilc_wfi_stats { 39 39 unsigned long rx_packets; ··· 220 220 221 221 /* protect vif list */ 222 222 struct mutex vif_mutex; 223 + /* Sleepable RCU struct to manipulate vif list. Sleepable version is 224 + * needed over the classic RCU version because the driver's current 225 + * design involves some sleeping code while manipulating a vif 226 + * retrieved from vif list (so in a SRCU critical section), like: 227 + * - sending commands to the chip, using info from retrieved vif 228 + * - registering a new monitoring net device 229 + */ 230 + struct srcu_struct srcu; 223 231 u8 open_ifcs; 224 232 225 233 /* protect head of transmit queue */
+3 -2
drivers/net/wireless/microchip/wilc1000/wlan.c
··· 712 712 u32 *vmm_table = wilc->vmm_table; 713 713 u8 ac_pkt_num_to_chip[NQUEUES] = {0, 0, 0, 0}; 714 714 const struct wilc_hif_func *func; 715 + int srcu_idx; 715 716 u8 *txb = wilc->tx_buffer; 716 717 struct wilc_vif *vif; 717 718 ··· 724 723 725 724 mutex_lock(&wilc->txq_add_to_head_cs); 726 725 727 - rcu_read_lock(); 726 + srcu_idx = srcu_read_lock(&wilc->srcu); 728 727 wilc_for_each_vif(wilc, vif) 729 728 wilc_wlan_txq_filter_dup_tcp_ack(vif->ndev); 730 - rcu_read_unlock(); 729 + srcu_read_unlock(&wilc->srcu, srcu_idx); 731 730 732 731 for (ac = 0; ac < NQUEUES; ac++) 733 732 tqe_q[ac] = wilc_wlan_txq_get_first(wilc, ac);
-15
drivers/net/wireless/realtek/rtlwifi/core.c
··· 633 633 } 634 634 } 635 635 636 - if (changed & IEEE80211_CONF_CHANGE_RETRY_LIMITS) { 637 - rtl_dbg(rtlpriv, COMP_MAC80211, DBG_LOUD, 638 - "IEEE80211_CONF_CHANGE_RETRY_LIMITS %x\n", 639 - hw->conf.long_frame_max_tx_count); 640 - /* brought up everything changes (changed == ~0) indicates first 641 - * open, so use our default value instead of that of wiphy. 642 - */ 643 - if (changed != ~0) { 644 - mac->retry_long = hw->conf.long_frame_max_tx_count; 645 - mac->retry_short = hw->conf.long_frame_max_tx_count; 646 - rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_RETRY_LIMIT, 647 - (u8 *)(&hw->conf.long_frame_max_tx_count)); 648 - } 649 - } 650 - 651 636 if (changed & IEEE80211_CONF_CHANGE_CHANNEL && 652 637 !rtlpriv->proximity.proxim_on) { 653 638 struct ieee80211_channel *channel = hw->conf.chandef.chan;
+1 -1
drivers/net/wwan/iosm/iosm_ipc_devlink.c
··· 211 211 rc = PTR_ERR(devlink->cd_regions[i]); 212 212 dev_err(devlink->dev, "Devlink region fail,err %d", rc); 213 213 /* Delete previously created regions */ 214 - for ( ; i >= 0; i--) 214 + for (i--; i >= 0; i--) 215 215 devlink_region_destroy(devlink->cd_regions[i]); 216 216 goto region_create_fail; 217 217 }
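The one-line iosm fix above matters because at the point of failure `cd_regions[i]` holds an error pointer, not a created region, so the unwind must start at `i - 1`; the old `for ( ; i >= 0; i--)` would also "destroy" the slot that was never created. A small userspace sketch of the same unwind pattern (names and the failure simulation are illustrative):

```c
#include <assert.h>

#define N 4

/* Simulate partial setup that fails at index `fail_at`. On failure,
 * tear down only the entries that were actually created: start the
 * unwind at i - 1, because slot i itself was never set up. */
static int setup_all(int created[N], int fail_at)
{
    int i;

    for (i = 0; i < N; i++) {
        if (i == fail_at) {
            /* unwind previously created entries only */
            for (i--; i >= 0; i--)
                created[i] = 0;
            return -1;
        }
        created[i] = 1;
    }
    return 0;
}
```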
+3 -3
drivers/nvme/host/fabrics.c
··· 180 180 cmd.prop_get.offset = cpu_to_le32(off); 181 181 182 182 ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res, NULL, 0, 183 - NVME_QID_ANY, 0); 183 + NVME_QID_ANY, NVME_SUBMIT_RESERVED); 184 184 185 185 if (ret >= 0) 186 186 *val = le64_to_cpu(res.u64); ··· 226 226 cmd.prop_get.offset = cpu_to_le32(off); 227 227 228 228 ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res, NULL, 0, 229 - NVME_QID_ANY, 0); 229 + NVME_QID_ANY, NVME_SUBMIT_RESERVED); 230 230 231 231 if (ret >= 0) 232 232 *val = le64_to_cpu(res.u64); ··· 271 271 cmd.prop_set.value = cpu_to_le64(val); 272 272 273 273 ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, NULL, NULL, 0, 274 - NVME_QID_ANY, 0); 274 + NVME_QID_ANY, NVME_SUBMIT_RESERVED); 275 275 if (unlikely(ret)) 276 276 dev_err(ctrl->device, 277 277 "Property Set error: %d, offset %#x\n",
+1 -1
drivers/nvme/host/pr.c
··· 77 77 if (nvme_is_path_error(nvme_sc)) 78 78 return PR_STS_PATH_FAILED; 79 79 80 - switch (nvme_sc) { 80 + switch (nvme_sc & 0x7ff) { 81 81 case NVME_SC_SUCCESS: 82 82 return PR_STS_SUCCESS; 83 83 case NVME_SC_RESERVATION_CONFLICT:
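Masking with `0x7ff` in the pr.c hunk strips flag bits such as DNR ("do not retry") from the NVMe status value, so a completion that carries those bits still matches the bare status-code `case` labels. A hedged sketch of the idea (`0x4000` for DNR and `0x83` for reservation conflict match the kernel's definitions, but the helper itself is illustrative):

```c
#include <assert.h>

#define SC_DNR                  0x4000 /* "do not retry" flag bit */
#define SC_MASK                 0x7ff  /* status-code-type + status-code */
#define SC_RESERVATION_CONFLICT 0x83

/* Return the bare status code, ignoring retry/flag bits carried in
 * the upper bits of the status value. */
static int status_code(int sc)
{
    return sc & SC_MASK;
}
```

Without the mask, `0x83 | 0x4000` would fall through to the `default:` branch even though the underlying error is a reservation conflict.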
+74 -51
drivers/of/irq.c
··· 25 25 #include <linux/string.h> 26 26 #include <linux/slab.h> 27 27 28 + #include "of_private.h" 29 + 28 30 /** 29 31 * irq_of_parse_and_map - Parse and map an interrupt into linux virq space 30 32 * @dev: Device node of the device whose interrupt is to be mapped ··· 98 96 NULL, 99 97 }; 100 98 99 + const __be32 *of_irq_parse_imap_parent(const __be32 *imap, int len, struct of_phandle_args *out_irq) 100 + { 101 + u32 intsize, addrsize; 102 + struct device_node *np; 103 + 104 + /* Get the interrupt parent */ 105 + if (of_irq_workarounds & OF_IMAP_NO_PHANDLE) 106 + np = of_node_get(of_irq_dflt_pic); 107 + else 108 + np = of_find_node_by_phandle(be32_to_cpup(imap)); 109 + imap++; 110 + 111 + /* Check if not found */ 112 + if (!np) { 113 + pr_debug(" -> imap parent not found !\n"); 114 + return NULL; 115 + } 116 + 117 + /* Get #interrupt-cells and #address-cells of new parent */ 118 + if (of_property_read_u32(np, "#interrupt-cells", 119 + &intsize)) { 120 + pr_debug(" -> parent lacks #interrupt-cells!\n"); 121 + of_node_put(np); 122 + return NULL; 123 + } 124 + if (of_property_read_u32(np, "#address-cells", 125 + &addrsize)) 126 + addrsize = 0; 127 + 128 + pr_debug(" -> intsize=%d, addrsize=%d\n", 129 + intsize, addrsize); 130 + 131 + /* Check for malformed properties */ 132 + if (WARN_ON(addrsize + intsize > MAX_PHANDLE_ARGS) 133 + || (len < (addrsize + intsize))) { 134 + of_node_put(np); 135 + return NULL; 136 + } 137 + 138 + pr_debug(" -> imaplen=%d\n", len); 139 + 140 + imap += addrsize + intsize; 141 + 142 + out_irq->np = np; 143 + for (int i = 0; i < intsize; i++) 144 + out_irq->args[i] = be32_to_cpup(imap - intsize + i); 145 + out_irq->args_count = intsize; 146 + 147 + return imap; 148 + } 149 + 101 150 /** 102 151 * of_irq_parse_raw - Low level interrupt tree parsing 103 152 * @addr: address specifier (start of "reg" property of the device) in be32 format ··· 165 112 */ 166 113 int of_irq_parse_raw(const __be32 *addr, struct of_phandle_args *out_irq) 167 114 
{ 168 - struct device_node *ipar, *tnode, *old = NULL, *newpar = NULL; 115 + struct device_node *ipar, *tnode, *old = NULL; 169 116 __be32 initial_match_array[MAX_PHANDLE_ARGS]; 170 117 const __be32 *match_array = initial_match_array; 171 - const __be32 *tmp, *imap, *imask, dummy_imask[] = { [0 ... MAX_PHANDLE_ARGS] = cpu_to_be32(~0) }; 172 - u32 intsize = 1, addrsize, newintsize = 0, newaddrsize = 0; 173 - int imaplen, match, i, rc = -EINVAL; 118 + const __be32 *tmp, dummy_imask[] = { [0 ... MAX_PHANDLE_ARGS] = cpu_to_be32(~0) }; 119 + u32 intsize = 1, addrsize; 120 + int i, rc = -EINVAL; 174 121 175 122 #ifdef DEBUG 176 123 of_print_phandle_args("of_irq_parse_raw: ", out_irq); ··· 229 176 230 177 /* Now start the actual "proper" walk of the interrupt tree */ 231 178 while (ipar != NULL) { 179 + int imaplen, match; 180 + const __be32 *imap, *oldimap, *imask; 181 + struct device_node *newpar; 232 182 /* 233 183 * Now check if cursor is an interrupt-controller and 234 184 * if it is then we are done, unless there is an ··· 272 216 273 217 /* Parse interrupt-map */ 274 218 match = 0; 275 - while (imaplen > (addrsize + intsize + 1) && !match) { 219 + while (imaplen > (addrsize + intsize + 1)) { 276 220 /* Compare specifiers */ 277 221 match = 1; 278 222 for (i = 0; i < (addrsize + intsize); i++, imaplen--) ··· 280 224 281 225 pr_debug(" -> match=%d (imaplen=%d)\n", match, imaplen); 282 226 283 - /* Get the interrupt parent */ 284 - if (of_irq_workarounds & OF_IMAP_NO_PHANDLE) 285 - newpar = of_node_get(of_irq_dflt_pic); 286 - else 287 - newpar = of_find_node_by_phandle(be32_to_cpup(imap)); 288 - imap++; 289 - --imaplen; 290 - 291 - /* Check if not found */ 292 - if (newpar == NULL) { 293 - pr_debug(" -> imap parent not found !\n"); 227 + oldimap = imap; 228 + imap = of_irq_parse_imap_parent(oldimap, imaplen, out_irq); 229 + if (!imap) 294 230 goto fail; 295 - } 296 231 297 - if (!of_device_is_available(newpar)) 298 - match = 0; 232 + match &= 
of_device_is_available(out_irq->np); 233 + if (match) 234 + break; 299 235 300 - /* Get #interrupt-cells and #address-cells of new 301 - * parent 302 - */ 303 - if (of_property_read_u32(newpar, "#interrupt-cells", 304 - &newintsize)) { 305 - pr_debug(" -> parent lacks #interrupt-cells!\n"); 306 - goto fail; 307 - } 308 - if (of_property_read_u32(newpar, "#address-cells", 309 - &newaddrsize)) 310 - newaddrsize = 0; 311 - 312 - pr_debug(" -> newintsize=%d, newaddrsize=%d\n", 313 - newintsize, newaddrsize); 314 - 315 - /* Check for malformed properties */ 316 - if (WARN_ON(newaddrsize + newintsize > MAX_PHANDLE_ARGS) 317 - || (imaplen < (newaddrsize + newintsize))) { 318 - rc = -EFAULT; 319 - goto fail; 320 - } 321 - 322 - imap += newaddrsize + newintsize; 323 - imaplen -= newaddrsize + newintsize; 324 - 236 + of_node_put(out_irq->np); 237 + imaplen -= imap - oldimap; 325 238 pr_debug(" -> imaplen=%d\n", imaplen); 326 239 } 327 240 if (!match) { ··· 312 287 * Successfully parsed an interrupt-map translation; copy new 313 288 * interrupt specifier into the out_irq structure 314 289 */ 315 - match_array = imap - newaddrsize - newintsize; 316 - for (i = 0; i < newintsize; i++) 317 - out_irq->args[i] = be32_to_cpup(imap - newintsize + i); 318 - out_irq->args_count = intsize = newintsize; 319 - addrsize = newaddrsize; 290 + match_array = oldimap + 1; 291 + 292 + newpar = out_irq->np; 293 + intsize = out_irq->args_count; 294 + addrsize = (imap - match_array) - intsize; 320 295 321 296 if (ipar == newpar) { 322 297 pr_debug("%pOF interrupt-map entry to self\n", ipar); ··· 325 300 326 301 skiplevel: 327 302 /* Iterate again with new parent */ 328 - out_irq->np = newpar; 329 303 pr_debug(" -> new parent: %pOF\n", newpar); 330 304 of_node_put(ipar); 331 305 ipar = newpar; ··· 334 310 335 311 fail: 336 312 of_node_put(ipar); 337 - of_node_put(newpar); 338 313 339 314 return rc; 340 315 }
+3
drivers/of/of_private.h
··· 159 159 extern int of_bus_n_addr_cells(struct device_node *np); 160 160 extern int of_bus_n_size_cells(struct device_node *np); 161 161 162 + const __be32 *of_irq_parse_imap_parent(const __be32 *imap, int len, 163 + struct of_phandle_args *out_irq); 164 + 162 165 struct bus_dma_region; 163 166 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA) 164 167 int of_dma_get_range(struct device_node *np,
+1
drivers/of/of_test.c
··· 54 54 kunit_test_suites( 55 55 &of_dtb_suite, 56 56 ); 57 + MODULE_DESCRIPTION("KUnit tests for OF APIs"); 57 58 MODULE_LICENSE("GPL");
+10 -20
drivers/of/property.c
··· 1306 1306 static struct device_node *parse_interrupt_map(struct device_node *np, 1307 1307 const char *prop_name, int index) 1308 1308 { 1309 - const __be32 *imap, *imap_end, *addr; 1309 + const __be32 *imap, *imap_end; 1310 1310 struct of_phandle_args sup_args; 1311 1311 u32 addrcells, intcells; 1312 - int i, imaplen; 1312 + int imaplen; 1313 1313 1314 1314 if (!IS_ENABLED(CONFIG_OF_IRQ)) 1315 1315 return NULL; ··· 1322 1322 addrcells = of_bus_n_addr_cells(np); 1323 1323 1324 1324 imap = of_get_property(np, "interrupt-map", &imaplen); 1325 - if (!imap || imaplen <= (addrcells + intcells)) 1325 + imaplen /= sizeof(*imap); 1326 + if (!imap) 1326 1327 return NULL; 1328 + 1327 1329 imap_end = imap + imaplen; 1328 1330 1329 - while (imap < imap_end) { 1330 - addr = imap; 1331 - imap += addrcells; 1331 + for (int i = 0; imap + addrcells + intcells + 1 < imap_end; i++) { 1332 + imap += addrcells + intcells; 1332 1333 1333 - sup_args.np = np; 1334 - sup_args.args_count = intcells; 1335 - for (i = 0; i < intcells; i++) 1336 - sup_args.args[i] = be32_to_cpu(imap[i]); 1337 - imap += intcells; 1338 - 1339 - /* 1340 - * Upon success, the function of_irq_parse_raw() returns 1341 - * interrupt controller DT node pointer in sup_args.np. 1342 - */ 1343 - if (of_irq_parse_raw(addr, &sup_args)) 1334 + imap = of_irq_parse_imap_parent(imap, imap_end - imap, &sup_args); 1335 + if (!imap) 1344 1336 return NULL; 1345 1337 1346 - if (!index) 1338 + if (i == index) 1347 1339 return sup_args.np; 1348 1340 1349 1341 of_node_put(sup_args.np); 1350 - imap += sup_args.args_count + 1; 1351 - index--; 1352 1342 } 1353 1343 1354 1344 return NULL;
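The reworked `parse_interrupt_map()` loop above walks `interrupt-map` as a flat cell array, advancing a cursor through a helper that both validates one entry and returns the position just past it (or NULL). That cursor-returning-parser shape can be sketched in userspace C (the record layout here is simplified to `[len, payload...]`; all names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Parse one [len, payload...] record; return a pointer just past it,
 * or NULL on an empty/truncated record. This mirrors how
 * of_irq_parse_imap_parent() validates an entry and advances imap. */
static const int *parse_entry(const int *p, const int *end, int *out_first)
{
    if (p >= end || p[0] < 1 || p + 1 + p[0] > end)
        return NULL;
    *out_first = p[1];
    return p + 1 + p[0];
}

/* Return the first payload cell of record `index`, or -1 if absent. */
static int nth_entry_first(const int *tbl, int len, int index)
{
    const int *p = tbl, *end = tbl + len;
    int v;

    for (int i = 0; (p = parse_entry(p, end, &v)) != NULL; i++)
        if (i == index)
            return v;
    return -1;
}
```

The payoff, as in the hunk, is that the caller no longer tracks entry sizes itself: the helper owns validation and advancement, and a NULL return cleanly terminates the walk.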
-4
drivers/pci/access.c
··· 289 289 { 290 290 might_sleep(); 291 291 292 - lock_map_acquire(&dev->cfg_access_lock); 293 - 294 292 raw_spin_lock_irq(&pci_lock); 295 293 if (dev->block_cfg_access) 296 294 pci_wait_cfg(dev); ··· 343 345 raw_spin_unlock_irqrestore(&pci_lock, flags); 344 346 345 347 wake_up_all(&pci_cfg_wait); 346 - 347 - lock_map_release(&dev->cfg_access_lock); 348 348 } 349 349 EXPORT_SYMBOL_GPL(pci_cfg_access_unlock); 350 350
-1
drivers/pci/pci.c
··· 4883 4883 */ 4884 4884 int pci_bridge_secondary_bus_reset(struct pci_dev *dev) 4885 4885 { 4886 - lock_map_assert_held(&dev->cfg_access_lock); 4887 4886 pcibios_reset_secondary_bus(dev); 4888 4887 4889 4888 return pci_bridge_wait_for_secondary_bus(dev, "bus reset");
-3
drivers/pci/probe.c
··· 2546 2546 dev->dev.dma_mask = &dev->dma_mask; 2547 2547 dev->dev.dma_parms = &dev->dma_parms; 2548 2548 dev->dev.coherent_dma_mask = 0xffffffffull; 2549 - lockdep_register_key(&dev->cfg_access_key); 2550 - lockdep_init_map(&dev->cfg_access_lock, dev_name(&dev->dev), 2551 - &dev->cfg_access_key, 0); 2552 2549 2553 2550 dma_set_max_seg_size(&dev->dev, 65536); 2554 2551 dma_set_seg_boundary(&dev->dev, 0xffffffff);
+1
drivers/platform/x86/Kconfig
··· 136 136 config YT2_1380 137 137 tristate "Lenovo Yoga Tablet 2 1380 fast charge driver" 138 138 depends on SERIAL_DEV_BUS 139 + depends on EXTCON 139 140 depends on ACPI 140 141 help 141 142 Say Y here to enable support for the custom fast charging protocol
+43 -7
drivers/platform/x86/amd/hsmp.c
··· 907 907 return ret; 908 908 } 909 909 910 + /* 911 + * This check is only needed for backward compatibility of previous platforms. 912 + * All new platforms are expected to support ACPI based probing. 913 + */ 914 + static bool legacy_hsmp_support(void) 915 + { 916 + if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) 917 + return false; 918 + 919 + switch (boot_cpu_data.x86) { 920 + case 0x19: 921 + switch (boot_cpu_data.x86_model) { 922 + case 0x00 ... 0x1F: 923 + case 0x30 ... 0x3F: 924 + case 0x90 ... 0x9F: 925 + case 0xA0 ... 0xAF: 926 + return true; 927 + default: 928 + return false; 929 + } 930 + case 0x1A: 931 + switch (boot_cpu_data.x86_model) { 932 + case 0x00 ... 0x1F: 933 + return true; 934 + default: 935 + return false; 936 + } 937 + default: 938 + return false; 939 + } 940 + 941 + return false; 942 + } 943 + 910 944 static int __init hsmp_plt_init(void) 911 945 { 912 946 int ret = -ENODEV; 913 - 914 - if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD || boot_cpu_data.x86 < 0x19) { 915 - pr_err("HSMP is not supported on Family:%x model:%x\n", 916 - boot_cpu_data.x86, boot_cpu_data.x86_model); 917 - return ret; 918 - } 919 947 920 948 /* 921 949 * amd_nb_num() returns number of SMN/DF interfaces present in the system ··· 958 930 return ret; 959 931 960 932 if (!plat_dev.is_acpi_device) { 961 - ret = hsmp_plat_dev_register(); 933 + if (legacy_hsmp_support()) { 934 + /* Not ACPI device, but supports HSMP, register a plat_dev */ 935 + ret = hsmp_plat_dev_register(); 936 + } else { 937 + /* Not ACPI, Does not support HSMP */ 938 + pr_info("HSMP is not supported on Family:%x model:%x\n", 939 + boot_cpu_data.x86, boot_cpu_data.x86_model); 940 + ret = -ENODEV; 941 + } 962 942 if (ret) 963 943 platform_driver_unregister(&amd_hsmp_driver); 964 944 }
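`legacy_hsmp_support()` above uses GNU C case ranges (`0x00 ... 0x1F`) to match family/model windows instead of chained comparisons. A compact userspace sketch of the same dispatch shape (the family/model ranges are copied from the hunk; the helper name is illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Case-range dispatch on (family, model), mirroring the hunk above.
 * Case ranges are a GNU C extension, widely used in kernel code. */
static bool legacy_supported(unsigned int family, unsigned int model)
{
    switch (family) {
    case 0x19:
        switch (model) {
        case 0x00 ... 0x1F:
        case 0x30 ... 0x3F:
        case 0x90 ... 0x9F:
        case 0xA0 ... 0xAF:
            return true;
        default:
            return false;
        }
    case 0x1A:
        return model <= 0x1F;
    default:
        return false;
    }
}
```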
+39 -62
drivers/platform/x86/dell/dell-smbios-base.c
··· 11 11 */ 12 12 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 13 13 14 + #include <linux/container_of.h> 14 15 #include <linux/kernel.h> 15 16 #include <linux/module.h> 16 17 #include <linux/capability.h> ··· 26 25 static int da_num_tokens; 27 26 static struct platform_device *platform_device; 28 27 static struct calling_interface_token *da_tokens; 29 - static struct device_attribute *token_location_attrs; 30 - static struct device_attribute *token_value_attrs; 28 + static struct token_sysfs_data *token_entries; 31 29 static struct attribute **token_attrs; 32 30 static DEFINE_MUTEX(smbios_mutex); 31 + 32 + struct token_sysfs_data { 33 + struct device_attribute location_attr; 34 + struct device_attribute value_attr; 35 + struct calling_interface_token *token; 36 + }; 33 37 34 38 struct smbios_device { 35 39 struct list_head list; ··· 422 416 } 423 417 } 424 418 425 - static int match_attribute(struct device *dev, 426 - struct device_attribute *attr) 427 - { 428 - int i; 429 - 430 - for (i = 0; i < da_num_tokens * 2; i++) { 431 - if (!token_attrs[i]) 432 - continue; 433 - if (strcmp(token_attrs[i]->name, attr->attr.name) == 0) 434 - return i/2; 435 - } 436 - dev_dbg(dev, "couldn't match: %s\n", attr->attr.name); 437 - return -EINVAL; 438 - } 439 - 440 419 static ssize_t location_show(struct device *dev, 441 420 struct device_attribute *attr, char *buf) 442 421 { 443 - int i; 422 + struct token_sysfs_data *data = container_of(attr, struct token_sysfs_data, location_attr); 444 423 445 424 if (!capable(CAP_SYS_ADMIN)) 446 425 return -EPERM; 447 426 448 - i = match_attribute(dev, attr); 449 - if (i > 0) 450 - return sysfs_emit(buf, "%08x", da_tokens[i].location); 451 - return 0; 427 + return sysfs_emit(buf, "%08x", data->token->location); 452 428 } 453 429 454 430 static ssize_t value_show(struct device *dev, 455 431 struct device_attribute *attr, char *buf) 456 432 { 457 - int i; 433 + struct token_sysfs_data *data = container_of(attr, struct token_sysfs_data, 
value_attr); 458 434 459 435 if (!capable(CAP_SYS_ADMIN)) 460 436 return -EPERM; 461 437 462 - i = match_attribute(dev, attr); 463 - if (i > 0) 464 - return sysfs_emit(buf, "%08x", da_tokens[i].value); 465 - return 0; 438 + return sysfs_emit(buf, "%08x", data->token->value); 466 439 } 467 440 468 441 static struct attribute_group smbios_attribute_group = { ··· 458 473 { 459 474 char *location_name; 460 475 char *value_name; 461 - size_t size; 462 476 int ret; 463 477 int i, j; 464 478 465 - /* (number of tokens + 1 for null terminated */ 466 - size = sizeof(struct device_attribute) * (da_num_tokens + 1); 467 - token_location_attrs = kzalloc(size, GFP_KERNEL); 468 - if (!token_location_attrs) 479 + token_entries = kcalloc(da_num_tokens, sizeof(*token_entries), GFP_KERNEL); 480 + if (!token_entries) 469 481 return -ENOMEM; 470 - token_value_attrs = kzalloc(size, GFP_KERNEL); 471 - if (!token_value_attrs) 472 - goto out_allocate_value; 473 482 474 483 /* need to store both location and value + terminator*/ 475 - size = sizeof(struct attribute *) * ((2 * da_num_tokens) + 1); 476 - token_attrs = kzalloc(size, GFP_KERNEL); 484 + token_attrs = kcalloc((2 * da_num_tokens) + 1, sizeof(*token_attrs), GFP_KERNEL); 477 485 if (!token_attrs) 478 486 goto out_allocate_attrs; 479 487 ··· 474 496 /* skip empty */ 475 497 if (da_tokens[i].tokenID == 0) 476 498 continue; 499 + 500 + token_entries[i].token = &da_tokens[i]; 501 + 477 502 /* add location */ 478 503 location_name = kasprintf(GFP_KERNEL, "%04x_location", 479 504 da_tokens[i].tokenID); 480 505 if (location_name == NULL) 481 506 goto out_unwind_strings; 482 - sysfs_attr_init(&token_location_attrs[i].attr); 483 - token_location_attrs[i].attr.name = location_name; 484 - token_location_attrs[i].attr.mode = 0444; 485 - token_location_attrs[i].show = location_show; 486 - token_attrs[j++] = &token_location_attrs[i].attr; 507 + 508 + sysfs_attr_init(&token_entries[i].location_attr.attr); 509 + 
token_entries[i].location_attr.attr.name = location_name; 510 + token_entries[i].location_attr.attr.mode = 0444; 511 + token_entries[i].location_attr.show = location_show; 512 + token_attrs[j++] = &token_entries[i].location_attr.attr; 487 513 488 514 /* add value */ 489 515 value_name = kasprintf(GFP_KERNEL, "%04x_value", 490 516 da_tokens[i].tokenID); 491 - if (value_name == NULL) 492 - goto loop_fail_create_value; 493 - sysfs_attr_init(&token_value_attrs[i].attr); 494 - token_value_attrs[i].attr.name = value_name; 495 - token_value_attrs[i].attr.mode = 0444; 496 - token_value_attrs[i].show = value_show; 497 - token_attrs[j++] = &token_value_attrs[i].attr; 498 - continue; 517 + if (!value_name) { 518 + kfree(location_name); 519 + goto out_unwind_strings; 520 + } 499 521 500 - loop_fail_create_value: 501 - kfree(location_name); 502 - goto out_unwind_strings; 522 + sysfs_attr_init(&token_entries[i].value_attr.attr); 523 + token_entries[i].value_attr.attr.name = value_name; 524 + token_entries[i].value_attr.attr.mode = 0444; 525 + token_entries[i].value_attr.show = value_show; 526 + token_attrs[j++] = &token_entries[i].value_attr.attr; 503 527 } 504 528 smbios_attribute_group.attrs = token_attrs; 505 529 ··· 512 532 513 533 out_unwind_strings: 514 534 while (i--) { 515 - kfree(token_location_attrs[i].attr.name); 516 - kfree(token_value_attrs[i].attr.name); 535 + kfree(token_entries[i].location_attr.attr.name); 536 + kfree(token_entries[i].value_attr.attr.name); 517 537 } 518 538 kfree(token_attrs); 519 539 out_allocate_attrs: 520 - kfree(token_value_attrs); 521 - out_allocate_value: 522 - kfree(token_location_attrs); 540 + kfree(token_entries); 523 541 524 542 return -ENOMEM; 525 543 } ··· 529 551 sysfs_remove_group(&pdev->dev.kobj, 530 552 &smbios_attribute_group); 531 553 for (i = 0; i < da_num_tokens; i++) { 532 - kfree(token_location_attrs[i].attr.name); 533 - kfree(token_value_attrs[i].attr.name); 554 + kfree(token_entries[i].location_attr.attr.name); 555 + 
kfree(token_entries[i].value_attr.attr.name); 534 556 } 535 557 kfree(token_attrs); 536 - kfree(token_value_attrs); 537 - kfree(token_location_attrs); 558 + kfree(token_entries); 538 559 } 539 560 540 561 static int __init dell_smbios_init(void)
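The dell-smbios rewrite above replaces a linear name search (`match_attribute()`) with `container_of()`: because each `device_attribute` is embedded in a `struct token_sysfs_data`, the attribute pointer passed to the `show()` callback locates its owner in O(1). A self-contained userspace sketch (the struct shape mirrors the hunk; `container_of` here is the usual `offsetof`-based definition):

```c
#include <assert.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct attr {
    const char *name;
};

/* Two attributes embedded in one entry; either pointer can recover
 * the enclosing entry without any lookup table. */
struct token_entry {
    struct attr location_attr;
    struct attr value_attr;
    int token;
};

static int token_from_attr(struct attr *a, int is_value)
{
    struct token_entry *e = is_value
        ? container_of(a, struct token_entry, value_attr)
        : container_of(a, struct token_entry, location_attr);
    return e->token;
}
```

This is why the patch can drop `match_attribute()` and the parallel `token_location_attrs`/`token_value_attrs` arrays entirely: the embedding itself encodes the association.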
+1 -58
drivers/platform/x86/touchscreen_dmi.c
··· 34 34 PROPERTY_ENTRY_U32("touchscreen-size-y", 1280), 35 35 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 36 36 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 37 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 38 37 PROPERTY_ENTRY_BOOL("silead,home-button"), 39 38 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-archos-101-cesium-educ.fw"), 40 39 { } ··· 48 49 PROPERTY_ENTRY_U32("touchscreen-size-x", 1850), 49 50 PROPERTY_ENTRY_U32("touchscreen-size-y", 1280), 50 51 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 51 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 52 52 PROPERTY_ENTRY_BOOL("silead,home-button"), 53 53 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-bush-bush-windows-tablet.fw"), 54 54 { } ··· 77 79 PROPERTY_ENTRY_U32("touchscreen-size-y", 1148), 78 80 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 79 81 PROPERTY_ENTRY_STRING("firmware-name", "gsl3676-chuwi-hi8-air.fw"), 80 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 81 82 { } 82 83 }; 83 84 ··· 92 95 PROPERTY_ENTRY_U32("touchscreen-size-y", 1148), 93 96 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 94 97 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-chuwi-hi8-pro.fw"), 95 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 96 98 PROPERTY_ENTRY_BOOL("silead,home-button"), 97 99 { } 98 100 }; ··· 119 123 PROPERTY_ENTRY_U32("touchscreen-fuzz-x", 5), 120 124 PROPERTY_ENTRY_U32("touchscreen-fuzz-y", 4), 121 125 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hi10-air.fw"), 122 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 123 126 PROPERTY_ENTRY_BOOL("silead,home-button"), 124 127 { } 125 128 }; ··· 134 139 PROPERTY_ENTRY_U32("touchscreen-size-x", 1908), 135 140 PROPERTY_ENTRY_U32("touchscreen-size-y", 1270), 136 141 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hi10plus.fw"), 137 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 138 142 PROPERTY_ENTRY_BOOL("silead,home-button"), 139 143 PROPERTY_ENTRY_BOOL("silead,pen-supported"), 140 144 
PROPERTY_ENTRY_U32("silead,pen-resolution-x", 8), ··· 165 171 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 166 172 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hi10-pro.fw"), 167 173 PROPERTY_ENTRY_U32_ARRAY("silead,efi-fw-min-max", chuwi_hi10_pro_efi_min_max), 168 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 169 174 PROPERTY_ENTRY_BOOL("silead,home-button"), 170 175 PROPERTY_ENTRY_BOOL("silead,pen-supported"), 171 176 PROPERTY_ENTRY_U32("silead,pen-resolution-x", 8), ··· 194 201 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 195 202 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 196 203 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hibook.fw"), 197 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 198 204 PROPERTY_ENTRY_BOOL("silead,home-button"), 199 205 { } 200 206 }; ··· 219 227 PROPERTY_ENTRY_U32("touchscreen-size-y", 1140), 220 228 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 221 229 PROPERTY_ENTRY_STRING("firmware-name", "gsl3676-chuwi-vi8.fw"), 222 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 223 230 PROPERTY_ENTRY_BOOL("silead,home-button"), 224 231 { } 225 232 }; ··· 246 255 PROPERTY_ENTRY_U32("touchscreen-size-x", 1858), 247 256 PROPERTY_ENTRY_U32("touchscreen-size-y", 1280), 248 257 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-chuwi-vi10.fw"), 249 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 250 258 PROPERTY_ENTRY_BOOL("silead,home-button"), 251 259 { } 252 260 }; ··· 261 271 PROPERTY_ENTRY_U32("touchscreen-size-x", 2040), 262 272 PROPERTY_ENTRY_U32("touchscreen-size-y", 1524), 263 273 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-surbook-mini.fw"), 264 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 265 274 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 266 275 { } 267 276 }; ··· 278 289 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 279 290 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 280 291 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-connect-tablet9.fw"), 281 - PROPERTY_ENTRY_U32("silead,max-fingers", 
10), 282 292 { } 283 293 }; 284 294 ··· 294 306 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 295 307 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 296 308 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-csl-panther-tab-hd.fw"), 297 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 298 309 { } 299 310 }; 300 311 ··· 309 322 PROPERTY_ENTRY_U32("touchscreen-size-y", 896), 310 323 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 311 324 PROPERTY_ENTRY_STRING("firmware-name", "gsl3670-cube-iwork8-air.fw"), 312 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 313 325 { } 314 326 }; 315 327 ··· 332 346 PROPERTY_ENTRY_U32("touchscreen-size-x", 1961), 333 347 PROPERTY_ENTRY_U32("touchscreen-size-y", 1513), 334 348 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-cube-knote-i1101.fw"), 335 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 336 349 PROPERTY_ENTRY_BOOL("silead,home-button"), 337 350 { } 338 351 }; ··· 345 360 PROPERTY_ENTRY_U32("touchscreen-size-x", 890), 346 361 PROPERTY_ENTRY_U32("touchscreen-size-y", 630), 347 362 PROPERTY_ENTRY_STRING("firmware-name", "gsl1686-dexp-ursus-7w.fw"), 348 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 349 363 PROPERTY_ENTRY_BOOL("silead,home-button"), 350 364 { } 351 365 }; ··· 360 376 PROPERTY_ENTRY_U32("touchscreen-size-x", 1720), 361 377 PROPERTY_ENTRY_U32("touchscreen-size-y", 1137), 362 378 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-dexp-ursus-kx210i.fw"), 363 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 364 379 PROPERTY_ENTRY_BOOL("silead,home-button"), 365 380 { } 366 381 }; ··· 374 391 PROPERTY_ENTRY_U32("touchscreen-size-y", 1500), 375 392 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 376 393 PROPERTY_ENTRY_STRING("firmware-name", "gsl1686-digma_citi_e200.fw"), 377 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 378 394 PROPERTY_ENTRY_BOOL("silead,home-button"), 379 395 { } 380 396 }; ··· 432 450 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 433 451 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 434 452 
PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-irbis_tw90.fw"), 435 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 436 453 PROPERTY_ENTRY_BOOL("silead,home-button"), 437 454 { } 438 455 }; ··· 447 466 PROPERTY_ENTRY_U32("touchscreen-size-x", 1960), 448 467 PROPERTY_ENTRY_U32("touchscreen-size-y", 1510), 449 468 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-irbis-tw118.fw"), 450 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 451 469 { } 452 470 }; 453 471 ··· 463 483 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 464 484 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 465 485 PROPERTY_ENTRY_STRING("firmware-name", "gsl3670-itworks-tw891.fw"), 466 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 467 486 { } 468 487 }; 469 488 ··· 475 496 PROPERTY_ENTRY_U32("touchscreen-size-x", 1980), 476 497 PROPERTY_ENTRY_U32("touchscreen-size-y", 1500), 477 498 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-jumper-ezpad-6-pro.fw"), 478 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 479 499 PROPERTY_ENTRY_BOOL("silead,home-button"), 480 500 { } 481 501 }; ··· 489 511 PROPERTY_ENTRY_U32("touchscreen-size-y", 1500), 490 512 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-jumper-ezpad-6-pro-b.fw"), 491 513 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 492 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 493 514 PROPERTY_ENTRY_BOOL("silead,home-button"), 494 515 { } 495 516 }; ··· 504 527 PROPERTY_ENTRY_U32("touchscreen-size-x", 1950), 505 528 PROPERTY_ENTRY_U32("touchscreen-size-y", 1525), 506 529 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-jumper-ezpad-6-m4.fw"), 507 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 508 530 PROPERTY_ENTRY_BOOL("silead,home-button"), 509 531 { } 510 532 }; ··· 520 544 PROPERTY_ENTRY_U32("touchscreen-size-y", 1526), 521 545 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 522 546 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-jumper-ezpad-7.fw"), 523 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 524 547 
PROPERTY_ENTRY_BOOL("silead,stuck-controller-bug"), 525 548 { } 526 549 }; ··· 536 561 PROPERTY_ENTRY_U32("touchscreen-size-y", 1138), 537 562 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 538 563 PROPERTY_ENTRY_STRING("firmware-name", "gsl3676-jumper-ezpad-mini3.fw"), 539 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 540 564 { } 541 565 }; 542 566 ··· 552 578 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 553 579 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 554 580 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-mpman-converter9.fw"), 555 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 556 581 { } 557 582 }; 558 583 ··· 567 594 PROPERTY_ENTRY_U32("touchscreen-size-y", 1150), 568 595 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 569 596 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-mpman-mpwin895cl.fw"), 570 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 571 597 PROPERTY_ENTRY_BOOL("silead,home-button"), 572 598 { } 573 599 }; ··· 583 611 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 584 612 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 585 613 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-myria-my8307.fw"), 586 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 587 614 PROPERTY_ENTRY_BOOL("silead,home-button"), 588 615 { } 589 616 }; ··· 599 628 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 600 629 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 601 630 PROPERTY_ENTRY_STRING("firmware-name", "gsl3676-onda-obook-20-plus.fw"), 602 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 603 631 PROPERTY_ENTRY_BOOL("silead,home-button"), 604 632 { } 605 633 }; ··· 615 645 PROPERTY_ENTRY_U32("touchscreen-size-y", 1140), 616 646 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 617 647 PROPERTY_ENTRY_STRING("firmware-name", "gsl3676-onda-v80-plus-v3.fw"), 618 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 619 648 PROPERTY_ENTRY_BOOL("silead,home-button"), 620 649 { } 621 650 }; ··· 638 669 PROPERTY_ENTRY_U32("touchscreen-size-y", 1140), 639 670 
PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 640 671 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-onda-v820w-32g.fw"), 641 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 642 672 PROPERTY_ENTRY_BOOL("silead,home-button"), 643 673 { } 644 674 }; ··· 655 687 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 656 688 PROPERTY_ENTRY_STRING("firmware-name", 657 689 "gsl3676-onda-v891-v5.fw"), 658 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 659 690 PROPERTY_ENTRY_BOOL("silead,home-button"), 660 691 { } 661 692 }; ··· 670 703 PROPERTY_ENTRY_U32("touchscreen-size-x", 1676), 671 704 PROPERTY_ENTRY_U32("touchscreen-size-y", 1130), 672 705 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-onda-v891w-v1.fw"), 673 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 674 706 PROPERTY_ENTRY_BOOL("silead,home-button"), 675 707 { } 676 708 }; ··· 686 720 PROPERTY_ENTRY_U32("touchscreen-size-y", 1135), 687 721 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 688 722 PROPERTY_ENTRY_STRING("firmware-name", "gsl3676-onda-v891w-v3.fw"), 689 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 690 723 PROPERTY_ENTRY_BOOL("silead,home-button"), 691 724 { } 692 725 }; ··· 724 759 PROPERTY_ENTRY_U32("touchscreen-size-x", 1984), 725 760 PROPERTY_ENTRY_U32("touchscreen-size-y", 1532), 726 761 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-pipo-w11.fw"), 727 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 728 762 PROPERTY_ENTRY_BOOL("silead,home-button"), 729 763 { } 730 764 }; ··· 739 775 PROPERTY_ENTRY_U32("touchscreen-size-x", 1915), 740 776 PROPERTY_ENTRY_U32("touchscreen-size-y", 1269), 741 777 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-positivo-c4128b.fw"), 742 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 743 778 { } 744 779 }; 745 780 ··· 754 791 PROPERTY_ENTRY_U32("touchscreen-size-y", 1146), 755 792 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 756 793 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-pov-mobii-wintab-p800w-v20.fw"), 757 - PROPERTY_ENTRY_U32("silead,max-fingers", 
10), 758 794 PROPERTY_ENTRY_BOOL("silead,home-button"), 759 795 { } 760 796 }; ··· 770 808 PROPERTY_ENTRY_U32("touchscreen-size-y", 1148), 771 809 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 772 810 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-pov-mobii-wintab-p800w.fw"), 773 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 774 811 PROPERTY_ENTRY_BOOL("silead,home-button"), 775 812 { } 776 813 }; ··· 786 825 PROPERTY_ENTRY_U32("touchscreen-size-y", 1520), 787 826 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 788 827 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-pov-mobii-wintab-p1006w-v10.fw"), 789 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 790 828 PROPERTY_ENTRY_BOOL("silead,home-button"), 791 829 { } 792 830 }; ··· 802 842 PROPERTY_ENTRY_U32("touchscreen-size-y", 1144), 803 843 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 804 844 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-predia-basic.fw"), 805 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 806 845 PROPERTY_ENTRY_BOOL("silead,home-button"), 807 846 { } 808 847 }; ··· 818 859 PROPERTY_ENTRY_U32("touchscreen-size-y", 874), 819 860 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 820 861 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-rca-cambio-w101-v2.fw"), 821 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 822 862 { } 823 863 }; 824 864 ··· 832 874 PROPERTY_ENTRY_U32("touchscreen-size-y", 1140), 833 875 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 834 876 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-rwc-nanote-p8.fw"), 835 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 836 877 { } 837 878 }; 838 879 ··· 847 890 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 848 891 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 849 892 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-schneider-sct101ctm.fw"), 850 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 851 893 PROPERTY_ENTRY_BOOL("silead,home-button"), 852 894 { } 853 895 }; ··· 862 906 PROPERTY_ENTRY_U32("touchscreen-size-x", 1723), 863 907 
PROPERTY_ENTRY_U32("touchscreen-size-y", 1077), 864 908 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-globalspace-solt-ivw116.fw"), 865 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 866 909 PROPERTY_ENTRY_BOOL("silead,home-button"), 867 910 { } 868 911 }; ··· 878 923 PROPERTY_ENTRY_U32("touchscreen-size-y", 1270), 879 924 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 880 925 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-techbite-arc-11-6.fw"), 881 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 882 926 { } 883 927 }; 884 928 ··· 893 939 PROPERTY_ENTRY_U32("touchscreen-size-y", 1264), 894 940 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 895 941 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-teclast-tbook11.fw"), 896 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 897 942 PROPERTY_ENTRY_BOOL("silead,home-button"), 898 943 { } 899 944 }; ··· 918 965 PROPERTY_ENTRY_U32("touchscreen-size-y", 1264), 919 966 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 920 967 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-teclast-x16-plus.fw"), 921 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 922 968 PROPERTY_ENTRY_BOOL("silead,home-button"), 923 969 { } 924 970 }; ··· 940 988 PROPERTY_ENTRY_U32("touchscreen-size-x", 1980), 941 989 PROPERTY_ENTRY_U32("touchscreen-size-y", 1500), 942 990 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-teclast-x3-plus.fw"), 943 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 944 991 PROPERTY_ENTRY_BOOL("silead,home-button"), 945 992 { } 946 993 }; ··· 955 1004 PROPERTY_ENTRY_BOOL("touchscreen-inverted-x"), 956 1005 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 957 1006 PROPERTY_ENTRY_STRING("firmware-name", "gsl1686-teclast_x98plus2.fw"), 958 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 959 1007 { } 960 1008 }; 961 1009 ··· 968 1018 PROPERTY_ENTRY_U32("touchscreen-size-y", 1530), 969 1019 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 970 1020 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-trekstor-primebook-c11.fw"), 971 - 
PROPERTY_ENTRY_U32("silead,max-fingers", 10), 972 1021 PROPERTY_ENTRY_BOOL("silead,home-button"), 973 1022 { } 974 1023 }; ··· 981 1032 PROPERTY_ENTRY_U32("touchscreen-size-x", 2624), 982 1033 PROPERTY_ENTRY_U32("touchscreen-size-y", 1920), 983 1034 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-trekstor-primebook-c13.fw"), 984 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 985 1035 PROPERTY_ENTRY_BOOL("silead,home-button"), 986 1036 { } 987 1037 }; ··· 994 1046 PROPERTY_ENTRY_U32("touchscreen-size-x", 2500), 995 1047 PROPERTY_ENTRY_U32("touchscreen-size-y", 1900), 996 1048 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-trekstor-primetab-t13b.fw"), 997 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 998 1049 PROPERTY_ENTRY_BOOL("silead,home-button"), 999 1050 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 1000 1051 { } ··· 1021 1074 PROPERTY_ENTRY_U32("touchscreen-size-y", 1280), 1022 1075 PROPERTY_ENTRY_U32("touchscreen-inverted-y", 1), 1023 1076 PROPERTY_ENTRY_STRING("firmware-name", "gsl3670-surftab-twin-10-1-st10432-8.fw"), 1024 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 1025 1077 PROPERTY_ENTRY_BOOL("silead,home-button"), 1026 1078 { } 1027 1079 }; ··· 1036 1090 PROPERTY_ENTRY_U32("touchscreen-size-x", 884), 1037 1091 PROPERTY_ENTRY_U32("touchscreen-size-y", 632), 1038 1092 PROPERTY_ENTRY_STRING("firmware-name", "gsl1686-surftab-wintron70-st70416-6.fw"), 1039 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 1040 1093 PROPERTY_ENTRY_BOOL("silead,home-button"), 1041 1094 { } 1042 1095 }; ··· 1052 1107 PROPERTY_ENTRY_U32("touchscreen-fuzz-y", 6), 1053 1108 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 1054 1109 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-viglen-connect-10.fw"), 1055 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 1056 1110 PROPERTY_ENTRY_BOOL("silead,home-button"), 1057 1111 { } 1058 1112 }; ··· 1065 1121 PROPERTY_ENTRY_U32("touchscreen-size-x", 1920), 1066 1122 PROPERTY_ENTRY_U32("touchscreen-size-y", 1280), 1067 1123 
PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-vinga-twizzle_j116.fw"), 1068 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 1069 1124 PROPERTY_ENTRY_BOOL("silead,home-button"), 1070 1125 { } 1071 1126 }; ··· 1850 1907 u32 u32val; 1851 1908 int i, ret; 1852 1909 1853 - strscpy(orig_str, str, sizeof(orig_str)); 1910 + strscpy(orig_str, str); 1854 1911 1855 1912 /* 1856 1913 * str is part of the static_command_line from init/main.c and poking
+1
drivers/pnp/base.h
··· 6 6 7 7 extern struct mutex pnp_lock; 8 8 extern const struct attribute_group *pnp_dev_groups[]; 9 + extern const struct bus_type pnp_bus_type; 9 10 10 11 int pnp_register_protocol(struct pnp_protocol *protocol); 11 12 void pnp_unregister_protocol(struct pnp_protocol *protocol);
+6
drivers/pnp/driver.c
··· 266 266 .dev_groups = pnp_dev_groups, 267 267 }; 268 268 269 + bool dev_is_pnp(const struct device *dev) 270 + { 271 + return dev->bus == &pnp_bus_type; 272 + } 273 + EXPORT_SYMBOL_GPL(dev_is_pnp); 274 + 269 275 int pnp_register_driver(struct pnp_driver *drv) 270 276 { 271 277 drv->driver.name = drv->name;
+2 -1
drivers/ptp/ptp_chardev.c
··· 85 85 } 86 86 87 87 if (info->verify(info, pin, func, chan)) { 88 - pr_err("driver cannot use function %u on pin %u\n", func, chan); 88 + pr_err("driver cannot use function %u and channel %u on pin %u\n", 89 + func, chan, pin); 89 90 return -EOPNOTSUPP; 90 91 } 91 92
+22 -9
drivers/scsi/device_handler/scsi_dh_alua.c
··· 414 414 } 415 415 } 416 416 417 - static enum scsi_disposition alua_check_sense(struct scsi_device *sdev, 418 - struct scsi_sense_hdr *sense_hdr) 417 + static void alua_handle_state_transition(struct scsi_device *sdev) 419 418 { 420 419 struct alua_dh_data *h = sdev->handler_data; 421 420 struct alua_port_group *pg; 422 421 422 + rcu_read_lock(); 423 + pg = rcu_dereference(h->pg); 424 + if (pg) 425 + pg->state = SCSI_ACCESS_STATE_TRANSITIONING; 426 + rcu_read_unlock(); 427 + alua_check(sdev, false); 428 + } 429 + 430 + static enum scsi_disposition alua_check_sense(struct scsi_device *sdev, 431 + struct scsi_sense_hdr *sense_hdr) 432 + { 423 433 switch (sense_hdr->sense_key) { 424 434 case NOT_READY: 425 435 if (sense_hdr->asc == 0x04 && sense_hdr->ascq == 0x0a) { 426 436 /* 427 437 * LUN Not Accessible - ALUA state transition 428 438 */ 429 - rcu_read_lock(); 430 - pg = rcu_dereference(h->pg); 431 - if (pg) 432 - pg->state = SCSI_ACCESS_STATE_TRANSITIONING; 433 - rcu_read_unlock(); 434 - alua_check(sdev, false); 439 + alua_handle_state_transition(sdev); 435 440 return NEEDS_RETRY; 436 441 } 437 442 break; 438 443 case UNIT_ATTENTION: 444 + if (sense_hdr->asc == 0x04 && sense_hdr->ascq == 0x0a) { 445 + /* 446 + * LUN Not Accessible - ALUA state transition 447 + */ 448 + alua_handle_state_transition(sdev); 449 + return NEEDS_RETRY; 450 + } 439 451 if (sense_hdr->asc == 0x29 && sense_hdr->ascq == 0x00) { 440 452 /* 441 453 * Power On, Reset, or Bus Device Reset. ··· 514 502 515 503 retval = scsi_test_unit_ready(sdev, ALUA_FAILOVER_TIMEOUT * HZ, 516 504 ALUA_FAILOVER_RETRIES, &sense_hdr); 517 - if (sense_hdr.sense_key == NOT_READY && 505 + if ((sense_hdr.sense_key == NOT_READY || 506 + sense_hdr.sense_key == UNIT_ATTENTION) && 518 507 sense_hdr.asc == 0x04 && sense_hdr.ascq == 0x0a) 519 508 return SCSI_DH_RETRY; 520 509 else if (retval)
+1 -1
drivers/scsi/mpi3mr/mpi3mr_transport.c
··· 1364 1364 continue; 1365 1365 1366 1366 if (i > sizeof(mr_sas_port->phy_mask) * 8) { 1367 - ioc_warn(mrioc, "skipping port %u, max allowed value is %lu\n", 1367 + ioc_warn(mrioc, "skipping port %u, max allowed value is %zu\n", 1368 1368 i, sizeof(mr_sas_port->phy_mask) * 8); 1369 1369 goto out_fail; 1370 1370 }
+2 -2
drivers/scsi/mpt3sas/mpt3sas_scsih.c
··· 302 302 303 303 /** 304 304 * _scsih_set_debug_level - global setting of ioc->logging_level. 305 - * @val: ? 306 - * @kp: ? 305 + * @val: value of the parameter to be set 306 + * @kp: pointer to kernel_param structure 307 307 * 308 308 * Note: The logging levels are defined in mpt3sas_debug.h. 309 309 */
+1
drivers/scsi/qedf/qedf.h
··· 363 363 #define QEDF_IN_RECOVERY 5 364 364 #define QEDF_DBG_STOP_IO 6 365 365 #define QEDF_PROBING 8 366 + #define QEDF_STAG_IN_PROGRESS 9 366 367 unsigned long flags; /* Miscellaneous state flags */ 367 368 int fipvlan_retries; 368 369 u8 num_queues;
+44 -3
drivers/scsi/qedf/qedf_main.c
··· 318 318 */ 319 319 if (resp == fc_lport_flogi_resp) { 320 320 qedf->flogi_cnt++; 321 + qedf->flogi_pending++; 322 + 323 + if (test_bit(QEDF_UNLOADING, &qedf->flags)) { 324 + QEDF_ERR(&qedf->dbg_ctx, "Driver unloading\n"); 325 + qedf->flogi_pending = 0; 326 + } 327 + 321 328 if (qedf->flogi_pending >= QEDF_FLOGI_RETRY_CNT) { 322 329 schedule_delayed_work(&qedf->stag_work, 2); 323 330 return NULL; 324 331 } 325 - qedf->flogi_pending++; 332 + 326 333 return fc_elsct_send(lport, did, fp, op, qedf_flogi_resp, 327 334 arg, timeout); 328 335 } ··· 919 912 struct qedf_ctx *qedf; 920 913 struct qed_link_output if_link; 921 914 915 + qedf = lport_priv(lport); 916 + 922 917 if (lport->vport) { 918 + clear_bit(QEDF_STAG_IN_PROGRESS, &qedf->flags); 923 919 printk_ratelimited("Cannot issue host reset on NPIV port.\n"); 924 920 return; 925 921 } 926 - 927 - qedf = lport_priv(lport); 928 922 929 923 qedf->flogi_pending = 0; 930 924 /* For host reset, essentially do a soft link up/down */ ··· 946 938 if (!if_link.link_up) { 947 939 QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, 948 940 "Physical link is not up.\n"); 941 + clear_bit(QEDF_STAG_IN_PROGRESS, &qedf->flags); 949 942 return; 950 943 } 951 944 /* Flush and wait to make sure link down is processed */ ··· 959 950 "Queue link up work.\n"); 960 951 queue_delayed_work(qedf->link_update_wq, &qedf->link_update, 961 952 0); 953 + clear_bit(QEDF_STAG_IN_PROGRESS, &qedf->flags); 962 954 } 963 955 964 956 /* Reset the host by gracefully logging out and then logging back in */ ··· 3473 3463 } 3474 3464 3475 3465 /* Start the Slowpath-process */ 3466 + memset(&slowpath_params, 0, sizeof(struct qed_slowpath_params)); 3476 3467 slowpath_params.int_mode = QED_INT_MODE_MSIX; 3477 3468 slowpath_params.drv_major = QEDF_DRIVER_MAJOR_VER; 3478 3469 slowpath_params.drv_minor = QEDF_DRIVER_MINOR_VER; ··· 3732 3721 { 3733 3722 struct qedf_ctx *qedf; 3734 3723 int rc; 3724 + int cnt = 0; 3735 3725 3736 3726 if (!pdev) { 3737 3727 QEDF_ERR(NULL, 
"pdev is NULL.\n"); ··· 3748 3736 if (test_bit(QEDF_UNLOADING, &qedf->flags)) { 3749 3737 QEDF_ERR(&qedf->dbg_ctx, "Already removing PCI function.\n"); 3750 3738 return; 3739 + } 3740 + 3741 + stag_in_prog: 3742 + if (test_bit(QEDF_STAG_IN_PROGRESS, &qedf->flags)) { 3743 + QEDF_ERR(&qedf->dbg_ctx, "Stag in progress, cnt=%d.\n", cnt); 3744 + cnt++; 3745 + 3746 + if (cnt < 5) { 3747 + msleep(500); 3748 + goto stag_in_prog; 3749 + } 3751 3750 } 3752 3751 3753 3752 if (mode != QEDF_MODE_RECOVERY) ··· 4019 3996 { 4020 3997 struct qedf_ctx *qedf = 4021 3998 container_of(work, struct qedf_ctx, stag_work.work); 3999 + 4000 + if (!qedf) { 4001 + QEDF_ERR(&qedf->dbg_ctx, "qedf is NULL"); 4002 + return; 4003 + } 4004 + 4005 + if (test_bit(QEDF_IN_RECOVERY, &qedf->flags)) { 4006 + QEDF_ERR(&qedf->dbg_ctx, 4007 + "Already is in recovery, hence not calling software context reset.\n"); 4008 + return; 4009 + } 4010 + 4011 + if (test_bit(QEDF_UNLOADING, &qedf->flags)) { 4012 + QEDF_ERR(&qedf->dbg_ctx, "Driver unloading\n"); 4013 + return; 4014 + } 4015 + 4016 + set_bit(QEDF_STAG_IN_PROGRESS, &qedf->flags); 4022 4017 4023 4018 printk_ratelimited("[%s]:[%s:%d]:%d: Performing software context reset.", 4024 4019 dev_name(&qedf->pdev->dev), __func__, __LINE__,
+7
drivers/scsi/scsi.c
··· 350 350 if (result < SCSI_VPD_HEADER_SIZE) 351 351 return 0; 352 352 353 + if (result > sizeof(vpd)) { 354 + dev_warn_once(&sdev->sdev_gendev, 355 + "%s: long VPD page 0 length: %d bytes\n", 356 + __func__, result); 357 + result = sizeof(vpd); 358 + } 359 + 353 360 result -= SCSI_VPD_HEADER_SIZE; 354 361 if (!memchr(&vpd[SCSI_VPD_HEADER_SIZE], page, result)) 355 362 return 0;
+1 -1
drivers/scsi/sr.h
··· 65 65 int sr_get_last_session(struct cdrom_device_info *, struct cdrom_multisession *); 66 66 int sr_get_mcn(struct cdrom_device_info *, struct cdrom_mcn *); 67 67 int sr_reset(struct cdrom_device_info *); 68 - int sr_select_speed(struct cdrom_device_info *cdi, int speed); 68 + int sr_select_speed(struct cdrom_device_info *cdi, unsigned long speed); 69 69 int sr_audio_ioctl(struct cdrom_device_info *, unsigned int, void *); 70 70 71 71 int sr_is_xa(Scsi_CD *);
+4 -1
drivers/scsi/sr_ioctl.c
··· 425 425 return 0; 426 426 } 427 427 428 - int sr_select_speed(struct cdrom_device_info *cdi, int speed) 428 + int sr_select_speed(struct cdrom_device_info *cdi, unsigned long speed) 429 429 { 430 430 Scsi_CD *cd = cdi->handle; 431 431 struct packet_command cgc; 432 + 433 + /* avoid exceeding the max speed or overflowing integer bounds */ 434 + speed = clamp(speed, 0, 0xffff / 177); 432 435 433 436 if (speed == 0) 434 437 speed = 0xffff; /* set to max */
+25 -10
drivers/thermal/thermal_core.c
··· 467 467 governor->trip_crossed(tz, trip, crossed_up); 468 468 } 469 469 470 + static void thermal_trip_crossed(struct thermal_zone_device *tz, 471 + const struct thermal_trip *trip, 472 + struct thermal_governor *governor, 473 + bool crossed_up) 474 + { 475 + if (crossed_up) { 476 + thermal_notify_tz_trip_up(tz, trip); 477 + thermal_debug_tz_trip_up(tz, trip); 478 + } else { 479 + thermal_notify_tz_trip_down(tz, trip); 480 + thermal_debug_tz_trip_down(tz, trip); 481 + } 482 + thermal_governor_trip_crossed(governor, tz, trip, crossed_up); 483 + } 484 + 470 485 static int thermal_trip_notify_cmp(void *ascending, const struct list_head *a, 471 486 const struct list_head *b) 472 487 { ··· 521 506 handle_thermal_trip(tz, td, &way_up_list, &way_down_list); 522 507 523 508 list_sort(&way_up_list, &way_up_list, thermal_trip_notify_cmp); 524 - list_for_each_entry(td, &way_up_list, notify_list_node) { 525 - thermal_notify_tz_trip_up(tz, &td->trip); 526 - thermal_debug_tz_trip_up(tz, &td->trip); 527 - thermal_governor_trip_crossed(governor, tz, &td->trip, true); 528 - } 509 + list_for_each_entry(td, &way_up_list, notify_list_node) 510 + thermal_trip_crossed(tz, &td->trip, governor, true); 529 511 530 512 list_sort(NULL, &way_down_list, thermal_trip_notify_cmp); 531 - list_for_each_entry(td, &way_down_list, notify_list_node) { 532 - thermal_notify_tz_trip_down(tz, &td->trip); 533 - thermal_debug_tz_trip_down(tz, &td->trip); 534 - thermal_governor_trip_crossed(governor, tz, &td->trip, false); 535 - } 513 + list_for_each_entry(td, &way_down_list, notify_list_node) 514 + thermal_trip_crossed(tz, &td->trip, governor, false); 536 515 537 516 if (governor->manage) 538 517 governor->manage(tz); ··· 601 592 mutex_unlock(&tz->lock); 602 593 } 603 594 EXPORT_SYMBOL_GPL(thermal_zone_device_update); 595 + 596 + void thermal_zone_trip_down(struct thermal_zone_device *tz, 597 + const struct thermal_trip *trip) 598 + { 599 + thermal_trip_crossed(tz, trip, thermal_get_tz_governor(tz), 
false); 600 + } 604 601 605 602 int for_each_thermal_governor(int (*cb)(struct thermal_governor *, void *), 606 603 void *data)
+2
drivers/thermal/thermal_core.h
··· 246 246 void thermal_zone_trip_updated(struct thermal_zone_device *tz, 247 247 const struct thermal_trip *trip); 248 248 int __thermal_zone_get_temp(struct thermal_zone_device *tz, int *temp); 249 + void thermal_zone_trip_down(struct thermal_zone_device *tz, 250 + const struct thermal_trip *trip); 249 251 250 252 /* sysfs I/F */ 251 253 int thermal_zone_create_device_groups(struct thermal_zone_device *tz);
+11 -7
drivers/thermal/thermal_debugfs.c
··· 91 91 * 92 92 * @timestamp: the trip crossing timestamp 93 93 * @duration: total time when the zone temperature was above the trip point 94 + * @trip_temp: trip temperature at mitigation start 95 + * @trip_hyst: trip hysteresis at mitigation start 94 96 * @count: the number of times the zone temperature was above the trip point 95 97 * @max: maximum recorded temperature above the trip point 96 98 * @min: minimum recorded temperature above the trip point ··· 101 99 struct trip_stats { 102 100 ktime_t timestamp; 103 101 ktime_t duration; 102 + int trip_temp; 103 + int trip_hyst; 104 104 int count; 105 105 int max; 106 106 int min; ··· 578 574 struct thermal_debugfs *thermal_dbg = tz->debugfs; 579 575 int trip_id = thermal_zone_trip_id(tz, trip); 580 576 ktime_t now = ktime_get(); 577 + struct trip_stats *trip_stats; 581 578 582 579 if (!thermal_dbg) 583 580 return; ··· 644 639 tz_dbg->trips_crossed[tz_dbg->nr_trips++] = trip_id; 645 640 646 641 tze = list_first_entry(&tz_dbg->tz_episodes, struct tz_episode, node); 647 - tze->trip_stats[trip_id].timestamp = now; 642 + trip_stats = &tze->trip_stats[trip_id]; 643 + trip_stats->trip_temp = trip->temperature; 644 + trip_stats->trip_hyst = trip->hysteresis; 645 + trip_stats->timestamp = now; 648 646 649 647 unlock: 650 648 mutex_unlock(&thermal_dbg->lock); ··· 802 794 const struct thermal_trip *trip = &td->trip; 803 795 struct trip_stats *trip_stats; 804 796 805 - /* Skip invalid trips. */ 806 - if (trip->temperature == THERMAL_TEMP_INVALID) 807 - continue; 808 - 809 797 /* 810 798 * There is no possible mitigation happening at the 811 799 * critical trip point, so the stats will be always ··· 840 836 seq_printf(s, "| %*d | %*s | %*d | %*d | %c%*lld | %*d | %*d | %*d |\n", 841 837 4 , trip_id, 842 838 8, type, 843 - 9, trip->temperature, 844 - 9, trip->hysteresis, 839 + 9, trip_stats->trip_temp, 840 + 9, trip_stats->trip_hyst, 845 841 c, 10, duration_ms, 846 842 9, trip_stats->avg, 847 843 9, trip_stats->min,
+12 -8
drivers/thermal/thermal_trip.c
··· 152 152 if (trip->temperature == temp) 153 153 return; 154 154 155 + trip->temperature = temp; 156 + thermal_notify_tz_trip_change(tz, trip); 157 + 155 158 if (temp == THERMAL_TEMP_INVALID) { 156 159 struct thermal_trip_desc *td = trip_to_trip_desc(trip); 157 160 158 - if (trip->type == THERMAL_TRIP_PASSIVE && 159 - tz->temperature >= td->threshold) { 161 + if (tz->temperature >= td->threshold) { 160 162 /* 161 - * The trip has been crossed, so the thermal zone's 162 - * passive count needs to be adjusted. 163 + * The trip has been crossed on the way up, so some 164 + * adjustments are needed to compensate for the lack 165 + * of it going forward. 163 166 */ 164 - tz->passive--; 165 - WARN_ON_ONCE(tz->passive < 0); 167 + if (trip->type == THERMAL_TRIP_PASSIVE) { 168 + tz->passive--; 169 + WARN_ON_ONCE(tz->passive < 0); 170 + } 171 + thermal_zone_trip_down(tz, trip); 166 172 } 167 173 /* 168 174 * Invalidate the threshold to avoid triggering a spurious ··· 176 170 */ 177 171 td->threshold = INT_MAX; 178 172 } 179 - trip->temperature = temp; 180 - thermal_notify_tz_trip_change(tz, trip); 181 173 } 182 174 EXPORT_SYMBOL_GPL(thermal_zone_set_trip_temp);
+8 -9
drivers/ufs/core/ufs-mcq.c
··· 634 634 struct ufshcd_lrb *lrbp = &hba->lrb[tag]; 635 635 struct ufs_hw_queue *hwq; 636 636 unsigned long flags; 637 - int err = FAILED; 637 + int err; 638 638 639 639 if (!ufshcd_cmd_inflight(lrbp->cmd)) { 640 640 dev_err(hba->dev, 641 641 "%s: skip abort. cmd at tag %d already completed.\n", 642 642 __func__, tag); 643 - goto out; 643 + return FAILED; 644 644 } 645 645 646 646 /* Skip task abort in case previous aborts failed and report failure */ 647 647 if (lrbp->req_abort_skip) { 648 648 dev_err(hba->dev, "%s: skip abort. tag %d failed earlier\n", 649 649 __func__, tag); 650 - goto out; 650 + return FAILED; 651 651 } 652 652 653 653 hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd)); ··· 659 659 */ 660 660 dev_err(hba->dev, "%s: cmd found in sq. hwq=%d, tag=%d\n", 661 661 __func__, hwq->id, tag); 662 - goto out; 662 + return FAILED; 663 663 } 664 664 665 665 /* ··· 667 667 * in the completion queue either. Query the device to see if 668 668 * the command is being processed in the device. 669 669 */ 670 - if (ufshcd_try_to_abort_task(hba, tag)) { 670 + err = ufshcd_try_to_abort_task(hba, tag); 671 + if (err) { 671 672 dev_err(hba->dev, "%s: device abort failed %d\n", __func__, err); 672 673 lrbp->req_abort_skip = true; 673 - goto out; 674 + return FAILED; 674 675 } 675 676 676 - err = SUCCESS; 677 677 spin_lock_irqsave(&hwq->cq_lock, flags); 678 678 if (ufshcd_cmd_inflight(lrbp->cmd)) 679 679 ufshcd_release_scsi_cmd(hba, lrbp); 680 680 spin_unlock_irqrestore(&hwq->cq_lock, flags); 681 681 682 - out: 683 - return err; 682 + return SUCCESS; 684 683 }
+1
fs/bcachefs/btree_locking.c
··· 215 215 216 216 if (unlikely(!best)) { 217 217 struct printbuf buf = PRINTBUF; 218 + buf.atomic++; 218 219 219 220 prt_printf(&buf, bch2_fmt(g->g->trans->c, "cycle of nofail locks")); 220 221
+15 -1
fs/bcachefs/move.c
··· 547 547 ctxt->stats->pos = BBPOS(btree_id, start); 548 548 } 549 549 550 + bch2_trans_begin(trans); 550 551 bch2_trans_iter_init(trans, &iter, btree_id, start, 551 552 BTREE_ITER_prefetch| 552 553 BTREE_ITER_all_snapshots); ··· 921 920 ? c->opts.metadata_replicas 922 921 : io_opts->data_replicas; 923 922 924 - if (!nr_good || nr_good >= replicas) 923 + rcu_read_lock(); 924 + struct bkey_ptrs_c ptrs = bch2_bkey_ptrs_c(k); 925 + unsigned i = 0; 926 + bkey_for_each_ptr(ptrs, ptr) { 927 + struct bch_dev *ca = bch2_dev_rcu(c, ptr->dev); 928 + if (!ptr->cached && 929 + (!ca || !ca->mi.durability)) 930 + data_opts->kill_ptrs |= BIT(i); 931 + i++; 932 + } 933 + rcu_read_unlock(); 934 + 935 + if (!data_opts->kill_ptrs && 936 + (!nr_good || nr_good >= replicas)) 925 937 return false; 926 938 927 939 data_opts->target = 0;
+10
fs/btrfs/btrfs_inode.h
··· 89 89 BTRFS_INODE_FREE_SPACE_INODE, 90 90 /* Set when there are no capabilities in XATTs for the inode. */ 91 91 BTRFS_INODE_NO_CAP_XATTR, 92 + /* 93 + * Set if an error happened when doing a COW write before submitting a 94 + * bio or during writeback. Used for both buffered writes and direct IO 95 + * writes. This is to signal a fast fsync that it has to wait for 96 + * ordered extents to complete and therefore not log extent maps that 97 + * point to unwritten extents (when an ordered extent completes and it 98 + * has the BTRFS_ORDERED_IOERR flag set, it drops extent maps in its 99 + * range). 100 + */ 101 + BTRFS_INODE_COW_WRITE_ERROR, 92 102 }; 93 103 94 104 /* in memory btrfs inode */
+1 -9
fs/btrfs/disk-io.c
··· 4538 4538 struct btrfs_fs_info *fs_info) 4539 4539 { 4540 4540 struct rb_node *node; 4541 - struct btrfs_delayed_ref_root *delayed_refs; 4541 + struct btrfs_delayed_ref_root *delayed_refs = &trans->delayed_refs; 4542 4542 struct btrfs_delayed_ref_node *ref; 4543 4543 4544 - delayed_refs = &trans->delayed_refs; 4545 - 4546 4544 spin_lock(&delayed_refs->lock); 4547 - if (atomic_read(&delayed_refs->num_entries) == 0) { 4548 - spin_unlock(&delayed_refs->lock); 4549 - btrfs_debug(fs_info, "delayed_refs has NO entry"); 4550 - return; 4551 - } 4552 - 4553 4545 while ((node = rb_first_cached(&delayed_refs->href_root)) != NULL) { 4554 4546 struct btrfs_delayed_ref_head *head; 4555 4547 struct rb_node *n;
+31 -29
fs/btrfs/extent_io.c
··· 3689 3689 struct folio *folio = page_folio(page); 3690 3690 struct extent_buffer *exists; 3691 3691 3692 + lockdep_assert_held(&page->mapping->i_private_lock); 3693 + 3692 3694 /* 3693 3695 * For subpage case, we completely rely on radix tree to ensure we 3694 3696 * don't try to insert two ebs for the same bytenr. So here we always ··· 3758 3756 * The caller needs to free the existing folios and retry using the same order. 3759 3757 */ 3760 3758 static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i, 3759 + struct btrfs_subpage *prealloc, 3761 3760 struct extent_buffer **found_eb_ret) 3762 3761 { 3763 3762 3764 3763 struct btrfs_fs_info *fs_info = eb->fs_info; 3765 3764 struct address_space *mapping = fs_info->btree_inode->i_mapping; 3766 3765 const unsigned long index = eb->start >> PAGE_SHIFT; 3767 - struct folio *existing_folio; 3766 + struct folio *existing_folio = NULL; 3768 3767 int ret; 3769 3768 3770 3769 ASSERT(found_eb_ret); ··· 3777 3774 ret = filemap_add_folio(mapping, eb->folios[i], index + i, 3778 3775 GFP_NOFS | __GFP_NOFAIL); 3779 3776 if (!ret) 3780 - return 0; 3777 + goto finish; 3781 3778 3782 3779 existing_folio = filemap_lock_folio(mapping, index + i); 3783 3780 /* The page cache only exists for a very short time, just retry. */ 3784 - if (IS_ERR(existing_folio)) 3781 + if (IS_ERR(existing_folio)) { 3782 + existing_folio = NULL; 3785 3783 goto retry; 3784 + } 3786 3785 3787 3786 /* For now, we should only have single-page folios for btree inode. */ 3788 3787 ASSERT(folio_nr_pages(existing_folio) == 1); ··· 3795 3790 return -EAGAIN; 3796 3791 } 3797 3792 3798 - if (fs_info->nodesize < PAGE_SIZE) { 3799 - /* 3800 - * We're going to reuse the existing page, can drop our page 3801 - * and subpage structure now. 3802 - */ 3793 + finish: 3794 + spin_lock(&mapping->i_private_lock); 3795 + if (existing_folio && fs_info->nodesize < PAGE_SIZE) { 3796 + /* We're going to reuse the existing page, can drop our folio now. 
*/ 3803 3797 __free_page(folio_page(eb->folios[i], 0)); 3804 3798 eb->folios[i] = existing_folio; 3805 - } else { 3799 + } else if (existing_folio) { 3806 3800 struct extent_buffer *existing_eb; 3807 3801 3808 3802 existing_eb = grab_extent_buffer(fs_info, ··· 3809 3805 if (existing_eb) { 3810 3806 /* The extent buffer still exists, we can use it directly. */ 3811 3807 *found_eb_ret = existing_eb; 3808 + spin_unlock(&mapping->i_private_lock); 3812 3809 folio_unlock(existing_folio); 3813 3810 folio_put(existing_folio); 3814 3811 return 1; ··· 3818 3813 __free_page(folio_page(eb->folios[i], 0)); 3819 3814 eb->folios[i] = existing_folio; 3820 3815 } 3816 + eb->folio_size = folio_size(eb->folios[i]); 3817 + eb->folio_shift = folio_shift(eb->folios[i]); 3818 + /* Should not fail, as we have preallocated the memory. */ 3819 + ret = attach_extent_buffer_folio(eb, eb->folios[i], prealloc); 3820 + ASSERT(!ret); 3821 + /* 3822 + * To inform we have an extra eb under allocation, so that 3823 + * detach_extent_buffer_page() won't release the folio private when the 3824 + * eb hasn't been inserted into radix tree yet. 3825 + * 3826 + * The ref will be decreased when the eb releases the page, in 3827 + * detach_extent_buffer_page(). Thus needs no special handling in the 3828 + * error path. 
3829 + */ 3830 + btrfs_folio_inc_eb_refs(fs_info, eb->folios[i]); 3831 + spin_unlock(&mapping->i_private_lock); 3821 3832 return 0; 3822 3833 } 3823 3834 ··· 3845 3824 int attached = 0; 3846 3825 struct extent_buffer *eb; 3847 3826 struct extent_buffer *existing_eb = NULL; 3848 - struct address_space *mapping = fs_info->btree_inode->i_mapping; 3849 3827 struct btrfs_subpage *prealloc = NULL; 3850 3828 u64 lockdep_owner = owner_root; 3851 3829 bool page_contig = true; ··· 3910 3890 for (int i = 0; i < num_folios; i++) { 3911 3891 struct folio *folio; 3912 3892 3913 - ret = attach_eb_folio_to_filemap(eb, i, &existing_eb); 3893 + ret = attach_eb_folio_to_filemap(eb, i, prealloc, &existing_eb); 3914 3894 if (ret > 0) { 3915 3895 ASSERT(existing_eb); 3916 3896 goto out; ··· 3947 3927 * and free the allocated page. 3948 3928 */ 3949 3929 folio = eb->folios[i]; 3950 - eb->folio_size = folio_size(folio); 3951 - eb->folio_shift = folio_shift(folio); 3952 - spin_lock(&mapping->i_private_lock); 3953 - /* Should not fail, as we have preallocated the memory */ 3954 - ret = attach_extent_buffer_folio(eb, folio, prealloc); 3955 - ASSERT(!ret); 3956 - /* 3957 - * To inform we have extra eb under allocation, so that 3958 - * detach_extent_buffer_page() won't release the folio private 3959 - * when the eb hasn't yet been inserted into radix tree. 3960 - * 3961 - * The ref will be decreased when the eb released the page, in 3962 - * detach_extent_buffer_page(). 3963 - * Thus needs no special handling in error path. 3964 - */ 3965 - btrfs_folio_inc_eb_refs(fs_info, folio); 3966 - spin_unlock(&mapping->i_private_lock); 3967 - 3968 3930 WARN_ON(btrfs_folio_test_dirty(fs_info, folio, eb->start, eb->len)); 3969 3931 3970 3932 /*
+16
fs/btrfs/file.c
··· 1885 1885 */ 1886 1886 if (full_sync || btrfs_is_zoned(fs_info)) { 1887 1887 ret = btrfs_wait_ordered_range(inode, start, len); 1888 + clear_bit(BTRFS_INODE_COW_WRITE_ERROR, &BTRFS_I(inode)->runtime_flags); 1888 1889 } else { 1889 1890 /* 1890 1891 * Get our ordered extents as soon as possible to avoid doing ··· 1895 1894 btrfs_get_ordered_extents_for_logging(BTRFS_I(inode), 1896 1895 &ctx.ordered_extents); 1897 1896 ret = filemap_fdatawait_range(inode->i_mapping, start, end); 1897 + if (ret) 1898 + goto out_release_extents; 1899 + 1900 + /* 1901 + * Check and clear the BTRFS_INODE_COW_WRITE_ERROR now after 1902 + * starting and waiting for writeback, because for buffered IO 1903 + * it may have been set during the end IO callback 1904 + * (end_bbio_data_write() -> btrfs_finish_ordered_extent()) in 1905 + * case an error happened and we need to wait for ordered 1906 + * extents to complete so that any extent maps that point to 1907 + * unwritten locations are dropped and we don't log them. 1908 + */ 1909 + if (test_and_clear_bit(BTRFS_INODE_COW_WRITE_ERROR, 1910 + &BTRFS_I(inode)->runtime_flags)) 1911 + ret = btrfs_wait_ordered_range(inode, start, len); 1898 1912 } 1899 1913 1900 1914 if (ret)
+31
fs/btrfs/ordered-data.c
··· 388 388 ret = can_finish_ordered_extent(ordered, page, file_offset, len, uptodate); 389 389 spin_unlock_irqrestore(&inode->ordered_tree_lock, flags); 390 390 391 + /* 392 + * If this is a COW write it means we created new extent maps for the 393 + * range and they point to unwritten locations if we got an error either 394 + * before submitting a bio or during IO. 395 + * 396 + * We have marked the ordered extent with BTRFS_ORDERED_IOERR, and we 397 + * are queuing its completion below. During completion, at 398 + * btrfs_finish_one_ordered(), we will drop the extent maps for the 399 + * unwritten extents. 400 + * 401 + * However because completion runs in a work queue we can end up having 402 + * a fast fsync running before that. In the case of direct IO, once we 403 + * unlock the inode the fsync might start, and we queue the completion 404 + * before unlocking the inode. In the case of buffered IO when writeback 405 + * finishes (end_bbio_data_write()) we queue the completion, so if the 406 + * writeback was triggered by a fast fsync, the fsync might start 407 + * logging before ordered extent completion runs in the work queue. 408 + * 409 + * The fast fsync will log file extent items based on the extent maps it 410 + * finds, so if by the time it collects extent maps the ordered extent 411 + * completion didn't happen yet, it will log file extent items that 412 + * point to unwritten extents, resulting in a corruption if a crash 413 + * happens and the log tree is replayed. Note that a fast fsync does not 414 + * wait for completion of ordered extents in order to reduce latency. 415 + * 416 + * Set a flag in the inode so that the next fast fsync will wait for 417 + * ordered extents to complete before starting to log. 418 + */ 419 + if (!uptodate && !test_bit(BTRFS_ORDERED_NOCOW, &ordered->flags)) 420 + set_bit(BTRFS_INODE_COW_WRITE_ERROR, &inode->runtime_flags); 421 + 391 422 if (ret) 392 423 btrfs_queue_ordered_fn(ordered); 393 424 return ret;
+11 -6
fs/btrfs/tree-log.c
··· 4860 4860 path->slots[0]++; 4861 4861 continue; 4862 4862 } 4863 - if (!dropped_extents) { 4864 - /* 4865 - * Avoid logging extent items logged in past fsync calls 4866 - * and leading to duplicate keys in the log tree. 4867 - */ 4863 + /* 4864 + * Avoid overlapping items in the log tree. The first time we 4865 + * get here, get rid of everything from a past fsync. After 4866 + * that, if the current extent starts before the end of the last 4867 + * extent we copied, truncate the last one. This can happen if 4868 + * an ordered extent completion modifies the subvolume tree 4869 + * while btrfs_next_leaf() has the tree unlocked. 4870 + */ 4871 + if (!dropped_extents || key.offset < truncate_offset) { 4868 4872 ret = truncate_inode_items(trans, root->log_root, inode, 4869 - truncate_offset, 4873 + min(key.offset, truncate_offset), 4870 4874 BTRFS_EXTENT_DATA_KEY); 4871 4875 if (ret) 4872 4876 goto out; 4873 4877 dropped_extents = true; 4874 4878 } 4879 + truncate_offset = btrfs_file_extent_end(path); 4875 4880 if (ins_nr == 0) 4876 4881 start_slot = slot; 4877 4882 ins_nr++;
+1 -1
fs/nilfs2/dir.c
··· 607 607 608 608 kaddr = nilfs_get_folio(inode, i, &folio); 609 609 if (IS_ERR(kaddr)) 610 - continue; 610 + return 0; 611 611 612 612 de = (struct nilfs_dir_entry *)kaddr; 613 613 kaddr += nilfs_last_byte(inode, i) - NILFS_DIR_REC_LEN(1);
+3
fs/nilfs2/segment.c
··· 1652 1652 if (bh->b_folio != bd_folio) { 1653 1653 if (bd_folio) { 1654 1654 folio_lock(bd_folio); 1655 + folio_wait_writeback(bd_folio); 1655 1656 folio_clear_dirty_for_io(bd_folio); 1656 1657 folio_start_writeback(bd_folio); 1657 1658 folio_unlock(bd_folio); ··· 1666 1665 if (bh == segbuf->sb_super_root) { 1667 1666 if (bh->b_folio != bd_folio) { 1668 1667 folio_lock(bd_folio); 1668 + folio_wait_writeback(bd_folio); 1669 1669 folio_clear_dirty_for_io(bd_folio); 1670 1670 folio_start_writeback(bd_folio); 1671 1671 folio_unlock(bd_folio); ··· 1683 1681 } 1684 1682 if (bd_folio) { 1685 1683 folio_lock(bd_folio); 1684 + folio_wait_writeback(bd_folio); 1686 1685 folio_clear_dirty_for_io(bd_folio); 1687 1686 folio_start_writeback(bd_folio); 1688 1687 folio_unlock(bd_folio);
+1 -1
fs/proc/base.c
··· 3214 3214 mm = get_task_mm(task); 3215 3215 if (mm) { 3216 3216 seq_printf(m, "ksm_rmap_items %lu\n", mm->ksm_rmap_items); 3217 - seq_printf(m, "ksm_zero_pages %lu\n", mm->ksm_zero_pages); 3217 + seq_printf(m, "ksm_zero_pages %ld\n", mm_ksm_zero_pages(mm)); 3218 3218 seq_printf(m, "ksm_merging_pages %lu\n", mm->ksm_merging_pages); 3219 3219 seq_printf(m, "ksm_process_profit %ld\n", ksm_process_profit(mm)); 3220 3220 mmput(mm);
-3
fs/smb/client/smb2pdu.c
··· 4577 4577 if (rdata->subreq.start < rdata->subreq.rreq->i_size) 4578 4578 rdata->result = 0; 4579 4579 } 4580 - if (rdata->result == 0 || rdata->result == -EAGAIN) 4581 - iov_iter_advance(&rdata->subreq.io_iter, rdata->got_bytes); 4582 4580 rdata->credits.value = 0; 4583 4581 netfs_subreq_terminated(&rdata->subreq, 4584 4582 (rdata->result == 0 || rdata->result == -EAGAIN) ? ··· 4787 4789 wdata->result = -ENOSPC; 4788 4790 else 4789 4791 wdata->subreq.len = written; 4790 - iov_iter_advance(&wdata->subreq.io_iter, written); 4791 4792 break; 4792 4793 case MID_REQUEST_SUBMITTED: 4793 4794 case MID_RETRY_NEEDED:
+1 -1
fs/smb/client/smb2transport.c
··· 216 216 } 217 217 tcon = smb2_find_smb_sess_tcon_unlocked(ses, tid); 218 218 if (!tcon) { 219 - cifs_put_smb_ses(ses); 220 219 spin_unlock(&cifs_tcp_ses_lock); 220 + cifs_put_smb_ses(ses); 221 221 return NULL; 222 222 } 223 223 spin_unlock(&cifs_tcp_ses_lock);
-33
include/linux/amd-pstate.h drivers/cpufreq/amd-pstate.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 2 /* 3 - * linux/include/linux/amd-pstate.h 4 - * 5 3 * Copyright (C) 2022 Advanced Micro Devices, Inc. 6 4 * 7 5 * Author: Meng Li <li.meng@amd.com> ··· 9 11 #define _LINUX_AMD_PSTATE_H 10 12 11 13 #include <linux/pm_qos.h> 12 - 13 - #define AMD_CPPC_EPP_PERFORMANCE 0x00 14 - #define AMD_CPPC_EPP_BALANCE_PERFORMANCE 0x80 15 - #define AMD_CPPC_EPP_BALANCE_POWERSAVE 0xBF 16 - #define AMD_CPPC_EPP_POWERSAVE 0xFF 17 14 18 15 /********************************************************************* 19 16 * AMD P-state INTERFACE * ··· 99 106 u32 policy; 100 107 u64 cppc_cap1_cached; 101 108 bool suspended; 102 - }; 103 - 104 - /* 105 - * enum amd_pstate_mode - driver working mode of amd pstate 106 - */ 107 - enum amd_pstate_mode { 108 - AMD_PSTATE_UNDEFINED = 0, 109 - AMD_PSTATE_DISABLE, 110 - AMD_PSTATE_PASSIVE, 111 - AMD_PSTATE_ACTIVE, 112 - AMD_PSTATE_GUIDED, 113 - AMD_PSTATE_MAX, 114 - }; 115 - 116 - static const char * const amd_pstate_mode_string[] = { 117 - [AMD_PSTATE_UNDEFINED] = "undefined", 118 - [AMD_PSTATE_DISABLE] = "disable", 119 - [AMD_PSTATE_PASSIVE] = "passive", 120 - [AMD_PSTATE_ACTIVE] = "active", 121 - [AMD_PSTATE_GUIDED] = "guided", 122 - NULL, 123 - }; 124 - 125 - struct quirk_entry { 126 - u32 nominal_freq; 127 - u32 lowest_freq; 128 109 }; 129 110 130 111 #endif /* _LINUX_AMD_PSTATE_H */
+3 -3
include/linux/atomic/atomic-arch-fallback.h
··· 2242 2242 2243 2243 /** 2244 2244 * raw_atomic_sub_and_test() - atomic subtract and test if zero with full ordering 2245 - * @i: int value to add 2245 + * @i: int value to subtract 2246 2246 * @v: pointer to atomic_t 2247 2247 * 2248 2248 * Atomically updates @v to (@v - @i) with full ordering. ··· 4368 4368 4369 4369 /** 4370 4370 * raw_atomic64_sub_and_test() - atomic subtract and test if zero with full ordering 4371 - * @i: s64 value to add 4371 + * @i: s64 value to subtract 4372 4372 * @v: pointer to atomic64_t 4373 4373 * 4374 4374 * Atomically updates @v to (@v - @i) with full ordering. ··· 4690 4690 } 4691 4691 4692 4692 #endif /* _LINUX_ATOMIC_FALLBACK_H */ 4693 - // 14850c0b0db20c62fdc78ccd1d42b98b88d76331 4693 + // b565db590afeeff0d7c9485ccbca5bb6e155749f
+4 -4
include/linux/atomic/atomic-instrumented.h
··· 1349 1349 1350 1350 /** 1351 1351 * atomic_sub_and_test() - atomic subtract and test if zero with full ordering 1352 - * @i: int value to add 1352 + * @i: int value to subtract 1353 1353 * @v: pointer to atomic_t 1354 1354 * 1355 1355 * Atomically updates @v to (@v - @i) with full ordering. ··· 2927 2927 2928 2928 /** 2929 2929 * atomic64_sub_and_test() - atomic subtract and test if zero with full ordering 2930 - * @i: s64 value to add 2930 + * @i: s64 value to subtract 2931 2931 * @v: pointer to atomic64_t 2932 2932 * 2933 2933 * Atomically updates @v to (@v - @i) with full ordering. ··· 4505 4505 4506 4506 /** 4507 4507 * atomic_long_sub_and_test() - atomic subtract and test if zero with full ordering 4508 - * @i: long value to add 4508 + * @i: long value to subtract 4509 4509 * @v: pointer to atomic_long_t 4510 4510 * 4511 4511 * Atomically updates @v to (@v - @i) with full ordering. ··· 5050 5050 5051 5051 5052 5052 #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */ 5053 - // ce5b65e0f1f8a276268b667194581d24bed219d4 5053 + // 8829b337928e9508259079d32581775ececd415b
+2 -2
include/linux/atomic/atomic-long.h
··· 1535 1535 1536 1536 /** 1537 1537 * raw_atomic_long_sub_and_test() - atomic subtract and test if zero with full ordering 1538 - * @i: long value to add 1538 + * @i: long value to subtract 1539 1539 * @v: pointer to atomic_long_t 1540 1540 * 1541 1541 * Atomically updates @v to (@v - @i) with full ordering. ··· 1809 1809 } 1810 1810 1811 1811 #endif /* _LINUX_ATOMIC_LONG_H */ 1812 - // 1c4a26fc77f345342953770ebe3c4d08e7ce2f9a 1812 + // eadf183c3600b8b92b91839dd3be6bcc560c752d
+1 -1
include/linux/cdrom.h
··· 77 77 unsigned int clearing, int slot); 78 78 int (*tray_move) (struct cdrom_device_info *, int); 79 79 int (*lock_door) (struct cdrom_device_info *, int); 80 - int (*select_speed) (struct cdrom_device_info *, int); 80 + int (*select_speed) (struct cdrom_device_info *, unsigned long); 81 81 int (*get_last_session) (struct cdrom_device_info *, 82 82 struct cdrom_multisession *); 83 83 int (*get_mcn) (struct cdrom_device_info *,
+8 -2
include/linux/huge_mm.h
··· 269 269 MTHP_STAT_ANON_FAULT_ALLOC, 270 270 MTHP_STAT_ANON_FAULT_FALLBACK, 271 271 MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE, 272 - MTHP_STAT_ANON_SWPOUT, 273 - MTHP_STAT_ANON_SWPOUT_FALLBACK, 272 + MTHP_STAT_SWPOUT, 273 + MTHP_STAT_SWPOUT_FALLBACK, 274 274 __MTHP_STAT_COUNT 275 275 }; 276 276 ··· 278 278 unsigned long stats[ilog2(MAX_PTRS_PER_PTE) + 1][__MTHP_STAT_COUNT]; 279 279 }; 280 280 281 + #ifdef CONFIG_SYSFS 281 282 DECLARE_PER_CPU(struct mthp_stat, mthp_stats); 282 283 283 284 static inline void count_mthp_stat(int order, enum mthp_stat_item item) ··· 288 287 289 288 this_cpu_inc(mthp_stats.stats[order][item]); 290 289 } 290 + #else 291 + static inline void count_mthp_stat(int order, enum mthp_stat_item item) 292 + { 293 + } 294 + #endif 291 295 292 296 #define transparent_hugepage_use_zero_page() \ 293 297 (transparent_hugepage_flags & \
-1
include/linux/i2c.h
··· 852 852 853 853 /* i2c adapter classes (bitmask) */ 854 854 #define I2C_CLASS_HWMON (1<<0) /* lm_sensors, ... */ 855 - #define I2C_CLASS_SPD (1<<7) /* Memory modules */ 856 855 /* Warn users that the adapter doesn't support classes anymore */ 857 856 #define I2C_CLASS_DEPRECATED (1<<8) 858 857
+1 -1
include/linux/iommu.h
··· 1533 1533 static inline struct iommu_sva * 1534 1534 iommu_sva_bind_device(struct device *dev, struct mm_struct *mm) 1535 1535 { 1536 - return NULL; 1536 + return ERR_PTR(-ENODEV); 1537 1537 } 1538 1538 1539 1539 static inline void iommu_sva_unbind_device(struct iommu_sva *handle)
+14 -3
include/linux/ksm.h
··· 33 33 */ 34 34 #define is_ksm_zero_pte(pte) (is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte)) 35 35 36 - extern unsigned long ksm_zero_pages; 36 + extern atomic_long_t ksm_zero_pages; 37 + 38 + static inline void ksm_map_zero_page(struct mm_struct *mm) 39 + { 40 + atomic_long_inc(&ksm_zero_pages); 41 + atomic_long_inc(&mm->ksm_zero_pages); 42 + } 37 43 38 44 static inline void ksm_might_unmap_zero_page(struct mm_struct *mm, pte_t pte) 39 45 { 40 46 if (is_ksm_zero_pte(pte)) { 41 - ksm_zero_pages--; 42 - mm->ksm_zero_pages--; 47 + atomic_long_dec(&ksm_zero_pages); 48 + atomic_long_dec(&mm->ksm_zero_pages); 43 49 } 50 + } 51 + 52 + static inline long mm_ksm_zero_pages(struct mm_struct *mm) 53 + { 54 + return atomic_long_read(&mm->ksm_zero_pages); 44 55 } 45 56 46 57 static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
-5
include/linux/lockdep.h
··· 297 297 .wait_type_inner = _wait_type, \ 298 298 .lock_type = LD_LOCK_WAIT_OVERRIDE, } 299 299 300 - #define lock_map_assert_held(l) \ 301 - lockdep_assert(lock_is_held(l) != LOCK_STATE_NOT_HELD) 302 - 303 300 #else /* !CONFIG_LOCKDEP */ 304 301 305 302 static inline void lockdep_init_task(struct task_struct *task) ··· 387 390 388 391 #define DEFINE_WAIT_OVERRIDE_MAP(_name, _wait_type) \ 389 392 struct lockdep_map __maybe_unused _name = {} 390 - 391 - #define lock_map_assert_held(l) do { (void)(l); } while (0) 392 393 393 394 #endif /* !LOCKDEP */ 394 395
+1 -1
include/linux/mm_types.h
··· 985 985 * Represent how many empty pages are merged with kernel zero 986 986 * pages when enabling KSM use_zero_pages. 987 987 */ 988 - unsigned long ksm_zero_pages; 988 + atomic_long_t ksm_zero_pages; 989 989 #endif /* CONFIG_KSM */ 990 990 #ifdef CONFIG_LRU_GEN_WALKS_MMU 991 991 struct {
-2
include/linux/pci.h
··· 413 413 struct resource driver_exclusive_resource; /* driver exclusive resource ranges */ 414 414 415 415 bool match_driver; /* Skip attaching driver */ 416 - struct lock_class_key cfg_access_key; 417 - struct lockdep_map cfg_access_lock; 418 416 419 417 unsigned int transparent:1; /* Subtractive decode bridge */ 420 418 unsigned int io_window:1; /* Bridge has I/O window */
+2 -4
include/linux/pnp.h
··· 435 435 #define protocol_for_each_dev(protocol, dev) \ 436 436 list_for_each_entry(dev, &(protocol)->devices, protocol_list) 437 437 438 - extern const struct bus_type pnp_bus_type; 439 - 440 438 #if defined(CONFIG_PNP) 441 439 442 440 /* device management */ ··· 467 469 int pnp_register_driver(struct pnp_driver *drv); 468 470 void pnp_unregister_driver(struct pnp_driver *drv); 469 471 470 - #define dev_is_pnp(d) ((d)->bus == &pnp_bus_type) 472 + bool dev_is_pnp(const struct device *dev); 471 473 472 474 #else 473 475 ··· 500 502 static inline int pnp_register_driver(struct pnp_driver *drv) { return -ENODEV; } 501 503 static inline void pnp_unregister_driver(struct pnp_driver *drv) { } 502 504 503 - #define dev_is_pnp(d) false 505 + static inline bool dev_is_pnp(const struct device *dev) { return false; } 504 506 505 507 #endif /* CONFIG_PNP */ 506 508
+1
include/net/rtnetlink.h
··· 13 13 RTNL_FLAG_DOIT_UNLOCKED = BIT(0), 14 14 RTNL_FLAG_BULK_DEL_SUPPORTED = BIT(1), 15 15 RTNL_FLAG_DUMP_UNLOCKED = BIT(2), 16 + RTNL_FLAG_DUMP_SPLIT_NLM_DONE = BIT(3), /* legacy behavior */ 16 17 }; 17 18 18 19 enum rtnl_kinds {
+4 -3
include/net/tcp_ao.h
··· 86 86 struct tcp_ao_info { 87 87 /* List of tcp_ao_key's */ 88 88 struct hlist_head head; 89 - /* current_key and rnext_key aren't maintained on listen sockets. 89 + /* current_key and rnext_key are maintained on sockets 90 + * in TCP_AO_ESTABLISHED states. 90 91 * Their purpose is to cache keys on established connections, 91 92 * saving needless lookups. Never dereference any of them from 92 93 * listen sockets. ··· 202 201 }; 203 202 204 203 struct tcp_sigpool; 204 + /* Established states are fast-path and there always is current_key/rnext_key */ 205 205 #define TCP_AO_ESTABLISHED (TCPF_ESTABLISHED | TCPF_FIN_WAIT1 | TCPF_FIN_WAIT2 | \ 206 - TCPF_CLOSE | TCPF_CLOSE_WAIT | \ 207 - TCPF_LAST_ACK | TCPF_CLOSING) 206 + TCPF_CLOSE_WAIT | TCPF_LAST_ACK | TCPF_CLOSING) 208 207 209 208 int tcp_ao_transmit_skb(struct sock *sk, struct sk_buff *skb, 210 209 struct tcp_ao_key *key, struct tcphdr *th,
+2
include/uapi/linux/input-event-codes.h
··· 618 618 #define KEY_CAMERA_ACCESS_ENABLE 0x24b /* Enables programmatic access to camera devices. (HUTRR72) */ 619 619 #define KEY_CAMERA_ACCESS_DISABLE 0x24c /* Disables programmatic access to camera devices. (HUTRR72) */ 620 620 #define KEY_CAMERA_ACCESS_TOGGLE 0x24d /* Toggles the current state of the camera access control. (HUTRR72) */ 621 + #define KEY_ACCESSIBILITY 0x24e /* Toggles the system bound accessibility UI/command (HUTRR116) */ 622 + #define KEY_DO_NOT_DISTURB 0x24f /* Toggles the system-wide "Do Not Disturb" control (HUTRR94)*/ 621 623 622 624 #define KEY_BRIGHTNESS_MIN 0x250 /* Set Brightness to Minimum */ 623 625 #define KEY_BRIGHTNESS_MAX 0x251 /* Set Brightness to Maximum */
+5 -5
io_uring/io-wq.c
··· 927 927 { 928 928 struct io_wq_acct *acct = io_work_get_acct(wq, work); 929 929 unsigned long work_flags = work->flags; 930 - struct io_cb_cancel_data match; 930 + struct io_cb_cancel_data match = { 931 + .fn = io_wq_work_match_item, 932 + .data = work, 933 + .cancel_all = false, 934 + }; 931 935 bool do_create; 932 936 933 937 /* ··· 969 965 raw_spin_unlock(&wq->lock); 970 966 971 967 /* fatal condition, failed to create the first worker */ 972 - match.fn = io_wq_work_match_item, 973 - match.data = work, 974 - match.cancel_all = false, 975 - 976 968 io_acct_cancel_pending_work(wq, acct, &match); 977 969 } 978 970 }
+1 -1
io_uring/io_uring.h
··· 433 433 { 434 434 if (req->flags & REQ_F_CAN_POLL) 435 435 return true; 436 - if (file_can_poll(req->file)) { 436 + if (req->file && file_can_poll(req->file)) { 437 437 req->flags |= REQ_F_CAN_POLL; 438 438 return true; 439 439 }
+12 -10
io_uring/napi.c
··· 261 261 } 262 262 263 263 /* 264 - * __io_napi_adjust_timeout() - Add napi id to the busy poll list 264 + * __io_napi_adjust_timeout() - adjust busy loop timeout 265 265 * @ctx: pointer to io-uring context structure 266 266 * @iowq: pointer to io wait queue 267 267 * @ts: pointer to timespec or NULL 268 268 * 269 269 * Adjust the busy loop timeout according to timespec and busy poll timeout. 270 + * If the specified NAPI timeout is bigger than the wait timeout, then adjust 271 + * the NAPI timeout accordingly. 270 272 */ 271 273 void __io_napi_adjust_timeout(struct io_ring_ctx *ctx, struct io_wait_queue *iowq, 272 274 struct timespec64 *ts) ··· 276 274 unsigned int poll_to = READ_ONCE(ctx->napi_busy_poll_to); 277 275 278 276 if (ts) { 279 - struct timespec64 poll_to_ts = ns_to_timespec64(1000 * (s64)poll_to); 277 + struct timespec64 poll_to_ts; 280 278 281 - if (timespec64_compare(ts, &poll_to_ts) > 0) { 282 - *ts = timespec64_sub(*ts, poll_to_ts); 283 - } else { 284 - u64 to = timespec64_to_ns(ts); 285 - 286 - do_div(to, 1000); 287 - ts->tv_sec = 0; 288 - ts->tv_nsec = 0; 279 + poll_to_ts = ns_to_timespec64(1000 * (s64)poll_to); 280 + if (timespec64_compare(ts, &poll_to_ts) < 0) { 281 + s64 poll_to_ns = timespec64_to_ns(ts); 282 + if (poll_to_ns > 0) { 283 + u64 val = poll_to_ns + 999; 284 + do_div(val, (s64) 1000); 285 + poll_to = val; 286 + } 289 287 } 290 288 } 291 289
+4
io_uring/register.c
··· 355 355 } 356 356 357 357 if (sqd) { 358 + mutex_unlock(&ctx->uring_lock); 358 359 mutex_unlock(&sqd->lock); 359 360 io_put_sq_data(sqd); 361 + mutex_lock(&ctx->uring_lock); 360 362 } 361 363 362 364 if (copy_to_user(arg, new_count, sizeof(new_count))) ··· 382 380 return 0; 383 381 err: 384 382 if (sqd) { 383 + mutex_unlock(&ctx->uring_lock); 385 384 mutex_unlock(&sqd->lock); 386 385 io_put_sq_data(sqd); 386 + mutex_lock(&ctx->uring_lock); 387 387 } 388 388 return ret; 389 389 }
-3
kernel/bpf/devmap.c
··· 760 760 for (i = 0; i < dtab->n_buckets; i++) { 761 761 head = dev_map_index_hash(dtab, i); 762 762 hlist_for_each_entry_safe(dst, next, head, index_hlist) { 763 - if (!dst) 764 - continue; 765 - 766 763 if (is_ifindex_excluded(excluded_devices, num_excluded, 767 764 dst->dev->ifindex)) 768 765 continue;
+6 -5
kernel/bpf/syscall.c
··· 2998 2998 void bpf_link_init(struct bpf_link *link, enum bpf_link_type type, 2999 2999 const struct bpf_link_ops *ops, struct bpf_prog *prog) 3000 3000 { 3001 + WARN_ON(ops->dealloc && ops->dealloc_deferred); 3001 3002 atomic64_set(&link->refcnt, 1); 3002 3003 link->type = type; 3003 3004 link->id = 0; ··· 3057 3056 /* bpf_link_free is guaranteed to be called from process context */ 3058 3057 static void bpf_link_free(struct bpf_link *link) 3059 3058 { 3059 + const struct bpf_link_ops *ops = link->ops; 3060 3060 bool sleepable = false; 3061 3061 3062 3062 bpf_link_free_id(link->id); 3063 3063 if (link->prog) { 3064 3064 sleepable = link->prog->sleepable; 3065 3065 /* detach BPF program, clean up used resources */ 3066 - link->ops->release(link); 3066 + ops->release(link); 3067 3067 bpf_prog_put(link->prog); 3068 3068 } 3069 - if (link->ops->dealloc_deferred) { 3069 + if (ops->dealloc_deferred) { 3070 3070 /* schedule BPF link deallocation; if underlying BPF program 3071 3071 * is sleepable, we need to first wait for RCU tasks trace 3072 3072 * sync, then go through "classic" RCU grace period ··· 3076 3074 call_rcu_tasks_trace(&link->rcu, bpf_link_defer_dealloc_mult_rcu_gp); 3077 3075 else 3078 3076 call_rcu(&link->rcu, bpf_link_defer_dealloc_rcu_gp); 3079 - } 3080 - if (link->ops->dealloc) 3081 - link->ops->dealloc(link); 3077 + } else if (ops->dealloc) 3078 + ops->dealloc(link); 3082 3079 } 3083 3080 3084 3081 static void bpf_link_put_deferred(struct work_struct *work)
+4
kernel/bpf/verifier.c
··· 11128 11128 #else 11129 11129 BTF_ID_UNUSED 11130 11130 #endif 11131 + #ifdef CONFIG_BPF_EVENTS 11131 11132 BTF_ID(func, bpf_session_cookie) 11133 + #else 11134 + BTF_ID_UNUSED 11135 + #endif 11132 11136 11133 11137 static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta) 11134 11138 {
+13
kernel/events/core.c
··· 5384 5384 again: 5385 5385 mutex_lock(&event->child_mutex); 5386 5386 list_for_each_entry(child, &event->child_list, child_list) { 5387 + void *var = NULL; 5387 5388 5388 5389 /* 5389 5390 * Cannot change, child events are not migrated, see the ··· 5425 5424 * this can't be the last reference. 5426 5425 */ 5427 5426 put_event(event); 5427 + } else { 5428 + var = &ctx->refcount; 5428 5429 } 5429 5430 5430 5431 mutex_unlock(&event->child_mutex); 5431 5432 mutex_unlock(&ctx->mutex); 5432 5433 put_ctx(ctx); 5434 + 5435 + if (var) { 5436 + /* 5437 + * If perf_event_free_task() has deleted all events from the 5438 + * ctx while the child_mutex got released above, make sure to 5439 + * notify about the preceding put_ctx(). 5440 + */ 5441 + smp_mb(); /* pairs with wait_var_event() */ 5442 + wake_up_var(var); 5443 + } 5433 5444 goto again; 5434 5445 } 5435 5446 mutex_unlock(&event->child_mutex);
-2
kernel/trace/bpf_trace.c
··· 3517 3517 } 3518 3518 #endif /* CONFIG_UPROBES */ 3519 3519 3520 - #ifdef CONFIG_FPROBE 3521 3520 __bpf_kfunc_start_defs(); 3522 3521 3523 3522 __bpf_kfunc bool bpf_session_is_return(void) ··· 3565 3566 } 3566 3567 3567 3568 late_initcall(bpf_kprobe_multi_kfuncs_init); 3568 - #endif
+1
lib/test_rhashtable.c
··· 811 811 module_init(test_rht_init); 812 812 module_exit(test_rht_exit); 813 813 814 + MODULE_DESCRIPTION("Resizable, Scalable, Concurrent Hash Table test module"); 814 815 MODULE_LICENSE("GPL v2");
+1 -1
mm/filemap.c
··· 1000 1000 do { 1001 1001 cpuset_mems_cookie = read_mems_allowed_begin(); 1002 1002 n = cpuset_mem_spread_node(); 1003 - folio = __folio_alloc_node(gfp, order, n); 1003 + folio = __folio_alloc_node_noprof(gfp, order, n); 1004 1004 } while (!folio && read_mems_allowed_retry(cpuset_mems_cookie)); 1005 1005 1006 1006 return folio;
+4 -4
mm/huge_memory.c
··· 558 558 DEFINE_MTHP_STAT_ATTR(anon_fault_alloc, MTHP_STAT_ANON_FAULT_ALLOC); 559 559 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK); 560 560 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE); 561 - DEFINE_MTHP_STAT_ATTR(anon_swpout, MTHP_STAT_ANON_SWPOUT); 562 - DEFINE_MTHP_STAT_ATTR(anon_swpout_fallback, MTHP_STAT_ANON_SWPOUT_FALLBACK); 561 + DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT); 562 + DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK); 563 563 564 564 static struct attribute *stats_attrs[] = { 565 565 &anon_fault_alloc_attr.attr, 566 566 &anon_fault_fallback_attr.attr, 567 567 &anon_fault_fallback_charge_attr.attr, 568 - &anon_swpout_attr.attr, 569 - &anon_swpout_fallback_attr.attr, 568 + &swpout_attr.attr, 569 + &swpout_fallback_attr.attr, 570 570 NULL, 571 571 }; 572 572
+14 -2
mm/hugetlb.c
··· 5768 5768 * do_exit() will not see it, and will keep the reservation 5769 5769 * forever. 5770 5770 */ 5771 - if (adjust_reservation && vma_needs_reservation(h, vma, address)) 5772 - vma_add_reservation(h, vma, address); 5771 + if (adjust_reservation) { 5772 + int rc = vma_needs_reservation(h, vma, address); 5773 + 5774 + if (rc < 0) 5775 + /* Presumably allocate_file_region_entries failed 5776 + * to allocate a file_region struct. Clear 5777 + * hugetlb_restore_reserve so that global reserve 5778 + * count will not be incremented by free_huge_folio. 5779 + * Act as if we consumed the reservation. 5780 + */ 5781 + folio_clear_hugetlb_restore_reserve(page_folio(page)); 5782 + else if (rc) 5783 + vma_add_reservation(h, vma, address); 5784 + } 5773 5785 5774 5786 tlb_remove_page_size(tlb, page, huge_page_size(h)); 5775 5787 /*
+11 -4
mm/kmsan/core.c
··· 196 196 u32 origin, bool checked) 197 197 { 198 198 u64 address = (u64)addr; 199 - void *shadow_start; 200 - u32 *origin_start; 199 + u32 *shadow_start, *origin_start; 201 200 size_t pad = 0; 202 201 203 202 KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size)); ··· 224 225 origin_start = 225 226 (u32 *)kmsan_get_metadata((void *)address, KMSAN_META_ORIGIN); 226 227 227 - for (int i = 0; i < size / KMSAN_ORIGIN_SIZE; i++) 228 - origin_start[i] = origin; 228 + /* 229 + * If the new origin is non-zero, assume that the shadow byte is also non-zero, 230 + * and unconditionally overwrite the old origin slot. 231 + * If the new origin is zero, overwrite the old origin slot iff the 232 + * corresponding shadow slot is zero. 233 + */ 234 + for (int i = 0; i < size / KMSAN_ORIGIN_SIZE; i++) { 235 + if (origin || !shadow_start[i]) 236 + origin_start[i] = origin; 237 + } 229 238 } 230 239 231 240 struct page *kmsan_vmalloc_to_page_or_null(void *vaddr)
+7 -10
mm/ksm.c
··· 296 296 static bool ksm_smart_scan = true; 297 297 298 298 /* The number of zero pages which is placed by KSM */ 299 - unsigned long ksm_zero_pages; 299 + atomic_long_t ksm_zero_pages = ATOMIC_LONG_INIT(0); 300 300 301 301 /* The number of pages that have been skipped due to "smart scanning" */ 302 302 static unsigned long ksm_pages_skipped; ··· 1429 1429 * the dirty bit in zero page's PTE is set. 1430 1430 */ 1431 1431 newpte = pte_mkdirty(pte_mkspecial(pfn_pte(page_to_pfn(kpage), vma->vm_page_prot))); 1432 - ksm_zero_pages++; 1433 - mm->ksm_zero_pages++; 1432 + ksm_map_zero_page(mm); 1434 1433 /* 1435 1434 * We're replacing an anonymous page with a zero page, which is 1436 1435 * not anonymous. We need to do proper accounting otherwise we ··· 2753 2754 { 2754 2755 struct ksm_rmap_item *rmap_item; 2755 2756 struct page *page; 2756 - unsigned int npages = scan_npages; 2757 2757 2758 - while (npages-- && likely(!freezing(current))) { 2758 + while (scan_npages-- && likely(!freezing(current))) { 2759 2759 cond_resched(); 2760 2760 rmap_item = scan_get_next_rmap_item(&page); 2761 2761 if (!rmap_item) 2762 2762 return; 2763 2763 cmp_and_merge_page(page, rmap_item); 2764 2764 put_page(page); 2765 + ksm_pages_scanned++; 2765 2766 } 2766 - 2767 - ksm_pages_scanned += scan_npages - npages; 2768 2767 } 2769 2768 2770 2769 static int ksmd_should_run(void) ··· 3373 3376 #ifdef CONFIG_PROC_FS 3374 3377 long ksm_process_profit(struct mm_struct *mm) 3375 3378 { 3376 - return (long)(mm->ksm_merging_pages + mm->ksm_zero_pages) * PAGE_SIZE - 3379 + return (long)(mm->ksm_merging_pages + mm_ksm_zero_pages(mm)) * PAGE_SIZE - 3377 3380 mm->ksm_rmap_items * sizeof(struct ksm_rmap_item); 3378 3381 } 3379 3382 #endif /* CONFIG_PROC_FS */ ··· 3662 3665 static ssize_t ksm_zero_pages_show(struct kobject *kobj, 3663 3666 struct kobj_attribute *attr, char *buf) 3664 3667 { 3665 - return sysfs_emit(buf, "%ld\n", ksm_zero_pages); 3668 + return sysfs_emit(buf, "%ld\n", 
atomic_long_read(&ksm_zero_pages)); 3666 3669 } 3667 3670 KSM_ATTR_RO(ksm_zero_pages); 3668 3671 ··· 3671 3674 { 3672 3675 long general_profit; 3673 3676 3674 - general_profit = (ksm_pages_sharing + ksm_zero_pages) * PAGE_SIZE - 3677 + general_profit = (ksm_pages_sharing + atomic_long_read(&ksm_zero_pages)) * PAGE_SIZE - 3675 3678 ksm_rmap_items * sizeof(struct ksm_rmap_item); 3676 3679 3677 3680 return sysfs_emit(buf, "%ld\n", general_profit);
-2
mm/memcontrol.c
··· 3147 3147 struct mem_cgroup *memcg; 3148 3148 struct lruvec *lruvec; 3149 3149 3150 - lockdep_assert_irqs_disabled(); 3151 - 3152 3150 rcu_read_lock(); 3153 3151 memcg = obj_cgroup_memcg(objcg); 3154 3152 lruvec = mem_cgroup_lruvec(memcg, pgdat);
+1 -1
mm/mempool.c
··· 273 273 { 274 274 mempool_t *pool; 275 275 276 - pool = kzalloc_node(sizeof(*pool), gfp_mask, node_id); 276 + pool = kmalloc_node_noprof(sizeof(*pool), gfp_mask | __GFP_ZERO, node_id); 277 277 if (!pool) 278 278 return NULL; 279 279
+34 -16
mm/page_alloc.c
··· 1955 1955 } 1956 1956 1957 1957 /* 1958 - * Reserve a pageblock for exclusive use of high-order atomic allocations if 1959 - * there are no empty page blocks that contain a page with a suitable order 1958 + * Reserve the pageblock(s) surrounding an allocation request for 1959 + * exclusive use of high-order atomic allocations if there are no 1960 + * empty page blocks that contain a page with a suitable order 1960 1961 */ 1961 - static void reserve_highatomic_pageblock(struct page *page, struct zone *zone) 1962 + static void reserve_highatomic_pageblock(struct page *page, int order, 1963 + struct zone *zone) 1962 1964 { 1963 1965 int mt; 1964 1966 unsigned long max_managed, flags; ··· 1986 1984 /* Yoink! */ 1987 1985 mt = get_pageblock_migratetype(page); 1988 1986 /* Only reserve normal pageblocks (i.e., they can merge with others) */ 1989 - if (migratetype_is_mergeable(mt)) 1990 - if (move_freepages_block(zone, page, mt, 1991 - MIGRATE_HIGHATOMIC) != -1) 1992 - zone->nr_reserved_highatomic += pageblock_nr_pages; 1987 + if (!migratetype_is_mergeable(mt)) 1988 + goto out_unlock; 1989 + 1990 + if (order < pageblock_order) { 1991 + if (move_freepages_block(zone, page, mt, MIGRATE_HIGHATOMIC) == -1) 1992 + goto out_unlock; 1993 + zone->nr_reserved_highatomic += pageblock_nr_pages; 1994 + } else { 1995 + change_pageblock_range(page, order, MIGRATE_HIGHATOMIC); 1996 + zone->nr_reserved_highatomic += 1 << order; 1997 + } 1993 1998 1994 1999 out_unlock: 1995 2000 spin_unlock_irqrestore(&zone->lock, flags); ··· 2008 1999 * intense memory pressure but failed atomic allocations should be easier 2009 2000 * to recover from than an OOM. 2010 2001 * 2011 - * If @force is true, try to unreserve a pageblock even though highatomic 2002 + * If @force is true, try to unreserve pageblocks even though highatomic 2012 2003 * pageblock is exhausted. 2013 2004 */ 2014 2005 static bool unreserve_highatomic_pageblock(const struct alloc_context *ac, ··· 2050 2041 * adjust the count once. 
2051 2042 */ 2052 2043 if (is_migrate_highatomic(mt)) { 2044 + unsigned long size; 2053 2045 /* 2054 2046 * It should never happen but changes to 2055 2047 * locking could inadvertently allow a per-cpu ··· 2058 2048 * while unreserving so be safe and watch for 2059 2049 * underflows. 2060 2050 */ 2061 - zone->nr_reserved_highatomic -= min( 2062 - pageblock_nr_pages, 2063 - zone->nr_reserved_highatomic); 2051 + size = max(pageblock_nr_pages, 1UL << order); 2052 + size = min(size, zone->nr_reserved_highatomic); 2053 + zone->nr_reserved_highatomic -= size; 2064 2054 } 2065 2055 2066 2056 /* ··· 2072 2062 * of pageblocks that cannot be completely freed 2073 2063 * may increase. 2074 2064 */ 2075 - ret = move_freepages_block(zone, page, mt, 2076 - ac->migratetype); 2065 + if (order < pageblock_order) 2066 + ret = move_freepages_block(zone, page, mt, 2067 + ac->migratetype); 2068 + else { 2069 + move_to_free_list(page, zone, order, mt, 2070 + ac->migratetype); 2071 + change_pageblock_range(page, order, 2072 + ac->migratetype); 2073 + ret = 1; 2074 + } 2077 2075 /* 2078 - * Reserving this block already succeeded, so this should 2079 - * not fail on zone boundaries. 2076 + * Reserving the block(s) already succeeded, 2077 + * so this should not fail on zone boundaries. 2080 2078 */ 2081 2079 WARN_ON_ONCE(ret == -1); 2082 2080 if (ret > 0) { ··· 3424 3406 * if the pageblock should be reserved for the future 3425 3407 */ 3426 3408 if (unlikely(alloc_flags & ALLOC_HIGHATOMIC)) 3427 - reserve_highatomic_pageblock(page, zone); 3409 + reserve_highatomic_pageblock(page, order, zone); 3428 3410 3429 3411 return page; 3430 3412 } else {
+1 -1
mm/page_io.c
··· 217 217 count_memcg_folio_events(folio, THP_SWPOUT, 1); 218 218 count_vm_event(THP_SWPOUT); 219 219 } 220 - count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_SWPOUT); 220 + count_mthp_stat(folio_order(folio), MTHP_STAT_SWPOUT); 221 221 #endif 222 222 count_vm_events(PSWPOUT, folio_nr_pages(folio)); 223 223 }
+3 -2
mm/slub.c
··· 1952 1952 #ifdef CONFIG_MEMCG 1953 1953 new_exts |= MEMCG_DATA_OBJEXTS; 1954 1954 #endif 1955 - old_exts = slab->obj_exts; 1955 + old_exts = READ_ONCE(slab->obj_exts); 1956 1956 handle_failed_objexts_alloc(old_exts, vec, objects); 1957 1957 if (new_slab) { 1958 1958 /* ··· 1961 1961 * be simply assigned. 1962 1962 */ 1963 1963 slab->obj_exts = new_exts; 1964 - } else if (cmpxchg(&slab->obj_exts, old_exts, new_exts) != old_exts) { 1964 + } else if ((old_exts & ~OBJEXTS_FLAGS_MASK) || 1965 + cmpxchg(&slab->obj_exts, old_exts, new_exts) != old_exts) { 1965 1966 /* 1966 1967 * If the slab is already in use, somebody can allocate and 1967 1968 * assign slabobj_exts in parallel. In this case the existing
+5 -5
mm/util.c
··· 705 705 706 706 if (oldsize >= newsize) 707 707 return (void *)p; 708 - newp = kvmalloc(newsize, flags); 708 + newp = kvmalloc_noprof(newsize, flags); 709 709 if (!newp) 710 710 return NULL; 711 711 memcpy(newp, p, oldsize); ··· 726 726 727 727 if (unlikely(check_mul_overflow(n, size, &bytes))) 728 728 return NULL; 729 - return __vmalloc(bytes, flags); 729 + return __vmalloc_noprof(bytes, flags); 730 730 } 731 731 EXPORT_SYMBOL(__vmalloc_array_noprof); 732 732 ··· 737 737 */ 738 738 void *vmalloc_array_noprof(size_t n, size_t size) 739 739 { 740 - return __vmalloc_array(n, size, GFP_KERNEL); 740 + return __vmalloc_array_noprof(n, size, GFP_KERNEL); 741 741 } 742 742 EXPORT_SYMBOL(vmalloc_array_noprof); 743 743 ··· 749 749 */ 750 750 void *__vcalloc_noprof(size_t n, size_t size, gfp_t flags) 751 751 { 752 - return __vmalloc_array(n, size, flags | __GFP_ZERO); 752 + return __vmalloc_array_noprof(n, size, flags | __GFP_ZERO); 753 753 } 754 754 EXPORT_SYMBOL(__vcalloc_noprof); 755 755 ··· 760 760 */ 761 761 void *vcalloc_noprof(size_t n, size_t size) 762 762 { 763 - return __vmalloc_array(n, size, GFP_KERNEL | __GFP_ZERO); 763 + return __vmalloc_array_noprof(n, size, GFP_KERNEL | __GFP_ZERO); 764 764 } 765 765 EXPORT_SYMBOL(vcalloc_noprof); 766 766
+1 -1
mm/vmalloc.c
··· 722 722 * and fall back on vmalloc() if that fails. Others 723 723 * just put it in the vmalloc space. 724 724 */ 725 - #if defined(CONFIG_MODULES) && defined(MODULES_VADDR) 725 + #if defined(CONFIG_EXECMEM) && defined(MODULES_VADDR) 726 726 unsigned long addr = (unsigned long)kasan_reset_tag(x); 727 727 if (addr >= MODULES_VADDR && addr < MODULES_END) 728 728 return 1;
+1 -1
mm/vmscan.c
··· 1227 1227 THP_SWPOUT_FALLBACK, 1); 1228 1228 count_vm_event(THP_SWPOUT_FALLBACK); 1229 1229 } 1230 - count_mthp_stat(order, MTHP_STAT_ANON_SWPOUT_FALLBACK); 1230 + count_mthp_stat(order, MTHP_STAT_SWPOUT_FALLBACK); 1231 1231 #endif 1232 1232 if (!add_to_swap(folio)) 1233 1233 goto activate_locked_split;
+6
net/ax25/af_ax25.c
··· 1378 1378 { 1379 1379 struct sk_buff *skb; 1380 1380 struct sock *newsk; 1381 + ax25_dev *ax25_dev; 1381 1382 DEFINE_WAIT(wait); 1382 1383 struct sock *sk; 1384 + ax25_cb *ax25; 1383 1385 int err = 0; 1384 1386 1385 1387 if (sock->state != SS_UNCONNECTED) ··· 1436 1434 kfree_skb(skb); 1437 1435 sk_acceptq_removed(sk); 1438 1436 newsock->state = SS_CONNECTED; 1437 + ax25 = sk_to_ax25(newsk); 1438 + ax25_dev = ax25->ax25_dev; 1439 + netdev_hold(ax25_dev->dev, &ax25->dev_tracker, GFP_ATOMIC); 1440 + ax25_dev_hold(ax25_dev); 1439 1441 1440 1442 out: 1441 1443 release_sock(sk);
+1 -1
net/ax25/ax25_dev.c
··· 196 196 list_for_each_entry_safe(s, n, &ax25_dev_list, list) { 197 197 netdev_put(s->dev, &s->dev_tracker); 198 198 list_del(&s->list); 199 - kfree(s); 199 + ax25_dev_put(s); 200 200 } 201 201 spin_unlock_bh(&ax25_dev_lock); 202 202 }
+6
net/bpf/test_run.c
··· 727 727 __bpf_prog_test_run_raw_tp(void *data) 728 728 { 729 729 struct bpf_raw_tp_test_run_info *info = data; 730 + struct bpf_trace_run_ctx run_ctx = {}; 731 + struct bpf_run_ctx *old_run_ctx; 732 + 733 + old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx); 730 734 731 735 rcu_read_lock(); 732 736 info->retval = bpf_prog_run(info->prog, info->ctx); 733 737 rcu_read_unlock(); 738 + 739 + bpf_reset_run_ctx(old_run_ctx); 734 740 } 735 741 736 742 int bpf_prog_test_run_raw_tp(struct bpf_prog *prog,
+2 -1
net/core/dev.c
··· 4516 4516 struct rps_dev_flow *rflow, u16 next_cpu) 4517 4517 { 4518 4518 if (next_cpu < nr_cpu_ids) { 4519 + u32 head; 4519 4520 #ifdef CONFIG_RFS_ACCEL 4520 4521 struct netdev_rx_queue *rxqueue; 4521 4522 struct rps_dev_flow_table *flow_table; 4522 4523 struct rps_dev_flow *old_rflow; 4523 - u32 flow_id, head; 4524 4524 u16 rxq_index; 4525 + u32 flow_id; 4525 4526 int rc; 4526 4527 4527 4528 /* Should we steer this flow to a different hardware queue? */
+2
net/core/dst_cache.c
··· 27 27 static void dst_cache_per_cpu_dst_set(struct dst_cache_pcpu *dst_cache, 28 28 struct dst_entry *dst, u32 cookie) 29 29 { 30 + DEBUG_NET_WARN_ON_ONCE(!in_softirq()); 30 31 dst_release(dst_cache->dst); 31 32 if (dst) 32 33 dst_hold(dst); ··· 41 40 { 42 41 struct dst_entry *dst; 43 42 43 + DEBUG_NET_WARN_ON_ONCE(!in_softirq()); 44 44 dst = idst->dst; 45 45 if (!dst) 46 46 goto fail;
+42 -2
net/core/rtnetlink.c
··· 6484 6484 6485 6485 /* Process one rtnetlink message. */ 6486 6486 6487 + static int rtnl_dumpit(struct sk_buff *skb, struct netlink_callback *cb) 6488 + { 6489 + rtnl_dumpit_func dumpit = cb->data; 6490 + int err; 6491 + 6492 + /* Previous iteration has already finished; avoid calling ->dumpit() 6493 + * again, it may not expect to be called after it reached the end. 6494 + */ 6495 + if (!dumpit) 6496 + return 0; 6497 + 6498 + err = dumpit(skb, cb); 6499 + 6500 + /* Old dump handlers used to send NLM_DONE in a separate recvmsg(). 6501 + * Some applications which parse netlink manually depend on this. 6502 + */ 6503 + if (cb->flags & RTNL_FLAG_DUMP_SPLIT_NLM_DONE) { 6504 + if (err < 0 && err != -EMSGSIZE) 6505 + return err; 6506 + if (!err) 6507 + cb->data = NULL; 6508 + 6509 + return skb->len; 6510 + } 6511 + return err; 6512 + } 6513 + 6514 + static int rtnetlink_dump_start(struct sock *ssk, struct sk_buff *skb, 6515 + const struct nlmsghdr *nlh, 6516 + struct netlink_dump_control *control) 6517 + { 6518 + if (control->flags & RTNL_FLAG_DUMP_SPLIT_NLM_DONE) { 6519 + WARN_ON(control->data); 6520 + control->data = control->dump; 6521 + control->dump = rtnl_dumpit; 6522 + } 6523 + 6524 + return netlink_dump_start(ssk, skb, nlh, control); 6525 + } 6526 + 6487 6527 static int rtnetlink_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, 6488 6528 struct netlink_ext_ack *extack) 6489 6529 { ··· 6588 6548 .module = owner, 6589 6549 .flags = flags, 6590 6550 }; 6591 - err = netlink_dump_start(rtnl, skb, nlh, &c); 6551 + err = rtnetlink_dump_start(rtnl, skb, nlh, &c); 6592 6552 /* netlink_dump_start() will keep a reference on 6593 6553 * module if dump is still in progress. 
6594 6554 */ ··· 6734 6694 register_netdevice_notifier(&rtnetlink_dev_notifier); 6735 6695 6736 6696 rtnl_register(PF_UNSPEC, RTM_GETLINK, rtnl_getlink, 6737 - rtnl_dump_ifinfo, 0); 6697 + rtnl_dump_ifinfo, RTNL_FLAG_DUMP_SPLIT_NLM_DONE); 6738 6698 rtnl_register(PF_UNSPEC, RTM_SETLINK, rtnl_setlink, NULL, 0); 6739 6699 rtnl_register(PF_UNSPEC, RTM_NEWLINK, rtnl_newlink, NULL, 0); 6740 6700 rtnl_register(PF_UNSPEC, RTM_DELLINK, rtnl_dellink, NULL, 0);
+1 -1
net/ethtool/ioctl.c
··· 2220 2220 const struct ethtool_ops *ops = dev->ethtool_ops; 2221 2221 int n_stats, ret; 2222 2222 2223 - if (!ops || !ops->get_sset_count || ops->get_ethtool_phy_stats) 2223 + if (!ops || !ops->get_sset_count || !ops->get_ethtool_phy_stats) 2224 2224 return -EOPNOTSUPP; 2225 2225 2226 2226 n_stats = ops->get_sset_count(dev, ETH_SS_PHY_STATS);
+3 -3
net/ethtool/tsinfo.c
··· 38 38 ret = ethnl_ops_begin(dev); 39 39 if (ret < 0) 40 40 return ret; 41 - if (req_base->flags & ETHTOOL_FLAG_STATS && 42 - dev->ethtool_ops->get_ts_stats) { 41 + if (req_base->flags & ETHTOOL_FLAG_STATS) { 43 42 ethtool_stats_init((u64 *)&data->stats, 44 43 sizeof(data->stats) / sizeof(u64)); 45 - dev->ethtool_ops->get_ts_stats(dev, &data->stats); 44 + if (dev->ethtool_ops->get_ts_stats) 45 + dev->ethtool_ops->get_ts_stats(dev, &data->stats); 46 46 } 47 47 ret = __ethtool_get_ts_info(dev, &data->ts_info); 48 48 ethnl_ops_complete(dev);
+1 -1
net/ipv4/devinet.c
··· 2805 2805 rtnl_register(PF_INET, RTM_NEWADDR, inet_rtm_newaddr, NULL, 0); 2806 2806 rtnl_register(PF_INET, RTM_DELADDR, inet_rtm_deladdr, NULL, 0); 2807 2807 rtnl_register(PF_INET, RTM_GETADDR, NULL, inet_dump_ifaddr, 2808 - RTNL_FLAG_DUMP_UNLOCKED); 2808 + RTNL_FLAG_DUMP_UNLOCKED | RTNL_FLAG_DUMP_SPLIT_NLM_DONE); 2809 2809 rtnl_register(PF_INET, RTM_GETNETCONF, inet_netconf_get_devconf, 2810 2810 inet_netconf_dump_devconf, 2811 2811 RTNL_FLAG_DOIT_UNLOCKED | RTNL_FLAG_DUMP_UNLOCKED);
+1 -6
net/ipv4/fib_frontend.c
··· 1050 1050 e++; 1051 1051 } 1052 1052 } 1053 - 1054 - /* Don't let NLM_DONE coalesce into a message, even if it could. 1055 - * Some user space expects NLM_DONE in a separate recv(). 1056 - */ 1057 - err = skb->len; 1058 1053 out: 1059 1054 1060 1055 cb->args[1] = e; ··· 1660 1665 rtnl_register(PF_INET, RTM_NEWROUTE, inet_rtm_newroute, NULL, 0); 1661 1666 rtnl_register(PF_INET, RTM_DELROUTE, inet_rtm_delroute, NULL, 0); 1662 1667 rtnl_register(PF_INET, RTM_GETROUTE, NULL, inet_dump_fib, 1663 - RTNL_FLAG_DUMP_UNLOCKED); 1668 + RTNL_FLAG_DUMP_UNLOCKED | RTNL_FLAG_DUMP_SPLIT_NLM_DONE); 1664 1669 }
+8 -1
net/ipv4/tcp.c
··· 1165 1165 1166 1166 process_backlog++; 1167 1167 1168 + #ifdef CONFIG_SKB_DECRYPTED 1169 + skb->decrypted = !!(flags & MSG_SENDPAGE_DECRYPTED); 1170 + #endif 1168 1171 tcp_skb_entail(sk, skb); 1169 1172 copy = size_goal; 1170 1173 ··· 2649 2646 if (oldstate != TCP_ESTABLISHED) 2650 2647 TCP_INC_STATS(sock_net(sk), TCP_MIB_CURRESTAB); 2651 2648 break; 2649 + case TCP_CLOSE_WAIT: 2650 + if (oldstate == TCP_SYN_RECV) 2651 + TCP_INC_STATS(sock_net(sk), TCP_MIB_CURRESTAB); 2652 + break; 2652 2653 2653 2654 case TCP_CLOSE: 2654 2655 if (oldstate == TCP_CLOSE_WAIT || oldstate == TCP_ESTABLISHED) ··· 2664 2657 inet_put_port(sk); 2665 2658 fallthrough; 2666 2659 default: 2667 - if (oldstate == TCP_ESTABLISHED) 2660 + if (oldstate == TCP_ESTABLISHED || oldstate == TCP_CLOSE_WAIT) 2668 2661 TCP_DEC_STATS(sock_net(sk), TCP_MIB_CURRESTAB); 2669 2662 } 2670 2663
+9 -4
net/ipv4/tcp_ao.c
··· 933 933 struct tcp_ao_key *key; 934 934 __be32 sisn, disn; 935 935 u8 *traffic_key; 936 + int state; 936 937 u32 sne = 0; 937 938 938 939 info = rcu_dereference(tcp_sk(sk)->ao_info); ··· 949 948 disn = 0; 950 949 } 951 950 951 + state = READ_ONCE(sk->sk_state); 952 952 /* Fast-path */ 953 - if (likely((1 << sk->sk_state) & TCP_AO_ESTABLISHED)) { 953 + if (likely((1 << state) & TCP_AO_ESTABLISHED)) { 954 954 enum skb_drop_reason err; 955 955 struct tcp_ao_key *current_key; 956 956 ··· 990 988 return SKB_NOT_DROPPED_YET; 991 989 } 992 990 991 + if (unlikely(state == TCP_CLOSE)) 992 + return SKB_DROP_REASON_TCP_CLOSE; 993 + 993 994 /* Lookup key based on peer address and keyid. 994 995 * current_key and rnext_key must not be used on tcp listen 995 996 * sockets as otherwise: ··· 1006 1001 if (th->syn && !th->ack) 1007 1002 goto verify_hash; 1008 1003 1009 - if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_NEW_SYN_RECV)) { 1004 + if ((1 << state) & (TCPF_LISTEN | TCPF_NEW_SYN_RECV)) { 1010 1005 /* Make the initial syn the likely case here */ 1011 1006 if (unlikely(req)) { 1012 1007 sne = tcp_ao_compute_sne(0, tcp_rsk(req)->rcv_isn, ··· 1023 1018 /* no way to figure out initial sisn/disn - drop */ 1024 1019 return SKB_DROP_REASON_TCP_FLAGS; 1025 1020 } 1026 - } else if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) { 1021 + } else if ((1 << state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) { 1027 1022 disn = info->lisn; 1028 1023 if (th->syn || th->rst) 1029 1024 sisn = th->seq; 1030 1025 else 1031 1026 sisn = info->risn; 1032 1027 } else { 1033 - WARN_ONCE(1, "TCP-AO: Unexpected sk_state %d", sk->sk_state); 1028 + WARN_ONCE(1, "TCP-AO: Unexpected sk_state %d", state); 1034 1029 return SKB_DROP_REASON_TCP_AOFAILURE; 1035 1030 } 1036 1031 verify_hash:
+6 -1
net/ipv6/ila/ila_lwt.c
··· 58 58 return orig_dst->lwtstate->orig_output(net, sk, skb); 59 59 } 60 60 61 + local_bh_disable(); 61 62 dst = dst_cache_get(&ilwt->dst_cache); 63 + local_bh_enable(); 62 64 if (unlikely(!dst)) { 63 65 struct ipv6hdr *ip6h = ipv6_hdr(skb); 64 66 struct flowi6 fl6; ··· 88 86 goto drop; 89 87 } 90 88 91 - if (ilwt->connected) 89 + if (ilwt->connected) { 90 + local_bh_disable(); 92 91 dst_cache_set_ip6(&ilwt->dst_cache, dst, &fl6.saddr); 92 + local_bh_enable(); 93 + } 93 94 } 94 95 95 96 skb_dst_set(skb, dst);
+4 -4
net/ipv6/ioam6_iptunnel.c
··· 351 351 goto drop; 352 352 353 353 if (!ipv6_addr_equal(&orig_daddr, &ipv6_hdr(skb)->daddr)) { 354 - preempt_disable(); 354 + local_bh_disable(); 355 355 dst = dst_cache_get(&ilwt->cache); 356 - preempt_enable(); 356 + local_bh_enable(); 357 357 358 358 if (unlikely(!dst)) { 359 359 struct ipv6hdr *hdr = ipv6_hdr(skb); ··· 373 373 goto drop; 374 374 } 375 375 376 - preempt_disable(); 376 + local_bh_disable(); 377 377 dst_cache_set_ip6(&ilwt->cache, dst, &fl6.saddr); 378 - preempt_enable(); 378 + local_bh_enable(); 379 379 } 380 380 381 381 skb_dst_drop(skb);
+5 -1
net/ipv6/ip6_fib.c
··· 966 966 if (!fib6_nh->rt6i_pcpu) 967 967 return; 968 968 969 + rcu_read_lock(); 969 970 /* release the reference to this fib entry from 970 971 * all of its cached pcpu routes 971 972 */ ··· 975 974 struct rt6_info *pcpu_rt; 976 975 977 976 ppcpu_rt = per_cpu_ptr(fib6_nh->rt6i_pcpu, cpu); 978 - pcpu_rt = *ppcpu_rt; 977 + 978 + /* Paired with xchg() in rt6_get_pcpu_route() */ 979 + pcpu_rt = READ_ONCE(*ppcpu_rt); 979 980 980 981 /* only dropping the 'from' reference if the cached route 981 982 * is using 'match'. The cached pcpu_rt->from only changes ··· 991 988 fib6_info_release(from); 992 989 } 993 990 } 991 + rcu_read_unlock(); 994 992 } 995 993 996 994 struct fib6_nh_pcpu_arg {
+1
net/ipv6/route.c
··· 1409 1409 struct rt6_info *prev, **p; 1410 1410 1411 1411 p = this_cpu_ptr(res->nh->rt6i_pcpu); 1412 + /* Paired with READ_ONCE() in __fib6_drop_pcpu_from() */ 1412 1413 prev = xchg(p, NULL); 1413 1414 if (prev) { 1414 1415 dst_dev_put(&prev->dst);
+6 -8
net/ipv6/rpl_iptunnel.c
··· 212 212 if (unlikely(err)) 213 213 goto drop; 214 214 215 - preempt_disable(); 215 + local_bh_disable(); 216 216 dst = dst_cache_get(&rlwt->cache); 217 - preempt_enable(); 217 + local_bh_enable(); 218 218 219 219 if (unlikely(!dst)) { 220 220 struct ipv6hdr *hdr = ipv6_hdr(skb); ··· 234 234 goto drop; 235 235 } 236 236 237 - preempt_disable(); 237 + local_bh_disable(); 238 238 dst_cache_set_ip6(&rlwt->cache, dst, &fl6.saddr); 239 - preempt_enable(); 239 + local_bh_enable(); 240 240 } 241 241 242 242 skb_dst_drop(skb); ··· 268 268 return err; 269 269 } 270 270 271 - preempt_disable(); 271 + local_bh_disable(); 272 272 dst = dst_cache_get(&rlwt->cache); 273 - preempt_enable(); 274 273 275 274 if (!dst) { 276 275 ip6_route_input(skb); 277 276 dst = skb_dst(skb); 278 277 if (!dst->error) { 279 - preempt_disable(); 280 278 dst_cache_set_ip6(&rlwt->cache, dst, 281 279 &ipv6_hdr(skb)->saddr); 282 - preempt_enable(); 283 280 } 284 281 } else { 285 282 skb_dst_drop(skb); 286 283 skb_dst_set(skb, dst); 287 284 } 285 + local_bh_enable(); 288 286 289 287 err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); 290 288 if (unlikely(err))
+6 -8
net/ipv6/seg6_iptunnel.c
··· 464 464 465 465 slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate); 466 466 467 - preempt_disable(); 467 + local_bh_disable(); 468 468 dst = dst_cache_get(&slwt->cache); 469 - preempt_enable(); 470 469 471 470 if (!dst) { 472 471 ip6_route_input(skb); 473 472 dst = skb_dst(skb); 474 473 if (!dst->error) { 475 - preempt_disable(); 476 474 dst_cache_set_ip6(&slwt->cache, dst, 477 475 &ipv6_hdr(skb)->saddr); 478 - preempt_enable(); 479 476 } 480 477 } else { 481 478 skb_dst_drop(skb); 482 479 skb_dst_set(skb, dst); 483 480 } 481 + local_bh_enable(); 484 482 485 483 err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev)); 486 484 if (unlikely(err)) ··· 534 536 535 537 slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate); 536 538 537 - preempt_disable(); 539 + local_bh_disable(); 538 540 dst = dst_cache_get(&slwt->cache); 539 - preempt_enable(); 541 + local_bh_enable(); 540 542 541 543 if (unlikely(!dst)) { 542 544 struct ipv6hdr *hdr = ipv6_hdr(skb); ··· 556 558 goto drop; 557 559 } 558 560 559 - preempt_disable(); 561 + local_bh_disable(); 560 562 dst_cache_set_ip6(&slwt->cache, dst, &fl6.saddr); 561 - preempt_enable(); 563 + local_bh_enable(); 562 564 } 563 565 564 566 skb_dst_drop(skb);
+5 -4
net/mac80211/cfg.c
··· 2958 2958 memcpy(sdata->vif.bss_conf.mcast_rate, rate, 2959 2959 sizeof(int) * NUM_NL80211_BANDS); 2960 2960 2961 - ieee80211_link_info_change_notify(sdata, &sdata->deflink, 2962 - BSS_CHANGED_MCAST_RATE); 2961 + if (ieee80211_sdata_running(sdata)) 2962 + ieee80211_link_info_change_notify(sdata, &sdata->deflink, 2963 + BSS_CHANGED_MCAST_RATE); 2963 2964 2964 2965 return 0; 2965 2966 } ··· 4017 4016 goto out; 4018 4017 } 4019 4018 4020 - link_data->csa_chanreq = chanreq; 4019 + link_data->csa_chanreq = chanreq; 4021 4020 link_conf->csa_active = true; 4022 4021 4023 4022 if (params->block_tx && ··· 4028 4027 } 4029 4028 4030 4029 cfg80211_ch_switch_started_notify(sdata->dev, 4031 - &link_data->csa_chanreq.oper, 0, 4030 + &link_data->csa_chanreq.oper, link_id, 4032 4031 params->count, params->block_tx); 4033 4032 4034 4033 if (changed) {
+8 -2
net/mac80211/he.c
··· 230 230 231 231 if (!he_spr_ie_elem) 232 232 return; 233 + 234 + he_obss_pd->sr_ctrl = he_spr_ie_elem->he_sr_control; 233 235 data = he_spr_ie_elem->optional; 234 236 235 237 if (he_spr_ie_elem->he_sr_control & 236 238 IEEE80211_HE_SPR_NON_SRG_OFFSET_PRESENT) 237 - data++; 239 + he_obss_pd->non_srg_max_offset = *data++; 240 + 238 241 if (he_spr_ie_elem->he_sr_control & 239 242 IEEE80211_HE_SPR_SRG_INFORMATION_PRESENT) { 240 - he_obss_pd->max_offset = *data++; 241 243 he_obss_pd->min_offset = *data++; 244 + he_obss_pd->max_offset = *data++; 245 + memcpy(he_obss_pd->bss_color_bitmap, data, 8); 246 + data += 8; 247 + memcpy(he_obss_pd->partial_bssid_bitmap, data, 8); 242 248 he_obss_pd->enable = true; 243 249 } 244 250 }
+2
net/mac80211/ieee80211_i.h
··· 1845 1845 void ieee80211_configure_filter(struct ieee80211_local *local); 1846 1846 u64 ieee80211_reset_erp_info(struct ieee80211_sub_if_data *sdata); 1847 1847 1848 + void ieee80211_handle_queued_frames(struct ieee80211_local *local); 1849 + 1848 1850 u64 ieee80211_mgmt_tx_cookie(struct ieee80211_local *local); 1849 1851 int ieee80211_attach_ack_skb(struct ieee80211_local *local, struct sk_buff *skb, 1850 1852 u64 *cookie, gfp_t gfp);
+8 -2
net/mac80211/main.c
··· 423 423 BSS_CHANGED_ERP_SLOT; 424 424 } 425 425 426 - static void ieee80211_tasklet_handler(struct tasklet_struct *t) 426 + void ieee80211_handle_queued_frames(struct ieee80211_local *local) 427 427 { 428 - struct ieee80211_local *local = from_tasklet(local, t, tasklet); 429 428 struct sk_buff *skb; 430 429 431 430 while ((skb = skb_dequeue(&local->skb_queue)) || ··· 447 448 break; 448 449 } 449 450 } 451 + } 452 + 453 + static void ieee80211_tasklet_handler(struct tasklet_struct *t) 454 + { 455 + struct ieee80211_local *local = from_tasklet(local, t, tasklet); 456 + 457 + ieee80211_handle_queued_frames(local); 450 458 } 451 459 452 460 static void ieee80211_restart_work(struct work_struct *work)
+1
net/mac80211/mesh.c
··· 1776 1776 ifmsh->last_preq = jiffies; 1777 1777 ifmsh->next_perr = jiffies; 1778 1778 ifmsh->csa_role = IEEE80211_MESH_CSA_ROLE_NONE; 1779 + ifmsh->nonpeer_pm = NL80211_MESH_POWER_ACTIVE; 1779 1780 /* Allocate all mesh structures when creating the first mesh interface. */ 1780 1781 if (!mesh_allocated) 1781 1782 ieee80211s_init();
+13
net/mac80211/mesh_pathtbl.c
··· 1017 1017 */ 1018 1018 void mesh_path_flush_pending(struct mesh_path *mpath) 1019 1019 { 1020 + struct ieee80211_sub_if_data *sdata = mpath->sdata; 1021 + struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh; 1022 + struct mesh_preq_queue *preq, *tmp; 1020 1023 struct sk_buff *skb; 1021 1024 1022 1025 while ((skb = skb_dequeue(&mpath->frame_queue)) != NULL) 1023 1026 mesh_path_discard_frame(mpath->sdata, skb); 1027 + 1028 + spin_lock_bh(&ifmsh->mesh_preq_queue_lock); 1029 + list_for_each_entry_safe(preq, tmp, &ifmsh->preq_queue.list, list) { 1030 + if (ether_addr_equal(mpath->dst, preq->dst)) { 1031 + list_del(&preq->list); 1032 + kfree(preq); 1033 + --ifmsh->preq_queue_len; 1034 + } 1035 + } 1036 + spin_unlock_bh(&ifmsh->mesh_preq_queue_lock); 1024 1037 } 1025 1038 1026 1039 /**
+1 -1
net/mac80211/parse.c
··· 111 111 if (params->mode < IEEE80211_CONN_MODE_HE) 112 112 break; 113 113 if (len >= sizeof(*elems->he_spr) && 114 - len >= ieee80211_he_spr_size(data)) 114 + len >= ieee80211_he_spr_size(data) - 1) 115 115 elems->he_spr = data; 116 116 break; 117 117 case WLAN_EID_EXT_HE_6GHZ_CAPA:
+10 -4
net/mac80211/scan.c
··· 744 744 local->hw_scan_ies_bufsize *= n_bands; 745 745 } 746 746 747 - local->hw_scan_req = kmalloc( 748 - sizeof(*local->hw_scan_req) + 749 - req->n_channels * sizeof(req->channels[0]) + 750 - local->hw_scan_ies_bufsize, GFP_KERNEL); 747 + local->hw_scan_req = kmalloc(struct_size(local->hw_scan_req, 748 + req.channels, 749 + req->n_channels) + 750 + local->hw_scan_ies_bufsize, 751 + GFP_KERNEL); 751 752 if (!local->hw_scan_req) 752 753 return -ENOMEM; 753 754 754 755 local->hw_scan_req->req.ssids = req->ssids; 755 756 local->hw_scan_req->req.n_ssids = req->n_ssids; 757 + /* None of the channels are actually set 758 + * up but let UBSAN know the boundaries. 759 + */ 760 + local->hw_scan_req->req.n_channels = req->n_channels; 761 + 756 762 ies = (u8 *)local->hw_scan_req + 757 763 sizeof(*local->hw_scan_req) + 758 764 req->n_channels * sizeof(req->channels[0]);
+2 -2
net/mac80211/sta_info.c
··· 1724 1724 skb_queue_head_init(&pending); 1725 1725 1726 1726 /* sync with ieee80211_tx_h_unicast_ps_buf */ 1727 - spin_lock(&sta->ps_lock); 1727 + spin_lock_bh(&sta->ps_lock); 1728 1728 /* Send all buffered frames to the station */ 1729 1729 for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) { 1730 1730 int count = skb_queue_len(&pending), tmp; ··· 1753 1753 */ 1754 1754 clear_sta_flag(sta, WLAN_STA_PSPOLL); 1755 1755 clear_sta_flag(sta, WLAN_STA_UAPSD); 1756 - spin_unlock(&sta->ps_lock); 1756 + spin_unlock_bh(&sta->ps_lock); 1757 1757 1758 1758 atomic_dec(&ps->num_sta_ps); 1759 1759
+2
net/mac80211/util.c
··· 1567 1567 1568 1568 void ieee80211_stop_device(struct ieee80211_local *local) 1569 1569 { 1570 + ieee80211_handle_queued_frames(local); 1571 + 1570 1572 ieee80211_led_radio(local, false); 1571 1573 ieee80211_mod_tpt_led_trig(local, 0, IEEE80211_TPT_LEDTRIG_FL_RADIO); 1572 1574
+7 -2
net/mptcp/protocol.c
··· 2916 2916 if (oldstate != TCP_ESTABLISHED) 2917 2917 MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_CURRESTAB); 2918 2918 break; 2919 - 2919 + case TCP_CLOSE_WAIT: 2920 + /* Unlike TCP, MPTCP sk would not have the TCP_SYN_RECV state: 2921 + * MPTCP "accepted" sockets will be created later on. So no 2922 + * transition from TCP_SYN_RECV to TCP_CLOSE_WAIT. 2923 + */ 2924 + break; 2920 2925 default: 2921 - if (oldstate == TCP_ESTABLISHED) 2926 + if (oldstate == TCP_ESTABLISHED || oldstate == TCP_CLOSE_WAIT) 2922 2927 MPTCP_DEC_STATS(sock_net(sk), MPTCP_MIB_CURRESTAB); 2923 2928 } 2924 2929
+2
net/ncsi/internal.h
··· 325 325 spinlock_t lock; /* Protect the NCSI device */ 326 326 unsigned int package_probe_id;/* Current ID during probe */ 327 327 unsigned int package_num; /* Number of packages */ 328 + unsigned int channel_probe_id;/* Current channel ID during probe */ 328 329 struct list_head packages; /* List of packages */ 329 330 struct ncsi_channel *hot_channel; /* Channel was ever active */ 330 331 struct ncsi_request requests[256]; /* Request table */ ··· 344 343 bool multi_package; /* Enable multiple packages */ 345 344 bool mlx_multi_host; /* Enable multi host Mellanox */ 346 345 u32 package_whitelist; /* Packages to configure */ 347 + unsigned char channel_count; /* Num of channels to probe */ 348 348 349 349 struct ncsi_cmd_arg {
+37 -38
net/ncsi/ncsi-manage.c
··· 510 510 511 511 break; 512 512 case ncsi_dev_state_suspend_gls: 513 - ndp->pending_req_num = np->channel_num; 513 + ndp->pending_req_num = 1; 514 514 515 515 nca.type = NCSI_PKT_CMD_GLS; 516 516 nca.package = np->id; 517 + nca.channel = ndp->channel_probe_id; 518 + ret = ncsi_xmit_cmd(&nca); 519 + if (ret) 520 + goto error; 521 + ndp->channel_probe_id++; 517 522 518 - nd->state = ncsi_dev_state_suspend_dcnt; 519 - NCSI_FOR_EACH_CHANNEL(np, nc) { 520 - nca.channel = nc->id; 521 - ret = ncsi_xmit_cmd(&nca); 522 - if (ret) 523 - goto error; 523 + if (ndp->channel_probe_id == ndp->channel_count) { 524 + ndp->channel_probe_id = 0; 525 + nd->state = ncsi_dev_state_suspend_dcnt; 524 526 } 525 527 526 528 break; ··· 1347 1345 { 1348 1346 struct ncsi_dev *nd = &ndp->ndev; 1349 1347 struct ncsi_package *np; 1350 - struct ncsi_channel *nc; 1351 1348 struct ncsi_cmd_arg nca; 1352 1349 unsigned char index; 1353 1350 int ret; ··· 1424 1423 1425 1424 nd->state = ncsi_dev_state_probe_cis; 1426 1425 break; 1427 - case ncsi_dev_state_probe_cis: 1428 - ndp->pending_req_num = NCSI_RESERVED_CHANNEL; 1429 - 1430 - /* Clear initial state */ 1431 - nca.type = NCSI_PKT_CMD_CIS; 1432 - nca.package = ndp->active_package->id; 1433 - for (index = 0; index < NCSI_RESERVED_CHANNEL; index++) { 1434 - nca.channel = index; 1435 - ret = ncsi_xmit_cmd(&nca); 1436 - if (ret) 1437 - goto error; 1438 - } 1439 - 1440 - nd->state = ncsi_dev_state_probe_gvi; 1441 - if (IS_ENABLED(CONFIG_NCSI_OEM_CMD_KEEP_PHY)) 1442 - nd->state = ncsi_dev_state_probe_keep_phy; 1443 - break; 1444 1426 case ncsi_dev_state_probe_keep_phy: 1445 1427 ndp->pending_req_num = 1; ··· 1436 1452 1437 1453 nd->state = ncsi_dev_state_probe_gvi; 1438 1454 break; 1455 + case ncsi_dev_state_probe_cis: 1439 1456 case ncsi_dev_state_probe_gvi: 1440 1457 case ncsi_dev_state_probe_gc: 1441 1458 case ncsi_dev_state_probe_gls: 1442 1459 np = ndp->active_package; 1443 - ndp->pending_req_num = np->channel_num; 1460 + ndp->pending_req_num = 1; 1444 1461 1445 - /* Retrieve version, capability or link status */ 1446 - if (nd->state == ncsi_dev_state_probe_gvi) 1462 + /* Clear initial state, retrieve version, capability or link status */ 1463 + if (nd->state == ncsi_dev_state_probe_cis) 1464 + nca.type = NCSI_PKT_CMD_CIS; 1465 + else if (nd->state == ncsi_dev_state_probe_gvi) 1447 1466 nca.type = NCSI_PKT_CMD_GVI; 1448 1467 else if (nd->state == ncsi_dev_state_probe_gc) 1449 1468 nca.type = NCSI_PKT_CMD_GC; ··· 1454 1467 nca.type = NCSI_PKT_CMD_GLS; 1455 1468 1456 1469 nca.package = np->id; 1457 - NCSI_FOR_EACH_CHANNEL(np, nc) { 1458 - nca.channel = nc->id; 1459 - ret = ncsi_xmit_cmd(&nca); 1460 - if (ret) 1461 - goto error; 1470 + nca.channel = ndp->channel_probe_id; 1471 + 1472 + ret = ncsi_xmit_cmd(&nca); 1473 + if (ret) 1474 + goto error; 1475 + 1476 + if (nd->state == ncsi_dev_state_probe_cis) { 1477 + nd->state = ncsi_dev_state_probe_gvi; 1478 + if (IS_ENABLED(CONFIG_NCSI_OEM_CMD_KEEP_PHY) && ndp->channel_probe_id == 0) 1479 + nd->state = ncsi_dev_state_probe_keep_phy; 1480 + } else if (nd->state == ncsi_dev_state_probe_gvi) { 1481 + nd->state = ncsi_dev_state_probe_gc; 1482 + } else if (nd->state == ncsi_dev_state_probe_gc) { 1483 + nd->state = ncsi_dev_state_probe_gls; 1484 + } else { 1485 + nd->state = ncsi_dev_state_probe_cis; 1486 + ndp->channel_probe_id++; 1462 1487 } 1463 1488 1464 - if (nd->state == ncsi_dev_state_probe_gvi) 1465 - nd->state = ncsi_dev_state_probe_gc; 1466 - else if (nd->state == ncsi_dev_state_probe_gc) 1467 - nd->state = ncsi_dev_state_probe_gls; 1468 - else 1489 + if (ndp->channel_probe_id == ndp->channel_count) { 1490 + ndp->channel_probe_id = 0; 1469 1491 nd->state = ncsi_dev_state_probe_dp; 1492 + } 1470 1493 break; 1471 1494 case ncsi_dev_state_probe_dp: 1472 1495 ndp->pending_req_num = 1; ··· 1777 1780 ndp->requests[i].ndp = ndp; 1778 1781 timer_setup(&ndp->requests[i].timer, ncsi_request_timeout, 0); 1779 1782 } 1783 + ndp->channel_count = NCSI_RESERVED_CHANNEL; 1780 1784 1781 1785 spin_lock_irqsave(&ncsi_dev_lock, flags); 1782 1786 list_add_tail_rcu(&ndp->node, &ncsi_dev_list); ··· 1811 1813 1812 1814 if (!(ndp->flags & NCSI_DEV_PROBED)) { 1813 1815 ndp->package_probe_id = 0; 1816 + ndp->channel_probe_id = 0; 1814 1817 nd->state = ncsi_dev_state_probe; 1815 1818 schedule_work(&ndp->work); 1816 1819 return 0;
+3 -1
net/ncsi/ncsi-rsp.c
··· 795 795 struct ncsi_rsp_gc_pkt *rsp; 796 796 struct ncsi_dev_priv *ndp = nr->ndp; 797 797 struct ncsi_channel *nc; 798 + struct ncsi_package *np; 798 799 size_t size; 799 800 800 801 /* Find the channel */ 801 802 rsp = (struct ncsi_rsp_gc_pkt *)skb_network_header(nr->rsp); 802 803 ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel, 803 - NULL, &nc); 804 + &np, &nc); 804 805 if (!nc) 805 806 return -ENODEV; 806 807 ··· 836 835 */ 837 836 nc->vlan_filter.bitmap = U64_MAX; 838 837 nc->vlan_filter.n_vids = rsp->vlan_cnt; 838 + np->ndp->channel_count = rsp->channel_cnt; 839 839 840 840 return 0; 841 841 }
+1 -1
net/sched/sch_multiq.c
··· 185 185 186 186 qopt->bands = qdisc_dev(sch)->real_num_tx_queues; 187 187 188 - removed = kmalloc(sizeof(*removed) * (q->max_bands - q->bands), 188 + removed = kmalloc(sizeof(*removed) * (q->max_bands - qopt->bands), 189 189 GFP_KERNEL); 190 190 if (!removed) 191 191 return -ENOMEM;
+6 -9
net/sched/sch_taprio.c
··· 1176 1176 { 1177 1177 bool allow_overlapping_txqs = TXTIME_ASSIST_IS_ENABLED(taprio_flags); 1178 1178 1179 - if (!qopt && !dev->num_tc) { 1180 - NL_SET_ERR_MSG(extack, "'mqprio' configuration is necessary"); 1181 - return -EINVAL; 1182 - } 1183 - 1184 - /* If num_tc is already set, it means that the user already 1185 - * configured the mqprio part 1186 - */ 1187 - if (dev->num_tc) 1179 + if (!qopt) { 1180 + if (!dev->num_tc) { 1181 + NL_SET_ERR_MSG(extack, "'mqprio' configuration is necessary"); 1182 + return -EINVAL; 1183 + } 1188 1184 return 0; 1185 + } 1189 1186 1190 1187 /* taprio imposes that traffic classes map 1:n to tx queues */ 1191 1188 if (qopt->num_tc > dev->num_tx_queues) {
+2 -20
net/smc/af_smc.c
··· 459 459 static void smc_adjust_sock_bufsizes(struct sock *nsk, struct sock *osk, 460 460 unsigned long mask) 461 461 { 462 - struct net *nnet = sock_net(nsk); 463 - 464 462 nsk->sk_userlocks = osk->sk_userlocks; 465 - if (osk->sk_userlocks & SOCK_SNDBUF_LOCK) { 463 + if (osk->sk_userlocks & SOCK_SNDBUF_LOCK) 466 464 nsk->sk_sndbuf = osk->sk_sndbuf; 467 - } else { 468 - if (mask == SK_FLAGS_SMC_TO_CLC) 469 - WRITE_ONCE(nsk->sk_sndbuf, 470 - READ_ONCE(nnet->ipv4.sysctl_tcp_wmem[1])); 471 - else 472 - WRITE_ONCE(nsk->sk_sndbuf, 473 - 2 * READ_ONCE(nnet->smc.sysctl_wmem)); 474 - } 475 - if (osk->sk_userlocks & SOCK_RCVBUF_LOCK) { 465 + if (osk->sk_userlocks & SOCK_RCVBUF_LOCK) 476 466 nsk->sk_rcvbuf = osk->sk_rcvbuf; 477 - } else { 478 - if (mask == SK_FLAGS_SMC_TO_CLC) 479 - WRITE_ONCE(nsk->sk_rcvbuf, 480 - READ_ONCE(nnet->ipv4.sysctl_tcp_rmem[1])); 481 - else 482 - WRITE_ONCE(nsk->sk_rcvbuf, 483 - 2 * READ_ONCE(nnet->smc.sysctl_rmem)); 484 - } 485 467 } 486 468 487 469 static void smc_copy_sock_settings(struct sock *nsk, struct sock *osk,
+1 -1
net/sunrpc/auth_gss/svcauth_gss.c
··· 1069 1069 goto out_denied_free; 1070 1070 1071 1071 pages = DIV_ROUND_UP(inlen, PAGE_SIZE); 1072 - in_token->pages = kcalloc(pages, sizeof(struct page *), GFP_KERNEL); 1072 + in_token->pages = kcalloc(pages + 1, sizeof(struct page *), GFP_KERNEL); 1073 1073 if (!in_token->pages) 1074 1074 goto out_denied_free; 1075 1075 in_token->page_base = 0;
+44 -46
net/unix/af_unix.c
··· 221 221 return unix_peer(osk) == NULL || unix_our_peer(sk, osk); 222 222 } 223 223 224 - static inline int unix_recvq_full(const struct sock *sk) 225 - { 226 - return skb_queue_len(&sk->sk_receive_queue) > sk->sk_max_ack_backlog; 227 - } 228 - 229 224 static inline int unix_recvq_full_lockless(const struct sock *sk) 230 225 { 231 - return skb_queue_len_lockless(&sk->sk_receive_queue) > 232 - READ_ONCE(sk->sk_max_ack_backlog); 226 + return skb_queue_len_lockless(&sk->sk_receive_queue) > sk->sk_max_ack_backlog; 233 227 } 234 228 235 229 struct sock *unix_peer_get(struct sock *s) ··· 524 530 return 0; 525 531 } 526 532 527 - static int unix_writable(const struct sock *sk) 533 + static int unix_writable(const struct sock *sk, unsigned char state) 528 534 { 529 - return sk->sk_state != TCP_LISTEN && 530 - (refcount_read(&sk->sk_wmem_alloc) << 2) <= sk->sk_sndbuf; 535 + return state != TCP_LISTEN && 536 + (refcount_read(&sk->sk_wmem_alloc) << 2) <= READ_ONCE(sk->sk_sndbuf); 531 537 } 532 538 533 539 static void unix_write_space(struct sock *sk) ··· 535 541 struct socket_wq *wq; 536 542 537 543 rcu_read_lock(); 538 - if (unix_writable(sk)) { 544 + if (unix_writable(sk, READ_ONCE(sk->sk_state))) { 539 545 wq = rcu_dereference(sk->sk_wq); 540 546 if (skwq_has_sleeper(wq)) 541 547 wake_up_interruptible_sync_poll(&wq->wait, ··· 564 570 sk_error_report(other); 565 571 } 566 572 } 567 - other->sk_state = TCP_CLOSE; 568 573 } 569 574 570 575 static void unix_sock_destructor(struct sock *sk) ··· 610 617 u->path.dentry = NULL; 611 618 u->path.mnt = NULL; 612 619 state = sk->sk_state; 613 - sk->sk_state = TCP_CLOSE; 620 + WRITE_ONCE(sk->sk_state, TCP_CLOSE); 614 621 615 622 skpair = unix_peer(sk); 616 623 unix_peer(sk) = NULL; ··· 631 638 unix_state_lock(skpair); 632 639 /* No more writes */ 633 640 WRITE_ONCE(skpair->sk_shutdown, SHUTDOWN_MASK); 634 - if (!skb_queue_empty(&sk->sk_receive_queue) || embrion) 641 + if (!skb_queue_empty_lockless(&sk->sk_receive_queue) || embrion) 
635 642 WRITE_ONCE(skpair->sk_err, ECONNRESET); 636 643 unix_state_unlock(skpair); 637 644 skpair->sk_state_change(skpair); ··· 732 739 if (backlog > sk->sk_max_ack_backlog) 733 740 wake_up_interruptible_all(&u->peer_wait); 734 741 sk->sk_max_ack_backlog = backlog; 735 - sk->sk_state = TCP_LISTEN; 742 + WRITE_ONCE(sk->sk_state, TCP_LISTEN); 743 + 736 744 /* set credentials so connect can copy them */ 737 745 init_peercred(sk); 738 746 err = 0; ··· 970 976 sk->sk_hash = unix_unbound_hash(sk); 971 977 sk->sk_allocation = GFP_KERNEL_ACCOUNT; 972 978 sk->sk_write_space = unix_write_space; 973 - sk->sk_max_ack_backlog = net->unx.sysctl_max_dgram_qlen; 979 + sk->sk_max_ack_backlog = READ_ONCE(net->unx.sysctl_max_dgram_qlen); 974 980 sk->sk_destruct = unix_sock_destructor; 975 981 u = unix_sk(sk); 976 982 u->listener = NULL; ··· 1396 1402 if (err) 1397 1403 goto out_unlock; 1398 1404 1399 - sk->sk_state = other->sk_state = TCP_ESTABLISHED; 1405 + WRITE_ONCE(sk->sk_state, TCP_ESTABLISHED); 1406 + WRITE_ONCE(other->sk_state, TCP_ESTABLISHED); 1400 1407 } else { 1401 1408 /* 1402 1409 * 1003.1g breaking connected state with AF_UNSPEC ··· 1414 1419 1415 1420 unix_peer(sk) = other; 1416 1421 if (!other) 1417 - sk->sk_state = TCP_CLOSE; 1422 + WRITE_ONCE(sk->sk_state, TCP_CLOSE); 1418 1423 unix_dgram_peer_wake_disconnect_wakeup(sk, old_peer); 1419 1424 1420 1425 unix_state_double_unlock(sk, other); 1421 1426 1422 - if (other != old_peer) 1427 + if (other != old_peer) { 1423 1428 unix_dgram_disconnected(sk, old_peer); 1429 + 1430 + unix_state_lock(old_peer); 1431 + if (!unix_peer(old_peer)) 1432 + WRITE_ONCE(old_peer->sk_state, TCP_CLOSE); 1433 + unix_state_unlock(old_peer); 1434 + } 1435 + 1424 1436 sock_put(old_peer); 1425 1437 } else { 1426 1438 unix_peer(sk) = other; ··· 1475 1473 struct sk_buff *skb = NULL; 1476 1474 long timeo; 1477 1475 int err; 1478 - int st; 1479 1476 1480 1477 err = unix_validate_addr(sunaddr, addr_len); 1481 1478 if (err) ··· 1539 1538 if (other->sk_shutdown & RCV_SHUTDOWN) 1540 1539 goto out_unlock; 1541 1540 1542 - if (unix_recvq_full(other)) { 1541 + if (unix_recvq_full_lockless(other)) { 1543 1542 err = -EAGAIN; 1544 1543 if (!timeo) 1545 1544 goto out_unlock; ··· 1564 1563 1565 1564 Well, and we have to recheck the state after socket locked. 1566 1565 */ 1567 - st = sk->sk_state; 1568 - 1569 - switch (st) { 1566 + switch (READ_ONCE(sk->sk_state)) { 1570 1567 case TCP_CLOSE: 1571 1568 /* This is ok... continue with connect */ 1572 1569 break; ··· 1579 1580 1580 1581 unix_state_lock_nested(sk, U_LOCK_SECOND); 1581 1582 1582 - if (sk->sk_state != st) { 1583 + if (sk->sk_state != TCP_CLOSE) { 1583 1584 unix_state_unlock(sk); 1584 1585 unix_state_unlock(other); 1585 1586 sock_put(other); ··· 1632 1633 copy_peercred(sk, other); 1633 1634 1634 1635 sock->state = SS_CONNECTED; 1635 - sk->sk_state = TCP_ESTABLISHED; 1636 + WRITE_ONCE(sk->sk_state, TCP_ESTABLISHED); 1636 1637 sock_hold(newsk); 1637 1638 1638 1639 smp_mb__after_atomic(); /* sock_hold() does an atomic_inc() */ ··· 1704 1705 goto out; 1705 1706 1706 1707 arg->err = -EINVAL; 1707 - if (sk->sk_state != TCP_LISTEN) 1708 + if (READ_ONCE(sk->sk_state) != TCP_LISTEN) 1708 1709 goto out; 1709 1710 1710 1711 /* If socket state is TCP_LISTEN it cannot change (for now...), ··· 1961 1962 } 1962 1963 1963 1964 err = -EMSGSIZE; 1964 - if (len > sk->sk_sndbuf - 32) 1965 + if (len > READ_ONCE(sk->sk_sndbuf) - 32) 1965 1966 goto out; 1966 1967 1967 1968 if (len > SKB_MAX_ALLOC) { ··· 2043 2044 unix_peer(sk) = NULL; 2044 2045 unix_dgram_peer_wake_disconnect_wakeup(sk, other); 2045 2046 2046 - sk->sk_state = TCP_CLOSE; 2047 + WRITE_ONCE(sk->sk_state, TCP_CLOSE); 2047 2048 unix_state_unlock(sk); 2048 2049 2049 2050 unix_dgram_disconnected(sk, other); ··· 2220 2221 } 2221 2222 2222 2223 if (msg->msg_namelen) { 2223 - err = sk->sk_state == TCP_ESTABLISHED ? -EISCONN : -EOPNOTSUPP; 2224 + err = READ_ONCE(sk->sk_state) == TCP_ESTABLISHED ? -EISCONN : -EOPNOTSUPP; 2224 2225 goto out_err; 2225 2226 } else { 2226 2227 err = -ENOTCONN; ··· 2241 2242 &err, 0); 2242 2243 } else { 2243 2244 /* Keep two messages in the pipe so it schedules better */ 2244 - size = min_t(int, size, (sk->sk_sndbuf >> 1) - 64); 2245 + size = min_t(int, size, (READ_ONCE(sk->sk_sndbuf) >> 1) - 64); 2245 2246 2246 2247 /* allow fallback to order-0 allocations */ 2247 2248 size = min_t(int, size, SKB_MAX_HEAD(0) + UNIX_SKB_FRAGS_SZ); ··· 2334 2335 if (err) 2335 2336 return err; 2336 2337 2337 - if (sk->sk_state != TCP_ESTABLISHED) 2338 + if (READ_ONCE(sk->sk_state) != TCP_ESTABLISHED) 2338 2339 return -ENOTCONN; 2339 2340 2340 2341 if (msg->msg_namelen) ··· 2348 2349 { 2349 2350 struct sock *sk = sock->sk; 2350 2351 2351 - if (sk->sk_state != TCP_ESTABLISHED) 2352 + if (READ_ONCE(sk->sk_state) != TCP_ESTABLISHED) 2352 2353 return -ENOTCONN; 2353 2354 2354 2355 return unix_dgram_recvmsg(sock, msg, size, flags); ··· 2653 2654 2654 2655 static int unix_stream_read_skb(struct sock *sk, skb_read_actor_t recv_actor) 2655 2656 { 2656 - if (unlikely(sk->sk_state != TCP_ESTABLISHED)) 2657 + if (unlikely(READ_ONCE(sk->sk_state) != TCP_ESTABLISHED)) 2657 2658 return -ENOTCONN; 2658 2659 2659 2660 return unix_read_skb(sk, recv_actor); ··· 2677 2678 size_t size = state->size; 2678 2679 unsigned int last_len; 2679 2680 2680 - if (unlikely(sk->sk_state != TCP_ESTABLISHED)) { 2681 + if (unlikely(READ_ONCE(sk->sk_state) != TCP_ESTABLISHED)) { 2681 2682 err = -EINVAL; 2682 2683 goto out; 2683 2684 } ··· 3008 3009 struct sk_buff *skb; 3009 3010 long amount = 0; 3010 3011 3011 - if (sk->sk_state == TCP_LISTEN) 3012 + if (READ_ONCE(sk->sk_state) == TCP_LISTEN) 3012 3013 return -EINVAL; 3013 3014 3014 3015 spin_lock(&sk->sk_receive_queue.lock); ··· 3120 3121 static __poll_t unix_poll(struct file *file, struct socket *sock, poll_table *wait) 3121 3122 { 3122 3123 struct sock *sk = sock->sk; 3124 + unsigned char state; 3123 3125 __poll_t mask; 3124 3126 u8 
shutdown; 3125 3127 3126 3128 sock_poll_wait(file, sock, wait); 3127 3129 mask = 0; 3128 3130 shutdown = READ_ONCE(sk->sk_shutdown); 3131 + state = READ_ONCE(sk->sk_state); 3129 3132 3130 3133 /* exceptional events? */ 3131 3134 if (READ_ONCE(sk->sk_err)) ··· 3149 3148 3150 3149 /* Connection-based need to check for termination and startup */ 3151 3150 if ((sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET) && 3152 - sk->sk_state == TCP_CLOSE) 3151 + state == TCP_CLOSE) 3153 3152 mask |= EPOLLHUP; 3154 3153 3155 3154 /* 3156 3155 * we set writable also when the other side has shut down the 3157 3156 * connection. This prevents stuck sockets. 3158 3157 */ 3159 - if (unix_writable(sk)) 3158 + if (unix_writable(sk, state)) 3160 3159 mask |= EPOLLOUT | EPOLLWRNORM | EPOLLWRBAND; 3161 3160 3162 3161 return mask; ··· 3167 3166 { 3168 3167 struct sock *sk = sock->sk, *other; 3169 3168 unsigned int writable; 3169 + unsigned char state; 3170 3170 __poll_t mask; 3171 3171 u8 shutdown; 3172 3172 3173 3173 sock_poll_wait(file, sock, wait); 3174 3174 mask = 0; 3175 3175 shutdown = READ_ONCE(sk->sk_shutdown); 3176 + state = READ_ONCE(sk->sk_state); 3176 3177 3177 3178 /* exceptional events? */ 3178 3179 if (READ_ONCE(sk->sk_err) || ··· 3194 3191 mask |= EPOLLIN | EPOLLRDNORM; 3195 3192 3196 3193 /* Connection-based need to check for termination and startup */ 3197 - if (sk->sk_type == SOCK_SEQPACKET) { 3198 - if (sk->sk_state == TCP_CLOSE) 3199 - mask |= EPOLLHUP; 3200 - /* connection hasn't started yet? */ 3201 - if (sk->sk_state == TCP_SYN_SENT) 3202 - return mask; 3203 - } 3194 + if (sk->sk_type == SOCK_SEQPACKET && state == TCP_CLOSE) 3195 + mask |= EPOLLHUP; 3204 3196 3205 3197 /* No write status requested, avoid expensive OUT tests. 
*/ 3206 3198 if (!(poll_requested_events(wait) & (EPOLLWRBAND|EPOLLWRNORM|EPOLLOUT))) 3207 3199 return mask; 3208 3200 3209 - writable = unix_writable(sk); 3201 + writable = unix_writable(sk, state); 3210 3202 if (writable) { 3211 3203 unix_state_lock(sk); 3212 3204
+6 -6
net/unix/diag.c
··· 65 65 u32 *buf; 66 66 int i; 67 67 68 - if (sk->sk_state == TCP_LISTEN) { 68 + if (READ_ONCE(sk->sk_state) == TCP_LISTEN) { 69 69 spin_lock(&sk->sk_receive_queue.lock); 70 70 71 71 attr = nla_reserve(nlskb, UNIX_DIAG_ICONS, ··· 103 103 { 104 104 struct unix_diag_rqlen rql; 105 105 106 - if (sk->sk_state == TCP_LISTEN) { 107 - rql.udiag_rqueue = sk->sk_receive_queue.qlen; 106 + if (READ_ONCE(sk->sk_state) == TCP_LISTEN) { 107 + rql.udiag_rqueue = skb_queue_len_lockless(&sk->sk_receive_queue); 108 108 rql.udiag_wqueue = sk->sk_max_ack_backlog; 109 109 } else { 110 110 rql.udiag_rqueue = (u32) unix_inq_len(sk); ··· 136 136 rep = nlmsg_data(nlh); 137 137 rep->udiag_family = AF_UNIX; 138 138 rep->udiag_type = sk->sk_type; 139 - rep->udiag_state = sk->sk_state; 139 + rep->udiag_state = READ_ONCE(sk->sk_state); 140 140 rep->pad = 0; 141 141 rep->udiag_ino = sk_ino; 142 142 sock_diag_save_cookie(sk, rep->udiag_cookie); ··· 165 165 sock_diag_put_meminfo(sk, skb, UNIX_DIAG_MEMINFO)) 166 166 goto out_nlmsg_trim; 167 167 168 - if (nla_put_u8(skb, UNIX_DIAG_SHUTDOWN, sk->sk_shutdown)) 168 + if (nla_put_u8(skb, UNIX_DIAG_SHUTDOWN, READ_ONCE(sk->sk_shutdown))) 169 169 goto out_nlmsg_trim; 170 170 171 171 if ((req->udiag_show & UDIAG_SHOW_UID) && ··· 215 215 sk_for_each(sk, &net->unx.table.buckets[slot]) { 216 216 if (num < s_num) 217 217 goto next; 218 - if (!(req->udiag_states & (1 << sk->sk_state))) 218 + if (!(req->udiag_states & (1 << READ_ONCE(sk->sk_state)))) 219 219 goto next; 220 220 if (sk_diag_dump(sk, skb, req, sk_user_ns(skb->sk), 221 221 NETLINK_CB(cb->skb).portid,
+1 -1
net/wireless/core.c
··· 431 431 if (wk) { 432 432 list_del_init(&wk->entry); 433 433 if (!list_empty(&rdev->wiphy_work_list)) 434 - schedule_work(work); 434 + queue_work(system_unbound_wq, work); 435 435 spin_unlock_irq(&rdev->wiphy_work_lock); 436 436 437 437 wk->func(&rdev->wiphy, wk);
+4 -4
net/wireless/pmsr.c
··· 56 56 out->ftm.burst_period = 0; 57 57 if (tb[NL80211_PMSR_FTM_REQ_ATTR_BURST_PERIOD]) 58 58 out->ftm.burst_period = 59 - nla_get_u32(tb[NL80211_PMSR_FTM_REQ_ATTR_BURST_PERIOD]); 59 + nla_get_u16(tb[NL80211_PMSR_FTM_REQ_ATTR_BURST_PERIOD]); 60 60 61 61 out->ftm.asap = !!tb[NL80211_PMSR_FTM_REQ_ATTR_ASAP]; 62 62 if (out->ftm.asap && !capa->ftm.asap) { ··· 75 75 out->ftm.num_bursts_exp = 0; 76 76 if (tb[NL80211_PMSR_FTM_REQ_ATTR_NUM_BURSTS_EXP]) 77 77 out->ftm.num_bursts_exp = 78 - nla_get_u32(tb[NL80211_PMSR_FTM_REQ_ATTR_NUM_BURSTS_EXP]); 78 + nla_get_u8(tb[NL80211_PMSR_FTM_REQ_ATTR_NUM_BURSTS_EXP]); 79 79 80 80 if (capa->ftm.max_bursts_exponent >= 0 && 81 81 out->ftm.num_bursts_exp > capa->ftm.max_bursts_exponent) { ··· 88 88 out->ftm.burst_duration = 15; 89 89 if (tb[NL80211_PMSR_FTM_REQ_ATTR_BURST_DURATION]) 90 90 out->ftm.burst_duration = 91 - nla_get_u32(tb[NL80211_PMSR_FTM_REQ_ATTR_BURST_DURATION]); 91 + nla_get_u8(tb[NL80211_PMSR_FTM_REQ_ATTR_BURST_DURATION]); 92 92 93 93 out->ftm.ftms_per_burst = 0; 94 94 if (tb[NL80211_PMSR_FTM_REQ_ATTR_FTMS_PER_BURST]) ··· 107 107 out->ftm.ftmr_retries = 3; 108 108 if (tb[NL80211_PMSR_FTM_REQ_ATTR_NUM_FTMR_RETRIES]) 109 109 out->ftm.ftmr_retries = 110 - nla_get_u32(tb[NL80211_PMSR_FTM_REQ_ATTR_NUM_FTMR_RETRIES]); 110 + nla_get_u8(tb[NL80211_PMSR_FTM_REQ_ATTR_NUM_FTMR_RETRIES]); 111 111 112 112 out->ftm.request_lci = !!tb[NL80211_PMSR_FTM_REQ_ATTR_REQUEST_LCI]; 113 113 if (out->ftm.request_lci && !capa->ftm.request_lci) {
+5 -1
net/wireless/rdev-ops.h
··· 2 2 /* 3 3 * Portions of this file 4 4 * Copyright(c) 2016-2017 Intel Deutschland GmbH 5 - * Copyright (C) 2018, 2021-2023 Intel Corporation 5 + * Copyright (C) 2018, 2021-2024 Intel Corporation 6 6 */ 7 7 #ifndef __CFG80211_RDEV_OPS 8 8 #define __CFG80211_RDEV_OPS ··· 458 458 struct cfg80211_scan_request *request) 459 459 { 460 460 int ret; 461 + 462 + if (WARN_ON_ONCE(!request->n_ssids && request->ssids)) 463 + return -EINVAL; 464 + 461 465 trace_rdev_scan(&rdev->wiphy, request); 462 466 ret = rdev->ops->scan(&rdev->wiphy, request); 463 467 trace_rdev_return_int(&rdev->wiphy, ret);
+33 -17
net/wireless/scan.c
··· 812 812 LIST_HEAD(coloc_ap_list); 813 813 bool need_scan_psc = true; 814 814 const struct ieee80211_sband_iftype_data *iftd; 815 + size_t size, offs_ssids, offs_6ghz_params, offs_ies; 815 816 816 817 rdev_req->scan_6ghz = true; 817 818 ··· 878 877 spin_unlock_bh(&rdev->bss_lock); 879 878 } 880 879 881 - request = kzalloc(struct_size(request, channels, n_channels) + 882 - sizeof(*request->scan_6ghz_params) * count + 883 - sizeof(*request->ssids) * rdev_req->n_ssids, 884 - GFP_KERNEL); 880 + size = struct_size(request, channels, n_channels); 881 + offs_ssids = size; 882 + size += sizeof(*request->ssids) * rdev_req->n_ssids; 883 + offs_6ghz_params = size; 884 + size += sizeof(*request->scan_6ghz_params) * count; 885 + offs_ies = size; 886 + size += rdev_req->ie_len; 887 + 888 + request = kzalloc(size, GFP_KERNEL); 885 889 if (!request) { 886 890 cfg80211_free_coloc_ap_list(&coloc_ap_list); 887 891 return -ENOMEM; ··· 894 888 895 889 *request = *rdev_req; 896 890 request->n_channels = 0; 897 - request->scan_6ghz_params = 898 - (void *)&request->channels[n_channels]; 891 + request->n_6ghz_params = 0; 892 + if (rdev_req->n_ssids) { 893 + /* 894 + * Add the ssids from the parent scan request to the new 895 + * scan request, so the driver would be able to use them 896 + * in its probe requests to discover hidden APs on PSC 897 + * channels. 
898 + */ 899 + request->ssids = (void *)request + offs_ssids; 900 + memcpy(request->ssids, rdev_req->ssids, 901 + sizeof(*request->ssids) * request->n_ssids); 902 + } 903 + request->scan_6ghz_params = (void *)request + offs_6ghz_params; 904 + 905 + if (rdev_req->ie_len) { 906 + void *ie = (void *)request + offs_ies; 907 + 908 + memcpy(ie, rdev_req->ie, rdev_req->ie_len); 909 + request->ie = ie; 910 + } 899 911 900 912 /* 901 913 * PSC channels should not be scanned in case of direct scan with 1 SSID ··· 1002 978 1003 979 if (request->n_channels) { 1004 980 struct cfg80211_scan_request *old = rdev->int_scan_req; 1005 - rdev->int_scan_req = request; 1006 981 1007 - /* 1008 - * Add the ssids from the parent scan request to the new scan 1009 - * request, so the driver would be able to use them in its 1010 - * probe requests to discover hidden APs on PSC channels. 1011 - */ 1012 - request->ssids = (void *)&request->channels[request->n_channels]; 1013 - request->n_ssids = rdev_req->n_ssids; 1014 - memcpy(request->ssids, rdev_req->ssids, sizeof(*request->ssids) * 1015 - request->n_ssids); 982 + rdev->int_scan_req = request; 1016 983 1017 984 /* 1018 985 * If this scan follows a previous scan, save the scan start ··· 2143 2128 struct ieee80211_he_operation *he_oper; 2144 2129 2145 2130 tmp = cfg80211_find_ext_elem(WLAN_EID_EXT_HE_OPERATION, ie, ielen); 2146 - if (tmp && tmp->datalen >= sizeof(*he_oper) + 1) { 2131 + if (tmp && tmp->datalen >= sizeof(*he_oper) + 1 && 2132 + tmp->datalen >= ieee80211_he_oper_size(tmp->data + 1)) { 2147 2133 const struct ieee80211_he_6ghz_oper *he_6ghz_oper; 2148 2134 2149 2135 he_oper = (void *)&tmp->data[1];
+2 -2
net/wireless/sysfs.c
··· 5 5 * 6 6 * Copyright 2005-2006 Jiri Benc <jbenc@suse.cz> 7 7 * Copyright 2006 Johannes Berg <johannes@sipsolutions.net> 8 - * Copyright (C) 2020-2021, 2023 Intel Corporation 8 + * Copyright (C) 2020-2021, 2023-2024 Intel Corporation 9 9 */ 10 10 11 11 #include <linux/device.h> ··· 137 137 if (rdev->wiphy.registered && rdev->ops->resume) 138 138 ret = rdev_resume(rdev); 139 139 rdev->suspended = false; 140 - schedule_work(&rdev->wiphy_work); 140 + queue_work(system_unbound_wq, &rdev->wiphy_work); 141 141 wiphy_unlock(&rdev->wiphy); 142 142 143 143 if (ret)
+6 -1
net/wireless/util.c
··· 2549 2549 { 2550 2550 struct cfg80211_registered_device *rdev; 2551 2551 struct wireless_dev *wdev; 2552 + int ret; 2552 2553 2553 2554 wdev = dev->ieee80211_ptr; 2554 2555 if (!wdev) ··· 2561 2560 2562 2561 memset(sinfo, 0, sizeof(*sinfo)); 2563 2562 2564 - return rdev_get_station(rdev, dev, mac_addr, sinfo); 2563 + wiphy_lock(&rdev->wiphy); 2564 + ret = rdev_get_station(rdev, dev, mac_addr, sinfo); 2565 + wiphy_unlock(&rdev->wiphy); 2566 + 2567 + return ret; 2565 2568 } 2566 2569 EXPORT_SYMBOL(cfg80211_get_station); 2567 2570
+1 -4
net/xdp/xsk.c
··· 313 313 314 314 static int xsk_rcv_check(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len) 315 315 { 316 - struct net_device *dev = xdp->rxq->dev; 317 - u32 qid = xdp->rxq->queue_index; 318 - 319 316 if (!xsk_is_bound(xs)) 320 317 return -ENXIO; 321 318 322 - if (!dev->_rx[qid].pool || xs->umem != dev->_rx[qid].pool->umem) 319 + if (xs->dev != xdp->rxq->dev || xs->queue_id != xdp->rxq->queue_index) 323 320 return -EINVAL; 324 321 325 322 if (len > xsk_pool_get_rx_frame_size(xs->pool) && !xs->sg) {
+1 -1
scripts/atomic/kerneldoc/sub_and_test
··· 1 1 cat <<EOF 2 2 /** 3 3 * ${class}${atomicname}() - atomic subtract and test if zero with ${desc_order} ordering 4 - * @i: ${int} value to add 4 + * @i: ${int} value to subtract 5 5 * @v: pointer to ${atomic}_t 6 6 * 7 7 * Atomically updates @v to (@v - @i) with ${desc_order} ordering.
-13
scripts/kconfig/confdata.c
··· 533 533 */ 534 534 if (sym->visible == no && !conf_unsaved) 535 535 sym->flags &= ~SYMBOL_DEF_USER; 536 - switch (sym->type) { 537 - case S_STRING: 538 - case S_INT: 539 - case S_HEX: 540 - /* Reset a string value if it's out of range */ 541 - if (sym_string_within_range(sym, sym->def[S_DEF_USER].val)) 542 - break; 543 - sym->flags &= ~SYMBOL_VALID; 544 - conf_unsaved++; 545 - break; 546 - default: 547 - break; 548 - } 549 536 } 550 537 } 551 538
-29
scripts/kconfig/expr.c
··· 397 397 } 398 398 399 399 /* 400 - * bool FOO!=n => FOO 401 - */ 402 - struct expr *expr_trans_bool(struct expr *e) 403 - { 404 - if (!e) 405 - return NULL; 406 - switch (e->type) { 407 - case E_AND: 408 - case E_OR: 409 - case E_NOT: 410 - e->left.expr = expr_trans_bool(e->left.expr); 411 - e->right.expr = expr_trans_bool(e->right.expr); 412 - break; 413 - case E_UNEQUAL: 414 - // FOO!=n -> FOO 415 - if (e->left.sym->type == S_TRISTATE) { 416 - if (e->right.sym == &symbol_no) { 417 - e->type = E_SYMBOL; 418 - e->right.sym = NULL; 419 - } 420 - } 421 - break; 422 - default: 423 - ; 424 - } 425 - return e; 426 - } 427 - 428 - /* 429 400 * e1 || e2 -> ? 430 401 */ 431 402 static struct expr *expr_join_or(struct expr *e1, struct expr *e2)
-1
scripts/kconfig/expr.h
··· 284 284 void expr_eliminate_eq(struct expr **ep1, struct expr **ep2); 285 285 int expr_eq(struct expr *e1, struct expr *e2); 286 286 tristate expr_calc_value(struct expr *e); 287 - struct expr *expr_trans_bool(struct expr *e); 288 287 struct expr *expr_eliminate_dups(struct expr *e); 289 288 struct expr *expr_transform(struct expr *e); 290 289 int expr_contains_symbol(struct expr *dep, struct symbol *sym);
+2 -1
scripts/kconfig/gconf.c
··· 1422 1422 1423 1423 conf_parse(name); 1424 1424 fixup_rootmenu(&rootmenu); 1425 - conf_read(NULL); 1426 1425 1427 1426 /* Load the interface and connect signals */ 1428 1427 init_main_window(glade_file); 1429 1428 init_tree_model(); 1430 1429 init_left_tree(); 1431 1430 init_right_tree(); 1431 + 1432 + conf_read(NULL); 1432 1433 1433 1434 switch (view_mode) { 1434 1435 case SINGLE_VIEW:
-2
scripts/kconfig/menu.c
··· 398 398 dep = expr_transform(dep); 399 399 dep = expr_alloc_and(expr_copy(basedep), dep); 400 400 dep = expr_eliminate_dups(dep); 401 - if (menu->sym && menu->sym->type != S_TRISTATE) 402 - dep = expr_trans_bool(dep); 403 401 prop->visible.expr = dep; 404 402 405 403 /*
+3 -2
scripts/mod/modpost.c
··· 1647 1647 namespace = get_next_modinfo(&info, "import_ns", 1648 1648 namespace); 1649 1649 } 1650 + 1651 + if (extra_warn && !get_modinfo(&info, "description")) 1652 + warn("missing MODULE_DESCRIPTION() in %s\n", modname); 1650 1653 } 1651 1654 1652 - if (extra_warn && !get_modinfo(&info, "description")) 1653 - warn("missing MODULE_DESCRIPTION() in %s\n", modname); 1654 1655 for (sym = info.symtab_start; sym < info.symtab_stop; sym++) { 1655 1656 symname = remove_dot(info.strtab + sym->st_name); 1656 1657
+1 -1
security/tomoyo/Kconfig
··· 10 10 help 11 11 This selects TOMOYO Linux, pathname-based access control. 12 12 Required userspace tools and further information may be 13 - found at <https://tomoyo.osdn.jp/>. 13 + found at <https://tomoyo.sourceforge.net/>. 14 14 If you are unsure how to answer this question, answer N. 15 15 16 16 config SECURITY_TOMOYO_MAX_ACCEPT_ENTRY
+1 -1
security/tomoyo/common.c
··· 2787 2787 else 2788 2788 continue; 2789 2789 pr_err("Userland tools for TOMOYO 2.6 must be installed and policy must be initialized.\n"); 2790 - pr_err("Please see https://tomoyo.osdn.jp/2.6/ for more information.\n"); 2790 + pr_err("Please see https://tomoyo.sourceforge.net/2.6/ for more information.\n"); 2791 2791 panic("STOP!"); 2792 2792 } 2793 2793 tomoyo_read_unlock(idx);
+23
sound/soc/codecs/lpass-macro-common.c
··· 11 11 12 12 #include "lpass-macro-common.h" 13 13 14 + static DEFINE_MUTEX(lpass_codec_mutex); 15 + static int lpass_codec_version; 16 + 14 17 struct lpass_macro *lpass_macro_pds_init(struct device *dev) 15 18 { 16 19 struct lpass_macro *l_pds; ··· 68 65 } 69 66 } 70 67 EXPORT_SYMBOL_GPL(lpass_macro_pds_exit); 68 + 69 + void lpass_macro_set_codec_version(int version) 70 + { 71 + mutex_lock(&lpass_codec_mutex); 72 + lpass_codec_version = version; 73 + mutex_unlock(&lpass_codec_mutex); 74 + } 75 + EXPORT_SYMBOL_GPL(lpass_macro_set_codec_version); 76 + 77 + int lpass_macro_get_codec_version(void) 78 + { 79 + int ver; 80 + 81 + mutex_lock(&lpass_codec_mutex); 82 + ver = lpass_codec_version; 83 + mutex_unlock(&lpass_codec_mutex); 84 + 85 + return ver; 86 + } 87 + EXPORT_SYMBOL_GPL(lpass_macro_get_codec_version); 71 88 72 89 MODULE_DESCRIPTION("Common macro driver"); 73 90 MODULE_LICENSE("GPL");
+35
sound/soc/codecs/lpass-macro-common.h
··· 18 18 LPASS_VER_11_0_0, 19 19 }; 20 20 21 + enum lpass_codec_version { 22 + LPASS_CODEC_VERSION_1_0 = 1, 23 + LPASS_CODEC_VERSION_1_1, 24 + LPASS_CODEC_VERSION_1_2, 25 + LPASS_CODEC_VERSION_2_0, 26 + LPASS_CODEC_VERSION_2_1, 27 + LPASS_CODEC_VERSION_2_5, 28 + LPASS_CODEC_VERSION_2_6, 29 + LPASS_CODEC_VERSION_2_7, 30 + LPASS_CODEC_VERSION_2_8, 31 + }; 32 + 21 33 struct lpass_macro { 22 34 struct device *macro_pd; 23 35 struct device *dcodec_pd; ··· 37 25 38 26 struct lpass_macro *lpass_macro_pds_init(struct device *dev); 39 27 void lpass_macro_pds_exit(struct lpass_macro *pds); 28 + void lpass_macro_set_codec_version(int version); 29 + int lpass_macro_get_codec_version(void); 30 + 31 + static inline const char *lpass_macro_get_codec_version_string(int version) 32 + { 33 + switch (version) { 34 + case LPASS_CODEC_VERSION_2_0: 35 + return "v2.0"; 36 + case LPASS_CODEC_VERSION_2_1: 37 + return "v2.1"; 38 + case LPASS_CODEC_VERSION_2_5: 39 + return "v2.5"; 40 + case LPASS_CODEC_VERSION_2_6: 41 + return "v2.6"; 42 + case LPASS_CODEC_VERSION_2_7: 43 + return "v2.7"; 44 + case LPASS_CODEC_VERSION_2_8: 45 + return "v2.8"; 46 + default: 47 + break; 48 + } 49 + return "NA"; 50 + } 40 51 41 52 #endif /* __LPASS_MACRO_COMMON_H__ */
+442 -153
sound/soc/codecs/lpass-rx-macro.c
··· 158 158 #define CDC_RX_INTR_CTRL_LEVEL0 (0x03C0) 159 159 #define CDC_RX_INTR_CTRL_BYPASS0 (0x03C8) 160 160 #define CDC_RX_INTR_CTRL_SET0 (0x03D0) 161 - #define CDC_RX_RXn_RX_PATH_CTL(n) (0x0400 + 0x80 * n) 161 + #define CDC_RX_RXn_RX_PATH_CTL(rx, n) (0x0400 + rx->rxn_reg_stride * n) 162 162 #define CDC_RX_RX0_RX_PATH_CTL (0x0400) 163 163 #define CDC_RX_PATH_RESET_EN_MASK BIT(6) 164 164 #define CDC_RX_PATH_CLK_EN_MASK BIT(5) ··· 166 166 #define CDC_RX_PATH_PGA_MUTE_MASK BIT(4) 167 167 #define CDC_RX_PATH_PGA_MUTE_ENABLE BIT(4) 168 168 #define CDC_RX_PATH_PCM_RATE_MASK GENMASK(3, 0) 169 - #define CDC_RX_RXn_RX_PATH_CFG0(n) (0x0404 + 0x80 * n) 169 + #define CDC_RX_RXn_RX_PATH_CFG0(rx, n) (0x0404 + rx->rxn_reg_stride * n) 170 170 #define CDC_RX_RXn_COMP_EN_MASK BIT(1) 171 171 #define CDC_RX_RX0_RX_PATH_CFG0 (0x0404) 172 172 #define CDC_RX_RXn_CLSH_EN_MASK BIT(6) 173 173 #define CDC_RX_DLY_ZN_EN_MASK BIT(3) 174 174 #define CDC_RX_DLY_ZN_ENABLE BIT(3) 175 175 #define CDC_RX_RXn_HD2_EN_MASK BIT(2) 176 - #define CDC_RX_RXn_RX_PATH_CFG1(n) (0x0408 + 0x80 * n) 176 + #define CDC_RX_RXn_RX_PATH_CFG1(rx, n) (0x0408 + rx->rxn_reg_stride * n) 177 177 #define CDC_RX_RXn_SIDETONE_EN_MASK BIT(4) 178 178 #define CDC_RX_RX0_RX_PATH_CFG1 (0x0408) 179 179 #define CDC_RX_RX0_HPH_L_EAR_SEL_MASK BIT(1) 180 - #define CDC_RX_RXn_RX_PATH_CFG2(n) (0x040C + 0x80 * n) 180 + #define CDC_RX_RXn_RX_PATH_CFG2(rx, n) (0x040C + rx->rxn_reg_stride * n) 181 181 #define CDC_RX_RXn_HPF_CUT_FREQ_MASK GENMASK(1, 0) 182 182 #define CDC_RX_RX0_RX_PATH_CFG2 (0x040C) 183 - #define CDC_RX_RXn_RX_PATH_CFG3(n) (0x0410 + 0x80 * n) 183 + #define CDC_RX_RXn_RX_PATH_CFG3(rx, n) (0x0410 + rx->rxn_reg_stride * n) 184 184 #define CDC_RX_RX0_RX_PATH_CFG3 (0x0410) 185 185 #define CDC_RX_DC_COEFF_SEL_MASK GENMASK(1, 0) 186 186 #define CDC_RX_DC_COEFF_SEL_TWO 0x2 187 - #define CDC_RX_RXn_RX_VOL_CTL(n) (0x0414 + 0x80 * n) 187 + #define CDC_RX_RXn_RX_VOL_CTL(rx, n) (0x0414 + rx->rxn_reg_stride * n) 188 188 #define 
CDC_RX_RX0_RX_VOL_CTL (0x0414) 189 - #define CDC_RX_RXn_RX_PATH_MIX_CTL(n) (0x0418 + 0x80 * n) 189 + #define CDC_RX_RXn_RX_PATH_MIX_CTL(rx, n) (0x0418 + rx->rxn_reg_stride * n) 190 190 #define CDC_RX_RXn_MIX_PCM_RATE_MASK GENMASK(3, 0) 191 191 #define CDC_RX_RXn_MIX_RESET_MASK BIT(6) 192 192 #define CDC_RX_RXn_MIX_RESET BIT(6) 193 193 #define CDC_RX_RXn_MIX_CLK_EN_MASK BIT(5) 194 194 #define CDC_RX_RX0_RX_PATH_MIX_CTL (0x0418) 195 195 #define CDC_RX_RX0_RX_PATH_MIX_CFG (0x041C) 196 - #define CDC_RX_RXn_RX_VOL_MIX_CTL(n) (0x0420 + 0x80 * n) 196 + #define CDC_RX_RXn_RX_VOL_MIX_CTL(rx, n) (0x0420 + rx->rxn_reg_stride * n) 197 197 #define CDC_RX_RX0_RX_VOL_MIX_CTL (0x0420) 198 198 #define CDC_RX_RX0_RX_PATH_SEC1 (0x0424) 199 199 #define CDC_RX_RX0_RX_PATH_SEC2 (0x0428) 200 200 #define CDC_RX_RX0_RX_PATH_SEC3 (0x042C) 201 + #define CDC_RX_RXn_RX_PATH_SEC3(rx, n) (0x042c + rx->rxn_reg_stride * n) 201 202 #define CDC_RX_RX0_RX_PATH_SEC4 (0x0430) 202 203 #define CDC_RX_RX0_RX_PATH_SEC7 (0x0434) 204 + #define CDC_RX_RXn_RX_PATH_SEC7(rx, n) (0x0434 + rx->rxn_reg_stride * n) 203 205 #define CDC_RX_DSM_OUT_DELAY_SEL_MASK GENMASK(2, 0) 204 206 #define CDC_RX_DSM_OUT_DELAY_TWO_SAMPLE 0x2 205 207 #define CDC_RX_RX0_RX_PATH_MIX_SEC0 (0x0438) 206 208 #define CDC_RX_RX0_RX_PATH_MIX_SEC1 (0x043C) 207 - #define CDC_RX_RXn_RX_PATH_DSM_CTL(n) (0x0440 + 0x80 * n) 209 + #define CDC_RX_RXn_RX_PATH_DSM_CTL(rx, n) (0x0440 + rx->rxn_reg_stride * n) 208 210 #define CDC_RX_RXn_DSM_CLK_EN_MASK BIT(0) 209 211 #define CDC_RX_RX0_RX_PATH_DSM_CTL (0x0440) 210 212 #define CDC_RX_RX0_RX_PATH_DSM_DATA1 (0x0444) ··· 215 213 #define CDC_RX_RX0_RX_PATH_DSM_DATA4 (0x0450) 216 214 #define CDC_RX_RX0_RX_PATH_DSM_DATA5 (0x0454) 217 215 #define CDC_RX_RX0_RX_PATH_DSM_DATA6 (0x0458) 216 + /* RX offsets prior to 2.5 codec version */ 218 217 #define CDC_RX_RX1_RX_PATH_CTL (0x0480) 219 218 #define CDC_RX_RX1_RX_PATH_CFG0 (0x0484) 220 219 #define CDC_RX_RX1_RX_PATH_CFG1 (0x0488) ··· 262 259 #define 
CDC_RX_RX2_RX_PATH_MIX_SEC0 (0x0544) 263 260 #define CDC_RX_RX2_RX_PATH_MIX_SEC1 (0x0548) 264 261 #define CDC_RX_RX2_RX_PATH_DSM_CTL (0x054C) 262 + 263 + /* LPASS CODEC version 2.5 rx reg offsets */ 264 + #define CDC_2_5_RX_RX1_RX_PATH_CTL (0x04c0) 265 + #define CDC_2_5_RX_RX1_RX_PATH_CFG0 (0x04c4) 266 + #define CDC_2_5_RX_RX1_RX_PATH_CFG1 (0x04c8) 267 + #define CDC_2_5_RX_RX1_RX_PATH_CFG2 (0x04cC) 268 + #define CDC_2_5_RX_RX1_RX_PATH_CFG3 (0x04d0) 269 + #define CDC_2_5_RX_RX1_RX_VOL_CTL (0x04d4) 270 + #define CDC_2_5_RX_RX1_RX_PATH_MIX_CTL (0x04d8) 271 + #define CDC_2_5_RX_RX1_RX_PATH_MIX_CFG (0x04dC) 272 + #define CDC_2_5_RX_RX1_RX_VOL_MIX_CTL (0x04e0) 273 + #define CDC_2_5_RX_RX1_RX_PATH_SEC1 (0x04e4) 274 + #define CDC_2_5_RX_RX1_RX_PATH_SEC2 (0x04e8) 275 + #define CDC_2_5_RX_RX1_RX_PATH_SEC3 (0x04eC) 276 + #define CDC_2_5_RX_RX1_RX_PATH_SEC4 (0x04f0) 277 + #define CDC_2_5_RX_RX1_RX_PATH_SEC7 (0x04f4) 278 + #define CDC_2_5_RX_RX1_RX_PATH_MIX_SEC0 (0x04f8) 279 + #define CDC_2_5_RX_RX1_RX_PATH_MIX_SEC1 (0x04fC) 280 + #define CDC_2_5_RX_RX1_RX_PATH_DSM_CTL (0x0500) 281 + #define CDC_2_5_RX_RX1_RX_PATH_DSM_DATA1 (0x0504) 282 + #define CDC_2_5_RX_RX1_RX_PATH_DSM_DATA2 (0x0508) 283 + #define CDC_2_5_RX_RX1_RX_PATH_DSM_DATA3 (0x050C) 284 + #define CDC_2_5_RX_RX1_RX_PATH_DSM_DATA4 (0x0510) 285 + #define CDC_2_5_RX_RX1_RX_PATH_DSM_DATA5 (0x0514) 286 + #define CDC_2_5_RX_RX1_RX_PATH_DSM_DATA6 (0x0518) 287 + 288 + #define CDC_2_5_RX_RX2_RX_PATH_CTL (0x0580) 289 + #define CDC_2_5_RX_RX2_RX_PATH_CFG0 (0x0584) 290 + #define CDC_2_5_RX_RX2_RX_PATH_CFG1 (0x0588) 291 + #define CDC_2_5_RX_RX2_RX_PATH_CFG2 (0x058C) 292 + #define CDC_2_5_RX_RX2_RX_PATH_CFG3 (0x0590) 293 + #define CDC_2_5_RX_RX2_RX_VOL_CTL (0x0594) 294 + #define CDC_2_5_RX_RX2_RX_PATH_MIX_CTL (0x0598) 295 + #define CDC_2_5_RX_RX2_RX_PATH_MIX_CFG (0x059C) 296 + #define CDC_2_5_RX_RX2_RX_VOL_MIX_CTL (0x05a0) 297 + #define CDC_2_5_RX_RX2_RX_PATH_SEC0 (0x05a4) 298 + #define CDC_2_5_RX_RX2_RX_PATH_SEC1 (0x05a8) 299 + 
#define CDC_2_5_RX_RX2_RX_PATH_SEC2 (0x05aC) 300 + #define CDC_2_5_RX_RX2_RX_PATH_SEC3 (0x05b0) 301 + #define CDC_2_5_RX_RX2_RX_PATH_SEC4 (0x05b4) 302 + #define CDC_2_5_RX_RX2_RX_PATH_SEC5 (0x05b8) 303 + #define CDC_2_5_RX_RX2_RX_PATH_SEC6 (0x05bC) 304 + #define CDC_2_5_RX_RX2_RX_PATH_SEC7 (0x05c0) 305 + #define CDC_2_5_RX_RX2_RX_PATH_MIX_SEC0 (0x05c4) 306 + #define CDC_2_5_RX_RX2_RX_PATH_MIX_SEC1 (0x05c8) 307 + #define CDC_2_5_RX_RX2_RX_PATH_DSM_CTL (0x05cC) 308 + 265 309 #define CDC_RX_IDLE_DETECT_PATH_CTL (0x0780) 266 310 #define CDC_RX_IDLE_DETECT_CFG0 (0x0784) 267 311 #define CDC_RX_IDLE_DETECT_CFG1 (0x0788) ··· 642 592 int rx_mclk_users; 643 593 int clsh_users; 644 594 int rx_mclk_cnt; 595 + int codec_version; 596 + int rxn_reg_stride; 645 597 bool is_ear_mode_on; 646 598 bool hph_pwr_mode; 647 599 bool hph_hd2_mode; ··· 804 752 static SOC_ENUM_SINGLE_DECL(rx_int0_dem_inp_enum, CDC_RX_RX0_RX_PATH_CFG1, 0, 805 753 rx_int_dem_inp_mux_text); 806 754 static SOC_ENUM_SINGLE_DECL(rx_int1_dem_inp_enum, CDC_RX_RX1_RX_PATH_CFG1, 0, 755 + rx_int_dem_inp_mux_text); 756 + static SOC_ENUM_SINGLE_DECL(rx_2_5_int1_dem_inp_enum, CDC_2_5_RX_RX1_RX_PATH_CFG1, 0, 807 757 rx_int_dem_inp_mux_text); 808 758 809 759 static SOC_ENUM_SINGLE_DECL(rx_macro_rx0_enum, SND_SOC_NOPM, 0, rx_macro_mux_text); ··· 1024 970 { CDC_RX_RX0_RX_PATH_DSM_DATA4, 0x55 }, 1025 971 { CDC_RX_RX0_RX_PATH_DSM_DATA5, 0x55 }, 1026 972 { CDC_RX_RX0_RX_PATH_DSM_DATA6, 0x55 }, 1027 - { CDC_RX_RX1_RX_PATH_CTL, 0x04 }, 1028 - { CDC_RX_RX1_RX_PATH_CFG0, 0x00 }, 1029 - { CDC_RX_RX1_RX_PATH_CFG1, 0x64 }, 1030 - { CDC_RX_RX1_RX_PATH_CFG2, 0x8F }, 1031 - { CDC_RX_RX1_RX_PATH_CFG3, 0x00 }, 1032 - { CDC_RX_RX1_RX_VOL_CTL, 0x00 }, 1033 - { CDC_RX_RX1_RX_PATH_MIX_CTL, 0x04 }, 1034 - { CDC_RX_RX1_RX_PATH_MIX_CFG, 0x7E }, 1035 - { CDC_RX_RX1_RX_VOL_MIX_CTL, 0x00 }, 1036 - { CDC_RX_RX1_RX_PATH_SEC1, 0x08 }, 1037 - { CDC_RX_RX1_RX_PATH_SEC2, 0x00 }, 1038 - { CDC_RX_RX1_RX_PATH_SEC3, 0x00 }, 1039 - { CDC_RX_RX1_RX_PATH_SEC4, 
	  0x00 },
-	{ CDC_RX_RX1_RX_PATH_SEC7, 0x00 },
-	{ CDC_RX_RX1_RX_PATH_MIX_SEC0, 0x08 },
-	{ CDC_RX_RX1_RX_PATH_MIX_SEC1, 0x00 },
-	{ CDC_RX_RX1_RX_PATH_DSM_CTL, 0x08 },
-	{ CDC_RX_RX1_RX_PATH_DSM_DATA1, 0x00 },
-	{ CDC_RX_RX1_RX_PATH_DSM_DATA2, 0x00 },
-	{ CDC_RX_RX1_RX_PATH_DSM_DATA3, 0x00 },
-	{ CDC_RX_RX1_RX_PATH_DSM_DATA4, 0x55 },
-	{ CDC_RX_RX1_RX_PATH_DSM_DATA5, 0x55 },
-	{ CDC_RX_RX1_RX_PATH_DSM_DATA6, 0x55 },
-	{ CDC_RX_RX2_RX_PATH_CTL, 0x04 },
-	{ CDC_RX_RX2_RX_PATH_CFG0, 0x00 },
-	{ CDC_RX_RX2_RX_PATH_CFG1, 0x64 },
-	{ CDC_RX_RX2_RX_PATH_CFG2, 0x8F },
-	{ CDC_RX_RX2_RX_PATH_CFG3, 0x00 },
-	{ CDC_RX_RX2_RX_VOL_CTL, 0x00 },
-	{ CDC_RX_RX2_RX_PATH_MIX_CTL, 0x04 },
-	{ CDC_RX_RX2_RX_PATH_MIX_CFG, 0x7E },
-	{ CDC_RX_RX2_RX_VOL_MIX_CTL, 0x00 },
-	{ CDC_RX_RX2_RX_PATH_SEC0, 0x04 },
-	{ CDC_RX_RX2_RX_PATH_SEC1, 0x08 },
-	{ CDC_RX_RX2_RX_PATH_SEC2, 0x00 },
-	{ CDC_RX_RX2_RX_PATH_SEC3, 0x00 },
-	{ CDC_RX_RX2_RX_PATH_SEC4, 0x00 },
-	{ CDC_RX_RX2_RX_PATH_SEC5, 0x00 },
-	{ CDC_RX_RX2_RX_PATH_SEC6, 0x00 },
-	{ CDC_RX_RX2_RX_PATH_SEC7, 0x00 },
-	{ CDC_RX_RX2_RX_PATH_MIX_SEC0, 0x08 },
-	{ CDC_RX_RX2_RX_PATH_MIX_SEC1, 0x00 },
-	{ CDC_RX_RX2_RX_PATH_DSM_CTL, 0x00 },
 	{ CDC_RX_IDLE_DETECT_PATH_CTL, 0x00 },
 	{ CDC_RX_IDLE_DETECT_CFG0, 0x07 },
 	{ CDC_RX_IDLE_DETECT_CFG1, 0x3C },
···
 	{ CDC_RX_DSD1_CFG2, 0x96 },
 };
 
+static const struct reg_default rx_2_5_defaults[] = {
+	{ CDC_2_5_RX_RX1_RX_PATH_CTL, 0x04 },
+	{ CDC_2_5_RX_RX1_RX_PATH_CFG0, 0x00 },
+	{ CDC_2_5_RX_RX1_RX_PATH_CFG1, 0x64 },
+	{ CDC_2_5_RX_RX1_RX_PATH_CFG2, 0x8F },
+	{ CDC_2_5_RX_RX1_RX_PATH_CFG3, 0x00 },
+	{ CDC_2_5_RX_RX1_RX_VOL_CTL, 0x00 },
+	{ CDC_2_5_RX_RX1_RX_PATH_MIX_CTL, 0x04 },
+	{ CDC_2_5_RX_RX1_RX_PATH_MIX_CFG, 0x7E },
+	{ CDC_2_5_RX_RX1_RX_VOL_MIX_CTL, 0x00 },
+	{ CDC_2_5_RX_RX1_RX_PATH_SEC1, 0x08 },
+	{ CDC_2_5_RX_RX1_RX_PATH_SEC2, 0x00 },
+	{ CDC_2_5_RX_RX1_RX_PATH_SEC3, 0x00 },
+	{ CDC_2_5_RX_RX1_RX_PATH_SEC4, 0x00 },
+	{ CDC_2_5_RX_RX1_RX_PATH_SEC7, 0x00 },
+	{ CDC_2_5_RX_RX1_RX_PATH_MIX_SEC0, 0x08 },
+	{ CDC_2_5_RX_RX1_RX_PATH_MIX_SEC1, 0x00 },
+	{ CDC_2_5_RX_RX1_RX_PATH_DSM_CTL, 0x08 },
+	{ CDC_2_5_RX_RX1_RX_PATH_DSM_DATA1, 0x00 },
+	{ CDC_2_5_RX_RX1_RX_PATH_DSM_DATA2, 0x00 },
+	{ CDC_2_5_RX_RX1_RX_PATH_DSM_DATA3, 0x00 },
+	{ CDC_2_5_RX_RX1_RX_PATH_DSM_DATA4, 0x55 },
+	{ CDC_2_5_RX_RX1_RX_PATH_DSM_DATA5, 0x55 },
+	{ CDC_2_5_RX_RX1_RX_PATH_DSM_DATA6, 0x55 },
+	{ CDC_2_5_RX_RX2_RX_PATH_CTL, 0x04 },
+	{ CDC_2_5_RX_RX2_RX_PATH_CFG0, 0x00 },
+	{ CDC_2_5_RX_RX2_RX_PATH_CFG1, 0x64 },
+	{ CDC_2_5_RX_RX2_RX_PATH_CFG2, 0x8F },
+	{ CDC_2_5_RX_RX2_RX_PATH_CFG3, 0x00 },
+	{ CDC_2_5_RX_RX2_RX_VOL_CTL, 0x00 },
+	{ CDC_2_5_RX_RX2_RX_PATH_MIX_CTL, 0x04 },
+	{ CDC_2_5_RX_RX2_RX_PATH_MIX_CFG, 0x7E },
+	{ CDC_2_5_RX_RX2_RX_VOL_MIX_CTL, 0x00 },
+	{ CDC_2_5_RX_RX2_RX_PATH_SEC0, 0x04 },
+	{ CDC_2_5_RX_RX2_RX_PATH_SEC1, 0x08 },
+	{ CDC_2_5_RX_RX2_RX_PATH_SEC2, 0x00 },
+	{ CDC_2_5_RX_RX2_RX_PATH_SEC3, 0x00 },
+	{ CDC_2_5_RX_RX2_RX_PATH_SEC4, 0x00 },
+	{ CDC_2_5_RX_RX2_RX_PATH_SEC5, 0x00 },
+	{ CDC_2_5_RX_RX2_RX_PATH_SEC6, 0x00 },
+	{ CDC_2_5_RX_RX2_RX_PATH_SEC7, 0x00 },
+	{ CDC_2_5_RX_RX2_RX_PATH_MIX_SEC0, 0x08 },
+	{ CDC_2_5_RX_RX2_RX_PATH_MIX_SEC1, 0x00 },
+	{ CDC_2_5_RX_RX2_RX_PATH_DSM_CTL, 0x00 },
+};
+
+static const struct reg_default rx_pre_2_5_defaults[] = {
+	{ CDC_RX_RX1_RX_PATH_CTL, 0x04 },
+	{ CDC_RX_RX1_RX_PATH_CFG0, 0x00 },
+	{ CDC_RX_RX1_RX_PATH_CFG1, 0x64 },
+	{ CDC_RX_RX1_RX_PATH_CFG2, 0x8F },
+	{ CDC_RX_RX1_RX_PATH_CFG3, 0x00 },
+	{ CDC_RX_RX1_RX_VOL_CTL, 0x00 },
+	{ CDC_RX_RX1_RX_PATH_MIX_CTL, 0x04 },
+	{ CDC_RX_RX1_RX_PATH_MIX_CFG, 0x7E },
+	{ CDC_RX_RX1_RX_VOL_MIX_CTL, 0x00 },
+	{ CDC_RX_RX1_RX_PATH_SEC1, 0x08 },
+	{ CDC_RX_RX1_RX_PATH_SEC2, 0x00 },
+	{ CDC_RX_RX1_RX_PATH_SEC3, 0x00 },
+	{ CDC_RX_RX1_RX_PATH_SEC4, 0x00 },
+	{ CDC_RX_RX1_RX_PATH_SEC7, 0x00 },
+	{ CDC_RX_RX1_RX_PATH_MIX_SEC0, 0x08 },
+	{ CDC_RX_RX1_RX_PATH_MIX_SEC1, 0x00 },
+	{ CDC_RX_RX1_RX_PATH_DSM_CTL, 0x08 },
+	{ CDC_RX_RX1_RX_PATH_DSM_DATA1, 0x00 },
+	{ CDC_RX_RX1_RX_PATH_DSM_DATA2, 0x00 },
+	{ CDC_RX_RX1_RX_PATH_DSM_DATA3, 0x00 },
+	{ CDC_RX_RX1_RX_PATH_DSM_DATA4, 0x55 },
+	{ CDC_RX_RX1_RX_PATH_DSM_DATA5, 0x55 },
+	{ CDC_RX_RX1_RX_PATH_DSM_DATA6, 0x55 },
+	{ CDC_RX_RX2_RX_PATH_CTL, 0x04 },
+	{ CDC_RX_RX2_RX_PATH_CFG0, 0x00 },
+	{ CDC_RX_RX2_RX_PATH_CFG1, 0x64 },
+	{ CDC_RX_RX2_RX_PATH_CFG2, 0x8F },
+	{ CDC_RX_RX2_RX_PATH_CFG3, 0x00 },
+	{ CDC_RX_RX2_RX_VOL_CTL, 0x00 },
+	{ CDC_RX_RX2_RX_PATH_MIX_CTL, 0x04 },
+	{ CDC_RX_RX2_RX_PATH_MIX_CFG, 0x7E },
+	{ CDC_RX_RX2_RX_VOL_MIX_CTL, 0x00 },
+	{ CDC_RX_RX2_RX_PATH_SEC0, 0x04 },
+	{ CDC_RX_RX2_RX_PATH_SEC1, 0x08 },
+	{ CDC_RX_RX2_RX_PATH_SEC2, 0x00 },
+	{ CDC_RX_RX2_RX_PATH_SEC3, 0x00 },
+	{ CDC_RX_RX2_RX_PATH_SEC4, 0x00 },
+	{ CDC_RX_RX2_RX_PATH_SEC5, 0x00 },
+	{ CDC_RX_RX2_RX_PATH_SEC6, 0x00 },
+	{ CDC_RX_RX2_RX_PATH_SEC7, 0x00 },
+	{ CDC_RX_RX2_RX_PATH_MIX_SEC0, 0x08 },
+	{ CDC_RX_RX2_RX_PATH_MIX_SEC1, 0x00 },
+	{ CDC_RX_RX2_RX_PATH_DSM_CTL, 0x00 },
+
+};
+
 static bool rx_is_wronly_register(struct device *dev,
 				  unsigned int reg)
 {
···
 	return false;
 }
 
+static bool rx_pre_2_5_is_rw_register(struct device *dev, unsigned int reg)
+{
+	switch (reg) {
+	case CDC_RX_RX1_RX_PATH_CTL:
+	case CDC_RX_RX1_RX_PATH_CFG0:
+	case CDC_RX_RX1_RX_PATH_CFG1:
+	case CDC_RX_RX1_RX_PATH_CFG2:
+	case CDC_RX_RX1_RX_PATH_CFG3:
+	case CDC_RX_RX1_RX_VOL_CTL:
+	case CDC_RX_RX1_RX_PATH_MIX_CTL:
+	case CDC_RX_RX1_RX_PATH_MIX_CFG:
+	case CDC_RX_RX1_RX_VOL_MIX_CTL:
+	case CDC_RX_RX1_RX_PATH_SEC1:
+	case CDC_RX_RX1_RX_PATH_SEC2:
+	case CDC_RX_RX1_RX_PATH_SEC3:
+	case CDC_RX_RX1_RX_PATH_SEC4:
+	case CDC_RX_RX1_RX_PATH_SEC7:
+	case CDC_RX_RX1_RX_PATH_MIX_SEC0:
+	case CDC_RX_RX1_RX_PATH_MIX_SEC1:
+	case CDC_RX_RX1_RX_PATH_DSM_CTL:
+	case CDC_RX_RX1_RX_PATH_DSM_DATA1:
+	case CDC_RX_RX1_RX_PATH_DSM_DATA2:
+	case CDC_RX_RX1_RX_PATH_DSM_DATA3:
+	case CDC_RX_RX1_RX_PATH_DSM_DATA4:
+	case CDC_RX_RX1_RX_PATH_DSM_DATA5:
+	case CDC_RX_RX1_RX_PATH_DSM_DATA6:
+	case CDC_RX_RX2_RX_PATH_CTL:
+	case CDC_RX_RX2_RX_PATH_CFG0:
+	case CDC_RX_RX2_RX_PATH_CFG1:
+	case CDC_RX_RX2_RX_PATH_CFG2:
+	case CDC_RX_RX2_RX_PATH_CFG3:
+	case CDC_RX_RX2_RX_VOL_CTL:
+	case CDC_RX_RX2_RX_PATH_MIX_CTL:
+	case CDC_RX_RX2_RX_PATH_MIX_CFG:
+	case CDC_RX_RX2_RX_VOL_MIX_CTL:
+	case CDC_RX_RX2_RX_PATH_SEC0:
+	case CDC_RX_RX2_RX_PATH_SEC1:
+	case CDC_RX_RX2_RX_PATH_SEC2:
+	case CDC_RX_RX2_RX_PATH_SEC3:
+	case CDC_RX_RX2_RX_PATH_SEC4:
+	case CDC_RX_RX2_RX_PATH_SEC5:
+	case CDC_RX_RX2_RX_PATH_SEC6:
+	case CDC_RX_RX2_RX_PATH_SEC7:
+	case CDC_RX_RX2_RX_PATH_MIX_SEC0:
+	case CDC_RX_RX2_RX_PATH_MIX_SEC1:
+	case CDC_RX_RX2_RX_PATH_DSM_CTL:
+		return true;
+	}
+
+	return false;
+}
+
+static bool rx_2_5_is_rw_register(struct device *dev, unsigned int reg)
+{
+	switch (reg) {
+	case CDC_2_5_RX_RX1_RX_PATH_CTL:
+	case CDC_2_5_RX_RX1_RX_PATH_CFG0:
+	case CDC_2_5_RX_RX1_RX_PATH_CFG1:
+	case CDC_2_5_RX_RX1_RX_PATH_CFG2:
+	case CDC_2_5_RX_RX1_RX_PATH_CFG3:
+	case CDC_2_5_RX_RX1_RX_VOL_CTL:
+	case CDC_2_5_RX_RX1_RX_PATH_MIX_CTL:
+	case CDC_2_5_RX_RX1_RX_PATH_MIX_CFG:
+	case CDC_2_5_RX_RX1_RX_VOL_MIX_CTL:
+	case CDC_2_5_RX_RX1_RX_PATH_SEC1:
+	case CDC_2_5_RX_RX1_RX_PATH_SEC2:
+	case CDC_2_5_RX_RX1_RX_PATH_SEC3:
+	case CDC_2_5_RX_RX1_RX_PATH_SEC4:
+	case CDC_2_5_RX_RX1_RX_PATH_SEC7:
+	case CDC_2_5_RX_RX1_RX_PATH_MIX_SEC0:
+	case CDC_2_5_RX_RX1_RX_PATH_MIX_SEC1:
+	case CDC_2_5_RX_RX1_RX_PATH_DSM_CTL:
+	case CDC_2_5_RX_RX1_RX_PATH_DSM_DATA1:
+	case CDC_2_5_RX_RX1_RX_PATH_DSM_DATA2:
+	case CDC_2_5_RX_RX1_RX_PATH_DSM_DATA3:
+	case CDC_2_5_RX_RX1_RX_PATH_DSM_DATA4:
+	case CDC_2_5_RX_RX1_RX_PATH_DSM_DATA5:
+	case CDC_2_5_RX_RX1_RX_PATH_DSM_DATA6:
+	case CDC_2_5_RX_RX2_RX_PATH_CTL:
+	case CDC_2_5_RX_RX2_RX_PATH_CFG0:
+	case CDC_2_5_RX_RX2_RX_PATH_CFG1:
+	case CDC_2_5_RX_RX2_RX_PATH_CFG2:
+	case CDC_2_5_RX_RX2_RX_PATH_CFG3:
+	case CDC_2_5_RX_RX2_RX_VOL_CTL:
+	case CDC_2_5_RX_RX2_RX_PATH_MIX_CTL:
+	case CDC_2_5_RX_RX2_RX_PATH_MIX_CFG:
+	case CDC_2_5_RX_RX2_RX_VOL_MIX_CTL:
+	case CDC_2_5_RX_RX2_RX_PATH_SEC0:
+	case CDC_2_5_RX_RX2_RX_PATH_SEC1:
+	case CDC_2_5_RX_RX2_RX_PATH_SEC2:
+	case CDC_2_5_RX_RX2_RX_PATH_SEC3:
+	case CDC_2_5_RX_RX2_RX_PATH_SEC4:
+	case CDC_2_5_RX_RX2_RX_PATH_SEC5:
+	case CDC_2_5_RX_RX2_RX_PATH_SEC6:
+	case CDC_2_5_RX_RX2_RX_PATH_SEC7:
+	case CDC_2_5_RX_RX2_RX_PATH_MIX_SEC0:
+	case CDC_2_5_RX_RX2_RX_PATH_MIX_SEC1:
+	case CDC_2_5_RX_RX2_RX_PATH_DSM_CTL:
+		return true;
+	}
+
+	return false;
+}
+
 static bool rx_is_rw_register(struct device *dev, unsigned int reg)
 {
+	struct rx_macro *rx = dev_get_drvdata(dev);
+
 	switch (reg) {
 	case CDC_RX_TOP_TOP_CFG0:
 	case CDC_RX_TOP_SWR_CTRL:
···
 	case CDC_RX_RX0_RX_PATH_DSM_DATA4:
 	case CDC_RX_RX0_RX_PATH_DSM_DATA5:
 	case CDC_RX_RX0_RX_PATH_DSM_DATA6:
-	case CDC_RX_RX1_RX_PATH_CTL:
-	case CDC_RX_RX1_RX_PATH_CFG0:
-	case CDC_RX_RX1_RX_PATH_CFG1:
-	case CDC_RX_RX1_RX_PATH_CFG2:
-	case CDC_RX_RX1_RX_PATH_CFG3:
-	case CDC_RX_RX1_RX_VOL_CTL:
-	case CDC_RX_RX1_RX_PATH_MIX_CTL:
-	case CDC_RX_RX1_RX_PATH_MIX_CFG:
-	case CDC_RX_RX1_RX_VOL_MIX_CTL:
-	case CDC_RX_RX1_RX_PATH_SEC1:
-	case CDC_RX_RX1_RX_PATH_SEC2:
-	case CDC_RX_RX1_RX_PATH_SEC3:
-	case CDC_RX_RX1_RX_PATH_SEC4:
-	case CDC_RX_RX1_RX_PATH_SEC7:
-	case CDC_RX_RX1_RX_PATH_MIX_SEC0:
-	case CDC_RX_RX1_RX_PATH_MIX_SEC1:
-	case CDC_RX_RX1_RX_PATH_DSM_CTL:
-	case CDC_RX_RX1_RX_PATH_DSM_DATA1:
-	case CDC_RX_RX1_RX_PATH_DSM_DATA2:
-	case CDC_RX_RX1_RX_PATH_DSM_DATA3:
-	case CDC_RX_RX1_RX_PATH_DSM_DATA4:
-	case CDC_RX_RX1_RX_PATH_DSM_DATA5:
-	case CDC_RX_RX1_RX_PATH_DSM_DATA6:
-	case CDC_RX_RX2_RX_PATH_CTL:
-	case CDC_RX_RX2_RX_PATH_CFG0:
-	case CDC_RX_RX2_RX_PATH_CFG1:
-	case CDC_RX_RX2_RX_PATH_CFG2:
-	case CDC_RX_RX2_RX_PATH_CFG3:
-	case CDC_RX_RX2_RX_VOL_CTL:
-	case CDC_RX_RX2_RX_PATH_MIX_CTL:
-	case CDC_RX_RX2_RX_PATH_MIX_CFG:
-	case CDC_RX_RX2_RX_VOL_MIX_CTL:
-	case CDC_RX_RX2_RX_PATH_SEC0:
-	case CDC_RX_RX2_RX_PATH_SEC1:
-	case CDC_RX_RX2_RX_PATH_SEC2:
-	case CDC_RX_RX2_RX_PATH_SEC3:
-	case CDC_RX_RX2_RX_PATH_SEC4:
-	case CDC_RX_RX2_RX_PATH_SEC5:
-	case CDC_RX_RX2_RX_PATH_SEC6:
-	case CDC_RX_RX2_RX_PATH_SEC7:
-	case CDC_RX_RX2_RX_PATH_MIX_SEC0:
-	case CDC_RX_RX2_RX_PATH_MIX_SEC1:
-	case CDC_RX_RX2_RX_PATH_DSM_CTL:
 	case CDC_RX_IDLE_DETECT_PATH_CTL:
 	case CDC_RX_IDLE_DETECT_CFG0:
 	case CDC_RX_IDLE_DETECT_CFG1:
···
 		return true;
 	}
 
+	switch (rx->codec_version) {
+	case LPASS_CODEC_VERSION_1_0:
+	case LPASS_CODEC_VERSION_1_1:
+	case LPASS_CODEC_VERSION_1_2:
+	case LPASS_CODEC_VERSION_2_0:
+		return rx_pre_2_5_is_rw_register(dev, reg);
+	case LPASS_CODEC_VERSION_2_5:
+	case LPASS_CODEC_VERSION_2_6:
+	case LPASS_CODEC_VERSION_2_7:
+	case LPASS_CODEC_VERSION_2_8:
+		return rx_2_5_is_rw_register(dev, reg);
+	default:
+		break;
+	}
+
 	return false;
 }
···
 	return rx_is_rw_register(dev, reg);
 }
 
-static const struct regmap_config rx_regmap_config = {
+static struct regmap_config rx_regmap_config = {
 	.name = "rx_macro",
 	.reg_bits = 16,
 	.val_bits = 32, /* 8 but with 32 bit read/write */
 	.reg_stride = 4,
 	.cache_type = REGCACHE_FLAT,
-	.reg_defaults = rx_defaults,
-	.num_reg_defaults = ARRAY_SIZE(rx_defaults),
 	.max_register = RX_MAX_OFFSET,
 	.writeable_reg = rx_is_writeable_register,
 	.volatile_reg = rx_is_volatile_register,
···
 {
 	struct snd_soc_dapm_widget *widget = snd_soc_dapm_kcontrol_widget(kcontrol);
 	struct snd_soc_component *component = snd_soc_dapm_to_component(widget->dapm);
+	struct rx_macro *rx = snd_soc_component_get_drvdata(component);
 	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
 	unsigned short look_ahead_dly_reg;
 	unsigned int val;
 
 	val = ucontrol->value.enumerated.item[0];
 
-	if (e->reg == CDC_RX_RX0_RX_PATH_CFG1)
-		look_ahead_dly_reg = CDC_RX_RX0_RX_PATH_CFG0;
-	else if (e->reg == CDC_RX_RX1_RX_PATH_CFG1)
-		look_ahead_dly_reg = CDC_RX_RX1_RX_PATH_CFG0;
+	if (e->reg == CDC_RX_RXn_RX_PATH_CFG1(rx, 0))
+		look_ahead_dly_reg = CDC_RX_RXn_RX_PATH_CFG0(rx, 0);
+	else if (e->reg == CDC_RX_RXn_RX_PATH_CFG1(rx, 1))
+		look_ahead_dly_reg = CDC_RX_RXn_RX_PATH_CFG0(rx, 1);
 
 	/* Set Look Ahead Delay */
 	if (val)
···
 		  snd_soc_dapm_get_enum_double, rx_macro_int_dem_inp_mux_put);
 static const struct snd_kcontrol_new rx_int1_dem_inp_mux =
 	SOC_DAPM_ENUM_EXT("rx_int1_dem_inp", rx_int1_dem_inp_enum,
+		  snd_soc_dapm_get_enum_double, rx_macro_int_dem_inp_mux_put);
+
+static const struct snd_kcontrol_new rx_2_5_int1_dem_inp_mux =
+	SOC_DAPM_ENUM_EXT("rx_int1_dem_inp", rx_2_5_int1_dem_inp_enum,
 		  snd_soc_dapm_get_enum_double, rx_macro_int_dem_inp_mux_put);
 
 static int rx_macro_set_prim_interpolator_rate(struct snd_soc_dai *dai,
···
 		if ((inp0_sel == int_1_mix1_inp + INTn_1_INP_SEL_RX0) ||
 		    (inp1_sel == int_1_mix1_inp + INTn_1_INP_SEL_RX0) ||
 		    (inp2_sel == int_1_mix1_inp + INTn_1_INP_SEL_RX0)) {
-			int_fs_reg = CDC_RX_RXn_RX_PATH_CTL(j);
+			int_fs_reg = CDC_RX_RXn_RX_PATH_CTL(rx, j);
 			/* sample_rate is in Hz */
 			snd_soc_component_update_bits(component, int_fs_reg,
 						      CDC_RX_PATH_PCM_RATE_MASK,
···
 					       CDC_RX_INTX_2_SEL_MASK);
 
 		if (int_mux_cfg1_val == int_2_inp + INTn_2_INP_SEL_RX0) {
-			int_fs_reg = CDC_RX_RXn_RX_PATH_MIX_CTL(j);
+			int_fs_reg = CDC_RX_RXn_RX_PATH_MIX_CTL(rx, j);
 			snd_soc_component_update_bits(component, int_fs_reg,
 						      CDC_RX_RXn_MIX_PCM_RATE_MASK,
 						      rate_reg_val);
···
 static int rx_macro_digital_mute(struct snd_soc_dai *dai, int mute, int stream)
 {
 	struct snd_soc_component *component = dai->component;
+	struct rx_macro *rx = snd_soc_component_get_drvdata(component);
 	uint16_t j, reg, mix_reg, dsm_reg;
 	u16 int_mux_cfg0, int_mux_cfg1;
 	u8 int_mux_cfg0_val, int_mux_cfg1_val;
···
 	case RX_MACRO_AIF3_PB:
 	case RX_MACRO_AIF4_PB:
 		for (j = 0; j < INTERP_MAX; j++) {
-			reg = CDC_RX_RXn_RX_PATH_CTL(j);
-			mix_reg = CDC_RX_RXn_RX_PATH_MIX_CTL(j);
-			dsm_reg = CDC_RX_RXn_RX_PATH_DSM_CTL(j);
+			reg = CDC_RX_RXn_RX_PATH_CTL(rx, j);
+			mix_reg = CDC_RX_RXn_RX_PATH_MIX_CTL(rx, j);
+			dsm_reg = CDC_RX_RXn_RX_PATH_DSM_CTL(rx, j);
 
 			if (mute) {
 				snd_soc_component_update_bits(component, reg,
···
 			}
 
 			if (j == INTERP_AUX)
-				dsm_reg = CDC_RX_RX2_RX_PATH_DSM_CTL;
+				dsm_reg = CDC_RX_RXn_RX_PATH_DSM_CTL(rx, 2);
 
 			int_mux_cfg0 = CDC_RX_INP_MUX_RX_INT0_CFG0 + j * 8;
 			int_mux_cfg1 = int_mux_cfg0 + 4;
···
 				      int event)
 {
 	struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
+	struct rx_macro *rx = snd_soc_component_get_drvdata(component);
 	u16 gain_reg, reg;
 
-	reg = CDC_RX_RXn_RX_PATH_CTL(w->shift);
-	gain_reg = CDC_RX_RXn_RX_VOL_CTL(w->shift);
+	reg = CDC_RX_RXn_RX_PATH_CTL(rx, w->shift);
+	gain_reg = CDC_RX_RXn_RX_VOL_CTL(rx, w->shift);
 
 	switch (event) {
 	case SND_SOC_DAPM_PRE_PMU:
···
 	if (comp == INTERP_AUX)
 		return 0;
 
-	pcm_rate = snd_soc_component_read(component, CDC_RX_RXn_RX_PATH_CTL(comp)) & 0x0F;
+	pcm_rate = snd_soc_component_read(component, CDC_RX_RXn_RX_PATH_CTL(rx, comp)) & 0x0F;
 	if (pcm_rate < 0x06)
 		val = 0x03;
 	else if (pcm_rate < 0x08)
···
 		val = 0x00;
 
 	if (SND_SOC_DAPM_EVENT_ON(event))
-		snd_soc_component_update_bits(component, CDC_RX_RXn_RX_PATH_CFG3(comp),
+		snd_soc_component_update_bits(component, CDC_RX_RXn_RX_PATH_CFG3(rx, comp),
 					      CDC_RX_DC_COEFF_SEL_MASK, val);
 
 	if (SND_SOC_DAPM_EVENT_OFF(event))
-		snd_soc_component_update_bits(component, CDC_RX_RXn_RX_PATH_CFG3(comp),
+		snd_soc_component_update_bits(component, CDC_RX_RXn_RX_PATH_CFG3(rx, comp),
 					      CDC_RX_DC_COEFF_SEL_MASK, 0x3);
 	if (!rx->comp_enabled[comp])
 		return 0;
···
 					      CDC_RX_COMPANDERn_SOFT_RST_MASK, 0x1);
 		snd_soc_component_write_field(component, CDC_RX_COMPANDERn_CTL0(comp),
 					      CDC_RX_COMPANDERn_SOFT_RST_MASK, 0x0);
-		snd_soc_component_write_field(component, CDC_RX_RXn_RX_PATH_CFG0(comp),
+		snd_soc_component_write_field(component, CDC_RX_RXn_RX_PATH_CFG0(rx, comp),
 					      CDC_RX_RXn_COMP_EN_MASK, 0x1);
 	}
 
 	if (SND_SOC_DAPM_EVENT_OFF(event)) {
 		snd_soc_component_write_field(component, CDC_RX_COMPANDERn_CTL0(comp),
 					      CDC_RX_COMPANDERn_HALT_MASK, 0x1);
-		snd_soc_component_write_field(component, CDC_RX_RXn_RX_PATH_CFG0(comp),
+		snd_soc_component_write_field(component, CDC_RX_RXn_RX_PATH_CFG0(rx, comp),
 					      CDC_RX_RXn_COMP_EN_MASK, 0x0);
 		snd_soc_component_write_field(component, CDC_RX_COMPANDERn_CTL0(comp),
 					      CDC_RX_COMPANDERn_CLK_EN_MASK, 0x0);
···
 		/* Update Aux HPF control */
 		if (!rx->is_aux_hpf_on)
 			snd_soc_component_update_bits(component,
-						      CDC_RX_RX2_RX_PATH_CFG1, 0x04, 0x00);
+						      CDC_RX_RXn_RX_PATH_CFG1(rx, 2), 0x04, 0x00);
 	}
 
 	if (SND_SOC_DAPM_EVENT_OFF(event)) {
 		/* Reset to default (HPF=ON) */
 		snd_soc_component_update_bits(component,
-					      CDC_RX_RX2_RX_PATH_CFG1, 0x04, 0x04);
+					      CDC_RX_RXn_RX_PATH_CFG1(rx, 2), 0x04, 0x04);
 	}
 
 	return 0;
···
 					      CDC_RX_CLSH_DECAY_CTRL,
 					      CDC_RX_CLSH_DECAY_RATE_MASK, 0x0);
 		snd_soc_component_write_field(component,
-					      CDC_RX_RX0_RX_PATH_CFG0,
+					      CDC_RX_RXn_RX_PATH_CFG0(rx, 0),
 					      CDC_RX_RXn_CLSH_EN_MASK, 0x1);
 		break;
 	case INTERP_HPHR:
···
 					      CDC_RX_CLSH_DECAY_CTRL,
 					      CDC_RX_CLSH_DECAY_RATE_MASK, 0x0);
 		snd_soc_component_write_field(component,
-					      CDC_RX_RX1_RX_PATH_CFG0,
+					      CDC_RX_RXn_RX_PATH_CFG0(rx, 1),
 					      CDC_RX_RXn_CLSH_EN_MASK, 0x1);
 		break;
 	case INTERP_AUX:
 		snd_soc_component_update_bits(component,
-					      CDC_RX_RX2_RX_PATH_CFG0,
+					      CDC_RX_RXn_RX_PATH_CFG0(rx, 2),
 					      CDC_RX_RX2_DLY_Z_EN_MASK, 1);
 		snd_soc_component_write_field(component,
-					      CDC_RX_RX2_RX_PATH_CFG0,
+					      CDC_RX_RXn_RX_PATH_CFG0(rx, 2),
 					      CDC_RX_RX2_CLSH_EN_MASK, 1);
 		break;
 	}
···
 static void rx_macro_hd2_control(struct snd_soc_component *component,
 				 u16 interp_idx, int event)
 {
+	struct rx_macro *rx = snd_soc_component_get_drvdata(component);
 	u16 hd2_scale_reg, hd2_enable_reg;
 
 	switch (interp_idx) {
 	case INTERP_HPHL:
-		hd2_scale_reg = CDC_RX_RX0_RX_PATH_SEC3;
-		hd2_enable_reg = CDC_RX_RX0_RX_PATH_CFG0;
+		hd2_scale_reg = CDC_RX_RXn_RX_PATH_SEC3(rx, 0);
+		hd2_enable_reg = CDC_RX_RXn_RX_PATH_CFG0(rx, 0);
 		break;
 	case INTERP_HPHR:
-		hd2_scale_reg = CDC_RX_RX1_RX_PATH_SEC3;
-		hd2_enable_reg = CDC_RX_RX1_RX_PATH_CFG0;
+		hd2_scale_reg = CDC_RX_RXn_RX_PATH_SEC3(rx, 1);
+		hd2_enable_reg = CDC_RX_RXn_RX_PATH_CFG0(rx, 1);
 		break;
 	}
···
 	if (interp_idx == INTERP_HPHL) {
 		if (rx->is_ear_mode_on)
 			snd_soc_component_write_field(component,
-						      CDC_RX_RX0_RX_PATH_CFG1,
+						      CDC_RX_RXn_RX_PATH_CFG1(rx, 0),
 						      CDC_RX_RX0_HPH_L_EAR_SEL_MASK, 0x1);
 		else
 			snd_soc_component_write_field(component,
···
 
 	if (hph_lut_bypass_reg && SND_SOC_DAPM_EVENT_OFF(event)) {
 		snd_soc_component_write_field(component,
-					      CDC_RX_RX0_RX_PATH_CFG1,
+					      CDC_RX_RXn_RX_PATH_CFG1(rx, 0),
 					      CDC_RX_RX0_HPH_L_EAR_SEL_MASK, 0x0);
 		snd_soc_component_update_bits(component, hph_lut_bypass_reg,
 					      CDC_RX_TOP_HPH_LUT_BYPASS_MASK, 0);
···
 	u16 main_reg, dsm_reg, rx_cfg2_reg;
 	struct rx_macro *rx = snd_soc_component_get_drvdata(component);
 
-	main_reg = CDC_RX_RXn_RX_PATH_CTL(interp_idx);
-	dsm_reg = CDC_RX_RXn_RX_PATH_DSM_CTL(interp_idx);
+	main_reg = CDC_RX_RXn_RX_PATH_CTL(rx, interp_idx);
+	dsm_reg = CDC_RX_RXn_RX_PATH_DSM_CTL(rx, interp_idx);
 	if (interp_idx == INTERP_AUX)
-		dsm_reg = CDC_RX_RX2_RX_PATH_DSM_CTL;
-	rx_cfg2_reg = CDC_RX_RXn_RX_PATH_CFG2(interp_idx);
+		dsm_reg = CDC_RX_RXn_RX_PATH_DSM_CTL(rx, 2);
+
+	rx_cfg2_reg = CDC_RX_RXn_RX_PATH_CFG2(rx, interp_idx);
 
 	if (SND_SOC_DAPM_EVENT_ON(event)) {
 		if (rx->main_clk_users[interp_idx] == 0) {
···
 				     struct snd_kcontrol *kcontrol, int event)
 {
 	struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
+	struct rx_macro *rx = snd_soc_component_get_drvdata(component);
 	u16 gain_reg, mix_reg;
 
-	gain_reg = CDC_RX_RXn_RX_VOL_MIX_CTL(w->shift);
-	mix_reg = CDC_RX_RXn_RX_PATH_MIX_CTL(w->shift);
+	gain_reg = CDC_RX_RXn_RX_VOL_MIX_CTL(rx, w->shift);
+	mix_reg = CDC_RX_RXn_RX_PATH_MIX_CTL(rx, w->shift);
 
 	switch (event) {
 	case SND_SOC_DAPM_PRE_PMU:
···
 					  struct snd_kcontrol *kcontrol, int event)
 {
 	struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
+	struct rx_macro *rx = snd_soc_component_get_drvdata(component);
 
 	switch (event) {
 	case SND_SOC_DAPM_PRE_PMU:
 		rx_macro_enable_interp_clk(component, event, w->shift);
-		snd_soc_component_write_field(component, CDC_RX_RXn_RX_PATH_CFG1(w->shift),
+		snd_soc_component_write_field(component, CDC_RX_RXn_RX_PATH_CFG1(rx, w->shift),
 					      CDC_RX_RXn_SIDETONE_EN_MASK, 1);
-		snd_soc_component_write_field(component, CDC_RX_RXn_RX_PATH_CTL(w->shift),
+		snd_soc_component_write_field(component, CDC_RX_RXn_RX_PATH_CTL(rx, w->shift),
 					      CDC_RX_PATH_CLK_EN_MASK, 1);
 		break;
 	case SND_SOC_DAPM_POST_PMD:
-		snd_soc_component_write_field(component, CDC_RX_RXn_RX_PATH_CFG1(w->shift),
+		snd_soc_component_write_field(component, CDC_RX_RXn_RX_PATH_CFG1(rx, w->shift),
 					      CDC_RX_RXn_SIDETONE_EN_MASK, 0);
 		rx_macro_enable_interp_clk(component, event, w->shift);
 		break;
···
 	return 0;
 }
 
-static const struct snd_kcontrol_new rx_macro_snd_controls[] = {
-	SOC_SINGLE_S8_TLV("RX_RX0 Digital Volume", CDC_RX_RX0_RX_VOL_CTL,
-			  -84, 40, digital_gain),
+static const struct snd_kcontrol_new rx_macro_def_snd_controls[] = {
 	SOC_SINGLE_S8_TLV("RX_RX1 Digital Volume", CDC_RX_RX1_RX_VOL_CTL,
 			  -84, 40, digital_gain),
 	SOC_SINGLE_S8_TLV("RX_RX2 Digital Volume", CDC_RX_RX2_RX_VOL_CTL,
-			  -84, 40, digital_gain),
-	SOC_SINGLE_S8_TLV("RX_RX0 Mix Digital Volume", CDC_RX_RX0_RX_VOL_MIX_CTL,
 			  -84, 40, digital_gain),
 	SOC_SINGLE_S8_TLV("RX_RX1 Mix Digital Volume", CDC_RX_RX1_RX_VOL_MIX_CTL,
 			  -84, 40, digital_gain),
 	SOC_SINGLE_S8_TLV("RX_RX2 Mix Digital Volume", CDC_RX_RX2_RX_VOL_MIX_CTL,
 			  -84, 40, digital_gain),
+};
 
+static const struct snd_kcontrol_new rx_macro_2_5_snd_controls[] = {
+
+	SOC_SINGLE_S8_TLV("RX_RX1 Digital Volume", CDC_2_5_RX_RX1_RX_VOL_CTL,
+			  -84, 40, digital_gain),
+	SOC_SINGLE_S8_TLV("RX_RX2 Digital Volume", CDC_2_5_RX_RX2_RX_VOL_CTL,
+			  -84, 40, digital_gain),
+	SOC_SINGLE_S8_TLV("RX_RX1 Mix Digital Volume", CDC_2_5_RX_RX1_RX_VOL_MIX_CTL,
+			  -84, 40, digital_gain),
+	SOC_SINGLE_S8_TLV("RX_RX2 Mix Digital Volume", CDC_2_5_RX_RX2_RX_VOL_MIX_CTL,
+			  -84, 40, digital_gain),
+};
+
+static const struct snd_kcontrol_new rx_macro_snd_controls[] = {
+	SOC_SINGLE_S8_TLV("RX_RX0 Digital Volume", CDC_RX_RX0_RX_VOL_CTL,
+			  -84, 40, digital_gain),
+	SOC_SINGLE_S8_TLV("RX_RX0 Mix Digital Volume", CDC_RX_RX0_RX_VOL_MIX_CTL,
+			  -84, 40, digital_gain),
 	SOC_SINGLE_EXT("RX_COMP1 Switch", SND_SOC_NOPM, RX_MACRO_COMP1, 1, 0,
 		       rx_macro_get_compander, rx_macro_set_compander),
 	SOC_SINGLE_EXT("RX_COMP2 Switch", SND_SOC_NOPM, RX_MACRO_COMP2, 1, 0,
···
 	return 0;
 }
 
+static const struct snd_soc_dapm_widget rx_macro_2_5_dapm_widgets[] = {
+	SND_SOC_DAPM_MUX("RX INT1 DEM MUX", SND_SOC_NOPM, 0, 0,
+			 &rx_2_5_int1_dem_inp_mux),
+};
+
+static const struct snd_soc_dapm_widget rx_macro_def_dapm_widgets[] = {
+	SND_SOC_DAPM_MUX("RX INT1 DEM MUX", SND_SOC_NOPM, 0, 0,
+			 &rx_int1_dem_inp_mux),
+};
+
 static const struct snd_soc_dapm_widget rx_macro_dapm_widgets[] = {
 	SND_SOC_DAPM_AIF_IN("RX AIF1 PB", "RX_MACRO_AIF1 Playback", 0,
 			    SND_SOC_NOPM, 0, 0),
···
 
 	SND_SOC_DAPM_MUX("RX INT0 DEM MUX", SND_SOC_NOPM, 0, 0,
 			 &rx_int0_dem_inp_mux),
-	SND_SOC_DAPM_MUX("RX INT1 DEM MUX", SND_SOC_NOPM, 0, 0,
-			 &rx_int1_dem_inp_mux),
 
 	SND_SOC_DAPM_MUX_E("RX INT0_2 MUX", SND_SOC_NOPM, INTERP_HPHL, 0,
 			   &rx_int0_2_mux, rx_macro_enable_mix_path,
···
 
 static int rx_macro_component_probe(struct snd_soc_component *component)
 {
+	struct snd_soc_dapm_context *dapm = snd_soc_component_get_dapm(component);
 	struct rx_macro *rx = snd_soc_component_get_drvdata(component);
+	const struct snd_soc_dapm_widget *widgets;
+	const struct snd_kcontrol_new *controls;
+	unsigned int num_controls;
+	int ret, num_widgets;
 
 	snd_soc_component_init_regmap(component, rx->regmap);
 
-	snd_soc_component_update_bits(component, CDC_RX_RX0_RX_PATH_SEC7,
+	snd_soc_component_update_bits(component, CDC_RX_RXn_RX_PATH_SEC7(rx, 0),
 				      CDC_RX_DSM_OUT_DELAY_SEL_MASK,
 				      CDC_RX_DSM_OUT_DELAY_TWO_SAMPLE);
-	snd_soc_component_update_bits(component, CDC_RX_RX1_RX_PATH_SEC7,
+	snd_soc_component_update_bits(component, CDC_RX_RXn_RX_PATH_SEC7(rx, 1),
 				      CDC_RX_DSM_OUT_DELAY_SEL_MASK,
 				      CDC_RX_DSM_OUT_DELAY_TWO_SAMPLE);
-	snd_soc_component_update_bits(component, CDC_RX_RX2_RX_PATH_SEC7,
+	snd_soc_component_update_bits(component, CDC_RX_RXn_RX_PATH_SEC7(rx, 2),
 				      CDC_RX_DSM_OUT_DELAY_SEL_MASK,
 				      CDC_RX_DSM_OUT_DELAY_TWO_SAMPLE);
-	snd_soc_component_update_bits(component, CDC_RX_RX0_RX_PATH_CFG3,
+	snd_soc_component_update_bits(component, CDC_RX_RXn_RX_PATH_CFG3(rx, 0),
 				      CDC_RX_DC_COEFF_SEL_MASK,
 				      CDC_RX_DC_COEFF_SEL_TWO);
-	snd_soc_component_update_bits(component, CDC_RX_RX1_RX_PATH_CFG3,
+	snd_soc_component_update_bits(component, CDC_RX_RXn_RX_PATH_CFG3(rx, 1),
 				      CDC_RX_DC_COEFF_SEL_MASK,
 				      CDC_RX_DC_COEFF_SEL_TWO);
-	snd_soc_component_update_bits(component, CDC_RX_RX2_RX_PATH_CFG3,
+	snd_soc_component_update_bits(component, CDC_RX_RXn_RX_PATH_CFG3(rx, 2),
 				      CDC_RX_DC_COEFF_SEL_MASK,
 				      CDC_RX_DC_COEFF_SEL_TWO);
+
+	switch (rx->codec_version) {
+	case LPASS_CODEC_VERSION_1_0:
+	case LPASS_CODEC_VERSION_1_1:
+	case LPASS_CODEC_VERSION_1_2:
+	case LPASS_CODEC_VERSION_2_0:
+		controls = rx_macro_def_snd_controls;
+		num_controls = ARRAY_SIZE(rx_macro_def_snd_controls);
+		widgets = rx_macro_def_dapm_widgets;
+		num_widgets = ARRAY_SIZE(rx_macro_def_dapm_widgets);
+		break;
+	case LPASS_CODEC_VERSION_2_5:
+	case LPASS_CODEC_VERSION_2_6:
+	case LPASS_CODEC_VERSION_2_7:
+	case LPASS_CODEC_VERSION_2_8:
+		controls = rx_macro_2_5_snd_controls;
+		num_controls = ARRAY_SIZE(rx_macro_2_5_snd_controls);
+		widgets = rx_macro_2_5_dapm_widgets;
+		num_widgets = ARRAY_SIZE(rx_macro_2_5_dapm_widgets);
+		break;
+	default:
+		return -EINVAL;
+	}
 
 	rx->component = component;
 
-	return 0;
+	ret = snd_soc_add_component_controls(component, controls, num_controls);
+	if (ret)
+		return ret;
+
+	return snd_soc_dapm_new_controls(dapm, widgets, num_widgets);
 }
 
 static int swclk_gate_enable(struct clk_hw *hw)
···
 
 static int rx_macro_probe(struct platform_device *pdev)
 {
+	struct reg_default *reg_defaults;
 	struct device *dev = &pdev->dev;
 	kernel_ulong_t flags;
 	struct rx_macro *rx;
 	void __iomem *base;
-	int ret;
+	int ret, def_count;
 
 	flags = (kernel_ulong_t)device_get_match_data(dev);
 
···
 		ret = PTR_ERR(base);
 		goto err;
 	}
+	rx->codec_version = lpass_macro_get_codec_version();
+	switch (rx->codec_version) {
+	case LPASS_CODEC_VERSION_1_0:
+	case LPASS_CODEC_VERSION_1_1:
+	case LPASS_CODEC_VERSION_1_2:
+	case LPASS_CODEC_VERSION_2_0:
+		rx->rxn_reg_stride = 0x80;
+		def_count = ARRAY_SIZE(rx_defaults) + ARRAY_SIZE(rx_pre_2_5_defaults);
+		reg_defaults = kmalloc_array(def_count, sizeof(struct reg_default), GFP_KERNEL);
+		if (!reg_defaults) {
+			ret = -ENOMEM;
+			goto err;
+		}
+		memcpy(&reg_defaults[0], rx_defaults, sizeof(rx_defaults));
+		memcpy(&reg_defaults[ARRAY_SIZE(rx_defaults)],
+		       rx_pre_2_5_defaults, sizeof(rx_pre_2_5_defaults));
+		break;
+	case LPASS_CODEC_VERSION_2_5:
+	case LPASS_CODEC_VERSION_2_6:
+	case LPASS_CODEC_VERSION_2_7:
+	case LPASS_CODEC_VERSION_2_8:
+		rx->rxn_reg_stride = 0xc0;
+		def_count = ARRAY_SIZE(rx_defaults) + ARRAY_SIZE(rx_2_5_defaults);
+		reg_defaults = kmalloc_array(def_count, sizeof(struct reg_default), GFP_KERNEL);
+		if (!reg_defaults) {
+			ret = -ENOMEM;
+			goto err;
+		}
+		memcpy(&reg_defaults[0], rx_defaults, sizeof(rx_defaults));
+		memcpy(&reg_defaults[ARRAY_SIZE(rx_defaults)],
+		       rx_2_5_defaults, sizeof(rx_2_5_defaults));
+		break;
+	default:
+		dev_err(rx->dev, "Unsupported Codec version (%d)\n", rx->codec_version);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	rx_regmap_config.reg_defaults = reg_defaults;
+	rx_regmap_config.num_reg_defaults = def_count;
 
 	rx->regmap = devm_regmap_init_mmio(dev, base, &rx_regmap_config);
 	if (IS_ERR(rx->regmap)) {
 		ret = PTR_ERR(rx->regmap);
-		goto err;
+		goto err_ver;
 	}
 
 	dev_set_drvdata(dev, rx);
···
 
 	ret = clk_prepare_enable(rx->macro);
 	if (ret)
-		goto err;
+		goto err_ver;
 
 	ret = clk_prepare_enable(rx->dcodec);
 	if (ret)
···
 	if (ret)
 		goto err_clkout;
 
+	kfree(reg_defaults);
 	return 0;
 
 err_clkout:
···
 	clk_disable_unprepare(rx->dcodec);
 err_dcodec:
 	clk_disable_unprepare(rx->macro);
+err_ver:
+	kfree(reg_defaults);
 err:
 	lpass_macro_pds_exit(rx->pds);
+28
sound/soc/codecs/lpass-va-macro.c
···
 	return dmic_sample_rate;
 }
 
+static void va_macro_set_lpass_codec_version(struct va_macro *va)
+{
+	int core_id_0 = 0, core_id_1 = 0, core_id_2 = 0, version;
+
+	regmap_read(va->regmap, CDC_VA_TOP_CSR_CORE_ID_0, &core_id_0);
+	regmap_read(va->regmap, CDC_VA_TOP_CSR_CORE_ID_1, &core_id_1);
+	regmap_read(va->regmap, CDC_VA_TOP_CSR_CORE_ID_2, &core_id_2);
+
+	if ((core_id_0 == 0x01) && (core_id_1 == 0x0F))
+		version = LPASS_CODEC_VERSION_2_0;
+	if ((core_id_0 == 0x02) && (core_id_1 == 0x0E))
+		version = LPASS_CODEC_VERSION_2_1;
+	if ((core_id_0 == 0x02) && (core_id_1 == 0x0F) && (core_id_2 == 0x50 || core_id_2 == 0x51))
+		version = LPASS_CODEC_VERSION_2_5;
+	if ((core_id_0 == 0x02) && (core_id_1 == 0x0F) && (core_id_2 == 0x60 || core_id_2 == 0x61))
+		version = LPASS_CODEC_VERSION_2_6;
+	if ((core_id_0 == 0x02) && (core_id_1 == 0x0F) && (core_id_2 == 0x70 || core_id_2 == 0x71))
+		version = LPASS_CODEC_VERSION_2_7;
+	if ((core_id_0 == 0x02) && (core_id_1 == 0x0F) && (core_id_2 == 0x80 || core_id_2 == 0x81))
+		version = LPASS_CODEC_VERSION_2_8;
+
+	lpass_macro_set_codec_version(version);
+
+	dev_dbg(va->dev, "LPASS Codec Version %s\n", lpass_macro_get_codec_version_string(version));
+}
+
 static int va_macro_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
···
 		if (ret)
 			goto err_npl;
 	}
+
+	va_macro_set_lpass_codec_version(va);
 
 	if (va->has_swr_master) {
 		/* Set default CLK div to 1 */
+6
tools/arch/arm64/include/asm/cputype.h
···
 #define ARM_CPU_PART_CORTEX_X2		0xD48
 #define ARM_CPU_PART_NEOVERSE_N2	0xD49
 #define ARM_CPU_PART_CORTEX_A78C	0xD4B
+#define ARM_CPU_PART_NEOVERSE_V2	0xD4F
+#define ARM_CPU_PART_CORTEX_X4		0xD82
+#define ARM_CPU_PART_NEOVERSE_V3	0xD84
 
 #define APM_CPU_PART_XGENE		0x000
 #define APM_CPU_VAR_POTENZA		0x00
···
 #define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2)
 #define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
 #define MIDR_CORTEX_A78C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78C)
+#define MIDR_NEOVERSE_V2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V2)
+#define MIDR_CORTEX_X4 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X4)
+#define MIDR_NEOVERSE_V3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V3)
 #define MIDR_THUNDERX	MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
 #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+4 -5
tools/arch/x86/include/asm/msr-index.h
··· 170 170 * CPU is not affected by Branch 171 171 * History Injection. 172 172 */ 173 + #define ARCH_CAP_XAPIC_DISABLE BIT(21) /* 174 + * IA32_XAPIC_DISABLE_STATUS MSR 175 + * supported 176 + */ 173 177 #define ARCH_CAP_PBRSB_NO BIT(24) /* 174 178 * Not susceptible to Post-Barrier 175 179 * Return Stack Buffer Predictions. ··· 194 190 #define ARCH_CAP_RFDS_CLEAR BIT(28) /* 195 191 * VERW clears CPU Register 196 192 * File. 197 - */ 198 - 199 - #define ARCH_CAP_XAPIC_DISABLE BIT(21) /* 200 - * IA32_XAPIC_DISABLE_STATUS MSR 201 - * supported 202 193 */ 203 194 204 195 #define MSR_IA32_FLUSH_CMD 0x0000010b
+20 -2
tools/arch/x86/include/uapi/asm/kvm.h
··· 457 457 458 458 #define KVM_STATE_VMX_PREEMPTION_TIMER_DEADLINE 0x00000001 459 459 460 - /* attributes for system fd (group 0) */ 461 - #define KVM_X86_XCOMP_GUEST_SUPP 0 460 + /* vendor-independent attributes for system fd (group 0) */ 461 + #define KVM_X86_GRP_SYSTEM 0 462 + # define KVM_X86_XCOMP_GUEST_SUPP 0 463 + 464 + /* vendor-specific groups and attributes for system fd */ 465 + #define KVM_X86_GRP_SEV 1 466 + # define KVM_X86_SEV_VMSA_FEATURES 0 462 467 463 468 struct kvm_vmx_nested_state_data { 464 469 __u8 vmcs12[KVM_STATE_NESTED_VMX_VMCS_SIZE]; ··· 694 689 /* Guest Migration Extension */ 695 690 KVM_SEV_SEND_CANCEL, 696 691 692 + /* Second time is the charm; improved versions of the above ioctls. */ 693 + KVM_SEV_INIT2, 694 + 697 695 KVM_SEV_NR_MAX, 698 696 }; 699 697 ··· 706 698 __u64 data; 707 699 __u32 error; 708 700 __u32 sev_fd; 701 + }; 702 + 703 + struct kvm_sev_init { 704 + __u64 vmsa_features; 705 + __u32 flags; 706 + __u16 ghcb_version; 707 + __u16 pad1; 708 + __u32 pad2[8]; 709 709 }; 710 710 711 711 struct kvm_sev_launch_start { ··· 872 856 873 857 #define KVM_X86_DEFAULT_VM 0 874 858 #define KVM_X86_SW_PROTECTED_VM 1 859 + #define KVM_X86_SEV_VM 2 860 + #define KVM_X86_SEV_ES_VM 3 875 861 876 862 #endif /* _ASM_X86_KVM_H */
+4 -1
tools/include/uapi/asm-generic/unistd.h
··· 842 842 #define __NR_lsm_list_modules 461 843 843 __SYSCALL(__NR_lsm_list_modules, sys_lsm_list_modules) 844 844 845 + #define __NR_mseal 462 846 + __SYSCALL(__NR_mseal, sys_mseal) 847 + 845 848 #undef __NR_syscalls 846 - #define __NR_syscalls 462 849 + #define __NR_syscalls 463 847 850 848 851 /* 849 852 * 32 bit systems traditionally used different
+28 -3
tools/include/uapi/drm/i915_drm.h
··· 806 806 */ 807 807 #define I915_PARAM_PXP_STATUS 58 808 808 809 + /* 810 + * Query if kernel allows marking a context to send a Freq hint to SLPC. This 811 + * will enable use of the strategies allowed by the SLPC algorithm. 812 + */ 813 + #define I915_PARAM_HAS_CONTEXT_FREQ_HINT 59 814 + 809 815 /* Must be kept compact -- no holes and well documented */ 810 816 811 817 /** ··· 2154 2148 * -EIO: The firmware did not succeed in creating the protected context. 2155 2149 */ 2156 2150 #define I915_CONTEXT_PARAM_PROTECTED_CONTENT 0xd 2151 + 2152 + /* 2153 + * I915_CONTEXT_PARAM_LOW_LATENCY: 2154 + * 2155 + * Mark this context as a low latency workload which requires aggressive GT 2156 + * frequency scaling. Use I915_PARAM_HAS_CONTEXT_FREQ_HINT to check if the kernel 2157 + * supports this per context flag. 2158 + */ 2159 + #define I915_CONTEXT_PARAM_LOW_LATENCY 0xe 2157 2160 /* Must be kept compact -- no holes and well documented */ 2158 2161 2159 2162 /** @value: Context parameter value to be set or queried */ ··· 2638 2623 * 2639 2624 */ 2640 2625 2626 + /* 2627 + * struct drm_i915_reset_stats - Return global reset and other context stats 2628 + * 2629 + * Driver keeps few stats for each contexts and also global reset count. 2630 + * This struct can be used to query those stats. 
2631 + */ 2641 2632 struct drm_i915_reset_stats { 2633 + /** @ctx_id: ID of the requested context */ 2642 2634 __u32 ctx_id; 2635 + 2636 + /** @flags: MBZ */ 2643 2637 __u32 flags; 2644 2638 2645 - /* All resets since boot/module reload, for all contexts */ 2639 + /** @reset_count: All resets since boot/module reload, for all contexts */ 2646 2640 __u32 reset_count; 2647 2641 2648 - /* Number of batches lost when active in GPU, for this context */ 2642 + /** @batch_active: Number of batches lost when active in GPU, for this context */ 2649 2643 __u32 batch_active; 2650 2644 2651 - /* Number of batches lost pending for execution, for this context */ 2645 + /** @batch_pending: Number of batches lost pending for execution, for this context */ 2652 2646 __u32 batch_pending; 2653 2647 2648 + /** @pad: MBZ */ 2654 2649 __u32 pad; 2655 2650 }; 2656 2651
+2 -2
tools/include/uapi/linux/kvm.h
··· 1221 1221 /* Available with KVM_CAP_SPAPR_RESIZE_HPT */ 1222 1222 #define KVM_PPC_RESIZE_HPT_PREPARE _IOR(KVMIO, 0xad, struct kvm_ppc_resize_hpt) 1223 1223 #define KVM_PPC_RESIZE_HPT_COMMIT _IOR(KVMIO, 0xae, struct kvm_ppc_resize_hpt) 1224 - /* Available with KVM_CAP_PPC_RADIX_MMU or KVM_CAP_PPC_MMU_HASH_V3 */ 1224 + /* Available with KVM_CAP_PPC_MMU_RADIX or KVM_CAP_PPC_MMU_HASH_V3 */ 1225 1225 #define KVM_PPC_CONFIGURE_V3_MMU _IOW(KVMIO, 0xaf, struct kvm_ppc_mmuv3_cfg) 1226 - /* Available with KVM_CAP_PPC_RADIX_MMU */ 1226 + /* Available with KVM_CAP_PPC_MMU_RADIX */ 1227 1227 #define KVM_PPC_GET_RMMU_INFO _IOW(KVMIO, 0xb0, struct kvm_ppc_rmmu_info) 1228 1228 /* Available with KVM_CAP_PPC_GET_CPU_CHAR */ 1229 1229 #define KVM_PPC_GET_CPU_CHAR _IOR(KVMIO, 0xb1, struct kvm_ppc_cpu_char)
+3 -1
tools/include/uapi/linux/stat.h
··· 126 126 __u64 stx_mnt_id; 127 127 __u32 stx_dio_mem_align; /* Memory buffer alignment for direct I/O */ 128 128 __u32 stx_dio_offset_align; /* File offset alignment for direct I/O */ 129 + __u64 stx_subvol; /* Subvolume identifier */ 129 130 /* 0xa0 */ 130 - __u64 __spare3[12]; /* Spare space for future expansion */ 131 + __u64 __spare3[11]; /* Spare space for future expansion */ 131 132 /* 0x100 */ 132 133 }; 133 134 ··· 156 155 #define STATX_MNT_ID 0x00001000U /* Got stx_mnt_id */ 157 156 #define STATX_DIOALIGN 0x00002000U /* Want/got direct I/O alignment info */ 158 157 #define STATX_MNT_ID_UNIQUE 0x00004000U /* Want/got extended stx_mount_id */ 158 + #define STATX_SUBVOL 0x00008000U /* Want/got stx_subvol */ 159 159 160 160 #define STATX__RESERVED 0x80000000U /* Reserved for future struct statx expansion */ 161 161
+2 -1
tools/lib/bpf/features.c
··· 393 393 err = -errno; /* close() can clobber errno */ 394 394 395 395 if (link_fd >= 0 || err != -EBADF) { 396 - close(link_fd); 396 + if (link_fd >= 0) 397 + close(link_fd); 397 398 close(prog_fd); 398 399 return 0; 399 400 }
+1
tools/perf/Makefile.perf
··· 214 214 215 215 ifdef MAKECMDGOALS 216 216 ifeq ($(filter-out $(NON_CONFIG_TARGETS),$(MAKECMDGOALS)),) 217 + VMLINUX_H=$(src-perf)/util/bpf_skel/vmlinux/vmlinux.h 217 218 config := 0 218 219 endif 219 220 endif
+1
tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl
··· 376 376 459 n64 lsm_get_self_attr sys_lsm_get_self_attr 377 377 460 n64 lsm_set_self_attr sys_lsm_set_self_attr 378 378 461 n64 lsm_list_modules sys_lsm_list_modules 379 + 462 n64 mseal sys_mseal
+1
tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
··· 548 548 459 common lsm_get_self_attr sys_lsm_get_self_attr 549 549 460 common lsm_set_self_attr sys_lsm_set_self_attr 550 550 461 common lsm_list_modules sys_lsm_list_modules 551 + 462 common mseal sys_mseal
+1
tools/perf/arch/s390/entry/syscalls/syscall.tbl
··· 464 464 459 common lsm_get_self_attr sys_lsm_get_self_attr sys_lsm_get_self_attr 465 465 460 common lsm_set_self_attr sys_lsm_set_self_attr sys_lsm_set_self_attr 466 466 461 common lsm_list_modules sys_lsm_list_modules sys_lsm_list_modules 467 + 462 common mseal sys_mseal sys_mseal
+2 -1
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
··· 374 374 450 common set_mempolicy_home_node sys_set_mempolicy_home_node 375 375 451 common cachestat sys_cachestat 376 376 452 common fchmodat2 sys_fchmodat2 377 - 453 64 map_shadow_stack sys_map_shadow_stack 377 + 453 common map_shadow_stack sys_map_shadow_stack 378 378 454 common futex_wake sys_futex_wake 379 379 455 common futex_wait sys_futex_wait 380 380 456 common futex_requeue sys_futex_requeue ··· 383 383 459 common lsm_get_self_attr sys_lsm_get_self_attr 384 384 460 common lsm_set_self_attr sys_lsm_set_self_attr 385 385 461 common lsm_list_modules sys_lsm_list_modules 386 + 462 common mseal sys_mseal 386 387 387 388 # 388 389 # Due to a historical design error, certain syscalls are numbered differently
+2 -4
tools/perf/builtin-record.c
··· 1956 1956 1957 1957 if (count.lost) { 1958 1958 if (!lost) { 1959 - lost = zalloc(sizeof(*lost) + 1960 - session->machines.host.id_hdr_size); 1959 + lost = zalloc(PERF_SAMPLE_MAX_SIZE); 1961 1960 if (!lost) { 1962 1961 pr_debug("Memory allocation failed\n"); 1963 1962 return; ··· 1972 1973 lost_count = perf_bpf_filter__lost_count(evsel); 1973 1974 if (lost_count) { 1974 1975 if (!lost) { 1975 - lost = zalloc(sizeof(*lost) + 1976 - session->machines.host.id_hdr_size); 1976 + lost = zalloc(PERF_SAMPLE_MAX_SIZE); 1977 1977 if (!lost) { 1978 1978 pr_debug("Memory allocation failed\n"); 1979 1979 return;
+1 -1
tools/perf/builtin-trace.c
··· 765 765 static DEFINE_STRARRAY(fcntl_cmds, "F_"); 766 766 767 767 static const char *fcntl_linux_specific_cmds[] = { 768 - "SETLEASE", "GETLEASE", "NOTIFY", [5] = "CANCELLK", "DUPFD_CLOEXEC", 768 + "SETLEASE", "GETLEASE", "NOTIFY", "DUPFD_QUERY", [5] = "CANCELLK", "DUPFD_CLOEXEC", 769 769 "SETPIPE_SZ", "GETPIPE_SZ", "ADD_SEALS", "GET_SEALS", 770 770 "GET_RW_HINT", "SET_RW_HINT", "GET_FILE_RW_HINT", "SET_FILE_RW_HINT", 771 771 };
+7 -1
tools/perf/trace/beauty/arch/x86/include/asm/irq_vectors.h
··· 97 97 98 98 #define LOCAL_TIMER_VECTOR 0xec 99 99 100 + /* 101 + * Posted interrupt notification vector for all device MSIs delivered to 102 + * the host kernel. 103 + */ 104 + #define POSTED_MSI_NOTIFICATION_VECTOR 0xeb 105 + 100 106 #define NR_VECTORS 256 101 107 102 108 #ifdef CONFIG_X86_LOCAL_APIC 103 - #define FIRST_SYSTEM_VECTOR LOCAL_TIMER_VECTOR 109 + #define FIRST_SYSTEM_VECTOR POSTED_MSI_NOTIFICATION_VECTOR 104 110 #else 105 111 #define FIRST_SYSTEM_VECTOR NR_VECTORS 106 112 #endif
+2 -1
tools/perf/trace/beauty/include/linux/socket.h
··· 16 16 struct socket; 17 17 struct sock; 18 18 struct sk_buff; 19 + struct proto_accept_arg; 19 20 20 21 #define __sockaddr_check_size(size) \ 21 22 BUILD_BUG_ON(((size) > sizeof(struct __kernel_sockaddr_storage))) ··· 434 433 extern int __sys_sendto(int fd, void __user *buff, size_t len, 435 434 unsigned int flags, struct sockaddr __user *addr, 436 435 int addr_len); 437 - extern struct file *do_accept(struct file *file, unsigned file_flags, 436 + extern struct file *do_accept(struct file *file, struct proto_accept_arg *arg, 438 437 struct sockaddr __user *upeer_sockaddr, 439 438 int __user *upeer_addrlen, int flags); 440 439 extern int __sys_accept4(int fd, struct sockaddr __user *upeer_sockaddr,
+8 -6
tools/perf/trace/beauty/include/uapi/linux/fcntl.h
··· 9 9 #define F_GETLEASE (F_LINUX_SPECIFIC_BASE + 1) 10 10 11 11 /* 12 + * Request nofications on a directory. 13 + * See below for events that may be notified. 14 + */ 15 + #define F_NOTIFY (F_LINUX_SPECIFIC_BASE + 2) 16 + 17 + #define F_DUPFD_QUERY (F_LINUX_SPECIFIC_BASE + 3) 18 + 19 + /* 12 20 * Cancel a blocking posix lock; internal use only until we expose an 13 21 * asynchronous lock api to userspace: 14 22 */ ··· 24 16 25 17 /* Create a file descriptor with FD_CLOEXEC set. */ 26 18 #define F_DUPFD_CLOEXEC (F_LINUX_SPECIFIC_BASE + 6) 27 - 28 - /* 29 - * Request nofications on a directory. 30 - * See below for events that may be notified. 31 - */ 32 - #define F_NOTIFY (F_LINUX_SPECIFIC_BASE+2) 33 19 34 20 /* 35 21 * Set and get of pipe page size array
+22
tools/perf/trace/beauty/include/uapi/linux/prctl.h
··· 306 306 # define PR_RISCV_V_VSTATE_CTRL_NEXT_MASK 0xc 307 307 # define PR_RISCV_V_VSTATE_CTRL_MASK 0x1f 308 308 309 + #define PR_RISCV_SET_ICACHE_FLUSH_CTX 71 310 + # define PR_RISCV_CTX_SW_FENCEI_ON 0 311 + # define PR_RISCV_CTX_SW_FENCEI_OFF 1 312 + # define PR_RISCV_SCOPE_PER_PROCESS 0 313 + # define PR_RISCV_SCOPE_PER_THREAD 1 314 + 315 + /* PowerPC Dynamic Execution Control Register (DEXCR) controls */ 316 + #define PR_PPC_GET_DEXCR 72 317 + #define PR_PPC_SET_DEXCR 73 318 + /* DEXCR aspect to act on */ 319 + # define PR_PPC_DEXCR_SBHE 0 /* Speculative branch hint enable */ 320 + # define PR_PPC_DEXCR_IBRTPD 1 /* Indirect branch recurrent target prediction disable */ 321 + # define PR_PPC_DEXCR_SRAPD 2 /* Subroutine return address prediction disable */ 322 + # define PR_PPC_DEXCR_NPHIE 3 /* Non-privileged hash instruction enable */ 323 + /* Action to apply / return */ 324 + # define PR_PPC_DEXCR_CTRL_EDITABLE 0x1 /* Aspect can be modified with PR_PPC_SET_DEXCR */ 325 + # define PR_PPC_DEXCR_CTRL_SET 0x2 /* Set the aspect for this process */ 326 + # define PR_PPC_DEXCR_CTRL_CLEAR 0x4 /* Clear the aspect for this process */ 327 + # define PR_PPC_DEXCR_CTRL_SET_ONEXEC 0x8 /* Set the aspect on exec */ 328 + # define PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC 0x10 /* Clear the aspect on exec */ 329 + # define PR_PPC_DEXCR_CTRL_MASK 0x1f 330 + 309 331 #endif /* _LINUX_PRCTL_H */
+3 -1
tools/perf/trace/beauty/include/uapi/linux/stat.h
··· 126 126 __u64 stx_mnt_id; 127 127 __u32 stx_dio_mem_align; /* Memory buffer alignment for direct I/O */ 128 128 __u32 stx_dio_offset_align; /* File offset alignment for direct I/O */ 129 + __u64 stx_subvol; /* Subvolume identifier */ 129 130 /* 0xa0 */ 130 - __u64 __spare3[12]; /* Spare space for future expansion */ 131 + __u64 __spare3[11]; /* Spare space for future expansion */ 131 132 /* 0x100 */ 132 133 }; 133 134 ··· 156 155 #define STATX_MNT_ID 0x00001000U /* Got stx_mnt_id */ 157 156 #define STATX_DIOALIGN 0x00002000U /* Want/got direct I/O alignment info */ 158 157 #define STATX_MNT_ID_UNIQUE 0x00004000U /* Want/got extended stx_mount_id */ 158 + #define STATX_SUBVOL 0x00008000U /* Want/got stx_subvol */ 159 159 160 160 #define STATX__RESERVED 0x80000000U /* Reserved for future struct statx expansion */ 161 161
+23 -3
tools/power/cpupower/utils/helpers/amd.c
··· 41 41 unsigned res1:31; 42 42 unsigned en:1; 43 43 } pstatedef; 44 + /* since fam 1Ah: */ 45 + struct { 46 + unsigned fid:12; 47 + unsigned res1:2; 48 + unsigned vid:8; 49 + unsigned iddval:8; 50 + unsigned idddiv:2; 51 + unsigned res2:31; 52 + unsigned en:1; 53 + } pstatedef2; 44 54 unsigned long long val; 45 55 }; 46 56 47 57 static int get_did(union core_pstate pstate) 48 58 { 49 59 int t; 60 + 61 + /* Fam 1Ah onward do not use did */ 62 + if (cpupower_cpu_info.family >= 0x1A) 63 + return 0; 50 64 51 65 if (cpupower_cpu_info.caps & CPUPOWER_CAP_AMD_PSTATEDEF) 52 66 t = pstate.pstatedef.did; ··· 75 61 static int get_cof(union core_pstate pstate) 76 62 { 77 63 int t; 78 - int fid, did, cof; 64 + int fid, did, cof = 0; 79 65 80 66 did = get_did(pstate); 81 67 if (cpupower_cpu_info.caps & CPUPOWER_CAP_AMD_PSTATEDEF) { 82 - fid = pstate.pstatedef.fid; 83 - cof = 200 * fid / did; 68 + if (cpupower_cpu_info.family >= 0x1A) { 69 + fid = pstate.pstatedef2.fid; 70 + if (fid > 0x0f) 71 + cof = (fid * 5); 72 + } else { 73 + fid = pstate.pstatedef.fid; 74 + cof = 200 * fid / did; 75 + } 84 76 } else { 85 77 t = 0x10; 86 78 fid = pstate.pstate.fid;
+1
tools/testing/cxl/test/mem.c
··· 3 3 4 4 #include <linux/platform_device.h> 5 5 #include <linux/mod_devicetable.h> 6 + #include <linux/vmalloc.h> 6 7 #include <linux/module.h> 7 8 #include <linux/delay.h> 8 9 #include <linux/sizes.h>
+1 -1
tools/testing/selftests/alsa/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 # 3 3 4 - CFLAGS += $(shell pkg-config --cflags alsa) 4 + CFLAGS += $(shell pkg-config --cflags alsa) $(KHDR_INCLUDES) 5 5 LDLIBS += $(shell pkg-config --libs alsa) 6 6 ifeq ($(LDLIBS),) 7 7 LDLIBS += -lasound
+1 -1
tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c
··· 84 84 } 85 85 86 86 SEC("fexit/inet_csk_accept") 87 - int BPF_PROG(inet_csk_accept, struct sock *sk, int flags, int *err, bool kern, 87 + int BPF_PROG(inet_csk_accept, struct sock *sk, struct proto_accept_arg *arg, 88 88 struct sock *accepted_sk) 89 89 { 90 90 set_task_info(accepted_sk);
+1
tools/testing/selftests/cachestat/test_cachestat.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #define _GNU_SOURCE 3 + #define __SANE_USERSPACE_TYPES__ // Use ll64 3 4 4 5 #include <stdio.h> 5 6 #include <stdbool.h>
+1
tools/testing/selftests/filesystems/overlayfs/dev_in_maps.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #define _GNU_SOURCE 3 + #define __SANE_USERSPACE_TYPES__ // Use ll64 3 4 4 5 #include <inttypes.h> 5 6 #include <unistd.h>
+19 -7
tools/testing/selftests/ftrace/config
··· 1 - CONFIG_KPROBES=y 1 + CONFIG_BPF_SYSCALL=y 2 + CONFIG_DEBUG_INFO_BTF=y 3 + CONFIG_DEBUG_INFO_DWARF4=y 4 + CONFIG_EPROBE_EVENTS=y 5 + CONFIG_FPROBE=y 6 + CONFIG_FPROBE_EVENTS=y 2 7 CONFIG_FTRACE=y 8 + CONFIG_FTRACE_SYSCALLS=y 9 + CONFIG_FUNCTION_GRAPH_RETVAL=y 3 10 CONFIG_FUNCTION_PROFILER=y 4 - CONFIG_TRACER_SNAPSHOT=y 5 - CONFIG_STACK_TRACER=y 6 11 CONFIG_HIST_TRIGGERS=y 7 - CONFIG_SCHED_TRACER=y 8 - CONFIG_PREEMPT_TRACER=y 9 12 CONFIG_IRQSOFF_TRACER=y 10 - CONFIG_PREEMPTIRQ_DELAY_TEST=m 13 + CONFIG_KALLSYMS_ALL=y 14 + CONFIG_KPROBES=y 15 + CONFIG_KPROBE_EVENTS=y 11 16 CONFIG_MODULES=y 12 17 CONFIG_MODULE_UNLOAD=y 18 + CONFIG_PREEMPTIRQ_DELAY_TEST=m 19 + CONFIG_PREEMPT_TRACER=y 20 + CONFIG_PROBE_EVENTS_BTF_ARGS=y 13 21 CONFIG_SAMPLES=y 14 22 CONFIG_SAMPLE_FTRACE_DIRECT=m 15 23 CONFIG_SAMPLE_TRACE_PRINTK=m 16 - CONFIG_KALLSYMS_ALL=y 24 + CONFIG_SCHED_TRACER=y 25 + CONFIG_STACK_TRACER=y 26 + CONFIG_TRACER_SNAPSHOT=y 27 + CONFIG_UPROBES=y 28 + CONFIG_UPROBE_EVENTS=y
+1 -1
tools/testing/selftests/ftrace/test.d/dynevent/test_duplicates.tc
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # description: Generic dynamic event - check if duplicate events are caught 4 - # requires: dynamic_events "e[:[<group>/][<event>]] <attached-group>.<attached-event> [<args>]":README 4 + # requires: dynamic_events "e[:[<group>/][<event>]] <attached-group>.<attached-event> [<args>]":README events/syscalls/sys_enter_openat 5 5 6 6 echo 0 > events/enable 7 7
+19 -1
tools/testing/selftests/ftrace/test.d/filter/event-filter-function.tc
··· 10 10 } 11 11 12 12 sample_events() { 13 - echo > trace 14 13 echo 1 > events/kmem/kmem_cache_free/enable 15 14 echo 1 > tracing_on 16 15 ls > /dev/null ··· 21 22 echo 0 > events/enable 22 23 23 24 echo "Get the most frequently calling function" 25 + echo > trace 24 26 sample_events 25 27 26 28 target_func=`cat trace | grep -o 'call_site=\([^+]*\)' | sed 's/call_site=//' | sort | uniq -c | sort | tail -n 1 | sed 's/^[ 0-9]*//'` ··· 32 32 33 33 echo "Test event filter function name" 34 34 echo "call_site.function == $target_func" > events/kmem/kmem_cache_free/filter 35 + 35 36 sample_events 37 + max_retry=10 38 + while [ `grep kmem_cache_free trace| wc -l` -eq 0 ]; do 39 + sample_events 40 + max_retry=$((max_retry - 1)) 41 + if [ $max_retry -eq 0 ]; then 42 + exit_fail 43 + fi 44 + done 36 45 37 46 hitcnt=`grep kmem_cache_free trace| grep $target_func | wc -l` 38 47 misscnt=`grep kmem_cache_free trace| grep -v $target_func | wc -l` ··· 58 49 59 50 echo "Test event filter function address" 60 51 echo "call_site.function == 0x$address" > events/kmem/kmem_cache_free/filter 52 + echo > trace 61 53 sample_events 54 + max_retry=10 55 + while [ `grep kmem_cache_free trace| wc -l` -eq 0 ]; do 56 + sample_events 57 + max_retry=$((max_retry - 1)) 58 + if [ $max_retry -eq 0 ]; then 59 + exit_fail 60 + fi 61 + done 62 62 63 63 hitcnt=`grep kmem_cache_free trace| grep $target_func | wc -l` 64 64 misscnt=`grep kmem_cache_free trace| grep -v $target_func | wc -l`
+2 -1
tools/testing/selftests/ftrace/test.d/kprobe/kprobe_eventname.tc
··· 30 30 fi 31 31 32 32 grep " [tT] .*\.isra\..*" /proc/kallsyms | cut -f 3 -d " " | while read f; do 33 - if grep -s $f available_filter_functions; then 33 + cnt=`grep -s $f available_filter_functions | wc -l`; 34 + if [ $cnt -eq 1 ]; then 34 35 echo $f 35 36 break 36 37 fi
-2
tools/testing/selftests/futex/Makefile
··· 3 3 4 4 TEST_PROGS := run.sh 5 5 6 - .PHONY: all clean 7 - 8 6 include ../lib.mk 9 7 10 8 all:
+1 -1
tools/testing/selftests/futex/functional/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 INCLUDES := -I../include -I../../ $(KHDR_INCLUDES) 3 - CFLAGS := $(CFLAGS) -g -O2 -Wall -D_GNU_SOURCE -pthread $(INCLUDES) $(KHDR_INCLUDES) 3 + CFLAGS := $(CFLAGS) -g -O2 -Wall -D_GNU_SOURCE= -pthread $(INCLUDES) $(KHDR_INCLUDES) 4 4 LDLIBS := -lpthread -lrt 5 5 6 6 LOCAL_HDRS := \
+1 -1
tools/testing/selftests/futex/functional/futex_requeue_pi.c
··· 360 360 361 361 int main(int argc, char *argv[]) 362 362 { 363 - const char *test_name; 363 + char *test_name; 364 364 int c, ret; 365 365 366 366 while ((c = getopt(argc, argv, "bchlot:v:")) != -1) {
+1
tools/testing/selftests/kvm/Makefile
··· 183 183 TEST_GEN_PROGS_s390x += s390x/tprot 184 184 TEST_GEN_PROGS_s390x += s390x/cmma_test 185 185 TEST_GEN_PROGS_s390x += s390x/debug_test 186 + TEST_GEN_PROGS_s390x += s390x/shared_zeropage_test 186 187 TEST_GEN_PROGS_s390x += demand_paging_test 187 188 TEST_GEN_PROGS_s390x += dirty_log_test 188 189 TEST_GEN_PROGS_s390x += guest_print_test
+111
tools/testing/selftests/kvm/s390x/shared_zeropage_test.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * Test shared zeropage handling (with/without storage keys) 4 + * 5 + * Copyright (C) 2024, Red Hat, Inc. 6 + */ 7 + #include <sys/mman.h> 8 + 9 + #include <linux/fs.h> 10 + 11 + #include "test_util.h" 12 + #include "kvm_util.h" 13 + #include "kselftest.h" 14 + #include "ucall_common.h" 15 + 16 + static void set_storage_key(void *addr, uint8_t skey) 17 + { 18 + asm volatile("sske %0,%1" : : "d" (skey), "a" (addr)); 19 + } 20 + 21 + static void guest_code(void) 22 + { 23 + /* Issue some storage key instruction. */ 24 + set_storage_key((void *)0, 0x98); 25 + GUEST_DONE(); 26 + } 27 + 28 + /* 29 + * Returns 1 if the shared zeropage is mapped, 0 if something else is mapped. 30 + * Returns < 0 on error or if nothing is mapped. 31 + */ 32 + static int maps_shared_zeropage(int pagemap_fd, void *addr) 33 + { 34 + struct page_region region; 35 + struct pm_scan_arg arg = { 36 + .start = (uintptr_t)addr, 37 + .end = (uintptr_t)addr + 4096, 38 + .vec = (uintptr_t)&region, 39 + .vec_len = 1, 40 + .size = sizeof(struct pm_scan_arg), 41 + .category_mask = PAGE_IS_PFNZERO, 42 + .category_anyof_mask = PAGE_IS_PRESENT, 43 + .return_mask = PAGE_IS_PFNZERO, 44 + }; 45 + return ioctl(pagemap_fd, PAGEMAP_SCAN, &arg); 46 + } 47 + 48 + int main(int argc, char *argv[]) 49 + { 50 + char *mem, *page0, *page1, *page2, tmp; 51 + const size_t pagesize = getpagesize(); 52 + struct kvm_vcpu *vcpu; 53 + struct kvm_vm *vm; 54 + struct ucall uc; 55 + int pagemap_fd; 56 + 57 + ksft_print_header(); 58 + ksft_set_plan(3); 59 + 60 + /* 61 + * We'll use memory that is not mapped into the VM for simplicity. 62 + * Shared zeropages are enabled/disabled per-process. 63 + */ 64 + mem = mmap(0, 3 * pagesize, PROT_READ, MAP_PRIVATE | MAP_ANON, -1, 0); 65 + TEST_ASSERT(mem != MAP_FAILED, "mmap() failed"); 66 + 67 + /* Disable THP. Ignore errors on older kernels. 
*/ 68 + madvise(mem, 3 * pagesize, MADV_NOHUGEPAGE); 69 + 70 + page0 = mem; 71 + page1 = page0 + pagesize; 72 + page2 = page1 + pagesize; 73 + 74 + /* Can we even detect shared zeropages? */ 75 + pagemap_fd = open("/proc/self/pagemap", O_RDONLY); 76 + TEST_REQUIRE(pagemap_fd >= 0); 77 + 78 + tmp = *page0; 79 + asm volatile("" : "+r" (tmp)); 80 + TEST_REQUIRE(maps_shared_zeropage(pagemap_fd, page0) == 1); 81 + 82 + vm = vm_create_with_one_vcpu(&vcpu, guest_code); 83 + 84 + /* Verify that we get the shared zeropage after VM creation. */ 85 + tmp = *page1; 86 + asm volatile("" : "+r" (tmp)); 87 + ksft_test_result(maps_shared_zeropage(pagemap_fd, page1) == 1, 88 + "Shared zeropages should be enabled\n"); 89 + 90 + /* 91 + * Let our VM execute a storage key instruction that should 92 + * unshare all shared zeropages. 93 + */ 94 + vcpu_run(vcpu); 95 + get_ucall(vcpu, &uc); 96 + TEST_ASSERT_EQ(uc.cmd, UCALL_DONE); 97 + 98 + /* Verify that we don't have a shared zeropage anymore. */ 99 + ksft_test_result(!maps_shared_zeropage(pagemap_fd, page1), 100 + "Shared zeropage should be gone\n"); 101 + 102 + /* Verify that we don't get any new shared zeropages. */ 103 + tmp = *page2; 104 + asm volatile("" : "+r" (tmp)); 105 + ksft_test_result(!maps_shared_zeropage(pagemap_fd, page2), 106 + "Shared zeropages should be disabled\n"); 107 + 108 + kvm_vm_free(vm); 109 + 110 + ksft_finished(); 111 + }
+1
tools/testing/selftests/net/hsr/config
··· 2 2 CONFIG_NET_SCH_NETEM=m 3 3 CONFIG_HSR=y 4 4 CONFIG_VETH=y 5 + CONFIG_BRIDGE=y
+9 -9
tools/testing/selftests/net/lib.sh
··· 15 15 ksft_skip=4 16 16 17 17 # namespace list created by setup_ns 18 - NS_LIST="" 18 + NS_LIST=() 19 19 20 20 ############################################################################## 21 21 # Helpers ··· 27 27 local -A weights 28 28 local weight=0 29 29 30 + local i 30 31 for i in "$@"; do 31 32 weights[$i]=$((weight++)) 32 33 done ··· 68 67 while true 69 68 do 70 69 local out 71 - out=$("$@") 72 - local ret=$? 73 - if ((!ret)); then 70 + if out=$("$@"); then 74 71 echo -n "$out" 75 72 return 0 76 73 fi ··· 138 139 fi 139 140 140 141 for ns in "$@"; do 142 + [ -z "${ns}" ] && continue 141 143 ip netns delete "${ns}" &> /dev/null 142 144 if ! busywait $BUSYWAIT_TIMEOUT ip netns list \| grep -vq "^$ns$" &> /dev/null; then 143 145 echo "Warn: Failed to remove namespace $ns" ··· 152 152 153 153 cleanup_all_ns() 154 154 { 155 - cleanup_ns $NS_LIST 155 + cleanup_ns "${NS_LIST[@]}" 156 156 } 157 157 158 158 # setup netns with given names as prefix. e.g ··· 161 161 { 162 162 local ns="" 163 163 local ns_name="" 164 - local ns_list="" 164 + local ns_list=() 165 165 local ns_exist= 166 166 for ns_name in "$@"; do 167 167 # Some test may setup/remove same netns multi times ··· 177 177 178 178 if ! ip netns add "$ns"; then 179 179 echo "Failed to create namespace $ns_name" 180 - cleanup_ns "$ns_list" 180 + cleanup_ns "${ns_list[@]}" 181 181 return $ksft_skip 182 182 fi 183 183 ip -n "$ns" link set lo up 184 - ! $ns_exist && ns_list="$ns_list $ns" 184 + ! $ns_exist && ns_list+=("$ns") 185 185 done 186 - NS_LIST="$NS_LIST $ns_list" 186 + NS_LIST+=("${ns_list[@]}") 187 187 } 188 188 189 189 tc_rule_stats_get()
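The lib.sh conversion of `NS_LIST` from a space-joined string to a bash array is worth a short sketch: arrays preserve element boundaries and skip empty entries cleanly, where the old string form re-split on IFS every time it was expanded. A minimal illustration of the adopted pattern (function body simplified from the real `setup_ns`):

```shell
#!/bin/bash
# Sketch of the array-based namespace bookkeeping adopted in lib.sh.
NS_LIST=()

setup_ns() {
	local ns
	for ns in "$@"; do
		NS_LIST+=("$ns")	# append one element per namespace
	done
}

setup_ns "client-1234" "server-1234"

# "${NS_LIST[@]}" expands to exactly one word per tracked namespace,
# which is why cleanup_all_ns now passes "${NS_LIST[@]}" instead of
# an unquoted $NS_LIST.
echo "${#NS_LIST[@]} namespaces tracked"
```

The matching `[ -z "${ns}" ] && continue` added to `cleanup_ns` covers the remaining edge case of an empty argument slipping through.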
+1
tools/testing/selftests/openat2/openat2_test.c
··· 5 5 */ 6 6 7 7 #define _GNU_SOURCE 8 + #define __SANE_USERSPACE_TYPES__ // Use ll64 8 9 #include <fcntl.h> 9 10 #include <sched.h> 10 11 #include <sys/stat.h>