Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.16-rc3).

No conflicts or adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+4455 -1495
+6 -3
.mailmap
··· 197 197 Daniel Borkmann <daniel@iogearbox.net> <dborkmann@redhat.com> 198 198 Daniel Borkmann <daniel@iogearbox.net> <dborkman@redhat.com> 199 199 Daniel Borkmann <daniel@iogearbox.net> <dxchgb@gmail.com> 200 + Danilo Krummrich <dakr@kernel.org> <dakr@redhat.com> 200 201 David Brownell <david-b@pacbell.net> 201 202 David Collins <quic_collinsd@quicinc.com> <collinsd@codeaurora.org> 202 203 David Heidelberg <david@ixit.cz> <d.okias@gmail.com> ··· 283 282 Gustavo Padovan <padovan@profusion.mobi> 284 283 Hamza Mahfooz <hamzamahfooz@linux.microsoft.com> <hamza.mahfooz@amd.com> 285 284 Hanjun Guo <guohanjun@huawei.com> <hanjun.guo@linaro.org> 285 + Hans de Goede <hansg@kernel.org> <hdegoede@redhat.com> 286 286 Hans Verkuil <hverkuil@xs4all.nl> <hansverk@cisco.com> 287 287 Hans Verkuil <hverkuil@xs4all.nl> <hverkuil-cisco@xs4all.nl> 288 288 Harry Yoo <harry.yoo@oracle.com> <42.hyeyoo@gmail.com> ··· 693 691 Serge Hallyn <sergeh@kernel.org> <serue@us.ibm.com> 694 692 Seth Forshee <sforshee@kernel.org> <seth.forshee@canonical.com> 695 693 Shakeel Butt <shakeel.butt@linux.dev> <shakeelb@google.com> 696 - Shannon Nelson <shannon.nelson@amd.com> <snelson@pensando.io> 697 - Shannon Nelson <shannon.nelson@amd.com> <shannon.nelson@intel.com> 698 - Shannon Nelson <shannon.nelson@amd.com> <shannon.nelson@oracle.com> 694 + Shannon Nelson <sln@onemain.com> <shannon.nelson@amd.com> 695 + Shannon Nelson <sln@onemain.com> <snelson@pensando.io> 696 + Shannon Nelson <sln@onemain.com> <shannon.nelson@intel.com> 697 + Shannon Nelson <sln@onemain.com> <shannon.nelson@oracle.com> 699 698 Sharath Chandra Vurukala <quic_sharathv@quicinc.com> <sharathv@codeaurora.org> 700 699 Shiraz Hashim <shiraz.linux.kernel@gmail.com> <shiraz.hashim@st.com> 701 700 Shuah Khan <shuah@kernel.org> <shuahkhan@gmail.com>
+2
Documentation/admin-guide/cifs/usage.rst
··· 270 270 illegal Windows/NTFS/SMB characters to a remap range (this mount parameter 271 271 is the default for SMB3). This remap (``mapposix``) range is also 272 272 compatible with Mac (and "Services for Mac" on some older Windows). 273 + When POSIX Extensions for SMB 3.1.1 are negotiated, remapping is automatically 274 + disabled. 273 275 274 276 CIFS VFS Mount Options 275 277 ======================
+77
Documentation/block/ublk.rst
··· 352 352 parameter of `struct ublk_param_segment` with backend for avoiding 353 353 unnecessary IO split, which usually hurts io_uring performance. 354 354 355 + Auto Buffer Registration 356 + ------------------------ 357 + 358 + The ``UBLK_F_AUTO_BUF_REG`` feature automatically handles buffer registration 359 + and unregistration for I/O requests, which simplifies the buffer management 360 + process and reduces overhead in the ublk server implementation. 361 + 362 + This is another feature flag for using zero copy, and it is compatible with 363 + ``UBLK_F_SUPPORT_ZERO_COPY``. 364 + 365 + Feature Overview 366 + ~~~~~~~~~~~~~~~~ 367 + 368 + This feature automatically registers request buffers to the io_uring context 369 + before delivering I/O commands to the ublk server and unregisters them when 370 + completing I/O commands. This eliminates the need for manual buffer 371 + registration/unregistration via ``UBLK_IO_REGISTER_IO_BUF`` and 372 + ``UBLK_IO_UNREGISTER_IO_BUF`` commands, so IO handling in the ublk server 373 + can avoid depending on these two uring_cmd operations. 374 + 375 + IOs can't be issued concurrently to io_uring if there is any dependency 376 + among these IOs. So this approach not only simplifies the ublk server 377 + implementation, but also makes concurrent IO handling possible by removing the 378 + dependency on buffer registration & unregistration commands. 379 + 380 + Usage Requirements 381 + ~~~~~~~~~~~~~~~~~~ 382 + 383 + 1. The ublk server must create a sparse buffer table on the same ``io_ring_ctx`` 384 + used for ``UBLK_IO_FETCH_REQ`` and ``UBLK_IO_COMMIT_AND_FETCH_REQ``. If 385 + uring_cmd is issued on a different ``io_ring_ctx``, manual buffer 386 + unregistration is required. 387 + 388 + 2. 
Buffer registration data must be passed via uring_cmd's ``sqe->addr`` with the 389 + following structure:: 390 + 391 + struct ublk_auto_buf_reg { 392 + __u16 index; /* Buffer index for registration */ 393 + __u8 flags; /* Registration flags */ 394 + __u8 reserved0; /* Reserved for future use */ 395 + __u32 reserved1; /* Reserved for future use */ 396 + }; 397 + 398 + ublk_auto_buf_reg_to_sqe_addr() converts the above structure into 399 + ``sqe->addr``. 400 + 401 + 3. All reserved fields in ``ublk_auto_buf_reg`` must be zeroed. 402 + 403 + 4. Optional flags can be passed via ``ublk_auto_buf_reg.flags``. 404 + 405 + Fallback Behavior 406 + ~~~~~~~~~~~~~~~~~ 407 + 408 + If auto buffer registration fails: 409 + 410 + 1. When ``UBLK_AUTO_BUF_REG_FALLBACK`` is enabled: 411 + 412 + - The uring_cmd is completed 413 + - ``UBLK_IO_F_NEED_REG_BUF`` is set in ``ublksrv_io_desc.op_flags`` 414 + - The ublk server must deal with the failure manually, for example by 415 + registering the buffer itself, or by using the user copy feature to 416 + retrieve the data for handling ublk IO 417 + 418 + 2. If fallback is not enabled: 419 + 420 + - The ublk I/O request fails silently 421 + - The uring_cmd won't be completed 422 + 423 + Limitations 424 + ~~~~~~~~~~~ 425 + 426 + - Requires same ``io_ring_ctx`` for all operations 427 + - May require manual buffer management in fallback cases 428 + - io_ring_ctx buffer table has a max size of 16K, which may not be enough 429 + when many ublk devices, each with a very large queue depth, are handled 430 + by a single io_ring_ctx 431 + 355 432 References 356 433 ========== 357 434
-65
Documentation/devicetree/bindings/pmem/pmem-region.txt
··· 1 - Device-tree bindings for persistent memory regions 2 - ----------------------------------------------------- 3 - 4 - Persistent memory refers to a class of memory devices that are: 5 - 6 - a) Usable as main system memory (i.e. cacheable), and 7 - b) Retain their contents across power failure. 8 - 9 - Given b) it is best to think of persistent memory as a kind of memory mapped 10 - storage device. To ensure data integrity the operating system needs to manage 11 - persistent regions separately to the normal memory pool. To aid with that this 12 - binding provides a standardised interface for discovering where persistent 13 - memory regions exist inside the physical address space. 14 - 15 - Bindings for the region nodes: 16 - ----------------------------- 17 - 18 - Required properties: 19 - - compatible = "pmem-region" 20 - 21 - - reg = <base, size>; 22 - The reg property should specify an address range that is 23 - translatable to a system physical address range. This address 24 - range should be mappable as normal system memory would be 25 - (i.e cacheable). 26 - 27 - If the reg property contains multiple address ranges 28 - each address range will be treated as though it was specified 29 - in a separate device node. Having multiple address ranges in a 30 - node implies no special relationship between the two ranges. 31 - 32 - Optional properties: 33 - - Any relevant NUMA associativity properties for the target platform. 34 - 35 - - volatile; This property indicates that this region is actually 36 - backed by non-persistent memory. This lets the OS know that it 37 - may skip the cache flushes required to ensure data is made 38 - persistent after a write. 39 - 40 - If this property is absent then the OS must assume that the region 41 - is backed by non-volatile memory. 42 - 43 - Examples: 44 - -------------------- 45 - 46 - /* 47 - * This node specifies one 4KB region spanning from 48 - * 0x5000 to 0x5fff that is backed by non-volatile memory. 
49 - */ 50 - pmem@5000 { 51 - compatible = "pmem-region"; 52 - reg = <0x00005000 0x00001000>; 53 - }; 54 - 55 - /* 56 - * This node specifies two 4KB regions that are backed by 57 - * volatile (normal) memory. 58 - */ 59 - pmem@6000 { 60 - compatible = "pmem-region"; 61 - reg = < 0x00006000 0x00001000 62 - 0x00008000 0x00001000 >; 63 - volatile; 64 - }; 65 -
+48
Documentation/devicetree/bindings/pmem/pmem-region.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pmem-region.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + maintainers: 8 + - Oliver O'Halloran <oohall@gmail.com> 9 + 10 + title: Persistent Memory Regions 11 + 12 + description: | 13 + Persistent memory refers to a class of memory devices that are: 14 + 15 + a) Usable as main system memory (i.e. cacheable), and 16 + b) Retain their contents across power failure. 17 + 18 + Given b) it is best to think of persistent memory as a kind of memory mapped 19 + storage device. To ensure data integrity the operating system needs to manage 20 + persistent regions separately to the normal memory pool. To aid with that this 21 + binding provides a standardised interface for discovering where persistent 22 + memory regions exist inside the physical address space. 23 + 24 + properties: 25 + compatible: 26 + const: pmem-region 27 + 28 + reg: 29 + maxItems: 1 30 + 31 + volatile: 32 + description: 33 + Indicates the region is volatile (non-persistent) and the OS can skip 34 + cache flushes for writes 35 + type: boolean 36 + 37 + required: 38 + - compatible 39 + - reg 40 + 41 + additionalProperties: false 42 + 43 + examples: 44 + - | 45 + pmem@5000 { 46 + compatible = "pmem-region"; 47 + reg = <0x00005000 0x00001000>; 48 + };
+3 -1
Documentation/filesystems/proc.rst
··· 584 584 ms may share 585 585 gd stack segment grows down 586 586 pf pure PFN range 587 - dw disabled write to the mapped file 588 587 lo pages are locked in memory 589 588 io memory mapped I/O area 590 589 sr sequential read advise provided ··· 606 607 mt arm64 MTE allocation tags are enabled 607 608 um userfaultfd missing tracking 608 609 uw userfaultfd wr-protect tracking 610 + ui userfaultfd minor fault 609 611 ss shadow/guarded control stack page 610 612 sl sealed 613 + lf lock on fault pages 614 + dp always lazily freeable mapping 611 615 == ======================================= 612 616 613 617 Note that there is no guarantee that every flag and associated mnemonic will
+3
Documentation/netlink/specs/ethtool.yaml
··· 7 7 doc: Partial family for Ethtool Netlink. 8 8 uapi-header: linux/ethtool_netlink_generated.h 9 9 10 + c-family-name: ethtool-genl-name 11 + c-version-name: ethtool-genl-version 12 + 10 13 definitions: 11 14 - 12 15 name: udp-tunnel-type
+1
Documentation/process/embargoed-hardware-issues.rst
··· 290 290 AMD Tom Lendacky <thomas.lendacky@amd.com> 291 291 Ampere Darren Hart <darren@os.amperecomputing.com> 292 292 ARM Catalin Marinas <catalin.marinas@arm.com> 293 + IBM Power Madhavan Srinivasan <maddy@linux.ibm.com> 293 294 IBM Z Christian Borntraeger <borntraeger@de.ibm.com> 294 295 Intel Tony Luck <tony.luck@intel.com> 295 296 Qualcomm Trilok Soni <quic_tsoni@quicinc.com>
+41 -42
MAINTAINERS
··· 207 207 X: include/uapi/ 208 208 209 209 ABIT UGURU 1,2 HARDWARE MONITOR DRIVER 210 - M: Hans de Goede <hdegoede@redhat.com> 210 + M: Hans de Goede <hansg@kernel.org> 211 211 L: linux-hwmon@vger.kernel.org 212 212 S: Maintained 213 213 F: drivers/hwmon/abituguru.c ··· 371 371 F: drivers/platform/x86/quickstart.c 372 372 373 373 ACPI SERIAL MULTI INSTANTIATE DRIVER 374 - M: Hans de Goede <hdegoede@redhat.com> 374 + M: Hans de Goede <hansg@kernel.org> 375 375 L: platform-driver-x86@vger.kernel.org 376 376 S: Maintained 377 377 F: drivers/platform/x86/serial-multi-instantiate.c ··· 1157 1157 F: arch/x86/kernel/amd_node.c 1158 1158 1159 1159 AMD PDS CORE DRIVER 1160 - M: Shannon Nelson <shannon.nelson@amd.com> 1161 1160 M: Brett Creeley <brett.creeley@amd.com> 1162 1161 L: netdev@vger.kernel.org 1163 1162 S: Maintained ··· 3550 3551 F: scripts/make_fit.py 3551 3552 3552 3553 ARM64 PLATFORM DRIVERS 3553 - M: Hans de Goede <hdegoede@redhat.com> 3554 + M: Hans de Goede <hansg@kernel.org> 3554 3555 M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> 3555 3556 R: Bryan O'Donoghue <bryan.odonoghue@linaro.org> 3556 3557 L: platform-driver-x86@vger.kernel.org ··· 3711 3712 F: drivers/platform/x86/eeepc*.c 3712 3713 3713 3714 ASUS TF103C DOCK DRIVER 3714 - M: Hans de Goede <hdegoede@redhat.com> 3715 + M: Hans de Goede <hansg@kernel.org> 3715 3716 L: platform-driver-x86@vger.kernel.org 3716 3717 S: Maintained 3717 3718 T: git git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86.git ··· 5613 5614 F: drivers/usb/chipidea/ 5614 5615 5615 5616 CHIPONE ICN8318 I2C TOUCHSCREEN DRIVER 5616 - M: Hans de Goede <hdegoede@redhat.com> 5617 + M: Hans de Goede <hansg@kernel.org> 5617 5618 L: linux-input@vger.kernel.org 5618 5619 S: Maintained 5619 5620 F: Documentation/devicetree/bindings/input/touchscreen/chipone,icn8318.yaml 5620 5621 F: drivers/input/touchscreen/chipone_icn8318.c 5621 5622 5622 5623 CHIPONE ICN8505 I2C TOUCHSCREEN DRIVER 5623 - M: Hans de Goede 
<hdegoede@redhat.com> 5624 + M: Hans de Goede <hansg@kernel.org> 5624 5625 L: linux-input@vger.kernel.org 5625 5626 S: Maintained 5626 5627 F: drivers/input/touchscreen/chipone_icn8505.c ··· 6254 6255 F: include/linux/smpboot.h 6255 6256 F: kernel/cpu.c 6256 6257 F: kernel/smpboot.* 6258 + F: rust/helper/cpu.c 6257 6259 F: rust/kernel/cpu.rs 6258 6260 6259 6261 CPU IDLE TIME MANAGEMENT FRAMEWORK ··· 6918 6918 F: include/linux/devfreq-event.h 6919 6919 6920 6920 DEVICE RESOURCE MANAGEMENT HELPERS 6921 - M: Hans de Goede <hdegoede@redhat.com> 6921 + M: Hans de Goede <hansg@kernel.org> 6922 6922 R: Matti Vaittinen <mazziesaccount@gmail.com> 6923 6923 S: Maintained 6924 6924 F: include/linux/devm-helpers.h ··· 7517 7517 F: include/drm/gud.h 7518 7518 7519 7519 DRM DRIVER FOR GRAIN MEDIA GM12U320 PROJECTORS 7520 - M: Hans de Goede <hdegoede@redhat.com> 7520 + M: Hans de Goede <hansg@kernel.org> 7521 7521 S: Maintained 7522 7522 T: git https://gitlab.freedesktop.org/drm/misc/kernel.git 7523 7523 F: drivers/gpu/drm/tiny/gm12u320.c ··· 7917 7917 F: drivers/gpu/drm/vkms/ 7918 7918 7919 7919 DRM DRIVER FOR VIRTUALBOX VIRTUAL GPU 7920 - M: Hans de Goede <hdegoede@redhat.com> 7920 + M: Hans de Goede <hansg@kernel.org> 7921 7921 L: dri-devel@lists.freedesktop.org 7922 7922 S: Maintained 7923 7923 T: git https://gitlab.freedesktop.org/drm/misc/kernel.git ··· 8318 8318 F: include/drm/drm_panel.h 8319 8319 8320 8320 DRM PRIVACY-SCREEN CLASS 8321 - M: Hans de Goede <hdegoede@redhat.com> 8321 + M: Hans de Goede <hansg@kernel.org> 8322 8322 L: dri-devel@lists.freedesktop.org 8323 8323 S: Maintained 8324 8324 T: git https://gitlab.freedesktop.org/drm/misc/kernel.git ··· 9941 9941 9942 9942 FWCTL PDS DRIVER 9943 9943 M: Brett Creeley <brett.creeley@amd.com> 9944 - R: Shannon Nelson <shannon.nelson@amd.com> 9945 9944 L: linux-kernel@vger.kernel.org 9946 9945 S: Maintained 9947 9946 F: drivers/fwctl/pds/ ··· 10221 10222 F: 
Documentation/devicetree/bindings/connector/gocontroll,moduline-module-slot.yaml 10222 10223 10223 10224 GOODIX TOUCHSCREEN 10224 - M: Hans de Goede <hdegoede@redhat.com> 10225 + M: Hans de Goede <hansg@kernel.org> 10225 10226 L: linux-input@vger.kernel.org 10226 10227 S: Maintained 10227 10228 F: drivers/input/touchscreen/goodix* ··· 10260 10261 K: [gG]oogle.?[tT]ensor 10261 10262 10262 10263 GPD POCKET FAN DRIVER 10263 - M: Hans de Goede <hdegoede@redhat.com> 10264 + M: Hans de Goede <hansg@kernel.org> 10264 10265 L: platform-driver-x86@vger.kernel.org 10265 10266 S: Maintained 10266 10267 F: drivers/platform/x86/gpd-pocket-fan.c ··· 11421 11422 F: drivers/i2c/busses/i2c-viapro.c 11422 11423 11423 11424 I2C/SMBUS INTEL CHT WHISKEY COVE PMIC DRIVER 11424 - M: Hans de Goede <hdegoede@redhat.com> 11425 + M: Hans de Goede <hansg@kernel.org> 11425 11426 L: linux-i2c@vger.kernel.org 11426 11427 S: Maintained 11427 11428 F: drivers/i2c/busses/i2c-cht-wc.c ··· 12011 12012 F: sound/soc/intel/ 12012 12013 12013 12014 INTEL ATOMISP2 DUMMY / POWER-MANAGEMENT DRIVER 12014 - M: Hans de Goede <hdegoede@redhat.com> 12015 + M: Hans de Goede <hansg@kernel.org> 12015 12016 L: platform-driver-x86@vger.kernel.org 12016 12017 S: Maintained 12017 12018 F: drivers/platform/x86/intel/atomisp2/pm.c 12018 12019 12019 12020 INTEL ATOMISP2 LED DRIVER 12020 - M: Hans de Goede <hdegoede@redhat.com> 12021 + M: Hans de Goede <hansg@kernel.org> 12021 12022 L: platform-driver-x86@vger.kernel.org 12022 12023 S: Maintained 12023 12024 F: drivers/platform/x86/intel/atomisp2/led.c ··· 13678 13679 F: drivers/platform/x86/lenovo-wmi-hotkey-utilities.c 13679 13680 13680 13681 LETSKETCH HID TABLET DRIVER 13681 - M: Hans de Goede <hdegoede@redhat.com> 13682 + M: Hans de Goede <hansg@kernel.org> 13682 13683 L: linux-input@vger.kernel.org 13683 13684 S: Maintained 13684 13685 T: git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git ··· 13728 13729 F: drivers/ata/sata_gemini.h 13729 13730 13730 13731 
LIBATA SATA AHCI PLATFORM devices support 13731 - M: Hans de Goede <hdegoede@redhat.com> 13732 + M: Hans de Goede <hansg@kernel.org> 13732 13733 L: linux-ide@vger.kernel.org 13733 13734 S: Maintained 13734 13735 F: drivers/ata/ahci_platform.c ··· 13798 13799 L: nvdimm@lists.linux.dev 13799 13800 S: Supported 13800 13801 Q: https://patchwork.kernel.org/project/linux-nvdimm/list/ 13801 - F: Documentation/devicetree/bindings/pmem/pmem-region.txt 13802 + F: Documentation/devicetree/bindings/pmem/pmem-region.yaml 13802 13803 F: drivers/nvdimm/of_pmem.c 13803 13804 13804 13805 LIBNVDIMM: NON-VOLATILE MEMORY DEVICE SUBSYSTEM ··· 14098 14099 F: block/partitions/ldm.* 14099 14100 14100 14101 LOGITECH HID GAMING KEYBOARDS 14101 - M: Hans de Goede <hdegoede@redhat.com> 14102 + M: Hans de Goede <hansg@kernel.org> 14102 14103 L: linux-input@vger.kernel.org 14103 14104 S: Maintained 14104 14105 T: git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git ··· 14780 14781 F: drivers/power/supply/max17040_battery.c 14781 14782 14782 14783 MAXIM MAX17042 FAMILY FUEL GAUGE DRIVERS 14783 - R: Hans de Goede <hdegoede@redhat.com> 14784 + R: Hans de Goede <hansg@kernel.org> 14784 14785 R: Krzysztof Kozlowski <krzk@kernel.org> 14785 14786 R: Marek Szyprowski <m.szyprowski@samsung.com> 14786 14787 R: Sebastian Krzyszkowiak <sebastian.krzyszkowiak@puri.sm> ··· 15582 15583 F: drivers/net/ethernet/mellanox/mlxfw/ 15583 15584 15584 15585 MELLANOX HARDWARE PLATFORM SUPPORT 15585 - M: Hans de Goede <hdegoede@redhat.com> 15586 + M: Hans de Goede <hansg@kernel.org> 15586 15587 M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> 15587 15588 M: Vadim Pasternak <vadimp@nvidia.com> 15588 15589 L: platform-driver-x86@vger.kernel.org ··· 15919 15920 R: Nico Pache <npache@redhat.com> 15920 15921 R: Ryan Roberts <ryan.roberts@arm.com> 15921 15922 R: Dev Jain <dev.jain@arm.com> 15923 + R: Barry Song <baohua@kernel.org> 15922 15924 L: linux-mm@kvack.org 15923 15925 S: Maintained 15924 15926 W: 
http://www.linux-mm.org ··· 16539 16539 F: drivers/platform/surface/surface_gpe.c 16540 16540 16541 16541 MICROSOFT SURFACE HARDWARE PLATFORM SUPPORT 16542 - M: Hans de Goede <hdegoede@redhat.com> 16542 + M: Hans de Goede <hansg@kernel.org> 16543 16543 M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> 16544 16544 M: Maximilian Luz <luzmaximilian@gmail.com> 16545 16545 L: platform-driver-x86@vger.kernel.org ··· 17707 17707 F: tools/testing/selftests/nolibc/ 17708 17708 17709 17709 NOVATEK NVT-TS I2C TOUCHSCREEN DRIVER 17710 - M: Hans de Goede <hdegoede@redhat.com> 17710 + M: Hans de Goede <hansg@kernel.org> 17711 17711 L: linux-input@vger.kernel.org 17712 17712 S: Maintained 17713 17713 F: Documentation/devicetree/bindings/input/touchscreen/novatek,nvt-ts.yaml ··· 19377 19377 F: include/crypto/pcrypt.h 19378 19378 19379 19379 PDS DSC VIRTIO DATA PATH ACCELERATOR 19380 - R: Shannon Nelson <shannon.nelson@amd.com> 19380 + R: Brett Creeley <brett.creeley@amd.com> 19381 19381 F: drivers/vdpa/pds/ 19382 19382 19383 19383 PECI HARDWARE MONITORING DRIVERS ··· 19399 19399 F: include/linux/peci.h 19400 19400 19401 19401 PENSANDO ETHERNET DRIVERS 19402 - M: Shannon Nelson <shannon.nelson@amd.com> 19403 19402 M: Brett Creeley <brett.creeley@amd.com> 19404 19403 L: netdev@vger.kernel.org 19405 19404 S: Maintained ··· 22171 22172 R: David Vernet <void@manifault.com> 22172 22173 R: Andrea Righi <arighi@nvidia.com> 22173 22174 R: Changwoo Min <changwoo@igalia.com> 22174 - L: linux-kernel@vger.kernel.org 22175 + L: sched-ext@lists.linux.dev 22175 22176 S: Maintained 22176 22177 W: https://github.com/sched-ext/scx 22177 22178 T: git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext.git ··· 22708 22709 K: [^@]sifive 22709 22710 22710 22711 SILEAD TOUCHSCREEN DRIVER 22711 - M: Hans de Goede <hdegoede@redhat.com> 22712 + M: Hans de Goede <hansg@kernel.org> 22712 22713 L: linux-input@vger.kernel.org 22713 22714 L: platform-driver-x86@vger.kernel.org 22714 22715 S: Maintained ··· 
22741 22742 F: drivers/i3c/master/svc-i3c-master.c 22742 22743 22743 22744 SIMPLEFB FB DRIVER 22744 - M: Hans de Goede <hdegoede@redhat.com> 22745 + M: Hans de Goede <hansg@kernel.org> 22745 22746 L: linux-fbdev@vger.kernel.org 22746 22747 S: Maintained 22747 22748 F: Documentation/devicetree/bindings/display/simple-framebuffer.yaml ··· 22870 22871 F: drivers/hwmon/emc2103.c 22871 22872 22872 22873 SMSC SCH5627 HARDWARE MONITOR DRIVER 22873 - M: Hans de Goede <hdegoede@redhat.com> 22874 + M: Hans de Goede <hansg@kernel.org> 22874 22875 L: linux-hwmon@vger.kernel.org 22875 22876 S: Supported 22876 22877 F: Documentation/hwmon/sch5627.rst ··· 23525 23526 F: Documentation/process/stable-kernel-rules.rst 23526 23527 23527 23528 STAGING - ATOMISP DRIVER 23528 - M: Hans de Goede <hdegoede@redhat.com> 23529 + M: Hans de Goede <hansg@kernel.org> 23529 23530 M: Mauro Carvalho Chehab <mchehab@kernel.org> 23530 23531 R: Sakari Ailus <sakari.ailus@linux.intel.com> 23531 23532 L: linux-media@vger.kernel.org ··· 23821 23822 F: drivers/net/ethernet/i825xx/sun3* 23822 23823 23823 23824 SUN4I LOW RES ADC ATTACHED TABLET KEYS DRIVER 23824 - M: Hans de Goede <hdegoede@redhat.com> 23825 + M: Hans de Goede <hansg@kernel.org> 23825 23826 L: linux-input@vger.kernel.org 23826 23827 S: Maintained 23827 23828 F: Documentation/devicetree/bindings/input/allwinner,sun4i-a10-lradc-keys.yaml ··· 25589 25590 F: drivers/hid/usbhid/ 25590 25591 25591 25592 USB INTEL XHCI ROLE MUX DRIVER 25592 - M: Hans de Goede <hdegoede@redhat.com> 25593 + M: Hans de Goede <hansg@kernel.org> 25593 25594 L: linux-usb@vger.kernel.org 25594 25595 S: Maintained 25595 25596 F: drivers/usb/roles/intel-xhci-usb-role-switch.c ··· 25780 25781 F: drivers/usb/typec/mux/intel_pmc_mux.c 25781 25782 25782 25783 USB TYPEC PI3USB30532 MUX DRIVER 25783 - M: Hans de Goede <hdegoede@redhat.com> 25784 + M: Hans de Goede <hansg@kernel.org> 25784 25785 L: linux-usb@vger.kernel.org 25785 25786 S: Maintained 25786 25787 F: 
drivers/usb/typec/mux/pi3usb30532.c ··· 25809 25810 25810 25811 USB VIDEO CLASS 25811 25812 M: Laurent Pinchart <laurent.pinchart@ideasonboard.com> 25812 - M: Hans de Goede <hdegoede@redhat.com> 25813 + M: Hans de Goede <hansg@kernel.org> 25813 25814 L: linux-media@vger.kernel.org 25814 25815 S: Maintained 25815 25816 W: http://www.ideasonboard.org/uvc/ ··· 26340 26341 F: sound/virtio/* 26341 26342 26342 26343 VIRTUAL BOX GUEST DEVICE DRIVER 26343 - M: Hans de Goede <hdegoede@redhat.com> 26344 + M: Hans de Goede <hansg@kernel.org> 26344 26345 M: Arnd Bergmann <arnd@arndb.de> 26345 26346 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 26346 26347 S: Maintained ··· 26349 26350 F: include/uapi/linux/vbox*.h 26350 26351 26351 26352 VIRTUAL BOX SHARED FOLDER VFS DRIVER 26352 - M: Hans de Goede <hdegoede@redhat.com> 26353 + M: Hans de Goede <hansg@kernel.org> 26353 26354 L: linux-fsdevel@vger.kernel.org 26354 26355 S: Maintained 26355 26356 F: fs/vboxsf/* ··· 26604 26605 26605 26606 WACOM PROTOCOL 4 SERIAL TABLETS 26606 26607 M: Julian Squires <julian@cipht.net> 26607 - M: Hans de Goede <hdegoede@redhat.com> 26608 + M: Hans de Goede <hansg@kernel.org> 26608 26609 L: linux-input@vger.kernel.org 26609 26610 S: Maintained 26610 26611 F: drivers/input/tablet/wacom_serial4.c ··· 26771 26772 F: include/uapi/linux/wwan.h 26772 26773 26773 26774 X-POWERS AXP288 PMIC DRIVERS 26774 - M: Hans de Goede <hdegoede@redhat.com> 26775 + M: Hans de Goede <hansg@kernel.org> 26775 26776 S: Maintained 26776 26777 F: drivers/acpi/pmic/intel_pmic_xpower.c 26777 26778 N: axp288 ··· 26863 26864 F: arch/x86/mm/ 26864 26865 26865 26866 X86 PLATFORM ANDROID TABLETS DSDT FIXUP DRIVER 26866 - M: Hans de Goede <hdegoede@redhat.com> 26867 + M: Hans de Goede <hansg@kernel.org> 26867 26868 L: platform-driver-x86@vger.kernel.org 26868 26869 S: Maintained 26869 26870 T: git git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86.git 26870 26871 F: 
drivers/platform/x86/x86-android-tablets/ 26871 26872 26872 26873 X86 PLATFORM DRIVERS 26873 - M: Hans de Goede <hdegoede@redhat.com> 26874 + M: Hans de Goede <hansg@kernel.org> 26874 26875 M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> 26875 26876 L: platform-driver-x86@vger.kernel.org 26876 26877 S: Maintained
+1 -4
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 16 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc1 5 + EXTRAVERSION = -rc2 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION* ··· 1832 1832 # Misc 1833 1833 # --------------------------------------------------------------------------- 1834 1834 1835 - # Run misc checks when ${KBUILD_EXTRA_WARN} contains 1 1836 1835 PHONY += misc-check 1837 - ifneq ($(findstring 1,$(KBUILD_EXTRA_WARN)),) 1838 1836 misc-check: 1839 1837 $(Q)$(srctree)/scripts/misc-check 1840 - endif 1841 1838 1842 1839 all: misc-check 1843 1840
+28 -6
arch/arm64/include/asm/kvm_host.h
··· 1107 1107 #define ctxt_sys_reg(c,r) (*__ctxt_sys_reg(c,r)) 1108 1108 1109 1109 u64 kvm_vcpu_apply_reg_masks(const struct kvm_vcpu *, enum vcpu_sysreg, u64); 1110 - #define __vcpu_sys_reg(v,r) \ 1111 - (*({ \ 1110 + 1111 + #define __vcpu_assign_sys_reg(v, r, val) \ 1112 + do { \ 1112 1113 const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt; \ 1113 - u64 *__r = __ctxt_sys_reg(ctxt, (r)); \ 1114 + u64 __v = (val); \ 1114 1115 if (vcpu_has_nv((v)) && (r) >= __SANITISED_REG_START__) \ 1115 - *__r = kvm_vcpu_apply_reg_masks((v), (r), *__r);\ 1116 - __r; \ 1117 - })) 1116 + __v = kvm_vcpu_apply_reg_masks((v), (r), __v); \ 1117 + \ 1118 + ctxt_sys_reg(ctxt, (r)) = __v; \ 1119 + } while (0) 1120 + 1121 + #define __vcpu_rmw_sys_reg(v, r, op, val) \ 1122 + do { \ 1123 + const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt; \ 1124 + u64 __v = ctxt_sys_reg(ctxt, (r)); \ 1125 + __v op (val); \ 1126 + if (vcpu_has_nv((v)) && (r) >= __SANITISED_REG_START__) \ 1127 + __v = kvm_vcpu_apply_reg_masks((v), (r), __v); \ 1128 + \ 1129 + ctxt_sys_reg(ctxt, (r)) = __v; \ 1130 + } while (0) 1131 + 1132 + #define __vcpu_sys_reg(v,r) \ 1133 + ({ \ 1134 + const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt; \ 1135 + u64 __v = ctxt_sys_reg(ctxt, (r)); \ 1136 + if (vcpu_has_nv((v)) && (r) >= __SANITISED_REG_START__) \ 1137 + __v = kvm_vcpu_apply_reg_masks((v), (r), __v); \ 1138 + __v; \ 1139 + }) 1118 1140 1119 1141 u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg); 1120 1142 void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
+9 -9
arch/arm64/kvm/arch_timer.c
··· 108 108 109 109 switch(arch_timer_ctx_index(ctxt)) { 110 110 case TIMER_VTIMER: 111 - __vcpu_sys_reg(vcpu, CNTV_CTL_EL0) = ctl; 111 + __vcpu_assign_sys_reg(vcpu, CNTV_CTL_EL0, ctl); 112 112 break; 113 113 case TIMER_PTIMER: 114 - __vcpu_sys_reg(vcpu, CNTP_CTL_EL0) = ctl; 114 + __vcpu_assign_sys_reg(vcpu, CNTP_CTL_EL0, ctl); 115 115 break; 116 116 case TIMER_HVTIMER: 117 - __vcpu_sys_reg(vcpu, CNTHV_CTL_EL2) = ctl; 117 + __vcpu_assign_sys_reg(vcpu, CNTHV_CTL_EL2, ctl); 118 118 break; 119 119 case TIMER_HPTIMER: 120 - __vcpu_sys_reg(vcpu, CNTHP_CTL_EL2) = ctl; 120 + __vcpu_assign_sys_reg(vcpu, CNTHP_CTL_EL2, ctl); 121 121 break; 122 122 default: 123 123 WARN_ON(1); ··· 130 130 131 131 switch(arch_timer_ctx_index(ctxt)) { 132 132 case TIMER_VTIMER: 133 - __vcpu_sys_reg(vcpu, CNTV_CVAL_EL0) = cval; 133 + __vcpu_assign_sys_reg(vcpu, CNTV_CVAL_EL0, cval); 134 134 break; 135 135 case TIMER_PTIMER: 136 - __vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = cval; 136 + __vcpu_assign_sys_reg(vcpu, CNTP_CVAL_EL0, cval); 137 137 break; 138 138 case TIMER_HVTIMER: 139 - __vcpu_sys_reg(vcpu, CNTHV_CVAL_EL2) = cval; 139 + __vcpu_assign_sys_reg(vcpu, CNTHV_CVAL_EL2, cval); 140 140 break; 141 141 case TIMER_HPTIMER: 142 - __vcpu_sys_reg(vcpu, CNTHP_CVAL_EL2) = cval; 142 + __vcpu_assign_sys_reg(vcpu, CNTHP_CVAL_EL2, cval); 143 143 break; 144 144 default: 145 145 WARN_ON(1); ··· 1036 1036 if (vcpu_has_nv(vcpu)) { 1037 1037 struct arch_timer_offset *offs = &vcpu_vtimer(vcpu)->offset; 1038 1038 1039 - offs->vcpu_offset = &__vcpu_sys_reg(vcpu, CNTVOFF_EL2); 1039 + offs->vcpu_offset = __ctxt_sys_reg(&vcpu->arch.ctxt, CNTVOFF_EL2); 1040 1040 offs->vm_offset = &vcpu->kvm->arch.timer_data.poffset; 1041 1041 } 1042 1042
+2 -2
arch/arm64/kvm/debug.c
··· 216 216 void kvm_debug_handle_oslar(struct kvm_vcpu *vcpu, u64 val) 217 217 { 218 218 if (val & OSLAR_EL1_OSLK) 219 - __vcpu_sys_reg(vcpu, OSLSR_EL1) |= OSLSR_EL1_OSLK; 219 + __vcpu_rmw_sys_reg(vcpu, OSLSR_EL1, |=, OSLSR_EL1_OSLK); 220 220 else 221 - __vcpu_sys_reg(vcpu, OSLSR_EL1) &= ~OSLSR_EL1_OSLK; 221 + __vcpu_rmw_sys_reg(vcpu, OSLSR_EL1, &=, ~OSLSR_EL1_OSLK); 222 222 223 223 preempt_disable(); 224 224 kvm_arch_vcpu_put(vcpu);
+2 -2
arch/arm64/kvm/fpsimd.c
··· 103 103 fp_state.sve_state = vcpu->arch.sve_state; 104 104 fp_state.sve_vl = vcpu->arch.sve_max_vl; 105 105 fp_state.sme_state = NULL; 106 - fp_state.svcr = &__vcpu_sys_reg(vcpu, SVCR); 107 - fp_state.fpmr = &__vcpu_sys_reg(vcpu, FPMR); 106 + fp_state.svcr = __ctxt_sys_reg(&vcpu->arch.ctxt, SVCR); 107 + fp_state.fpmr = __ctxt_sys_reg(&vcpu->arch.ctxt, FPMR); 108 108 fp_state.fp_type = &vcpu->arch.fp_type; 109 109 110 110 if (vcpu_has_sve(vcpu))
+2 -2
arch/arm64/kvm/hyp/exception.c
··· 37 37 if (unlikely(vcpu_has_nv(vcpu))) 38 38 vcpu_write_sys_reg(vcpu, val, reg); 39 39 else if (!__vcpu_write_sys_reg_to_cpu(val, reg)) 40 - __vcpu_sys_reg(vcpu, reg) = val; 40 + __vcpu_assign_sys_reg(vcpu, reg, val); 41 41 } 42 42 43 43 static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, unsigned long target_mode, ··· 51 51 } else if (has_vhe()) { 52 52 write_sysreg_el1(val, SYS_SPSR); 53 53 } else { 54 - __vcpu_sys_reg(vcpu, SPSR_EL1) = val; 54 + __vcpu_assign_sys_reg(vcpu, SPSR_EL1, val); 55 55 } 56 56 } 57 57
+2 -2
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 45 45 if (!vcpu_el1_is_32bit(vcpu)) 46 46 return; 47 47 48 - __vcpu_sys_reg(vcpu, FPEXC32_EL2) = read_sysreg(fpexc32_el2); 48 + __vcpu_assign_sys_reg(vcpu, FPEXC32_EL2, read_sysreg(fpexc32_el2)); 49 49 } 50 50 51 51 static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) ··· 456 456 */ 457 457 if (vcpu_has_sve(vcpu)) { 458 458 zcr_el1 = read_sysreg_el1(SYS_ZCR); 459 - __vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)) = zcr_el1; 459 + __vcpu_assign_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu), zcr_el1); 460 460 461 461 /* 462 462 * The guest's state is always saved using the guest's max VL.
+3 -3
arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
··· 307 307 vcpu->arch.ctxt.spsr_irq = read_sysreg(spsr_irq); 308 308 vcpu->arch.ctxt.spsr_fiq = read_sysreg(spsr_fiq); 309 309 310 - __vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2); 311 - __vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2); 310 + __vcpu_assign_sys_reg(vcpu, DACR32_EL2, read_sysreg(dacr32_el2)); 311 + __vcpu_assign_sys_reg(vcpu, IFSR32_EL2, read_sysreg(ifsr32_el2)); 312 312 313 313 if (has_vhe() || kvm_debug_regs_in_use(vcpu)) 314 - __vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2); 314 + __vcpu_assign_sys_reg(vcpu, DBGVCR32_EL2, read_sysreg(dbgvcr32_el2)); 315 315 } 316 316 317 317 static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
+2 -2
arch/arm64/kvm/hyp/nvhe/hyp-main.c
··· 26 26 27 27 static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu) 28 28 { 29 - __vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR); 29 + __vcpu_assign_sys_reg(vcpu, ZCR_EL1, read_sysreg_el1(SYS_ZCR)); 30 30 /* 31 31 * On saving/restoring guest sve state, always use the maximum VL for 32 32 * the guest. The layout of the data when saving the sve state depends ··· 79 79 80 80 has_fpmr = kvm_has_fpmr(kern_hyp_va(vcpu->kvm)); 81 81 if (has_fpmr) 82 - __vcpu_sys_reg(vcpu, FPMR) = read_sysreg_s(SYS_FPMR); 82 + __vcpu_assign_sys_reg(vcpu, FPMR, read_sysreg_s(SYS_FPMR)); 83 83 84 84 if (system_supports_sve()) 85 85 __hyp_sve_restore_host();
+2 -2
arch/arm64/kvm/hyp/vhe/switch.c
··· 223 223 */ 224 224 val = read_sysreg_el0(SYS_CNTP_CVAL); 225 225 if (map.direct_ptimer == vcpu_ptimer(vcpu)) 226 - __vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = val; 226 + __vcpu_assign_sys_reg(vcpu, CNTP_CVAL_EL0, val); 227 227 if (map.direct_ptimer == vcpu_hptimer(vcpu)) 228 - __vcpu_sys_reg(vcpu, CNTHP_CVAL_EL2) = val; 228 + __vcpu_assign_sys_reg(vcpu, CNTHP_CVAL_EL2, val); 229 229 230 230 offset = read_sysreg_s(SYS_CNTPOFF_EL2); 231 231
+23 -23
arch/arm64/kvm/hyp/vhe/sysreg-sr.c
··· 18 18 static void __sysreg_save_vel2_state(struct kvm_vcpu *vcpu) 19 19 { 20 20 /* These registers are common with EL1 */ 21 - __vcpu_sys_reg(vcpu, PAR_EL1) = read_sysreg(par_el1); 22 - __vcpu_sys_reg(vcpu, TPIDR_EL1) = read_sysreg(tpidr_el1); 21 + __vcpu_assign_sys_reg(vcpu, PAR_EL1, read_sysreg(par_el1)); 22 + __vcpu_assign_sys_reg(vcpu, TPIDR_EL1, read_sysreg(tpidr_el1)); 23 23 24 - __vcpu_sys_reg(vcpu, ESR_EL2) = read_sysreg_el1(SYS_ESR); 25 - __vcpu_sys_reg(vcpu, AFSR0_EL2) = read_sysreg_el1(SYS_AFSR0); 26 - __vcpu_sys_reg(vcpu, AFSR1_EL2) = read_sysreg_el1(SYS_AFSR1); 27 - __vcpu_sys_reg(vcpu, FAR_EL2) = read_sysreg_el1(SYS_FAR); 28 - __vcpu_sys_reg(vcpu, MAIR_EL2) = read_sysreg_el1(SYS_MAIR); 29 - __vcpu_sys_reg(vcpu, VBAR_EL2) = read_sysreg_el1(SYS_VBAR); 30 - __vcpu_sys_reg(vcpu, CONTEXTIDR_EL2) = read_sysreg_el1(SYS_CONTEXTIDR); 31 - __vcpu_sys_reg(vcpu, AMAIR_EL2) = read_sysreg_el1(SYS_AMAIR); 24 + __vcpu_assign_sys_reg(vcpu, ESR_EL2, read_sysreg_el1(SYS_ESR)); 25 + __vcpu_assign_sys_reg(vcpu, AFSR0_EL2, read_sysreg_el1(SYS_AFSR0)); 26 + __vcpu_assign_sys_reg(vcpu, AFSR1_EL2, read_sysreg_el1(SYS_AFSR1)); 27 + __vcpu_assign_sys_reg(vcpu, FAR_EL2, read_sysreg_el1(SYS_FAR)); 28 + __vcpu_assign_sys_reg(vcpu, MAIR_EL2, read_sysreg_el1(SYS_MAIR)); 29 + __vcpu_assign_sys_reg(vcpu, VBAR_EL2, read_sysreg_el1(SYS_VBAR)); 30 + __vcpu_assign_sys_reg(vcpu, CONTEXTIDR_EL2, read_sysreg_el1(SYS_CONTEXTIDR)); 31 + __vcpu_assign_sys_reg(vcpu, AMAIR_EL2, read_sysreg_el1(SYS_AMAIR)); 32 32 33 33 /* 34 34 * In VHE mode those registers are compatible between EL1 and EL2, ··· 46 46 * are always trapped, ensuring that the in-memory 47 47 * copy is always up-to-date. A small blessing... 
48 48 */ 49 - __vcpu_sys_reg(vcpu, SCTLR_EL2) = read_sysreg_el1(SYS_SCTLR); 50 - __vcpu_sys_reg(vcpu, TTBR0_EL2) = read_sysreg_el1(SYS_TTBR0); 51 - __vcpu_sys_reg(vcpu, TTBR1_EL2) = read_sysreg_el1(SYS_TTBR1); 52 - __vcpu_sys_reg(vcpu, TCR_EL2) = read_sysreg_el1(SYS_TCR); 49 + __vcpu_assign_sys_reg(vcpu, SCTLR_EL2, read_sysreg_el1(SYS_SCTLR)); 50 + __vcpu_assign_sys_reg(vcpu, TTBR0_EL2, read_sysreg_el1(SYS_TTBR0)); 51 + __vcpu_assign_sys_reg(vcpu, TTBR1_EL2, read_sysreg_el1(SYS_TTBR1)); 52 + __vcpu_assign_sys_reg(vcpu, TCR_EL2, read_sysreg_el1(SYS_TCR)); 53 53 54 54 if (ctxt_has_tcrx(&vcpu->arch.ctxt)) { 55 - __vcpu_sys_reg(vcpu, TCR2_EL2) = read_sysreg_el1(SYS_TCR2); 55 + __vcpu_assign_sys_reg(vcpu, TCR2_EL2, read_sysreg_el1(SYS_TCR2)); 56 56 57 57 if (ctxt_has_s1pie(&vcpu->arch.ctxt)) { 58 - __vcpu_sys_reg(vcpu, PIRE0_EL2) = read_sysreg_el1(SYS_PIRE0); 59 - __vcpu_sys_reg(vcpu, PIR_EL2) = read_sysreg_el1(SYS_PIR); 58 + __vcpu_assign_sys_reg(vcpu, PIRE0_EL2, read_sysreg_el1(SYS_PIRE0)); 59 + __vcpu_assign_sys_reg(vcpu, PIR_EL2, read_sysreg_el1(SYS_PIR)); 60 60 } 61 61 62 62 if (ctxt_has_s1poe(&vcpu->arch.ctxt)) 63 - __vcpu_sys_reg(vcpu, POR_EL2) = read_sysreg_el1(SYS_POR); 63 + __vcpu_assign_sys_reg(vcpu, POR_EL2, read_sysreg_el1(SYS_POR)); 64 64 } 65 65 66 66 /* ··· 70 70 */ 71 71 val = read_sysreg_el1(SYS_CNTKCTL); 72 72 val &= CNTKCTL_VALID_BITS; 73 - __vcpu_sys_reg(vcpu, CNTHCTL_EL2) &= ~CNTKCTL_VALID_BITS; 74 - __vcpu_sys_reg(vcpu, CNTHCTL_EL2) |= val; 73 + __vcpu_rmw_sys_reg(vcpu, CNTHCTL_EL2, &=, ~CNTKCTL_VALID_BITS); 74 + __vcpu_rmw_sys_reg(vcpu, CNTHCTL_EL2, |=, val); 75 75 } 76 76 77 - __vcpu_sys_reg(vcpu, SP_EL2) = read_sysreg(sp_el1); 78 - __vcpu_sys_reg(vcpu, ELR_EL2) = read_sysreg_el1(SYS_ELR); 79 - __vcpu_sys_reg(vcpu, SPSR_EL2) = read_sysreg_el1(SYS_SPSR); 77 + __vcpu_assign_sys_reg(vcpu, SP_EL2, read_sysreg(sp_el1)); 78 + __vcpu_assign_sys_reg(vcpu, ELR_EL2, read_sysreg_el1(SYS_ELR)); 79 + __vcpu_assign_sys_reg(vcpu, SPSR_EL2, read_sysreg_el1(SYS_SPSR)); 80 80 } 81 81 82 82 static void __sysreg_restore_vel2_state(struct kvm_vcpu *vcpu)
+1 -1
arch/arm64/kvm/nested.c
··· 1757 1757 1758 1758 out: 1759 1759 for (enum vcpu_sysreg sr = __SANITISED_REG_START__; sr < NR_SYS_REGS; sr++) 1760 - (void)__vcpu_sys_reg(vcpu, sr); 1760 + __vcpu_rmw_sys_reg(vcpu, sr, |=, 0); 1761 1761 1762 1762 return 0; 1763 1763 }
+12 -12
arch/arm64/kvm/pmu-emul.c
··· 178 178 val |= lower_32_bits(val); 179 179 } 180 180 181 - __vcpu_sys_reg(vcpu, reg) = val; 181 + __vcpu_assign_sys_reg(vcpu, reg, val); 182 182 183 183 /* Recreate the perf event to reflect the updated sample_period */ 184 184 kvm_pmu_create_perf_event(pmc); ··· 204 204 void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu, u64 select_idx, u64 val) 205 205 { 206 206 kvm_pmu_release_perf_event(kvm_vcpu_idx_to_pmc(vcpu, select_idx)); 207 - __vcpu_sys_reg(vcpu, counter_index_to_reg(select_idx)) = val; 207 + __vcpu_assign_sys_reg(vcpu, counter_index_to_reg(select_idx), val); 208 208 kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu); 209 209 } 210 210 ··· 239 239 240 240 reg = counter_index_to_reg(pmc->idx); 241 241 242 - __vcpu_sys_reg(vcpu, reg) = val; 242 + __vcpu_assign_sys_reg(vcpu, reg, val); 243 243 244 244 kvm_pmu_release_perf_event(pmc); 245 245 } ··· 503 503 reg = __vcpu_sys_reg(vcpu, counter_index_to_reg(i)) + 1; 504 504 if (!kvm_pmc_is_64bit(pmc)) 505 505 reg = lower_32_bits(reg); 506 - __vcpu_sys_reg(vcpu, counter_index_to_reg(i)) = reg; 506 + __vcpu_assign_sys_reg(vcpu, counter_index_to_reg(i), reg); 507 507 508 508 /* No overflow? move on */ 509 509 if (kvm_pmc_has_64bit_overflow(pmc) ? reg : lower_32_bits(reg)) 510 510 continue; 511 511 512 512 /* Mark overflow */ 513 - __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i); 513 + __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, BIT(i)); 514 514 515 515 if (kvm_pmu_counter_can_chain(pmc)) 516 516 kvm_pmu_counter_increment(vcpu, BIT(i + 1), ··· 556 556 perf_event->attr.sample_period = period; 557 557 perf_event->hw.sample_period = period; 558 558 559 - __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx); 559 + __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, BIT(idx)); 560 560 561 561 if (kvm_pmu_counter_can_chain(pmc)) 562 562 kvm_pmu_counter_increment(vcpu, BIT(idx + 1), ··· 602 602 kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu); 603 603 604 604 /* The reset bits don't indicate any state, and shouldn't be saved. */ 605 - __vcpu_sys_reg(vcpu, PMCR_EL0) = val & ~(ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_P); 605 + __vcpu_assign_sys_reg(vcpu, PMCR_EL0, (val & ~(ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_P))); 606 606 607 607 if (val & ARMV8_PMU_PMCR_C) 608 608 kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0); ··· 779 779 u64 reg; 780 780 781 781 reg = counter_index_to_evtreg(pmc->idx); 782 - __vcpu_sys_reg(vcpu, reg) = data & kvm_pmu_evtyper_mask(vcpu->kvm); 782 + __vcpu_assign_sys_reg(vcpu, reg, (data & kvm_pmu_evtyper_mask(vcpu->kvm))); 783 783 784 784 kvm_pmu_create_perf_event(pmc); 785 785 } ··· 914 914 { 915 915 u64 mask = kvm_pmu_implemented_counter_mask(vcpu); 916 916 917 - __vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= mask; 918 - __vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= mask; 919 - __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= mask; 917 + __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, &=, mask); 918 + __vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, &=, mask); 919 + __vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, &=, mask); 920 920 921 921 kvm_pmu_reprogram_counter_mask(vcpu, mask); 922 922 } ··· 1038 1038 u64 val = __vcpu_sys_reg(vcpu, MDCR_EL2); 1039 1039 val &= ~MDCR_EL2_HPMN; 1040 1040 val |= FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters); 1041 - __vcpu_sys_reg(vcpu, MDCR_EL2) = val; 1041 + __vcpu_assign_sys_reg(vcpu, MDCR_EL2, val); 1042 1042 } 1043 1043 } 1044 1044 }
+31 -29
arch/arm64/kvm/sys_regs.c
··· 228 228 * to reverse-translate virtual EL2 system registers for a 229 229 * non-VHE guest hypervisor. 230 230 */ 231 - __vcpu_sys_reg(vcpu, reg) = val; 231 + __vcpu_assign_sys_reg(vcpu, reg, val); 232 232 233 233 switch (reg) { 234 234 case CNTHCTL_EL2: ··· 263 263 return; 264 264 265 265 memory_write: 266 - __vcpu_sys_reg(vcpu, reg) = val; 266 + __vcpu_assign_sys_reg(vcpu, reg, val); 267 267 } 268 268 269 269 /* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */ ··· 605 605 if ((val ^ rd->val) & ~OSLSR_EL1_OSLK) 606 606 return -EINVAL; 607 607 608 - __vcpu_sys_reg(vcpu, rd->reg) = val; 608 + __vcpu_assign_sys_reg(vcpu, rd->reg, val); 609 609 return 0; 610 610 } 611 611 ··· 791 791 mask |= GENMASK(n - 1, 0); 792 792 793 793 reset_unknown(vcpu, r); 794 - __vcpu_sys_reg(vcpu, r->reg) &= mask; 794 + __vcpu_rmw_sys_reg(vcpu, r->reg, &=, mask); 795 795 796 796 return __vcpu_sys_reg(vcpu, r->reg); 797 797 } ··· 799 799 static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 800 800 { 801 801 reset_unknown(vcpu, r); 802 - __vcpu_sys_reg(vcpu, r->reg) &= GENMASK(31, 0); 802 + __vcpu_rmw_sys_reg(vcpu, r->reg, &=, GENMASK(31, 0)); 803 803 804 804 return __vcpu_sys_reg(vcpu, r->reg); 805 805 } ··· 811 811 return 0; 812 812 813 813 reset_unknown(vcpu, r); 814 - __vcpu_sys_reg(vcpu, r->reg) &= kvm_pmu_evtyper_mask(vcpu->kvm); 814 + __vcpu_rmw_sys_reg(vcpu, r->reg, &=, kvm_pmu_evtyper_mask(vcpu->kvm)); 815 815 816 816 return __vcpu_sys_reg(vcpu, r->reg); 817 817 } ··· 819 819 static u64 reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 820 820 { 821 821 reset_unknown(vcpu, r); 822 - __vcpu_sys_reg(vcpu, r->reg) &= PMSELR_EL0_SEL_MASK; 822 + __vcpu_rmw_sys_reg(vcpu, r->reg, &=, PMSELR_EL0_SEL_MASK); 823 823 824 824 return __vcpu_sys_reg(vcpu, r->reg); 825 825 } ··· 835 835 * The value of PMCR.N field is included when the 836 836 * vCPU register is read via kvm_vcpu_read_pmcr(). 
837 837 */ 838 - __vcpu_sys_reg(vcpu, r->reg) = pmcr; 838 + __vcpu_assign_sys_reg(vcpu, r->reg, pmcr); 839 839 840 840 return __vcpu_sys_reg(vcpu, r->reg); 841 841 } ··· 907 907 return false; 908 908 909 909 if (p->is_write) 910 - __vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval; 910 + __vcpu_assign_sys_reg(vcpu, PMSELR_EL0, p->regval); 911 911 else 912 912 /* return PMSELR.SEL field */ 913 913 p->regval = __vcpu_sys_reg(vcpu, PMSELR_EL0) ··· 1076 1076 { 1077 1077 u64 mask = kvm_pmu_accessible_counter_mask(vcpu); 1078 1078 1079 - __vcpu_sys_reg(vcpu, r->reg) = val & mask; 1079 + __vcpu_assign_sys_reg(vcpu, r->reg, val & mask); 1080 1080 kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu); 1081 1081 1082 1082 return 0; ··· 1103 1103 val = p->regval & mask; 1104 1104 if (r->Op2 & 0x1) 1105 1105 /* accessing PMCNTENSET_EL0 */ 1106 - __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val; 1106 + __vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, |=, val); 1107 1107 else 1108 1108 /* accessing PMCNTENCLR_EL0 */ 1109 - __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val; 1109 + __vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, &=, ~val); 1110 1110 1111 1111 kvm_pmu_reprogram_counter_mask(vcpu, val); 1112 1112 } else { ··· 1129 1129 1130 1130 if (r->Op2 & 0x1) 1131 1131 /* accessing PMINTENSET_EL1 */ 1132 - __vcpu_sys_reg(vcpu, PMINTENSET_EL1) |= val; 1132 + __vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, |=, val); 1133 1133 else 1134 1134 /* accessing PMINTENCLR_EL1 */ 1135 - __vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val; 1135 + __vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, &=, ~val); 1136 1136 } else { 1137 1137 p->regval = __vcpu_sys_reg(vcpu, PMINTENSET_EL1); 1138 1138 } ··· 1151 1151 if (p->is_write) { 1152 1152 if (r->CRm & 0x2) 1153 1153 /* accessing PMOVSSET_EL0 */ 1154 - __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= (p->regval & mask); 1154 + __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, (p->regval & mask)); 1155 1155 else 1156 1156 /* accessing PMOVSCLR_EL0 */ 1157 - __vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask); 1157 + __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, &=, ~(p->regval & mask)); 1158 1158 } else { 1159 1159 p->regval = __vcpu_sys_reg(vcpu, PMOVSSET_EL0); 1160 1160 } ··· 1185 1185 if (!vcpu_mode_priv(vcpu)) 1186 1186 return undef_access(vcpu, p, r); 1187 1187 1188 - __vcpu_sys_reg(vcpu, PMUSERENR_EL0) = 1189 - p->regval & ARMV8_PMU_USERENR_MASK; 1188 + __vcpu_assign_sys_reg(vcpu, PMUSERENR_EL0, 1189 + (p->regval & ARMV8_PMU_USERENR_MASK)); 1190 1190 } else { 1191 1191 p->regval = __vcpu_sys_reg(vcpu, PMUSERENR_EL0) 1192 1192 & ARMV8_PMU_USERENR_MASK; ··· 1237 1237 if (!kvm_supports_32bit_el0()) 1238 1238 val |= ARMV8_PMU_PMCR_LC; 1239 1239 1240 - __vcpu_sys_reg(vcpu, r->reg) = val; 1240 + __vcpu_assign_sys_reg(vcpu, r->reg, val); 1241 1241 kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu); 1242 1242 1243 1243 return 0; ··· 2213 2213 if (kvm_has_mte(vcpu->kvm)) 2214 2214 clidr |= 2ULL << CLIDR_TTYPE_SHIFT(loc); 2215 2215 2216 - __vcpu_sys_reg(vcpu, r->reg) = clidr; 2216 + __vcpu_assign_sys_reg(vcpu, r->reg, clidr); 2217 2217 2218 2218 return __vcpu_sys_reg(vcpu, r->reg); 2219 2219 } ··· 2227 2227 if ((val & CLIDR_EL1_RES0) || (!(ctr_el0 & CTR_EL0_IDC) && idc)) 2228 2228 return -EINVAL; 2229 2229 2230 - __vcpu_sys_reg(vcpu, rd->reg) = val; 2230 + __vcpu_assign_sys_reg(vcpu, rd->reg, val); 2231 2231 2232 2232 return 0; 2233 2233 } ··· 2404 2404 const struct sys_reg_desc *r) 2405 2405 { 2406 2406 if (p->is_write) 2407 - __vcpu_sys_reg(vcpu, SP_EL1) = p->regval; 2407 + __vcpu_assign_sys_reg(vcpu, SP_EL1, p->regval); 2408 2408 else 2409 2409 p->regval = __vcpu_sys_reg(vcpu, SP_EL1); 2410 2410 ··· 2428 2428 const struct sys_reg_desc *r) 2429 2429 { 2430 2430 if (p->is_write) 2431 - __vcpu_sys_reg(vcpu, SPSR_EL1) = p->regval; 2431 + __vcpu_assign_sys_reg(vcpu, SPSR_EL1, p->regval); 2432 2432 else 2433 2433 p->regval = __vcpu_sys_reg(vcpu, SPSR_EL1); 2434 2434 ··· 2440 2440 const struct sys_reg_desc *r) 2441 2441 { 2442 2442 if (p->is_write) 2443 - __vcpu_sys_reg(vcpu, CNTKCTL_EL1) = p->regval; 2443 + __vcpu_assign_sys_reg(vcpu, CNTKCTL_EL1, p->regval); 2444 2444 else 2445 2445 p->regval = __vcpu_sys_reg(vcpu, CNTKCTL_EL1); 2446 2446 ··· 2454 2454 if (!cpus_have_final_cap(ARM64_HAS_HCR_NV1)) 2455 2455 val |= HCR_E2H; 2456 2456 2457 - return __vcpu_sys_reg(vcpu, r->reg) = val; 2457 + __vcpu_assign_sys_reg(vcpu, r->reg, val); 2458 + 2459 + return __vcpu_sys_reg(vcpu, r->reg); 2458 2460 } 2459 2461 2460 2462 static unsigned int __el2_visibility(const struct kvm_vcpu *vcpu, ··· 2627 2625 u64_replace_bits(val, hpmn, MDCR_EL2_HPMN); 2628 2626 } 2629 2627 2630 - __vcpu_sys_reg(vcpu, MDCR_EL2) = val; 2628 + __vcpu_assign_sys_reg(vcpu, MDCR_EL2, val); 2631 2629 2632 2630 /* 2633 2631 * Request a reload of the PMU to enable/disable the counters ··· 2756 2754 2757 2755 static u64 reset_mdcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 2758 2756 { 2759 - __vcpu_sys_reg(vcpu, r->reg) = vcpu->kvm->arch.nr_pmu_counters; 2757 + __vcpu_assign_sys_reg(vcpu, r->reg, vcpu->kvm->arch.nr_pmu_counters); 2760 2758 return vcpu->kvm->arch.nr_pmu_counters; 2761 2759 } 2762 2760 ··· 4792 4790 r->reset(vcpu, r); 4793 4791 4794 4792 if (r->reg >= __SANITISED_REG_START__ && r->reg < NR_SYS_REGS) 4795 - (void)__vcpu_sys_reg(vcpu, r->reg); 4793 + __vcpu_rmw_sys_reg(vcpu, r->reg, |=, 0); 4796 4794 } 4797 4795 4798 4796 set_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags); ··· 5014 5012 if (r->set_user) { 5015 5013 ret = (r->set_user)(vcpu, r, val); 5016 5014 } else { 5017 - __vcpu_sys_reg(vcpu, r->reg) = val; 5015 + __vcpu_assign_sys_reg(vcpu, r->reg, val); 5018 5016 ret = 0; 5019 5017 5020 5018
+2 -2
arch/arm64/kvm/sys_regs.h
··· 137 137 { 138 138 BUG_ON(!r->reg); 139 139 BUG_ON(r->reg >= NR_SYS_REGS); 140 - __vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL; 140 + __vcpu_assign_sys_reg(vcpu, r->reg, 0x1de7ec7edbadc0deULL); 141 141 return __vcpu_sys_reg(vcpu, r->reg); 142 142 } 143 143 ··· 145 145 { 146 146 BUG_ON(!r->reg); 147 147 BUG_ON(r->reg >= NR_SYS_REGS); 148 - __vcpu_sys_reg(vcpu, r->reg) = r->val; 148 + __vcpu_assign_sys_reg(vcpu, r->reg, r->val); 149 149 return __vcpu_sys_reg(vcpu, r->reg); 150 150 } 151 151
+5 -5
arch/arm64/kvm/vgic/vgic-v3-nested.c
··· 356 356 val = __vcpu_sys_reg(vcpu, ICH_HCR_EL2); 357 357 val &= ~ICH_HCR_EL2_EOIcount_MASK; 358 358 val |= (s_cpu_if->vgic_hcr & ICH_HCR_EL2_EOIcount_MASK); 359 - __vcpu_sys_reg(vcpu, ICH_HCR_EL2) = val; 360 - __vcpu_sys_reg(vcpu, ICH_VMCR_EL2) = s_cpu_if->vgic_vmcr; 359 + __vcpu_assign_sys_reg(vcpu, ICH_HCR_EL2, val); 360 + __vcpu_assign_sys_reg(vcpu, ICH_VMCR_EL2, s_cpu_if->vgic_vmcr); 361 361 362 362 for (i = 0; i < 4; i++) { 363 - __vcpu_sys_reg(vcpu, ICH_AP0RN(i)) = s_cpu_if->vgic_ap0r[i]; 364 - __vcpu_sys_reg(vcpu, ICH_AP1RN(i)) = s_cpu_if->vgic_ap1r[i]; 363 + __vcpu_assign_sys_reg(vcpu, ICH_AP0RN(i), s_cpu_if->vgic_ap0r[i]); 364 + __vcpu_assign_sys_reg(vcpu, ICH_AP1RN(i), s_cpu_if->vgic_ap1r[i]); 365 365 } 366 366 367 367 for_each_set_bit(i, &shadow_if->lr_map, kvm_vgic_global_state.nr_lr) { ··· 370 370 val &= ~ICH_LR_STATE; 371 371 val |= s_cpu_if->vgic_lr[i] & ICH_LR_STATE; 372 372 373 - __vcpu_sys_reg(vcpu, ICH_LRN(i)) = val; 373 + __vcpu_assign_sys_reg(vcpu, ICH_LRN(i), val); 374 374 s_cpu_if->vgic_lr[i] = 0; 375 375 } 376 376
+2 -2
arch/arm64/lib/crypto/poly1305-glue.c
··· 38 38 unsigned int todo = min_t(unsigned int, len, SZ_4K); 39 39 40 40 kernel_neon_begin(); 41 - poly1305_blocks_neon(state, src, todo, 1); 41 + poly1305_blocks_neon(state, src, todo, padbit); 42 42 kernel_neon_end(); 43 43 44 44 len -= todo; 45 45 src += todo; 46 46 } while (len); 47 47 } else 48 - poly1305_blocks(state, src, len, 1); 48 + poly1305_blocks(state, src, len, padbit); 49 49 } 50 50 EXPORT_SYMBOL_GPL(poly1305_blocks_arch); 51 51
+1 -1
arch/powerpc/boot/dts/microwatt.dts
··· 4 4 / { 5 5 #size-cells = <0x02>; 6 6 #address-cells = <0x02>; 7 - model-name = "microwatt"; 7 + model = "microwatt"; 8 8 compatible = "microwatt-soc"; 9 9 10 10 aliases {
+10
arch/powerpc/boot/dts/mpc8315erdb.dts
··· 6 6 */ 7 7 8 8 /dts-v1/; 9 + #include <dt-bindings/interrupt-controller/irq.h> 9 10 10 11 / { 11 12 compatible = "fsl,mpc8315erdb"; ··· 358 357 interrupts = <80 8>; 359 358 interrupt-parent = <&ipic>; 360 359 fsl,mpc8313-wakeup-timer = <&gtm1>; 360 + }; 361 + 362 + gpio: gpio-controller@c00 { 363 + compatible = "fsl,mpc8314-gpio"; 364 + reg = <0xc00 0x100>; 365 + interrupts = <74 IRQ_TYPE_LEVEL_LOW>; 366 + interrupt-parent = <&ipic>; 367 + gpio-controller; 368 + #gpio-cells = <2>; 361 369 }; 362 370 }; 363 371
+1 -1
arch/powerpc/include/asm/ppc_asm.h
··· 183 183 /* 184 184 * Used to name C functions called from asm 185 185 */ 186 - #ifdef CONFIG_PPC_KERNEL_PCREL 186 + #if defined(__powerpc64__) && defined(CONFIG_PPC_KERNEL_PCREL) 187 187 #define CFUNC(name) name@notoc 188 188 #else 189 189 #define CFUNC(name) name
+4 -4
arch/powerpc/include/uapi/asm/ioctls.h
··· 23 23 #define TCSETSW _IOW('t', 21, struct termios) 24 24 #define TCSETSF _IOW('t', 22, struct termios) 25 25 26 - #define TCGETA _IOR('t', 23, struct termio) 27 - #define TCSETA _IOW('t', 24, struct termio) 28 - #define TCSETAW _IOW('t', 25, struct termio) 29 - #define TCSETAF _IOW('t', 28, struct termio) 26 + #define TCGETA 0x40147417 /* _IOR('t', 23, struct termio) */ 27 + #define TCSETA 0x80147418 /* _IOW('t', 24, struct termio) */ 28 + #define TCSETAW 0x80147419 /* _IOW('t', 25, struct termio) */ 29 + #define TCSETAF 0x8014741c /* _IOW('t', 28, struct termio) */ 30 30 31 31 #define TCSBRK _IO('t', 29) 32 32 #define TCXONC _IO('t', 30)
+2
arch/powerpc/kernel/eeh.c
··· 1509 1509 /* Invalid PE ? */ 1510 1510 if (!pe) 1511 1511 return -ENODEV; 1512 + else 1513 + ret = eeh_ops->configure_bridge(pe); 1512 1514 1513 1515 return ret; 1514 1516 }
+1 -1
arch/powerpc/kernel/vdso/Makefile
··· 53 53 ldflags-y += $(filter-out $(CC_AUTO_VAR_INIT_ZERO_ENABLER) $(CC_FLAGS_FTRACE) -Wa$(comma)%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS)) 54 54 55 55 CC32FLAGS := -m32 56 - CC32FLAGSREMOVE := -mcmodel=medium -mabi=elfv1 -mabi=elfv2 -mcall-aixdesc 56 + CC32FLAGSREMOVE := -mcmodel=medium -mabi=elfv1 -mabi=elfv2 -mcall-aixdesc -mpcrel 57 57 ifdef CONFIG_CC_IS_CLANG 58 58 # This flag is supported by clang for 64-bit but not 32-bit so it will cause 59 59 # an unused command line flag warning for this file.
+1 -1
arch/x86/Kconfig
··· 89 89 select ARCH_HAS_DMA_OPS if GART_IOMMU || XEN 90 90 select ARCH_HAS_EARLY_DEBUG if KGDB 91 91 select ARCH_HAS_ELF_RANDOMIZE 92 - select ARCH_HAS_EXECMEM_ROX if X86_64 92 + select ARCH_HAS_EXECMEM_ROX if X86_64 && STRICT_MODULE_RWX 93 93 select ARCH_HAS_FAST_MULTIPLIER 94 94 select ARCH_HAS_FORTIFY_SOURCE 95 95 select ARCH_HAS_GCOV_PROFILE_ALL
+8
arch/x86/include/asm/module.h
··· 5 5 #include <asm-generic/module.h> 6 6 #include <asm/orc_types.h> 7 7 8 + struct its_array { 9 + #ifdef CONFIG_MITIGATION_ITS 10 + void **pages; 11 + int num; 12 + #endif 13 + }; 14 + 8 15 struct mod_arch_specific { 9 16 #ifdef CONFIG_UNWINDER_ORC 10 17 unsigned int num_orcs; 11 18 int *orc_unwind_ip; 12 19 struct orc_entry *orc_unwind; 13 20 #endif 21 + struct its_array its_pages; 14 22 }; 15 23 16 24 #endif /* _ASM_X86_MODULE_H */
+22
arch/x86/include/asm/sighandling.h
··· 24 24 int x64_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs); 25 25 int x32_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs); 26 26 27 + /* 28 + * To prevent immediate repeat of single step trap on return from SIGTRAP 29 + * handler if the trap flag (TF) is set without an external debugger attached, 30 + * clear the software event flag in the augmented SS, ensuring no single-step 31 + * trap is pending upon ERETU completion. 32 + * 33 + * Note, this function should be called in sigreturn() before the original 34 + * state is restored to make sure the TF is read from the entry frame. 35 + */ 36 + static __always_inline void prevent_single_step_upon_eretu(struct pt_regs *regs) 37 + { 38 + /* 39 + * If the trap flag (TF) is set, i.e., the sigreturn() SYSCALL instruction 40 + * is being single-stepped, do not clear the software event flag in the 41 + * augmented SS, thus a debugger won't skip over the following instruction. 42 + */ 43 + #ifdef CONFIG_X86_FRED 44 + if (!(regs->flags & X86_EFLAGS_TF)) 45 + regs->fred_ss.swevent = 0; 46 + #endif 47 + } 48 + 27 49 #endif /* _ASM_X86_SIGHANDLING_H */
+1 -1
arch/x86/include/asm/tdx.h
··· 106 106 107 107 typedef u64 (*sc_func_t)(u64 fn, struct tdx_module_args *args); 108 108 109 - static inline u64 sc_retry(sc_func_t func, u64 fn, 109 + static __always_inline u64 sc_retry(sc_func_t func, u64 fn, 110 110 struct tdx_module_args *args) 111 111 { 112 112 int retry = RDRAND_RETRY_LOOPS;
+55 -24
arch/x86/kernel/alternative.c
··· 116 116 #endif 117 117 static void *its_page; 118 118 static unsigned int its_offset; 119 + struct its_array its_pages; 120 + 121 + static void *__its_alloc(struct its_array *pages) 122 + { 123 + void *page __free(execmem) = execmem_alloc(EXECMEM_MODULE_TEXT, PAGE_SIZE); 124 + if (!page) 125 + return NULL; 126 + 127 + void *tmp = krealloc(pages->pages, (pages->num+1) * sizeof(void *), 128 + GFP_KERNEL); 129 + if (!tmp) 130 + return NULL; 131 + 132 + pages->pages = tmp; 133 + pages->pages[pages->num++] = page; 134 + 135 + return no_free_ptr(page); 136 + } 119 137 120 138 /* Initialize a thunk with the "jmp *reg; int3" instructions. */ 121 139 static void *its_init_thunk(void *thunk, int reg) ··· 169 151 return thunk + offset; 170 152 } 171 153 154 + static void its_pages_protect(struct its_array *pages) 155 + { 156 + for (int i = 0; i < pages->num; i++) { 157 + void *page = pages->pages[i]; 158 + execmem_restore_rox(page, PAGE_SIZE); 159 + } 160 + } 161 + 162 + static void its_fini_core(void) 163 + { 164 + if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX)) 165 + its_pages_protect(&its_pages); 166 + kfree(its_pages.pages); 167 + } 168 + 172 169 #ifdef CONFIG_MODULES 173 170 void its_init_mod(struct module *mod) 174 171 { ··· 206 173 its_page = NULL; 207 174 mutex_unlock(&text_mutex); 208 175 209 - for (int i = 0; i < mod->its_num_pages; i++) { 210 - void *page = mod->its_page_array[i]; 211 - execmem_restore_rox(page, PAGE_SIZE); 212 - } 176 + if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX)) 177 + its_pages_protect(&mod->arch.its_pages); 213 178 } 214 179 215 180 void its_free_mod(struct module *mod) ··· 215 184 if (!cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS)) 216 185 return; 217 186 218 - for (int i = 0; i < mod->its_num_pages; i++) { 219 - void *page = mod->its_page_array[i]; 187 + for (int i = 0; i < mod->arch.its_pages.num; i++) { 188 + void *page = mod->arch.its_pages.pages[i]; 220 189 execmem_free(page); 221 190 } 222 - kfree(mod->its_page_array); 191 + kfree(mod->arch.its_pages.pages); 223 192 } 224 193 #endif /* CONFIG_MODULES */ 225 194 226 195 static void *its_alloc(void) 227 196 { 228 - void *page __free(execmem) = execmem_alloc(EXECMEM_MODULE_TEXT, PAGE_SIZE); 197 + struct its_array *pages = &its_pages; 198 + void *page; 229 199 200 + #ifdef CONFIG_MODULES 201 + if (its_mod) 202 + pages = &its_mod->arch.its_pages; 203 + #endif 204 + 205 + page = __its_alloc(pages); 230 206 if (!page) 231 207 return NULL; 232 208 233 - #ifdef CONFIG_MODULES 234 - if (its_mod) { 235 - void *tmp = krealloc(its_mod->its_page_array, 236 - (its_mod->its_num_pages+1) * sizeof(void *), 237 - GFP_KERNEL); 238 - if (!tmp) 239 - return NULL; 209 + execmem_make_temp_rw(page, PAGE_SIZE); 210 + if (pages == &its_pages) 211 + set_memory_x((unsigned long)page, 1); 240 212 241 - its_mod->its_page_array = tmp; 242 - its_mod->its_page_array[its_mod->its_num_pages++] = page; 243 - 244 - execmem_make_temp_rw(page, PAGE_SIZE); 245 - } 246 - #endif /* CONFIG_MODULES */ 247 - 248 - return no_free_ptr(page); 213 + return page; 249 214 } 250 215 251 216 static void *its_allocate_thunk(int reg) ··· 295 268 return thunk; 296 269 297 270 } 298 - #endif 271 + #else 272 + static inline void its_fini_core(void) {} 273 + #endif /* CONFIG_MITIGATION_ITS */ 299 274 300 275 /* 301 276 * Nomenclature for variable names to simplify and clarify this code and ease ··· 2366 2337 */ 2367 2338 apply_retpolines(__retpoline_sites, __retpoline_sites_end); 2368 2339 apply_returns(__return_sites, __return_sites_end); 2340 + 2341 + its_fini_core(); 2369 2342 2370 2343 /* 2371 2344 * Adjust all CALL instructions to point to func()-10, including
+4
arch/x86/kernel/signal_32.c
··· 152 152 struct sigframe_ia32 __user *frame = (struct sigframe_ia32 __user *)(regs->sp-8); 153 153 sigset_t set; 154 154 155 + prevent_single_step_upon_eretu(regs); 156 + 155 157 if (!access_ok(frame, sizeof(*frame))) 156 158 goto badframe; 157 159 if (__get_user(set.sig[0], &frame->sc.oldmask) ··· 176 174 struct pt_regs *regs = current_pt_regs(); 177 175 struct rt_sigframe_ia32 __user *frame; 178 176 sigset_t set; 177 + 178 + prevent_single_step_upon_eretu(regs); 179 179 180 180 frame = (struct rt_sigframe_ia32 __user *)(regs->sp - 4); 181 181
+4
arch/x86/kernel/signal_64.c
··· 250 250 sigset_t set; 251 251 unsigned long uc_flags; 252 252 253 + prevent_single_step_upon_eretu(regs); 254 + 253 255 frame = (struct rt_sigframe __user *)(regs->sp - sizeof(long)); 254 256 if (!access_ok(frame, sizeof(*frame))) 255 257 goto badframe; ··· 367 365 struct rt_sigframe_x32 __user *frame; 368 366 sigset_t set; 369 367 unsigned long uc_flags; 368 + 369 + prevent_single_step_upon_eretu(regs); 370 370 371 371 frame = (struct rt_sigframe_x32 __user *)(regs->sp - 8); 372 372
+24
arch/x86/kernel/smp.c
··· 299 299 .send_call_func_single_ipi = native_send_call_func_single_ipi, 300 300 }; 301 301 EXPORT_SYMBOL_GPL(smp_ops); 302 + 303 + int arch_cpu_rescan_dead_smt_siblings(void) 304 + { 305 + enum cpuhp_smt_control old = cpu_smt_control; 306 + int ret; 307 + 308 + /* 309 + * If SMT has been disabled and SMT siblings are in HLT, bring them back 310 + * online and offline them again so that they end up in MWAIT proper. 311 + * 312 + * Called with hotplug enabled. 313 + */ 314 + if (old != CPU_SMT_DISABLED && old != CPU_SMT_FORCE_DISABLED) 315 + return 0; 316 + 317 + ret = cpuhp_smt_enable(); 318 + if (ret) 319 + return ret; 320 + 321 + ret = cpuhp_smt_disable(old); 322 + 323 + return ret; 324 + } 325 + EXPORT_SYMBOL_GPL(arch_cpu_rescan_dead_smt_siblings);
+7 -47
arch/x86/kernel/smpboot.c
··· 1244 1244 local_irq_disable(); 1245 1245 } 1246 1246 1247 + /* 1248 + * We need to flush the caches before going to sleep, lest we have 1249 + * dirty data in our caches when we come back up. 1250 + */ 1247 1251 void __noreturn mwait_play_dead(unsigned int eax_hint) 1248 1252 { 1249 1253 struct mwait_cpu_dead *md = this_cpu_ptr(&mwait_cpu_dead); ··· 1291 1287 native_halt(); 1292 1288 } 1293 1289 } 1294 - } 1295 - 1296 - /* 1297 - * We need to flush the caches before going to sleep, lest we have 1298 - * dirty data in our caches when we come back up. 1299 - */ 1300 - static inline void mwait_play_dead_cpuid_hint(void) 1301 - { 1302 - unsigned int eax, ebx, ecx, edx; 1303 - unsigned int highest_cstate = 0; 1304 - unsigned int highest_subcstate = 0; 1305 - int i; 1306 - 1307 - if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || 1308 - boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) 1309 - return; 1310 - if (!this_cpu_has(X86_FEATURE_MWAIT)) 1311 - return; 1312 - if (!this_cpu_has(X86_FEATURE_CLFLUSH)) 1313 - return; 1314 - 1315 - eax = CPUID_LEAF_MWAIT; 1316 - ecx = 0; 1317 - native_cpuid(&eax, &ebx, &ecx, &edx); 1318 - 1319 - /* 1320 - * eax will be 0 if EDX enumeration is not valid. 1321 - * Initialized below to cstate, sub_cstate value when EDX is valid. 
1322 - */ 1323 - if (!(ecx & CPUID5_ECX_EXTENSIONS_SUPPORTED)) { 1324 - eax = 0; 1325 - } else { 1326 - edx >>= MWAIT_SUBSTATE_SIZE; 1327 - for (i = 0; i < 7 && edx; i++, edx >>= MWAIT_SUBSTATE_SIZE) { 1328 - if (edx & MWAIT_SUBSTATE_MASK) { 1329 - highest_cstate = i; 1330 - highest_subcstate = edx & MWAIT_SUBSTATE_MASK; 1331 - } 1332 - } 1333 - eax = (highest_cstate << MWAIT_SUBSTATE_SIZE) | 1334 - (highest_subcstate - 1); 1335 - } 1336 - 1337 - mwait_play_dead(eax); 1338 1290 } 1339 1291 1340 1292 /* ··· 1343 1383 play_dead_common(); 1344 1384 tboot_shutdown(TB_SHUTDOWN_WFS); 1345 1385 1346 - mwait_play_dead_cpuid_hint(); 1347 - if (cpuidle_play_dead()) 1348 - hlt_play_dead(); 1386 + /* Below returns only on error. */ 1387 + cpuidle_play_dead(); 1388 + hlt_play_dead(); 1349 1389 } 1350 1390 1351 1391 #else /* ... !CONFIG_HOTPLUG_CPU */
+8 -1
arch/x86/kvm/mmu/mmu.c
··· 4896 4896 { 4897 4897 u64 error_code = PFERR_GUEST_FINAL_MASK; 4898 4898 u8 level = PG_LEVEL_4K; 4899 + u64 direct_bits; 4899 4900 u64 end; 4900 4901 int r; 4901 4902 4902 4903 if (!vcpu->kvm->arch.pre_fault_allowed) 4903 4904 return -EOPNOTSUPP; 4905 + 4906 + if (kvm_is_gfn_alias(vcpu->kvm, gpa_to_gfn(range->gpa))) 4907 + return -EINVAL; 4904 4908 4905 4909 /* 4906 4910 * reload is efficient when called repeatedly, so we can do it on ··· 4914 4910 if (r) 4915 4911 return r; 4916 4912 4913 + direct_bits = 0; 4917 4914 if (kvm_arch_has_private_mem(vcpu->kvm) && 4918 4915 kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(range->gpa))) 4919 4916 error_code |= PFERR_PRIVATE_ACCESS; 4917 + else 4918 + direct_bits = gfn_to_gpa(kvm_gfn_direct_bits(vcpu->kvm)); 4920 4919 4921 4920 /* 4922 4921 * Shadow paging uses GVA for kvm page fault, so restrict to 4923 4922 * two-dimensional paging. 4924 4923 */ 4925 - r = kvm_tdp_map_page(vcpu, range->gpa, error_code, &level); 4924 + r = kvm_tdp_map_page(vcpu, range->gpa | direct_bits, error_code, &level); 4926 4925 if (r < 0) 4927 4926 return r; 4928 4927
+35 -9
arch/x86/kvm/svm/sev.c
··· 2871 2871 } 2872 2872 } 2873 2873 2874 + static bool is_sev_snp_initialized(void) 2875 + { 2876 + struct sev_user_data_snp_status *status; 2877 + struct sev_data_snp_addr buf; 2878 + bool initialized = false; 2879 + int ret, error = 0; 2880 + 2881 + status = snp_alloc_firmware_page(GFP_KERNEL | __GFP_ZERO); 2882 + if (!status) 2883 + return false; 2884 + 2885 + buf.address = __psp_pa(status); 2886 + ret = sev_do_cmd(SEV_CMD_SNP_PLATFORM_STATUS, &buf, &error); 2887 + if (ret) { 2888 + pr_err("SEV: SNP_PLATFORM_STATUS failed ret=%d, fw_error=%d (%#x)\n", 2889 + ret, error, error); 2890 + goto out; 2891 + } 2892 + 2893 + initialized = !!status->state; 2894 + 2895 + out: 2896 + snp_free_firmware_page(status); 2897 + 2898 + return initialized; 2899 + } 2900 + 2874 2901 void __init sev_hardware_setup(void) 2875 2902 { 2876 2903 unsigned int eax, ebx, ecx, edx, sev_asid_count, sev_es_asid_count; ··· 3002 2975 sev_snp_supported = sev_snp_enabled && cc_platform_has(CC_ATTR_HOST_SEV_SNP); 3003 2976 3004 2977 out: 2978 + if (sev_enabled) { 2979 + init_args.probe = true; 2980 + if (sev_platform_init(&init_args)) 2981 + sev_supported = sev_es_supported = sev_snp_supported = false; 2982 + else if (sev_snp_supported) 2983 + sev_snp_supported = is_sev_snp_initialized(); 2984 + } 2985 + 3005 2986 if (boot_cpu_has(X86_FEATURE_SEV)) 3006 2987 pr_info("SEV %s (ASIDs %u - %u)\n", 3007 2988 sev_supported ? min_sev_asid <= max_sev_asid ? "enabled" : ··· 3036 3001 sev_supported_vmsa_features = 0; 3037 3002 if (sev_es_debug_swap_enabled) 3038 3003 sev_supported_vmsa_features |= SVM_SEV_FEAT_DEBUG_SWAP; 3039 - 3040 - if (!sev_enabled) 3041 - return; 3042 - 3043 - /* 3044 - * Do both SNP and SEV initialization at KVM module load. 3045 - */ 3046 - init_args.probe = true; 3047 - sev_platform_init(&init_args); 3048 3004 } 3049 3005 3050 3006 void sev_hardware_unsetup(void)
-3
arch/x86/mm/init_32.c
··· 30 30 #include <linux/initrd.h> 31 31 #include <linux/cpumask.h> 32 32 #include <linux/gfp.h> 33 - #include <linux/execmem.h> 34 33 35 34 #include <asm/asm.h> 36 35 #include <asm/bios_ebda.h> ··· 747 748 set_pages_ro(virt_to_page(start), size >> PAGE_SHIFT); 748 749 pr_info("Write protecting kernel text and read-only data: %luk\n", 749 750 size >> 10); 750 - 751 - execmem_cache_make_ro(); 752 751 753 752 kernel_set_to_readonly = 1; 754 753
-3
arch/x86/mm/init_64.c
··· 34 34 #include <linux/gfp.h> 35 35 #include <linux/kcore.h> 36 36 #include <linux/bootmem_info.h> 37 - #include <linux/execmem.h> 38 37 39 38 #include <asm/processor.h> 40 39 #include <asm/bios_ebda.h> ··· 1390 1391 printk(KERN_INFO "Write protecting the kernel read-only data: %luk\n", 1391 1392 (end - start) >> 10); 1392 1393 set_memory_ro(start, (end - start) >> PAGE_SHIFT); 1393 - 1394 - execmem_cache_make_ro(); 1395 1394 1396 1395 kernel_set_to_readonly = 1; 1397 1396
+3
arch/x86/mm/pat/set_memory.c
··· 1257 1257 pgprot_t pgprot; 1258 1258 int i = 0; 1259 1259 1260 + if (!cpu_feature_enabled(X86_FEATURE_PSE)) 1261 + return 0; 1262 + 1260 1263 addr &= PMD_MASK; 1261 1264 pte = pte_offset_kernel(pmd, addr); 1262 1265 first = *pte;
+5 -12
arch/x86/power/hibernate.c
··· 192 192 193 193 int arch_resume_nosmt(void) 194 194 { 195 - int ret = 0; 195 + int ret; 196 + 196 197 /* 197 198 * We reached this while coming out of hibernation. This means 198 199 * that SMT siblings are sleeping in hlt, as mwait is not safe ··· 207 206 * Called with hotplug disabled. 208 207 */ 209 208 cpu_hotplug_enable(); 210 - if (cpu_smt_control == CPU_SMT_DISABLED || 211 - cpu_smt_control == CPU_SMT_FORCE_DISABLED) { 212 - enum cpuhp_smt_control old = cpu_smt_control; 213 209 214 - ret = cpuhp_smt_enable(); 215 - if (ret) 216 - goto out; 217 - ret = cpuhp_smt_disable(old); 218 - if (ret) 219 - goto out; 220 - } 221 - out: 210 + ret = arch_cpu_rescan_dead_smt_siblings(); 211 + 222 212 cpu_hotplug_disable(); 213 + 223 214 return ret; 224 215 }
+3 -2
arch/x86/virt/vmx/tdx/tdx.c
··· 75 75 args->r9, args->r10, args->r11); 76 76 } 77 77 78 - static inline int sc_retry_prerr(sc_func_t func, sc_err_func_t err_func, 79 - u64 fn, struct tdx_module_args *args) 78 + static __always_inline int sc_retry_prerr(sc_func_t func, 79 + sc_err_func_t err_func, 80 + u64 fn, struct tdx_module_args *args) 80 81 { 81 82 u64 sret = sc_retry(func, fn, args); 82 83
+13 -13
block/blk-merge.c
··· 998 998 if (!plug || rq_list_empty(&plug->mq_list)) 999 999 return false; 1000 1000 1001 - rq_list_for_each(&plug->mq_list, rq) { 1002 - if (rq->q == q) { 1003 - if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) == 1004 - BIO_MERGE_OK) 1005 - return true; 1006 - break; 1007 - } 1001 + rq = plug->mq_list.tail; 1002 + if (rq->q == q) 1003 + return blk_attempt_bio_merge(q, rq, bio, nr_segs, false) == 1004 + BIO_MERGE_OK; 1005 + else if (!plug->multiple_queues) 1006 + return false; 1008 1007 1009 - /* 1010 - * Only keep iterating plug list for merges if we have multiple 1011 - * queues 1012 - */ 1013 - if (!plug->multiple_queues) 1014 - break; 1008 + rq_list_for_each(&plug->mq_list, rq) { 1009 + if (rq->q != q) 1010 + continue; 1011 + if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) == 1012 + BIO_MERGE_OK) 1013 + return true; 1014 + break; 1015 1015 } 1016 1016 return false; 1017 1017 }
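The blk-merge rework above tries the plug list's tail request first, since the most recently added request is the likeliest merge candidate, and only walks the full list when the plug spans multiple queues. A toy re-statement of that policy (the array-based helper is hypothetical, not the kernel API):

```c
#include <stdbool.h>

/* Decide whether a bio for queue q would be considered for merging,
 * given the queue ids of the plugged requests (tail is queues[n-1]). */
static bool plug_would_try_merge(const int *queues, int n,
				 bool multiple_queues, int q)
{
	int i;

	if (n == 0)
		return false;
	if (queues[n - 1] == q)		/* tail: most likely candidate */
		return true;
	if (!multiple_queues)		/* single-queue plug: give up early */
		return false;
	for (i = 0; i < n; i++)		/* fall back to a full walk */
		if (queues[i] == q)
			return true;
	return false;
}
```

With `multiple_queues` false and a non-matching tail, the walk is skipped entirely.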
+6 -2
block/blk-zoned.c
··· 1225 1225 if (bio_flagged(bio, BIO_EMULATES_ZONE_APPEND)) { 1226 1226 bio->bi_opf &= ~REQ_OP_MASK; 1227 1227 bio->bi_opf |= REQ_OP_ZONE_APPEND; 1228 + bio_clear_flag(bio, BIO_EMULATES_ZONE_APPEND); 1228 1229 } 1229 1230 1230 1231 /* ··· 1307 1306 spin_unlock_irqrestore(&zwplug->lock, flags); 1308 1307 1309 1308 bdev = bio->bi_bdev; 1310 - submit_bio_noacct_nocheck(bio); 1311 1309 1312 1310 /* 1313 1311 * blk-mq devices will reuse the extra reference on the request queue ··· 1314 1314 * path for BIO-based devices will not do that. So drop this extra 1315 1315 * reference here. 1316 1316 */ 1317 - if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO)) 1317 + if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO)) { 1318 + bdev->bd_disk->fops->submit_bio(bio); 1318 1319 blk_queue_exit(bdev->bd_disk->queue); 1320 + } else { 1321 + blk_mq_submit_bio(bio); 1322 + } 1319 1323 1320 1324 put_zwplug: 1321 1325 /* Drop the reference we took in disk_zone_wplug_schedule_bio_work(). */
+1 -1
crypto/hkdf.c
··· 566 566 567 567 static void __exit crypto_hkdf_module_exit(void) {} 568 568 569 - module_init(crypto_hkdf_module_init); 569 + late_initcall(crypto_hkdf_module_init); 570 570 module_exit(crypto_hkdf_module_exit); 571 571 572 572 MODULE_LICENSE("GPL");
+2 -2
drivers/accel/amdxdna/aie2_psp.c
··· 126 126 psp->ddev = ddev; 127 127 memcpy(psp->psp_regs, conf->psp_regs, sizeof(psp->psp_regs)); 128 128 129 - psp->fw_buf_sz = ALIGN(conf->fw_size, PSP_FW_ALIGN) + PSP_FW_ALIGN; 130 - psp->fw_buffer = drmm_kmalloc(ddev, psp->fw_buf_sz, GFP_KERNEL); 129 + psp->fw_buf_sz = ALIGN(conf->fw_size, PSP_FW_ALIGN); 130 + psp->fw_buffer = drmm_kmalloc(ddev, psp->fw_buf_sz + PSP_FW_ALIGN, GFP_KERNEL); 131 131 if (!psp->fw_buffer) { 132 132 drm_err(ddev, "no memory for fw buffer"); 133 133 return NULL;
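The aie2_psp hunk above keeps `fw_buf_sz` as the aligned usable size while over-allocating by one extra `PSP_FW_ALIGN` of slack, so an aligned start pointer always fits inside the raw buffer. A minimal sketch of that pattern, assuming a power-of-two alignment (macro and helper names are mine):

```c
#include <stddef.h>
#include <stdint.h>

/* Round x up to the next multiple of the power-of-two a. */
#define ALIGN_UP(x, a) (((x) + ((a) - 1)) & ~(((uintptr_t)(a)) - 1))

/* Given a raw allocation of ALIGN_UP(size, align) + align bytes,
 * return the first align-boundary inside it; the full aligned size
 * then always fits before the end of the raw buffer. */
static void *aligned_start(void *raw, size_t align)
{
	return (void *)ALIGN_UP((uintptr_t)raw, align);
}
```

E.g. `ALIGN_UP(1000, 256)` is 1024, and rounding the raw pointer up consumes at most `align - 1` of the slack bytes.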
+1 -1
drivers/acpi/acpi_pad.c
··· 33 33 static DEFINE_MUTEX(isolated_cpus_lock); 34 34 static DEFINE_MUTEX(round_robin_lock); 35 35 36 - static unsigned long power_saving_mwait_eax; 36 + static unsigned int power_saving_mwait_eax; 37 37 38 38 static unsigned char tsc_detected_unstable; 39 39 static unsigned char tsc_marked_unstable;
+3 -6
drivers/acpi/apei/einj-core.c
··· 883 883 } 884 884 885 885 einj_dev = faux_device_create("acpi-einj", NULL, &einj_device_ops); 886 - if (!einj_dev) 887 - return -ENODEV; 888 886 889 - einj_initialized = true; 887 + if (einj_dev) 888 + einj_initialized = true; 890 889 891 890 return 0; 892 891 } 893 892 894 893 static void __exit einj_exit(void) 895 894 { 896 - if (einj_initialized) 897 - faux_device_destroy(einj_dev); 898 - 895 + faux_device_destroy(einj_dev); 899 896 } 900 897 901 898 module_init(einj_init);
+1 -1
drivers/acpi/cppc_acpi.c
··· 476 476 struct cpc_desc *cpc_ptr; 477 477 int cpu; 478 478 479 - for_each_possible_cpu(cpu) { 479 + for_each_present_cpu(cpu) { 480 480 cpc_ptr = per_cpu(cpc_desc_ptr, cpu); 481 481 desired_reg = &cpc_ptr->cpc_regs[DESIRED_PERF]; 482 482 if (!CPC_IN_SYSTEM_MEMORY(desired_reg) &&
+17
drivers/acpi/ec.c
··· 23 23 #include <linux/delay.h> 24 24 #include <linux/interrupt.h> 25 25 #include <linux/list.h> 26 + #include <linux/printk.h> 26 27 #include <linux/spinlock.h> 27 28 #include <linux/slab.h> 29 + #include <linux/string.h> 28 30 #include <linux/suspend.h> 29 31 #include <linux/acpi.h> 30 32 #include <linux/dmi.h> ··· 2030 2028 * Asus X50GL: 2031 2029 * https://bugzilla.kernel.org/show_bug.cgi?id=11880 2032 2030 */ 2031 + goto out; 2032 + } 2033 + 2034 + if (!strstarts(ecdt_ptr->id, "\\")) { 2035 + /* 2036 + * The ECDT table on some MSI notebooks contains invalid data, together 2037 + * with an empty ID string (""). 2038 + * 2039 + * Section 5.2.15 of the ACPI specification requires the ID string to be 2040 + * a "fully qualified reference to the (...) embedded controller device", 2041 + * so this string always has to start with a backslash. 2042 + * 2043 + * By verifying this we can avoid such faulty ECDT tables in a safe way. 2044 + */ 2045 + pr_err(FW_BUG "Ignoring ECDT due to invalid ID string \"%s\"\n", ecdt_ptr->id); 2033 2046 goto out; 2034 2047 } 2035 2048
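The ECDT change above rejects tables whose ID string is not a fully qualified ACPI namespace path. A userspace sketch of that check, with a local `strncmp`-based stand-in for the kernel's `strstarts()` (helper names are mine):

```c
#include <stdbool.h>
#include <string.h>

/* Local equivalent of the kernel's strstarts(). */
static bool my_strstarts(const char *str, const char *prefix)
{
	return strncmp(str, prefix, strlen(prefix)) == 0;
}

/* A usable ECDT ID is a fully qualified namespace reference, so it
 * must begin with a backslash; an empty string fails naturally. */
static bool ecdt_id_valid(const char *id)
{
	return my_strstarts(id, "\\");
}
```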
+6
drivers/acpi/internal.h
··· 175 175 static inline void acpi_early_processor_control_setup(void) {} 176 176 #endif 177 177 178 + #ifdef CONFIG_ACPI_PROCESSOR_CSTATE 179 + void acpi_idle_rescan_dead_smt_siblings(void); 180 + #else 181 + static inline void acpi_idle_rescan_dead_smt_siblings(void) {} 182 + #endif 183 + 178 184 /* -------------------------------------------------------------------------- 179 185 Embedded Controller 180 186 -------------------------------------------------------------------------- */
+3
drivers/acpi/processor_driver.c
··· 279 279 * after acpi_cppc_processor_probe() has been called for all online CPUs 280 280 */ 281 281 acpi_processor_init_invariance_cppc(); 282 + 283 + acpi_idle_rescan_dead_smt_siblings(); 284 + 282 285 return 0; 283 286 err: 284 287 driver_unregister(&acpi_processor_driver);
+8
drivers/acpi/processor_idle.c
··· 24 24 #include <acpi/processor.h> 25 25 #include <linux/context_tracking.h> 26 26 27 + #include "internal.h" 28 + 27 29 /* 28 30 * Include the apic definitions for x86 to have the APIC timer related defines 29 31 * available also for UP (on SMP it gets magically included via linux/smp.h). ··· 57 55 }; 58 56 59 57 #ifdef CONFIG_ACPI_PROCESSOR_CSTATE 58 + void acpi_idle_rescan_dead_smt_siblings(void) 59 + { 60 + if (cpuidle_get_driver() == &acpi_idle_driver) 61 + arch_cpu_rescan_dead_smt_siblings(); 62 + } 63 + 60 64 static 61 65 DEFINE_PER_CPU(struct acpi_processor_cx * [CPUIDLE_STATE_MAX], acpi_cstate); 62 66
+7
drivers/acpi/resource.c
··· 667 667 }, 668 668 }, 669 669 { 670 + /* MACHENIKE L16P/L16P */ 671 + .matches = { 672 + DMI_MATCH(DMI_SYS_VENDOR, "MACHENIKE"), 673 + DMI_MATCH(DMI_BOARD_NAME, "L16P"), 674 + }, 675 + }, 676 + { 670 677 /* 671 678 * TongFang GM5HG0A in case of the SKIKK Vanaheim relabel the 672 679 * board-name is changed, so check OEM strings instead. Note
+33 -6
drivers/ata/ahci.c
··· 1410 1410 1411 1411 static bool ahci_broken_lpm(struct pci_dev *pdev) 1412 1412 { 1413 + /* 1414 + * Platforms with LPM problems. 1415 + * If driver_data is NULL, there is no existing BIOS version with 1416 + * functioning LPM. 1417 + * If driver_data is non-NULL, then driver_data contains the DMI BIOS 1418 + * build date of the first BIOS version with functioning LPM (i.e. older 1419 + * BIOS versions have broken LPM). 1420 + */ 1413 1421 static const struct dmi_system_id sysids[] = { 1414 - /* Various Lenovo 50 series have LPM issues with older BIOSen */ 1415 1422 { 1416 1423 .matches = { 1417 1424 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), ··· 1445 1438 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 1446 1439 DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W541"), 1447 1440 }, 1441 + .driver_data = "20180409", /* 2.35 */ 1442 + }, 1443 + { 1444 + .matches = { 1445 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 1446 + DMI_MATCH(DMI_PRODUCT_VERSION, "ASUSPRO D840MB_M840SA"), 1447 + }, 1448 + /* 320 is broken, there is no known good version. */ 1449 + }, 1450 + { 1448 1451 /* 1449 - * Note date based on release notes, 2.35 has been 1450 - * reported to be good, but I've been unable to get 1451 - * a hold of the reporter to get the DMI BIOS date. 1452 - * TODO: fix this. 1452 + * AMD 500 Series Chipset SATA Controller [1022:43eb] 1453 + * on this motherboard timeouts on ports 5 and 6 when 1454 + * LPM is enabled, at least with WDC WD20EFAX-68FB5N0 1455 + * hard drives. LPM with the same drive works fine on 1456 + * all other ports on the same controller. 1453 1457 */ 1454 - .driver_data = "20180310", /* 2.35 */ 1458 + .matches = { 1459 + DMI_MATCH(DMI_BOARD_VENDOR, 1460 + "ASUSTeK COMPUTER INC."), 1461 + DMI_MATCH(DMI_BOARD_NAME, 1462 + "ROG STRIX B550-F GAMING (WI-FI)"), 1463 + }, 1464 + /* 3621 is broken, there is no known good version. 
*/ 1455 1465 }, 1456 1466 { } /* terminate list */ 1457 1467 }; ··· 1478 1454 1479 1455 if (!dmi) 1480 1456 return false; 1457 + 1458 + if (!dmi->driver_data) 1459 + return true; 1481 1460 1482 1461 dmi_get_date(DMI_BIOS_DATE, &year, &month, &date); 1483 1462 snprintf(buf, sizeof(buf), "%04d%02d%02d", year, month, date);
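The `ahci_broken_lpm()` logic above compares the DMI BIOS date against the per-board cutoff by formatting both as zero-padded `YYYYMMDD` strings, which order the same way as the dates they encode. A standalone sketch of that comparison (function name is mine):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Format a date as "YYYYMMDD" and compare lexicographically against a
 * cutoff in the same format; zero padding makes strcmp() sufficient. */
static bool bios_older_than(int year, int month, int day, const char *cutoff)
{
	char buf[9];

	snprintf(buf, sizeof(buf), "%04d%02d%02d", year, month, day);
	return strcmp(buf, cutoff) < 0;
}
```

A BIOS dated 2018-03-10 is older than the `"20180409"` cutoff; one dated exactly 2018-04-09 is not.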
+16 -8
drivers/ata/libata-acpi.c
··· 514 514 EXPORT_SYMBOL_GPL(ata_acpi_gtm_xfermask); 515 515 516 516 /** 517 - * ata_acpi_cbl_80wire - Check for 80 wire cable 517 + * ata_acpi_cbl_pata_type - Return PATA cable type 518 518 * @ap: Port to check 519 - * @gtm: GTM data to use 520 519 * 521 - * Return 1 if the @gtm indicates the BIOS selected an 80wire mode. 520 + * Return ATA_CBL_PATA* according to the transfer mode selected by BIOS 522 521 */ 523 - int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm) 522 + int ata_acpi_cbl_pata_type(struct ata_port *ap) 524 523 { 525 524 struct ata_device *dev; 525 + int ret = ATA_CBL_PATA_UNK; 526 + const struct ata_acpi_gtm *gtm = ata_acpi_init_gtm(ap); 527 + 528 + if (!gtm) 529 + return ATA_CBL_PATA40; 526 530 527 531 ata_for_each_dev(dev, &ap->link, ENABLED) { 528 532 unsigned int xfer_mask, udma_mask; ··· 534 530 xfer_mask = ata_acpi_gtm_xfermask(dev, gtm); 535 531 ata_unpack_xfermask(xfer_mask, NULL, NULL, &udma_mask); 536 532 537 - if (udma_mask & ~ATA_UDMA_MASK_40C) 538 - return 1; 533 + ret = ATA_CBL_PATA40; 534 + 535 + if (udma_mask & ~ATA_UDMA_MASK_40C) { 536 + ret = ATA_CBL_PATA80; 537 + break; 538 + } 539 539 } 540 540 541 - return 0; 541 + return ret; 542 542 } 543 - EXPORT_SYMBOL_GPL(ata_acpi_cbl_80wire); 543 + EXPORT_SYMBOL_GPL(ata_acpi_cbl_pata_type); 544 544 545 545 static void ata_acpi_gtf_to_tf(struct ata_device *dev, 546 546 const struct ata_acpi_gtf *gtf,
+1 -1
drivers/ata/pata_cs5536.c
··· 27 27 #include <scsi/scsi_host.h> 28 28 #include <linux/dmi.h> 29 29 30 - #ifdef CONFIG_X86_32 30 + #if defined(CONFIG_X86) && defined(CONFIG_X86_32) 31 31 #include <asm/msr.h> 32 32 static int use_msr; 33 33 module_param_named(msr, use_msr, int, 0644);
+1 -1
drivers/ata/pata_macio.c
··· 1298 1298 priv->dev = &pdev->dev; 1299 1299 1300 1300 /* Get MMIO regions */ 1301 - if (pci_request_regions(pdev, "pata-macio")) { 1301 + if (pcim_request_all_regions(pdev, "pata-macio")) { 1302 1302 dev_err(&pdev->dev, 1303 1303 "Cannot obtain PCI resources\n"); 1304 1304 return -EBUSY;
+4 -5
drivers/ata/pata_via.c
··· 201 201 two drives */ 202 202 if (ata66 & (0x10100000 >> (16 * ap->port_no))) 203 203 return ATA_CBL_PATA80; 204 + 204 205 /* Check with ACPI so we can spot BIOS reported SATA bridges */ 205 - if (ata_acpi_init_gtm(ap) && 206 - ata_acpi_cbl_80wire(ap, ata_acpi_init_gtm(ap))) 207 - return ATA_CBL_PATA80; 208 - return ATA_CBL_PATA40; 206 + return ata_acpi_cbl_pata_type(ap); 209 207 } 210 208 211 209 static int via_pre_reset(struct ata_link *link, unsigned long deadline) ··· 366 368 } 367 369 368 370 if (dev->class == ATA_DEV_ATAPI && 369 - dmi_check_system(no_atapi_dma_dmi_table)) { 371 + (dmi_check_system(no_atapi_dma_dmi_table) || 372 + config->id == PCI_DEVICE_ID_VIA_6415)) { 370 373 ata_dev_warn(dev, "controller locks up on ATAPI DMA, forcing PIO\n"); 371 374 mask &= ATA_MASK_PIO; 372 375 }
+3 -1
drivers/atm/atmtcp.c
··· 288 288 struct sk_buff *new_skb; 289 289 int result = 0; 290 290 291 - if (!skb->len) return 0; 291 + if (skb->len < sizeof(struct atmtcp_hdr)) 292 + goto done; 293 + 292 294 dev = vcc->dev_data; 293 295 hdr = (struct atmtcp_hdr *) skb->data; 294 296 if (hdr->length == ATMTCP_HDR_MAGIC) {
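The atmtcp fix above tightens the old `!skb->len` test into a full-header length check before the buffer is cast to `struct atmtcp_hdr`. A generic sketch of that bounds check, using a hypothetical header layout rather than the real one:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical fixed-size wire header; the real struct differs. */
struct toy_hdr {
	uint32_t vpi;
	uint32_t vci;
	uint32_t length;
};

/* Never reinterpret a buffer as a header unless the whole header
 * actually arrived; a non-zero but short buffer is still invalid. */
static bool toy_hdr_complete(size_t buf_len)
{
	return buf_len >= sizeof(struct toy_hdr);
}
```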
+2 -1
drivers/base/faux.c
··· 86 86 .name = "faux_driver", 87 87 .bus = &faux_bus_type, 88 88 .probe_type = PROBE_FORCE_SYNCHRONOUS, 89 + .suppress_bind_attrs = true, 89 90 }; 90 91 91 92 static void faux_device_release(struct device *dev) ··· 170 169 * successful is almost impossible to determine by the caller. 171 170 */ 172 171 if (!dev->driver) { 173 - dev_err(dev, "probe did not succeed, tearing down the device\n"); 172 + dev_dbg(dev, "probe did not succeed, tearing down the device\n"); 174 173 faux_device_destroy(faux_dev); 175 174 faux_dev = NULL; 176 175 }
+5 -6
drivers/block/loop.c
··· 1248 1248 lo->lo_flags &= ~LOOP_SET_STATUS_CLEARABLE_FLAGS; 1249 1249 lo->lo_flags |= (info->lo_flags & LOOP_SET_STATUS_SETTABLE_FLAGS); 1250 1250 1251 - if (size_changed) { 1252 - loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit, 1253 - lo->lo_backing_file); 1254 - loop_set_size(lo, new_size); 1255 - } 1256 - 1257 1251 /* update the direct I/O flag if lo_offset changed */ 1258 1252 loop_update_dio(lo); 1259 1253 ··· 1255 1261 blk_mq_unfreeze_queue(lo->lo_queue, memflags); 1256 1262 if (partscan) 1257 1263 clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state); 1264 + if (!err && size_changed) { 1265 + loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit, 1266 + lo->lo_backing_file); 1267 + loop_set_size(lo, new_size); 1268 + } 1258 1269 out_unlock: 1259 1270 mutex_unlock(&lo->lo_mutex); 1260 1271 if (partscan)
+2 -2
drivers/cpufreq/rcpufreq_dt.rs
··· 26 26 } 27 27 28 28 /// Finds supply name for the CPU from DT. 29 - fn find_supply_names(dev: &Device, cpu: u32) -> Option<KVec<CString>> { 29 + fn find_supply_names(dev: &Device, cpu: cpu::CpuId) -> Option<KVec<CString>> { 30 30 // Try "cpu0" for older DTs, fallback to "cpu". 31 - let name = (cpu == 0) 31 + let name = (cpu.as_u32() == 0) 32 32 .then(|| find_supply_name_exact(dev, "cpu0")) 33 33 .flatten() 34 34 .or_else(|| find_supply_name_exact(dev, "cpu"))?;
+1 -1
drivers/dma-buf/dma-buf.c
··· 1118 1118 * Catch exporters making buffers inaccessible even when 1119 1119 * attachments preventing that exist. 1120 1120 */ 1121 - WARN_ON_ONCE(ret == EBUSY); 1121 + WARN_ON_ONCE(ret == -EBUSY); 1122 1122 if (ret) 1123 1123 return ERR_PTR(ret); 1124 1124 }
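The one-character dma-buf fix above corrects a sign slip: kernel-style calls report failure as a *negative* errno, so the warning must compare against `-EBUSY`; a bare `EBUSY` can never match a return value. An illustration of the convention (function name is mine):

```c
#include <errno.h>
#include <stdbool.h>

/* Kernel-style APIs return 0 on success and -errno on failure, so a
 * busy condition shows up as -EBUSY, never as the positive constant. */
static bool call_was_busy(int ret)
{
	return ret == -EBUSY;
}
```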
+2 -3
drivers/dma-buf/udmabuf.c
··· 264 264 ubuf->sg = NULL; 265 265 } 266 266 } else { 267 - dma_sync_sg_for_cpu(dev, ubuf->sg->sgl, ubuf->sg->nents, 268 - direction); 267 + dma_sync_sgtable_for_cpu(dev, ubuf->sg, direction); 269 268 } 270 269 271 270 return ret; ··· 279 280 if (!ubuf->sg) 280 281 return -EINVAL; 281 282 282 - dma_sync_sg_for_device(dev, ubuf->sg->sgl, ubuf->sg->nents, direction); 283 + dma_sync_sgtable_for_device(dev, ubuf->sg, direction); 283 284 return 0; 284 285 } 285 286
+1 -1
drivers/gpu/drm/meson/meson_encoder_hdmi.c
··· 109 109 venc_freq /= 2; 110 110 111 111 dev_dbg(priv->dev, 112 - "vclk:%lluHz phy=%lluHz venc=%lluHz hdmi=%lluHz enci=%d\n", 112 + "phy:%lluHz vclk=%lluHz venc=%lluHz hdmi=%lluHz enci=%d\n", 113 113 phy_freq, vclk_freq, venc_freq, hdmi_freq, 114 114 priv->venc.hdmi_use_enci); 115 115
+34 -21
drivers/gpu/drm/meson/meson_vclk.c
··· 110 110 #define HDMI_PLL_LOCK BIT(31) 111 111 #define HDMI_PLL_LOCK_G12A (3 << 30) 112 112 113 - #define PIXEL_FREQ_1000_1001(_freq) \ 114 - DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL) 115 - #define PHY_FREQ_1000_1001(_freq) \ 116 - (PIXEL_FREQ_1000_1001(DIV_ROUND_DOWN_ULL(_freq, 10ULL)) * 10) 113 + #define FREQ_1000_1001(_freq) DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL) 117 114 118 115 /* VID PLL Dividers */ 119 116 enum { ··· 769 772 pll_freq); 770 773 } 771 774 775 + static bool meson_vclk_freqs_are_matching_param(unsigned int idx, 776 + unsigned long long phy_freq, 777 + unsigned long long vclk_freq) 778 + { 779 + DRM_DEBUG_DRIVER("i = %d vclk_freq = %lluHz alt = %lluHz\n", 780 + idx, params[idx].vclk_freq, 781 + FREQ_1000_1001(params[idx].vclk_freq)); 782 + DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n", 783 + idx, params[idx].phy_freq, 784 + FREQ_1000_1001(params[idx].phy_freq)); 785 + 786 + /* Match strict frequency */ 787 + if (phy_freq == params[idx].phy_freq && 788 + vclk_freq == params[idx].vclk_freq) 789 + return true; 790 + 791 + /* Match 1000/1001 variant: vclk deviation has to be less than 1kHz 792 + * (drm EDID is defined in 1kHz steps, so everything smaller must be 793 + * rounding error) and the PHY freq deviation has to be less than 794 + * 10kHz (as the TMDS clock is 10 times the pixel clock, so anything 795 + * smaller must be rounding error as well). 
796 + */ 797 + if (abs(vclk_freq - FREQ_1000_1001(params[idx].vclk_freq)) < 1000 && 798 + abs(phy_freq - FREQ_1000_1001(params[idx].phy_freq)) < 10000) 799 + return true; 800 + 801 + /* no match */ 802 + return false; 803 + } 804 + 772 805 enum drm_mode_status 773 806 meson_vclk_vic_supported_freq(struct meson_drm *priv, 774 807 unsigned long long phy_freq, ··· 817 790 } 818 791 819 792 for (i = 0 ; params[i].pixel_freq ; ++i) { 820 - DRM_DEBUG_DRIVER("i = %d pixel_freq = %lluHz alt = %lluHz\n", 821 - i, params[i].pixel_freq, 822 - PIXEL_FREQ_1000_1001(params[i].pixel_freq)); 823 - DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n", 824 - i, params[i].phy_freq, 825 - PHY_FREQ_1000_1001(params[i].phy_freq)); 826 - /* Match strict frequency */ 827 - if (phy_freq == params[i].phy_freq && 828 - vclk_freq == params[i].vclk_freq) 829 - return MODE_OK; 830 - /* Match 1000/1001 variant */ 831 - if (phy_freq == PHY_FREQ_1000_1001(params[i].phy_freq) && 832 - vclk_freq == PIXEL_FREQ_1000_1001(params[i].vclk_freq)) 793 + if (meson_vclk_freqs_are_matching_param(i, phy_freq, vclk_freq)) 833 794 return MODE_OK; 834 795 } 835 796 ··· 1090 1075 } 1091 1076 1092 1077 for (freq = 0 ; params[freq].pixel_freq ; ++freq) { 1093 - if ((phy_freq == params[freq].phy_freq || 1094 - phy_freq == PHY_FREQ_1000_1001(params[freq].phy_freq)) && 1095 - (vclk_freq == params[freq].vclk_freq || 1096 - vclk_freq == PIXEL_FREQ_1000_1001(params[freq].vclk_freq))) { 1078 + if (meson_vclk_freqs_are_matching_param(freq, phy_freq, 1079 + vclk_freq)) { 1097 1080 if (vclk_freq != params[freq].vclk_freq) 1098 1081 vic_alternate_clock = true; 1099 1082 else
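The new `meson_vclk_freqs_are_matching_param()` above accepts the 1000/1001 "NTSC" rate variant within explicit tolerances: under 1 kHz on vclk (EDID granularity) and under 10 kHz on the 10x PHY clock. A standalone sketch of that rule, assuming `FREQ_1000_1001` rounds to closest like the kernel's `DIV_ROUND_CLOSEST_ULL` (function name is mine):

```c
#include <stdint.h>
#include <stdlib.h>

/* Round-to-closest 1000/1001 drift of a nominal frequency in Hz. */
#define FREQ_1000_1001(f) (((f) * 1000ULL + 500ULL) / 1001ULL)

/* Match either the exact nominal frequencies or the 1000/1001 variant
 * within 1 kHz (vclk) and 10 kHz (10x PHY clock) of rounding slack. */
static int freqs_are_matching(uint64_t phy, uint64_t vclk,
			      uint64_t nom_phy, uint64_t nom_vclk)
{
	if (phy == nom_phy && vclk == nom_vclk)
		return 1;
	return llabs((int64_t)(vclk - FREQ_1000_1001(nom_vclk))) < 1000 &&
	       llabs((int64_t)(phy - FREQ_1000_1001(nom_phy))) < 10000;
}
```

For the 148.5 MHz pixel clock, the drifted pair (148351648 Hz, 1483516484 Hz) matches, while an unrelated frequency does not.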
+1
drivers/gpu/drm/sitronix/Kconfig
··· 5 5 select DRM_GEM_SHMEM_HELPER 6 6 select DRM_KMS_HELPER 7 7 select REGMAP_I2C 8 + select VIDEOMODE_HELPERS 8 9 help 9 10 DRM driver for Sitronix ST7571 panels controlled over I2C. 10 11
+6 -6
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 560 560 if (ret) 561 561 return ret; 562 562 563 - ret = drm_connector_hdmi_audio_init(connector, dev->dev, 564 - &vc4_hdmi_audio_funcs, 565 - 8, false, -1); 566 - if (ret) 567 - return ret; 568 - 569 563 drm_connector_helper_add(connector, &vc4_hdmi_connector_helper_funcs); 570 564 571 565 /* ··· 2284 2290 dev_err(dev, "Could not register CPU DAI: %d\n", ret); 2285 2291 return ret; 2286 2292 } 2293 + 2294 + ret = drm_connector_hdmi_audio_init(&vc4_hdmi->connector, dev, 2295 + &vc4_hdmi_audio_funcs, 8, false, 2296 + -1); 2297 + if (ret) 2298 + return ret; 2287 2299 2288 2300 dai_link->cpus = &vc4_hdmi->audio.cpu; 2289 2301 dai_link->codecs = &vc4_hdmi->audio.codec;
+20 -4
drivers/gpu/drm/xe/xe_lrc.c
··· 941 941 * store it in the PPHSWP. 942 942 */ 943 943 #define CONTEXT_ACTIVE 1ULL 944 - static void xe_lrc_setup_utilization(struct xe_lrc *lrc) 944 + static int xe_lrc_setup_utilization(struct xe_lrc *lrc) 945 945 { 946 - u32 *cmd; 946 + u32 *cmd, *buf = NULL; 947 947 948 - cmd = lrc->bb_per_ctx_bo->vmap.vaddr; 948 + if (lrc->bb_per_ctx_bo->vmap.is_iomem) { 949 + buf = kmalloc(lrc->bb_per_ctx_bo->size, GFP_KERNEL); 950 + if (!buf) 951 + return -ENOMEM; 952 + cmd = buf; 953 + } else { 954 + cmd = lrc->bb_per_ctx_bo->vmap.vaddr; 955 + } 949 956 950 957 *cmd++ = MI_STORE_REGISTER_MEM | MI_SRM_USE_GGTT | MI_SRM_ADD_CS_OFFSET; 951 958 *cmd++ = ENGINE_ID(0).addr; ··· 973 966 974 967 *cmd++ = MI_BATCH_BUFFER_END; 975 968 969 + if (buf) { 970 + xe_map_memcpy_to(gt_to_xe(lrc->gt), &lrc->bb_per_ctx_bo->vmap, 0, 971 + buf, (cmd - buf) * sizeof(*cmd)); 972 + kfree(buf); 973 + } 974 + 976 975 xe_lrc_write_ctx_reg(lrc, CTX_BB_PER_CTX_PTR, 977 976 xe_bo_ggtt_addr(lrc->bb_per_ctx_bo) | 1); 978 977 978 + return 0; 979 979 } 980 980 981 981 #define PVC_CTX_ASID (0x2e + 1) ··· 1139 1125 map = __xe_lrc_start_seqno_map(lrc); 1140 1126 xe_map_write32(lrc_to_xe(lrc), &map, lrc->fence_ctx.next_seqno - 1); 1141 1127 1142 - xe_lrc_setup_utilization(lrc); 1128 + err = xe_lrc_setup_utilization(lrc); 1129 + if (err) 1130 + goto err_lrc_finish; 1143 1131 1144 1132 return 0; 1145 1133
+1 -1
drivers/gpu/drm/xe/xe_svm.c
··· 764 764 return false; 765 765 } 766 766 767 - if (range_size <= SZ_64K && !supports_4K_migration(vm->xe)) { 767 + if (range_size < SZ_64K && !supports_4K_migration(vm->xe)) { 768 768 drm_dbg(&vm->xe->drm, "Platform doesn't support SZ_4K range migration\n"); 769 769 return false; 770 770 }
+6 -3
drivers/hwmon/ftsteutates.c
··· 423 423 break; 424 424 case hwmon_pwm: 425 425 switch (attr) { 426 - case hwmon_pwm_auto_channels_temp: 427 - if (data->fan_source[channel] == FTS_FAN_SOURCE_INVALID) 426 + case hwmon_pwm_auto_channels_temp: { 427 + u8 fan_source = data->fan_source[channel]; 428 + 429 + if (fan_source == FTS_FAN_SOURCE_INVALID || fan_source >= BITS_PER_LONG) 428 430 *val = 0; 429 431 else 430 - *val = BIT(data->fan_source[channel]); 432 + *val = BIT(fan_source); 431 433 432 434 return 0; 435 + } 433 436 default: 434 437 break; 435 438 }
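The ftsteutates hunk above adds a range guard before `BIT()`: a shift count at or beyond the word width is undefined behaviour in C, so an out-of-range fan source must map to an empty mask instead of being fed to the shift. A sketch of the guarded lookup (constants and function name are mine):

```c
#include <limits.h>

#define BITS_PER_LONG		((unsigned int)(sizeof(long) * CHAR_BIT))
#define BIT(n)			(1UL << (n))
#define FAN_SOURCE_INVALID	0xff

/* Return a one-bit mask for a valid fan source, or 0 for the invalid
 * sentinel and for any value that would make the shift undefined. */
static unsigned long fan_source_mask(unsigned int fan_source)
{
	if (fan_source == FAN_SOURCE_INVALID || fan_source >= BITS_PER_LONG)
		return 0;
	return BIT(fan_source);
}
```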
-7
drivers/hwmon/ltc4282.c
··· 1512 1512 } 1513 1513 1514 1514 if (device_property_read_bool(dev, "adi,fault-log-enable")) { 1515 - ret = regmap_set_bits(st->map, LTC4282_ADC_CTRL, 1516 - LTC4282_FAULT_LOG_EN_MASK); 1517 - if (ret) 1518 - return ret; 1519 - } 1520 - 1521 - if (device_property_read_bool(dev, "adi,fault-log-enable")) { 1522 1515 ret = regmap_set_bits(st->map, LTC4282_ADC_CTRL, LTC4282_FAULT_LOG_EN_MASK); 1523 1516 if (ret) 1524 1517 return ret;
+97 -141
drivers/hwmon/occ/common.c
··· 459 459 return sysfs_emit(buf, "%llu\n", val); 460 460 } 461 461 462 - static u64 occ_get_powr_avg(u64 *accum, u32 *samples) 462 + static u64 occ_get_powr_avg(u64 accum, u32 samples) 463 463 { 464 - u64 divisor = get_unaligned_be32(samples); 465 - 466 - return (divisor == 0) ? 0 : 467 - div64_u64(get_unaligned_be64(accum) * 1000000ULL, divisor); 464 + return (samples == 0) ? 0 : 465 + mul_u64_u32_div(accum, 1000000UL, samples); 468 466 } 469 467 470 468 static ssize_t occ_show_power_2(struct device *dev, ··· 487 489 get_unaligned_be32(&power->sensor_id), 488 490 power->function_id, power->apss_channel); 489 491 case 1: 490 - val = occ_get_powr_avg(&power->accumulator, 491 - &power->update_tag); 492 + val = occ_get_powr_avg(get_unaligned_be64(&power->accumulator), 493 + get_unaligned_be32(&power->update_tag)); 492 494 break; 493 495 case 2: 494 496 val = (u64)get_unaligned_be32(&power->update_tag) * ··· 525 527 return sysfs_emit(buf, "%u_system\n", 526 528 get_unaligned_be32(&power->sensor_id)); 527 529 case 1: 528 - val = occ_get_powr_avg(&power->system.accumulator, 529 - &power->system.update_tag); 530 + val = occ_get_powr_avg(get_unaligned_be64(&power->system.accumulator), 531 + get_unaligned_be32(&power->system.update_tag)); 530 532 break; 531 533 case 2: 532 534 val = (u64)get_unaligned_be32(&power->system.update_tag) * ··· 539 541 return sysfs_emit(buf, "%u_proc\n", 540 542 get_unaligned_be32(&power->sensor_id)); 541 543 case 5: 542 - val = occ_get_powr_avg(&power->proc.accumulator, 543 - &power->proc.update_tag); 544 + val = occ_get_powr_avg(get_unaligned_be64(&power->proc.accumulator), 545 + get_unaligned_be32(&power->proc.update_tag)); 544 546 break; 545 547 case 6: 546 548 val = (u64)get_unaligned_be32(&power->proc.update_tag) * ··· 553 555 return sysfs_emit(buf, "%u_vdd\n", 554 556 get_unaligned_be32(&power->sensor_id)); 555 557 case 9: 556 - val = occ_get_powr_avg(&power->vdd.accumulator, 557 - &power->vdd.update_tag); 558 + val = 
occ_get_powr_avg(get_unaligned_be64(&power->vdd.accumulator), 559 + get_unaligned_be32(&power->vdd.update_tag)); 558 560 break; 559 561 case 10: 560 562 val = (u64)get_unaligned_be32(&power->vdd.update_tag) * ··· 567 569 return sysfs_emit(buf, "%u_vdn\n", 568 570 get_unaligned_be32(&power->sensor_id)); 569 571 case 13: 570 - val = occ_get_powr_avg(&power->vdn.accumulator, 571 - &power->vdn.update_tag); 572 + val = occ_get_powr_avg(get_unaligned_be64(&power->vdn.accumulator), 573 + get_unaligned_be32(&power->vdn.update_tag)); 572 574 break; 573 575 case 14: 574 576 val = (u64)get_unaligned_be32(&power->vdn.update_tag) * ··· 745 747 } 746 748 747 749 /* 748 - * Some helper macros to make it easier to define an occ_attribute. Since these 749 - * are dynamically allocated, we shouldn't use the existing kernel macros which 750 + * A helper to make it easier to define an occ_attribute. Since these 751 + * are dynamically allocated, we cannot use the existing kernel macros which 750 752 * stringify the name argument. 751 753 */ 752 - #define ATTR_OCC(_name, _mode, _show, _store) { \ 753 - .attr = { \ 754 - .name = _name, \ 755 - .mode = VERIFY_OCTAL_PERMISSIONS(_mode), \ 756 - }, \ 757 - .show = _show, \ 758 - .store = _store, \ 759 - } 754 + static void occ_init_attribute(struct occ_attribute *attr, int mode, 755 + ssize_t (*show)(struct device *dev, struct device_attribute *attr, char *buf), 756 + ssize_t (*store)(struct device *dev, struct device_attribute *attr, 757 + const char *buf, size_t count), 758 + int nr, int index, const char *fmt, ...) 
759 + { 760 + va_list args; 760 761 761 - #define SENSOR_ATTR_OCC(_name, _mode, _show, _store, _nr, _index) { \ 762 - .dev_attr = ATTR_OCC(_name, _mode, _show, _store), \ 763 - .index = _index, \ 764 - .nr = _nr, \ 765 - } 762 + va_start(args, fmt); 763 + vsnprintf(attr->name, sizeof(attr->name), fmt, args); 764 + va_end(args); 766 765 767 - #define OCC_INIT_ATTR(_name, _mode, _show, _store, _nr, _index) \ 768 - ((struct sensor_device_attribute_2) \ 769 - SENSOR_ATTR_OCC(_name, _mode, _show, _store, _nr, _index)) 766 + attr->sensor.dev_attr.attr.name = attr->name; 767 + attr->sensor.dev_attr.attr.mode = mode; 768 + attr->sensor.dev_attr.show = show; 769 + attr->sensor.dev_attr.store = store; 770 + attr->sensor.index = index; 771 + attr->sensor.nr = nr; 772 + } 770 773 771 774 /* 772 775 * Allocate and instatiate sensor_device_attribute_2s. It's most efficient to ··· 854 855 sensors->extended.num_sensors = 0; 855 856 } 856 857 857 - occ->attrs = devm_kzalloc(dev, sizeof(*occ->attrs) * num_attrs, 858 + occ->attrs = devm_kcalloc(dev, num_attrs, sizeof(*occ->attrs), 858 859 GFP_KERNEL); 859 860 if (!occ->attrs) 860 861 return -ENOMEM; 861 862 862 863 /* null-terminated list */ 863 - occ->group.attrs = devm_kzalloc(dev, sizeof(*occ->group.attrs) * 864 - num_attrs + 1, GFP_KERNEL); 864 + occ->group.attrs = devm_kcalloc(dev, num_attrs + 1, 865 + sizeof(*occ->group.attrs), 866 + GFP_KERNEL); 865 867 if (!occ->group.attrs) 866 868 return -ENOMEM; 867 869 ··· 872 872 s = i + 1; 873 873 temp = ((struct temp_sensor_2 *)sensors->temp.data) + i; 874 874 875 - snprintf(attr->name, sizeof(attr->name), "temp%d_label", s); 876 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_temp, NULL, 877 - 0, i); 875 + occ_init_attribute(attr, 0444, show_temp, NULL, 876 + 0, i, "temp%d_label", s); 878 877 attr++; 879 878 880 879 if (sensors->temp.version == 2 && 881 880 temp->fru_type == OCC_FRU_TYPE_VRM) { 882 - snprintf(attr->name, sizeof(attr->name), 883 - "temp%d_alarm", s); 881 + 
occ_init_attribute(attr, 0444, show_temp, NULL, 882 + 1, i, "temp%d_alarm", s); 884 883 } else { 885 - snprintf(attr->name, sizeof(attr->name), 886 - "temp%d_input", s); 884 + occ_init_attribute(attr, 0444, show_temp, NULL, 885 + 1, i, "temp%d_input", s); 887 886 } 888 887 889 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_temp, NULL, 890 - 1, i); 891 888 attr++; 892 889 893 890 if (sensors->temp.version > 1) { 894 - snprintf(attr->name, sizeof(attr->name), 895 - "temp%d_fru_type", s); 896 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 897 - show_temp, NULL, 2, i); 891 + occ_init_attribute(attr, 0444, show_temp, NULL, 892 + 2, i, "temp%d_fru_type", s); 898 893 attr++; 899 894 900 - snprintf(attr->name, sizeof(attr->name), 901 - "temp%d_fault", s); 902 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 903 - show_temp, NULL, 3, i); 895 + occ_init_attribute(attr, 0444, show_temp, NULL, 896 + 3, i, "temp%d_fault", s); 904 897 attr++; 905 898 906 899 if (sensors->temp.version == 0x10) { 907 - snprintf(attr->name, sizeof(attr->name), 908 - "temp%d_max", s); 909 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 910 - show_temp, NULL, 911 - 4, i); 900 + occ_init_attribute(attr, 0444, show_temp, NULL, 901 + 4, i, "temp%d_max", s); 912 902 attr++; 913 903 } 914 904 } ··· 907 917 for (i = 0; i < sensors->freq.num_sensors; ++i) { 908 918 s = i + 1; 909 919 910 - snprintf(attr->name, sizeof(attr->name), "freq%d_label", s); 911 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_freq, NULL, 912 - 0, i); 920 + occ_init_attribute(attr, 0444, show_freq, NULL, 921 + 0, i, "freq%d_label", s); 913 922 attr++; 914 923 915 - snprintf(attr->name, sizeof(attr->name), "freq%d_input", s); 916 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_freq, NULL, 917 - 1, i); 924 + occ_init_attribute(attr, 0444, show_freq, NULL, 925 + 1, i, "freq%d_input", s); 918 926 attr++; 919 927 } 920 928 ··· 928 940 s = (i * 4) + 1; 929 941 930 942 for (j = 0; j < 4; ++j) { 931 - 
snprintf(attr->name, sizeof(attr->name), 932 - "power%d_label", s); 933 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 934 - show_power, NULL, 935 - nr++, i); 943 + occ_init_attribute(attr, 0444, show_power, 944 + NULL, nr++, i, 945 + "power%d_label", s); 936 946 attr++; 937 947 938 - snprintf(attr->name, sizeof(attr->name), 939 - "power%d_average", s); 940 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 941 - show_power, NULL, 942 - nr++, i); 948 + occ_init_attribute(attr, 0444, show_power, 949 + NULL, nr++, i, 950 + "power%d_average", s); 943 951 attr++; 944 952 945 - snprintf(attr->name, sizeof(attr->name), 946 - "power%d_average_interval", s); 947 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 948 - show_power, NULL, 949 - nr++, i); 953 + occ_init_attribute(attr, 0444, show_power, 954 + NULL, nr++, i, 955 + "power%d_average_interval", s); 950 956 attr++; 951 957 952 - snprintf(attr->name, sizeof(attr->name), 953 - "power%d_input", s); 954 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 955 - show_power, NULL, 956 - nr++, i); 958 + occ_init_attribute(attr, 0444, show_power, 959 + NULL, nr++, i, 960 + "power%d_input", s); 957 961 attr++; 958 962 959 963 s++; ··· 957 977 for (i = 0; i < sensors->power.num_sensors; ++i) { 958 978 s = i + 1; 959 979 960 - snprintf(attr->name, sizeof(attr->name), 961 - "power%d_label", s); 962 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 963 - show_power, NULL, 0, i); 980 + occ_init_attribute(attr, 0444, show_power, NULL, 981 + 0, i, "power%d_label", s); 964 982 attr++; 965 983 966 - snprintf(attr->name, sizeof(attr->name), 967 - "power%d_average", s); 968 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 969 - show_power, NULL, 1, i); 984 + occ_init_attribute(attr, 0444, show_power, NULL, 985 + 1, i, "power%d_average", s); 970 986 attr++; 971 987 972 - snprintf(attr->name, sizeof(attr->name), 973 - "power%d_average_interval", s); 974 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 975 - show_power, NULL, 2, i); 988 + 
occ_init_attribute(attr, 0444, show_power, NULL, 989 + 2, i, "power%d_average_interval", s); 976 990 attr++; 977 991 978 - snprintf(attr->name, sizeof(attr->name), 979 - "power%d_input", s); 980 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 981 - show_power, NULL, 3, i); 992 + occ_init_attribute(attr, 0444, show_power, NULL, 993 + 3, i, "power%d_input", s); 982 994 attr++; 983 995 } 984 996 ··· 978 1006 } 979 1007 980 1008 if (sensors->caps.num_sensors >= 1) { 981 - snprintf(attr->name, sizeof(attr->name), "power%d_label", s); 982 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, 983 - 0, 0); 1009 + occ_init_attribute(attr, 0444, show_caps, NULL, 1010 + 0, 0, "power%d_label", s); 984 1011 attr++; 985 1012 986 - snprintf(attr->name, sizeof(attr->name), "power%d_cap", s); 987 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, 988 - 1, 0); 1013 + occ_init_attribute(attr, 0444, show_caps, NULL, 1014 + 1, 0, "power%d_cap", s); 989 1015 attr++; 990 1016 991 - snprintf(attr->name, sizeof(attr->name), "power%d_input", s); 992 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, 993 - 2, 0); 1017 + occ_init_attribute(attr, 0444, show_caps, NULL, 1018 + 2, 0, "power%d_input", s); 994 1019 attr++; 995 1020 996 - snprintf(attr->name, sizeof(attr->name), 997 - "power%d_cap_not_redundant", s); 998 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, 999 - 3, 0); 1021 + occ_init_attribute(attr, 0444, show_caps, NULL, 1022 + 3, 0, "power%d_cap_not_redundant", s); 1000 1023 attr++; 1001 1024 1002 - snprintf(attr->name, sizeof(attr->name), "power%d_cap_max", s); 1003 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, 1004 - 4, 0); 1025 + occ_init_attribute(attr, 0444, show_caps, NULL, 1026 + 4, 0, "power%d_cap_max", s); 1005 1027 attr++; 1006 1028 1007 - snprintf(attr->name, sizeof(attr->name), "power%d_cap_min", s); 1008 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, 1009 - 5, 0); 1029 + 
occ_init_attribute(attr, 0444, show_caps, NULL, 1030 + 5, 0, "power%d_cap_min", s); 1010 1031 attr++; 1011 1032 1012 - snprintf(attr->name, sizeof(attr->name), "power%d_cap_user", 1013 - s); 1014 - attr->sensor = OCC_INIT_ATTR(attr->name, 0644, show_caps, 1015 - occ_store_caps_user, 6, 0); 1033 + occ_init_attribute(attr, 0644, show_caps, occ_store_caps_user, 1034 + 6, 0, "power%d_cap_user", s); 1016 1035 attr++; 1017 1036 1018 1037 if (sensors->caps.version > 1) { 1019 - snprintf(attr->name, sizeof(attr->name), 1020 - "power%d_cap_user_source", s); 1021 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 1022 - show_caps, NULL, 7, 0); 1038 + occ_init_attribute(attr, 0444, show_caps, NULL, 1039 + 7, 0, "power%d_cap_user_source", s); 1023 1040 attr++; 1024 1041 1025 1042 if (sensors->caps.version > 2) { 1026 - snprintf(attr->name, sizeof(attr->name), 1027 - "power%d_cap_min_soft", s); 1028 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 1029 - show_caps, NULL, 1030 - 8, 0); 1043 + occ_init_attribute(attr, 0444, show_caps, NULL, 1044 + 8, 0, 1045 + "power%d_cap_min_soft", s); 1031 1046 attr++; 1032 1047 } 1033 1048 } ··· 1023 1064 for (i = 0; i < sensors->extended.num_sensors; ++i) { 1024 1065 s = i + 1; 1025 1066 1026 - snprintf(attr->name, sizeof(attr->name), "extn%d_label", s); 1027 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 1028 - occ_show_extended, NULL, 0, i); 1067 + occ_init_attribute(attr, 0444, occ_show_extended, NULL, 1068 + 0, i, "extn%d_label", s); 1029 1069 attr++; 1030 1070 1031 - snprintf(attr->name, sizeof(attr->name), "extn%d_flags", s); 1032 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 1033 - occ_show_extended, NULL, 1, i); 1071 + occ_init_attribute(attr, 0444, occ_show_extended, NULL, 1072 + 1, i, "extn%d_flags", s); 1034 1073 attr++; 1035 1074 1036 - snprintf(attr->name, sizeof(attr->name), "extn%d_input", s); 1037 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 1038 - occ_show_extended, NULL, 2, i); 1075 + occ_init_attribute(attr, 0444, 
occ_show_extended, NULL, 1076 + 2, i, "extn%d_input", s); 1039 1077 attr++; 1040 1078 } 1041 1079
+7 -5
drivers/idle/intel_idle.c
··· 152 152 int index, bool irqoff) 153 153 { 154 154 struct cpuidle_state *state = &drv->states[index]; 155 - unsigned long eax = flg2MWAIT(state->flags); 156 - unsigned long ecx = 1*irqoff; /* break on interrupt flag */ 155 + unsigned int eax = flg2MWAIT(state->flags); 156 + unsigned int ecx = 1*irqoff; /* break on interrupt flag */ 157 157 158 158 mwait_idle_with_hints(eax, ecx); 159 159 ··· 226 226 static __cpuidle int intel_idle_s2idle(struct cpuidle_device *dev, 227 227 struct cpuidle_driver *drv, int index) 228 228 { 229 - unsigned long ecx = 1; /* break on interrupt flag */ 230 229 struct cpuidle_state *state = &drv->states[index]; 231 - unsigned long eax = flg2MWAIT(state->flags); 230 + unsigned int eax = flg2MWAIT(state->flags); 231 + unsigned int ecx = 1; /* break on interrupt flag */ 232 232 233 233 if (state->flags & CPUIDLE_FLAG_INIT_XSTATE) 234 234 fpu_idle_fpregs(); ··· 2507 2507 pr_debug("Local APIC timer is reliable in %s\n", 2508 2508 boot_cpu_has(X86_FEATURE_ARAT) ? "all C-states" : "C1"); 2509 2509 2510 + arch_cpu_rescan_dead_smt_siblings(); 2511 + 2510 2512 return 0; 2511 2513 2512 2514 hp_setup_fail: ··· 2520 2518 return retval; 2521 2519 2522 2520 } 2523 - device_initcall(intel_idle_init); 2521 + subsys_initcall_sync(intel_idle_init); 2524 2522 2525 2523 /* 2526 2524 * We are not really modular, but we used to support that. Meaning we also
+2 -2
drivers/iommu/tegra-smmu.c
··· 559 559 { 560 560 unsigned int pd_index = iova_pd_index(iova); 561 561 struct tegra_smmu *smmu = as->smmu; 562 - struct tegra_pd *pd = as->pd; 562 + u32 *pd = &as->pd->val[pd_index]; 563 563 unsigned long offset = pd_index * sizeof(*pd); 564 564 565 565 /* Set the page directory entry first */ 566 - pd->val[pd_index] = value; 566 + *pd = value; 567 567 568 568 /* Then flush the page directory entry from caches */ 569 569 dma_sync_single_range_for_device(smmu->dev, as->pd_dma, offset,
+5 -4
drivers/net/can/m_can/tcan4x5x-core.c
··· 411 411 priv = cdev_to_priv(mcan_class); 412 412 413 413 priv->power = devm_regulator_get_optional(&spi->dev, "vsup"); 414 - if (PTR_ERR(priv->power) == -EPROBE_DEFER) { 415 - ret = -EPROBE_DEFER; 416 - goto out_m_can_class_free_dev; 417 - } else { 414 + if (IS_ERR(priv->power)) { 415 + if (PTR_ERR(priv->power) == -EPROBE_DEFER) { 416 + ret = -EPROBE_DEFER; 417 + goto out_m_can_class_free_dev; 418 + } 418 419 priv->power = NULL; 419 420 } 420 421
+16 -11
drivers/net/ethernet/airoha/airoha_eth.c
··· 1065 1065 1066 1066 static int airoha_qdma_init_hfwd_queues(struct airoha_qdma *qdma) 1067 1067 { 1068 + int size, index, num_desc = HW_DSCP_NUM; 1068 1069 struct airoha_eth *eth = qdma->eth; 1069 1070 int id = qdma - &eth->qdma[0]; 1071 + u32 status, buf_size; 1070 1072 dma_addr_t dma_addr; 1071 1073 const char *name; 1072 - int size, index; 1073 - u32 status; 1074 - 1075 - size = HW_DSCP_NUM * sizeof(struct airoha_qdma_fwd_desc); 1076 - if (!dmam_alloc_coherent(eth->dev, size, &dma_addr, GFP_KERNEL)) 1077 - return -ENOMEM; 1078 - 1079 - airoha_qdma_wr(qdma, REG_FWD_DSCP_BASE, dma_addr); 1080 1074 1081 1075 name = devm_kasprintf(eth->dev, GFP_KERNEL, "qdma%d-buf", id); 1082 1076 if (!name) 1083 1077 return -ENOMEM; 1084 1078 1079 + buf_size = id ? AIROHA_MAX_PACKET_SIZE / 2 : AIROHA_MAX_PACKET_SIZE; 1085 1080 index = of_property_match_string(eth->dev->of_node, 1086 1081 "memory-region-names", name); 1087 1082 if (index >= 0) { ··· 1094 1099 rmem = of_reserved_mem_lookup(np); 1095 1100 of_node_put(np); 1096 1101 dma_addr = rmem->base; 1102 + /* Compute the number of hw descriptors according to the 1103 + * reserved memory size and the payload buffer size 1104 + */ 1105 + num_desc = div_u64(rmem->size, buf_size); 1097 1106 } else { 1098 - size = AIROHA_MAX_PACKET_SIZE * HW_DSCP_NUM; 1107 + size = buf_size * num_desc; 1099 1108 if (!dmam_alloc_coherent(eth->dev, size, &dma_addr, 1100 1109 GFP_KERNEL)) 1101 1110 return -ENOMEM; ··· 1107 1108 1108 1109 airoha_qdma_wr(qdma, REG_FWD_BUF_BASE, dma_addr); 1109 1110 1111 + size = num_desc * sizeof(struct airoha_qdma_fwd_desc); 1112 + if (!dmam_alloc_coherent(eth->dev, size, &dma_addr, GFP_KERNEL)) 1113 + return -ENOMEM; 1114 + 1115 + airoha_qdma_wr(qdma, REG_FWD_DSCP_BASE, dma_addr); 1116 + /* QDMA0: 2KB. 
QDMA1: 1KB */ 1110 1117 airoha_qdma_rmw(qdma, REG_HW_FWD_DSCP_CFG, 1111 1118 HW_FWD_DSCP_PAYLOAD_SIZE_MASK, 1112 - FIELD_PREP(HW_FWD_DSCP_PAYLOAD_SIZE_MASK, 0)); 1119 + FIELD_PREP(HW_FWD_DSCP_PAYLOAD_SIZE_MASK, !!id)); 1113 1120 airoha_qdma_rmw(qdma, REG_FWD_DSCP_LOW_THR, FWD_DSCP_LOW_THR_MASK, 1114 1121 FIELD_PREP(FWD_DSCP_LOW_THR_MASK, 128)); 1115 1122 airoha_qdma_rmw(qdma, REG_LMGR_INIT_CFG, 1116 1123 LMGR_INIT_START | LMGR_SRAM_MODE_MASK | 1117 1124 HW_FWD_DESC_NUM_MASK, 1118 - FIELD_PREP(HW_FWD_DESC_NUM_MASK, HW_DSCP_NUM) | 1125 + FIELD_PREP(HW_FWD_DESC_NUM_MASK, num_desc) | 1119 1126 LMGR_INIT_START | LMGR_SRAM_MODE_MASK); 1120 1127 1121 1128 return read_poll_timeout(airoha_qdma_rr, status,
+3 -1
drivers/net/ethernet/airoha/airoha_ppe.c
··· 819 819 int idle; 820 820 821 821 hwe = airoha_ppe_foe_get_entry(ppe, iter->hash); 822 - ib1 = READ_ONCE(hwe->ib1); 822 + if (!hwe) 823 + continue; 823 824 825 + ib1 = READ_ONCE(hwe->ib1); 824 826 state = FIELD_GET(AIROHA_FOE_IB1_BIND_STATE, ib1); 825 827 if (state != AIROHA_FOE_STATE_BIND) { 826 828 iter->hash = 0xffff;
+74 -13
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 10780 10780 bp->num_rss_ctx--; 10781 10781 } 10782 10782 10783 + static bool bnxt_vnic_has_rx_ring(struct bnxt *bp, struct bnxt_vnic_info *vnic, 10784 + int rxr_id) 10785 + { 10786 + u16 tbl_size = bnxt_get_rxfh_indir_size(bp->dev); 10787 + int i, vnic_rx; 10788 + 10789 + /* Ntuple VNIC always has all the rx rings. Any change of ring id 10790 + * must be updated because a future filter may use it. 10791 + */ 10792 + if (vnic->flags & BNXT_VNIC_NTUPLE_FLAG) 10793 + return true; 10794 + 10795 + for (i = 0; i < tbl_size; i++) { 10796 + if (vnic->flags & BNXT_VNIC_RSSCTX_FLAG) 10797 + vnic_rx = ethtool_rxfh_context_indir(vnic->rss_ctx)[i]; 10798 + else 10799 + vnic_rx = bp->rss_indir_tbl[i]; 10800 + 10801 + if (rxr_id == vnic_rx) 10802 + return true; 10803 + } 10804 + 10805 + return false; 10806 + } 10807 + 10808 + static int bnxt_set_vnic_mru_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic, 10809 + u16 mru, int rxr_id) 10810 + { 10811 + int rc; 10812 + 10813 + if (!bnxt_vnic_has_rx_ring(bp, vnic, rxr_id)) 10814 + return 0; 10815 + 10816 + if (mru) { 10817 + rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true); 10818 + if (rc) { 10819 + netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n", 10820 + vnic->vnic_id, rc); 10821 + return rc; 10822 + } 10823 + } 10824 + vnic->mru = mru; 10825 + bnxt_hwrm_vnic_update(bp, vnic, 10826 + VNIC_UPDATE_REQ_ENABLES_MRU_VALID); 10827 + 10828 + return 0; 10829 + } 10830 + 10831 + static int bnxt_set_rss_ctx_vnic_mru(struct bnxt *bp, u16 mru, int rxr_id) 10832 + { 10833 + struct ethtool_rxfh_context *ctx; 10834 + unsigned long context; 10835 + int rc; 10836 + 10837 + xa_for_each(&bp->dev->ethtool->rss_ctx, context, ctx) { 10838 + struct bnxt_rss_ctx *rss_ctx = ethtool_rxfh_context_priv(ctx); 10839 + struct bnxt_vnic_info *vnic = &rss_ctx->vnic; 10840 + 10841 + rc = bnxt_set_vnic_mru_p5(bp, vnic, mru, rxr_id); 10842 + if (rc) 10843 + return rc; 10844 + } 10845 + 10846 + return 0; 10847 + } 10848 + 10783 10849 static void 
bnxt_hwrm_realloc_rss_ctx_vnic(struct bnxt *bp) 10784 10850 { 10785 10851 bool set_tpa = !!(bp->flags & BNXT_FLAG_TPA); ··· 15973 15907 struct bnxt_vnic_info *vnic; 15974 15908 struct bnxt_napi *bnapi; 15975 15909 int i, rc; 15910 + u16 mru; 15976 15911 15977 15912 rxr = &bp->rx_ring[idx]; 15978 15913 clone = qmem; ··· 16024 15957 napi_enable_locked(&bnapi->napi); 16025 15958 bnxt_db_nq_arm(bp, &cpr->cp_db, cpr->cp_raw_cons); 16026 15959 15960 + mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN; 16027 15961 for (i = 0; i < bp->nr_vnics; i++) { 16028 15962 vnic = &bp->vnic_info[i]; 16029 15963 16030 - rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true); 16031 - if (rc) { 16032 - netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n", 16033 - vnic->vnic_id, rc); 15964 + rc = bnxt_set_vnic_mru_p5(bp, vnic, mru, idx); 15965 + if (rc) 16034 15966 return rc; 16035 - } 16036 - vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN; 16037 - bnxt_hwrm_vnic_update(bp, vnic, 16038 - VNIC_UPDATE_REQ_ENABLES_MRU_VALID); 16039 15967 } 16040 - 16041 - return 0; 15968 + return bnxt_set_rss_ctx_vnic_mru(bp, mru, idx); 16042 15969 16043 15970 err_reset: 16044 15971 netdev_err(bp->dev, "Unexpected HWRM error during queue start rc: %d\n", ··· 16054 15993 16055 15994 for (i = 0; i < bp->nr_vnics; i++) { 16056 15995 vnic = &bp->vnic_info[i]; 16057 - vnic->mru = 0; 16058 - bnxt_hwrm_vnic_update(bp, vnic, 16059 - VNIC_UPDATE_REQ_ENABLES_MRU_VALID); 15996 + 15997 + bnxt_set_vnic_mru_p5(bp, vnic, 0, idx); 16060 15998 } 15999 + bnxt_set_rss_ctx_vnic_mru(bp, 0, idx); 16061 16000 /* Make sure NAPI sees that the VNIC is disabled */ 16062 16001 synchronize_net(); 16063 16002 rxr = &bp->rx_ring[idx];
+10 -14
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
··· 231 231 return; 232 232 233 233 mutex_lock(&edev->en_dev_lock); 234 - if (!bnxt_ulp_registered(edev)) { 235 - mutex_unlock(&edev->en_dev_lock); 236 - return; 237 - } 234 + if (!bnxt_ulp_registered(edev) || 235 + (edev->flags & BNXT_EN_FLAG_ULP_STOPPED)) 236 + goto ulp_stop_exit; 238 237 239 238 edev->flags |= BNXT_EN_FLAG_ULP_STOPPED; 240 239 if (aux_priv) { ··· 249 250 adrv->suspend(adev, pm); 250 251 } 251 252 } 253 + ulp_stop_exit: 252 254 mutex_unlock(&edev->en_dev_lock); 253 255 } 254 256 ··· 258 258 struct bnxt_aux_priv *aux_priv = bp->aux_priv; 259 259 struct bnxt_en_dev *edev = bp->edev; 260 260 261 - if (!edev) 262 - return; 263 - 264 - edev->flags &= ~BNXT_EN_FLAG_ULP_STOPPED; 265 - 266 - if (err) 261 + if (!edev || err) 267 262 return; 268 263 269 264 mutex_lock(&edev->en_dev_lock); 270 - if (!bnxt_ulp_registered(edev)) { 271 - mutex_unlock(&edev->en_dev_lock); 272 - return; 273 - } 265 + if (!bnxt_ulp_registered(edev) || 266 + !(edev->flags & BNXT_EN_FLAG_ULP_STOPPED)) 267 + goto ulp_start_exit; 274 268 275 269 if (edev->ulp_tbl->msix_requested) 276 270 bnxt_fill_msix_vecs(bp, edev->msix_entries); ··· 281 287 adrv->resume(adev); 282 288 } 283 289 } 290 + ulp_start_exit: 291 + edev->flags &= ~BNXT_EN_FLAG_ULP_STOPPED; 284 292 mutex_unlock(&edev->en_dev_lock); 285 293 } 286 294
+1
drivers/net/ethernet/faraday/Kconfig
··· 31 31 depends on ARM || COMPILE_TEST 32 32 depends on !64BIT || BROKEN 33 33 select PHYLIB 34 + select FIXED_PHY 34 35 select MDIO_ASPEED if MACH_ASPEED_G6 35 36 select CRC32 36 37 help
+11 -3
drivers/net/ethernet/intel/e1000e/netdev.c
··· 3534 3534 case e1000_pch_cnp: 3535 3535 case e1000_pch_tgp: 3536 3536 case e1000_pch_adp: 3537 - case e1000_pch_mtp: 3538 - case e1000_pch_lnp: 3539 - case e1000_pch_ptp: 3540 3537 case e1000_pch_nvp: 3541 3538 if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) { 3542 3539 /* Stable 24MHz frequency */ ··· 3548 3551 shift = INCVALUE_SHIFT_38400KHZ; 3549 3552 adapter->cc.shift = shift; 3550 3553 } 3554 + break; 3555 + case e1000_pch_mtp: 3556 + case e1000_pch_lnp: 3557 + case e1000_pch_ptp: 3558 + /* System firmware can misreport this value, so set it to a 3559 + * stable 38400KHz frequency. 3560 + */ 3561 + incperiod = INCPERIOD_38400KHZ; 3562 + incvalue = INCVALUE_38400KHZ; 3563 + shift = INCVALUE_SHIFT_38400KHZ; 3564 + adapter->cc.shift = shift; 3551 3565 break; 3552 3566 case e1000_82574: 3553 3567 case e1000_82583:
+5 -3
drivers/net/ethernet/intel/e1000e/ptp.c
··· 295 295 case e1000_pch_cnp: 296 296 case e1000_pch_tgp: 297 297 case e1000_pch_adp: 298 - case e1000_pch_mtp: 299 - case e1000_pch_lnp: 300 - case e1000_pch_ptp: 301 298 case e1000_pch_nvp: 302 299 if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) 303 300 adapter->ptp_clock_info.max_adj = MAX_PPB_24MHZ; 304 301 else 305 302 adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ; 303 + break; 304 + case e1000_pch_mtp: 305 + case e1000_pch_lnp: 306 + case e1000_pch_ptp: 307 + adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ; 306 308 break; 307 309 case e1000_82574: 308 310 case e1000_82583:
+48
drivers/net/ethernet/intel/ice/ice_arfs.c
··· 378 378 } 379 379 380 380 /** 381 + * ice_arfs_cmp - Check if aRFS filter matches this flow. 382 + * @fltr_info: filter info of the saved ARFS entry. 383 + * @fk: flow dissector keys. 384 + * @n_proto: One of htons(ETH_P_IP) or htons(ETH_P_IPV6). 385 + * @ip_proto: One of IPPROTO_TCP or IPPROTO_UDP. 386 + * 387 + * Since this function assumes limited values for n_proto and ip_proto, it 388 + * is meant to be called only from ice_rx_flow_steer(). 389 + * 390 + * Return: 391 + * * true - fltr_info refers to the same flow as fk. 392 + * * false - fltr_info and fk refer to different flows. 393 + */ 394 + static bool 395 + ice_arfs_cmp(const struct ice_fdir_fltr *fltr_info, const struct flow_keys *fk, 396 + __be16 n_proto, u8 ip_proto) 397 + { 398 + /* Determine if the filter is for IPv4 or IPv6 based on flow_type, 399 + * which is one of ICE_FLTR_PTYPE_NONF_IPV{4,6}_{TCP,UDP}. 400 + */ 401 + bool is_v4 = fltr_info->flow_type == ICE_FLTR_PTYPE_NONF_IPV4_TCP || 402 + fltr_info->flow_type == ICE_FLTR_PTYPE_NONF_IPV4_UDP; 403 + 404 + /* Following checks are arranged in the quickest and most discriminative 405 + * fields first for early failure. 
406 + */ 407 + if (is_v4) 408 + return n_proto == htons(ETH_P_IP) && 409 + fltr_info->ip.v4.src_port == fk->ports.src && 410 + fltr_info->ip.v4.dst_port == fk->ports.dst && 411 + fltr_info->ip.v4.src_ip == fk->addrs.v4addrs.src && 412 + fltr_info->ip.v4.dst_ip == fk->addrs.v4addrs.dst && 413 + fltr_info->ip.v4.proto == ip_proto; 414 + 415 + return fltr_info->ip.v6.src_port == fk->ports.src && 416 + fltr_info->ip.v6.dst_port == fk->ports.dst && 417 + fltr_info->ip.v6.proto == ip_proto && 418 + !memcmp(&fltr_info->ip.v6.src_ip, &fk->addrs.v6addrs.src, 419 + sizeof(struct in6_addr)) && 420 + !memcmp(&fltr_info->ip.v6.dst_ip, &fk->addrs.v6addrs.dst, 421 + sizeof(struct in6_addr)); 422 + } 423 + 424 + /** 381 425 * ice_rx_flow_steer - steer the Rx flow to where application is being run 382 426 * @netdev: ptr to the netdev being adjusted 383 427 * @skb: buffer with required header information ··· 492 448 continue; 493 449 494 450 fltr_info = &arfs_entry->fltr_info; 451 + 452 + if (!ice_arfs_cmp(fltr_info, &fk, n_proto, ip_proto)) 453 + continue; 454 + 495 455 ret = fltr_info->fltr_id; 496 456 497 457 if (fltr_info->q_index == rxq_idx ||
+5 -1
drivers/net/ethernet/intel/ice/ice_eswitch.c
··· 508 508 */ 509 509 int ice_eswitch_attach_vf(struct ice_pf *pf, struct ice_vf *vf) 510 510 { 511 - struct ice_repr *repr = ice_repr_create_vf(vf); 512 511 struct devlink *devlink = priv_to_devlink(pf); 512 + struct ice_repr *repr; 513 513 int err; 514 514 515 + if (!ice_is_eswitch_mode_switchdev(pf)) 516 + return 0; 517 + 518 + repr = ice_repr_create_vf(vf); 515 519 if (IS_ERR(repr)) 516 520 return PTR_ERR(repr); 517 521
+2 -2
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
··· 1823 1823 req->chan_cnt = IEEE_8021QAZ_MAX_TCS; 1824 1824 req->bpid_per_chan = 1; 1825 1825 } else { 1826 - req->chan_cnt = 1; 1826 + req->chan_cnt = pfvf->hw.rx_chan_cnt; 1827 1827 req->bpid_per_chan = 0; 1828 1828 } 1829 1829 ··· 1848 1848 req->chan_cnt = IEEE_8021QAZ_MAX_TCS; 1849 1849 req->bpid_per_chan = 1; 1850 1850 } else { 1851 - req->chan_cnt = 1; 1851 + req->chan_cnt = pfvf->hw.rx_chan_cnt; 1852 1852 req->bpid_per_chan = 0; 1853 1853 } 1854 1854
+4 -2
drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
··· 447 447 priv->llu_plu_irq = platform_get_irq(pdev, MLXBF_GIGE_LLU_PLU_INTR_IDX); 448 448 449 449 phy_irq = acpi_dev_gpio_irq_get_by(ACPI_COMPANION(&pdev->dev), "phy", 0); 450 - if (phy_irq < 0) { 451 - dev_err(&pdev->dev, "Error getting PHY irq. Use polling instead"); 450 + if (phy_irq == -EPROBE_DEFER) { 451 + err = -EPROBE_DEFER; 452 + goto out; 453 + } else if (phy_irq < 0) { 452 454 phy_irq = PHY_POLL; 453 455 } 454 456
+1 -4
drivers/net/ethernet/meta/fbnic/fbnic_fw.c
··· 127 127 return -EBUSY; 128 128 129 129 addr = dma_map_single(fbd->dev, msg, PAGE_SIZE, direction); 130 - if (dma_mapping_error(fbd->dev, addr)) { 131 - free_page((unsigned long)msg); 132 - 130 + if (dma_mapping_error(fbd->dev, addr)) 133 131 return -ENOSPC; 134 - } 135 132 136 133 mbx->buf_info[tail].msg = msg; 137 134 mbx->buf_info[tail].addr = addr;
+2 -2
drivers/net/ethernet/microchip/lan743x_ptp.h
··· 18 18 */ 19 19 #define LAN743X_PTP_N_EVENT_CHAN 2 20 20 #define LAN743X_PTP_N_PEROUT LAN743X_PTP_N_EVENT_CHAN 21 - #define LAN743X_PTP_N_EXTTS 4 22 - #define LAN743X_PTP_N_PPS 0 23 21 #define PCI11X1X_PTP_IO_MAX_CHANNELS 8 22 + #define LAN743X_PTP_N_EXTTS PCI11X1X_PTP_IO_MAX_CHANNELS 23 + #define LAN743X_PTP_N_PPS 0 24 24 #define PTP_CMD_CTL_TIMEOUT_CNT 50 25 25 26 26 struct lan743x_adapter;
+2 -1
drivers/net/ethernet/pensando/ionic/ionic_main.c
··· 516 516 unsigned long start_time; 517 517 unsigned long max_wait; 518 518 unsigned long duration; 519 - int done = 0; 520 519 bool fw_up; 521 520 int opcode; 521 + bool done; 522 522 int err; 523 523 524 524 /* Wait for dev cmd to complete, retrying if we get EAGAIN, ··· 526 526 */ 527 527 max_wait = jiffies + (max_seconds * HZ); 528 528 try_again: 529 + done = false; 529 530 opcode = idev->opcode; 530 531 start_time = jiffies; 531 532 for (fw_up = ionic_is_fw_running(idev);
+2 -17
drivers/net/ethernet/ti/icssg/icssg_common.c
··· 98 98 { 99 99 struct cppi5_host_desc_t *first_desc, *next_desc; 100 100 dma_addr_t buf_dma, next_desc_dma; 101 - struct prueth_swdata *swdata; 102 - struct page *page; 103 101 u32 buf_dma_len; 104 102 105 103 first_desc = desc; 106 104 next_desc = first_desc; 107 - 108 - swdata = cppi5_hdesc_get_swdata(desc); 109 - if (swdata->type == PRUETH_SWDATA_PAGE) { 110 - page = swdata->data.page; 111 - page_pool_recycle_direct(page->pp, swdata->data.page); 112 - goto free_desc; 113 - } 114 105 115 106 cppi5_hdesc_get_obuf(first_desc, &buf_dma, &buf_dma_len); 116 107 k3_udma_glue_tx_cppi5_to_dma_addr(tx_chn->tx_chn, &buf_dma); ··· 126 135 k3_cppi_desc_pool_free(tx_chn->desc_pool, next_desc); 127 136 } 128 137 129 - free_desc: 130 138 k3_cppi_desc_pool_free(tx_chn->desc_pool, first_desc); 131 139 } 132 140 EXPORT_SYMBOL_GPL(prueth_xmit_free); ··· 602 612 k3_udma_glue_tx_dma_to_cppi5_addr(tx_chn->tx_chn, &buf_dma); 603 613 cppi5_hdesc_attach_buf(first_desc, buf_dma, xdpf->len, buf_dma, xdpf->len); 604 614 swdata = cppi5_hdesc_get_swdata(first_desc); 605 - if (page) { 606 - swdata->type = PRUETH_SWDATA_PAGE; 607 - swdata->data.page = page; 608 - } else { 609 - swdata->type = PRUETH_SWDATA_XDPF; 610 - swdata->data.xdpf = xdpf; 611 - } 615 + swdata->type = PRUETH_SWDATA_XDPF; 616 + swdata->data.xdpf = xdpf; 612 617 613 618 /* Report BQL before sending the packet */ 614 619 netif_txq = netdev_get_tx_queue(ndev, tx_chn->id);
+3 -1
drivers/net/wireless/ath/ath12k/core.c
··· 1216 1216 INIT_LIST_HEAD(&ar->fw_stats.pdevs); 1217 1217 INIT_LIST_HEAD(&ar->fw_stats.bcn); 1218 1218 init_completion(&ar->fw_stats_complete); 1219 + init_completion(&ar->fw_stats_done); 1219 1220 } 1220 1221 1221 1222 void ath12k_fw_stats_free(struct ath12k_fw_stats *stats) ··· 1229 1228 void ath12k_fw_stats_reset(struct ath12k *ar) 1230 1229 { 1231 1230 spin_lock_bh(&ar->data_lock); 1232 - ar->fw_stats.fw_stats_done = false; 1233 1231 ath12k_fw_stats_free(&ar->fw_stats); 1232 + ar->fw_stats.num_vdev_recvd = 0; 1233 + ar->fw_stats.num_bcn_recvd = 0; 1234 1234 spin_unlock_bh(&ar->data_lock); 1235 1235 } 1236 1236
+9 -1
drivers/net/wireless/ath/ath12k/core.h
··· 601 601 #define ATH12K_NUM_CHANS 101 602 602 #define ATH12K_MAX_5GHZ_CHAN 173 603 603 604 + static inline bool ath12k_is_2ghz_channel_freq(u32 freq) 605 + { 606 + return freq >= ATH12K_MIN_2GHZ_FREQ && 607 + freq <= ATH12K_MAX_2GHZ_FREQ; 608 + } 609 + 604 610 enum ath12k_hw_state { 605 611 ATH12K_HW_STATE_OFF, 606 612 ATH12K_HW_STATE_ON, ··· 632 626 struct list_head pdevs; 633 627 struct list_head vdevs; 634 628 struct list_head bcn; 635 - bool fw_stats_done; 629 + u32 num_vdev_recvd; 630 + u32 num_bcn_recvd; 636 631 }; 637 632 638 633 struct ath12k_dbg_htt_stats { ··· 813 806 bool regdom_set_by_user; 814 807 815 808 struct completion fw_stats_complete; 809 + struct completion fw_stats_done; 816 810 817 811 struct completion mlo_setup_done; 818 812 u32 mlo_setup_status;
-58
drivers/net/wireless/ath/ath12k/debugfs.c
··· 1251 1251 */ 1252 1252 } 1253 1253 1254 - void 1255 - ath12k_debugfs_fw_stats_process(struct ath12k *ar, 1256 - struct ath12k_fw_stats *stats) 1257 - { 1258 - struct ath12k_base *ab = ar->ab; 1259 - struct ath12k_pdev *pdev; 1260 - bool is_end; 1261 - static unsigned int num_vdev, num_bcn; 1262 - size_t total_vdevs_started = 0; 1263 - int i; 1264 - 1265 - if (stats->stats_id == WMI_REQUEST_VDEV_STAT) { 1266 - if (list_empty(&stats->vdevs)) { 1267 - ath12k_warn(ab, "empty vdev stats"); 1268 - return; 1269 - } 1270 - /* FW sends all the active VDEV stats irrespective of PDEV, 1271 - * hence limit until the count of all VDEVs started 1272 - */ 1273 - rcu_read_lock(); 1274 - for (i = 0; i < ab->num_radios; i++) { 1275 - pdev = rcu_dereference(ab->pdevs_active[i]); 1276 - if (pdev && pdev->ar) 1277 - total_vdevs_started += pdev->ar->num_started_vdevs; 1278 - } 1279 - rcu_read_unlock(); 1280 - 1281 - is_end = ((++num_vdev) == total_vdevs_started); 1282 - 1283 - list_splice_tail_init(&stats->vdevs, 1284 - &ar->fw_stats.vdevs); 1285 - 1286 - if (is_end) { 1287 - ar->fw_stats.fw_stats_done = true; 1288 - num_vdev = 0; 1289 - } 1290 - return; 1291 - } 1292 - if (stats->stats_id == WMI_REQUEST_BCN_STAT) { 1293 - if (list_empty(&stats->bcn)) { 1294 - ath12k_warn(ab, "empty beacon stats"); 1295 - return; 1296 - } 1297 - /* Mark end until we reached the count of all started VDEVs 1298 - * within the PDEV 1299 - */ 1300 - is_end = ((++num_bcn) == ar->num_started_vdevs); 1301 - 1302 - list_splice_tail_init(&stats->bcn, 1303 - &ar->fw_stats.bcn); 1304 - 1305 - if (is_end) { 1306 - ar->fw_stats.fw_stats_done = true; 1307 - num_bcn = 0; 1308 - } 1309 - } 1310 - } 1311 - 1312 1254 static int ath12k_open_vdev_stats(struct inode *inode, struct file *file) 1313 1255 { 1314 1256 struct ath12k *ar = inode->i_private;
-7
drivers/net/wireless/ath/ath12k/debugfs.h
··· 12 12 void ath12k_debugfs_soc_destroy(struct ath12k_base *ab); 13 13 void ath12k_debugfs_register(struct ath12k *ar); 14 14 void ath12k_debugfs_unregister(struct ath12k *ar); 15 - void ath12k_debugfs_fw_stats_process(struct ath12k *ar, 16 - struct ath12k_fw_stats *stats); 17 15 void ath12k_debugfs_op_vif_add(struct ieee80211_hw *hw, 18 16 struct ieee80211_vif *vif); 19 17 void ath12k_debugfs_pdev_create(struct ath12k_base *ab); ··· 121 123 } 122 124 123 125 static inline void ath12k_debugfs_unregister(struct ath12k *ar) 124 - { 125 - } 126 - 127 - static inline void ath12k_debugfs_fw_stats_process(struct ath12k *ar, 128 - struct ath12k_fw_stats *stats) 129 126 { 130 127 } 131 128
+370 -24
drivers/net/wireless/ath/ath12k/mac.c
··· 4360 4360 { 4361 4361 struct ath12k_base *ab = ar->ab; 4362 4362 struct ath12k_hw *ah = ath12k_ar_to_ah(ar); 4363 - unsigned long timeout, time_left; 4363 + unsigned long time_left; 4364 4364 int ret; 4365 4365 4366 4366 guard(mutex)(&ah->hw_mutex); ··· 4368 4368 if (ah->state != ATH12K_HW_STATE_ON) 4369 4369 return -ENETDOWN; 4370 4370 4371 - /* FW stats can get split when exceeding the stats data buffer limit. 4372 - * In that case, since there is no end marking for the back-to-back 4373 - * received 'update stats' event, we keep a 3 seconds timeout in case, 4374 - * fw_stats_done is not marked yet 4375 - */ 4376 - timeout = jiffies + msecs_to_jiffies(3 * 1000); 4377 4371 ath12k_fw_stats_reset(ar); 4378 4372 4379 4373 reinit_completion(&ar->fw_stats_complete); 4374 + reinit_completion(&ar->fw_stats_done); 4380 4375 4381 4376 ret = ath12k_wmi_send_stats_request_cmd(ar, param->stats_id, 4382 4377 param->vdev_id, param->pdev_id); 4383 - 4384 4378 if (ret) { 4385 4379 ath12k_warn(ab, "failed to request fw stats: %d\n", ret); 4386 4380 return ret; ··· 4385 4391 param->pdev_id, param->vdev_id, param->stats_id); 4386 4392 4387 4393 time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ); 4388 - 4389 4394 if (!time_left) { 4390 4395 ath12k_warn(ab, "time out while waiting for get fw stats\n"); 4391 4396 return -ETIMEDOUT; ··· 4393 4400 /* Firmware sends WMI_UPDATE_STATS_EVENTID back-to-back 4394 4401 * when stats data buffer limit is reached. fw_stats_complete 4395 4402 * is completed once host receives first event from firmware, but 4396 - * still end might not be marked in the TLV. 4397 - * Below loop is to confirm that firmware completed sending all the event 4398 - * and fw_stats_done is marked true when end is marked in the TLV. 4403 + * still there could be more events following. Below is to wait 4404 + * until firmware completes sending all the events. 
4399 4405 */ 4400 - for (;;) { 4401 - if (time_after(jiffies, timeout)) 4402 - break; 4403 - spin_lock_bh(&ar->data_lock); 4404 - if (ar->fw_stats.fw_stats_done) { 4405 - spin_unlock_bh(&ar->data_lock); 4406 - break; 4407 - } 4408 - spin_unlock_bh(&ar->data_lock); 4406 + time_left = wait_for_completion_timeout(&ar->fw_stats_done, 3 * HZ); 4407 + if (!time_left) { 4408 + ath12k_warn(ab, "time out while waiting for fw stats done\n"); 4409 + return -ETIMEDOUT; 4409 4410 } 4411 + 4410 4412 return 0; 4411 4413 } 4412 4414 ··· 5878 5890 return ret; 5879 5891 } 5880 5892 5893 + static bool ath12k_mac_is_freq_on_mac(struct ath12k_hw_mode_freq_range_arg *freq_range, 5894 + u32 freq, u8 mac_id) 5895 + { 5896 + return (freq >= freq_range[mac_id].low_2ghz_freq && 5897 + freq <= freq_range[mac_id].high_2ghz_freq) || 5898 + (freq >= freq_range[mac_id].low_5ghz_freq && 5899 + freq <= freq_range[mac_id].high_5ghz_freq); 5900 + } 5901 + 5902 + static bool 5903 + ath12k_mac_2_freq_same_mac_in_freq_range(struct ath12k_base *ab, 5904 + struct ath12k_hw_mode_freq_range_arg *freq_range, 5905 + u32 freq_link1, u32 freq_link2) 5906 + { 5907 + u8 i; 5908 + 5909 + for (i = 0; i < MAX_RADIOS; i++) { 5910 + if (ath12k_mac_is_freq_on_mac(freq_range, freq_link1, i) && 5911 + ath12k_mac_is_freq_on_mac(freq_range, freq_link2, i)) 5912 + return true; 5913 + } 5914 + 5915 + return false; 5916 + } 5917 + 5918 + static bool ath12k_mac_is_hw_dbs_capable(struct ath12k_base *ab) 5919 + { 5920 + return test_bit(WMI_TLV_SERVICE_DUAL_BAND_SIMULTANEOUS_SUPPORT, 5921 + ab->wmi_ab.svc_map) && 5922 + ab->wmi_ab.hw_mode_info.support_dbs; 5923 + } 5924 + 5925 + static bool ath12k_mac_2_freq_same_mac_in_dbs(struct ath12k_base *ab, 5926 + u32 freq_link1, u32 freq_link2) 5927 + { 5928 + struct ath12k_hw_mode_freq_range_arg *freq_range; 5929 + 5930 + if (!ath12k_mac_is_hw_dbs_capable(ab)) 5931 + return true; 5932 + 5933 + freq_range = ab->wmi_ab.hw_mode_info.freq_range_caps[ATH12K_HW_MODE_DBS]; 5934 + return 
ath12k_mac_2_freq_same_mac_in_freq_range(ab, freq_range, 5935 + freq_link1, freq_link2); 5936 + } 5937 + 5938 + static bool ath12k_mac_is_hw_sbs_capable(struct ath12k_base *ab) 5939 + { 5940 + return test_bit(WMI_TLV_SERVICE_DUAL_BAND_SIMULTANEOUS_SUPPORT, 5941 + ab->wmi_ab.svc_map) && 5942 + ab->wmi_ab.hw_mode_info.support_sbs; 5943 + } 5944 + 5945 + static bool ath12k_mac_2_freq_same_mac_in_sbs(struct ath12k_base *ab, 5946 + u32 freq_link1, u32 freq_link2) 5947 + { 5948 + struct ath12k_hw_mode_info *info = &ab->wmi_ab.hw_mode_info; 5949 + struct ath12k_hw_mode_freq_range_arg *sbs_uppr_share; 5950 + struct ath12k_hw_mode_freq_range_arg *sbs_low_share; 5951 + struct ath12k_hw_mode_freq_range_arg *sbs_range; 5952 + 5953 + if (!ath12k_mac_is_hw_sbs_capable(ab)) 5954 + return true; 5955 + 5956 + if (ab->wmi_ab.sbs_lower_band_end_freq) { 5957 + sbs_uppr_share = info->freq_range_caps[ATH12K_HW_MODE_SBS_UPPER_SHARE]; 5958 + sbs_low_share = info->freq_range_caps[ATH12K_HW_MODE_SBS_LOWER_SHARE]; 5959 + 5960 + return ath12k_mac_2_freq_same_mac_in_freq_range(ab, sbs_low_share, 5961 + freq_link1, freq_link2) || 5962 + ath12k_mac_2_freq_same_mac_in_freq_range(ab, sbs_uppr_share, 5963 + freq_link1, freq_link2); 5964 + } 5965 + 5966 + sbs_range = info->freq_range_caps[ATH12K_HW_MODE_SBS]; 5967 + return ath12k_mac_2_freq_same_mac_in_freq_range(ab, sbs_range, 5968 + freq_link1, freq_link2); 5969 + } 5970 + 5971 + static bool ath12k_mac_freqs_on_same_mac(struct ath12k_base *ab, 5972 + u32 freq_link1, u32 freq_link2) 5973 + { 5974 + return ath12k_mac_2_freq_same_mac_in_dbs(ab, freq_link1, freq_link2) && 5975 + ath12k_mac_2_freq_same_mac_in_sbs(ab, freq_link1, freq_link2); 5976 + } 5977 + 5978 + static int ath12k_mac_mlo_sta_set_link_active(struct ath12k_base *ab, 5979 + enum wmi_mlo_link_force_reason reason, 5980 + enum wmi_mlo_link_force_mode mode, 5981 + u8 *mlo_vdev_id_lst, 5982 + u8 num_mlo_vdev, 5983 + u8 *mlo_inactive_vdev_lst, 5984 + u8 num_mlo_inactive_vdev) 5985 + { 5986 + 
struct wmi_mlo_link_set_active_arg param = {0}; 5987 + u32 entry_idx, entry_offset, vdev_idx; 5988 + u8 vdev_id; 5989 + 5990 + param.reason = reason; 5991 + param.force_mode = mode; 5992 + 5993 + for (vdev_idx = 0; vdev_idx < num_mlo_vdev; vdev_idx++) { 5994 + vdev_id = mlo_vdev_id_lst[vdev_idx]; 5995 + entry_idx = vdev_id / 32; 5996 + entry_offset = vdev_id % 32; 5997 + if (entry_idx >= WMI_MLO_LINK_NUM_SZ) { 5998 + ath12k_warn(ab, "Invalid entry_idx %d num_mlo_vdev %d vdev %d", 5999 + entry_idx, num_mlo_vdev, vdev_id); 6000 + return -EINVAL; 6001 + } 6002 + param.vdev_bitmap[entry_idx] |= 1 << entry_offset; 6003 + /* update entry number if entry index changed */ 6004 + if (param.num_vdev_bitmap < entry_idx + 1) 6005 + param.num_vdev_bitmap = entry_idx + 1; 6006 + } 6007 + 6008 + ath12k_dbg(ab, ATH12K_DBG_MAC, 6009 + "num_vdev_bitmap %d vdev_bitmap[0] = 0x%x, vdev_bitmap[1] = 0x%x", 6010 + param.num_vdev_bitmap, param.vdev_bitmap[0], param.vdev_bitmap[1]); 6011 + 6012 + if (mode == WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE) { 6013 + for (vdev_idx = 0; vdev_idx < num_mlo_inactive_vdev; vdev_idx++) { 6014 + vdev_id = mlo_inactive_vdev_lst[vdev_idx]; 6015 + entry_idx = vdev_id / 32; 6016 + entry_offset = vdev_id % 32; 6017 + if (entry_idx >= WMI_MLO_LINK_NUM_SZ) { 6018 + ath12k_warn(ab, "Invalid entry_idx %d num_mlo_vdev %d vdev %d", 6019 + entry_idx, num_mlo_inactive_vdev, vdev_id); 6020 + return -EINVAL; 6021 + } 6022 + param.inactive_vdev_bitmap[entry_idx] |= 1 << entry_offset; 6023 + /* update entry number if entry index changed */ 6024 + if (param.num_inactive_vdev_bitmap < entry_idx + 1) 6025 + param.num_inactive_vdev_bitmap = entry_idx + 1; 6026 + } 6027 + 6028 + ath12k_dbg(ab, ATH12K_DBG_MAC, 6029 + "num_vdev_bitmap %d inactive_vdev_bitmap[0] = 0x%x, inactive_vdev_bitmap[1] = 0x%x", 6030 + param.num_inactive_vdev_bitmap, 6031 + param.inactive_vdev_bitmap[0], 6032 + param.inactive_vdev_bitmap[1]); 6033 + } 6034 + 6035 + if (mode == 
WMI_MLO_LINK_FORCE_MODE_ACTIVE_LINK_NUM || 6036 + mode == WMI_MLO_LINK_FORCE_MODE_INACTIVE_LINK_NUM) { 6037 + param.num_link_entry = 1; 6038 + param.link_num[0].num_of_link = num_mlo_vdev - 1; 6039 + } 6040 + 6041 + return ath12k_wmi_send_mlo_link_set_active_cmd(ab, &param); 6042 + } 6043 + 6044 + static int ath12k_mac_mlo_sta_update_link_active(struct ath12k_base *ab, 6045 + struct ieee80211_hw *hw, 6046 + struct ath12k_vif *ahvif) 6047 + { 6048 + u8 mlo_vdev_id_lst[IEEE80211_MLD_MAX_NUM_LINKS] = {0}; 6049 + u32 mlo_freq_list[IEEE80211_MLD_MAX_NUM_LINKS] = {0}; 6050 + unsigned long links = ahvif->links_map; 6051 + enum wmi_mlo_link_force_reason reason; 6052 + struct ieee80211_chanctx_conf *conf; 6053 + enum wmi_mlo_link_force_mode mode; 6054 + struct ieee80211_bss_conf *info; 6055 + struct ath12k_link_vif *arvif; 6056 + u8 num_mlo_vdev = 0; 6057 + u8 link_id; 6058 + 6059 + for_each_set_bit(link_id, &links, IEEE80211_MLD_MAX_NUM_LINKS) { 6060 + arvif = wiphy_dereference(hw->wiphy, ahvif->link[link_id]); 6061 + /* make sure vdev is created on this device */ 6062 + if (!arvif || !arvif->is_created || arvif->ar->ab != ab) 6063 + continue; 6064 + 6065 + info = ath12k_mac_get_link_bss_conf(arvif); 6066 + conf = wiphy_dereference(hw->wiphy, info->chanctx_conf); 6067 + mlo_freq_list[num_mlo_vdev] = conf->def.chan->center_freq; 6068 + 6069 + mlo_vdev_id_lst[num_mlo_vdev] = arvif->vdev_id; 6070 + num_mlo_vdev++; 6071 + } 6072 + 6073 + /* It is not allowed to activate more links than a single device 6074 + * supported. Something goes wrong if we reach here. 6075 + */ 6076 + if (num_mlo_vdev > ATH12K_NUM_MAX_ACTIVE_LINKS_PER_DEVICE) { 6077 + WARN_ON_ONCE(1); 6078 + return -EINVAL; 6079 + } 6080 + 6081 + /* if 2 links are established and both link channels fall on the 6082 + * same hardware MAC, send command to firmware to deactivate one 6083 + * of them. 
6084 + */ 6085 + if (num_mlo_vdev == 2 && 6086 + ath12k_mac_freqs_on_same_mac(ab, mlo_freq_list[0], 6087 + mlo_freq_list[1])) { 6088 + mode = WMI_MLO_LINK_FORCE_MODE_INACTIVE_LINK_NUM; 6089 + reason = WMI_MLO_LINK_FORCE_REASON_NEW_CONNECT; 6090 + return ath12k_mac_mlo_sta_set_link_active(ab, reason, mode, 6091 + mlo_vdev_id_lst, num_mlo_vdev, 6092 + NULL, 0); 6093 + } 6094 + 6095 + return 0; 6096 + } 6097 + 6098 + static bool ath12k_mac_are_sbs_chan(struct ath12k_base *ab, u32 freq_1, u32 freq_2) 6099 + { 6100 + if (!ath12k_mac_is_hw_sbs_capable(ab)) 6101 + return false; 6102 + 6103 + if (ath12k_is_2ghz_channel_freq(freq_1) || 6104 + ath12k_is_2ghz_channel_freq(freq_2)) 6105 + return false; 6106 + 6107 + return !ath12k_mac_2_freq_same_mac_in_sbs(ab, freq_1, freq_2); 6108 + } 6109 + 6110 + static bool ath12k_mac_are_dbs_chan(struct ath12k_base *ab, u32 freq_1, u32 freq_2) 6111 + { 6112 + if (!ath12k_mac_is_hw_dbs_capable(ab)) 6113 + return false; 6114 + 6115 + return !ath12k_mac_2_freq_same_mac_in_dbs(ab, freq_1, freq_2); 6116 + } 6117 + 6118 + static int ath12k_mac_select_links(struct ath12k_base *ab, 6119 + struct ieee80211_vif *vif, 6120 + struct ieee80211_hw *hw, 6121 + u16 *selected_links) 6122 + { 6123 + unsigned long useful_links = ieee80211_vif_usable_links(vif); 6124 + struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif); 6125 + u8 num_useful_links = hweight_long(useful_links); 6126 + struct ieee80211_chanctx_conf *chanctx; 6127 + struct ath12k_link_vif *assoc_arvif; 6128 + u32 assoc_link_freq, partner_freq; 6129 + u16 sbs_links = 0, dbs_links = 0; 6130 + struct ieee80211_bss_conf *info; 6131 + struct ieee80211_channel *chan; 6132 + struct ieee80211_sta *sta; 6133 + struct ath12k_sta *ahsta; 6134 + u8 link_id; 6135 + 6136 + /* activate all useful links if less than max supported */ 6137 + if (num_useful_links <= ATH12K_NUM_MAX_ACTIVE_LINKS_PER_DEVICE) { 6138 + *selected_links = useful_links; 6139 + return 0; 6140 + } 6141 + 6142 + /* only in station mode we 
can get here, so it's safe 6143 + * to use ap_addr 6144 + */ 6145 + rcu_read_lock(); 6146 + sta = ieee80211_find_sta(vif, vif->cfg.ap_addr); 6147 + if (!sta) { 6148 + rcu_read_unlock(); 6149 + ath12k_warn(ab, "failed to find sta with addr %pM\n", vif->cfg.ap_addr); 6150 + return -EINVAL; 6151 + } 6152 + 6153 + ahsta = ath12k_sta_to_ahsta(sta); 6154 + assoc_arvif = wiphy_dereference(hw->wiphy, ahvif->link[ahsta->assoc_link_id]); 6155 + info = ath12k_mac_get_link_bss_conf(assoc_arvif); 6156 + chanctx = rcu_dereference(info->chanctx_conf); 6157 + assoc_link_freq = chanctx->def.chan->center_freq; 6158 + rcu_read_unlock(); 6159 + ath12k_dbg(ab, ATH12K_DBG_MAC, "assoc link %u freq %u\n", 6160 + assoc_arvif->link_id, assoc_link_freq); 6161 + 6162 + /* assoc link is already activated and has to be kept active, 6163 + * only need to select a partner link from others. 6164 + */ 6165 + useful_links &= ~BIT(assoc_arvif->link_id); 6166 + for_each_set_bit(link_id, &useful_links, IEEE80211_MLD_MAX_NUM_LINKS) { 6167 + info = wiphy_dereference(hw->wiphy, vif->link_conf[link_id]); 6168 + if (!info) { 6169 + ath12k_warn(ab, "failed to get link info for link: %u\n", 6170 + link_id); 6171 + return -ENOLINK; 6172 + } 6173 + 6174 + chan = info->chanreq.oper.chan; 6175 + if (!chan) { 6176 + ath12k_warn(ab, "failed to get chan for link: %u\n", link_id); 6177 + return -EINVAL; 6178 + } 6179 + 6180 + partner_freq = chan->center_freq; 6181 + if (ath12k_mac_are_sbs_chan(ab, assoc_link_freq, partner_freq)) { 6182 + sbs_links |= BIT(link_id); 6183 + ath12k_dbg(ab, ATH12K_DBG_MAC, "new SBS link %u freq %u\n", 6184 + link_id, partner_freq); 6185 + continue; 6186 + } 6187 + 6188 + if (ath12k_mac_are_dbs_chan(ab, assoc_link_freq, partner_freq)) { 6189 + dbs_links |= BIT(link_id); 6190 + ath12k_dbg(ab, ATH12K_DBG_MAC, "new DBS link %u freq %u\n", 6191 + link_id, partner_freq); 6192 + continue; 6193 + } 6194 + 6195 + ath12k_dbg(ab, ATH12K_DBG_MAC, "non DBS/SBS link %u freq %u\n", 6196 + link_id, 
partner_freq); 6197 + } 6198 + 6199 + /* choose the first candidate no matter how many is in the list */ 6200 + if (sbs_links) 6201 + link_id = __ffs(sbs_links); 6202 + else if (dbs_links) 6203 + link_id = __ffs(dbs_links); 6204 + else 6205 + link_id = ffs(useful_links) - 1; 6206 + 6207 + ath12k_dbg(ab, ATH12K_DBG_MAC, "select partner link %u\n", link_id); 6208 + 6209 + *selected_links = BIT(assoc_arvif->link_id) | BIT(link_id); 6210 + 6211 + return 0; 6212 + } 6213 + 5881 6214 static int ath12k_mac_op_sta_state(struct ieee80211_hw *hw, 5882 6215 struct ieee80211_vif *vif, 5883 6216 struct ieee80211_sta *sta, ··· 6208 5899 struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif); 6209 5900 struct ath12k_sta *ahsta = ath12k_sta_to_ahsta(sta); 6210 5901 struct ath12k_hw *ah = ath12k_hw_to_ah(hw); 5902 + struct ath12k_base *prev_ab = NULL, *ab; 6211 5903 struct ath12k_link_vif *arvif; 6212 5904 struct ath12k_link_sta *arsta; 6213 5905 unsigned long valid_links; 6214 - u8 link_id = 0; 5906 + u16 selected_links = 0; 5907 + u8 link_id = 0, i; 5908 + struct ath12k *ar; 6215 5909 int ret; 6216 5910 6217 5911 lockdep_assert_wiphy(hw->wiphy); ··· 6284 5972 * about to move to the associated state. 6285 5973 */ 6286 5974 if (ieee80211_vif_is_mld(vif) && vif->type == NL80211_IFTYPE_STATION && 6287 - old_state == IEEE80211_STA_AUTH && new_state == IEEE80211_STA_ASSOC) 6288 - ieee80211_set_active_links(vif, ieee80211_vif_usable_links(vif)); 5975 + old_state == IEEE80211_STA_AUTH && new_state == IEEE80211_STA_ASSOC) { 5976 + /* TODO: for now only do link selection for single device 5977 + * MLO case. Other cases would be handled in the future. 
5978 + */ 5979 + ab = ah->radio[0].ab; 5980 + if (ab->ag->num_devices == 1) { 5981 + ret = ath12k_mac_select_links(ab, vif, hw, &selected_links); 5982 + if (ret) { 5983 + ath12k_warn(ab, 5984 + "failed to get selected links: %d\n", ret); 5985 + goto exit; 5986 + } 5987 + } else { 5988 + selected_links = ieee80211_vif_usable_links(vif); 5989 + } 5990 + 5991 + ieee80211_set_active_links(vif, selected_links); 5992 + } 6289 5993 6290 5994 /* Handle all the other state transitions in generic way */ 6291 5995 valid_links = ahsta->links_map; ··· 6325 5997 } 6326 5998 } 6327 5999 6000 + if (ieee80211_vif_is_mld(vif) && vif->type == NL80211_IFTYPE_STATION && 6001 + old_state == IEEE80211_STA_ASSOC && new_state == IEEE80211_STA_AUTHORIZED) { 6002 + for_each_ar(ah, ar, i) { 6003 + ab = ar->ab; 6004 + if (prev_ab == ab) 6005 + continue; 6006 + 6007 + ret = ath12k_mac_mlo_sta_update_link_active(ab, hw, ahvif); 6008 + if (ret) { 6009 + ath12k_warn(ab, 6010 + "failed to update link active state on connect %d\n", 6011 + ret); 6012 + goto exit; 6013 + } 6014 + 6015 + prev_ab = ab; 6016 + } 6017 + } 6328 6018 /* IEEE80211_STA_NONE -> IEEE80211_STA_NOTEXIST: 6329 6019 * Remove the station from driver (handle ML sta here since that 6330 6020 * needs special handling. Normal sta will be handled in generic
+2
drivers/net/wireless/ath/ath12k/mac.h
··· 54 54 #define ATH12K_DEFAULT_SCAN_LINK IEEE80211_MLD_MAX_NUM_LINKS 55 55 #define ATH12K_NUM_MAX_LINKS (IEEE80211_MLD_MAX_NUM_LINKS + 1) 56 56 57 + #define ATH12K_NUM_MAX_ACTIVE_LINKS_PER_DEVICE 2 58 + 57 59 enum ath12k_supported_bw { 58 60 ATH12K_BW_20 = 0, 59 61 ATH12K_BW_40 = 1,
+819 -10
drivers/net/wireless/ath/ath12k/wmi.c
··· 91 91 bool dma_ring_cap_done; 92 92 bool spectral_bin_scaling_done; 93 93 bool mac_phy_caps_ext_done; 94 + bool hal_reg_caps_ext2_done; 95 + bool scan_radio_caps_ext2_done; 96 + bool twt_caps_done; 97 + bool htt_msdu_idx_to_qtype_map_done; 98 + bool dbs_or_sbs_cap_ext_done; 94 99 }; 95 100 96 101 struct ath12k_wmi_rdy_parse { ··· 4400 4395 static int ath12k_wmi_hw_mode_caps(struct ath12k_base *soc, 4401 4396 u16 len, const void *ptr, void *data) 4402 4397 { 4398 + struct ath12k_svc_ext_info *svc_ext_info = &soc->wmi_ab.svc_ext_info; 4403 4399 struct ath12k_wmi_svc_rdy_ext_parse *svc_rdy_ext = data; 4404 4400 const struct ath12k_wmi_hw_mode_cap_params *hw_mode_caps; 4405 4401 enum wmi_host_hw_mode_config_type mode, pref; ··· 4433 4427 } 4434 4428 } 4435 4429 4436 - ath12k_dbg(soc, ATH12K_DBG_WMI, "preferred_hw_mode:%d\n", 4437 - soc->wmi_ab.preferred_hw_mode); 4430 + svc_ext_info->num_hw_modes = svc_rdy_ext->n_hw_mode_caps; 4431 + 4432 + ath12k_dbg(soc, ATH12K_DBG_WMI, "num hw modes %u preferred_hw_mode %d\n", 4433 + svc_ext_info->num_hw_modes, soc->wmi_ab.preferred_hw_mode); 4434 + 4438 4435 if (soc->wmi_ab.preferred_hw_mode == WMI_HOST_HW_MODE_MAX) 4439 4436 return -EINVAL; 4440 4437 ··· 4667 4658 return ret; 4668 4659 } 4669 4660 4661 + static void 4662 + ath12k_wmi_save_mac_phy_info(struct ath12k_base *ab, 4663 + const struct ath12k_wmi_mac_phy_caps_params *mac_phy_cap, 4664 + struct ath12k_svc_ext_mac_phy_info *mac_phy_info) 4665 + { 4666 + mac_phy_info->phy_id = __le32_to_cpu(mac_phy_cap->phy_id); 4667 + mac_phy_info->supported_bands = __le32_to_cpu(mac_phy_cap->supported_bands); 4668 + mac_phy_info->hw_freq_range.low_2ghz_freq = 4669 + __le32_to_cpu(mac_phy_cap->low_2ghz_chan_freq); 4670 + mac_phy_info->hw_freq_range.high_2ghz_freq = 4671 + __le32_to_cpu(mac_phy_cap->high_2ghz_chan_freq); 4672 + mac_phy_info->hw_freq_range.low_5ghz_freq = 4673 + __le32_to_cpu(mac_phy_cap->low_5ghz_chan_freq); 4674 + mac_phy_info->hw_freq_range.high_5ghz_freq = 4675 + 
__le32_to_cpu(mac_phy_cap->high_5ghz_chan_freq); 4676 + } 4677 + 4678 + static void 4679 + ath12k_wmi_save_all_mac_phy_info(struct ath12k_base *ab, 4680 + struct ath12k_wmi_svc_rdy_ext_parse *svc_rdy_ext) 4681 + { 4682 + struct ath12k_svc_ext_info *svc_ext_info = &ab->wmi_ab.svc_ext_info; 4683 + const struct ath12k_wmi_mac_phy_caps_params *mac_phy_cap; 4684 + const struct ath12k_wmi_hw_mode_cap_params *hw_mode_cap; 4685 + struct ath12k_svc_ext_mac_phy_info *mac_phy_info; 4686 + u32 hw_mode_id, phy_bit_map; 4687 + u8 hw_idx; 4688 + 4689 + mac_phy_info = &svc_ext_info->mac_phy_info[0]; 4690 + mac_phy_cap = svc_rdy_ext->mac_phy_caps; 4691 + 4692 + for (hw_idx = 0; hw_idx < svc_ext_info->num_hw_modes; hw_idx++) { 4693 + hw_mode_cap = &svc_rdy_ext->hw_mode_caps[hw_idx]; 4694 + hw_mode_id = __le32_to_cpu(hw_mode_cap->hw_mode_id); 4695 + phy_bit_map = __le32_to_cpu(hw_mode_cap->phy_id_map); 4696 + 4697 + while (phy_bit_map) { 4698 + ath12k_wmi_save_mac_phy_info(ab, mac_phy_cap, mac_phy_info); 4699 + mac_phy_info->hw_mode_config_type = 4700 + le32_get_bits(hw_mode_cap->hw_mode_config_type, 4701 + WMI_HW_MODE_CAP_CFG_TYPE); 4702 + ath12k_dbg(ab, ATH12K_DBG_WMI, 4703 + "hw_idx %u hw_mode_id %u hw_mode_config_type %u supported_bands %u phy_id %u 2 GHz [%u - %u] 5 GHz [%u - %u]\n", 4704 + hw_idx, hw_mode_id, 4705 + mac_phy_info->hw_mode_config_type, 4706 + mac_phy_info->supported_bands, mac_phy_info->phy_id, 4707 + mac_phy_info->hw_freq_range.low_2ghz_freq, 4708 + mac_phy_info->hw_freq_range.high_2ghz_freq, 4709 + mac_phy_info->hw_freq_range.low_5ghz_freq, 4710 + mac_phy_info->hw_freq_range.high_5ghz_freq); 4711 + 4712 + mac_phy_cap++; 4713 + mac_phy_info++; 4714 + 4715 + phy_bit_map >>= 1; 4716 + } 4717 + } 4718 + } 4719 + 4670 4720 static int ath12k_wmi_svc_rdy_ext_parse(struct ath12k_base *ab, 4671 4721 u16 tag, u16 len, 4672 4722 const void *ptr, void *data) ··· 4773 4705 ath12k_warn(ab, "failed to parse tlv %d\n", ret); 4774 4706 return ret; 4775 4707 } 4708 + 4709 + 
ath12k_wmi_save_all_mac_phy_info(ab, svc_rdy_ext); 4776 4710 4777 4711 svc_rdy_ext->mac_phy_done = true; 4778 4712 } else if (!svc_rdy_ext->ext_hal_reg_done) { ··· 4992 4922 return 0; 4993 4923 } 4994 4924 4925 + static void 4926 + ath12k_wmi_update_freq_info(struct ath12k_base *ab, 4927 + struct ath12k_svc_ext_mac_phy_info *mac_cap, 4928 + enum ath12k_hw_mode mode, 4929 + u32 phy_id) 4930 + { 4931 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 4932 + struct ath12k_hw_mode_freq_range_arg *mac_range; 4933 + 4934 + mac_range = &hw_mode_info->freq_range_caps[mode][phy_id]; 4935 + 4936 + if (mac_cap->supported_bands & WMI_HOST_WLAN_2GHZ_CAP) { 4937 + mac_range->low_2ghz_freq = max_t(u32, 4938 + mac_cap->hw_freq_range.low_2ghz_freq, 4939 + ATH12K_MIN_2GHZ_FREQ); 4940 + mac_range->high_2ghz_freq = mac_cap->hw_freq_range.high_2ghz_freq ? 4941 + min_t(u32, 4942 + mac_cap->hw_freq_range.high_2ghz_freq, 4943 + ATH12K_MAX_2GHZ_FREQ) : 4944 + ATH12K_MAX_2GHZ_FREQ; 4945 + } 4946 + 4947 + if (mac_cap->supported_bands & WMI_HOST_WLAN_5GHZ_CAP) { 4948 + mac_range->low_5ghz_freq = max_t(u32, 4949 + mac_cap->hw_freq_range.low_5ghz_freq, 4950 + ATH12K_MIN_5GHZ_FREQ); 4951 + mac_range->high_5ghz_freq = mac_cap->hw_freq_range.high_5ghz_freq ? 
4952 + min_t(u32, 4953 + mac_cap->hw_freq_range.high_5ghz_freq, 4954 + ATH12K_MAX_6GHZ_FREQ) : 4955 + ATH12K_MAX_6GHZ_FREQ; 4956 + } 4957 + } 4958 + 4959 + static bool 4960 + ath12k_wmi_all_phy_range_updated(struct ath12k_base *ab, 4961 + enum ath12k_hw_mode hwmode) 4962 + { 4963 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 4964 + struct ath12k_hw_mode_freq_range_arg *mac_range; 4965 + u8 phy_id; 4966 + 4967 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 4968 + mac_range = &hw_mode_info->freq_range_caps[hwmode][phy_id]; 4969 + /* modify SBS/DBS range only when both phy for DBS are filled */ 4970 + if (!mac_range->low_2ghz_freq && !mac_range->low_5ghz_freq) 4971 + return false; 4972 + } 4973 + 4974 + return true; 4975 + } 4976 + 4977 + static void ath12k_wmi_update_dbs_freq_info(struct ath12k_base *ab) 4978 + { 4979 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 4980 + struct ath12k_hw_mode_freq_range_arg *mac_range; 4981 + u8 phy_id; 4982 + 4983 + mac_range = hw_mode_info->freq_range_caps[ATH12K_HW_MODE_DBS]; 4984 + /* Reset 5 GHz range for shared mac for DBS */ 4985 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 4986 + if (mac_range[phy_id].low_2ghz_freq && 4987 + mac_range[phy_id].low_5ghz_freq) { 4988 + mac_range[phy_id].low_5ghz_freq = 0; 4989 + mac_range[phy_id].high_5ghz_freq = 0; 4990 + } 4991 + } 4992 + } 4993 + 4994 + static u32 4995 + ath12k_wmi_get_highest_5ghz_freq_from_range(struct ath12k_hw_mode_freq_range_arg *range) 4996 + { 4997 + u32 highest_freq = 0; 4998 + u8 phy_id; 4999 + 5000 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5001 + if (range[phy_id].high_5ghz_freq > highest_freq) 5002 + highest_freq = range[phy_id].high_5ghz_freq; 5003 + } 5004 + 5005 + return highest_freq ? 
highest_freq : ATH12K_MAX_6GHZ_FREQ; 5006 + } 5007 + 5008 + static u32 5009 + ath12k_wmi_get_lowest_5ghz_freq_from_range(struct ath12k_hw_mode_freq_range_arg *range) 5010 + { 5011 + u32 lowest_freq = 0; 5012 + u8 phy_id; 5013 + 5014 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5015 + if ((!lowest_freq && range[phy_id].low_5ghz_freq) || 5016 + range[phy_id].low_5ghz_freq < lowest_freq) 5017 + lowest_freq = range[phy_id].low_5ghz_freq; 5018 + } 5019 + 5020 + return lowest_freq ? lowest_freq : ATH12K_MIN_5GHZ_FREQ; 5021 + } 5022 + 5023 + static void 5024 + ath12k_wmi_fill_upper_share_sbs_freq(struct ath12k_base *ab, 5025 + u16 sbs_range_sep, 5026 + struct ath12k_hw_mode_freq_range_arg *ref_freq) 5027 + { 5028 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 5029 + struct ath12k_hw_mode_freq_range_arg *upper_sbs_freq_range; 5030 + u8 phy_id; 5031 + 5032 + upper_sbs_freq_range = 5033 + hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS_UPPER_SHARE]; 5034 + 5035 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5036 + upper_sbs_freq_range[phy_id].low_2ghz_freq = 5037 + ref_freq[phy_id].low_2ghz_freq; 5038 + upper_sbs_freq_range[phy_id].high_2ghz_freq = 5039 + ref_freq[phy_id].high_2ghz_freq; 5040 + 5041 + /* update for shared mac */ 5042 + if (upper_sbs_freq_range[phy_id].low_2ghz_freq) { 5043 + upper_sbs_freq_range[phy_id].low_5ghz_freq = sbs_range_sep + 10; 5044 + upper_sbs_freq_range[phy_id].high_5ghz_freq = 5045 + ath12k_wmi_get_highest_5ghz_freq_from_range(ref_freq); 5046 + } else { 5047 + upper_sbs_freq_range[phy_id].low_5ghz_freq = 5048 + ath12k_wmi_get_lowest_5ghz_freq_from_range(ref_freq); 5049 + upper_sbs_freq_range[phy_id].high_5ghz_freq = sbs_range_sep; 5050 + } 5051 + } 5052 + } 5053 + 5054 + static void 5055 + ath12k_wmi_fill_lower_share_sbs_freq(struct ath12k_base *ab, 5056 + u16 sbs_range_sep, 5057 + struct ath12k_hw_mode_freq_range_arg *ref_freq) 5058 + { 5059 + struct ath12k_hw_mode_info *hw_mode_info = 
&ab->wmi_ab.hw_mode_info; 5060 + struct ath12k_hw_mode_freq_range_arg *lower_sbs_freq_range; 5061 + u8 phy_id; 5062 + 5063 + lower_sbs_freq_range = 5064 + hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS_LOWER_SHARE]; 5065 + 5066 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5067 + lower_sbs_freq_range[phy_id].low_2ghz_freq = 5068 + ref_freq[phy_id].low_2ghz_freq; 5069 + lower_sbs_freq_range[phy_id].high_2ghz_freq = 5070 + ref_freq[phy_id].high_2ghz_freq; 5071 + 5072 + /* update for shared mac */ 5073 + if (lower_sbs_freq_range[phy_id].low_2ghz_freq) { 5074 + lower_sbs_freq_range[phy_id].low_5ghz_freq = 5075 + ath12k_wmi_get_lowest_5ghz_freq_from_range(ref_freq); 5076 + lower_sbs_freq_range[phy_id].high_5ghz_freq = sbs_range_sep; 5077 + } else { 5078 + lower_sbs_freq_range[phy_id].low_5ghz_freq = sbs_range_sep + 10; 5079 + lower_sbs_freq_range[phy_id].high_5ghz_freq = 5080 + ath12k_wmi_get_highest_5ghz_freq_from_range(ref_freq); 5081 + } 5082 + } 5083 + } 5084 + 5085 + static const char *ath12k_wmi_hw_mode_to_str(enum ath12k_hw_mode hw_mode) 5086 + { 5087 + static const char * const mode_str[] = { 5088 + [ATH12K_HW_MODE_SMM] = "SMM", 5089 + [ATH12K_HW_MODE_DBS] = "DBS", 5090 + [ATH12K_HW_MODE_SBS] = "SBS", 5091 + [ATH12K_HW_MODE_SBS_UPPER_SHARE] = "SBS_UPPER_SHARE", 5092 + [ATH12K_HW_MODE_SBS_LOWER_SHARE] = "SBS_LOWER_SHARE", 5093 + }; 5094 + 5095 + if (hw_mode >= ARRAY_SIZE(mode_str)) 5096 + return "Unknown"; 5097 + 5098 + return mode_str[hw_mode]; 5099 + } 5100 + 5101 + static void 5102 + ath12k_wmi_dump_freq_range_per_mac(struct ath12k_base *ab, 5103 + struct ath12k_hw_mode_freq_range_arg *freq_range, 5104 + enum ath12k_hw_mode hw_mode) 5105 + { 5106 + u8 i; 5107 + 5108 + for (i = 0; i < MAX_RADIOS; i++) 5109 + if (freq_range[i].low_2ghz_freq || freq_range[i].low_5ghz_freq) 5110 + ath12k_dbg(ab, ATH12K_DBG_WMI, 5111 + "frequency range: %s(%d) mac %d 2 GHz [%d - %d] 5 GHz [%d - %d]", 5112 + ath12k_wmi_hw_mode_to_str(hw_mode), 5113 + hw_mode, i, 5114 + 
freq_range[i].low_2ghz_freq, 5115 + freq_range[i].high_2ghz_freq, 5116 + freq_range[i].low_5ghz_freq, 5117 + freq_range[i].high_5ghz_freq); 5118 + } 5119 + 5120 + static void ath12k_wmi_dump_freq_range(struct ath12k_base *ab) 5121 + { 5122 + struct ath12k_hw_mode_freq_range_arg *freq_range; 5123 + u8 i; 5124 + 5125 + for (i = ATH12K_HW_MODE_SMM; i < ATH12K_HW_MODE_MAX; i++) { 5126 + freq_range = ab->wmi_ab.hw_mode_info.freq_range_caps[i]; 5127 + ath12k_wmi_dump_freq_range_per_mac(ab, freq_range, i); 5128 + } 5129 + } 5130 + 5131 + static int ath12k_wmi_modify_sbs_freq(struct ath12k_base *ab, u8 phy_id) 5132 + { 5133 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 5134 + struct ath12k_hw_mode_freq_range_arg *sbs_mac_range, *shared_mac_range; 5135 + struct ath12k_hw_mode_freq_range_arg *non_shared_range; 5136 + u8 shared_phy_id; 5137 + 5138 + sbs_mac_range = &hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS][phy_id]; 5139 + 5140 + /* if SBS mac range has both 2.4 and 5 GHz ranges, i.e. shared phy_id 5141 + * keep the range as it is in SBS 5142 + */ 5143 + if (sbs_mac_range->low_2ghz_freq && sbs_mac_range->low_5ghz_freq) 5144 + return 0; 5145 + 5146 + if (sbs_mac_range->low_2ghz_freq && !sbs_mac_range->low_5ghz_freq) { 5147 + ath12k_err(ab, "Invalid DBS/SBS mode with only 2.4Ghz"); 5148 + ath12k_wmi_dump_freq_range_per_mac(ab, sbs_mac_range, ATH12K_HW_MODE_SBS); 5149 + return -EINVAL; 5150 + } 5151 + 5152 + non_shared_range = sbs_mac_range; 5153 + /* if SBS mac range has only 5 GHz then it's the non-shared phy, so 5154 + * modify the range as per the shared mac. 5155 + */ 5156 + shared_phy_id = phy_id ? 
0 : 1; 5157 + shared_mac_range = 5158 + &hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS][shared_phy_id]; 5159 + 5160 + if (shared_mac_range->low_5ghz_freq > non_shared_range->low_5ghz_freq) { 5161 + ath12k_dbg(ab, ATH12K_DBG_WMI, "high 5 GHz shared"); 5162 + /* If the shared mac lower 5 GHz frequency is greater than 5163 + * non-shared mac lower 5 GHz frequency then the shared mac has 5164 + * high 5 GHz shared with 2.4 GHz. So non-shared mac's 5 GHz high 5165 + * freq should be less than the shared mac's low 5 GHz freq. 5166 + */ 5167 + if (non_shared_range->high_5ghz_freq >= 5168 + shared_mac_range->low_5ghz_freq) 5169 + non_shared_range->high_5ghz_freq = 5170 + max_t(u32, shared_mac_range->low_5ghz_freq - 10, 5171 + non_shared_range->low_5ghz_freq); 5172 + } else if (shared_mac_range->high_5ghz_freq < 5173 + non_shared_range->high_5ghz_freq) { 5174 + ath12k_dbg(ab, ATH12K_DBG_WMI, "low 5 GHz shared"); 5175 + /* If the shared mac high 5 GHz frequency is less than 5176 + * non-shared mac high 5 GHz frequency then the shared mac has 5177 + * low 5 GHz shared with 2.4 GHz. So non-shared mac's 5 GHz low 5178 + * freq should be greater than the shared mac's high 5 GHz freq. 
5179 + */ 5180 + if (shared_mac_range->high_5ghz_freq >= 5181 + non_shared_range->low_5ghz_freq) 5182 + non_shared_range->low_5ghz_freq = 5183 + min_t(u32, shared_mac_range->high_5ghz_freq + 10, 5184 + non_shared_range->high_5ghz_freq); 5185 + } else { 5186 + ath12k_warn(ab, "invalid SBS range with all 5 GHz shared"); 5187 + return -EINVAL; 5188 + } 5189 + 5190 + return 0; 5191 + } 5192 + 5193 + static void ath12k_wmi_update_sbs_freq_info(struct ath12k_base *ab) 5194 + { 5195 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 5196 + struct ath12k_hw_mode_freq_range_arg *mac_range; 5197 + u16 sbs_range_sep; 5198 + u8 phy_id; 5199 + int ret; 5200 + 5201 + mac_range = hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS]; 5202 + 5203 + /* If sbs_lower_band_end_freq has a value, then the frequency range 5204 + * will be split using that value. 5205 + */ 5206 + sbs_range_sep = ab->wmi_ab.sbs_lower_band_end_freq; 5207 + if (sbs_range_sep) { 5208 + ath12k_wmi_fill_upper_share_sbs_freq(ab, sbs_range_sep, 5209 + mac_range); 5210 + ath12k_wmi_fill_lower_share_sbs_freq(ab, sbs_range_sep, 5211 + mac_range); 5212 + /* Hardware specifies the range boundary with sbs_range_sep, 5213 + * (i.e. the boundary between 5 GHz high and 5 GHz low), 5214 + * reset the original one to make sure it will not get used. 5215 + */ 5216 + memset(mac_range, 0, sizeof(*mac_range) * MAX_RADIOS); 5217 + return; 5218 + } 5219 + 5220 + /* If sbs_lower_band_end_freq is not set that means firmware will send one 5221 + * shared mac range and one non-shared mac range. so update that freq. 
5222 + */ 5223 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5224 + ret = ath12k_wmi_modify_sbs_freq(ab, phy_id); 5225 + if (ret) { 5226 + memset(mac_range, 0, sizeof(*mac_range) * MAX_RADIOS); 5227 + break; 5228 + } 5229 + } 5230 + } 5231 + 5232 + static void 5233 + ath12k_wmi_update_mac_freq_info(struct ath12k_base *ab, 5234 + enum wmi_host_hw_mode_config_type hw_config_type, 5235 + u32 phy_id, 5236 + struct ath12k_svc_ext_mac_phy_info *mac_cap) 5237 + { 5238 + if (phy_id >= MAX_RADIOS) { 5239 + ath12k_err(ab, "mac more than two not supported: %d", phy_id); 5240 + return; 5241 + } 5242 + 5243 + ath12k_dbg(ab, ATH12K_DBG_WMI, 5244 + "hw_mode_cfg %d mac %d band 0x%x SBS cutoff freq %d 2 GHz [%d - %d] 5 GHz [%d - %d]", 5245 + hw_config_type, phy_id, mac_cap->supported_bands, 5246 + ab->wmi_ab.sbs_lower_band_end_freq, 5247 + mac_cap->hw_freq_range.low_2ghz_freq, 5248 + mac_cap->hw_freq_range.high_2ghz_freq, 5249 + mac_cap->hw_freq_range.low_5ghz_freq, 5250 + mac_cap->hw_freq_range.high_5ghz_freq); 5251 + 5252 + switch (hw_config_type) { 5253 + case WMI_HOST_HW_MODE_SINGLE: 5254 + if (phy_id) { 5255 + ath12k_dbg(ab, ATH12K_DBG_WMI, "mac phy 1 is not supported"); 5256 + break; 5257 + } 5258 + ath12k_wmi_update_freq_info(ab, mac_cap, ATH12K_HW_MODE_SMM, phy_id); 5259 + break; 5260 + 5261 + case WMI_HOST_HW_MODE_DBS: 5262 + if (!ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_DBS)) 5263 + ath12k_wmi_update_freq_info(ab, mac_cap, 5264 + ATH12K_HW_MODE_DBS, phy_id); 5265 + break; 5266 + case WMI_HOST_HW_MODE_DBS_SBS: 5267 + case WMI_HOST_HW_MODE_DBS_OR_SBS: 5268 + ath12k_wmi_update_freq_info(ab, mac_cap, ATH12K_HW_MODE_DBS, phy_id); 5269 + if (ab->wmi_ab.sbs_lower_band_end_freq || 5270 + mac_cap->hw_freq_range.low_5ghz_freq || 5271 + mac_cap->hw_freq_range.low_2ghz_freq) 5272 + ath12k_wmi_update_freq_info(ab, mac_cap, ATH12K_HW_MODE_SBS, 5273 + phy_id); 5274 + 5275 + if (ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_DBS)) 5276 + 
ath12k_wmi_update_dbs_freq_info(ab); 5277 + if (ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS)) 5278 + ath12k_wmi_update_sbs_freq_info(ab); 5279 + break; 5280 + case WMI_HOST_HW_MODE_SBS: 5281 + case WMI_HOST_HW_MODE_SBS_PASSIVE: 5282 + ath12k_wmi_update_freq_info(ab, mac_cap, ATH12K_HW_MODE_SBS, phy_id); 5283 + if (ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS)) 5284 + ath12k_wmi_update_sbs_freq_info(ab); 5285 + 5286 + break; 5287 + default: 5288 + break; 5289 + } 5290 + } 5291 + 5292 + static bool ath12k_wmi_sbs_range_present(struct ath12k_base *ab) 5293 + { 5294 + if (ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS) || 5295 + (ab->wmi_ab.sbs_lower_band_end_freq && 5296 + ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS_LOWER_SHARE) && 5297 + ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS_UPPER_SHARE))) 5298 + return true; 5299 + 5300 + return false; 5301 + } 5302 + 5303 + static int ath12k_wmi_update_hw_mode_list(struct ath12k_base *ab) 5304 + { 5305 + struct ath12k_svc_ext_info *svc_ext_info = &ab->wmi_ab.svc_ext_info; 5306 + struct ath12k_hw_mode_info *info = &ab->wmi_ab.hw_mode_info; 5307 + enum wmi_host_hw_mode_config_type hw_config_type; 5308 + struct ath12k_svc_ext_mac_phy_info *tmp; 5309 + bool dbs_mode = false, sbs_mode = false; 5310 + u32 i, j = 0; 5311 + 5312 + if (!svc_ext_info->num_hw_modes) { 5313 + ath12k_err(ab, "invalid number of hw modes"); 5314 + return -EINVAL; 5315 + } 5316 + 5317 + ath12k_dbg(ab, ATH12K_DBG_WMI, "updated HW mode list: num modes %d", 5318 + svc_ext_info->num_hw_modes); 5319 + 5320 + memset(info->freq_range_caps, 0, sizeof(info->freq_range_caps)); 5321 + 5322 + for (i = 0; i < svc_ext_info->num_hw_modes; i++) { 5323 + if (j >= ATH12K_MAX_MAC_PHY_CAP) 5324 + return -EINVAL; 5325 + 5326 + /* Update for MAC0 */ 5327 + tmp = &svc_ext_info->mac_phy_info[j++]; 5328 + hw_config_type = tmp->hw_mode_config_type; 5329 + ath12k_wmi_update_mac_freq_info(ab, hw_config_type, tmp->phy_id, tmp); 
5330 + 5331 + /* SBS and DBS have dual MAC. Up to 2 MACs are considered. */ 5332 + if (hw_config_type == WMI_HOST_HW_MODE_DBS || 5333 + hw_config_type == WMI_HOST_HW_MODE_SBS_PASSIVE || 5334 + hw_config_type == WMI_HOST_HW_MODE_SBS || 5335 + hw_config_type == WMI_HOST_HW_MODE_DBS_OR_SBS) { 5336 + if (j >= ATH12K_MAX_MAC_PHY_CAP) 5337 + return -EINVAL; 5338 + /* Update for MAC1 */ 5339 + tmp = &svc_ext_info->mac_phy_info[j++]; 5340 + ath12k_wmi_update_mac_freq_info(ab, hw_config_type, 5341 + tmp->phy_id, tmp); 5342 + 5343 + if (hw_config_type == WMI_HOST_HW_MODE_DBS || 5344 + hw_config_type == WMI_HOST_HW_MODE_DBS_OR_SBS) 5345 + dbs_mode = true; 5346 + 5347 + if (ath12k_wmi_sbs_range_present(ab) && 5348 + (hw_config_type == WMI_HOST_HW_MODE_SBS_PASSIVE || 5349 + hw_config_type == WMI_HOST_HW_MODE_SBS || 5350 + hw_config_type == WMI_HOST_HW_MODE_DBS_OR_SBS)) 5351 + sbs_mode = true; 5352 + } 5353 + } 5354 + 5355 + info->support_dbs = dbs_mode; 5356 + info->support_sbs = sbs_mode; 5357 + 5358 + ath12k_wmi_dump_freq_range(ab); 5359 + 5360 + return 0; 5361 + } 5362 + 4995 5363 static int ath12k_wmi_svc_rdy_ext2_parse(struct ath12k_base *ab, 4996 5364 u16 tag, u16 len, 4997 5365 const void *ptr, void *data) 4998 5366 { 5367 + const struct ath12k_wmi_dbs_or_sbs_cap_params *dbs_or_sbs_caps; 4999 5368 struct ath12k_wmi_pdev *wmi_handle = &ab->wmi_ab.wmi[0]; 5000 5369 struct ath12k_wmi_svc_rdy_ext2_parse *parse = data; 5001 5370 int ret; ··· 5476 4967 } 5477 4968 5478 4969 parse->mac_phy_caps_ext_done = true; 4970 + } else if (!parse->hal_reg_caps_ext2_done) { 4971 + parse->hal_reg_caps_ext2_done = true; 4972 + } else if (!parse->scan_radio_caps_ext2_done) { 4973 + parse->scan_radio_caps_ext2_done = true; 4974 + } else if (!parse->twt_caps_done) { 4975 + parse->twt_caps_done = true; 4976 + } else if (!parse->htt_msdu_idx_to_qtype_map_done) { 4977 + parse->htt_msdu_idx_to_qtype_map_done = true; 4978 + } else if (!parse->dbs_or_sbs_cap_ext_done) { 4979 + dbs_or_sbs_caps = ptr; 
4980 + ab->wmi_ab.sbs_lower_band_end_freq = 4981 + __le32_to_cpu(dbs_or_sbs_caps->sbs_lower_band_end_freq); 4982 + 4983 + ath12k_dbg(ab, ATH12K_DBG_WMI, "sbs_lower_band_end_freq %u\n", 4984 + ab->wmi_ab.sbs_lower_band_end_freq); 4985 + 4986 + ret = ath12k_wmi_update_hw_mode_list(ab); 4987 + if (ret) { 4988 + ath12k_warn(ab, "failed to update hw mode list: %d\n", 4989 + ret); 4990 + return ret; 4991 + } 4992 + 4993 + parse->dbs_or_sbs_cap_ext_done = true; 5479 4994 } 4995 + 5480 4996 break; 5481 4997 default: 5482 4998 break; ··· 8160 7626 &parse); 8161 7627 } 8162 7628 7629 + static void ath12k_wmi_fw_stats_process(struct ath12k *ar, 7630 + struct ath12k_fw_stats *stats) 7631 + { 7632 + struct ath12k_base *ab = ar->ab; 7633 + struct ath12k_pdev *pdev; 7634 + bool is_end = true; 7635 + size_t total_vdevs_started = 0; 7636 + int i; 7637 + 7638 + if (stats->stats_id == WMI_REQUEST_VDEV_STAT) { 7639 + if (list_empty(&stats->vdevs)) { 7640 + ath12k_warn(ab, "empty vdev stats"); 7641 + return; 7642 + } 7643 + /* FW sends all the active VDEV stats irrespective of PDEV, 7644 + * hence limit until the count of all VDEVs started 7645 + */ 7646 + rcu_read_lock(); 7647 + for (i = 0; i < ab->num_radios; i++) { 7648 + pdev = rcu_dereference(ab->pdevs_active[i]); 7649 + if (pdev && pdev->ar) 7650 + total_vdevs_started += pdev->ar->num_started_vdevs; 7651 + } 7652 + rcu_read_unlock(); 7653 + 7654 + if (total_vdevs_started) 7655 + is_end = ((++ar->fw_stats.num_vdev_recvd) == 7656 + total_vdevs_started); 7657 + 7658 + list_splice_tail_init(&stats->vdevs, 7659 + &ar->fw_stats.vdevs); 7660 + 7661 + if (is_end) 7662 + complete(&ar->fw_stats_done); 7663 + 7664 + return; 7665 + } 7666 + 7667 + if (stats->stats_id == WMI_REQUEST_BCN_STAT) { 7668 + if (list_empty(&stats->bcn)) { 7669 + ath12k_warn(ab, "empty beacon stats"); 7670 + return; 7671 + } 7672 + /* Mark end until we reached the count of all started VDEVs 7673 + * within the PDEV 7674 + */ 7675 + if (ar->num_started_vdevs) 7676 + 
is_end = ((++ar->fw_stats.num_bcn_recvd) == 7677 + ar->num_started_vdevs); 7678 + 7679 + list_splice_tail_init(&stats->bcn, 7680 + &ar->fw_stats.bcn); 7681 + 7682 + if (is_end) 7683 + complete(&ar->fw_stats_done); 7684 + } 7685 + } 7686 + 8163 7687 static void ath12k_update_stats_event(struct ath12k_base *ab, struct sk_buff *skb) 8164 7688 { 8165 7689 struct ath12k_fw_stats stats = {}; ··· 8247 7655 8248 7656 spin_lock_bh(&ar->data_lock); 8249 7657 8250 - /* WMI_REQUEST_PDEV_STAT can be requested via .get_txpower mac ops or via 8251 - * debugfs fw stats. Therefore, processing it separately. 8252 - */ 7658 + /* Handle WMI_REQUEST_PDEV_STAT status update */ 8253 7659 if (stats.stats_id == WMI_REQUEST_PDEV_STAT) { 8254 7660 list_splice_tail_init(&stats.pdevs, &ar->fw_stats.pdevs); 8255 - ar->fw_stats.fw_stats_done = true; 7661 + complete(&ar->fw_stats_done); 8256 7662 goto complete; 8257 7663 } 8258 7664 8259 - /* WMI_REQUEST_VDEV_STAT and WMI_REQUEST_BCN_STAT are currently requested only 8260 - * via debugfs fw stats. Hence, processing these in debugfs context. 8261 - */ 8262 - ath12k_debugfs_fw_stats_process(ar, &stats); 7665 + /* Handle WMI_REQUEST_VDEV_STAT and WMI_REQUEST_BCN_STAT updates. 
*/ 7666 + ath12k_wmi_fw_stats_process(ar, &stats); 8263 7667 8264 7668 complete: 8265 7669 complete(&ar->fw_stats_complete); ··· 10498 9910 } 10499 9911 10500 9912 return 0; 9913 + } 9914 + 9915 + static int 9916 + ath12k_wmi_fill_disallowed_bmap(struct ath12k_base *ab, 9917 + struct wmi_disallowed_mlo_mode_bitmap_params *dislw_bmap, 9918 + struct wmi_mlo_link_set_active_arg *arg) 9919 + { 9920 + struct wmi_ml_disallow_mode_bmap_arg *dislw_bmap_arg; 9921 + u8 i; 9922 + 9923 + if (arg->num_disallow_mode_comb > 9924 + ARRAY_SIZE(arg->disallow_bmap)) { 9925 + ath12k_warn(ab, "invalid num_disallow_mode_comb: %d", 9926 + arg->num_disallow_mode_comb); 9927 + return -EINVAL; 9928 + } 9929 + 9930 + dislw_bmap_arg = &arg->disallow_bmap[0]; 9931 + for (i = 0; i < arg->num_disallow_mode_comb; i++) { 9932 + dislw_bmap->tlv_header = 9933 + ath12k_wmi_tlv_cmd_hdr(0, sizeof(*dislw_bmap)); 9934 + dislw_bmap->disallowed_mode_bitmap = 9935 + cpu_to_le32(dislw_bmap_arg->disallowed_mode); 9936 + dislw_bmap->ieee_link_id_comb = 9937 + le32_encode_bits(dislw_bmap_arg->ieee_link_id[0], 9938 + WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_1) | 9939 + le32_encode_bits(dislw_bmap_arg->ieee_link_id[1], 9940 + WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_2) | 9941 + le32_encode_bits(dislw_bmap_arg->ieee_link_id[2], 9942 + WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_3) | 9943 + le32_encode_bits(dislw_bmap_arg->ieee_link_id[3], 9944 + WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_4); 9945 + 9946 + ath12k_dbg(ab, ATH12K_DBG_WMI, 9947 + "entry %d disallowed_mode %d ieee_link_id_comb 0x%x", 9948 + i, dislw_bmap_arg->disallowed_mode, 9949 + dislw_bmap_arg->ieee_link_id_comb); 9950 + dislw_bmap++; 9951 + dislw_bmap_arg++; 9952 + } 9953 + 9954 + return 0; 9955 + } 9956 + 9957 + int ath12k_wmi_send_mlo_link_set_active_cmd(struct ath12k_base *ab, 9958 + struct wmi_mlo_link_set_active_arg *arg) 9959 + { 9960 + struct wmi_disallowed_mlo_mode_bitmap_params *disallowed_mode_bmap; 9961 + struct 
wmi_mlo_set_active_link_number_params *link_num_param; 9962 + u32 num_link_num_param = 0, num_vdev_bitmap = 0; 9963 + struct ath12k_wmi_base *wmi_ab = &ab->wmi_ab; 9964 + struct wmi_mlo_link_set_active_cmd *cmd; 9965 + u32 num_inactive_vdev_bitmap = 0; 9966 + u32 num_disallow_mode_comb = 0; 9967 + struct wmi_tlv *tlv; 9968 + struct sk_buff *skb; 9969 + __le32 *vdev_bitmap; 9970 + void *buf_ptr; 9971 + int i, ret; 9972 + u32 len; 9973 + 9974 + if (!arg->num_vdev_bitmap && !arg->num_link_entry) { 9975 + ath12k_warn(ab, "Invalid num_vdev_bitmap and num_link_entry"); 9976 + return -EINVAL; 9977 + } 9978 + 9979 + switch (arg->force_mode) { 9980 + case WMI_MLO_LINK_FORCE_MODE_ACTIVE_LINK_NUM: 9981 + case WMI_MLO_LINK_FORCE_MODE_INACTIVE_LINK_NUM: 9982 + num_link_num_param = arg->num_link_entry; 9983 + fallthrough; 9984 + case WMI_MLO_LINK_FORCE_MODE_ACTIVE: 9985 + case WMI_MLO_LINK_FORCE_MODE_INACTIVE: 9986 + case WMI_MLO_LINK_FORCE_MODE_NO_FORCE: 9987 + num_vdev_bitmap = arg->num_vdev_bitmap; 9988 + break; 9989 + case WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE: 9990 + num_vdev_bitmap = arg->num_vdev_bitmap; 9991 + num_inactive_vdev_bitmap = arg->num_inactive_vdev_bitmap; 9992 + break; 9993 + default: 9994 + ath12k_warn(ab, "Invalid force mode: %u", arg->force_mode); 9995 + return -EINVAL; 9996 + } 9997 + 9998 + num_disallow_mode_comb = arg->num_disallow_mode_comb; 9999 + len = sizeof(*cmd) + 10000 + TLV_HDR_SIZE + sizeof(*link_num_param) * num_link_num_param + 10001 + TLV_HDR_SIZE + sizeof(*vdev_bitmap) * num_vdev_bitmap + 10002 + TLV_HDR_SIZE + TLV_HDR_SIZE + TLV_HDR_SIZE + 10003 + TLV_HDR_SIZE + sizeof(*disallowed_mode_bmap) * num_disallow_mode_comb; 10004 + if (arg->force_mode == WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE) 10005 + len += sizeof(*vdev_bitmap) * num_inactive_vdev_bitmap; 10006 + 10007 + skb = ath12k_wmi_alloc_skb(wmi_ab, len); 10008 + if (!skb) 10009 + return -ENOMEM; 10010 + 10011 + cmd = (struct wmi_mlo_link_set_active_cmd *)skb->data; 10012 + 
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_MLO_LINK_SET_ACTIVE_CMD, 10013 + sizeof(*cmd)); 10014 + cmd->force_mode = cpu_to_le32(arg->force_mode); 10015 + cmd->reason = cpu_to_le32(arg->reason); 10016 + ath12k_dbg(ab, ATH12K_DBG_WMI, 10017 + "mode %d reason %d num_link_num_param %d num_vdev_bitmap %d inactive %d num_disallow_mode_comb %d", 10018 + arg->force_mode, arg->reason, num_link_num_param, 10019 + num_vdev_bitmap, num_inactive_vdev_bitmap, 10020 + num_disallow_mode_comb); 10021 + 10022 + buf_ptr = skb->data + sizeof(*cmd); 10023 + tlv = buf_ptr; 10024 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, 10025 + sizeof(*link_num_param) * num_link_num_param); 10026 + buf_ptr += TLV_HDR_SIZE; 10027 + 10028 + if (num_link_num_param) { 10029 + cmd->ctrl_flags = 10030 + le32_encode_bits(arg->ctrl_flags.dync_force_link_num ? 1 : 0, 10031 + CRTL_F_DYNC_FORCE_LINK_NUM); 10032 + 10033 + link_num_param = buf_ptr; 10034 + for (i = 0; i < num_link_num_param; i++) { 10035 + link_num_param->tlv_header = 10036 + ath12k_wmi_tlv_cmd_hdr(0, sizeof(*link_num_param)); 10037 + link_num_param->num_of_link = 10038 + cpu_to_le32(arg->link_num[i].num_of_link); 10039 + link_num_param->vdev_type = 10040 + cpu_to_le32(arg->link_num[i].vdev_type); 10041 + link_num_param->vdev_subtype = 10042 + cpu_to_le32(arg->link_num[i].vdev_subtype); 10043 + link_num_param->home_freq = 10044 + cpu_to_le32(arg->link_num[i].home_freq); 10045 + ath12k_dbg(ab, ATH12K_DBG_WMI, 10046 + "entry %d num_of_link %d vdev type %d subtype %d freq %d control_flags %d", 10047 + i, arg->link_num[i].num_of_link, 10048 + arg->link_num[i].vdev_type, 10049 + arg->link_num[i].vdev_subtype, 10050 + arg->link_num[i].home_freq, 10051 + __le32_to_cpu(cmd->ctrl_flags)); 10052 + link_num_param++; 10053 + } 10054 + 10055 + buf_ptr += sizeof(*link_num_param) * num_link_num_param; 10056 + } 10057 + 10058 + tlv = buf_ptr; 10059 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 10060 + sizeof(*vdev_bitmap) * 
num_vdev_bitmap); 10061 + buf_ptr += TLV_HDR_SIZE; 10062 + 10063 + if (num_vdev_bitmap) { 10064 + vdev_bitmap = buf_ptr; 10065 + for (i = 0; i < num_vdev_bitmap; i++) { 10066 + vdev_bitmap[i] = cpu_to_le32(arg->vdev_bitmap[i]); 10067 + ath12k_dbg(ab, ATH12K_DBG_WMI, "entry %d vdev_id_bitmap 0x%x", 10068 + i, arg->vdev_bitmap[i]); 10069 + } 10070 + 10071 + buf_ptr += sizeof(*vdev_bitmap) * num_vdev_bitmap; 10072 + } 10073 + 10074 + if (arg->force_mode == WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE) { 10075 + tlv = buf_ptr; 10076 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 10077 + sizeof(*vdev_bitmap) * 10078 + num_inactive_vdev_bitmap); 10079 + buf_ptr += TLV_HDR_SIZE; 10080 + 10081 + if (num_inactive_vdev_bitmap) { 10082 + vdev_bitmap = buf_ptr; 10083 + for (i = 0; i < num_inactive_vdev_bitmap; i++) { 10084 + vdev_bitmap[i] = 10085 + cpu_to_le32(arg->inactive_vdev_bitmap[i]); 10086 + ath12k_dbg(ab, ATH12K_DBG_WMI, 10087 + "entry %d inactive_vdev_id_bitmap 0x%x", 10088 + i, arg->inactive_vdev_bitmap[i]); 10089 + } 10090 + 10091 + buf_ptr += sizeof(*vdev_bitmap) * num_inactive_vdev_bitmap; 10092 + } 10093 + } else { 10094 + /* add empty vdev bitmap2 tlv */ 10095 + tlv = buf_ptr; 10096 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 0); 10097 + buf_ptr += TLV_HDR_SIZE; 10098 + } 10099 + 10100 + /* add empty ieee_link_id_bitmap tlv */ 10101 + tlv = buf_ptr; 10102 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 0); 10103 + buf_ptr += TLV_HDR_SIZE; 10104 + 10105 + /* add empty ieee_link_id_bitmap2 tlv */ 10106 + tlv = buf_ptr; 10107 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 0); 10108 + buf_ptr += TLV_HDR_SIZE; 10109 + 10110 + tlv = buf_ptr; 10111 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, 10112 + sizeof(*disallowed_mode_bmap) * 10113 + arg->num_disallow_mode_comb); 10114 + buf_ptr += TLV_HDR_SIZE; 10115 + 10116 + ret = ath12k_wmi_fill_disallowed_bmap(ab, buf_ptr, arg); 10117 + if (ret) 10118 + goto free_skb; 10119 + 
10120 + ret = ath12k_wmi_cmd_send(&wmi_ab->wmi[0], skb, WMI_MLO_LINK_SET_ACTIVE_CMDID); 10121 + if (ret) { 10122 + ath12k_warn(ab, 10123 + "failed to send WMI_MLO_LINK_SET_ACTIVE_CMDID: %d\n", ret); 10124 + goto free_skb; 10125 + } 10126 + 10127 + ath12k_dbg(ab, ATH12K_DBG_WMI, "WMI mlo link set active cmd"); 10128 + 10129 + return ret; 10130 + 10131 + free_skb: 10132 + dev_kfree_skb(skb); 10133 + return ret; 10501 10134 }
+179 -1
drivers/net/wireless/ath/ath12k/wmi.h
··· 1974 1974 WMI_TAG_TPC_STATS_CTL_PWR_TABLE_EVENT, 1975 1975 WMI_TAG_VDEV_SET_TPC_POWER_CMD = 0x3B5, 1976 1976 WMI_TAG_VDEV_CH_POWER_INFO, 1977 + WMI_TAG_MLO_LINK_SET_ACTIVE_CMD = 0x3BE, 1977 1978 WMI_TAG_EHT_RATE_SET = 0x3C4, 1978 1979 WMI_TAG_DCS_AWGN_INT_TYPE = 0x3C5, 1979 1980 WMI_TAG_MLO_TX_SEND_PARAMS, ··· 2618 2617 __le32 num_chainmask_tables; 2619 2618 } __packed; 2620 2619 2620 + #define WMI_HW_MODE_CAP_CFG_TYPE GENMASK(27, 0) 2621 + 2621 2622 struct ath12k_wmi_hw_mode_cap_params { 2622 2623 __le32 tlv_header; 2623 2624 __le32 hw_mode_id; ··· 2669 2666 __le32 he_cap_info_2g_ext; 2670 2667 __le32 he_cap_info_5g_ext; 2671 2668 __le32 he_cap_info_internal; 2669 + __le32 wireless_modes; 2670 + __le32 low_2ghz_chan_freq; 2671 + __le32 high_2ghz_chan_freq; 2672 + __le32 low_5ghz_chan_freq; 2673 + __le32 high_5ghz_chan_freq; 2674 + __le32 nss_ratio; 2672 2675 } __packed; 2673 2676 2674 2677 struct ath12k_wmi_hal_reg_caps_ext_params { ··· 2746 2737 __le32 max_num_linkview_peers; 2747 2738 __le32 max_num_msduq_supported_per_tid; 2748 2739 __le32 default_num_msduq_supported_per_tid; 2740 + } __packed; 2741 + 2742 + struct ath12k_wmi_dbs_or_sbs_cap_params { 2743 + __le32 hw_mode_id; 2744 + __le32 sbs_lower_band_end_freq; 2749 2745 } __packed; 2750 2746 2751 2747 struct ath12k_wmi_caps_ext_params { ··· 5063 5049 u32 rx_decap_mode; 5064 5050 }; 5065 5051 5052 + struct ath12k_hw_mode_freq_range_arg { 5053 + u32 low_2ghz_freq; 5054 + u32 high_2ghz_freq; 5055 + u32 low_5ghz_freq; 5056 + u32 high_5ghz_freq; 5057 + }; 5058 + 5059 + struct ath12k_svc_ext_mac_phy_info { 5060 + enum wmi_host_hw_mode_config_type hw_mode_config_type; 5061 + u32 phy_id; 5062 + u32 supported_bands; 5063 + struct ath12k_hw_mode_freq_range_arg hw_freq_range; 5064 + }; 5065 + 5066 + #define ATH12K_MAX_MAC_PHY_CAP 8 5067 + 5068 + struct ath12k_svc_ext_info { 5069 + u32 num_hw_modes; 5070 + struct ath12k_svc_ext_mac_phy_info mac_phy_info[ATH12K_MAX_MAC_PHY_CAP]; 5071 + }; 5072 + 5073 + /** 5074 + * 
enum ath12k_hw_mode - enum for host mode 5075 + * @ATH12K_HW_MODE_SMM: Single mac mode 5076 + * @ATH12K_HW_MODE_DBS: DBS mode 5077 + * @ATH12K_HW_MODE_SBS: SBS mode with either high share or low share 5078 + * @ATH12K_HW_MODE_SBS_UPPER_SHARE: Higher 5 GHz shared with 2.4 GHz 5079 + * @ATH12K_HW_MODE_SBS_LOWER_SHARE: Lower 5 GHz shared with 2.4 GHz 5080 + * @ATH12K_HW_MODE_MAX: Max, used to indicate invalid mode 5081 + */ 5082 + enum ath12k_hw_mode { 5083 + ATH12K_HW_MODE_SMM, 5084 + ATH12K_HW_MODE_DBS, 5085 + ATH12K_HW_MODE_SBS, 5086 + ATH12K_HW_MODE_SBS_UPPER_SHARE, 5087 + ATH12K_HW_MODE_SBS_LOWER_SHARE, 5088 + ATH12K_HW_MODE_MAX, 5089 + }; 5090 + 5091 + struct ath12k_hw_mode_info { 5092 + bool support_dbs:1; 5093 + bool support_sbs:1; 5094 + 5095 + struct ath12k_hw_mode_freq_range_arg freq_range_caps[ATH12K_HW_MODE_MAX] 5096 + [MAX_RADIOS]; 5097 + }; 5098 + 5066 5099 struct ath12k_wmi_base { 5067 5100 struct ath12k_base *ab; 5068 5101 struct ath12k_wmi_pdev wmi[MAX_RADIOS]; ··· 5127 5066 enum wmi_host_hw_mode_config_type preferred_hw_mode; 5128 5067 5129 5068 struct ath12k_wmi_target_cap_arg *targ_cap; 5069 + 5070 + struct ath12k_svc_ext_info svc_ext_info; 5071 + u32 sbs_lower_band_end_freq; 5072 + struct ath12k_hw_mode_info hw_mode_info; 5130 5073 }; 5131 5074 5132 5075 struct wmi_pdev_set_bios_interface_cmd { ··· 6062 5997 */ 6063 5998 } __packed; 6064 5999 6000 + #define CRTL_F_DYNC_FORCE_LINK_NUM GENMASK(3, 2) 6001 + 6002 + struct wmi_mlo_link_set_active_cmd { 6003 + __le32 tlv_header; 6004 + __le32 force_mode; 6005 + __le32 reason; 6006 + __le32 use_ieee_link_id_bitmap; 6007 + struct ath12k_wmi_mac_addr_params ap_mld_mac_addr; 6008 + __le32 ctrl_flags; 6009 + } __packed; 6010 + 6011 + struct wmi_mlo_set_active_link_number_params { 6012 + __le32 tlv_header; 6013 + __le32 num_of_link; 6014 + __le32 vdev_type; 6015 + __le32 vdev_subtype; 6016 + __le32 home_freq; 6017 + } __packed; 6018 + 6019 + #define WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_1 GENMASK(7, 0) 
6020 + #define WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_2 GENMASK(15, 8) 6021 + #define WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_3 GENMASK(23, 16) 6022 + #define WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_4 GENMASK(31, 24) 6023 + 6024 + struct wmi_disallowed_mlo_mode_bitmap_params { 6025 + __le32 tlv_header; 6026 + __le32 disallowed_mode_bitmap; 6027 + __le32 ieee_link_id_comb; 6028 + } __packed; 6029 + 6030 + enum wmi_mlo_link_force_mode { 6031 + WMI_MLO_LINK_FORCE_MODE_ACTIVE = 1, 6032 + WMI_MLO_LINK_FORCE_MODE_INACTIVE = 2, 6033 + WMI_MLO_LINK_FORCE_MODE_ACTIVE_LINK_NUM = 3, 6034 + WMI_MLO_LINK_FORCE_MODE_INACTIVE_LINK_NUM = 4, 6035 + WMI_MLO_LINK_FORCE_MODE_NO_FORCE = 5, 6036 + WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE = 6, 6037 + WMI_MLO_LINK_FORCE_MODE_NON_FORCE_UPDATE = 7, 6038 + }; 6039 + 6040 + enum wmi_mlo_link_force_reason { 6041 + WMI_MLO_LINK_FORCE_REASON_NEW_CONNECT = 1, 6042 + WMI_MLO_LINK_FORCE_REASON_NEW_DISCONNECT = 2, 6043 + WMI_MLO_LINK_FORCE_REASON_LINK_REMOVAL = 3, 6044 + WMI_MLO_LINK_FORCE_REASON_TDLS = 4, 6045 + WMI_MLO_LINK_FORCE_REASON_REVERT_FAILURE = 5, 6046 + WMI_MLO_LINK_FORCE_REASON_LINK_DELETE = 6, 6047 + WMI_MLO_LINK_FORCE_REASON_SINGLE_LINK_EMLSR_OP = 7, 6048 + }; 6049 + 6050 + struct wmi_mlo_link_num_arg { 6051 + u32 num_of_link; 6052 + u32 vdev_type; 6053 + u32 vdev_subtype; 6054 + u32 home_freq; 6055 + }; 6056 + 6057 + struct wmi_mlo_control_flags_arg { 6058 + bool overwrite_force_active_bitmap; 6059 + bool overwrite_force_inactive_bitmap; 6060 + bool dync_force_link_num; 6061 + bool post_re_evaluate; 6062 + u8 post_re_evaluate_loops; 6063 + bool dont_reschedule_workqueue; 6064 + }; 6065 + 6066 + struct wmi_ml_link_force_cmd_arg { 6067 + u8 ap_mld_mac_addr[ETH_ALEN]; 6068 + u16 ieee_link_id_bitmap; 6069 + u16 ieee_link_id_bitmap2; 6070 + u8 link_num; 6071 + }; 6072 + 6073 + struct wmi_ml_disallow_mode_bmap_arg { 6074 + u32 disallowed_mode; 6075 + union { 6076 + u32 ieee_link_id_comb; 6077 + u8 ieee_link_id[4]; 6078 + }; 6079 + 
}; 6080 + 6081 + /* maximum size of link number param array 6082 + * for MLO link set active command 6083 + */ 6084 + #define WMI_MLO_LINK_NUM_SZ 2 6085 + 6086 + /* maximum size of vdev bitmap array for 6087 + * MLO link set active command 6088 + */ 6089 + #define WMI_MLO_VDEV_BITMAP_SZ 2 6090 + 6091 + /* Max number of disallowed bitmap combination 6092 + * sent to firmware 6093 + */ 6094 + #define WMI_ML_MAX_DISALLOW_BMAP_COMB 4 6095 + 6096 + struct wmi_mlo_link_set_active_arg { 6097 + enum wmi_mlo_link_force_mode force_mode; 6098 + enum wmi_mlo_link_force_reason reason; 6099 + u32 num_link_entry; 6100 + u32 num_vdev_bitmap; 6101 + u32 num_inactive_vdev_bitmap; 6102 + struct wmi_mlo_link_num_arg link_num[WMI_MLO_LINK_NUM_SZ]; 6103 + u32 vdev_bitmap[WMI_MLO_VDEV_BITMAP_SZ]; 6104 + u32 inactive_vdev_bitmap[WMI_MLO_VDEV_BITMAP_SZ]; 6105 + struct wmi_mlo_control_flags_arg ctrl_flags; 6106 + bool use_ieee_link_id; 6107 + struct wmi_ml_link_force_cmd_arg force_cmd; 6108 + u32 num_disallow_mode_comb; 6109 + struct wmi_ml_disallow_mode_bmap_arg disallow_bmap[WMI_ML_MAX_DISALLOW_BMAP_COMB]; 6110 + }; 6111 + 6065 6112 void ath12k_wmi_init_qcn9274(struct ath12k_base *ab, 6066 6113 struct ath12k_wmi_resource_config_arg *config); 6067 6114 void ath12k_wmi_init_wcn7850(struct ath12k_base *ab, ··· 6372 6195 int ath12k_wmi_send_vdev_set_tpc_power(struct ath12k *ar, 6373 6196 u32 vdev_id, 6374 6197 struct ath12k_reg_tpc_power_info *param); 6375 - 6198 + int ath12k_wmi_send_mlo_link_set_active_cmd(struct ath12k_base *ab, 6199 + struct wmi_mlo_link_set_active_arg *param); 6376 6200 #endif
+3 -1
drivers/net/wireless/ath/ath6kl/bmi.c
··· 87 87 * We need to do some backwards compatibility to make this work. 88 88 */ 89 89 if (le32_to_cpu(targ_info->byte_count) != sizeof(*targ_info)) { 90 - WARN_ON(1); 90 + ath6kl_err("mismatched byte count %d vs. expected %zd\n", 91 + le32_to_cpu(targ_info->byte_count), 92 + sizeof(*targ_info)); 91 93 return -EINVAL; 92 94 } 93 95
+13 -6
drivers/net/wireless/ath/carl9170/usb.c
··· 438 438 439 439 if (atomic_read(&ar->rx_anch_urbs) == 0) { 440 440 /* 441 - * The system is too slow to cope with 442 - * the enormous workload. We have simply 443 - * run out of active rx urbs and this 444 - * unfortunately leads to an unpredictable 445 - * device. 441 + * At this point, either the system is too slow to 442 + * cope with the enormous workload (so we have simply 443 + * run out of active rx urbs and this unfortunately 444 + * leads to an unpredictable device), or the device 445 + * is not fully functional after an unsuccessful 446 + * firmware loading attempt (so it doesn't pass 447 + * ieee80211_register_hw() and there is no internal 448 + * workqueue at all). 446 449 */ 447 450 448 451 - ieee80211_queue_work(ar->hw, &ar->ping_work); 451 + if (ar->registered) 452 + ieee80211_queue_work(ar->hw, &ar->ping_work); 453 + else 454 + pr_warn_once("device %s is not registered\n", 455 + dev_name(&ar->udev->dev)); 449 456 } 450 457 } else { 451 458 /*
+1
drivers/net/wireless/intel/iwlwifi/dvm/main.c
··· 1316 1316 sizeof(trans->conf.no_reclaim_cmds)); 1317 1317 memcpy(trans->conf.no_reclaim_cmds, no_reclaim_cmds, 1318 1318 sizeof(no_reclaim_cmds)); 1319 + trans->conf.n_no_reclaim_cmds = ARRAY_SIZE(no_reclaim_cmds); 1319 1320 1320 1321 switch (iwlwifi_mod_params.amsdu_size) { 1321 1322 case IWL_AMSDU_DEF:
+1
drivers/net/wireless/intel/iwlwifi/mld/mld.c
··· 77 77 78 78 /* Setup async RX handling */ 79 79 spin_lock_init(&mld->async_handlers_lock); 80 + INIT_LIST_HEAD(&mld->async_handlers_list); 80 81 wiphy_work_init(&mld->async_handlers_wk, 81 82 iwl_mld_async_handlers_wk); 82 83
+1 -1
drivers/net/wireless/intel/iwlwifi/mvm/mld-mac.c
··· 34 34 WIDE_ID(MAC_CONF_GROUP, 35 35 MAC_CONFIG_CMD), 0); 36 36 37 - if (WARN_ON(cmd_ver < 1 && cmd_ver > 3)) 37 + if (WARN_ON(cmd_ver < 1 || cmd_ver > 3)) 38 38 return; 39 39 40 40 cmd->id_and_color = cpu_to_le32(mvmvif->id);
+6 -5
drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
··· 166 166 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 167 167 struct iwl_context_info *ctxt_info; 168 168 struct iwl_context_info_rbd_cfg *rx_cfg; 169 - u32 control_flags = 0, rb_size; 169 + u32 control_flags = 0, rb_size, cb_size; 170 170 dma_addr_t phys; 171 171 int ret; 172 172 ··· 202 202 rb_size = IWL_CTXT_INFO_RB_SIZE_4K; 203 203 } 204 204 205 - WARN_ON(RX_QUEUE_CB_SIZE(iwl_trans_get_num_rbds(trans)) > 12); 205 + cb_size = RX_QUEUE_CB_SIZE(iwl_trans_get_num_rbds(trans)); 206 + if (WARN_ON(cb_size > 12)) 207 + cb_size = 12; 208 + 206 209 control_flags = IWL_CTXT_INFO_TFD_FORMAT_LONG; 207 - control_flags |= 208 - u32_encode_bits(RX_QUEUE_CB_SIZE(iwl_trans_get_num_rbds(trans)), 209 - IWL_CTXT_INFO_RB_CB_SIZE); 210 + control_flags |= u32_encode_bits(cb_size, IWL_CTXT_INFO_RB_CB_SIZE); 210 211 control_flags |= u32_encode_bits(rb_size, IWL_CTXT_INFO_RB_SIZE); 211 212 ctxt_info->control.control_flags = cpu_to_le32(control_flags); 212 213
+7 -14
drivers/nvme/host/ioctl.c
··· 429 429 pdu->result = le64_to_cpu(nvme_req(req)->result.u64); 430 430 431 431 /* 432 - * For iopoll, complete it directly. Note that using the uring_cmd 433 - * helper for this is safe only because we check blk_rq_is_poll(). 434 - * As that returns false if we're NOT on a polled queue, then it's 435 - * safe to use the polled completion helper. 436 - * 437 - * Otherwise, move the completion to task work. 432 + * IOPOLL could potentially complete this request directly, but 433 + * if multiple rings are polling on the same queue, then it's possible 434 + * for one ring to find completions for another ring. Punting the 435 + * completion via task_work will always direct it to the right 436 + * location, rather than potentially complete requests for ringA 437 + * under iopoll invocations from ringB. 438 438 */ 439 - if (blk_rq_is_poll(req)) { 440 - if (pdu->bio) 441 - blk_rq_unmap_user(pdu->bio); 442 - io_uring_cmd_iopoll_done(ioucmd, pdu->result, pdu->status); 443 - } else { 444 - io_uring_cmd_do_in_task_lazy(ioucmd, nvme_uring_task_cb); 445 - } 446 - 439 + io_uring_cmd_do_in_task_lazy(ioucmd, nvme_uring_task_cb); 447 440 return RQ_END_IO_FREE; 448 441 } 449 442
+6 -8
drivers/platform/x86/amd/hsmp/hsmp.c
··· 97 97 short_sleep = jiffies + msecs_to_jiffies(HSMP_SHORT_SLEEP); 98 98 timeout = jiffies + msecs_to_jiffies(HSMP_MSG_TIMEOUT); 99 99 100 - while (time_before(jiffies, timeout)) { 100 + while (true) { 101 101 ret = sock->amd_hsmp_rdwr(sock, mbinfo->msg_resp_off, &mbox_status, HSMP_RD); 102 102 if (ret) { 103 103 dev_err(sock->dev, "Error %d reading mailbox status\n", ret); ··· 106 106 107 107 if (mbox_status != HSMP_STATUS_NOT_READY) 108 108 break; 109 + 110 + if (!time_before(jiffies, timeout)) 111 + break; 112 + 109 113 if (time_before(jiffies, short_sleep)) 110 114 usleep_range(50, 100); 111 115 else ··· 214 210 return -ENODEV; 215 211 sock = &hsmp_pdev.sock[msg->sock_ind]; 216 212 217 - /* 218 - * The time taken by smu operation to complete is between 219 - * 10us to 1ms. Sometime it may take more time. 220 - * In SMP system timeout of 100 millisecs should 221 - * be enough for the previous thread to finish the operation 222 - */ 223 - ret = down_timeout(&sock->hsmp_sem, msecs_to_jiffies(HSMP_MSG_TIMEOUT)); 213 + ret = down_interruptible(&sock->hsmp_sem); 224 214 if (ret < 0) 225 215 return ret; 226 216
+9
drivers/platform/x86/amd/pmc/pmc-quirks.c
··· 225 225 DMI_MATCH(DMI_BOARD_NAME, "WUJIE14-GX4HRXL"), 226 226 } 227 227 }, 228 + /* https://bugzilla.kernel.org/show_bug.cgi?id=220116 */ 229 + { 230 + .ident = "PCSpecialist Lafite Pro V 14M", 231 + .driver_data = &quirk_spurious_8042, 232 + .matches = { 233 + DMI_MATCH(DMI_SYS_VENDOR, "PCSpecialist"), 234 + DMI_MATCH(DMI_PRODUCT_NAME, "Lafite Pro V 14M"), 235 + } 236 + }, 228 237 {} 229 238 }; 230 239
+2
drivers/platform/x86/amd/pmc/pmc.c
··· 157 157 return -ENOMEM; 158 158 } 159 159 160 + memset_io(dev->smu_virt_addr, 0, sizeof(struct smu_metrics)); 161 + 160 162 /* Start the logging */ 161 163 amd_pmc_send_cmd(dev, 0, NULL, SMU_MSG_LOG_RESET, false); 162 164 amd_pmc_send_cmd(dev, 0, NULL, SMU_MSG_LOG_START, false);
+1 -2
drivers/platform/x86/amd/pmf/core.c
··· 280 280 dev_err(dev->dev, "Invalid CPU id: 0x%x", dev->cpu_id); 281 281 } 282 282 283 - dev->buf = kzalloc(dev->mtable_size, GFP_KERNEL); 283 + dev->buf = devm_kzalloc(dev->dev, dev->mtable_size, GFP_KERNEL); 284 284 if (!dev->buf) 285 285 return -ENOMEM; 286 286 } ··· 493 493 mutex_destroy(&dev->lock); 494 494 mutex_destroy(&dev->update_mutex); 495 495 mutex_destroy(&dev->cb_mutex); 496 - kfree(dev->buf); 497 496 } 498 497 499 498 static const struct attribute_group *amd_pmf_driver_groups[] = {
+38 -70
drivers/platform/x86/amd/pmf/tee-if.c
··· 358 358 return -EINVAL; 359 359 360 360 /* re-alloc to the new buffer length of the policy binary */ 361 - new_policy_buf = memdup_user(buf, length); 362 - if (IS_ERR(new_policy_buf)) 363 - return PTR_ERR(new_policy_buf); 361 + new_policy_buf = devm_kzalloc(dev->dev, length, GFP_KERNEL); 362 + if (!new_policy_buf) 363 + return -ENOMEM; 364 364 365 - kfree(dev->policy_buf); 365 + if (copy_from_user(new_policy_buf, buf, length)) { 366 + devm_kfree(dev->dev, new_policy_buf); 367 + return -EFAULT; 368 + } 369 + 370 + devm_kfree(dev->dev, dev->policy_buf); 366 371 dev->policy_buf = new_policy_buf; 367 372 dev->policy_sz = length; 368 373 369 - if (!amd_pmf_pb_valid(dev)) { 370 - ret = -EINVAL; 371 - goto cleanup; 372 - } 374 + if (!amd_pmf_pb_valid(dev)) 375 + return -EINVAL; 373 376 374 377 amd_pmf_hex_dump_pb(dev); 375 378 ret = amd_pmf_start_policy_engine(dev); 376 379 if (ret < 0) 377 - goto cleanup; 380 + return ret; 378 381 379 382 return length; 380 - 381 - cleanup: 382 - kfree(dev->policy_buf); 383 - dev->policy_buf = NULL; 384 - return ret; 385 383 } 386 384 387 385 static const struct file_operations pb_fops = { ··· 420 422 rc = tee_client_open_session(ctx, &sess_arg, NULL); 421 423 if (rc < 0 || sess_arg.ret != 0) { 422 424 pr_err("Failed to open TEE session err:%#x, rc:%d\n", sess_arg.ret, rc); 423 - return rc; 425 + return rc ?: -EINVAL; 424 426 } 425 427 426 428 *id = sess_arg.session; 427 429 428 - return rc; 430 + return 0; 429 431 } 430 432 431 433 static int amd_pmf_register_input_device(struct amd_pmf_dev *dev) ··· 460 462 dev->tee_ctx = tee_client_open_context(NULL, amd_pmf_amdtee_ta_match, NULL, NULL); 461 463 if (IS_ERR(dev->tee_ctx)) { 462 464 dev_err(dev->dev, "Failed to open TEE context\n"); 463 - return PTR_ERR(dev->tee_ctx); 465 + ret = PTR_ERR(dev->tee_ctx); 466 + dev->tee_ctx = NULL; 467 + return ret; 464 468 } 465 469 466 470 ret = amd_pmf_ta_open_session(dev->tee_ctx, &dev->session_id, uuid); ··· 502 502 503 503 static void 
amd_pmf_tee_deinit(struct amd_pmf_dev *dev) 504 504 { 505 + if (!dev->tee_ctx) 506 + return; 505 507 tee_shm_free(dev->fw_shm_pool); 506 508 tee_client_close_session(dev->tee_ctx, dev->session_id); 507 509 tee_client_close_context(dev->tee_ctx); 510 + dev->tee_ctx = NULL; 508 511 } 509 512 510 513 int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev) ··· 530 527 531 528 ret = amd_pmf_set_dram_addr(dev, true); 532 529 if (ret) 533 - goto err_cancel_work; 530 + return ret; 534 531 535 532 dev->policy_base = devm_ioremap_resource(dev->dev, dev->res); 536 - if (IS_ERR(dev->policy_base)) { 537 - ret = PTR_ERR(dev->policy_base); 538 - goto err_free_dram_buf; 539 - } 533 + if (IS_ERR(dev->policy_base)) 534 + return PTR_ERR(dev->policy_base); 540 535 541 - dev->policy_buf = kzalloc(dev->policy_sz, GFP_KERNEL); 542 - if (!dev->policy_buf) { 543 - ret = -ENOMEM; 544 - goto err_free_dram_buf; 545 - } 536 + dev->policy_buf = devm_kzalloc(dev->dev, dev->policy_sz, GFP_KERNEL); 537 + if (!dev->policy_buf) 538 + return -ENOMEM; 546 539 547 540 memcpy_fromio(dev->policy_buf, dev->policy_base, dev->policy_sz); 548 541 549 542 if (!amd_pmf_pb_valid(dev)) { 550 543 dev_info(dev->dev, "No Smart PC policy present\n"); 551 - ret = -EINVAL; 552 - goto err_free_policy; 544 + return -EINVAL; 553 545 } 554 546 555 547 amd_pmf_hex_dump_pb(dev); 556 548 557 - dev->prev_data = kzalloc(sizeof(*dev->prev_data), GFP_KERNEL); 558 - if (!dev->prev_data) { 559 - ret = -ENOMEM; 560 - goto err_free_policy; 561 - } 549 + dev->prev_data = devm_kzalloc(dev->dev, sizeof(*dev->prev_data), GFP_KERNEL); 550 + if (!dev->prev_data) 551 + return -ENOMEM; 562 552 563 553 for (i = 0; i < ARRAY_SIZE(amd_pmf_ta_uuid); i++) { 564 554 ret = amd_pmf_tee_init(dev, &amd_pmf_ta_uuid[i]); 565 555 if (ret) 566 - goto err_free_prev_data; 556 + return ret; 567 557 568 558 ret = amd_pmf_start_policy_engine(dev); 569 - switch (ret) { 570 - case TA_PMF_TYPE_SUCCESS: 571 - status = true; 572 - break; 573 - case 
TA_ERROR_CRYPTO_INVALID_PARAM: 574 - case TA_ERROR_CRYPTO_BIN_TOO_LARGE: 575 - amd_pmf_tee_deinit(dev); 576 - status = false; 577 - break; 578 - default: 579 - ret = -EINVAL; 580 - amd_pmf_tee_deinit(dev); 581 - goto err_free_prev_data; 582 - } 583 - 559 + dev_dbg(dev->dev, "start policy engine ret: %d\n", ret); 560 + status = ret == TA_PMF_TYPE_SUCCESS; 584 561 if (status) 585 562 break; 563 + amd_pmf_tee_deinit(dev); 586 564 } 587 565 588 566 if (!status && !pb_side_load) { 589 567 ret = -EINVAL; 590 - goto err_free_prev_data; 568 + goto err; 591 569 } 592 570 593 571 if (pb_side_load) ··· 576 592 577 593 ret = amd_pmf_register_input_device(dev); 578 594 if (ret) 579 - goto err_pmf_remove_pb; 595 + goto err; 580 596 581 597 return 0; 582 598 583 - err_pmf_remove_pb: 584 - if (pb_side_load && dev->esbin) 585 - amd_pmf_remove_pb(dev); 586 - amd_pmf_tee_deinit(dev); 587 - err_free_prev_data: 588 - kfree(dev->prev_data); 589 - err_free_policy: 590 - kfree(dev->policy_buf); 591 - err_free_dram_buf: 592 - kfree(dev->buf); 593 - err_cancel_work: 594 - cancel_delayed_work_sync(&dev->pb_work); 599 + err: 600 + amd_pmf_deinit_smart_pc(dev); 595 601 596 602 return ret; 597 603 } ··· 595 621 amd_pmf_remove_pb(dev); 596 622 597 623 cancel_delayed_work_sync(&dev->pb_work); 598 - kfree(dev->prev_data); 599 - dev->prev_data = NULL; 600 - kfree(dev->policy_buf); 601 - dev->policy_buf = NULL; 602 - kfree(dev->buf); 603 - dev->buf = NULL; 604 624 amd_pmf_tee_deinit(dev); 605 625 }
+1 -1
drivers/platform/x86/dell/alienware-wmi-wmax.c
··· 119 119 DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 120 120 DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m16 R1 AMD"), 121 121 }, 122 - .driver_data = &g_series_quirks, 122 + .driver_data = &generic_quirks, 123 123 }, 124 124 { 125 125 .ident = "Alienware m16 R2",
+5 -5
drivers/platform/x86/dell/dell_rbu.c
··· 45 45 MODULE_AUTHOR("Abhay Salunke <abhay_salunke@dell.com>"); 46 46 MODULE_DESCRIPTION("Driver for updating BIOS image on DELL systems"); 47 47 MODULE_LICENSE("GPL"); 48 - MODULE_VERSION("3.2"); 48 + MODULE_VERSION("3.3"); 49 49 50 50 #define BIOS_SCAN_LIMIT 0xffffffff 51 51 #define MAX_IMAGE_LENGTH 16 ··· 91 91 rbu_data.imagesize = 0; 92 92 } 93 93 94 - static int create_packet(void *data, size_t length) 94 + static int create_packet(void *data, size_t length) __must_hold(&rbu_data.lock) 95 95 { 96 96 struct packet_data *newpacket; 97 97 int ordernum = 0; ··· 292 292 remaining_bytes = *pread_length; 293 293 bytes_read = rbu_data.packet_read_count; 294 294 295 - list_for_each_entry(newpacket, (&packet_data_head.list)->next, list) { 295 + list_for_each_entry(newpacket, &packet_data_head.list, list) { 296 296 bytes_copied = do_packet_read(pdest, newpacket, 297 297 remaining_bytes, bytes_read, &temp_count); 298 298 remaining_bytes -= bytes_copied; ··· 315 315 { 316 316 struct packet_data *newpacket, *tmp; 317 317 318 - list_for_each_entry_safe(newpacket, tmp, (&packet_data_head.list)->next, list) { 318 + list_for_each_entry_safe(newpacket, tmp, &packet_data_head.list, list) { 319 319 list_del(&newpacket->list); 320 320 321 321 /* 322 322 * zero out the RBU packet memory before freeing 323 323 * to make sure there are no stale RBU packets left in memory 324 324 */ 325 - memset(newpacket->data, 0, rbu_data.packetsize); 325 + memset(newpacket->data, 0, newpacket->length); 326 326 set_memory_wb((unsigned long)newpacket->data, 327 327 1 << newpacket->ordernum); 328 328 free_pages((unsigned long) newpacket->data,
+17 -2
drivers/platform/x86/ideapad-laptop.c
··· 15 15 #include <linux/bug.h> 16 16 #include <linux/cleanup.h> 17 17 #include <linux/debugfs.h> 18 + #include <linux/delay.h> 18 19 #include <linux/device.h> 19 20 #include <linux/dmi.h> 20 21 #include <linux/i8042.h> ··· 268 267 */ 269 268 #define IDEAPAD_EC_TIMEOUT 200 /* in ms */ 270 269 270 + /* 271 + * Some models (e.g., ThinkBook since 2024) have a low tolerance for being 272 + * polled too frequently. Doing so may break the state machine in the EC, 273 + * resulting in a hard shutdown. 274 + * 275 + * It is also observed that frequent polls may disturb the ongoing operation 276 + * and notably delay the availability of EC response. 277 + * 278 + * These values are used as the delay before the first poll and the interval 279 + * between subsequent polls to solve the above issues. 280 + */ 281 + #define IDEAPAD_EC_POLL_MIN_US 150 282 + #define IDEAPAD_EC_POLL_MAX_US 300 283 + 271 284 static int eval_int(acpi_handle handle, const char *name, unsigned long *res) 272 285 { 273 286 unsigned long long result; ··· 398 383 end_jiffies = jiffies + msecs_to_jiffies(IDEAPAD_EC_TIMEOUT) + 1; 399 384 400 385 while (time_before(jiffies, end_jiffies)) { 401 - schedule(); 386 + usleep_range(IDEAPAD_EC_POLL_MIN_US, IDEAPAD_EC_POLL_MAX_US); 402 387 403 388 err = eval_vpcr(handle, 1, &val); 404 389 if (err) ··· 429 414 end_jiffies = jiffies + msecs_to_jiffies(IDEAPAD_EC_TIMEOUT) + 1; 430 415 431 416 while (time_before(jiffies, end_jiffies)) { 432 - schedule(); 417 + usleep_range(IDEAPAD_EC_POLL_MIN_US, IDEAPAD_EC_POLL_MAX_US); 433 418 434 419 err = eval_vpcr(handle, 1, &val); 435 420 if (err)
+7
drivers/platform/x86/intel/pmc/core.h
··· 299 299 #define PTL_PCD_PMC_MMIO_REG_LEN 0x31A8 300 300 301 301 /* SSRAM PMC Device ID */ 302 + /* LNL */ 303 + #define PMC_DEVID_LNL_SOCM 0xa87f 304 + 305 + /* PTL */ 306 + #define PMC_DEVID_PTL_PCDH 0xe37f 307 + #define PMC_DEVID_PTL_PCDP 0xe47f 308 + 302 309 /* ARL */ 303 310 #define PMC_DEVID_ARL_SOCM 0x777f 304 311 #define PMC_DEVID_ARL_SOCS 0xae7f
+3
drivers/platform/x86/intel/pmc/ssram_telemetry.c
··· 187 187 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PMC_DEVID_MTL_SOCM) }, 188 188 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PMC_DEVID_ARL_SOCS) }, 189 189 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PMC_DEVID_ARL_SOCM) }, 190 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PMC_DEVID_LNL_SOCM) }, 191 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PMC_DEVID_PTL_PCDH) }, 192 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PMC_DEVID_PTL_PCDP) }, 190 193 { } 191 194 }; 192 195 MODULE_DEVICE_TABLE(pci, intel_pmc_ssram_telemetry_pci_ids);
+3 -1
drivers/platform/x86/intel/tpmi_power_domains.c
··· 228 228 229 229 domain_die_map = kcalloc(size_mul(topology_max_packages(), MAX_POWER_DOMAINS), 230 230 sizeof(*domain_die_map), GFP_KERNEL); 231 - if (!domain_die_map) 231 + if (!domain_die_map) { 232 + ret = -ENOMEM; 232 233 goto free_domain_mask; 234 + } 233 235 234 236 ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, 235 237 "platform/x86/tpmi_power_domains:online",
+1 -1
drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c
··· 58 58 if (length) 59 59 length += sysfs_emit_at(buf, length, " "); 60 60 61 - length += sysfs_emit_at(buf, length, agent_name[agent]); 61 + length += sysfs_emit_at(buf, length, "%s", agent_name[agent]); 62 62 } 63 63 64 64 length += sysfs_emit_at(buf, length, "\n");
+6 -3
drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c
··· 511 511 512 512 /* Get the package ID from the TPMI core */ 513 513 plat_info = tpmi_get_platform_data(auxdev); 514 - if (plat_info) 515 - pkg = plat_info->package_id; 516 - else 514 + if (unlikely(!plat_info)) { 517 515 dev_info(&auxdev->dev, "Platform information is NULL\n"); 516 + ret = -ENODEV; 517 + goto err_rem_common; 518 + } 519 + 520 + pkg = plat_info->package_id; 518 521 519 522 for (i = 0; i < num_resources; ++i) { 520 523 struct tpmi_uncore_power_domain_info *pd_info;
+1
drivers/platform/x86/samsung-galaxybook.c
··· 1403 1403 } 1404 1404 1405 1405 static const struct acpi_device_id galaxybook_device_ids[] = { 1406 + { "SAM0426" }, 1406 1407 { "SAM0427" }, 1407 1408 { "SAM0428" }, 1408 1409 { "SAM0429" },
+2 -1
drivers/ptp/ptp_clock.c
··· 121 121 struct ptp_clock_info *ops; 122 122 int err = -EOPNOTSUPP; 123 123 124 - if (ptp_clock_freerun(ptp)) { 124 + if (tx->modes & (ADJ_SETOFFSET | ADJ_FREQUENCY | ADJ_OFFSET) && 125 + ptp_clock_freerun(ptp)) { 125 126 pr_err("ptp: physical clock is free running\n"); 126 127 return -EBUSY; 127 128 }
+21 -1
drivers/ptp/ptp_private.h
··· 98 98 /* Check if ptp virtual clock is in use */ 99 99 static inline bool ptp_vclock_in_use(struct ptp_clock *ptp) 100 100 { 101 - return !ptp->is_virtual_clock; 101 + bool in_use = false; 102 + 103 + /* Virtual clocks can't be stacked on top of virtual clocks. 104 + * Avoid acquiring the n_vclocks_mux on virtual clocks, to allow this 105 + * function to be called from code paths where the n_vclocks_mux of the 106 + * parent physical clock is already held. Functionally that's not an 107 + * issue, but lockdep would complain, because they have the same lock 108 + * class. 109 + */ 110 + if (ptp->is_virtual_clock) 111 + return false; 112 + 113 + if (mutex_lock_interruptible(&ptp->n_vclocks_mux)) 114 + return true; 115 + 116 + if (ptp->n_vclocks) 117 + in_use = true; 118 + 119 + mutex_unlock(&ptp->n_vclocks_mux); 120 + 121 + return in_use; 102 122 } 103 123 104 124 /* Check if ptp clock shall be free running */
+3
drivers/rapidio/rio_cm.c
··· 783 783 if (buf == NULL || ch_id == 0 || len == 0 || len > RIO_MAX_MSG_SIZE) 784 784 return -EINVAL; 785 785 786 + if (len < sizeof(struct rio_ch_chan_hdr)) 787 + return -EINVAL; /* insufficient data from user */ 788 + 786 789 ch = riocm_get_channel(ch_id); 787 790 if (!ch) { 788 791 riocm_error("%s(%d) ch_%d not found", current->comm,
+3 -3
drivers/regulator/max20086-regulator.c
··· 5 5 // Copyright (C) 2022 Laurent Pinchart <laurent.pinchart@idesonboard.com> 6 6 // Copyright (C) 2018 Avnet, Inc. 7 7 8 + #include <linux/cleanup.h> 8 9 #include <linux/err.h> 9 10 #include <linux/gpio/consumer.h> 10 11 #include <linux/i2c.h> ··· 134 133 static int max20086_parse_regulators_dt(struct max20086 *chip, bool *boot_on) 135 134 { 136 135 struct of_regulator_match *matches; 137 - struct device_node *node; 138 136 unsigned int i; 139 137 int ret; 140 138 141 - node = of_get_child_by_name(chip->dev->of_node, "regulators"); 139 + struct device_node *node __free(device_node) = 140 + of_get_child_by_name(chip->dev->of_node, "regulators"); 142 141 if (!node) { 143 142 dev_err(chip->dev, "regulators node not found\n"); 144 143 return -ENODEV; ··· 154 153 155 154 ret = of_regulator_match(chip->dev, node, matches, 156 155 chip->info->num_outputs); 157 - of_node_put(node); 158 156 if (ret < 0) { 159 157 dev_err(chip->dev, "Failed to match regulators\n"); 160 158 return -EINVAL;
+2
drivers/s390/scsi/zfcp_sysfs.c
··· 449 449 if (kstrtoull(buf, 0, (unsigned long long *) &fcp_lun)) 450 450 return -EINVAL; 451 451 452 + flush_work(&port->rport_work); 453 + 452 454 retval = zfcp_unit_add(port, fcp_lun); 453 455 if (retval) 454 456 return retval;
+2 -2
drivers/scsi/mvsas/mv_defs.h
··· 215 215 216 216 /* MVS_Px_INT_STAT, MVS_Px_INT_MASK (per-phy events) */ 217 217 PHYEV_DEC_ERR = (1U << 24), /* Phy Decoding Error */ 218 - PHYEV_DCDR_ERR = (1U << 23), /* STP Deocder Error */ 218 + PHYEV_DCDR_ERR = (1U << 23), /* STP Decoder Error */ 219 219 PHYEV_CRC_ERR = (1U << 22), /* STP CRC Error */ 220 220 PHYEV_UNASSOC_FIS = (1U << 19), /* unassociated FIS rx'd */ 221 221 PHYEV_AN = (1U << 18), /* SATA async notification */ ··· 347 347 CMD_SATA_PORT_MEM_CTL0 = 0x158, /* SATA Port Memory Control 0 */ 348 348 CMD_SATA_PORT_MEM_CTL1 = 0x15c, /* SATA Port Memory Control 1 */ 349 349 CMD_XOR_MEM_BIST_CTL = 0x160, /* XOR Memory BIST Control */ 350 - CMD_XOR_MEM_BIST_STAT = 0x164, /* XOR Memroy BIST Status */ 350 + CMD_XOR_MEM_BIST_STAT = 0x164, /* XOR Memory BIST Status */ 351 351 CMD_DMA_MEM_BIST_CTL = 0x168, /* DMA Memory BIST Control */ 352 352 CMD_DMA_MEM_BIST_STAT = 0x16c, /* DMA Memory BIST Status */ 353 353 CMD_PORT_MEM_BIST_CTL = 0x170, /* Port Memory BIST Control */
+2 -1
drivers/scsi/scsi_error.c
··· 665 665 * if the device is in the process of becoming ready, we 666 666 * should retry. 667 667 */ 668 - if ((sshdr.asc == 0x04) && (sshdr.ascq == 0x01)) 668 + if ((sshdr.asc == 0x04) && 669 + (sshdr.ascq == 0x01 || sshdr.ascq == 0x0a)) 669 670 return NEEDS_RETRY; 670 671 /* 671 672 * if the device is not started, we need to wake
+5 -6
drivers/scsi/scsi_transport_iscsi.c
··· 3499 3499 pr_err("%s could not find host no %u\n", 3500 3500 __func__, ev->u.new_flashnode.host_no); 3501 3501 err = -ENODEV; 3502 - goto put_host; 3502 + goto exit_new_fnode; 3503 3503 } 3504 3504 3505 3505 index = transport->new_flashnode(shost, data, len); ··· 3509 3509 else 3510 3510 err = -EIO; 3511 3511 3512 - put_host: 3513 3512 scsi_host_put(shost); 3514 3513 3515 3514 exit_new_fnode: ··· 3533 3534 pr_err("%s could not find host no %u\n", 3534 3535 __func__, ev->u.del_flashnode.host_no); 3535 3536 err = -ENODEV; 3536 - goto put_host; 3537 + goto exit_del_fnode; 3537 3538 } 3538 3539 3539 3540 idx = ev->u.del_flashnode.flashnode_idx; ··· 3575 3576 pr_err("%s could not find host no %u\n", 3576 3577 __func__, ev->u.login_flashnode.host_no); 3577 3578 err = -ENODEV; 3578 - goto put_host; 3579 + goto exit_login_fnode; 3579 3580 } 3580 3581 3581 3582 idx = ev->u.login_flashnode.flashnode_idx; ··· 3627 3628 pr_err("%s could not find host no %u\n", 3628 3629 __func__, ev->u.logout_flashnode.host_no); 3629 3630 err = -ENODEV; 3630 - goto put_host; 3631 + goto exit_logout_fnode; 3631 3632 } 3632 3633 3633 3634 idx = ev->u.logout_flashnode.flashnode_idx; ··· 3677 3678 pr_err("%s could not find host no %u\n", 3678 3679 __func__, ev->u.logout_flashnode.host_no); 3679 3680 err = -ENODEV; 3680 - goto put_host; 3681 + goto exit_logout_sid; 3681 3682 } 3682 3683 3683 3684 session = iscsi_session_lookup(ev->u.logout_flashnode_sid.sid);
+6 -4
drivers/scsi/storvsc_drv.c
··· 362 362 /* 363 363 * Timeout in seconds for all devices managed by this driver. 364 364 */ 365 - static int storvsc_timeout = 180; 365 + static const int storvsc_timeout = 180; 366 366 367 367 #if IS_ENABLED(CONFIG_SCSI_FC_ATTRS) 368 368 static struct scsi_transport_template *fc_transport_template; ··· 768 768 return; 769 769 } 770 770 771 - t = wait_for_completion_timeout(&request->wait_event, 10*HZ); 771 + t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ); 772 772 if (t == 0) { 773 773 dev_err(dev, "Failed to create sub-channel: timed out\n"); 774 774 return; ··· 833 833 if (ret != 0) 834 834 return ret; 835 835 836 - t = wait_for_completion_timeout(&request->wait_event, 5*HZ); 836 + t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ); 837 837 if (t == 0) 838 838 return -ETIMEDOUT; 839 839 ··· 1350 1350 return ret; 1351 1351 1352 1352 ret = storvsc_channel_init(device, is_fc); 1353 + if (ret) 1354 + vmbus_close(device->channel); 1353 1355 1354 1356 return ret; 1355 1357 } ··· 1670 1668 if (ret != 0) 1671 1669 return FAILED; 1672 1670 1673 - t = wait_for_completion_timeout(&request->wait_event, 5*HZ); 1671 + t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ); 1674 1672 if (t == 0) 1675 1673 return TIMEOUT_ERROR; 1676 1674
+1
drivers/spi/spi-loongson-core.c
··· 5 5 #include <linux/clk.h> 6 6 #include <linux/delay.h> 7 7 #include <linux/err.h> 8 + #include <linux/export.h> 8 9 #include <linux/init.h> 9 10 #include <linux/interrupt.h> 10 11 #include <linux/io.h>
+1 -1
drivers/spi/spi-offload.c
··· 297 297 if (trigger->ops->enable) { 298 298 ret = trigger->ops->enable(trigger, config); 299 299 if (ret) { 300 - if (offload->ops->trigger_disable) 300 + if (offload->ops && offload->ops->trigger_disable) 301 301 offload->ops->trigger_disable(offload); 302 302 return ret; 303 303 }
+18 -12
drivers/spi/spi-omap2-mcspi.c
··· 134 134 size_t max_xfer_len; 135 135 u32 ref_clk_hz; 136 136 bool use_multi_mode; 137 + bool last_msg_kept_cs; 137 138 }; 138 139 139 140 struct omap2_mcspi_cs { ··· 1270 1269 * multi-mode is applicable. 1271 1270 */ 1272 1271 mcspi->use_multi_mode = true; 1272 + 1273 + if (mcspi->last_msg_kept_cs) 1274 + mcspi->use_multi_mode = false; 1275 + 1273 1276 list_for_each_entry(tr, &msg->transfers, transfer_list) { 1274 1277 if (!tr->bits_per_word) 1275 1278 bits_per_word = msg->spi->bits_per_word; ··· 1292 1287 mcspi->use_multi_mode = false; 1293 1288 } 1294 1289 1295 - /* Check if transfer asks to change the CS status after the transfer */ 1296 - if (!tr->cs_change) 1297 - mcspi->use_multi_mode = false; 1298 - 1299 - /* 1300 - * If at least one message is not compatible, switch back to single mode 1301 - * 1302 - * The bits_per_word of certain transfer can be different, but it will have no 1303 - * impact on the signal itself. 1304 - */ 1305 - if (!mcspi->use_multi_mode) 1306 - break; 1290 + if (list_is_last(&tr->transfer_list, &msg->transfers)) { 1291 + /* Check if transfer asks to keep the CS status after the whole message */ 1292 + if (tr->cs_change) { 1293 + mcspi->use_multi_mode = false; 1294 + mcspi->last_msg_kept_cs = true; 1295 + } else { 1296 + mcspi->last_msg_kept_cs = false; 1297 + } 1298 + } else { 1299 + /* Check if transfer asks to change the CS status after the transfer */ 1300 + if (!tr->cs_change) 1301 + mcspi->use_multi_mode = false; 1302 + } 1307 1303 } 1308 1304 1309 1305 omap2_mcspi_set_mode(ctlr);
+2 -2
drivers/spi/spi-pci1xxxx.c
··· 762 762 return -EINVAL; 763 763 764 764 num_vector = pci_alloc_irq_vectors(pdev, 1, hw_inst_cnt, 765 - PCI_IRQ_ALL_TYPES); 765 + PCI_IRQ_INTX | PCI_IRQ_MSI); 766 766 if (num_vector < 0) { 767 767 dev_err(&pdev->dev, "Error allocating MSI vectors\n"); 768 - return ret; 768 + return num_vector; 769 769 } 770 770 771 771 init_completion(&spi_sub_ptr->spi_xfer_done);
+19 -5
drivers/spi/spi-stm32-ospi.c
··· 804 804 return ret; 805 805 } 806 806 807 - ospi->rstc = devm_reset_control_array_get_exclusive(dev); 807 + ospi->rstc = devm_reset_control_array_get_exclusive_released(dev); 808 808 if (IS_ERR(ospi->rstc)) 809 809 return dev_err_probe(dev, PTR_ERR(ospi->rstc), 810 810 "Can't get reset\n"); ··· 936 936 if (ret < 0) 937 937 goto err_pm_enable; 938 938 939 - if (ospi->rstc) { 940 - reset_control_assert(ospi->rstc); 941 - udelay(2); 942 - reset_control_deassert(ospi->rstc); 939 + ret = reset_control_acquire(ospi->rstc); 940 + if (ret) { 941 + dev_err_probe(dev, ret, "Can not acquire reset %d\n", ret); 942 + goto err_pm_resume; 943 943 } 944 + 945 + reset_control_assert(ospi->rstc); 946 + udelay(2); 947 + reset_control_deassert(ospi->rstc); 944 948 945 949 ret = spi_register_controller(ctrl); 946 950 if (ret) { ··· 991 987 if (ospi->dma_chrx) 992 988 dma_release_channel(ospi->dma_chrx); 993 989 990 + reset_control_release(ospi->rstc); 991 + 994 992 pm_runtime_put_sync_suspend(ospi->dev); 995 993 pm_runtime_force_suspend(ospi->dev); 996 994 } ··· 1002 996 struct stm32_ospi *ospi = dev_get_drvdata(dev); 1003 997 1004 998 pinctrl_pm_select_sleep_state(dev); 999 + 1000 + reset_control_release(ospi->rstc); 1005 1001 1006 1002 return pm_runtime_force_suspend(ospi->dev); 1007 1003 } ··· 1023 1015 ret = pm_runtime_resume_and_get(ospi->dev); 1024 1016 if (ret < 0) 1025 1017 return ret; 1018 + 1019 + ret = reset_control_acquire(ospi->rstc); 1020 + if (ret) { 1021 + dev_err(dev, "Can not acquire reset\n"); 1022 + return ret; 1023 + } 1026 1024 1027 1025 writel_relaxed(ospi->cr_reg, regs_base + OSPI_CR); 1028 1026 writel_relaxed(ospi->dcr_reg, regs_base + OSPI_DCR1);
+6 -1
drivers/ufs/core/ufshcd.c
··· 6623 6623 up(&hba->host_sem); 6624 6624 return; 6625 6625 } 6626 + spin_unlock_irqrestore(hba->host->host_lock, flags); 6627 + 6628 + ufshcd_err_handling_prepare(hba); 6629 + 6630 + spin_lock_irqsave(hba->host->host_lock, flags); 6626 6631 ufshcd_set_eh_in_progress(hba); 6627 6632 spin_unlock_irqrestore(hba->host->host_lock, flags); 6628 - ufshcd_err_handling_prepare(hba); 6633 + 6629 6634 /* Complete requests that have door-bell cleared by h/w */ 6630 6635 ufshcd_complete_requests(hba, false); 6631 6636 spin_lock_irqsave(hba->host->host_lock, flags);
-1
fs/bcachefs/bcachefs.h
··· 296 296 #define bch2_fmt(_c, fmt) bch2_log_msg(_c, fmt "\n") 297 297 298 298 void bch2_print_str(struct bch_fs *, const char *, const char *); 299 - void bch2_print_str_nonblocking(struct bch_fs *, const char *, const char *); 300 299 301 300 __printf(2, 3) 302 301 void bch2_print_opts(struct bch_opts *, const char *, ...);
+60 -35
fs/bcachefs/btree_gc.c
··· 397 397 continue; 398 398 } 399 399 400 - ret = btree_check_node_boundaries(trans, b, prev, cur, pulled_from_scan); 400 + ret = lockrestart_do(trans, 401 + btree_check_node_boundaries(trans, b, prev, cur, pulled_from_scan)); 402 + if (ret < 0) 403 + goto err; 404 + 401 405 if (ret == DID_FILL_FROM_SCAN) { 402 406 new_pass = true; 403 407 ret = 0; ··· 442 438 443 439 if (!ret && !IS_ERR_OR_NULL(prev)) { 444 440 BUG_ON(cur); 445 - ret = btree_repair_node_end(trans, b, prev, pulled_from_scan); 441 + ret = lockrestart_do(trans, 442 + btree_repair_node_end(trans, b, prev, pulled_from_scan)); 446 443 if (ret == DID_FILL_FROM_SCAN) { 447 444 new_pass = true; 448 445 ret = 0; ··· 524 519 bch2_bkey_buf_exit(&prev_k, c); 525 520 bch2_bkey_buf_exit(&cur_k, c); 526 521 printbuf_exit(&buf); 522 + bch_err_fn(c, ret); 523 + return ret; 524 + } 525 + 526 + static int bch2_check_root(struct btree_trans *trans, enum btree_id i, 527 + bool *reconstructed_root) 528 + { 529 + struct bch_fs *c = trans->c; 530 + struct btree_root *r = bch2_btree_id_root(c, i); 531 + struct printbuf buf = PRINTBUF; 532 + int ret = 0; 533 + 534 + bch2_btree_id_to_text(&buf, i); 535 + 536 + if (r->error) { 537 + bch_info(c, "btree root %s unreadable, must recover from scan", buf.buf); 538 + 539 + r->alive = false; 540 + r->error = 0; 541 + 542 + if (!bch2_btree_has_scanned_nodes(c, i)) { 543 + __fsck_err(trans, 544 + FSCK_CAN_FIX|(!btree_id_important(i) ? 
FSCK_AUTOFIX : 0), 545 + btree_root_unreadable_and_scan_found_nothing, 546 + "no nodes found for btree %s, continue?", buf.buf); 547 + bch2_btree_root_alloc_fake_trans(trans, i, 0); 548 + } else { 549 + bch2_btree_root_alloc_fake_trans(trans, i, 1); 550 + bch2_shoot_down_journal_keys(c, i, 1, BTREE_MAX_DEPTH, POS_MIN, SPOS_MAX); 551 + ret = bch2_get_scanned_nodes(c, i, 0, POS_MIN, SPOS_MAX); 552 + if (ret) 553 + goto err; 554 + } 555 + 556 + *reconstructed_root = true; 557 + } 558 + err: 559 + fsck_err: 560 + printbuf_exit(&buf); 561 + bch_err_fn(c, ret); 527 562 return ret; 528 563 } 529 564 ··· 571 526 { 572 527 struct btree_trans *trans = bch2_trans_get(c); 573 528 struct bpos pulled_from_scan = POS_MIN; 574 - struct printbuf buf = PRINTBUF; 575 529 int ret = 0; 576 530 577 531 bch2_trans_srcu_unlock(trans); 578 532 579 533 for (unsigned i = 0; i < btree_id_nr_alive(c) && !ret; i++) { 580 - struct btree_root *r = bch2_btree_id_root(c, i); 581 534 bool reconstructed_root = false; 535 + recover: 536 + ret = lockrestart_do(trans, bch2_check_root(trans, i, &reconstructed_root)); 537 + if (ret) 538 + break; 582 539 583 - printbuf_reset(&buf); 584 - bch2_btree_id_to_text(&buf, i); 585 - 586 - if (r->error) { 587 - reconstruct_root: 588 - bch_info(c, "btree root %s unreadable, must recover from scan", buf.buf); 589 - 590 - r->alive = false; 591 - r->error = 0; 592 - 593 - if (!bch2_btree_has_scanned_nodes(c, i)) { 594 - __fsck_err(trans, 595 - FSCK_CAN_FIX|(!btree_id_important(i) ? 
FSCK_AUTOFIX : 0), 596 - btree_root_unreadable_and_scan_found_nothing, 597 - "no nodes found for btree %s, continue?", buf.buf); 598 - bch2_btree_root_alloc_fake_trans(trans, i, 0); 599 - } else { 600 - bch2_btree_root_alloc_fake_trans(trans, i, 1); 601 - bch2_shoot_down_journal_keys(c, i, 1, BTREE_MAX_DEPTH, POS_MIN, SPOS_MAX); 602 - ret = bch2_get_scanned_nodes(c, i, 0, POS_MIN, SPOS_MAX); 603 - if (ret) 604 - break; 605 - } 606 - 607 - reconstructed_root = true; 608 - } 609 - 540 + struct btree_root *r = bch2_btree_id_root(c, i); 610 541 struct btree *b = r->b; 611 542 612 543 btree_node_lock_nopath_nofail(trans, &b->c, SIX_LOCK_read); ··· 596 575 597 576 r->b = NULL; 598 577 599 - if (!reconstructed_root) 600 - goto reconstruct_root; 578 + if (!reconstructed_root) { 579 + r->error = -EIO; 580 + goto recover; 581 + } 601 582 583 + struct printbuf buf = PRINTBUF; 584 + bch2_btree_id_to_text(&buf, i); 602 585 bch_err(c, "empty btree root %s", buf.buf); 586 + printbuf_exit(&buf); 603 587 bch2_btree_root_alloc_fake_trans(trans, i, 0); 604 588 r->alive = false; 605 589 ret = 0; 606 590 } 607 591 } 608 - fsck_err: 609 - printbuf_exit(&buf); 592 + 610 593 bch2_trans_put(trans); 611 594 return ret; 612 595 }
+19 -7
fs/bcachefs/btree_io.c
··· 741 741 BCH_VERSION_MAJOR(version), 742 742 BCH_VERSION_MINOR(version)); 743 743 744 - if (btree_err_on(version < c->sb.version_min, 744 + if (c->recovery.curr_pass != BCH_RECOVERY_PASS_scan_for_btree_nodes && 745 + btree_err_on(version < c->sb.version_min, 745 746 -BCH_ERR_btree_node_read_err_fixable, 746 747 c, NULL, b, i, NULL, 747 748 btree_node_bset_older_than_sb_min, 748 749 "bset version %u older than superblock version_min %u", 749 750 version, c->sb.version_min)) { 750 - mutex_lock(&c->sb_lock); 751 - c->disk_sb.sb->version_min = cpu_to_le16(version); 752 - bch2_write_super(c); 753 - mutex_unlock(&c->sb_lock); 751 + if (bch2_version_compatible(version)) { 752 + mutex_lock(&c->sb_lock); 753 + c->disk_sb.sb->version_min = cpu_to_le16(version); 754 + bch2_write_super(c); 755 + mutex_unlock(&c->sb_lock); 756 + } else { 757 + /* We have no idea what's going on: */ 758 + i->version = cpu_to_le16(c->sb.version); 759 + } 754 760 } 755 761 756 762 if (btree_err_on(BCH_VERSION_MAJOR(version) > ··· 1051 1045 le16_add_cpu(&i->u64s, -next_good_key); 1052 1046 memmove_u64s_down(k, (u64 *) k + next_good_key, (u64 *) vstruct_end(i) - (u64 *) k); 1053 1047 set_btree_node_need_rewrite(b); 1048 + set_btree_node_need_rewrite_error(b); 1054 1049 } 1055 1050 fsck_err: 1056 1051 printbuf_exit(&buf); ··· 1312 1305 (u64 *) vstruct_end(i) - (u64 *) k); 1313 1306 set_btree_bset_end(b, b->set); 1314 1307 set_btree_node_need_rewrite(b); 1308 + set_btree_node_need_rewrite_error(b); 1315 1309 continue; 1316 1310 } 1317 1311 if (ret) ··· 1337 1329 bkey_for_each_ptr(bch2_bkey_ptrs(bkey_i_to_s(&b->key)), ptr) { 1338 1330 struct bch_dev *ca2 = bch2_dev_rcu(c, ptr->dev); 1339 1331 1340 - if (!ca2 || ca2->mi.state != BCH_MEMBER_STATE_rw) 1332 + if (!ca2 || ca2->mi.state != BCH_MEMBER_STATE_rw) { 1341 1333 set_btree_node_need_rewrite(b); 1334 + set_btree_node_need_rewrite_degraded(b); 1335 + } 1342 1336 } 1343 1337 1344 - if (!ptr_written) 1338 + if (!ptr_written) { 1345 1339 
set_btree_node_need_rewrite(b); 1340 + set_btree_node_need_rewrite_ptr_written_zero(b); 1341 + } 1346 1342 fsck_err: 1347 1343 mempool_free(iter, &c->fill_iter); 1348 1344 printbuf_exit(&buf);
+1 -1
fs/bcachefs/btree_locking.c
··· 213 213 prt_newline(&buf); 214 214 } 215 215 216 - bch2_print_str_nonblocking(g->g->trans->c, KERN_ERR, buf.buf); 216 + bch2_print_str(g->g->trans->c, KERN_ERR, buf.buf); 217 217 printbuf_exit(&buf); 218 218 BUG(); 219 219 }
+4 -2
fs/bcachefs/btree_locking.h
··· 417 417 EBUG_ON(!btree_node_locked(path, path->level)); 418 418 EBUG_ON(path->uptodate); 419 419 420 - path->should_be_locked = true; 421 - trace_btree_path_should_be_locked(trans, path); 420 + if (!path->should_be_locked) { 421 + path->should_be_locked = true; 422 + trace_btree_path_should_be_locked(trans, path); 423 + } 422 424 } 423 425 424 426 static inline void __btree_path_set_level_up(struct btree_trans *trans,
+29
fs/bcachefs/btree_types.h
··· 617 617 x(dying) \ 618 618 x(fake) \ 619 619 x(need_rewrite) \ 620 + x(need_rewrite_error) \ 621 + x(need_rewrite_degraded) \ 622 + x(need_rewrite_ptr_written_zero) \ 620 623 x(never_write) \ 621 624 x(pinned) 622 625 ··· 643 640 644 641 BTREE_FLAGS() 645 642 #undef x 643 + 644 + #define BTREE_NODE_REWRITE_REASON() \ 645 + x(none) \ 646 + x(unknown) \ 647 + x(error) \ 648 + x(degraded) \ 649 + x(ptr_written_zero) 650 + 651 + enum btree_node_rewrite_reason { 652 + #define x(n) BTREE_NODE_REWRITE_##n, 653 + BTREE_NODE_REWRITE_REASON() 654 + #undef x 655 + }; 656 + 657 + static inline enum btree_node_rewrite_reason btree_node_rewrite_reason(struct btree *b) 658 + { 659 + if (btree_node_need_rewrite_ptr_written_zero(b)) 660 + return BTREE_NODE_REWRITE_ptr_written_zero; 661 + if (btree_node_need_rewrite_degraded(b)) 662 + return BTREE_NODE_REWRITE_degraded; 663 + if (btree_node_need_rewrite_error(b)) 664 + return BTREE_NODE_REWRITE_error; 665 + if (btree_node_need_rewrite(b)) 666 + return BTREE_NODE_REWRITE_unknown; 667 + return BTREE_NODE_REWRITE_none; 668 + } 646 669 647 670 static inline struct btree_write *btree_current_write(struct btree *b) 648 671 {
+31 -2
fs/bcachefs/btree_update_interior.c
··· 1138 1138 start_time); 1139 1139 } 1140 1140 1141 + static const char * const btree_node_reawrite_reason_strs[] = { 1142 + #define x(n) #n, 1143 + BTREE_NODE_REWRITE_REASON() 1144 + #undef x 1145 + NULL, 1146 + }; 1147 + 1141 1148 static struct btree_update * 1142 1149 bch2_btree_update_start(struct btree_trans *trans, struct btree_path *path, 1143 1150 unsigned level_start, bool split, ··· 1238 1231 mutex_lock(&c->btree_interior_update_lock); 1239 1232 list_add_tail(&as->list, &c->btree_interior_update_list); 1240 1233 mutex_unlock(&c->btree_interior_update_lock); 1234 + 1235 + struct btree *b = btree_path_node(path, path->level); 1236 + as->node_start = b->data->min_key; 1237 + as->node_end = b->data->max_key; 1238 + as->node_needed_rewrite = btree_node_rewrite_reason(b); 1239 + as->node_written = b->written; 1240 + as->node_sectors = btree_buf_bytes(b) >> 9; 1241 + as->node_remaining = __bch2_btree_u64s_remaining(b, 1242 + btree_bkey_last(b, bset_tree_last(b))); 1241 1243 1242 1244 /* 1243 1245 * We don't want to allocate if we're in an error state, that can cause ··· 2124 2108 if (ret) 2125 2109 goto err; 2126 2110 2111 + as->node_start = prev->data->min_key; 2112 + as->node_end = next->data->max_key; 2113 + 2127 2114 trace_and_count(c, btree_node_merge, trans, b); 2128 2115 2129 2116 n = bch2_btree_node_alloc(as, trans, b->c.level); ··· 2700 2681 2701 2682 prt_str(out, " "); 2702 2683 bch2_btree_id_to_text(out, as->btree_id); 2703 - prt_printf(out, " l=%u-%u mode=%s nodes_written=%u cl.remaining=%u journal_seq=%llu\n", 2684 + prt_printf(out, " l=%u-%u ", 2704 2685 as->update_level_start, 2705 - as->update_level_end, 2686 + as->update_level_end); 2687 + bch2_bpos_to_text(out, as->node_start); 2688 + prt_char(out, ' '); 2689 + bch2_bpos_to_text(out, as->node_end); 2690 + prt_printf(out, "\nwritten %u/%u u64s_remaining %u need_rewrite %s", 2691 + as->node_written, 2692 + as->node_sectors, 2693 + as->node_remaining, 2694 + btree_node_reawrite_reason_strs[as->node_needed_rewrite]); 2695 + 2696 + prt_printf(out, "\nmode=%s nodes_written=%u cl.remaining=%u journal_seq=%llu\n", 2706 2697 bch2_btree_update_modes[as->mode], 2707 2698 as->nodes_written, 2708 2699 closure_nr_remaining(&as->cl),
+7
fs/bcachefs/btree_update_interior.h
··· 57 57 unsigned took_gc_lock:1; 58 58 59 59 enum btree_id btree_id; 60 + struct bpos node_start; 61 + struct bpos node_end; 62 + enum btree_node_rewrite_reason node_needed_rewrite; 63 + u16 node_written; 64 + u16 node_sectors; 65 + u16 node_remaining; 66 + 60 67 unsigned update_level_start; 61 68 unsigned update_level_end; 62 69
+2 -2
fs/bcachefs/chardev.c
··· 399 399 return ret; 400 400 } 401 401 402 - static long bch2_ioctl_fs_usage(struct bch_fs *c, 402 + static noinline_for_stack long bch2_ioctl_fs_usage(struct bch_fs *c, 403 403 struct bch_ioctl_fs_usage __user *user_arg) 404 404 { 405 405 struct bch_ioctl_fs_usage arg = {}; ··· 469 469 } 470 470 471 471 /* obsolete, didn't allow for new data types: */ 472 - static long bch2_ioctl_dev_usage(struct bch_fs *c, 472 + static noinline_for_stack long bch2_ioctl_dev_usage(struct bch_fs *c, 473 473 struct bch_ioctl_dev_usage __user *user_arg) 474 474 { 475 475 struct bch_ioctl_dev_usage arg;
+3 -1
fs/bcachefs/disk_accounting.c
··· 618 618 for (unsigned j = 0; j < nr; j++) 619 619 src_v[j] -= dst_v[j]; 620 620 621 - if (fsck_err(trans, accounting_mismatch, "%s", buf.buf)) { 621 + bch2_trans_unlock_long(trans); 622 + 623 + if (fsck_err(c, accounting_mismatch, "%s", buf.buf)) { 622 624 percpu_up_write(&c->mark_lock); 623 625 ret = commit_do(trans, NULL, NULL, 0, 624 626 bch2_disk_accounting_mod(trans, &acc_k, src_v, nr, false));
+4 -1
fs/bcachefs/error.c
··· 69 69 if (trans) 70 70 bch2_trans_updates_to_text(&buf, trans); 71 71 bool ret = __bch2_inconsistent_error(c, &buf); 72 - bch2_print_str_nonblocking(c, KERN_ERR, buf.buf); 72 + bch2_print_str(c, KERN_ERR, buf.buf); 73 73 74 74 printbuf_exit(&buf); 75 75 return ret; ··· 620 620 621 621 if (s) 622 622 s->ret = ret; 623 + 624 + if (trans) 625 + ret = bch2_trans_log_str(trans, bch2_sb_error_strs[err]) ?: ret; 623 626 err_unlock: 624 627 mutex_unlock(&c->fsck_error_msgs_lock); 625 628 err:
+8
fs/bcachefs/fs.c
··· 2490 2490 if (ret) 2491 2491 goto err_stop_fs; 2492 2492 2493 + /* 2494 + * We might be doing a RO mount because other options required it, or we 2495 + * have no alloc info and it's a small image with no room to regenerate 2496 + * it 2497 + */ 2498 + if (c->opts.read_only) 2499 + fc->sb_flags |= SB_RDONLY; 2500 + 2493 2501 sb = sget(fc->fs_type, NULL, bch2_set_super, fc->sb_flags|SB_NOSEC, c); 2494 2502 ret = PTR_ERR_OR_ZERO(sb); 2495 2503 if (ret)
+9 -2
fs/bcachefs/io_read.c
··· 343 343 344 344 *bounce = true; 345 345 *read_full = promote_full; 346 + 347 + if (have_io_error(failed)) 348 + orig->self_healing = true; 349 + 346 350 return promote; 347 351 nopromote: 348 352 trace_io_read_nopromote(c, ret); ··· 639 635 prt_str(&buf, "(internal move) "); 640 636 641 637 prt_str(&buf, "data read error, "); 642 - if (!ret) 638 + if (!ret) { 643 639 prt_str(&buf, "successful retry"); 644 - else 640 + if (rbio->self_healing) 641 + prt_str(&buf, ", self healing"); 642 + } else 645 643 prt_str(&buf, bch2_err_str(ret)); 646 644 prt_newline(&buf); 645 + 647 646 648 647 if (!bkey_deleted(&sk.k->k)) { 649 648 bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(sk.k));
+1
fs/bcachefs/io_read.h
··· 44 44 have_ioref:1, 45 45 narrow_crcs:1, 46 46 saw_error:1, 47 + self_healing:1, 47 48 context:2; 48 49 }; 49 50 u16 _state;
+13 -9
fs/bcachefs/movinggc.c
··· 28 28 #include <linux/wait.h> 29 29 30 30 struct buckets_in_flight { 31 - struct rhashtable table; 31 + struct rhashtable *table; 32 32 struct move_bucket *first; 33 33 struct move_bucket *last; 34 34 size_t nr; ··· 98 98 static void move_bucket_free(struct buckets_in_flight *list, 99 99 struct move_bucket *b) 100 100 { 101 - int ret = rhashtable_remove_fast(&list->table, &b->hash, 101 + int ret = rhashtable_remove_fast(list->table, &b->hash, 102 102 bch_move_bucket_params); 103 103 BUG_ON(ret); 104 104 kfree(b); ··· 133 133 static bool bucket_in_flight(struct buckets_in_flight *list, 134 134 struct move_bucket_key k) 135 135 { 136 - return rhashtable_lookup_fast(&list->table, &k, bch_move_bucket_params); 136 + return rhashtable_lookup_fast(list->table, &k, bch_move_bucket_params); 137 137 } 138 138 139 139 static int bch2_copygc_get_buckets(struct moving_context *ctxt, ··· 185 185 goto err; 186 186 } 187 187 188 - ret2 = rhashtable_lookup_insert_fast(&buckets_in_flight->table, &b_i->hash, 188 + ret2 = rhashtable_lookup_insert_fast(buckets_in_flight->table, &b_i->hash, 189 189 bch_move_bucket_params); 190 190 BUG_ON(ret2); 191 191 ··· 350 350 struct buckets_in_flight buckets = {}; 351 351 u64 last, wait; 352 352 353 - int ret = rhashtable_init(&buckets.table, &bch_move_bucket_params); 353 + buckets.table = kzalloc(sizeof(*buckets.table), GFP_KERNEL); 354 + int ret = !buckets.table 355 + ? -ENOMEM 356 + : rhashtable_init(buckets.table, &bch_move_bucket_params); 354 357 bch_err_msg(c, ret, "allocating copygc buckets in flight"); 355 358 if (ret) 356 - return ret; 359 + goto err; 357 360 358 361 set_freezable(); 359 362 ··· 424 421 } 425 422 426 423 move_buckets_wait(&ctxt, &buckets, true); 427 - rhashtable_destroy(&buckets.table); 424 + rhashtable_destroy(buckets.table); 428 425 bch2_moving_ctxt_exit(&ctxt); 429 426 bch2_move_stats_exit(&move_stats, c); 430 - 431 - return 0; 427 + err: 428 + kfree(buckets.table); 429 + return ret; 432 430 } 433 431 434 432 void bch2_copygc_stop(struct bch_fs *c)
+10
fs/bcachefs/namei.c
··· 175 175 new_inode->bi_dir_offset = dir_offset; 176 176 } 177 177 178 + if (S_ISDIR(mode)) { 179 + ret = bch2_maybe_propagate_has_case_insensitive(trans, 180 + (subvol_inum) { 181 + new_inode->bi_subvol ?: dir.subvol, 182 + new_inode->bi_inum }, 183 + new_inode); 184 + if (ret) 185 + goto err; 186 + } 187 + 178 188 if (S_ISDIR(mode) && 179 189 !new_inode->bi_subvol) 180 190 new_inode->bi_depth = dir_u->bi_depth + 1;
+11 -11
fs/bcachefs/rcu_pending.c
··· 182 182 while (nr--) 183 183 kfree(*p); 184 184 } 185 - 186 - #define local_irq_save(flags) \ 187 - do { \ 188 - flags = 0; \ 189 - } while (0) 190 185 #endif 191 186 192 187 static noinline void __process_finished_items(struct rcu_pending *pending, ··· 424 429 425 430 BUG_ON((ptr != NULL) != (pending->process == RCU_PENDING_KVFREE_FN)); 426 431 427 - local_irq_save(flags); 428 - p = this_cpu_ptr(pending->p); 429 - spin_lock(&p->lock); 432 + /* We could technically be scheduled before taking the lock and end up 433 + * using a different cpu's rcu_pending_pcpu: that's ok, it needs a lock 434 + * anyways 435 + * 436 + * And we have to do it this way to avoid breaking PREEMPT_RT, which 437 + * redefines how spinlocks work: 438 + */ 439 + p = raw_cpu_ptr(pending->p); 440 + spin_lock_irqsave(&p->lock, flags); 430 441 rcu_gp_poll_state_t seq = __get_state_synchronize_rcu(pending->srcu); 431 442 restart: 432 443 if (may_sleep && ··· 521 520 goto free_node; 522 521 } 523 522 524 - local_irq_save(flags); 525 - p = this_cpu_ptr(pending->p); 526 - spin_lock(&p->lock); 523 + p = raw_cpu_ptr(pending->p); 524 + spin_lock_irqsave(&p->lock, flags); 527 525 goto restart; 528 526 } 529 527
+21 -6
fs/bcachefs/recovery.c
··· 99 99 goto out; 100 100 case BTREE_ID_snapshots: 101 101 ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_reconstruct_snapshots, 0) ?: ret; 102 + ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_check_topology, 0) ?: ret; 102 103 ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_scan_for_btree_nodes, 0) ?: ret; 103 104 goto out; 104 105 default: 106 + ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_check_topology, 0) ?: ret; 105 107 ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_scan_for_btree_nodes, 0) ?: ret; 106 108 goto out; 107 109 } ··· 273 271 goto out; 274 272 275 273 struct btree_path *path = btree_iter_path(trans, &iter); 276 - if (unlikely(!btree_path_node(path, k->level))) { 274 + if (unlikely(!btree_path_node(path, k->level) && 275 + !k->allocated)) { 276 + struct bch_fs *c = trans->c; 277 + 278 + if (!(c->recovery.passes_complete & (BIT_ULL(BCH_RECOVERY_PASS_scan_for_btree_nodes)| 279 + BIT_ULL(BCH_RECOVERY_PASS_check_topology)))) { 280 + bch_err(c, "have key in journal replay for btree depth that does not exist, confused"); 281 + ret = -EINVAL; 282 + } 283 + #if 0 277 284 bch2_trans_iter_exit(trans, &iter); 278 285 bch2_trans_node_iter_init(trans, &iter, k->btree_id, k->k->k.p, 279 286 BTREE_MAX_DEPTH, 0, iter_flags); 280 287 ret = bch2_btree_iter_traverse(trans, &iter) ?: 281 288 bch2_btree_increase_depth(trans, iter.path, 0) ?: 282 289 -BCH_ERR_transaction_restart_nested; 290 + #endif 291 + k->overwritten = true; 283 292 goto out; 284 293 } 285 294 ··· 752 739 ? min(c->opts.recovery_pass_last, BCH_RECOVERY_PASS_snapshots_read) 753 740 : BCH_RECOVERY_PASS_snapshots_read; 754 741 c->opts.nochanges = true; 755 - c->opts.read_only = true; 756 742 } 743 + 744 + if (c->opts.nochanges) 745 + c->opts.read_only = true; 757 746 758 747 mutex_lock(&c->sb_lock); 759 748 struct bch_sb_field_ext *ext = bch2_sb_field_get(c->disk_sb.sb, ext); ··· 1108 1093 out: 1109 1094 bch2_flush_fsck_errs(c); 1110 1095 1111 - if (!IS_ERR(clean)) 1112 - kfree(clean); 1113 - 1114 1096 if (!ret && 1115 1097 test_bit(BCH_FS_need_delete_dead_snapshots, &c->flags) && 1116 1098 !c->opts.nochanges) { ··· 1116 1104 } 1117 1105 1118 1106 bch_err_fn(c, ret); 1107 + final_out: 1108 + if (!IS_ERR(clean)) 1109 + kfree(clean); 1119 1110 return ret; 1120 1111 err: 1121 1112 fsck_err: ··· 1132 1117 bch2_print_str(c, KERN_ERR, buf.buf); 1133 1118 printbuf_exit(&buf); 1134 1119 } 1135 - return ret; 1120 + goto final_out; 1136 1121 } 1137 1122 1138 1123 int bch2_fs_initialize(struct bch_fs *c)
+11 -3
fs/bcachefs/recovery_passes.c
··· 294 294 enum bch_run_recovery_pass_flags *flags) 295 295 { 296 296 struct bch_fs_recovery *r = &c->recovery; 297 - bool in_recovery = test_bit(BCH_FS_in_recovery, &c->flags); 298 - bool persistent = !in_recovery || !(*flags & RUN_RECOVERY_PASS_nopersistent); 297 + 298 + /* 299 + * Never run scan_for_btree_nodes persistently: check_topology will run 300 + * it if required 301 + */ 302 + if (pass == BCH_RECOVERY_PASS_scan_for_btree_nodes) 303 + *flags |= RUN_RECOVERY_PASS_nopersistent; 299 304 300 305 if ((*flags & RUN_RECOVERY_PASS_ratelimit) && 301 306 !bch2_recovery_pass_want_ratelimit(c, pass)) ··· 315 310 * Otherwise, we run run_explicit_recovery_pass when we find damage, so 316 311 * it should run again even if it's already run: 317 312 */ 313 + bool in_recovery = test_bit(BCH_FS_in_recovery, &c->flags); 314 + bool persistent = !in_recovery || !(*flags & RUN_RECOVERY_PASS_nopersistent); 318 315 319 316 if (persistent 320 317 ? !(c->sb.recovery_passes_required & BIT_ULL(pass)) ··· 340 333 { 341 334 struct bch_fs_recovery *r = &c->recovery; 342 335 int ret = 0; 336 + 343 337 344 338 lockdep_assert_held(&c->sb_lock); 345 339 ··· 454 446 455 447 int bch2_run_print_explicit_recovery_pass(struct bch_fs *c, enum bch_recovery_pass pass) 456 448 { 457 - enum bch_run_recovery_pass_flags flags = RUN_RECOVERY_PASS_nopersistent; 449 + enum bch_run_recovery_pass_flags flags = 0; 458 450 459 451 if (!recovery_pass_needs_set(c, pass, &flags)) 460 452 return 0;
+4 -1
fs/bcachefs/sb-downgrade.c
··· 253 253 254 254 static int downgrade_table_extra(struct bch_fs *c, darray_char *table) 255 255 { 256 + unsigned dst_offset = table->nr; 256 257 struct bch_sb_field_downgrade_entry *dst = (void *) &darray_top(*table); 257 258 unsigned bytes = sizeof(*dst) + sizeof(dst->errors[0]) * le16_to_cpu(dst->nr_errors); 258 259 int ret = 0; ··· 269 268 if (ret) 270 269 return ret; 271 270 271 + dst = (void *) &table->data[dst_offset]; 272 + dst->nr_errors = cpu_to_le16(nr_errors + 1); 273 + 272 274 /* open coded __set_bit_le64, as dst is packed and 273 275 * dst->recovery_passes is misaligned */ 274 276 unsigned b = BCH_RECOVERY_PASS_STABLE_check_allocations; ··· 282 278 break; 283 279 } 284 280 285 - dst->nr_errors = cpu_to_le16(nr_errors); 286 281 return ret; 287 282 } 288 283
+5 -5
fs/bcachefs/sb-errors_format.h
··· 134 134 x(bucket_gens_to_invalid_buckets, 121, FSCK_AUTOFIX) \ 135 135 x(bucket_gens_nonzero_for_invalid_buckets, 122, FSCK_AUTOFIX) \ 136 136 x(need_discard_freespace_key_to_invalid_dev_bucket, 123, 0) \ 137 - x(need_discard_freespace_key_bad, 124, 0) \ 137 + x(need_discard_freespace_key_bad, 124, FSCK_AUTOFIX) \ 138 138 x(discarding_bucket_not_in_need_discard_btree, 291, 0) \ 139 139 x(backpointer_bucket_offset_wrong, 125, 0) \ 140 140 x(backpointer_level_bad, 294, 0) \ ··· 165 165 x(ptr_to_missing_replicas_entry, 149, FSCK_AUTOFIX) \ 166 166 x(ptr_to_missing_stripe, 150, 0) \ 167 167 x(ptr_to_incorrect_stripe, 151, 0) \ 168 - x(ptr_gen_newer_than_bucket_gen, 152, 0) \ 168 + x(ptr_gen_newer_than_bucket_gen, 152, FSCK_AUTOFIX) \ 169 169 x(ptr_too_stale, 153, 0) \ 170 170 x(stale_dirty_ptr, 154, FSCK_AUTOFIX) \ 171 171 x(ptr_bucket_data_type_mismatch, 155, 0) \ ··· 236 236 x(inode_multiple_links_but_nlink_0, 207, FSCK_AUTOFIX) \ 237 237 x(inode_wrong_backpointer, 208, FSCK_AUTOFIX) \ 238 238 x(inode_wrong_nlink, 209, FSCK_AUTOFIX) \ 239 - x(inode_has_child_snapshots_wrong, 287, 0) \ 239 + x(inode_has_child_snapshots_wrong, 287, FSCK_AUTOFIX) \ 240 240 x(inode_unreachable, 210, FSCK_AUTOFIX) \ 241 241 x(inode_journal_seq_in_future, 299, FSCK_AUTOFIX) \ 242 242 x(inode_i_sectors_underflow, 312, FSCK_AUTOFIX) \ ··· 279 279 x(root_dir_missing, 239, 0) \ 280 280 x(root_inode_not_dir, 240, 0) \ 281 281 x(dir_loop, 241, 0) \ 282 - x(hash_table_key_duplicate, 242, 0) \ 283 - x(hash_table_key_wrong_offset, 243, 0) \ 282 + x(hash_table_key_duplicate, 242, FSCK_AUTOFIX) \ 283 + x(hash_table_key_wrong_offset, 243, FSCK_AUTOFIX) \ 284 284 x(unlinked_inode_not_on_deleted_list, 244, FSCK_AUTOFIX) \ 285 285 x(reflink_p_front_pad_bad, 245, 0) \ 286 286 x(journal_entry_dup_same_device, 246, 0) \
+30 -4
fs/bcachefs/sb-members.c
··· 325 325 { 326 326 struct bch_sb_field_members_v1 *mi = field_to_type(f, members_v1); 327 327 struct bch_sb_field_disk_groups *gi = bch2_sb_field_get(sb, disk_groups); 328 - unsigned i; 329 328 330 - for (i = 0; i < sb->nr_devices; i++) 329 + if (vstruct_end(&mi->field) <= (void *) &mi->_members[0]) { 330 + prt_printf(out, "field ends before start of entries"); 331 + return; 332 + } 333 + 334 + unsigned nr = (vstruct_end(&mi->field) - (void *) &mi->_members[0]) / sizeof(mi->_members[0]); 335 + if (nr != sb->nr_devices) 336 + prt_printf(out, "nr_devices mismatch: have %i entries, should be %u", nr, sb->nr_devices); 337 + 338 + for (unsigned i = 0; i < min(sb->nr_devices, nr); i++) 331 339 member_to_text(out, members_v1_get(mi, i), gi, sb, i); 332 340 } 333 341 ··· 349 341 { 350 342 struct bch_sb_field_members_v2 *mi = field_to_type(f, members_v2); 351 343 struct bch_sb_field_disk_groups *gi = bch2_sb_field_get(sb, disk_groups); 352 - unsigned i; 353 344 354 - for (i = 0; i < sb->nr_devices; i++) 345 + if (vstruct_end(&mi->field) <= (void *) &mi->_members[0]) { 346 + prt_printf(out, "field ends before start of entries"); 347 + return; 348 + } 349 + 350 + if (!le16_to_cpu(mi->member_bytes)) { 351 + prt_printf(out, "member_bytes 0"); 352 + return; 353 + } 354 + 355 + unsigned nr = (vstruct_end(&mi->field) - (void *) &mi->_members[0]) / le16_to_cpu(mi->member_bytes); 356 + if (nr != sb->nr_devices) 357 + prt_printf(out, "nr_devices mismatch: have %i entries, should be %u", nr, sb->nr_devices); 358 + 359 + /* 360 + * We call to_text() on superblock sections that haven't passed 361 + * validate, so we can't trust sb->nr_devices. 362 + */ 363 + 364 + for (unsigned i = 0; i < min(sb->nr_devices, nr); i++) 355 365 member_to_text(out, members_v2_get(mi, i), gi, sb, i); 356 366 } 357 367
+33 -14
fs/bcachefs/super.c
··· 104 104 #undef x 105 105 106 106 static void __bch2_print_str(struct bch_fs *c, const char *prefix, 107 - const char *str, bool nonblocking) 107 + const char *str) 108 108 { 109 109 #ifdef __KERNEL__ 110 110 struct stdio_redirect *stdio = bch2_fs_stdio_redirect(c); ··· 114 114 return; 115 115 } 116 116 #endif 117 - bch2_print_string_as_lines(KERN_ERR, str, nonblocking); 117 + bch2_print_string_as_lines(KERN_ERR, str); 118 118 } 119 119 120 120 void bch2_print_str(struct bch_fs *c, const char *prefix, const char *str) 121 121 { 122 - __bch2_print_str(c, prefix, str, false); 123 - } 124 - 125 - void bch2_print_str_nonblocking(struct bch_fs *c, const char *prefix, const char *str) 126 - { 127 - __bch2_print_str(c, prefix, str, true); 122 + __bch2_print_str(c, prefix, str); 128 123 } 129 124 130 125 __printf(2, 0) ··· 1067 1072 static void print_mount_opts(struct bch_fs *c) 1068 1073 { 1069 1074 enum bch_opt_id i; 1070 - struct printbuf p = PRINTBUF; 1071 - bool first = true; 1075 + CLASS(printbuf, p)(); 1076 + bch2_log_msg_start(c, &p); 1072 1077 1073 1078 prt_str(&p, "starting version "); 1074 1079 bch2_version_to_text(&p, c->sb.version); 1075 1080 1081 + bool first = true; 1076 1082 for (i = 0; i < bch2_opts_nr; i++) { 1077 1083 const struct bch_option *opt = &bch2_opt_table[i]; 1078 1084 u64 v = bch2_opt_get_by_id(&c->opts, i); ··· 1090 1094 } 1091 1095 1092 1096 if (c->sb.version_incompat_allowed != c->sb.version) { 1093 - prt_printf(&p, "\n allowing incompatible features above "); 1097 + prt_printf(&p, "\nallowing incompatible features above "); 1094 1098 bch2_version_to_text(&p, c->sb.version_incompat_allowed); 1095 1099 } 1096 1100 1097 1101 if (c->opts.verbose) { 1098 - prt_printf(&p, "\n features: "); 1102 + prt_printf(&p, "\nfeatures: "); 1099 1103 prt_bitflags(&p, bch2_sb_features, c->sb.features); 1100 1104 } 1101 1105 1102 - bch_info(c, "%s", p.buf); 1103 - printbuf_exit(&p); 1106 + if (c->sb.multi_device) { 1107 + prt_printf(&p, "\nwith devices"); 1108 + for_each_online_member(c, ca, BCH_DEV_READ_REF_bch2_online_devs) { 1109 + prt_char(&p, ' '); 1110 + prt_str(&p, ca->name); 1111 + } 1112 + } 1113 + 1114 + bch2_print_str(c, KERN_INFO, p.buf); 1104 1115 } 1105 1116 1106 1117 static bool bch2_fs_may_start(struct bch_fs *c) ··· 1997 1994 if (ret) 1998 1995 goto err_late; 1999 1996 } 1997 + 1998 + /* 1999 + * We just changed the superblock UUID, invalidate cache and send a 2000 + * uevent to update /dev/disk/by-uuid 2001 + */ 2002 + invalidate_bdev(ca->disk_sb.bdev); 2003 + 2004 + char uuid_str[37]; 2005 + snprintf(uuid_str, sizeof(uuid_str), "UUID=%pUb", &c->sb.uuid); 2006 + 2007 + char *envp[] = { 2008 + "CHANGE=uuid", 2009 + uuid_str, 2010 + NULL, 2011 + }; 2012 + kobject_uevent_env(&ca->disk_sb.bdev->bd_device.kobj, KOBJ_CHANGE, envp); 2000 2013 2001 2014 up_write(&c->state_lock); 2002 2015 out:
+2 -8
fs/bcachefs/util.c
··· 262 262 return true; 263 263 } 264 264 265 - void bch2_print_string_as_lines(const char *prefix, const char *lines, 266 - bool nonblocking) 265 + void bch2_print_string_as_lines(const char *prefix, const char *lines) 267 266 { 268 267 bool locked = false; 269 268 const char *p; ··· 272 273 return; 273 274 } 274 275 275 - if (!nonblocking) { 276 - console_lock(); 277 - locked = true; 278 - } else { 279 - locked = console_trylock(); 280 - } 276 + locked = console_trylock(); 281 277 282 278 while (*lines) { 283 279 p = strchrnul(lines, '\n');
+1 -1
fs/bcachefs/util.h
··· 214 214 void bch2_prt_u64_base2_nbits(struct printbuf *, u64, unsigned); 215 215 void bch2_prt_u64_base2(struct printbuf *, u64); 216 216 217 - void bch2_print_string_as_lines(const char *, const char *, bool); 217 + void bch2_print_string_as_lines(const char *, const char *); 218 218 219 219 typedef DARRAY(unsigned long) bch_stacktrace; 220 220 int bch2_save_backtrace(bch_stacktrace *stack, struct task_struct *, unsigned, gfp_t);
+6 -2
fs/file.c
··· 1198 1198 if (!(file->f_mode & FMODE_ATOMIC_POS) && !file->f_op->iterate_shared) 1199 1199 return false; 1200 1200 1201 - VFS_WARN_ON_ONCE((file_count(file) > 1) && 1202 - !mutex_is_locked(&file->f_pos_lock)); 1201 + /* 1202 + * Note that we are not guaranteed to be called after fdget_pos() on 1203 + * this file obj, in which case the caller is expected to provide the 1204 + * appropriate locking. 1205 + */ 1206 + 1203 1207 return true; 1204 1208 } 1205 1209
+13 -4
fs/namei.c
··· 2917 2917 * @base: base directory to lookup from 2918 2918 * 2919 2919 * Look up a dentry by name in the dcache, returning NULL if it does not 2920 - * currently exist. The function does not try to create a dentry. 2920 + * currently exist. The function does not try to create a dentry and if one 2921 + * is found it doesn't try to revalidate it. 2921 2922 * 2922 2923 * Note that this routine is purely a helper for filesystem usage and should 2923 2924 * not be called by generic code. It does no permission checking. ··· 2934 2933 if (err) 2935 2934 return ERR_PTR(err); 2936 2935 2937 - return lookup_dcache(name, base, 0); 2936 + return d_lookup(base, name); 2938 2937 } 2939 2938 EXPORT_SYMBOL(try_lookup_noperm); 2940 2939 ··· 3058 3057 * Note that this routine is purely a helper for filesystem usage and should 3059 3058 * not be called by generic code. It does no permission checking. 3060 3059 * 3061 - * Unlike lookup_noperm, it should be called without the parent 3060 + * Unlike lookup_noperm(), it should be called without the parent 3062 3061 * i_rwsem held, and will take the i_rwsem itself if necessary. 3062 + * 3063 + * Unlike try_lookup_noperm() it *does* revalidate the dentry if it already 3064 + * existed. 3063 3065 */ 3064 3066 struct dentry *lookup_noperm_unlocked(struct qstr *name, struct dentry *base) 3065 3067 { 3066 3068 struct dentry *ret; 3069 + int err; 3067 3070 3068 - ret = try_lookup_noperm(name, base); 3071 + err = lookup_noperm_common(name, base); 3072 + if (err) 3073 + return ERR_PTR(err); 3074 + 3075 + ret = lookup_dcache(name, base, 0); 3069 3076 if (!ret) 3070 3077 ret = lookup_slow(name, base, 0); 3071 3078 return ret;
+8 -2
fs/overlayfs/namei.c
··· 1393 1393 bool ovl_lower_positive(struct dentry *dentry) 1394 1394 { 1395 1395 struct ovl_entry *poe = OVL_E(dentry->d_parent); 1396 - struct qstr *name = &dentry->d_name; 1396 + const struct qstr *name = &dentry->d_name; 1397 1397 const struct cred *old_cred; 1398 1398 unsigned int i; 1399 1399 bool positive = false; ··· 1416 1416 struct dentry *this; 1417 1417 struct ovl_path *parentpath = &ovl_lowerstack(poe)[i]; 1418 1418 1419 + /* 1420 + * We need to make a non-const copy of dentry->d_name, 1421 + * because lookup_one_positive_unlocked() will hash name 1422 + * with parentpath base, which is on another (lower fs). 1423 + */ 1419 1424 this = lookup_one_positive_unlocked( 1420 1425 mnt_idmap(parentpath->layer->mnt), 1421 - name, parentpath->dentry); 1426 + &QSTR_LEN(name->name, name->len), 1427 + parentpath->dentry); 1422 1428 if (IS_ERR(this)) { 1423 1429 switch (PTR_ERR(this)) { 1424 1430 case -ENOENT:
+5 -3
fs/overlayfs/overlayfs.h
··· 246 246 struct dentry *dentry, 247 247 umode_t mode) 248 248 { 249 - dentry = vfs_mkdir(ovl_upper_mnt_idmap(ofs), dir, dentry, mode); 250 - pr_debug("mkdir(%pd2, 0%o) = %i\n", dentry, mode, PTR_ERR_OR_ZERO(dentry)); 251 - return dentry; 249 + struct dentry *ret; 250 + 251 + ret = vfs_mkdir(ovl_upper_mnt_idmap(ofs), dir, dentry, mode); 252 + pr_debug("mkdir(%pd2, 0%o) = %i\n", dentry, mode, PTR_ERR_OR_ZERO(ret)); 253 + return ret; 252 254 } 253 255 254 256 static inline int ovl_do_mknod(struct ovl_fs *ofs,
+1 -1
fs/pidfs.c
··· 366 366 kinfo.pid = task_pid_vnr(task); 367 367 kinfo.mask |= PIDFD_INFO_PID; 368 368 369 - if (kinfo.pid == 0 || kinfo.tgid == 0 || (kinfo.ppid == 0 && kinfo.pid != 1)) 369 + if (kinfo.pid == 0 || kinfo.tgid == 0) 370 370 return -ESRCH; 371 371 372 372 copy_out:
+4 -4
fs/smb/client/cached_dir.h
··· 21 21 struct cached_dirents { 22 22 bool is_valid:1; 23 23 bool is_failed:1; 24 - struct dir_context *ctx; /* 25 - * Only used to make sure we only take entries 26 - * from a single context. Never dereferenced. 27 - */ 24 + struct file *file; /* 25 + * Used to associate the cache with a single 26 + * open file instance. 27 + */ 28 28 struct mutex de_mutex; 29 29 int pos; /* Expected ctx->pos */ 30 30 struct list_head entries;
+8 -2
fs/smb/client/connect.c
··· 3718 3718 goto out; 3719 3719 } 3720 3720 3721 - /* if new SMB3.11 POSIX extensions are supported do not remap / and \ */ 3722 - if (tcon->posix_extensions) 3721 + /* 3722 + * if new SMB3.11 POSIX extensions are supported, do not change anything in the 3723 + * path (i.e., do not remap / and \ and do not map any special characters) 3724 + */ 3725 + if (tcon->posix_extensions) { 3723 3726 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_POSIX_PATHS; 3727 + cifs_sb->mnt_cifs_flags &= ~(CIFS_MOUNT_MAP_SFM_CHR | 3728 + CIFS_MOUNT_MAP_SPECIAL_CHR); 3729 + } 3724 3730 3725 3731 #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY 3726 3732 /* tell server which Unix caps we support */
+6 -3
fs/smb/client/file.c
··· 999 999 rc = cifs_get_readable_path(tcon, full_path, &cfile); 1000 1000 } 1001 1001 if (rc == 0) { 1002 - if (file->f_flags == cfile->f_flags) { 1002 + unsigned int oflags = file->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC); 1003 + unsigned int cflags = cfile->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC); 1004 + 1005 + if (cifs_convert_flags(oflags, 0) == cifs_convert_flags(cflags, 0) && 1006 + (oflags & (O_SYNC|O_DIRECT)) == (cflags & (O_SYNC|O_DIRECT))) { 1003 1007 file->private_data = cfile; 1004 1008 spin_lock(&CIFS_I(inode)->deferred_lock); 1005 1009 cifs_del_deferred_close(cfile); 1006 1010 spin_unlock(&CIFS_I(inode)->deferred_lock); 1007 1011 goto use_cache; 1008 - } else { 1009 - _cifsFileInfo_put(cfile, true, false); 1010 1012 } 1013 + _cifsFileInfo_put(cfile, true, false); 1011 1014 } else { 1012 1015 /* hard link on the defeered close file */ 1013 1016 rc = cifs_get_hardlink_path(tcon, inode, file);
+15 -13
fs/smb/client/readdir.c
··· 851 851 } 852 852 853 853 static void update_cached_dirents_count(struct cached_dirents *cde, 854 - struct dir_context *ctx) 854 + struct file *file) 855 855 { 856 - if (cde->ctx != ctx) 856 + if (cde->file != file) 857 857 return; 858 858 if (cde->is_valid || cde->is_failed) 859 859 return; ··· 862 862 } 863 863 864 864 static void finished_cached_dirents_count(struct cached_dirents *cde, 865 - struct dir_context *ctx) 865 + struct dir_context *ctx, struct file *file) 866 866 { 867 - if (cde->ctx != ctx) 867 + if (cde->file != file) 868 868 return; 869 869 if (cde->is_valid || cde->is_failed) 870 870 return; ··· 877 877 static void add_cached_dirent(struct cached_dirents *cde, 878 878 struct dir_context *ctx, 879 879 const char *name, int namelen, 880 - struct cifs_fattr *fattr) 880 + struct cifs_fattr *fattr, 881 + struct file *file) 881 882 { 882 883 struct cached_dirent *de; 883 884 884 - if (cde->ctx != ctx) 885 + if (cde->file != file) 885 886 return; 886 887 if (cde->is_valid || cde->is_failed) 887 888 return; ··· 912 911 static bool cifs_dir_emit(struct dir_context *ctx, 913 912 const char *name, int namelen, 914 913 struct cifs_fattr *fattr, 915 - struct cached_fid *cfid) 914 + struct cached_fid *cfid, 915 + struct file *file) 916 916 { 917 917 bool rc; 918 918 ino_t ino = cifs_uniqueid_to_ino_t(fattr->cf_uniqueid); ··· 925 923 if (cfid) { 926 924 mutex_lock(&cfid->dirents.de_mutex); 927 925 add_cached_dirent(&cfid->dirents, ctx, name, namelen, 928 - fattr); 926 + fattr, file); 929 927 mutex_unlock(&cfid->dirents.de_mutex); 930 928 } 931 929 ··· 1025 1023 cifs_prime_dcache(file_dentry(file), &name, &fattr); 1026 1024 1027 1025 return !cifs_dir_emit(ctx, name.name, name.len, 1028 - &fattr, cfid); 1026 + &fattr, cfid, file); 1029 1027 } 1030 1028 1031 1029 ··· 1076 1074 * we need to initialize scanning and storing the 1077 1075 * directory content. 1078 1076 */ 1079 - if (ctx->pos == 0 && cfid->dirents.ctx == NULL) { 1080 - cfid->dirents.ctx = ctx; 1077 + if (ctx->pos == 0 && cfid->dirents.file == NULL) { 1078 + cfid->dirents.file = file; 1081 1079 cfid->dirents.pos = 2; 1082 1080 } 1083 1081 /* ··· 1145 1143 } else { 1146 1144 if (cfid) { 1147 1145 mutex_lock(&cfid->dirents.de_mutex); 1148 - finished_cached_dirents_count(&cfid->dirents, ctx); 1146 + finished_cached_dirents_count(&cfid->dirents, ctx, file); 1149 1147 mutex_unlock(&cfid->dirents.de_mutex); 1150 1148 } 1151 1149 cifs_dbg(FYI, "Could not find entry\n"); ··· 1186 1184 ctx->pos++; 1187 1185 if (cfid) { 1188 1186 mutex_lock(&cfid->dirents.de_mutex); 1189 - update_cached_dirents_count(&cfid->dirents, ctx); 1187 + update_cached_dirents_count(&cfid->dirents, file); 1190 1188 mutex_unlock(&cfid->dirents.de_mutex); 1191 1189 } 1192 1190
+1 -1
fs/smb/server/connection.c
··· 40 40 kvfree(conn->request_buf); 41 41 kfree(conn->preauth_info); 42 42 if (atomic_dec_and_test(&conn->refcnt)) { 43 - ksmbd_free_transport(conn->transport); 43 + conn->transport->ops->free_transport(conn->transport); 44 44 kfree(conn); 45 45 } 46 46 }
+1
fs/smb/server/connection.h
··· 133 133 void *buf, unsigned int len, 134 134 struct smb2_buffer_desc_v1 *desc, 135 135 unsigned int desc_len); 136 + void (*free_transport)(struct ksmbd_transport *kt); 136 137 }; 137 138 138 139 struct ksmbd_transport {
+56 -18
fs/smb/server/smb2pdu.c
··· 1607 1607 out_len = work->response_sz - 1608 1608 (le16_to_cpu(rsp->SecurityBufferOffset) + 4); 1609 1609 1610 - /* Check previous session */ 1611 - prev_sess_id = le64_to_cpu(req->PreviousSessionId); 1612 - if (prev_sess_id && prev_sess_id != sess->id) 1613 - destroy_previous_session(conn, sess->user, prev_sess_id); 1614 - 1615 1610 retval = ksmbd_krb5_authenticate(sess, in_blob, in_len, 1616 1611 out_blob, &out_len); 1617 1612 if (retval) { 1618 1613 ksmbd_debug(SMB, "krb5 authentication failed\n"); 1619 1614 return -EINVAL; 1620 1615 } 1616 + 1617 + /* Check previous session */ 1618 + prev_sess_id = le64_to_cpu(req->PreviousSessionId); 1619 + if (prev_sess_id && prev_sess_id != sess->id) 1620 + destroy_previous_session(conn, sess->user, prev_sess_id); 1621 + 1621 1622 rsp->SecurityBufferLength = cpu_to_le16(out_len); 1622 1623 1623 1624 if ((conn->sign || server_conf.enforced_signing) || ··· 4872 4871 sinfo = (struct smb2_file_standard_info *)rsp->Buffer; 4873 4872 delete_pending = ksmbd_inode_pending_delete(fp); 4874 4873 4875 - sinfo->AllocationSize = cpu_to_le64(stat.blocks << 9); 4876 - sinfo->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 4874 + if (ksmbd_stream_fd(fp) == false) { 4875 + sinfo->AllocationSize = cpu_to_le64(stat.blocks << 9); 4876 + sinfo->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 4877 + } else { 4878 + sinfo->AllocationSize = cpu_to_le64(fp->stream.size); 4879 + sinfo->EndOfFile = cpu_to_le64(fp->stream.size); 4880 + } 4877 4881 sinfo->NumberOfLinks = cpu_to_le32(get_nlink(&stat) - delete_pending); 4878 4882 sinfo->DeletePending = delete_pending; 4879 4883 sinfo->Directory = S_ISDIR(stat.mode) ? 1 : 0; ··· 4941 4935 file_info->ChangeTime = cpu_to_le64(time); 4942 4936 file_info->Attributes = fp->f_ci->m_fattr; 4943 4937 file_info->Pad1 = 0; 4944 - file_info->AllocationSize = 4945 - cpu_to_le64(stat.blocks << 9); 4946 - file_info->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 4938 + if (ksmbd_stream_fd(fp) == false) { 4939 + file_info->AllocationSize = 4940 + cpu_to_le64(stat.blocks << 9); 4941 + file_info->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 4942 + } else { 4943 + file_info->AllocationSize = cpu_to_le64(fp->stream.size); 4944 + file_info->EndOfFile = cpu_to_le64(fp->stream.size); 4945 + } 4947 4946 file_info->NumberOfLinks = 4948 4947 cpu_to_le32(get_nlink(&stat) - delete_pending); 4949 4948 file_info->DeletePending = delete_pending; ··· 4957 4946 file_info->IndexNumber = cpu_to_le64(stat.ino); 4958 4947 file_info->EASize = 0; 4959 4948 file_info->AccessFlags = fp->daccess; 4960 - file_info->CurrentByteOffset = cpu_to_le64(fp->filp->f_pos); 4949 + if (ksmbd_stream_fd(fp) == false) 4950 + file_info->CurrentByteOffset = cpu_to_le64(fp->filp->f_pos); 4951 + else 4952 + file_info->CurrentByteOffset = cpu_to_le64(fp->stream.pos); 4961 4953 file_info->Mode = fp->coption; 4962 4954 file_info->AlignmentRequirement = 0; 4963 4955 conv_len = smbConvertToUTF16((__le16 *)file_info->FileName, filename, ··· 5148 5134 time = ksmbd_UnixTimeToNT(stat.ctime); 5149 5135 file_info->ChangeTime = cpu_to_le64(time); 5150 5136 file_info->Attributes = fp->f_ci->m_fattr; 5151 - file_info->AllocationSize = cpu_to_le64(stat.blocks << 9); 5152 - file_info->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 5137 + if (ksmbd_stream_fd(fp) == false) { 5138 + file_info->AllocationSize = cpu_to_le64(stat.blocks << 9); 5139 + file_info->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 5140 + } else { 5141 + file_info->AllocationSize = cpu_to_le64(fp->stream.size); 5142 + file_info->EndOfFile = cpu_to_le64(fp->stream.size); 5143 + } 5153 5144 file_info->Reserved = cpu_to_le32(0); 5154 5145 rsp->OutputBufferLength = 5155 5146 cpu_to_le32(sizeof(struct smb2_file_ntwrk_info)); ··· 5177 5158 struct smb2_file_pos_info *file_info; 5178 5159 5179 5160 file_info = (struct smb2_file_pos_info *)rsp->Buffer; 5180 - file_info->CurrentByteOffset = cpu_to_le64(fp->filp->f_pos); 5161 + if (ksmbd_stream_fd(fp) == false) 5162 + file_info->CurrentByteOffset = cpu_to_le64(fp->filp->f_pos); 5163 + else 5164 + file_info->CurrentByteOffset = cpu_to_le64(fp->stream.pos); 5165 + 5181 5166 rsp->OutputBufferLength = 5182 5167 cpu_to_le32(sizeof(struct smb2_file_pos_info)); 5183 5168 } ··· 5270 5247 file_info->ChangeTime = cpu_to_le64(time); 5271 5248 file_info->DosAttributes = fp->f_ci->m_fattr; 5272 5249 file_info->Inode = cpu_to_le64(stat.ino); 5273 - file_info->EndOfFile = cpu_to_le64(stat.size); 5274 - file_info->AllocationSize = cpu_to_le64(stat.blocks << 9); 5250 + if (ksmbd_stream_fd(fp) == false) { 5251 + file_info->EndOfFile = cpu_to_le64(stat.size); 5252 + file_info->AllocationSize = cpu_to_le64(stat.blocks << 9); 5253 + } else { 5254 + file_info->EndOfFile = cpu_to_le64(fp->stream.size); 5255 + file_info->AllocationSize = cpu_to_le64(fp->stream.size); 5256 + } 5275 5257 file_info->HardLinks = cpu_to_le32(stat.nlink); 5276 5258 file_info->Mode = cpu_to_le32(stat.mode & 0777); 5277 5259 switch (stat.mode & S_IFMT) { ··· 6218 6190 if (!(fp->daccess & FILE_WRITE_DATA_LE)) 6219 6191 return -EACCES; 6220 6192 6193 + if (ksmbd_stream_fd(fp) == true) 6194 + return 0; 6195 + 6221 6196 rc = vfs_getattr(&fp->filp->f_path, &stat, STATX_BASIC_STATS, 6222 6197 AT_STATX_SYNC_AS_STAT); 6223 6198 if (rc) ··· 6279 6248 * truncate of some filesystem like FAT32 fill zero data in 6280 6249 * truncated range. 6281 6250 */ 6282 - if (inode->i_sb->s_magic != MSDOS_SUPER_MAGIC) { 6251 + if (inode->i_sb->s_magic != MSDOS_SUPER_MAGIC && 6252 + ksmbd_stream_fd(fp) == false) { 6283 6253 ksmbd_debug(SMB, "truncated to newsize %lld\n", newsize); 6284 6254 rc = ksmbd_vfs_truncate(work, fp, newsize); 6285 6255 if (rc) { ··· 6353 6321 return -EINVAL; 6354 6322 } 6355 6323 6356 - fp->filp->f_pos = current_byte_offset; 6324 + if (ksmbd_stream_fd(fp) == false) 6325 + fp->filp->f_pos = current_byte_offset; 6326 + else { 6327 + if (current_byte_offset > XATTR_SIZE_MAX) 6328 + current_byte_offset = XATTR_SIZE_MAX; 6329 + fp->stream.pos = current_byte_offset; 6330 + } 6357 6331 return 0; 6358 6332 }
+8 -2
fs/smb/server/transport_rdma.c
··· 159 159 }; 160 160 161 161 #define KSMBD_TRANS(t) ((struct ksmbd_transport *)&((t)->transport)) 162 - 162 + #define SMBD_TRANS(t) ((struct smb_direct_transport *)container_of(t, \ 163 + struct smb_direct_transport, transport)) 163 164 enum { 164 165 SMB_DIRECT_MSG_NEGOTIATE_REQ = 0, 165 166 SMB_DIRECT_MSG_DATA_TRANSFER ··· 411 410 return NULL; 412 411 } 413 412 413 + static void smb_direct_free_transport(struct ksmbd_transport *kt) 414 + { 415 + kfree(SMBD_TRANS(kt)); 416 + } 417 + 414 418 static void free_transport(struct smb_direct_transport *t) 415 419 { 416 420 struct smb_direct_recvmsg *recvmsg; ··· 461 455 462 456 smb_direct_destroy_pools(t); 463 457 ksmbd_conn_free(KSMBD_TRANS(t)->conn); 464 - kfree(t); 465 458 } 466 459 467 460 static struct smb_direct_sendmsg ··· 2286 2281 .read = smb_direct_read, 2287 2282 .rdma_read = smb_direct_rdma_read, 2288 2283 .rdma_write = smb_direct_rdma_write, 2284 + .free_transport = smb_direct_free_transport, 2289 2285 };
+2 -1
fs/smb/server/transport_tcp.c
··· 93 93 return t; 94 94 } 95 95 96 - void ksmbd_free_transport(struct ksmbd_transport *kt) 96 + static void ksmbd_tcp_free_transport(struct ksmbd_transport *kt) 97 97 { 98 98 struct tcp_transport *t = TCP_TRANS(kt); 99 99 ··· 656 656 .read = ksmbd_tcp_read, 657 657 .writev = ksmbd_tcp_writev, 658 658 .disconnect = ksmbd_tcp_disconnect, 659 + .free_transport = ksmbd_tcp_free_transport, 659 660 };
+3 -2
fs/smb/server/vfs.c
··· 293 293 294 294 if (v_len - *pos < count) 295 295 count = v_len - *pos; 296 + fp->stream.pos = v_len; 296 297 297 298 memcpy(buf, &stream_buf[*pos], count); 298 299 ··· 457 456 true); 458 457 if (err < 0) 459 458 goto out; 460 - 461 - fp->filp->f_pos = *pos; 459 + else 460 + fp->stream.pos = size; 462 461 err = 0; 463 462 out: 464 463 kvfree(stream_buf);
+1
fs/smb/server/vfs_cache.h
··· 44 44 struct stream { 45 45 char *name; 46 46 ssize_t size; 47 + loff_t pos; 47 48 }; 48 49 49 50 struct ksmbd_inode {
+3 -1
fs/super.c
··· 964 964 spin_unlock(&sb_lock); 965 965 966 966 locked = super_lock_shared(sb); 967 - if (locked) 967 + if (locked) { 968 968 f(sb, arg); 969 + super_unlock_shared(sb); 970 + } 969 971 970 972 spin_lock(&sb_lock); 971 973 if (p)
+1
fs/xattr.c
··· 1479 1479 buffer += err; 1480 1480 } 1481 1481 remaining_size -= err; 1482 + err = 0; 1482 1483 1483 1484 read_lock(&xattrs->lock); 1484 1485 for (rbp = rb_first(&xattrs->rb_root); rbp; rbp = rb_next(rbp)) {
+6
include/linux/atmdev.h
··· 249 249 ATM_SKB(skb)->atm_options = vcc->atm_options; 250 250 } 251 251 252 + static inline void atm_return_tx(struct atm_vcc *vcc, struct sk_buff *skb) 253 + { 254 + WARN_ON_ONCE(refcount_sub_and_test(ATM_SKB(skb)->acct_truesize, 255 + &sk_atm(vcc)->sk_wmem_alloc)); 256 + } 257 + 252 258 static inline void atm_force_charge(struct atm_vcc *vcc,int truesize) 253 259 { 254 260 atomic_add(truesize, &sk_atm(vcc)->sk_rmem_alloc);
+1 -1
include/linux/bio.h
··· 291 291 292 292 fi->folio = page_folio(bvec->bv_page); 293 293 fi->offset = bvec->bv_offset + 294 - PAGE_SIZE * (bvec->bv_page - &fi->folio->page); 294 + PAGE_SIZE * folio_page_idx(fi->folio, bvec->bv_page); 295 295 fi->_seg_count = bvec->bv_len; 296 296 fi->length = min(folio_size(fi->folio) - fi->offset, fi->_seg_count); 297 297 fi->_next = folio_next(fi->folio);
+5 -2
include/linux/bvec.h
··· 57 57 * @offset: offset into the folio 58 58 */ 59 59 static inline void bvec_set_folio(struct bio_vec *bv, struct folio *folio, 60 - unsigned int len, unsigned int offset) 60 + size_t len, size_t offset) 61 61 { 62 - bvec_set_page(bv, &folio->page, len, offset); 62 + unsigned long nr = offset / PAGE_SIZE; 63 + 64 + WARN_ON_ONCE(len > UINT_MAX); 65 + bvec_set_page(bv, folio_page(folio, nr), len, offset % PAGE_SIZE); 63 66 } 64 67 65 68 /**
+3
include/linux/cpu.h
··· 120 120 extern void cpu_maps_update_done(void); 121 121 int bringup_hibernate_cpu(unsigned int sleep_cpu); 122 122 void bringup_nonboot_cpus(unsigned int max_cpus); 123 + int arch_cpu_rescan_dead_smt_siblings(void); 123 124 124 125 #else /* CONFIG_SMP */ 125 126 #define cpuhp_tasks_frozen 0 ··· 134 133 } 135 134 136 135 static inline int add_cpu(unsigned int cpu) { return 0;} 136 + 137 + static inline int arch_cpu_rescan_dead_smt_siblings(void) { return 0; } 137 138 138 139 #endif /* CONFIG_SMP */ 139 140 extern const struct bus_type cpu_subsys;
+1 -7
include/linux/execmem.h
··· 54 54 EXECMEM_ROX_CACHE = (1 << 1), 55 55 }; 56 56 57 - #if defined(CONFIG_ARCH_HAS_EXECMEM_ROX) && defined(CONFIG_EXECMEM) 57 + #ifdef CONFIG_ARCH_HAS_EXECMEM_ROX 58 58 /** 59 59 * execmem_fill_trapping_insns - set memory to contain instructions that 60 60 * will trap ··· 94 94 * Return: 0 on success or negative error code on failure. 95 95 */ 96 96 int execmem_restore_rox(void *ptr, size_t size); 97 - 98 - /* 99 - * Called from mark_readonly(), where the system transitions to ROX. 100 - */ 101 - void execmem_cache_make_ro(void); 102 97 #else 103 98 static inline int execmem_make_temp_rw(void *ptr, size_t size) { return 0; } 104 99 static inline int execmem_restore_rox(void *ptr, size_t size) { return 0; } 105 - static inline void execmem_cache_make_ro(void) { } 106 100 #endif 107 101 108 102 /**
+7 -3
include/linux/fs.h
··· 399 399 { IOCB_WAITQ, "WAITQ" }, \ 400 400 { IOCB_NOIO, "NOIO" }, \ 401 401 { IOCB_ALLOC_CACHE, "ALLOC_CACHE" }, \ 402 - { IOCB_DIO_CALLER_COMP, "CALLER_COMP" } 402 + { IOCB_DIO_CALLER_COMP, "CALLER_COMP" }, \ 403 + { IOCB_AIO_RW, "AIO_RW" }, \ 404 + { IOCB_HAS_METADATA, "AIO_HAS_METADATA" } 403 405 404 406 struct kiocb { 405 407 struct file *ki_filp; ··· 2276 2274 return true; 2277 2275 } 2278 2276 2277 + int compat_vma_mmap_prepare(struct file *file, struct vm_area_struct *vma); 2278 + 2279 2279 static inline int call_mmap(struct file *file, struct vm_area_struct *vma) 2280 2280 { 2281 - if (WARN_ON_ONCE(file->f_op->mmap_prepare)) 2282 - return -EINVAL; 2281 + if (file->f_op->mmap_prepare) 2282 + return compat_vma_mmap_prepare(file, vma); 2283 2283 2284 2284 return file->f_op->mmap(file, vma); 2285 2285 }
+9 -9
include/linux/ieee80211.h
··· 1278 1278 u8 sa[ETH_ALEN]; 1279 1279 __le32 timestamp; 1280 1280 u8 change_seq; 1281 - u8 variable[0]; 1281 + u8 variable[]; 1282 1282 } __packed s1g_beacon; 1283 1283 } u; 1284 1284 } __packed __aligned(2); ··· 1536 1536 u8 action_code; 1537 1537 u8 dialog_token; 1538 1538 __le16 capability; 1539 - u8 variable[0]; 1539 + u8 variable[]; 1540 1540 } __packed tdls_discover_resp; 1541 1541 struct { 1542 1542 u8 action_code; ··· 1721 1721 struct { 1722 1722 u8 dialog_token; 1723 1723 __le16 capability; 1724 - u8 variable[0]; 1724 + u8 variable[]; 1725 1725 } __packed setup_req; 1726 1726 struct { 1727 1727 __le16 status_code; 1728 1728 u8 dialog_token; 1729 1729 __le16 capability; 1730 - u8 variable[0]; 1730 + u8 variable[]; 1731 1731 } __packed setup_resp; 1732 1732 struct { 1733 1733 __le16 status_code; 1734 1734 u8 dialog_token; 1735 - u8 variable[0]; 1735 + u8 variable[]; 1736 1736 } __packed setup_cfm; 1737 1737 struct { 1738 1738 __le16 reason_code; 1739 - u8 variable[0]; 1739 + u8 variable[]; 1740 1740 } __packed teardown; 1741 1741 struct { 1742 1742 u8 dialog_token; 1743 - u8 variable[0]; 1743 + u8 variable[]; 1744 1744 } __packed discover_req; 1745 1745 struct { 1746 1746 u8 target_channel; 1747 1747 u8 oper_class; 1748 - u8 variable[0]; 1748 + u8 variable[]; 1749 1749 } __packed chan_switch_req; 1750 1750 struct { 1751 1751 __le16 status_code; 1752 - u8 variable[0]; 1752 + u8 variable[]; 1753 1753 } __packed chan_switch_resp; 1754 1754 } u; 1755 1755 } __packed;
+3 -4
include/linux/libata.h
··· 1352 1352 int ata_acpi_gtm(struct ata_port *ap, struct ata_acpi_gtm *stm); 1353 1353 unsigned int ata_acpi_gtm_xfermask(struct ata_device *dev, 1354 1354 const struct ata_acpi_gtm *gtm); 1355 - int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm); 1355 + int ata_acpi_cbl_pata_type(struct ata_port *ap); 1356 1356 #else 1357 1357 static inline const struct ata_acpi_gtm *ata_acpi_init_gtm(struct ata_port *ap) 1358 1358 { ··· 1377 1377 return 0; 1378 1378 } 1379 1379 1380 - static inline int ata_acpi_cbl_80wire(struct ata_port *ap, 1381 - const struct ata_acpi_gtm *gtm) 1380 + static inline int ata_acpi_cbl_pata_type(struct ata_port *ap) 1382 1381 { 1383 - return 0; 1382 + return ATA_CBL_PATA40; 1384 1383 } 1385 1384 #endif 1386 1385
-5
include/linux/module.h
··· 586 586 atomic_t refcnt; 587 587 #endif 588 588 589 - #ifdef CONFIG_MITIGATION_ITS 590 - int its_num_pages; 591 - void **its_page_array; 592 - #endif 593 - 594 589 #ifdef CONFIG_CONSTRUCTORS 595 590 /* Constructor functions. */ 596 591 ctor_fn_t *ctors;
+2 -2
include/linux/scatterlist.h
··· 99 99 * @sg: The current sg entry 100 100 * 101 101 * Description: 102 - * Usually the next entry will be @sg@ + 1, but if this sg element is part 102 + * Usually the next entry will be @sg + 1, but if this sg element is part 103 103 * of a chained scatterlist, it could jump to the start of a new 104 104 * scatterlist array. 105 105 * ··· 254 254 * @sgl: Second scatterlist 255 255 * 256 256 * Description: 257 - * Links @prv@ and @sgl@ together, to form a longer scatterlist. 257 + * Links @prv and @sgl together, to form a longer scatterlist. 258 258 * 259 259 **/ 260 260 static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
+2 -2
include/uapi/linux/bits.h
··· 4 4 #ifndef _UAPI_LINUX_BITS_H 5 5 #define _UAPI_LINUX_BITS_H 6 6 7 - #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (BITS_PER_LONG - 1 - (h)))) 7 + #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (__BITS_PER_LONG - 1 - (h)))) 8 8 9 - #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h)))) 9 + #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (__BITS_PER_LONG_LONG - 1 - (h)))) 10 10 11 11 #define __GENMASK_U128(h, l) \ 12 12 ((_BIT128((h)) << 1) - (_BIT128(l)))
-4
include/uapi/linux/ethtool_netlink.h
··· 208 208 ETHTOOL_A_STATS_PHY_MAX = (__ETHTOOL_A_STATS_PHY_CNT - 1) 209 209 }; 210 210 211 - /* generic netlink info */ 212 - #define ETHTOOL_GENL_NAME "ethtool" 213 - #define ETHTOOL_GENL_VERSION 1 214 - 215 211 #define ETHTOOL_MCGRP_MONITOR_NAME "monitor" 216 212 217 213 #endif /* _UAPI_LINUX_ETHTOOL_NETLINK_H_ */
+1
init/initramfs.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include <linux/init.h> 3 3 #include <linux/async.h> 4 + #include <linux/export.h> 4 5 #include <linux/fs.h> 5 6 #include <linux/slab.h> 6 7 #include <linux/types.h>
+1
init/main.c
··· 13 13 #define DEBUG /* Enable initcall_debug */ 14 14 15 15 #include <linux/types.h> 16 + #include <linux/export.h> 16 17 #include <linux/extable.h> 17 18 #include <linux/module.h> 18 19 #include <linux/proc_fs.h>
+10 -2
io_uring/fdinfo.c
··· 141 141 142 142 if (ctx->flags & IORING_SETUP_SQPOLL) { 143 143 struct io_sq_data *sq = ctx->sq_data; 144 + struct task_struct *tsk; 144 145 146 + rcu_read_lock(); 147 + tsk = rcu_dereference(sq->thread); 145 148 /* 146 149 * sq->thread might be NULL if we raced with the sqpoll 147 150 * thread termination. 148 151 */ 149 - if (sq->thread) { 152 + if (tsk) { 153 + get_task_struct(tsk); 154 + rcu_read_unlock(); 155 + getrusage(tsk, RUSAGE_SELF, &sq_usage); 156 + put_task_struct(tsk); 150 157 sq_pid = sq->task_pid; 151 158 sq_cpu = sq->sq_cpu; 152 - getrusage(sq->thread, RUSAGE_SELF, &sq_usage); 153 159 sq_total_time = (sq_usage.ru_stime.tv_sec * 1000000 154 160 + sq_usage.ru_stime.tv_usec); 155 161 sq_work_time = sq->work_time; 162 + } else { 163 + rcu_read_unlock(); 156 164 } 157 165 } 158 166
+5 -2
io_uring/io_uring.c
··· 1523 1523 } 1524 1524 } 1525 1525 mutex_unlock(&ctx->uring_lock); 1526 + 1527 + if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) 1528 + io_move_task_work_from_local(ctx); 1526 1529 } 1527 1530 1528 1531 static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned int min_events) ··· 2909 2906 struct task_struct *tsk; 2910 2907 2911 2908 io_sq_thread_park(sqd); 2912 - tsk = sqd->thread; 2909 + tsk = sqpoll_task_locked(sqd); 2913 2910 if (tsk && tsk->io_uring && tsk->io_uring->io_wq) 2914 2911 io_wq_cancel_cb(tsk->io_uring->io_wq, 2915 2912 io_cancel_ctx_cb, ctx, true); ··· 3145 3142 s64 inflight; 3146 3143 DEFINE_WAIT(wait); 3147 3144 3148 - WARN_ON_ONCE(sqd && sqd->thread != current); 3145 + WARN_ON_ONCE(sqd && sqpoll_task_locked(sqd) != current); 3149 3146 3150 3147 if (!current->io_uring) 3151 3148 return;
+4 -1
io_uring/kbuf.c
··· 270 270 /* truncate end piece, if needed, for non partial buffers */ 271 271 if (len > arg->max_len) { 272 272 len = arg->max_len; 273 - if (!(bl->flags & IOBL_INC)) 273 + if (!(bl->flags & IOBL_INC)) { 274 + if (iov != arg->iovs) 275 + break; 274 276 buf->len = len; 277 + } 275 278 } 276 279 277 280 iov->iov_base = u64_to_user_ptr(buf->addr);
+5 -2
io_uring/register.c
··· 273 273 if (ctx->flags & IORING_SETUP_SQPOLL) { 274 274 sqd = ctx->sq_data; 275 275 if (sqd) { 276 + struct task_struct *tsk; 277 + 276 278 /* 277 279 * Observe the correct sqd->lock -> ctx->uring_lock 278 280 * ordering. Fine to drop uring_lock here, we hold ··· 284 282 mutex_unlock(&ctx->uring_lock); 285 283 mutex_lock(&sqd->lock); 286 284 mutex_lock(&ctx->uring_lock); 287 - if (sqd->thread) 288 - tctx = sqd->thread->io_uring; 285 + tsk = sqpoll_task_locked(sqd); 286 + if (tsk) 287 + tctx = tsk->io_uring; 289 288 } 290 289 } else { 291 290 tctx = current->io_uring;
+28 -15
io_uring/sqpoll.c
··· 30 30 void io_sq_thread_unpark(struct io_sq_data *sqd) 31 31 __releases(&sqd->lock) 32 32 { 33 - WARN_ON_ONCE(sqd->thread == current); 33 + WARN_ON_ONCE(sqpoll_task_locked(sqd) == current); 34 34 35 35 /* 36 36 * Do the dance but not conditional clear_bit() because it'd race with ··· 46 46 void io_sq_thread_park(struct io_sq_data *sqd) 47 47 __acquires(&sqd->lock) 48 48 { 49 - WARN_ON_ONCE(data_race(sqd->thread) == current); 49 + struct task_struct *tsk; 50 50 51 51 atomic_inc(&sqd->park_pending); 52 52 set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state); 53 53 mutex_lock(&sqd->lock); 54 - if (sqd->thread) 55 - wake_up_process(sqd->thread); 54 + 55 + tsk = sqpoll_task_locked(sqd); 56 + if (tsk) { 57 + WARN_ON_ONCE(tsk == current); 58 + wake_up_process(tsk); 59 + } 56 60 } 57 61 58 62 void io_sq_thread_stop(struct io_sq_data *sqd) 59 63 { 60 - WARN_ON_ONCE(sqd->thread == current); 64 + struct task_struct *tsk; 65 + 61 66 WARN_ON_ONCE(test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state)); 62 67 63 68 set_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state); 64 69 mutex_lock(&sqd->lock); 65 - if (sqd->thread) 66 - wake_up_process(sqd->thread); 70 + tsk = sqpoll_task_locked(sqd); 71 + if (tsk) { 72 + WARN_ON_ONCE(tsk == current); 73 + wake_up_process(tsk); 74 + } 67 75 mutex_unlock(&sqd->lock); 68 76 wait_for_completion(&sqd->exited); 69 77 } ··· 278 270 /* offload context creation failed, just exit */ 279 271 if (!current->io_uring) { 280 272 mutex_lock(&sqd->lock); 281 - sqd->thread = NULL; 273 + rcu_assign_pointer(sqd->thread, NULL); 274 + put_task_struct(current); 282 275 mutex_unlock(&sqd->lock); 283 276 goto err_out; 284 277 } ··· 388 379 io_sq_tw(&retry_list, UINT_MAX); 389 380 390 381 io_uring_cancel_generic(true, sqd); 391 - sqd->thread = NULL; 382 + rcu_assign_pointer(sqd->thread, NULL); 383 + put_task_struct(current); 392 384 list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) 393 385 atomic_or(IORING_SQ_NEED_WAKEUP, &ctx->rings->sq_flags); 394 386 io_run_task_work(); ··· 
494 484 goto err_sqpoll; 495 485 } 496 486 497 - sqd->thread = tsk; 487 + mutex_lock(&sqd->lock); 488 + rcu_assign_pointer(sqd->thread, tsk); 489 + mutex_unlock(&sqd->lock); 490 + 498 491 task_to_put = get_task_struct(tsk); 499 492 ret = io_uring_alloc_task_context(tsk, ctx); 500 493 wake_up_new_task(tsk); ··· 508 495 ret = -EINVAL; 509 496 goto err; 510 497 } 511 - 512 - if (task_to_put) 513 - put_task_struct(task_to_put); 514 498 return 0; 515 499 err_sqpoll: 516 500 complete(&ctx->sq_data->exited); ··· 525 515 int ret = -EINVAL; 526 516 527 517 if (sqd) { 518 + struct task_struct *tsk; 519 + 528 520 io_sq_thread_park(sqd); 529 521 /* Don't set affinity for a dying thread */ 530 - if (sqd->thread) 531 - ret = io_wq_cpu_affinity(sqd->thread->io_uring, mask); 522 + tsk = sqpoll_task_locked(sqd); 523 + if (tsk) 524 + ret = io_wq_cpu_affinity(tsk->io_uring, mask); 532 525 io_sq_thread_unpark(sqd); 533 526 } 534 527
+7 -1
io_uring/sqpoll.h
··· 8 8 /* ctx's that are using this sqd */ 9 9 struct list_head ctx_list; 10 10 11 - struct task_struct *thread; 11 + struct task_struct __rcu *thread; 12 12 struct wait_queue_head wait; 13 13 14 14 unsigned sq_thread_idle; ··· 29 29 void io_put_sq_data(struct io_sq_data *sqd); 30 30 void io_sqpoll_wait_sq(struct io_ring_ctx *ctx); 31 31 int io_sqpoll_wq_cpu_affinity(struct io_ring_ctx *ctx, cpumask_var_t mask); 32 + 33 + static inline struct task_struct *sqpoll_task_locked(struct io_sq_data *sqd) 34 + { 35 + return rcu_dereference_protected(sqd->thread, 36 + lockdep_is_held(&sqd->lock)); 37 + }
+1 -2
kernel/cgroup/legacy_freezer.c
··· 188 188 if (!(freezer->state & CGROUP_FREEZING)) { 189 189 __thaw_task(task); 190 190 } else { 191 - freeze_task(task); 192 - 193 191 /* clear FROZEN and propagate upwards */ 194 192 while (freezer && (freezer->state & CGROUP_FROZEN)) { 195 193 freezer->state &= ~CGROUP_FROZEN; 196 194 freezer = parent_freezer(freezer); 197 195 } 196 + freeze_task(task); 198 197 } 199 198 } 200 199
+2 -2
kernel/sched/core.c
··· 8545 8545 init_cfs_bandwidth(&root_task_group.cfs_bandwidth, NULL); 8546 8546 #endif /* CONFIG_FAIR_GROUP_SCHED */ 8547 8547 #ifdef CONFIG_EXT_GROUP_SCHED 8548 - root_task_group.scx_weight = CGROUP_WEIGHT_DFL; 8548 + scx_tg_init(&root_task_group); 8549 8549 #endif /* CONFIG_EXT_GROUP_SCHED */ 8550 8550 #ifdef CONFIG_RT_GROUP_SCHED 8551 8551 root_task_group.rt_se = (struct sched_rt_entity **)ptr; ··· 8985 8985 if (!alloc_rt_sched_group(tg, parent)) 8986 8986 goto err; 8987 8987 8988 - scx_group_set_weight(tg, CGROUP_WEIGHT_DFL); 8988 + scx_tg_init(tg); 8989 8989 alloc_uclamp_sched_group(tg, parent); 8990 8990 8991 8991 return tg;
+11 -6
kernel/sched/ext.c
··· 4092 4092 DEFINE_STATIC_PERCPU_RWSEM(scx_cgroup_rwsem); 4093 4093 static bool scx_cgroup_enabled; 4094 4094 4095 + void scx_tg_init(struct task_group *tg) 4096 + { 4097 + tg->scx_weight = CGROUP_WEIGHT_DFL; 4098 + } 4099 + 4095 4100 int scx_tg_online(struct task_group *tg) 4096 4101 { 4097 4102 struct scx_sched *sch = scx_root; ··· 4246 4241 4247 4242 percpu_down_read(&scx_cgroup_rwsem); 4248 4243 4249 - if (scx_cgroup_enabled && tg->scx_weight != weight) { 4250 - if (SCX_HAS_OP(sch, cgroup_set_weight)) 4251 - SCX_CALL_OP(sch, SCX_KF_UNLOCKED, cgroup_set_weight, NULL, 4252 - tg_cgrp(tg), weight); 4253 - tg->scx_weight = weight; 4254 - } 4244 + if (scx_cgroup_enabled && SCX_HAS_OP(sch, cgroup_set_weight) && 4245 + tg->scx_weight != weight) 4246 + SCX_CALL_OP(sch, SCX_KF_UNLOCKED, cgroup_set_weight, NULL, 4247 + tg_cgrp(tg), weight); 4248 + 4249 + tg->scx_weight = weight; 4255 4250 4256 4251 percpu_up_read(&scx_cgroup_rwsem); 4257 4252 }
+2
kernel/sched/ext.h
··· 79 79 80 80 #ifdef CONFIG_CGROUP_SCHED 81 81 #ifdef CONFIG_EXT_GROUP_SCHED 82 + void scx_tg_init(struct task_group *tg); 82 83 int scx_tg_online(struct task_group *tg); 83 84 void scx_tg_offline(struct task_group *tg); 84 85 int scx_cgroup_can_attach(struct cgroup_taskset *tset); ··· 89 88 void scx_group_set_weight(struct task_group *tg, unsigned long cgrp_weight); 90 89 void scx_group_set_idle(struct task_group *tg, bool idle); 91 90 #else /* CONFIG_EXT_GROUP_SCHED */ 91 + static inline void scx_tg_init(struct task_group *tg) {} 92 92 static inline int scx_tg_online(struct task_group *tg) { return 0; } 93 93 static inline void scx_tg_offline(struct task_group *tg) {} 94 94 static inline int scx_cgroup_can_attach(struct cgroup_taskset *tset) { return 0; }
+9
kernel/time/posix-cpu-timers.c
··· 1406 1406 lockdep_assert_irqs_disabled(); 1407 1407 1408 1408 /* 1409 + * Ensure that release_task(tsk) can't happen while 1410 + * handle_posix_cpu_timers() is running. Otherwise, a concurrent 1411 + * posix_cpu_timer_del() may fail to lock_task_sighand(tsk) and 1412 + * miss timer->it.cpu.firing != 0. 1413 + */ 1414 + if (tsk->exit_state) 1415 + return; 1416 + 1417 + /* 1409 1418 * If the actual expiry is deferred to task work context and the 1410 1419 * work is already scheduled there is no point to do anything here. 1411 1420 */
+1 -3
kernel/trace/trace_events_filter.c
··· 1437 1437 INIT_LIST_HEAD(&head->list); 1438 1438 1439 1439 item = kmalloc(sizeof(*item), GFP_KERNEL); 1440 - if (!item) { 1441 - kfree(head); 1440 + if (!item) 1442 1441 goto free_now; 1443 - } 1444 1442 1445 1443 item->filter = filter; 1446 1444 list_add_tail(&item->list, &head->list);
+6
kernel/trace/trace_functions_graph.c
··· 455 455 return 0; 456 456 } 457 457 458 + static struct tracer graph_trace; 459 + 458 460 static int ftrace_graph_trace_args(struct trace_array *tr, int set) 459 461 { 460 462 trace_func_graph_ent_t entry; 463 + 464 + /* Do nothing if the current tracer is not this tracer */ 465 + if (tr->current_trace != &graph_trace) 466 + return 0; 461 467 462 468 if (set) 463 469 entry = trace_graph_entry_args;
+2 -1
kernel/workqueue.c
··· 7767 7767 restrict_unbound_cpumask("workqueue.unbound_cpus", &wq_cmdline_cpumask); 7768 7768 7769 7769 cpumask_copy(wq_requested_unbound_cpumask, wq_unbound_cpumask); 7770 - 7770 + cpumask_andnot(wq_isolated_cpumask, cpu_possible_mask, 7771 + housekeeping_cpumask(HK_TYPE_DOMAIN)); 7771 7772 pwq_cache = KMEM_CACHE(pool_workqueue, SLAB_PANIC); 7772 7773 7773 7774 unbound_wq_update_pwq_attrs_buf = alloc_workqueue_attrs();
+1
lib/Kconfig
··· 716 716 717 717 config PLDMFW 718 718 bool 719 + select CRC32 719 720 default n 720 721 721 722 config ASN1_ENCODER
+4
lib/crypto/Makefile
··· 35 35 libcurve25519-generic-y := curve25519-fiat32.o 36 36 libcurve25519-generic-$(CONFIG_ARCH_SUPPORTS_INT128) := curve25519-hacl64.o 37 37 libcurve25519-generic-y += curve25519-generic.o 38 + # clang versions prior to 18 may blow out the stack with KASAN 39 + ifeq ($(call clang-min-version, 180000),) 40 + KASAN_SANITIZE_curve25519-hacl64.o := n 41 + endif 38 42 39 43 obj-$(CONFIG_CRYPTO_LIB_CURVE25519) += libcurve25519.o 40 44 libcurve25519-y += curve25519.o
+4 -4
lib/crypto/aescfb.c
··· 106 106 */ 107 107 108 108 static struct { 109 - u8 ptext[64]; 110 - u8 ctext[64]; 109 + u8 ptext[64] __nonstring; 110 + u8 ctext[64] __nonstring; 111 111 112 - u8 key[AES_MAX_KEY_SIZE]; 113 - u8 iv[AES_BLOCK_SIZE]; 112 + u8 key[AES_MAX_KEY_SIZE] __nonstring; 113 + u8 iv[AES_BLOCK_SIZE] __nonstring; 114 114 115 115 int klen; 116 116 int len;
+23 -23
lib/crypto/aesgcm.c
··· 205 205 * Test code below. Vectors taken from crypto/testmgr.h 206 206 */ 207 207 208 - static const u8 __initconst ctext0[16] = 208 + static const u8 __initconst ctext0[16] __nonstring = 209 209 "\x58\xe2\xfc\xce\xfa\x7e\x30\x61" 210 210 "\x36\x7f\x1d\x57\xa4\xe7\x45\x5a"; 211 211 212 212 static const u8 __initconst ptext1[16]; 213 213 214 - static const u8 __initconst ctext1[32] = 214 + static const u8 __initconst ctext1[32] __nonstring = 215 215 "\x03\x88\xda\xce\x60\xb6\xa3\x92" 216 216 "\xf3\x28\xc2\xb9\x71\xb2\xfe\x78" 217 217 "\xab\x6e\x47\xd4\x2c\xec\x13\xbd" 218 218 "\xf5\x3a\x67\xb2\x12\x57\xbd\xdf"; 219 219 220 - static const u8 __initconst ptext2[64] = 220 + static const u8 __initconst ptext2[64] __nonstring = 221 221 "\xd9\x31\x32\x25\xf8\x84\x06\xe5" 222 222 "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" 223 223 "\x86\xa7\xa9\x53\x15\x34\xf7\xda" ··· 227 227 "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" 228 228 "\xba\x63\x7b\x39\x1a\xaf\xd2\x55"; 229 229 230 - static const u8 __initconst ctext2[80] = 230 + static const u8 __initconst ctext2[80] __nonstring = 231 231 "\x42\x83\x1e\xc2\x21\x77\x74\x24" 232 232 "\x4b\x72\x21\xb7\x84\xd0\xd4\x9c" 233 233 "\xe3\xaa\x21\x2f\x2c\x02\xa4\xe0" ··· 239 239 "\x4d\x5c\x2a\xf3\x27\xcd\x64\xa6" 240 240 "\x2c\xf3\x5a\xbd\x2b\xa6\xfa\xb4"; 241 241 242 - static const u8 __initconst ptext3[60] = 242 + static const u8 __initconst ptext3[60] __nonstring = 243 243 "\xd9\x31\x32\x25\xf8\x84\x06\xe5" 244 244 "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" 245 245 "\x86\xa7\xa9\x53\x15\x34\xf7\xda" ··· 249 249 "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" 250 250 "\xba\x63\x7b\x39"; 251 251 252 - static const u8 __initconst ctext3[76] = 252 + static const u8 __initconst ctext3[76] __nonstring = 253 253 "\x42\x83\x1e\xc2\x21\x77\x74\x24" 254 254 "\x4b\x72\x21\xb7\x84\xd0\xd4\x9c" 255 255 "\xe3\xaa\x21\x2f\x2c\x02\xa4\xe0" ··· 261 261 "\x5b\xc9\x4f\xbc\x32\x21\xa5\xdb" 262 262 "\x94\xfa\xe9\x5a\xe7\x12\x1a\x47"; 263 263 264 - static const u8 __initconst ctext4[16] = 
264 + static const u8 __initconst ctext4[16] __nonstring = 265 265 "\xcd\x33\xb2\x8a\xc7\x73\xf7\x4b" 266 266 "\xa0\x0e\xd1\xf3\x12\x57\x24\x35"; 267 267 268 - static const u8 __initconst ctext5[32] = 268 + static const u8 __initconst ctext5[32] __nonstring = 269 269 "\x98\xe7\x24\x7c\x07\xf0\xfe\x41" 270 270 "\x1c\x26\x7e\x43\x84\xb0\xf6\x00" 271 271 "\x2f\xf5\x8d\x80\x03\x39\x27\xab" 272 272 "\x8e\xf4\xd4\x58\x75\x14\xf0\xfb"; 273 273 274 - static const u8 __initconst ptext6[64] = 274 + static const u8 __initconst ptext6[64] __nonstring = 275 275 "\xd9\x31\x32\x25\xf8\x84\x06\xe5" 276 276 "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" 277 277 "\x86\xa7\xa9\x53\x15\x34\xf7\xda" ··· 281 281 "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" 282 282 "\xba\x63\x7b\x39\x1a\xaf\xd2\x55"; 283 283 284 - static const u8 __initconst ctext6[80] = 284 + static const u8 __initconst ctext6[80] __nonstring = 285 285 "\x39\x80\xca\x0b\x3c\x00\xe8\x41" 286 286 "\xeb\x06\xfa\xc4\x87\x2a\x27\x57" 287 287 "\x85\x9e\x1c\xea\xa6\xef\xd9\x84" ··· 293 293 "\x99\x24\xa7\xc8\x58\x73\x36\xbf" 294 294 "\xb1\x18\x02\x4d\xb8\x67\x4a\x14"; 295 295 296 - static const u8 __initconst ctext7[16] = 296 + static const u8 __initconst ctext7[16] __nonstring = 297 297 "\x53\x0f\x8a\xfb\xc7\x45\x36\xb9" 298 298 "\xa9\x63\xb4\xf1\xc4\xcb\x73\x8b"; 299 299 300 - static const u8 __initconst ctext8[32] = 300 + static const u8 __initconst ctext8[32] __nonstring = 301 301 "\xce\xa7\x40\x3d\x4d\x60\x6b\x6e" 302 302 "\x07\x4e\xc5\xd3\xba\xf3\x9d\x18" 303 303 "\xd0\xd1\xc8\xa7\x99\x99\x6b\xf0" 304 304 "\x26\x5b\x98\xb5\xd4\x8a\xb9\x19"; 305 305 306 - static const u8 __initconst ptext9[64] = 306 + static const u8 __initconst ptext9[64] __nonstring = 307 307 "\xd9\x31\x32\x25\xf8\x84\x06\xe5" 308 308 "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" 309 309 "\x86\xa7\xa9\x53\x15\x34\xf7\xda" ··· 313 313 "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" 314 314 "\xba\x63\x7b\x39\x1a\xaf\xd2\x55"; 315 315 316 - static const u8 __initconst ctext9[80] = 316 + static const u8 
__initconst ctext9[80] __nonstring = 317 317 "\x52\x2d\xc1\xf0\x99\x56\x7d\x07" 318 318 "\xf4\x7f\x37\xa3\x2a\x84\x42\x7d" 319 319 "\x64\x3a\x8c\xdc\xbf\xe5\xc0\xc9" ··· 325 325 "\xb0\x94\xda\xc5\xd9\x34\x71\xbd" 326 326 "\xec\x1a\x50\x22\x70\xe3\xcc\x6c"; 327 327 328 - static const u8 __initconst ptext10[60] = 328 + static const u8 __initconst ptext10[60] __nonstring = 329 329 "\xd9\x31\x32\x25\xf8\x84\x06\xe5" 330 330 "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" 331 331 "\x86\xa7\xa9\x53\x15\x34\xf7\xda" ··· 335 335 "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" 336 336 "\xba\x63\x7b\x39"; 337 337 338 - static const u8 __initconst ctext10[76] = 338 + static const u8 __initconst ctext10[76] __nonstring = 339 339 "\x52\x2d\xc1\xf0\x99\x56\x7d\x07" 340 340 "\xf4\x7f\x37\xa3\x2a\x84\x42\x7d" 341 341 "\x64\x3a\x8c\xdc\xbf\xe5\xc0\xc9" ··· 347 347 "\x76\xfc\x6e\xce\x0f\x4e\x17\x68" 348 348 "\xcd\xdf\x88\x53\xbb\x2d\x55\x1b"; 349 349 350 - static const u8 __initconst ptext11[60] = 350 + static const u8 __initconst ptext11[60] __nonstring = 351 351 "\xd9\x31\x32\x25\xf8\x84\x06\xe5" 352 352 "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" 353 353 "\x86\xa7\xa9\x53\x15\x34\xf7\xda" ··· 357 357 "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" 358 358 "\xba\x63\x7b\x39"; 359 359 360 - static const u8 __initconst ctext11[76] = 360 + static const u8 __initconst ctext11[76] __nonstring = 361 361 "\x39\x80\xca\x0b\x3c\x00\xe8\x41" 362 362 "\xeb\x06\xfa\xc4\x87\x2a\x27\x57" 363 363 "\x85\x9e\x1c\xea\xa6\xef\xd9\x84" ··· 369 369 "\x25\x19\x49\x8e\x80\xf1\x47\x8f" 370 370 "\x37\xba\x55\xbd\x6d\x27\x61\x8c"; 371 371 372 - static const u8 __initconst ptext12[719] = 372 + static const u8 __initconst ptext12[719] __nonstring = 373 373 "\x42\xc1\xcc\x08\x48\x6f\x41\x3f" 374 374 "\x2f\x11\x66\x8b\x2a\x16\xf0\xe0" 375 375 "\x58\x83\xf0\xc3\x70\x14\xc0\x5b" ··· 461 461 "\x59\xfa\xfa\xaa\x44\x04\x01\xa7" 462 462 "\xa4\x78\xdb\x74\x3d\x8b\xb5"; 463 463 464 - static const u8 __initconst ctext12[735] = 464 + static const u8 __initconst 
ctext12[735] __nonstring = 465 465 "\x84\x0b\xdb\xd5\xb7\xa8\xfe\x20" 466 466 "\xbb\xb1\x12\x7f\x41\xea\xb3\xc0" 467 467 "\xa2\xb4\x37\x19\x11\x58\xb6\x0b" ··· 559 559 const u8 *ptext; 560 560 const u8 *ctext; 561 561 562 - u8 key[AES_MAX_KEY_SIZE]; 563 - u8 iv[GCM_AES_IV_SIZE]; 564 - u8 assoc[20]; 562 + u8 key[AES_MAX_KEY_SIZE] __nonstring; 563 + u8 iv[GCM_AES_IV_SIZE] __nonstring; 564 + u8 assoc[20] __nonstring; 565 565 566 566 int klen; 567 567 int clen;
+4 -4
lib/scatterlist.c
··· 73 73 * Should only be used casually, it (currently) scans the entire list 74 74 * to get the last entry. 75 75 * 76 - * Note that the @sgl@ pointer passed in need not be the first one, 77 - * the important bit is that @nents@ denotes the number of entries that 78 - * exist from @sgl@. 76 + * Note that the @sgl pointer passed in need not be the first one, 77 + * the important bit is that @nents denotes the number of entries that 78 + * exist from @sgl. 79 79 * 80 80 **/ 81 81 struct scatterlist *sg_last(struct scatterlist *sgl, unsigned int nents) ··· 345 345 * @gfp_mask: GFP allocation mask 346 346 * 347 347 * Description: 348 - * Allocate and initialize an sg table. If @nents@ is larger than 348 + * Allocate and initialize an sg table. If @nents is larger than 349 349 * SG_MAX_SINGLE_ALLOC a chained sg table will be setup. 350 350 * 351 351 **/
+3 -37
mm/execmem.c
··· 254 254 return ptr; 255 255 } 256 256 257 - static bool execmem_cache_rox = false; 258 - 259 - void execmem_cache_make_ro(void) 260 - { 261 - struct maple_tree *free_areas = &execmem_cache.free_areas; 262 - struct maple_tree *busy_areas = &execmem_cache.busy_areas; 263 - MA_STATE(mas_free, free_areas, 0, ULONG_MAX); 264 - MA_STATE(mas_busy, busy_areas, 0, ULONG_MAX); 265 - struct mutex *mutex = &execmem_cache.mutex; 266 - void *area; 267 - 268 - execmem_cache_rox = true; 269 - 270 - mutex_lock(mutex); 271 - 272 - mas_for_each(&mas_free, area, ULONG_MAX) { 273 - unsigned long pages = mas_range_len(&mas_free) >> PAGE_SHIFT; 274 - set_memory_ro(mas_free.index, pages); 275 - } 276 - 277 - mas_for_each(&mas_busy, area, ULONG_MAX) { 278 - unsigned long pages = mas_range_len(&mas_busy) >> PAGE_SHIFT; 279 - set_memory_ro(mas_busy.index, pages); 280 - } 281 - 282 - mutex_unlock(mutex); 283 - } 284 - 285 257 static int execmem_cache_populate(struct execmem_range *range, size_t size) 286 258 { 287 259 unsigned long vm_flags = VM_ALLOW_HUGE_VMAP; ··· 274 302 /* fill memory with instructions that will trap */ 275 303 execmem_fill_trapping_insns(p, alloc_size, /* writable = */ true); 276 304 277 - if (execmem_cache_rox) { 278 - err = set_memory_rox((unsigned long)p, vm->nr_pages); 279 - if (err) 280 - goto err_free_mem; 281 - } else { 282 - err = set_memory_x((unsigned long)p, vm->nr_pages); 283 - if (err) 284 - goto err_free_mem; 285 - } 305 + err = set_memory_rox((unsigned long)p, vm->nr_pages); 306 + if (err) 307 + goto err_free_mem; 286 308 287 309 err = execmem_cache_add(p, alloc_size); 288 310 if (err)
+2
mm/madvise.c
··· 508 508 pte_offset_map_lock(mm, pmd, addr, &ptl); 509 509 if (!start_pte) 510 510 break; 511 + flush_tlb_batched_pending(mm); 511 512 arch_enter_lazy_mmu_mode(); 512 513 if (!err) 513 514 nr = 0; ··· 742 741 start_pte = pte; 743 742 if (!start_pte) 744 743 break; 744 + flush_tlb_batched_pending(mm); 745 745 arch_enter_lazy_mmu_mode(); 746 746 if (!err) 747 747 nr = 0;
+40
mm/util.c
··· 1131 1131 } 1132 1132 EXPORT_SYMBOL(flush_dcache_folio); 1133 1133 #endif 1134 + 1135 + /** 1136 + * compat_vma_mmap_prepare() - Apply the file's .mmap_prepare() hook to an 1137 + * existing VMA 1138 + * @file: The file which possesses an f_op->mmap_prepare() hook 1139 + * @vma: The VMA to apply the .mmap_prepare() hook to. 1140 + * 1141 + * Ordinarily, .mmap_prepare() is invoked directly upon mmap(). However, certain 1142 + * 'wrapper' file systems invoke a nested mmap hook of an underlying file. 1143 + * 1144 + * Until all filesystems are converted to use .mmap_prepare(), we must be 1145 + * conservative and continue to invoke these 'wrapper' filesystems using the 1146 + * deprecated .mmap() hook. 1147 + * 1148 + * However, we have a problem if the underlying file system possesses an 1149 + * .mmap_prepare() hook, as we are in a different context when we invoke the 1150 + * .mmap() hook, already having a VMA to deal with. 1151 + * 1152 + * compat_vma_mmap_prepare() is a compatibility function that takes VMA state, 1153 + * establishes a struct vm_area_desc descriptor, passes it to the underlying 1154 + * .mmap_prepare() hook and applies any changes performed by it. 1155 + * 1156 + * Once the conversion of filesystems is complete this function will no longer 1157 + * be required and will be removed. 1158 + * 1159 + * Returns: 0 on success, or an error code on failure. 1160 + */ 1161 + int compat_vma_mmap_prepare(struct file *file, struct vm_area_struct *vma) 1162 + { 1163 + struct vm_area_desc desc; 1164 + int err; 1165 + 1166 + err = file->f_op->mmap_prepare(vma_to_desc(vma, &desc)); 1167 + if (err) 1168 + return err; 1169 + set_vma_from_desc(vma, &desc); 1170 + 1171 + return 0; 1172 + } 1173 + EXPORT_SYMBOL(compat_vma_mmap_prepare);
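The compat_vma_mmap_prepare() hunk above follows a snapshot/apply descriptor pattern: VMA state is copied into a temporary descriptor, the hook may mutate the descriptor, and only then are the changes written back. A minimal userspace sketch of just that pattern (the struct fields and the `example_mmap_prepare` hook are hypothetical stand-ins, not the kernel types):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for struct vm_area_struct / struct vm_area_desc. */
struct vma  { unsigned long start, end, pgoff; void *private_data; };
struct desc { unsigned long start, end, pgoff; void *private_data; };

/* Snapshot VMA state into a caller-provided descriptor. */
struct desc *vma_to_desc(const struct vma *vma, struct desc *d)
{
    d->start = vma->start;
    d->end = vma->end;
    d->pgoff = vma->pgoff;
    d->private_data = NULL; /* user-defined field starts out clear */
    return d;
}

/* Copy back any changes the hook made. */
void set_vma_from_desc(struct vma *vma, const struct desc *d)
{
    vma->pgoff = d->pgoff;
    vma->private_data = d->private_data;
}

/* A hook that adjusts the page offset, as an .mmap_prepare() might. */
int example_mmap_prepare(struct desc *d)
{
    d->pgoff += 1;
    return 0;
}

/* Round-trip: snapshot, run the hook, apply only on success. */
int compat_mmap_prepare(struct vma *vma)
{
    struct desc d;
    int err = example_mmap_prepare(vma_to_desc(vma, &d));

    if (err)
        return err;
    set_vma_from_desc(vma, &d);
    return 0;
}
```

Note the design point the real helper shares: on hook failure nothing is copied back, so the VMA is never left half-updated.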
+4 -19
mm/vma.c
··· 967 967 err = dup_anon_vma(next, middle, &anon_dup); 968 968 } 969 969 970 - if (err) 970 + if (err || commit_merge(vmg)) 971 971 goto abort; 972 - 973 - err = commit_merge(vmg); 974 - if (err) { 975 - VM_WARN_ON(err != -ENOMEM); 976 - 977 - if (anon_dup) 978 - unlink_anon_vmas(anon_dup); 979 - 980 - /* 981 - * We've cleaned up any cloned anon_vma's, no VMAs have been 982 - * modified, no harm no foul if the user requests that we not 983 - * report this and just give up, leaving the VMAs unmerged. 984 - */ 985 - if (!vmg->give_up_on_oom) 986 - vmg->state = VMA_MERGE_ERROR_NOMEM; 987 - return NULL; 988 - } 989 972 990 973 khugepaged_enter_vma(vmg->target, vmg->flags); 991 974 vmg->state = VMA_MERGE_SUCCESS; ··· 977 994 abort: 978 995 vma_iter_set(vmg->vmi, start); 979 996 vma_iter_load(vmg->vmi); 997 + 998 + if (anon_dup) 999 + unlink_anon_vmas(anon_dup); 980 1000 981 1001 /* 982 1002 * This means we have failed to clone anon_vma's correctly, but no ··· 3112 3126 userfaultfd_unmap_complete(mm, &uf); 3113 3127 return ret; 3114 3128 } 3115 - 3116 3129 3117 3130 /* Insert vm structure into process list sorted by address 3118 3131 * and into the inode's i_mmap tree. If vm_file is non-NULL
+47
mm/vma.h
··· 222 222 return 0; 223 223 } 224 224 225 + 226 + /* 227 + * Temporary helper functions for file systems which wrap an invocation of 228 + * f_op->mmap() but which might have an underlying file system which implements 229 + * f_op->mmap_prepare(). 230 + */ 231 + 232 + static inline struct vm_area_desc *vma_to_desc(struct vm_area_struct *vma, 233 + struct vm_area_desc *desc) 234 + { 235 + desc->mm = vma->vm_mm; 236 + desc->start = vma->vm_start; 237 + desc->end = vma->vm_end; 238 + 239 + desc->pgoff = vma->vm_pgoff; 240 + desc->file = vma->vm_file; 241 + desc->vm_flags = vma->vm_flags; 242 + desc->page_prot = vma->vm_page_prot; 243 + 244 + desc->vm_ops = NULL; 245 + desc->private_data = NULL; 246 + 247 + return desc; 248 + } 249 + 250 + static inline void set_vma_from_desc(struct vm_area_struct *vma, 251 + struct vm_area_desc *desc) 252 + { 253 + /* 254 + * Since we're invoking .mmap_prepare() despite having a partially 255 + * established VMA, we must take care to handle setting fields 256 + * correctly. 257 + */ 258 + 259 + /* Mutable fields. Populated with initial state. */ 260 + vma->vm_pgoff = desc->pgoff; 261 + if (vma->vm_file != desc->file) 262 + vma_set_file(vma, desc->file); 263 + if (vma->vm_flags != desc->vm_flags) 264 + vm_flags_set(vma, desc->vm_flags); 265 + vma->vm_page_prot = desc->page_prot; 266 + 267 + /* User-defined fields. */ 268 + vma->vm_ops = desc->vm_ops; 269 + vma->vm_private_data = desc->private_data; 270 + } 271 + 225 272 int 226 273 do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma, 227 274 struct mm_struct *mm, unsigned long start,
+1
net/atm/common.c
··· 635 635 636 636 skb->dev = NULL; /* for paths shared with net_device interfaces */ 637 637 if (!copy_from_iter_full(skb_put(skb, size), size, &m->msg_iter)) { 638 + atm_return_tx(vcc, skb); 638 639 kfree_skb(skb); 639 640 error = -EFAULT; 640 641 goto out;
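The one-line net/atm/common.c fix returns the accounted send-buffer space (via atm_return_tx()) before freeing the skb on the copy-failure path; previously only successfully transmitted skbs gave the accounting back. A toy model of that invariant, with illustrative names rather than the real socket fields:

```c
#include <assert.h>

int wmem_alloc; /* models sk_wmem_alloc */

void charge(int truesize)        { wmem_alloc += truesize; }
void atm_return_tx_model(int truesize) { wmem_alloc -= truesize; }

/* Returns 0 on success, -1 on a (simulated) copy_from_iter fault. */
int send_one(int truesize, int copy_fails)
{
    charge(truesize);           /* skb allocated and accounted */
    if (copy_fails) {
        atm_return_tx_model(truesize); /* the call the fix adds */
        return -1;              /* then kfree_skb() */
    }
    atm_return_tx_model(truesize);     /* done in pop() once sent */
    return 0;
}
```

Either way the function exits, the accounting must end balanced; before the fix, the error path leaked the charge.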
+10 -2
net/atm/lec.c
··· 124 124 125 125 /* Device structures */ 126 126 static struct net_device *dev_lec[MAX_LEC_ITF]; 127 + static DEFINE_MUTEX(lec_mutex); 127 128 128 129 #if IS_ENABLED(CONFIG_BRIDGE) 129 130 static void lec_handle_bridge(struct sk_buff *skb, struct net_device *dev) ··· 686 685 int bytes_left; 687 686 struct atmlec_ioc ioc_data; 688 687 688 + lockdep_assert_held(&lec_mutex); 689 689 /* Lecd must be up in this case */ 690 690 bytes_left = copy_from_user(&ioc_data, arg, sizeof(struct atmlec_ioc)); 691 691 if (bytes_left != 0) ··· 712 710 713 711 static int lec_mcast_attach(struct atm_vcc *vcc, int arg) 714 712 { 713 + lockdep_assert_held(&lec_mutex); 715 714 if (arg < 0 || arg >= MAX_LEC_ITF) 716 715 return -EINVAL; 717 716 arg = array_index_nospec(arg, MAX_LEC_ITF); ··· 728 725 int i; 729 726 struct lec_priv *priv; 730 727 728 + lockdep_assert_held(&lec_mutex); 731 729 if (arg < 0) 732 730 arg = 0; 733 731 if (arg >= MAX_LEC_ITF) ··· 746 742 snprintf(dev_lec[i]->name, IFNAMSIZ, "lec%d", i); 747 743 if (register_netdev(dev_lec[i])) { 748 744 free_netdev(dev_lec[i]); 745 + dev_lec[i] = NULL; 749 746 return -EINVAL; 750 747 } 751 748 ··· 909 904 v = (dev && netdev_priv(dev)) ? 
910 905 lec_priv_walk(state, l, netdev_priv(dev)) : NULL; 911 906 if (!v && dev) { 912 - dev_put(dev); 913 907 /* Partial state reset for the next time we get called */ 914 908 dev = NULL; 915 909 } ··· 932 928 { 933 929 struct lec_state *state = seq->private; 934 930 931 + mutex_lock(&lec_mutex); 935 932 state->itf = 0; 936 933 state->dev = NULL; 937 934 state->locked = NULL; ··· 950 945 if (state->dev) { 951 946 spin_unlock_irqrestore(&state->locked->lec_arp_lock, 952 947 state->flags); 953 - dev_put(state->dev); 948 + state->dev = NULL; 954 949 } 950 + mutex_unlock(&lec_mutex); 955 951 } 956 952 957 953 static void *lec_seq_next(struct seq_file *seq, void *v, loff_t *pos) ··· 1009 1003 return -ENOIOCTLCMD; 1010 1004 } 1011 1005 1006 + mutex_lock(&lec_mutex); 1012 1007 switch (cmd) { 1013 1008 case ATMLEC_CTRL: 1014 1009 err = lecd_attach(vcc, (int)arg); ··· 1024 1017 break; 1025 1018 } 1026 1019 1020 + mutex_unlock(&lec_mutex); 1027 1021 return err; 1028 1022 } 1029 1023
+1 -1
net/atm/raw.c
··· 36 36 37 37 pr_debug("(%d) %d -= %d\n", 38 38 vcc->vci, sk_wmem_alloc_get(sk), ATM_SKB(skb)->acct_truesize); 39 - WARN_ON(refcount_sub_and_test(ATM_SKB(skb)->acct_truesize, &sk->sk_wmem_alloc)); 39 + atm_return_tx(vcc, skb); 40 40 dev_kfree_skb_any(skb); 41 41 sk->sk_write_space(sk); 42 42 }
-3
net/core/skbuff.c
··· 6261 6261 if (!pskb_may_pull(skb, write_len)) 6262 6262 return -ENOMEM; 6263 6263 6264 - if (!skb_frags_readable(skb)) 6265 - return -EFAULT; 6266 - 6267 6264 if (!skb_cloned(skb) || skb_clone_writable(skb, write_len)) 6268 6265 return 0; 6269 6266
+3
net/ipv4/tcp_fastopen.c
··· 3 3 #include <linux/tcp.h> 4 4 #include <linux/rcupdate.h> 5 5 #include <net/tcp.h> 6 + #include <net/busy_poll.h> 6 7 7 8 void tcp_fastopen_init_key_once(struct net *net) 8 9 { ··· 279 278 req->timeout, false); 280 279 281 280 refcount_set(&req->rsk_refcnt, 2); 281 + 282 + sk_mark_napi_id_set(child, skb); 282 283 283 284 /* Now finish processing the fastopen child socket. */ 284 285 tcp_init_transfer(child, BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB, skb);
+24 -11
net/ipv4/tcp_input.c
··· 2367 2367 { 2368 2368 const struct sock *sk = (const struct sock *)tp; 2369 2369 2370 - if (tp->retrans_stamp && 2371 - tcp_tsopt_ecr_before(tp, tp->retrans_stamp)) 2372 - return true; /* got echoed TS before first retransmission */ 2370 + /* Received an echoed timestamp before the first retransmission? */ 2371 + if (tp->retrans_stamp) 2372 + return tcp_tsopt_ecr_before(tp, tp->retrans_stamp); 2373 2373 2374 - /* Check if nothing was retransmitted (retrans_stamp==0), which may 2375 - * happen in fast recovery due to TSQ. But we ignore zero retrans_stamp 2376 - * in TCP_SYN_SENT, since when we set FLAG_SYN_ACKED we also clear 2377 - * retrans_stamp even if we had retransmitted the SYN. 2374 + /* We set tp->retrans_stamp upon the first retransmission of a loss 2375 + * recovery episode, so normally if tp->retrans_stamp is 0 then no 2376 + * retransmission has happened yet (likely due to TSQ, which can cause 2377 + * fast retransmits to be delayed). So if snd_una advanced while 2378 + * tp->retrans_stamp is 0, then apparently a packet was merely delayed, 2379 + * not lost. But there are exceptions where we retransmit but then 2380 + * clear tp->retrans_stamp, so we check for those exceptions. 2378 2381 */ 2379 - if (!tp->retrans_stamp && /* no record of a retransmit/SYN? */ 2380 - sk->sk_state != TCP_SYN_SENT) /* not the FLAG_SYN_ACKED case? */ 2381 - return true; /* nothing was retransmitted */ 2382 2382 2383 - return false; 2383 + /* (1) For non-SACK connections, tcp_is_non_sack_preventing_reopen() 2384 + * clears tp->retrans_stamp when snd_una == high_seq. 2385 + */ 2386 + if (!tcp_is_sack(tp) && !before(tp->snd_una, tp->high_seq)) 2387 + return false; 2388 + 2389 + /* (2) In TCP_SYN_SENT, tcp_clean_rtx_queue() clears tp->retrans_stamp 2390 + * when FLAG_SYN_ACKED is set, even if the SYN was 2391 + * retransmitted. 2392 + */ 2393 + if (sk->sk_state == TCP_SYN_SENT) 2394 + return false; 2395 + 2396 + return true; /* tp->retrans_stamp is zero; no retransmit yet */ 2384 2397 } 2385 2398 2386 2399 /* Undo procedures. */
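The rewritten tcp_packet_delayed() above reduces to a small decision table. Here is a userspace restatement of just that control flow, using stub booleans in place of the kernel's struct tcp_sock fields; a true result means the peer's ACK pattern indicates the packet was merely delayed, not lost:

```c
#include <stdbool.h>

struct tp {
    bool retrans_stamp_set;      /* tp->retrans_stamp != 0 */
    bool ecr_before_retrans;     /* echoed TS predates first retransmit */
    bool is_sack;                /* tcp_is_sack(tp) */
    bool snd_una_before_high_seq;/* before(tp->snd_una, tp->high_seq) */
    bool syn_sent;               /* sk->sk_state == TCP_SYN_SENT */
};

bool packet_delayed(const struct tp *tp)
{
    /* A retransmit was recorded: the echoed timestamp decides. */
    if (tp->retrans_stamp_set)
        return tp->ecr_before_retrans;

    /* retrans_stamp == 0 usually means no retransmit yet, except: */

    /* (1) non-SACK reopen clears retrans_stamp at snd_una == high_seq */
    if (!tp->is_sack && !tp->snd_una_before_high_seq)
        return false;

    /* (2) SYN_SENT clears retrans_stamp even after a SYN retransmit */
    if (tp->syn_sent)
        return false;

    return true; /* no retransmit happened yet */
}
```

The point of the rewrite is exactly these two exception cases: a zero retrans_stamp is only trusted as "nothing retransmitted" once both are ruled out.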
+8
net/ipv6/calipso.c
··· 1207 1207 struct ipv6_opt_hdr *old, *new; 1208 1208 struct sock *sk = sk_to_full_sk(req_to_sk(req)); 1209 1209 1210 + /* sk is NULL for SYN+ACK w/ SYN Cookie */ 1211 + if (!sk) 1212 + return -ENOMEM; 1213 + 1210 1214 if (req_inet->ipv6_opt && req_inet->ipv6_opt->hopopt) 1211 1215 old = req_inet->ipv6_opt->hopopt; 1212 1216 else ··· 1250 1246 struct ipv6_opt_hdr *new; 1251 1247 struct ipv6_txoptions *txopts; 1252 1248 struct sock *sk = sk_to_full_sk(req_to_sk(req)); 1249 + 1250 + /* sk is NULL for SYN+ACK w/ SYN Cookie */ 1251 + if (!sk) 1252 + return; 1253 1253 1254 1254 if (!req_inet->ipv6_opt || !req_inet->ipv6_opt->hopopt) 1255 1255 return;
+4 -1
net/mac80211/debug.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 3 * Portions 4 - * Copyright (C) 2022 - 2024 Intel Corporation 4 + * Copyright (C) 2022 - 2025 Intel Corporation 5 5 */ 6 6 #ifndef __MAC80211_DEBUG_H 7 7 #define __MAC80211_DEBUG_H 8 + #include <linux/once_lite.h> 8 9 #include <net/cfg80211.h> 9 10 10 11 #ifdef CONFIG_MAC80211_OCB_DEBUG ··· 153 152 else \ 154 153 _sdata_err((link)->sdata, fmt, ##__VA_ARGS__); \ 155 154 } while (0) 155 + #define link_err_once(link, fmt, ...) \ 156 + DO_ONCE_LITE(link_err, link, fmt, ##__VA_ARGS__) 156 157 #define link_id_info(sdata, link_id, fmt, ...) \ 157 158 do { \ 158 159 if (ieee80211_vif_is_mld(&sdata->vif)) \
+4
net/mac80211/rx.c
··· 4432 4432 if (!multicast && 4433 4433 !ether_addr_equal(sdata->dev->dev_addr, hdr->addr1)) 4434 4434 return false; 4435 + /* reject invalid/our STA address */ 4436 + if (!is_valid_ether_addr(hdr->addr2) || 4437 + ether_addr_equal(sdata->dev->dev_addr, hdr->addr2)) 4438 + return false; 4435 4439 if (!rx->sta) { 4436 4440 int rate_idx; 4437 4441 if (status->encoding != RX_ENC_LEGACY)
+21 -8
net/mac80211/tx.c
··· 5 5 * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz> 6 6 * Copyright 2007 Johannes Berg <johannes@sipsolutions.net> 7 7 * Copyright 2013-2014 Intel Mobile Communications GmbH 8 - * Copyright (C) 2018-2024 Intel Corporation 8 + * Copyright (C) 2018-2025 Intel Corporation 9 9 * 10 10 * Transmit and frame generation functions. 11 11 */ ··· 5016 5016 } 5017 5017 } 5018 5018 5019 - static u8 __ieee80211_beacon_update_cntdwn(struct beacon_data *beacon) 5019 + static u8 __ieee80211_beacon_update_cntdwn(struct ieee80211_link_data *link, 5020 + struct beacon_data *beacon) 5020 5021 { 5021 - beacon->cntdwn_current_counter--; 5022 + if (beacon->cntdwn_current_counter == 1) { 5023 + /* 5024 + * Channel switch handling is done by a worker thread while 5025 + * beacons get pulled from hardware timers. It's therefore 5026 + * possible that software threads are slow enough to not be 5027 + * able to complete CSA handling in a single beacon interval, 5028 + * in which case we get here. There isn't much to do about 5029 + * it, other than letting the user know that the AP isn't 5030 + * behaving correctly. 
5031 + */ 5032 + link_err_once(link, 5033 + "beacon TX faster than countdown (channel/color switch) completion\n"); 5034 + return 0; 5035 + } 5022 5036 5023 - /* the counter should never reach 0 */ 5024 - WARN_ON_ONCE(!beacon->cntdwn_current_counter); 5037 + beacon->cntdwn_current_counter--; 5025 5038 5026 5039 return beacon->cntdwn_current_counter; 5027 5040 } ··· 5065 5052 if (!beacon) 5066 5053 goto unlock; 5067 5054 5068 - count = __ieee80211_beacon_update_cntdwn(beacon); 5055 + count = __ieee80211_beacon_update_cntdwn(link, beacon); 5069 5056 5070 5057 unlock: 5071 5058 rcu_read_unlock(); ··· 5463 5450 5464 5451 if (beacon->cntdwn_counter_offsets[0]) { 5465 5452 if (!is_template) 5466 - __ieee80211_beacon_update_cntdwn(beacon); 5453 + __ieee80211_beacon_update_cntdwn(link, beacon); 5467 5454 5468 5455 ieee80211_set_beacon_cntdwn(sdata, beacon, link); 5469 5456 } ··· 5495 5482 * for now we leave it consistent with overall 5496 5483 * mac80211's behavior. 5497 5484 */ 5498 - __ieee80211_beacon_update_cntdwn(beacon); 5485 + __ieee80211_beacon_update_cntdwn(link, beacon); 5499 5486 5500 5487 ieee80211_set_beacon_cntdwn(sdata, beacon, link); 5501 5488 }
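The mac80211 countdown change above replaces a WARN_ON after the fact with an early return at one: if beacons are pulled faster than the channel-switch worker completes, the counter parks at 1 instead of underflowing. The guard in isolation (the error print stands in for link_err_once()):

```c
#include <stdio.h>

/* Decrement a CSA/color-change countdown, never going below 1. */
unsigned char update_cntdwn(unsigned char *counter)
{
    if (*counter == 1) {
        /* The switch worker hasn't finished within one beacon
         * interval; report the stall instead of decrementing to 0. */
        fprintf(stderr,
                "beacon TX faster than countdown completion\n");
        return 0;
    }

    return --*counter;
}
```

Returning 0 still tells the caller the countdown has effectively expired, while the stored counter can never wrap.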
+2 -2
net/mpls/af_mpls.c
··· 81 81 82 82 if (index < net->mpls.platform_labels) { 83 83 struct mpls_route __rcu **platform_label = 84 - rcu_dereference(net->mpls.platform_label); 85 - rt = rcu_dereference(platform_label[index]); 84 + rcu_dereference_rtnl(net->mpls.platform_label); 85 + rt = rcu_dereference_rtnl(platform_label[index]); 86 86 } 87 87 return rt; 88 88 }
+4 -4
net/nfc/nci/uart.c
··· 119 119 120 120 memcpy(nu, nci_uart_drivers[driver], sizeof(struct nci_uart)); 121 121 nu->tty = tty; 122 - tty->disc_data = nu; 123 122 skb_queue_head_init(&nu->tx_q); 124 123 INIT_WORK(&nu->write_work, nci_uart_write_work); 125 124 spin_lock_init(&nu->rx_lock); 126 125 127 126 ret = nu->ops.open(nu); 128 127 if (ret) { 129 - tty->disc_data = NULL; 130 128 kfree(nu); 129 + return ret; 131 130 } else if (!try_module_get(nu->owner)) { 132 131 nu->ops.close(nu); 133 - tty->disc_data = NULL; 134 132 kfree(nu); 135 133 return -ENOENT; 136 134 } 137 - return ret; 135 + tty->disc_data = nu; 136 + 137 + return 0; 138 138 } 139 139 140 140 /* ------ LDISC part ------ */
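The nci_uart fix above moves the `tty->disc_data = nu` assignment to after every setup step has succeeded, so the error paths never have to unpublish a half-initialized object. The general "publish only when fully constructed" shape, with a hypothetical object in place of the NCI structures:

```c
#include <stdlib.h>

struct obj { int opened; };

struct obj *published; /* models tty->disc_data */

/* Simulated setup steps; each may fail. */
int open_obj(struct obj *o, int open_ok, int module_ok)
{
    o->opened = open_ok;
    if (!open_ok)
        return -1;
    if (!module_ok)
        return -1;
    return 0;
}

int attach(int open_ok, int module_ok)
{
    struct obj *o = malloc(sizeof(*o));

    if (!o)
        return -1;
    if (open_obj(o, open_ok, module_ok)) {
        free(o);     /* never published, so nothing to unset */
        return -1;
    }
    published = o;   /* publish last, once fully initialized */
    return 0;
}
```

With the assignment last, no other code path can ever observe a pointer to an object whose setup later failed.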
+10 -13
net/openvswitch/actions.c
··· 39 39 #include "flow_netlink.h" 40 40 #include "openvswitch_trace.h" 41 41 42 - DEFINE_PER_CPU(struct ovs_pcpu_storage, ovs_pcpu_storage) = { 43 - .bh_lock = INIT_LOCAL_LOCK(bh_lock), 44 - }; 42 + struct ovs_pcpu_storage __percpu *ovs_pcpu_storage; 45 43 46 44 /* Make a clone of the 'key', using the pre-allocated percpu 'flow_keys' 47 45 * space. Return NULL if out of key spaces. 48 46 */ 49 47 static struct sw_flow_key *clone_key(const struct sw_flow_key *key_) 50 48 { 51 - struct ovs_pcpu_storage *ovs_pcpu = this_cpu_ptr(&ovs_pcpu_storage); 49 + struct ovs_pcpu_storage *ovs_pcpu = this_cpu_ptr(ovs_pcpu_storage); 52 50 struct action_flow_keys *keys = &ovs_pcpu->flow_keys; 53 51 int level = ovs_pcpu->exec_level; 54 52 struct sw_flow_key *key = NULL; ··· 92 94 const struct nlattr *actions, 93 95 const int actions_len) 94 96 { 95 - struct action_fifo *fifo = this_cpu_ptr(&ovs_pcpu_storage.action_fifos); 97 + struct action_fifo *fifo = this_cpu_ptr(&ovs_pcpu_storage->action_fifos); 96 98 struct deferred_action *da; 97 99 98 100 da = action_fifo_put(fifo); ··· 753 755 static int ovs_vport_output(struct net *net, struct sock *sk, 754 756 struct sk_buff *skb) 755 757 { 756 - struct ovs_frag_data *data = this_cpu_ptr(&ovs_pcpu_storage.frag_data); 758 + struct ovs_frag_data *data = this_cpu_ptr(&ovs_pcpu_storage->frag_data); 757 759 struct vport *vport = data->vport; 758 760 759 761 if (skb_cow_head(skb, data->l2_len) < 0) { ··· 805 807 unsigned int hlen = skb_network_offset(skb); 806 808 struct ovs_frag_data *data; 807 809 808 - data = this_cpu_ptr(&ovs_pcpu_storage.frag_data); 810 + data = this_cpu_ptr(&ovs_pcpu_storage->frag_data); 809 811 data->dst = skb->_skb_refdst; 810 812 data->vport = vport; 811 813 data->cb = *OVS_CB(skb); ··· 1564 1566 clone = clone_flow_key ? 
clone_key(key) : key; 1565 1567 if (clone) { 1566 1568 int err = 0; 1567 - 1568 1569 if (actions) { /* Sample action */ 1569 1570 if (clone_flow_key) 1570 - __this_cpu_inc(ovs_pcpu_storage.exec_level); 1571 + __this_cpu_inc(ovs_pcpu_storage->exec_level); 1571 1572 1572 1573 err = do_execute_actions(dp, skb, clone, 1573 1574 actions, len); 1574 1575 1575 1576 if (clone_flow_key) 1576 - __this_cpu_dec(ovs_pcpu_storage.exec_level); 1577 + __this_cpu_dec(ovs_pcpu_storage->exec_level); 1577 1578 } else { /* Recirc action */ 1578 1579 clone->recirc_id = recirc_id; 1579 1580 ovs_dp_process_packet(skb, clone); ··· 1608 1611 1609 1612 static void process_deferred_actions(struct datapath *dp) 1610 1613 { 1611 - struct action_fifo *fifo = this_cpu_ptr(&ovs_pcpu_storage.action_fifos); 1614 + struct action_fifo *fifo = this_cpu_ptr(&ovs_pcpu_storage->action_fifos); 1612 1615 1613 1616 /* Do not touch the FIFO in case there is no deferred actions. */ 1614 1617 if (action_fifo_is_empty(fifo)) ··· 1639 1642 { 1640 1643 int err, level; 1641 1644 1642 - level = __this_cpu_inc_return(ovs_pcpu_storage.exec_level); 1645 + level = __this_cpu_inc_return(ovs_pcpu_storage->exec_level); 1643 1646 if (unlikely(level > OVS_RECURSION_LIMIT)) { 1644 1647 net_crit_ratelimited("ovs: recursion limit reached on datapath %s, probable configuration error\n", 1645 1648 ovs_dp_name(dp)); ··· 1656 1659 process_deferred_actions(dp); 1657 1660 1658 1661 out: 1659 - __this_cpu_dec(ovs_pcpu_storage.exec_level); 1662 + __this_cpu_dec(ovs_pcpu_storage->exec_level); 1660 1663 return err; 1661 1664 }
+35 -7
net/openvswitch/datapath.c
··· 244 244 /* Must be called with rcu_read_lock. */ 245 245 void ovs_dp_process_packet(struct sk_buff *skb, struct sw_flow_key *key) 246 246 { 247 - struct ovs_pcpu_storage *ovs_pcpu = this_cpu_ptr(&ovs_pcpu_storage); 247 + struct ovs_pcpu_storage *ovs_pcpu = this_cpu_ptr(ovs_pcpu_storage); 248 248 const struct vport *p = OVS_CB(skb)->input_vport; 249 249 struct datapath *dp = p->dp; 250 250 struct sw_flow *flow; ··· 299 299 * avoided. 300 300 */ 301 301 if (IS_ENABLED(CONFIG_PREEMPT_RT) && ovs_pcpu->owner != current) { 302 - local_lock_nested_bh(&ovs_pcpu_storage.bh_lock); 302 + local_lock_nested_bh(&ovs_pcpu_storage->bh_lock); 303 303 ovs_pcpu->owner = current; 304 304 ovs_pcpu_locked = true; 305 305 } ··· 310 310 ovs_dp_name(dp), error); 311 311 if (ovs_pcpu_locked) { 312 312 ovs_pcpu->owner = NULL; 313 - local_unlock_nested_bh(&ovs_pcpu_storage.bh_lock); 313 + local_unlock_nested_bh(&ovs_pcpu_storage->bh_lock); 314 314 } 315 315 316 316 stats_counter = &stats->n_hit; ··· 689 689 sf_acts = rcu_dereference(flow->sf_acts); 690 690 691 691 local_bh_disable(); 692 - local_lock_nested_bh(&ovs_pcpu_storage.bh_lock); 692 + local_lock_nested_bh(&ovs_pcpu_storage->bh_lock); 693 693 if (IS_ENABLED(CONFIG_PREEMPT_RT)) 694 - this_cpu_write(ovs_pcpu_storage.owner, current); 694 + this_cpu_write(ovs_pcpu_storage->owner, current); 695 695 err = ovs_execute_actions(dp, packet, sf_acts, &flow->key); 696 696 if (IS_ENABLED(CONFIG_PREEMPT_RT)) 697 - this_cpu_write(ovs_pcpu_storage.owner, NULL); 698 - local_unlock_nested_bh(&ovs_pcpu_storage.bh_lock); 697 + this_cpu_write(ovs_pcpu_storage->owner, NULL); 698 + local_unlock_nested_bh(&ovs_pcpu_storage->bh_lock); 699 699 local_bh_enable(); 700 700 rcu_read_unlock(); 701 701 ··· 2744 2744 .n_reasons = ARRAY_SIZE(ovs_drop_reasons), 2745 2745 }; 2746 2746 2747 + static int __init ovs_alloc_percpu_storage(void) 2748 + { 2749 + unsigned int cpu; 2750 + 2751 + ovs_pcpu_storage = alloc_percpu(*ovs_pcpu_storage); 2752 + if 
(!ovs_pcpu_storage) 2753 + return -ENOMEM; 2754 + 2755 + for_each_possible_cpu(cpu) { 2756 + struct ovs_pcpu_storage *ovs_pcpu; 2757 + 2758 + ovs_pcpu = per_cpu_ptr(ovs_pcpu_storage, cpu); 2759 + local_lock_init(&ovs_pcpu->bh_lock); 2760 + } 2761 + return 0; 2762 + } 2763 + 2764 + static void ovs_free_percpu_storage(void) 2765 + { 2766 + free_percpu(ovs_pcpu_storage); 2767 + } 2768 + 2747 2769 static int __init dp_init(void) 2748 2770 { 2749 2771 int err; ··· 2774 2752 sizeof_field(struct sk_buff, cb)); 2775 2753 2776 2754 pr_info("Open vSwitch switching datapath\n"); 2755 + 2756 + err = ovs_alloc_percpu_storage(); 2757 + if (err) 2758 + goto error; 2777 2759 2778 2760 err = ovs_internal_dev_rtnl_link_register(); 2779 2761 if (err) ··· 2825 2799 error_unreg_rtnl_link: 2826 2800 ovs_internal_dev_rtnl_link_unregister(); 2827 2801 error: 2802 + ovs_free_percpu_storage(); 2828 2803 return err; 2829 2804 } 2830 2805 ··· 2840 2813 ovs_vport_exit(); 2841 2814 ovs_flow_exit(); 2842 2815 ovs_internal_dev_rtnl_link_unregister(); 2816 + ovs_free_percpu_storage(); 2843 2817 } 2844 2818 2845 2819 module_init(dp_init);
+2 -1
net/openvswitch/datapath.h
··· 220 220 struct task_struct *owner; 221 221 local_lock_t bh_lock; 222 222 }; 223 - DECLARE_PER_CPU(struct ovs_pcpu_storage, ovs_pcpu_storage); 223 + 224 + extern struct ovs_pcpu_storage __percpu *ovs_pcpu_storage; 224 225 225 226 /** 226 227 * enum ovs_pkt_hash_types - hash info to include with a packet
+4 -2
net/sched/sch_taprio.c
··· 1328 1328 1329 1329 stab = rtnl_dereference(q->root->stab); 1330 1330 1331 - oper = rtnl_dereference(q->oper_sched); 1331 + rcu_read_lock(); 1332 + oper = rcu_dereference(q->oper_sched); 1332 1333 if (oper) 1333 1334 taprio_update_queue_max_sdu(q, oper, stab); 1334 1335 1335 - admin = rtnl_dereference(q->admin_sched); 1336 + admin = rcu_dereference(q->admin_sched); 1336 1337 if (admin) 1337 1338 taprio_update_queue_max_sdu(q, admin, stab); 1339 + rcu_read_unlock(); 1338 1340 1339 1341 break; 1340 1342 }
+2 -2
net/tipc/udp_media.c
··· 489 489 490 490 rtnl_lock(); 491 491 b = tipc_bearer_find(net, bname); 492 - if (!b) { 492 + if (!b || b->bcast_addr.media_id != TIPC_MEDIA_TYPE_UDP) { 493 493 rtnl_unlock(); 494 494 return -EINVAL; 495 495 } ··· 500 500 501 501 rtnl_lock(); 502 502 b = rtnl_dereference(tn->bearer_list[bid]); 503 - if (!b) { 503 + if (!b || b->bcast_addr.media_id != TIPC_MEDIA_TYPE_UDP) { 504 504 rtnl_unlock(); 505 505 return -EINVAL; 506 506 }
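The TIPC change above adds a media-type check so that a bearer found by name or index is rejected unless it really is a UDP bearer, before the code goes on to treat it as one. The lookup-then-validate pattern, with illustrative types:

```c
#include <stddef.h>

enum { TIPC_MEDIA_TYPE_ETH = 1, TIPC_MEDIA_TYPE_UDP = 3 };

struct bearer { int media_id; };

/* Return the bearer only if it is actually a UDP bearer;
 * the caller maps a NULL result to -EINVAL. */
struct bearer *find_udp_bearer(struct bearer *b)
{
    if (!b || b->media_id != TIPC_MEDIA_TYPE_UDP)
        return NULL;
    return b;
}
```

Without the check, a bearer of another media type would be cast to UDP-specific private data, which is what the fix prevents.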
+1
rust/bindings/bindings_helper.h
··· 39 39 #include <linux/blk_types.h> 40 40 #include <linux/blkdev.h> 41 41 #include <linux/clk.h> 42 + #include <linux/completion.h> 42 43 #include <linux/configfs.h> 43 44 #include <linux/cpu.h> 44 45 #include <linux/cpufreq.h>
+8
rust/helpers/completion.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/completion.h> 4 + 5 + void rust_helper_init_completion(struct completion *x) 6 + { 7 + init_completion(x); 8 + }
+8
rust/helpers/cpu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/smp.h> 4 + 5 + unsigned int rust_helper_raw_smp_processor_id(void) 6 + { 7 + return raw_smp_processor_id(); 8 + }
+2
rust/helpers/helpers.c
··· 13 13 #include "build_assert.c" 14 14 #include "build_bug.c" 15 15 #include "clk.c" 16 + #include "completion.c" 17 + #include "cpu.c" 16 18 #include "cpufreq.c" 17 19 #include "cpumask.c" 18 20 #include "cred.c"
+123 -2
rust/kernel/cpu.rs
··· 6 6 7 7 use crate::{bindings, device::Device, error::Result, prelude::ENODEV}; 8 8 9 + /// Returns the maximum number of possible CPUs in the current system configuration. 10 + #[inline] 11 + pub fn nr_cpu_ids() -> u32 { 12 + #[cfg(any(NR_CPUS_1, CONFIG_FORCE_NR_CPUS))] 13 + { 14 + bindings::NR_CPUS 15 + } 16 + 17 + #[cfg(not(any(NR_CPUS_1, CONFIG_FORCE_NR_CPUS)))] 18 + // SAFETY: `nr_cpu_ids` is a valid global provided by the kernel. 19 + unsafe { 20 + bindings::nr_cpu_ids 21 + } 22 + } 23 + 24 + /// The CPU ID. 25 + /// 26 + /// Represents a CPU identifier as a wrapper around an [`u32`]. 27 + /// 28 + /// # Invariants 29 + /// 30 + /// The CPU ID lies within the range `[0, nr_cpu_ids())`. 31 + /// 32 + /// # Examples 33 + /// 34 + /// ``` 35 + /// use kernel::cpu::CpuId; 36 + /// 37 + /// let cpu = 0; 38 + /// 39 + /// // SAFETY: 0 is always a valid CPU number. 40 + /// let id = unsafe { CpuId::from_u32_unchecked(cpu) }; 41 + /// 42 + /// assert_eq!(id.as_u32(), cpu); 43 + /// assert!(CpuId::from_i32(0).is_some()); 44 + /// assert!(CpuId::from_i32(-1).is_none()); 45 + /// ``` 46 + #[derive(Copy, Clone, PartialEq, Eq, Debug)] 47 + pub struct CpuId(u32); 48 + 49 + impl CpuId { 50 + /// Creates a new [`CpuId`] from the given `id` without checking bounds. 51 + /// 52 + /// # Safety 53 + /// 54 + /// The caller must ensure that `id` is a valid CPU ID (i.e., `0 <= id < nr_cpu_ids()`). 55 + #[inline] 56 + pub unsafe fn from_i32_unchecked(id: i32) -> Self { 57 + debug_assert!(id >= 0); 58 + debug_assert!((id as u32) < nr_cpu_ids()); 59 + 60 + // INVARIANT: The function safety guarantees `id` is a valid CPU id. 61 + Self(id as u32) 62 + } 63 + 64 + /// Creates a new [`CpuId`] from the given `id`, checking that it is valid. 65 + pub fn from_i32(id: i32) -> Option<Self> { 66 + if id < 0 || id as u32 >= nr_cpu_ids() { 67 + None 68 + } else { 69 + // INVARIANT: `id` has just been checked as a valid CPU ID. 
70 + Some(Self(id as u32)) 71 + } 72 + } 73 + 74 + /// Creates a new [`CpuId`] from the given `id` without checking bounds. 75 + /// 76 + /// # Safety 77 + /// 78 + /// The caller must ensure that `id` is a valid CPU ID (i.e., `0 <= id < nr_cpu_ids()`). 79 + #[inline] 80 + pub unsafe fn from_u32_unchecked(id: u32) -> Self { 81 + debug_assert!(id < nr_cpu_ids()); 82 + 83 + // Ensure the `id` fits in an [`i32`] as it's also representable that way. 84 + debug_assert!(id <= i32::MAX as u32); 85 + 86 + // INVARIANT: The function safety guarantees `id` is a valid CPU id. 87 + Self(id) 88 + } 89 + 90 + /// Creates a new [`CpuId`] from the given `id`, checking that it is valid. 91 + pub fn from_u32(id: u32) -> Option<Self> { 92 + if id >= nr_cpu_ids() { 93 + None 94 + } else { 95 + // INVARIANT: `id` has just been checked as a valid CPU ID. 96 + Some(Self(id)) 97 + } 98 + } 99 + 100 + /// Returns CPU number. 101 + #[inline] 102 + pub fn as_u32(&self) -> u32 { 103 + self.0 104 + } 105 + 106 + /// Returns the ID of the CPU the code is currently running on. 107 + /// 108 + /// The returned value is considered unstable because it may change 109 + /// unexpectedly due to preemption or CPU migration. It should only be 110 + /// used when the context ensures that the task remains on the same CPU 111 + /// or the users could use a stale (yet valid) CPU ID. 112 + pub fn current() -> Self { 113 + // SAFETY: raw_smp_processor_id() always returns a valid CPU ID. 114 + unsafe { Self::from_u32_unchecked(bindings::raw_smp_processor_id()) } 115 + } 116 + } 117 + 118 + impl From<CpuId> for u32 { 119 + fn from(id: CpuId) -> Self { 120 + id.as_u32() 121 + } 122 + } 123 + 124 + impl From<CpuId> for i32 { 125 + fn from(id: CpuId) -> Self { 126 + id.as_u32() as i32 127 + } 128 + } 129 + 9 130 /// Creates a new instance of CPU's device. 10 131 /// 11 132 /// # Safety ··· 138 17 /// Callers must ensure that the CPU device is not used after it has been unregistered. 
139 18 /// This can be achieved, for example, by registering a CPU hotplug notifier and removing 140 19 /// any references to the CPU device within the notifier's callback. 141 - pub unsafe fn from_cpu(cpu: u32) -> Result<&'static Device> { 20 + pub unsafe fn from_cpu(cpu: CpuId) -> Result<&'static Device> { 142 21 // SAFETY: It is safe to call `get_cpu_device()` for any CPU. 143 - let ptr = unsafe { bindings::get_cpu_device(cpu) }; 22 + let ptr = unsafe { bindings::get_cpu_device(u32::from(cpu)) }; 144 23 if ptr.is_null() { 145 24 return Err(ENODEV); 146 25 }
+128 -45
rust/kernel/cpufreq.rs
··· 10 10 11 11 use crate::{ 12 12 clk::Hertz, 13 + cpu::CpuId, 13 14 cpumask, 14 15 device::{Bound, Device}, 15 16 devres::Devres, ··· 466 465 467 466 /// Returns the primary CPU for the [`Policy`]. 468 467 #[inline] 469 - pub fn cpu(&self) -> u32 { 470 - self.as_ref().cpu 468 + pub fn cpu(&self) -> CpuId { 469 + // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number. 470 + unsafe { CpuId::from_u32_unchecked(self.as_ref().cpu) } 471 471 } 472 472 473 473 /// Returns the minimum frequency for the [`Policy`]. ··· 527 525 #[inline] 528 526 pub fn generic_get(&self) -> Result<u32> { 529 527 // SAFETY: By the type invariant, the pointer stored in `self` is valid. 530 - Ok(unsafe { bindings::cpufreq_generic_get(self.cpu()) }) 528 + Ok(unsafe { bindings::cpufreq_generic_get(u32::from(self.cpu())) }) 531 529 } 532 530 533 531 /// Provides a wrapper to the register with energy model using the OPP core. ··· 680 678 struct PolicyCpu<'a>(&'a mut Policy); 681 679 682 680 impl<'a> PolicyCpu<'a> { 683 - fn from_cpu(cpu: u32) -> Result<Self> { 681 + fn from_cpu(cpu: CpuId) -> Result<Self> { 684 682 // SAFETY: It is safe to call `cpufreq_cpu_get` for any valid CPU. 685 - let ptr = from_err_ptr(unsafe { bindings::cpufreq_cpu_get(cpu) })?; 683 + let ptr = from_err_ptr(unsafe { bindings::cpufreq_cpu_get(u32::from(cpu)) })?; 686 684 687 685 Ok(Self( 688 686 // SAFETY: The `ptr` is guaranteed to be valid and remains valid for the lifetime of ··· 1057 1055 impl<T: Driver> Registration<T> { 1058 1056 /// Driver's `init` callback. 1059 1057 /// 1060 - /// SAFETY: Called from C. Inputs must be valid pointers. 1061 - extern "C" fn init_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1058 + /// # Safety 1059 + /// 1060 + /// - This function may only be called from the cpufreq C infrastructure. 1061 + /// - The pointer arguments must be valid pointers. 
1062 + unsafe extern "C" fn init_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1062 1063 from_result(|| { 1063 1064 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1064 1065 // lifetime of `policy`. ··· 1075 1070 1076 1071 /// Driver's `exit` callback. 1077 1072 /// 1078 - /// SAFETY: Called from C. Inputs must be valid pointers. 1079 - extern "C" fn exit_callback(ptr: *mut bindings::cpufreq_policy) { 1073 + /// # Safety 1074 + /// 1075 + /// - This function may only be called from the cpufreq C infrastructure. 1076 + /// - The pointer arguments must be valid pointers. 1077 + unsafe extern "C" fn exit_callback(ptr: *mut bindings::cpufreq_policy) { 1080 1078 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1081 1079 // lifetime of `policy`. 1082 1080 let policy = unsafe { Policy::from_raw_mut(ptr) }; ··· 1090 1082 1091 1083 /// Driver's `online` callback. 1092 1084 /// 1093 - /// SAFETY: Called from C. Inputs must be valid pointers. 1094 - extern "C" fn online_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1085 + /// # Safety 1086 + /// 1087 + /// - This function may only be called from the cpufreq C infrastructure. 1088 + /// - The pointer arguments must be valid pointers. 1089 + unsafe extern "C" fn online_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1095 1090 from_result(|| { 1096 1091 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1097 1092 // lifetime of `policy`. ··· 1105 1094 1106 1095 /// Driver's `offline` callback. 1107 1096 /// 1108 - /// SAFETY: Called from C. Inputs must be valid pointers. 1109 - extern "C" fn offline_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1097 + /// # Safety 1098 + /// 1099 + /// - This function may only be called from the cpufreq C infrastructure. 1100 + /// - The pointer arguments must be valid pointers. 
1101 + unsafe extern "C" fn offline_callback( 1102 + ptr: *mut bindings::cpufreq_policy, 1103 + ) -> kernel::ffi::c_int { 1110 1104 from_result(|| { 1111 1105 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1112 1106 // lifetime of `policy`. ··· 1122 1106 1123 1107 /// Driver's `suspend` callback. 1124 1108 /// 1125 - /// SAFETY: Called from C. Inputs must be valid pointers. 1126 - extern "C" fn suspend_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1109 + /// # Safety 1110 + /// 1111 + /// - This function may only be called from the cpufreq C infrastructure. 1112 + /// - The pointer arguments must be valid pointers. 1113 + unsafe extern "C" fn suspend_callback( 1114 + ptr: *mut bindings::cpufreq_policy, 1115 + ) -> kernel::ffi::c_int { 1127 1116 from_result(|| { 1128 1117 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1129 1118 // lifetime of `policy`. ··· 1139 1118 1140 1119 /// Driver's `resume` callback. 1141 1120 /// 1142 - /// SAFETY: Called from C. Inputs must be valid pointers. 1143 - extern "C" fn resume_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1121 + /// # Safety 1122 + /// 1123 + /// - This function may only be called from the cpufreq C infrastructure. 1124 + /// - The pointer arguments must be valid pointers. 1125 + unsafe extern "C" fn resume_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1144 1126 from_result(|| { 1145 1127 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1146 1128 // lifetime of `policy`. ··· 1154 1130 1155 1131 /// Driver's `ready` callback. 1156 1132 /// 1157 - /// SAFETY: Called from C. Inputs must be valid pointers. 1158 - extern "C" fn ready_callback(ptr: *mut bindings::cpufreq_policy) { 1133 + /// # Safety 1134 + /// 1135 + /// - This function may only be called from the cpufreq C infrastructure. 
1136 + /// - The pointer arguments must be valid pointers. 1137 + unsafe extern "C" fn ready_callback(ptr: *mut bindings::cpufreq_policy) { 1159 1138 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1160 1139 // lifetime of `policy`. 1161 1140 let policy = unsafe { Policy::from_raw_mut(ptr) }; ··· 1167 1140 1168 1141 /// Driver's `verify` callback. 1169 1142 /// 1170 - /// SAFETY: Called from C. Inputs must be valid pointers. 1171 - extern "C" fn verify_callback(ptr: *mut bindings::cpufreq_policy_data) -> kernel::ffi::c_int { 1143 + /// # Safety 1144 + /// 1145 + /// - This function may only be called from the cpufreq C infrastructure. 1146 + /// - The pointer arguments must be valid pointers. 1147 + unsafe extern "C" fn verify_callback( 1148 + ptr: *mut bindings::cpufreq_policy_data, 1149 + ) -> kernel::ffi::c_int { 1172 1150 from_result(|| { 1173 1151 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1174 1152 // lifetime of `policy`. ··· 1184 1152 1185 1153 /// Driver's `setpolicy` callback. 1186 1154 /// 1187 - /// SAFETY: Called from C. Inputs must be valid pointers. 1188 - extern "C" fn setpolicy_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1155 + /// # Safety 1156 + /// 1157 + /// - This function may only be called from the cpufreq C infrastructure. 1158 + /// - The pointer arguments must be valid pointers. 1159 + unsafe extern "C" fn setpolicy_callback( 1160 + ptr: *mut bindings::cpufreq_policy, 1161 + ) -> kernel::ffi::c_int { 1189 1162 from_result(|| { 1190 1163 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1191 1164 // lifetime of `policy`. ··· 1201 1164 1202 1165 /// Driver's `target` callback. 1203 1166 /// 1204 - /// SAFETY: Called from C. Inputs must be valid pointers. 
1205 - extern "C" fn target_callback( 1167 + /// # Safety 1168 + /// 1169 + /// - This function may only be called from the cpufreq C infrastructure. 1170 + /// - The pointer arguments must be valid pointers. 1171 + unsafe extern "C" fn target_callback( 1206 1172 ptr: *mut bindings::cpufreq_policy, 1207 1173 target_freq: u32, 1208 1174 relation: u32, ··· 1220 1180 1221 1181 /// Driver's `target_index` callback. 1222 1182 /// 1223 - /// SAFETY: Called from C. Inputs must be valid pointers. 1224 - extern "C" fn target_index_callback( 1183 + /// # Safety 1184 + /// 1185 + /// - This function may only be called from the cpufreq C infrastructure. 1186 + /// - The pointer arguments must be valid pointers. 1187 + unsafe extern "C" fn target_index_callback( 1225 1188 ptr: *mut bindings::cpufreq_policy, 1226 1189 index: u32, 1227 1190 ) -> kernel::ffi::c_int { ··· 1243 1200 1244 1201 /// Driver's `fast_switch` callback. 1245 1202 /// 1246 - /// SAFETY: Called from C. Inputs must be valid pointers. 1247 - extern "C" fn fast_switch_callback( 1203 + /// # Safety 1204 + /// 1205 + /// - This function may only be called from the cpufreq C infrastructure. 1206 + /// - The pointer arguments must be valid pointers. 1207 + unsafe extern "C" fn fast_switch_callback( 1248 1208 ptr: *mut bindings::cpufreq_policy, 1249 1209 target_freq: u32, 1250 1210 ) -> kernel::ffi::c_uint { ··· 1258 1212 } 1259 1213 1260 1214 /// Driver's `adjust_perf` callback. 1261 - extern "C" fn adjust_perf_callback( 1215 + /// 1216 + /// # Safety 1217 + /// 1218 + /// - This function may only be called from the cpufreq C infrastructure. 1219 + unsafe extern "C" fn adjust_perf_callback( 1262 1220 cpu: u32, 1263 1221 min_perf: usize, 1264 1222 target_perf: usize, 1265 1223 capacity: usize, 1266 1224 ) { 1267 - if let Ok(mut policy) = PolicyCpu::from_cpu(cpu) { 1225 + // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number. 
1226 + let cpu_id = unsafe { CpuId::from_u32_unchecked(cpu) }; 1227 + 1228 + if let Ok(mut policy) = PolicyCpu::from_cpu(cpu_id) { 1268 1229 T::adjust_perf(&mut policy, min_perf, target_perf, capacity); 1269 1230 } 1270 1231 } 1271 1232 1272 1233 /// Driver's `get_intermediate` callback. 1273 1234 /// 1274 - /// SAFETY: Called from C. Inputs must be valid pointers. 1275 - extern "C" fn get_intermediate_callback( 1235 + /// # Safety 1236 + /// 1237 + /// - This function may only be called from the cpufreq C infrastructure. 1238 + /// - The pointer arguments must be valid pointers. 1239 + unsafe extern "C" fn get_intermediate_callback( 1276 1240 ptr: *mut bindings::cpufreq_policy, 1277 1241 index: u32, 1278 1242 ) -> kernel::ffi::c_uint { ··· 1299 1243 1300 1244 /// Driver's `target_intermediate` callback. 1301 1245 /// 1302 - /// SAFETY: Called from C. Inputs must be valid pointers. 1303 - extern "C" fn target_intermediate_callback( 1246 + /// # Safety 1247 + /// 1248 + /// - This function may only be called from the cpufreq C infrastructure. 1249 + /// - The pointer arguments must be valid pointers. 1250 + unsafe extern "C" fn target_intermediate_callback( 1304 1251 ptr: *mut bindings::cpufreq_policy, 1305 1252 index: u32, 1306 1253 ) -> kernel::ffi::c_int { ··· 1321 1262 } 1322 1263 1323 1264 /// Driver's `get` callback. 1324 - extern "C" fn get_callback(cpu: u32) -> kernel::ffi::c_uint { 1325 - PolicyCpu::from_cpu(cpu).map_or(0, |mut policy| T::get(&mut policy).map_or(0, |f| f)) 1265 + /// 1266 + /// # Safety 1267 + /// 1268 + /// - This function may only be called from the cpufreq C infrastructure. 1269 + unsafe extern "C" fn get_callback(cpu: u32) -> kernel::ffi::c_uint { 1270 + // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number. 
1271 + let cpu_id = unsafe { CpuId::from_u32_unchecked(cpu) }; 1272 + 1273 + PolicyCpu::from_cpu(cpu_id).map_or(0, |mut policy| T::get(&mut policy).map_or(0, |f| f)) 1326 1274 } 1327 1275 1328 1276 /// Driver's `update_limit` callback. 1329 - extern "C" fn update_limits_callback(ptr: *mut bindings::cpufreq_policy) { 1277 + /// 1278 + /// # Safety 1279 + /// 1280 + /// - This function may only be called from the cpufreq C infrastructure. 1281 + /// - The pointer arguments must be valid pointers. 1282 + unsafe extern "C" fn update_limits_callback(ptr: *mut bindings::cpufreq_policy) { 1330 1283 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1331 1284 // lifetime of `policy`. 1332 1285 let policy = unsafe { Policy::from_raw_mut(ptr) }; ··· 1347 1276 1348 1277 /// Driver's `bios_limit` callback. 1349 1278 /// 1350 - /// SAFETY: Called from C. Inputs must be valid pointers. 1351 - extern "C" fn bios_limit_callback(cpu: i32, limit: *mut u32) -> kernel::ffi::c_int { 1279 + /// # Safety 1280 + /// 1281 + /// - This function may only be called from the cpufreq C infrastructure. 1282 + /// - The pointer arguments must be valid pointers. 1283 + unsafe extern "C" fn bios_limit_callback(cpu: i32, limit: *mut u32) -> kernel::ffi::c_int { 1284 + // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number. 1285 + let cpu_id = unsafe { CpuId::from_i32_unchecked(cpu) }; 1286 + 1352 1287 from_result(|| { 1353 - let mut policy = PolicyCpu::from_cpu(cpu as u32)?; 1288 + let mut policy = PolicyCpu::from_cpu(cpu_id)?; 1354 1289 1355 1290 // SAFETY: `limit` is guaranteed by the C code to be valid. 1356 1291 T::bios_limit(&mut policy, &mut (unsafe { *limit })).map(|()| 0) ··· 1365 1288 1366 1289 /// Driver's `set_boost` callback. 1367 1290 /// 1368 - /// SAFETY: Called from C. Inputs must be valid pointers. 
1369 - extern "C" fn set_boost_callback( 1291 + /// # Safety 1292 + /// 1293 + /// - This function may only be called from the cpufreq C infrastructure. 1294 + /// - The pointer arguments must be valid pointers. 1295 + unsafe extern "C" fn set_boost_callback( 1370 1296 ptr: *mut bindings::cpufreq_policy, 1371 1297 state: i32, 1372 1298 ) -> kernel::ffi::c_int { ··· 1383 1303 1384 1304 /// Driver's `register_em` callback. 1385 1305 /// 1386 - /// SAFETY: Called from C. Inputs must be valid pointers. 1387 - extern "C" fn register_em_callback(ptr: *mut bindings::cpufreq_policy) { 1306 + /// # Safety 1307 + /// 1308 + /// - This function may only be called from the cpufreq C infrastructure. 1309 + /// - The pointer arguments must be valid pointers. 1310 + unsafe extern "C" fn register_em_callback(ptr: *mut bindings::cpufreq_policy) { 1388 1311 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1389 1312 // lifetime of `policy`. 1390 1313 let policy = unsafe { Policy::from_raw_mut(ptr) };
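The cpufreq hunks apply one pattern throughout: each C-facing callback becomes `unsafe extern "C" fn` with a `# Safety` section, so pointer validity is a documented obligation of the caller rather than something the callback must cope with. A self-contained sketch of that pattern, with `Policy` as a hypothetical stand-in type:

```rust
// Stand-in for the kernel's policy object; not the real cpufreq type.
struct Policy {
    cpu: u32,
}

/// # Safety
///
/// - May only be called by infrastructure that owns a live `Policy`.
/// - `ptr` must be a valid, exclusive pointer to a `Policy`.
unsafe extern "C" fn init_callback(ptr: *mut Policy) -> i32 {
    // SAFETY: valid and exclusive per this function's safety contract.
    let policy = unsafe { &mut *ptr };
    policy.cpu += 1; // pretend initialization work
    0
}

// A safe wrapper standing in for the C core: it upholds the contract, so the
// `unsafe` is confined to one audited call site.
fn dispatch(policy: &mut Policy) -> i32 {
    // SAFETY: `policy` is a live exclusive reference, so the pointer is valid.
    unsafe { init_callback(policy as *mut Policy) }
}
```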
+36 -15
rust/kernel/cpumask.rs
··· 6 6 7 7 use crate::{ 8 8 alloc::{AllocError, Flags}, 9 + cpu::CpuId, 9 10 prelude::*, 10 11 types::Opaque, 11 12 }; ··· 36 35 /// 37 36 /// ``` 38 37 /// use kernel::bindings; 38 + /// use kernel::cpu::CpuId; 39 39 /// use kernel::cpumask::Cpumask; 40 40 /// 41 - /// fn set_clear_cpu(ptr: *mut bindings::cpumask, set_cpu: u32, clear_cpu: i32) { 41 + /// fn set_clear_cpu(ptr: *mut bindings::cpumask, set_cpu: CpuId, clear_cpu: CpuId) { 42 42 /// // SAFETY: The `ptr` is valid for writing and remains valid for the lifetime of the 43 43 /// // returned reference. 44 44 /// let mask = unsafe { Cpumask::as_mut_ref(ptr) }; ··· 92 90 /// This mismatches kernel naming convention and corresponds to the C 93 91 /// function `__cpumask_set_cpu()`. 94 92 #[inline] 95 - pub fn set(&mut self, cpu: u32) { 93 + pub fn set(&mut self, cpu: CpuId) { 96 94 // SAFETY: By the type invariant, `self.as_raw` is a valid argument to `__cpumask_set_cpu`. 97 - unsafe { bindings::__cpumask_set_cpu(cpu, self.as_raw()) }; 95 + unsafe { bindings::__cpumask_set_cpu(u32::from(cpu), self.as_raw()) }; 98 96 } 99 97 100 98 /// Clear `cpu` in the cpumask. ··· 103 101 /// This mismatches kernel naming convention and corresponds to the C 104 102 /// function `__cpumask_clear_cpu()`. 105 103 #[inline] 106 - pub fn clear(&mut self, cpu: i32) { 104 + pub fn clear(&mut self, cpu: CpuId) { 107 105 // SAFETY: By the type invariant, `self.as_raw` is a valid argument to 108 106 // `__cpumask_clear_cpu`. 109 - unsafe { bindings::__cpumask_clear_cpu(cpu, self.as_raw()) }; 107 + unsafe { bindings::__cpumask_clear_cpu(i32::from(cpu), self.as_raw()) }; 110 108 } 111 109 112 110 /// Test `cpu` in the cpumask. 113 111 /// 114 112 /// Equivalent to the kernel's `cpumask_test_cpu` API. 115 113 #[inline] 116 - pub fn test(&self, cpu: i32) -> bool { 114 + pub fn test(&self, cpu: CpuId) -> bool { 117 115 // SAFETY: By the type invariant, `self.as_raw` is a valid argument to `cpumask_test_cpu`. 
118 - unsafe { bindings::cpumask_test_cpu(cpu, self.as_raw()) } 116 + unsafe { bindings::cpumask_test_cpu(i32::from(cpu), self.as_raw()) } 119 117 } 120 118 121 119 /// Set all CPUs in the cpumask. ··· 180 178 /// The following example demonstrates how to create and update a [`CpumaskVar`]. 181 179 /// 182 180 /// ``` 181 + /// use kernel::cpu::CpuId; 183 182 /// use kernel::cpumask::CpumaskVar; 184 183 /// 185 184 /// let mut mask = CpumaskVar::new_zero(GFP_KERNEL).unwrap(); 186 185 /// 187 186 /// assert!(mask.empty()); 188 - /// mask.set(2); 189 - /// assert!(mask.test(2)); 190 - /// mask.set(3); 191 - /// assert!(mask.test(3)); 192 - /// assert_eq!(mask.weight(), 2); 187 + /// let mut count = 0; 188 + /// 189 + /// let cpu2 = CpuId::from_u32(2); 190 + /// if let Some(cpu) = cpu2 { 191 + /// mask.set(cpu); 192 + /// assert!(mask.test(cpu)); 193 + /// count += 1; 194 + /// } 195 + /// 196 + /// let cpu3 = CpuId::from_u32(3); 197 + /// if let Some(cpu) = cpu3 { 198 + /// mask.set(cpu); 199 + /// assert!(mask.test(cpu)); 200 + /// count += 1; 201 + /// } 202 + /// 203 + /// assert_eq!(mask.weight(), count); 193 204 /// 194 205 /// let mask2 = CpumaskVar::try_clone(&mask).unwrap(); 195 - /// assert!(mask2.test(2)); 196 - /// assert!(mask2.test(3)); 197 - /// assert_eq!(mask2.weight(), 2); 206 + /// 207 + /// if let Some(cpu) = cpu2 { 208 + /// assert!(mask2.test(cpu)); 209 + /// } 210 + /// 211 + /// if let Some(cpu) = cpu3 { 212 + /// assert!(mask2.test(cpu)); 213 + /// } 214 + /// assert_eq!(mask2.weight(), count); 198 215 /// ``` 199 216 pub struct CpumaskVar { 200 217 #[cfg(CONFIG_CPUMASK_OFFSTACK)]
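The rewritten `CpumaskVar` doctest above validates CPU numbers before use, since `CpuId::from_u32` can return `None` on a small system, and only counts CPUs that actually exist. The same shape as a userspace sketch, with a plain `u64` bitmask as an assumed stand-in for the kernel cpumask:

```rust
// Hypothetical bitmask stand-in for the kernel cpumask; supports up to 64 CPUs.
#[derive(Clone, Default)]
struct Mask(u64);

impl Mask {
    fn set(&mut self, cpu: u32) {
        self.0 |= 1u64 << cpu;
    }
    fn test(&self, cpu: u32) -> bool {
        self.0 & (1u64 << cpu) != 0
    }
    fn weight(&self) -> u32 {
        self.0.count_ones()
    }
}

const NR_CPUS: u32 = 4;

// Checked lookup mirroring `CpuId::from_u32`.
fn cpu_id(id: u32) -> Option<u32> {
    (id < NR_CPUS).then_some(id)
}

// Mirrors the doctest: set only CPUs that exist, and count how many were set
// so the weight check stays correct regardless of the system size.
fn demo() -> (Mask, u32) {
    let mut mask = Mask::default();
    let mut count = 0;
    for candidate in [2u32, 3] {
        if let Some(cpu) = cpu_id(candidate) {
            mask.set(cpu);
            count += 1;
        }
    }
    (mask, count)
}
```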
+43 -17
rust/kernel/devres.rs
··· 12 12 error::{Error, Result}, 13 13 ffi::c_void, 14 14 prelude::*, 15 - revocable::Revocable, 16 - sync::Arc, 15 + revocable::{Revocable, RevocableGuard}, 16 + sync::{rcu, Arc, Completion}, 17 17 types::ARef, 18 18 }; 19 - 20 - use core::ops::Deref; 21 19 22 20 #[pin_data] 23 21 struct DevresInner<T> { ··· 23 25 callback: unsafe extern "C" fn(*mut c_void), 24 26 #[pin] 25 27 data: Revocable<T>, 28 + #[pin] 29 + revoke: Completion, 26 30 } 27 31 28 32 /// This abstraction is meant to be used by subsystems to containerize [`Device`] bound resources to 29 33 /// manage their lifetime. 30 34 /// 31 35 /// [`Device`] bound resources should be freed when either the resource goes out of scope or the 32 - /// [`Device`] is unbound respectively, depending on what happens first. 36 + /// [`Device`] is unbound respectively, depending on what happens first. In any case, it is always 37 + /// guaranteed that revoking the device resource is completed before the corresponding [`Device`] 38 + /// is unbound. 33 39 /// 34 40 /// To achieve that [`Devres`] registers a devres callback on creation, which is called once the 35 41 /// [`Device`] is unbound, revoking access to the encapsulated resource (see also [`Revocable`]). ··· 104 102 dev: dev.into(), 105 103 callback: Self::devres_callback, 106 104 data <- Revocable::new(data), 105 + revoke <- Completion::new(), 107 106 }), 108 107 flags, 109 108 )?; ··· 133 130 self as _ 134 131 } 135 132 136 - fn remove_action(this: &Arc<Self>) { 133 + fn remove_action(this: &Arc<Self>) -> bool { 137 134 // SAFETY: 138 135 // - `self.inner.dev` is a valid `Device`, 139 136 // - the `action` and `data` pointers are the exact same ones as given to devm_add_action() 140 137 // previously, 141 138 // - `self` is always valid, even if the action has been released already. 
142 - let ret = unsafe { 139 + let success = unsafe { 143 140 bindings::devm_remove_action_nowarn( 144 141 this.dev.as_raw(), 145 142 Some(this.callback), 146 143 this.as_ptr() as _, 147 144 ) 148 - }; 145 + } == 0; 149 146 150 - if ret == 0 { 147 + if success { 151 148 // SAFETY: We leaked an `Arc` reference to devm_add_action() in `DevresInner::new`; if 152 149 // devm_remove_action_nowarn() was successful we can (and have to) claim back ownership 153 150 // of this reference. 154 151 let _ = unsafe { Arc::from_raw(this.as_ptr()) }; 155 152 } 153 + 154 + success 156 155 } 157 156 158 157 #[allow(clippy::missing_safety_doc)] ··· 166 161 // `DevresInner::new`. 167 162 let inner = unsafe { Arc::from_raw(ptr) }; 168 163 169 - inner.data.revoke(); 164 + if !inner.data.revoke() { 165 + // If `revoke()` returns false, it means that `Devres::drop` already started revoking 166 + // `inner.data` for us. Hence we have to wait until `Devres::drop()` signals that it 167 + // completed revoking `inner.data`. 168 + inner.revoke.wait_for_completion(); 169 + } 170 170 } 171 171 } 172 172 ··· 228 218 // SAFETY: `dev` being the same device as the device this `Devres` has been created for 229 219 // proves that `self.0.data` hasn't been revoked and is guaranteed to not be revoked as 230 220 // long as `dev` lives; `dev` lives at least as long as `self`. 231 - Ok(unsafe { self.deref().access() }) 221 + Ok(unsafe { self.0.data.access() }) 232 222 } 233 - } 234 223 235 - impl<T> Deref for Devres<T> { 236 - type Target = Revocable<T>; 224 + /// [`Devres`] accessor for [`Revocable::try_access`]. 225 + pub fn try_access(&self) -> Option<RevocableGuard<'_, T>> { 226 + self.0.data.try_access() 227 + } 237 228 238 - fn deref(&self) -> &Self::Target { 239 - &self.0.data 229 + /// [`Devres`] accessor for [`Revocable::try_access_with`]. 
230 + pub fn try_access_with<R, F: FnOnce(&T) -> R>(&self, f: F) -> Option<R> { 231 + self.0.data.try_access_with(f) 232 + } 233 + 234 + /// [`Devres`] accessor for [`Revocable::try_access_with_guard`]. 235 + pub fn try_access_with_guard<'a>(&'a self, guard: &'a rcu::Guard) -> Option<&'a T> { 236 + self.0.data.try_access_with_guard(guard) 240 237 } 241 238 } 242 239 243 240 impl<T> Drop for Devres<T> { 244 241 fn drop(&mut self) { 245 - DevresInner::remove_action(&self.0); 242 + // SAFETY: When `drop` runs, it is guaranteed that nobody is accessing the revocable data 243 + // anymore, hence it is safe not to wait for the grace period to finish. 244 + if unsafe { self.0.data.revoke_nosync() } { 245 + // We revoked `self.0.data` before the devres action did, hence try to remove it. 246 + if !DevresInner::remove_action(&self.0) { 247 + // We could not remove the devres action, which means that it now runs concurrently, 248 + // hence signal that `self.0.data` has been revoked successfully. 249 + self.0.revoke.complete_all(); 250 + } 251 + } 246 252 } 247 253 }
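The `Devres` change closes a race between `Devres::drop` and the devres action: whichever side revokes first wins, and the losing callback waits on a completion, so revocation is guaranteed to finish before the device counts as unbound. A sketch of that handshake with std primitives as stand-ins (the real code uses the RCU-aware `Revocable` and the kernel `Completion`; `remove_action_succeeded` is passed in here where the real `drop` calls `devm_remove_action_nowarn()`):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Condvar, Mutex};

// Minimal completion stand-in: once completed, all current and future
// waiters return immediately.
struct Completion {
    done: Mutex<bool>,
    cv: Condvar,
}

impl Completion {
    fn new() -> Self {
        Self { done: Mutex::new(false), cv: Condvar::new() }
    }
    fn complete_all(&self) {
        *self.done.lock().unwrap() = true;
        self.cv.notify_all();
    }
    fn wait_for_completion(&self) {
        let mut done = self.done.lock().unwrap();
        while !*done {
            done = self.cv.wait(done).unwrap();
        }
    }
}

struct Inner {
    available: AtomicBool,
    revoke_done: Completion, // mirrors the new `revoke` field
}

impl Inner {
    // Mirrors `Revocable::revoke`: true only for the call that revokes.
    fn revoke(&self) -> bool {
        self.available.swap(false, Ordering::Relaxed)
    }

    // Devres-callback side: if `Drop` revoked first, wait for its signal.
    fn devres_callback(&self) {
        if !self.revoke() {
            self.revoke_done.wait_for_completion();
        }
    }

    // `Drop` side: if we revoked first but the devres action can no longer be
    // removed (it is running concurrently), signal it that revocation is done.
    fn drop_side(&self, remove_action_succeeded: bool) {
        if self.revoke() && !remove_action_succeeded {
            self.revoke_done.complete_all();
        }
    }
}
```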
+14 -4
rust/kernel/revocable.rs
··· 154 154 /// # Safety 155 155 /// 156 156 /// Callers must ensure that there are no more concurrent users of the revocable object. 157 - unsafe fn revoke_internal<const SYNC: bool>(&self) { 158 - if self.is_available.swap(false, Ordering::Relaxed) { 157 + unsafe fn revoke_internal<const SYNC: bool>(&self) -> bool { 158 + let revoke = self.is_available.swap(false, Ordering::Relaxed); 159 + 160 + if revoke { 159 161 if SYNC { 160 162 // SAFETY: Just an FFI call, there are no further requirements. 161 163 unsafe { bindings::synchronize_rcu() }; ··· 167 165 // `compare_exchange` above that takes `is_available` from `true` to `false`. 168 166 unsafe { drop_in_place(self.data.get()) }; 169 167 } 168 + 169 + revoke 170 170 } 171 171 172 172 /// Revokes access to and drops the wrapped object. ··· 176 172 /// Access to the object is revoked immediately to new callers of [`Revocable::try_access`], 177 173 /// expecting that there are no concurrent users of the object. 178 174 /// 175 + /// Returns `true` if `&self` has been revoked with this call, `false` if it was revoked 176 + /// already. 177 + /// 179 178 /// # Safety 180 179 /// 181 180 /// Callers must ensure that there are no more concurrent users of the revocable object. 182 - pub unsafe fn revoke_nosync(&self) { 181 + pub unsafe fn revoke_nosync(&self) -> bool { 183 182 // SAFETY: By the safety requirement of this function, the caller ensures that nobody is 184 183 // accessing the data anymore and hence we don't have to wait for the grace period to 185 184 // finish. ··· 196 189 /// If there are concurrent users of the object (i.e., ones that called 197 190 /// [`Revocable::try_access`] beforehand and still haven't dropped the returned guard), this 198 191 /// function waits for the concurrent access to complete before dropping the wrapped object. 199 - pub fn revoke(&self) { 192 + /// 193 + /// Returns `true` if `&self` has been revoked with this call, `false` if it was revoked 194 + /// already. 
195 + pub fn revoke(&self) -> bool { 200 196 // SAFETY: By passing `true` we ask `revoke_internal` to wait for the grace period to 201 197 // finish. 202 198 unsafe { self.revoke_internal::<true>() }
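`revoke()` and `revoke_nosync()` now report whether the call actually performed the revocation, which is what lets the `Devres` caller above detect who won the race. The core of that contract is a single atomic swap; a minimal stand-in (the kernel version additionally drops the payload and, for the synchronous variant, waits for an RCU grace period):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

struct Revocable {
    is_available: AtomicBool,
}

impl Revocable {
    fn new() -> Self {
        Self { is_available: AtomicBool::new(true) }
    }

    /// Returns `true` if this call revoked, `false` if already revoked.
    /// `swap` makes this race-free: exactly one caller observes `true`.
    fn revoke(&self) -> bool {
        self.is_available.swap(false, Ordering::Relaxed)
    }
}
```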
+2
rust/kernel/sync.rs
··· 10 10 use pin_init; 11 11 12 12 mod arc; 13 + pub mod completion; 13 14 mod condvar; 14 15 pub mod lock; 15 16 mod locked_by; ··· 18 17 pub mod rcu; 19 18 20 19 pub use arc::{Arc, ArcBorrow, UniqueArc}; 20 + pub use completion::Completion; 21 21 pub use condvar::{new_condvar, CondVar, CondVarTimeoutResult}; 22 22 pub use lock::global::{global_lock, GlobalGuard, GlobalLock, GlobalLockBackend, GlobalLockedBy}; 23 23 pub use lock::mutex::{new_mutex, Mutex, MutexGuard};
+112
rust/kernel/sync/completion.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Completion support. 4 + //! 5 + //! Reference: <https://docs.kernel.org/scheduler/completion.html> 6 + //! 7 + //! C header: [`include/linux/completion.h`](srctree/include/linux/completion.h) 8 + 9 + use crate::{bindings, prelude::*, types::Opaque}; 10 + 11 + /// Synchronization primitive to signal when a certain task has been completed. 12 + /// 13 + /// The [`Completion`] synchronization primitive signals when a certain task has been completed by 14 + /// waking up other tasks that have been queued up to wait for the [`Completion`] to be completed. 15 + /// 16 + /// # Examples 17 + /// 18 + /// ``` 19 + /// use kernel::sync::{Arc, Completion}; 20 + /// use kernel::workqueue::{self, impl_has_work, new_work, Work, WorkItem}; 21 + /// 22 + /// #[pin_data] 23 + /// struct MyTask { 24 + /// #[pin] 25 + /// work: Work<MyTask>, 26 + /// #[pin] 27 + /// done: Completion, 28 + /// } 29 + /// 30 + /// impl_has_work! { 31 + /// impl HasWork<Self> for MyTask { self.work } 32 + /// } 33 + /// 34 + /// impl MyTask { 35 + /// fn new() -> Result<Arc<Self>> { 36 + /// let this = Arc::pin_init(pin_init!(MyTask { 37 + /// work <- new_work!("MyTask::work"), 38 + /// done <- Completion::new(), 39 + /// }), GFP_KERNEL)?; 40 + /// 41 + /// let _ = workqueue::system().enqueue(this.clone()); 42 + /// 43 + /// Ok(this) 44 + /// } 45 + /// 46 + /// fn wait_for_completion(&self) { 47 + /// self.done.wait_for_completion(); 48 + /// 49 + /// pr_info!("Completion: task complete\n"); 50 + /// } 51 + /// } 52 + /// 53 + /// impl WorkItem for MyTask { 54 + /// type Pointer = Arc<MyTask>; 55 + /// 56 + /// fn run(this: Arc<MyTask>) { 57 + /// // process this task 58 + /// this.done.complete_all(); 59 + /// } 60 + /// } 61 + /// 62 + /// let task = MyTask::new()?; 63 + /// task.wait_for_completion(); 64 + /// # Ok::<(), Error>(()) 65 + /// ``` 66 + #[pin_data] 67 + pub struct Completion { 68 + #[pin] 69 + inner: Opaque<bindings::completion>, 
70 + } 71 + 72 + // SAFETY: `Completion` is safe to be send to any task. 73 + unsafe impl Send for Completion {} 74 + 75 + // SAFETY: `Completion` is safe to be accessed concurrently. 76 + unsafe impl Sync for Completion {} 77 + 78 + impl Completion { 79 + /// Create an initializer for a new [`Completion`]. 80 + pub fn new() -> impl PinInit<Self> { 81 + pin_init!(Self { 82 + inner <- Opaque::ffi_init(|slot: *mut bindings::completion| { 83 + // SAFETY: `slot` is a valid pointer to an uninitialized `struct completion`. 84 + unsafe { bindings::init_completion(slot) }; 85 + }), 86 + }) 87 + } 88 + 89 + fn as_raw(&self) -> *mut bindings::completion { 90 + self.inner.get() 91 + } 92 + 93 + /// Signal all tasks waiting on this completion. 94 + /// 95 + /// This method wakes up all tasks waiting on this completion; after this operation the 96 + /// completion is permanently done, i.e. signals all current and future waiters. 97 + pub fn complete_all(&self) { 98 + // SAFETY: `self.as_raw()` is a pointer to a valid `struct completion`. 99 + unsafe { bindings::complete_all(self.as_raw()) }; 100 + } 101 + 102 + /// Wait for completion of a task. 103 + /// 104 + /// This method waits for the completion of a task; it is not interruptible and there is no 105 + /// timeout. 106 + /// 107 + /// See also [`Completion::complete_all`]. 108 + pub fn wait_for_completion(&self) { 109 + // SAFETY: `self.as_raw()` is a pointer to a valid `struct completion`. 110 + unsafe { bindings::wait_for_completion(self.as_raw()) }; 111 + } 112 + }
+1 -1
rust/kernel/time/hrtimer.rs
··· 517 517 ) -> *mut Self { 518 518 // SAFETY: As per the safety requirement of this function, `ptr` 519 519 // is pointing inside a `$timer_type`. 520 - unsafe { ::kernel::container_of!(ptr, $timer_type, $field).cast_mut() } 520 + unsafe { ::kernel::container_of!(ptr, $timer_type, $field) } 521 521 } 522 522 } 523 523 }
+2 -12
scripts/gendwarfksyms/gendwarfksyms.h
··· 216 216 void cache_init(struct cache *cache); 217 217 void cache_free(struct cache *cache); 218 218 219 - static inline void __cache_mark_expanded(struct cache *cache, uintptr_t addr) 220 - { 221 - cache_set(cache, addr, 1); 222 - } 223 - 224 - static inline bool __cache_was_expanded(struct cache *cache, uintptr_t addr) 225 - { 226 - return cache_get(cache, addr) == 1; 227 - } 228 - 229 219 static inline void cache_mark_expanded(struct cache *cache, void *addr) 230 220 { 231 - __cache_mark_expanded(cache, (uintptr_t)addr); 221 + cache_set(cache, (unsigned long)addr, 1); 232 222 } 233 223 234 224 static inline bool cache_was_expanded(struct cache *cache, void *addr) 235 225 { 236 - return __cache_was_expanded(cache, (uintptr_t)addr); 226 + return cache_get(cache, (unsigned long)addr) == 1; 237 227 } 238 228 239 229 /*
+19 -46
scripts/gendwarfksyms/types.c
··· 333 333 cache_free(&expansion_cache); 334 334 } 335 335 336 - static void __type_expand(struct die *cache, struct type_expansion *type, 337 - bool recursive); 338 - 339 - static void type_expand_child(struct die *cache, struct type_expansion *type, 340 - bool recursive) 341 - { 342 - struct type_expansion child; 343 - char *name; 344 - 345 - name = get_type_name(cache); 346 - if (!name) { 347 - __type_expand(cache, type, recursive); 348 - return; 349 - } 350 - 351 - if (recursive && !__cache_was_expanded(&expansion_cache, cache->addr)) { 352 - __cache_mark_expanded(&expansion_cache, cache->addr); 353 - type_expansion_init(&child); 354 - __type_expand(cache, &child, true); 355 - type_map_add(name, &child); 356 - type_expansion_free(&child); 357 - } 358 - 359 - type_expansion_append(type, name, name); 360 - } 361 - 362 - static void __type_expand(struct die *cache, struct type_expansion *type, 363 - bool recursive) 336 + static void __type_expand(struct die *cache, struct type_expansion *type) 364 337 { 365 338 struct die_fragment *df; 366 339 struct die *child; 340 + char *name; 367 341 368 342 list_for_each_entry(df, &cache->fragments, list) { 369 343 switch (df->type) { ··· 353 379 error("unknown child: %" PRIxPTR, 354 380 df->data.addr); 355 381 356 - type_expand_child(child, type, recursive); 382 + name = get_type_name(child); 383 + if (name) 384 + type_expansion_append(type, name, name); 385 + else 386 + __type_expand(child, type); 387 + 357 388 break; 358 389 case FRAGMENT_LINEBREAK: 359 390 /* ··· 376 397 } 377 398 } 378 399 379 - static void type_expand(struct die *cache, struct type_expansion *type, 380 - bool recursive) 400 + static void type_expand(const char *name, struct die *cache, 401 + struct type_expansion *type) 381 402 { 403 + const char *override; 404 + 382 405 type_expansion_init(type); 383 - __type_expand(cache, type, recursive); 384 - cache_free(&expansion_cache); 406 + 407 + if (stable && kabi_get_type_string(name, &override)) 408 + 
type_parse(name, override, type); 409 + else 410 + __type_expand(cache, type); 385 411 } 386 412 387 413 static void type_parse(const char *name, const char *str, ··· 399 415 400 416 if (!*str) 401 417 error("empty type string override for '%s'", name); 402 - 403 - type_expansion_init(type); 404 418 405 419 for (pos = 0; str[pos]; ++pos) { 406 420 bool empty; ··· 460 478 static void expand_type(struct die *cache, void *arg) 461 479 { 462 480 struct type_expansion type; 463 - const char *override; 464 481 char *name; 465 482 466 483 if (cache->mapped) ··· 485 504 486 505 debug("%s", name); 487 506 488 - if (stable && kabi_get_type_string(name, &override)) 489 - type_parse(name, override, &type); 490 - else 491 - type_expand(cache, &type, true); 492 - 507 + type_expand(name, cache, &type); 493 508 type_map_add(name, &type); 494 509 type_expansion_free(&type); 495 510 free(name); ··· 495 518 { 496 519 struct type_expansion type; 497 520 struct version version; 498 - const char *override; 499 521 struct die *cache; 500 522 501 523 /* ··· 508 532 if (__die_map_get(sym->die_addr, DIE_SYMBOL, &cache)) 509 533 return; /* We'll warn about missing CRCs later. */ 510 534 511 - if (stable && kabi_get_type_string(sym->name, &override)) 512 - type_parse(sym->name, override, &type); 513 - else 514 - type_expand(cache, &type, false); 535 + type_expand(sym->name, cache, &type); 515 536 516 537 /* If the symbol already has a version, don't calculate it again. */ 517 538 if (sym->state != SYMBOL_PROCESSED) {
+12 -3
scripts/misc-check
··· 62 62 xargs -r printf "%s: warning: EXPORT_SYMBOL() is not used, but #include <linux/export.h> is present\n" >&2 63 63 } 64 64 65 - check_tracked_ignored_files 66 - check_missing_include_linux_export_h 67 - check_unnecessary_include_linux_export_h 65 + case "${KBUILD_EXTRA_WARN}" in 66 + *1*) 67 + check_tracked_ignored_files 68 + ;; 69 + esac 70 + 71 + case "${KBUILD_EXTRA_WARN}" in 72 + *2*) 73 + check_missing_include_linux_export_h 74 + check_unnecessary_include_linux_export_h 75 + ;; 76 + esac
+1 -1
security/selinux/xfrm.c
··· 94 94 95 95 ctx->ctx_doi = XFRM_SC_DOI_LSM; 96 96 ctx->ctx_alg = XFRM_SC_ALG_SELINUX; 97 - ctx->ctx_len = str_len; 97 + ctx->ctx_len = str_len + 1; 98 98 memcpy(ctx->ctx_str, &uctx[1], str_len); 99 99 ctx->ctx_str[str_len] = '\0'; 100 100 rc = security_context_to_sid(ctx->ctx_str, str_len,
+17 -11
tools/net/ynl/pyynl/lib/ynl.py
··· 231 231 self.extack['unknown'].append(extack) 232 232 233 233 if attr_space: 234 - # We don't have the ability to parse nests yet, so only do global 235 - if 'miss-type' in self.extack and 'miss-nest' not in self.extack: 236 - miss_type = self.extack['miss-type'] 237 - if miss_type in attr_space.attrs_by_val: 238 - spec = attr_space.attrs_by_val[miss_type] 239 - self.extack['miss-type'] = spec['name'] 240 - if 'doc' in spec: 241 - self.extack['miss-type-doc'] = spec['doc'] 234 + self.annotate_extack(attr_space) 242 235 243 236 def _decode_policy(self, raw): 244 237 policy = {} ··· 257 264 policy['mask'] = attr.as_scalar('u64') 258 265 return policy 259 266 267 + def annotate_extack(self, attr_space): 268 + """ Make extack more human friendly with attribute information """ 269 + 270 + # We don't have the ability to parse nests yet, so only do global 271 + if 'miss-type' in self.extack and 'miss-nest' not in self.extack: 272 + miss_type = self.extack['miss-type'] 273 + if miss_type in attr_space.attrs_by_val: 274 + spec = attr_space.attrs_by_val[miss_type] 275 + self.extack['miss-type'] = spec['name'] 276 + if 'doc' in spec: 277 + self.extack['miss-type-doc'] = spec['doc'] 278 + 260 279 def cmd(self): 261 280 return self.nl_type 262 281 ··· 282 277 283 278 284 279 class NlMsgs: 285 - def __init__(self, data, attr_space=None): 280 + def __init__(self, data): 286 281 self.msgs = [] 287 282 288 283 offset = 0 289 284 while offset < len(data): 290 - msg = NlMsg(data, offset, attr_space=attr_space) 285 + msg = NlMsg(data, offset) 291 286 offset += msg.nl_len 292 287 self.msgs.append(msg) 293 288 ··· 1039 1034 op_rsp = [] 1040 1035 while not done: 1041 1036 reply = self.sock.recv(self._recv_size) 1042 - nms = NlMsgs(reply, attr_space=op.attr_set) 1037 + nms = NlMsgs(reply) 1043 1038 self._recv_dbg_print(reply, nms) 1044 1039 for nl_msg in nms: 1045 1040 if nl_msg.nl_seq in reqs_by_seq: 1046 1041 (op, vals, req_msg, req_flags) = reqs_by_seq[nl_msg.nl_seq] 1047 1042 if 
nl_msg.extack: 1043 + nl_msg.annotate_extack(op.attr_set) 1048 1044 self._decode_extack(req_msg, op, nl_msg.extack, vals) 1049 1045 else: 1050 1046 op = None
+5 -4
tools/power/cpupower/Makefile
··· 73 73 mandir ?= /usr/man 74 74 libdir ?= /usr/lib 75 75 libexecdir ?= /usr/libexec 76 + unitdir ?= /usr/lib/systemd/system 76 77 includedir ?= /usr/include 77 78 localedir ?= /usr/share/locale 78 79 docdir ?= /usr/share/doc/packages/cpupower ··· 310 309 $(INSTALL_DATA) cpupower-service.conf '$(DESTDIR)${confdir}' 311 310 $(INSTALL) -d $(DESTDIR)${libexecdir} 312 311 $(INSTALL_PROGRAM) cpupower.sh '$(DESTDIR)${libexecdir}/cpupower' 313 - $(INSTALL) -d $(DESTDIR)${libdir}/systemd/system 314 - sed 's|___CDIR___|${confdir}|; s|___LDIR___|${libexecdir}|' cpupower.service.in > '$(DESTDIR)${libdir}/systemd/system/cpupower.service' 315 - $(SETPERM_DATA) '$(DESTDIR)${libdir}/systemd/system/cpupower.service' 312 + $(INSTALL) -d $(DESTDIR)${unitdir} 313 + sed 's|___CDIR___|${confdir}|; s|___LDIR___|${libexecdir}|' cpupower.service.in > '$(DESTDIR)${unitdir}/cpupower.service' 314 + $(SETPERM_DATA) '$(DESTDIR)${unitdir}/cpupower.service' 316 315 317 316 install-man: 318 317 $(INSTALL_DATA) -D man/cpupower.1 $(DESTDIR)${mandir}/man1/cpupower.1 ··· 349 348 - rm -f $(DESTDIR)${bindir}/utils/cpupower 350 349 - rm -f $(DESTDIR)${confdir}cpupower-service.conf 351 350 - rm -f $(DESTDIR)${libexecdir}/cpupower 352 - - rm -f $(DESTDIR)${libdir}/systemd/system/cpupower.service 351 + - rm -f $(DESTDIR)${unitdir}/cpupower.service 353 352 - rm -f $(DESTDIR)${mandir}/man1/cpupower.1 354 353 - rm -f $(DESTDIR)${mandir}/man1/cpupower-frequency-set.1 355 354 - rm -f $(DESTDIR)${mandir}/man1/cpupower-frequency-info.1
+2 -1
tools/testing/selftests/drivers/net/netdevsim/peer.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0-only 3 3 4 - source ../../../net/lib.sh 4 + lib_dir=$(dirname $0)/../../../net 5 + source $lib_dir/lib.sh 5 6 6 7 NSIM_DEV_1_ID=$((256 + RANDOM % 256)) 7 8 NSIM_DEV_1_SYS=/sys/bus/netdevsim/devices/netdevsim$NSIM_DEV_1_ID
+25 -14
tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
··· 22 22 #include "gic.h" 23 23 #include "vgic.h" 24 24 25 - static const uint64_t CVAL_MAX = ~0ULL; 25 + /* Depends on counter width. */ 26 + static uint64_t CVAL_MAX; 26 27 /* tval is a signed 32-bit int. */ 27 28 static const int32_t TVAL_MAX = INT32_MAX; 28 29 static const int32_t TVAL_MIN = INT32_MIN; ··· 31 30 /* After how much time we say there is no IRQ. */ 32 31 static const uint32_t TIMEOUT_NO_IRQ_US = 50000; 33 32 34 - /* A nice counter value to use as the starting one for most tests. */ 35 - static const uint64_t DEF_CNT = (CVAL_MAX / 2); 33 + /* Counter value to use as the starting one for most tests. Set to CVAL_MAX/2 */ 34 + static uint64_t DEF_CNT; 36 35 37 36 /* Number of runs. */ 38 37 static const uint32_t NR_TEST_ITERS_DEF = 5; ··· 192 191 { 193 192 atomic_set(&shared_data.handled, 0); 194 193 atomic_set(&shared_data.spurious, 0); 195 - timer_set_ctl(timer, ctl); 196 194 timer_set_tval(timer, tval_cycles); 195 + timer_set_ctl(timer, ctl); 197 196 } 198 197 199 198 static void set_xval_irq(enum arch_timer timer, uint64_t xval, uint32_t ctl, ··· 733 732 test_set_cnt_after_tval(timer, 0, tval, (uint64_t) tval + 1, 734 733 wm); 735 734 } 736 - 737 - for (i = 0; i < ARRAY_SIZE(sleep_method); i++) { 738 - sleep_method_t sm = sleep_method[i]; 739 - 740 - test_set_cnt_after_cval_no_irq(timer, 0, DEF_CNT, CVAL_MAX, sm); 741 - } 742 735 } 743 736 744 737 /* ··· 844 849 GUEST_DONE(); 845 850 } 846 851 852 + static cpu_set_t default_cpuset; 853 + 847 854 static uint32_t next_pcpu(void) 848 855 { 849 856 uint32_t max = get_nprocs(); 850 857 uint32_t cur = sched_getcpu(); 851 858 uint32_t next = cur; 852 - cpu_set_t cpuset; 859 + cpu_set_t cpuset = default_cpuset; 853 860 854 861 TEST_ASSERT(max > 1, "Need at least two physical cpus"); 855 - 856 - sched_getaffinity(0, sizeof(cpuset), &cpuset); 857 862 858 863 do { 859 864 next = (next + 1) % CPU_SETSIZE; ··· 970 975 test_init_timer_irq(*vm, *vcpu); 971 976 vgic_v3_setup(*vm, 1, 64); 972 977 
sync_global_to_guest(*vm, test_args); 978 + sync_global_to_guest(*vm, CVAL_MAX); 979 + sync_global_to_guest(*vm, DEF_CNT); 973 980 } 974 981 975 982 static void test_print_help(char *name) ··· 983 986 pr_info("\t-b: Test both physical and virtual timers (default: true)\n"); 984 987 pr_info("\t-l: Delta (in ms) used for long wait time test (default: %u)\n", 985 988 LONG_WAIT_TEST_MS); 986 - pr_info("\t-l: Delta (in ms) used for wait times (default: %u)\n", 989 + pr_info("\t-w: Delta (in ms) used for wait times (default: %u)\n", 987 990 WAIT_TEST_MS); 988 991 pr_info("\t-p: Test physical timer (default: true)\n"); 989 992 pr_info("\t-v: Test virtual timer (default: true)\n"); ··· 1032 1035 return false; 1033 1036 } 1034 1037 1038 + static void set_counter_defaults(void) 1039 + { 1040 + const uint64_t MIN_ROLLOVER_SECS = 40ULL * 365 * 24 * 3600; 1041 + uint64_t freq = read_sysreg(CNTFRQ_EL0); 1042 + uint64_t width = ilog2(MIN_ROLLOVER_SECS * freq); 1043 + 1044 + width = clamp(width, 56, 64); 1045 + CVAL_MAX = GENMASK_ULL(width - 1, 0); 1046 + DEF_CNT = CVAL_MAX / 2; 1047 + } 1048 + 1035 1049 int main(int argc, char *argv[]) 1036 1050 { 1037 1051 struct kvm_vcpu *vcpu; ··· 1053 1045 1054 1046 if (!parse_args(argc, argv)) 1055 1047 exit(KSFT_SKIP); 1048 + 1049 + sched_getaffinity(0, sizeof(default_cpuset), &default_cpuset); 1050 + set_counter_defaults(); 1056 1051 1057 1052 if (test_args.test_virtual) { 1058 1053 test_vm_create(&vm, &vcpu, VIRTUAL);
+6 -1
tools/testing/selftests/mm/gup_longterm.c
··· 298 298 log_test_start("%s ... with memfd", desc); 299 299 300 300 fd = memfd_create("test", 0); 301 - if (fd < 0) 301 + if (fd < 0) { 302 302 ksft_print_msg("memfd_create() failed (%s)\n", strerror(errno)); 303 + log_test_result(KSFT_SKIP); 304 + return; 305 + } 303 306 304 307 fn(fd, pagesize); 305 308 close(fd); ··· 369 366 fd = memfd_create("test", flags); 370 367 if (fd < 0) { 371 368 ksft_print_msg("memfd_create() failed (%s)\n", strerror(errno)); 369 + log_test_result(KSFT_SKIP); 370 + return; 372 371 } 373 372 374 373 fn(fd, hugetlbsize);
+1
tools/testing/selftests/net/.gitignore
··· 50 50 tcp_fastopen_backup_key 51 51 tcp_inq 52 52 tcp_mmap 53 + tfo 53 54 timestamping 54 55 tls 55 56 toeplitz
+2
tools/testing/selftests/net/Makefile
··· 111 111 TEST_PROGS += lwt_dst_cache_ref_loop.sh 112 112 TEST_PROGS += skf_net_off.sh 113 113 TEST_GEN_FILES += skf_net_off 114 + TEST_GEN_FILES += tfo 115 + TEST_PROGS += tfo_passive.sh 114 116 115 117 # YNL files, must be before "include ..lib.mk" 116 118 YNL_GEN_FILES := busy_poller netlink-dumps
+171
tools/testing/selftests/net/tfo.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <error.h> 3 + #include <fcntl.h> 4 + #include <limits.h> 5 + #include <stdbool.h> 6 + #include <stdint.h> 7 + #include <stdio.h> 8 + #include <stdlib.h> 9 + #include <string.h> 10 + #include <unistd.h> 11 + #include <arpa/inet.h> 12 + #include <sys/socket.h> 13 + #include <netinet/tcp.h> 14 + #include <errno.h> 15 + 16 + static int cfg_server; 17 + static int cfg_client; 18 + static int cfg_port = 8000; 19 + static struct sockaddr_in6 cfg_addr; 20 + static char *cfg_outfile; 21 + 22 + static int parse_address(const char *str, int port, struct sockaddr_in6 *sin6) 23 + { 24 + int ret; 25 + 26 + sin6->sin6_family = AF_INET6; 27 + sin6->sin6_port = htons(port); 28 + 29 + ret = inet_pton(sin6->sin6_family, str, &sin6->sin6_addr); 30 + if (ret != 1) { 31 + /* fallback to plain IPv4 */ 32 + ret = inet_pton(AF_INET, str, &sin6->sin6_addr.s6_addr32[3]); 33 + if (ret != 1) 34 + return -1; 35 + 36 + /* add ::ffff prefix */ 37 + sin6->sin6_addr.s6_addr32[0] = 0; 38 + sin6->sin6_addr.s6_addr32[1] = 0; 39 + sin6->sin6_addr.s6_addr16[4] = 0; 40 + sin6->sin6_addr.s6_addr16[5] = 0xffff; 41 + } 42 + 43 + return 0; 44 + } 45 + 46 + static void run_server(void) 47 + { 48 + unsigned long qlen = 32; 49 + int fd, opt, connfd; 50 + socklen_t len; 51 + char buf[64]; 52 + FILE *outfile; 53 + 54 + outfile = fopen(cfg_outfile, "w"); 55 + if (!outfile) 56 + error(1, errno, "fopen() outfile"); 57 + 58 + fd = socket(AF_INET6, SOCK_STREAM, 0); 59 + if (fd == -1) 60 + error(1, errno, "socket()"); 61 + 62 + opt = 1; 63 + if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt)) < 0) 64 + error(1, errno, "setsockopt(SO_REUSEADDR)"); 65 + 66 + if (setsockopt(fd, SOL_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen)) < 0) 67 + error(1, errno, "setsockopt(TCP_FASTOPEN)"); 68 + 69 + if (bind(fd, (struct sockaddr *)&cfg_addr, sizeof(cfg_addr)) < 0) 70 + error(1, errno, "bind()"); 71 + 72 + if (listen(fd, 5) < 0) 73 + error(1, errno, "listen()"); 74 + 75 
+ len = sizeof(cfg_addr); 76 + connfd = accept(fd, (struct sockaddr *)&cfg_addr, &len); 77 + if (connfd < 0) 78 + error(1, errno, "accept()"); 79 + 80 + len = sizeof(opt); 81 + if (getsockopt(connfd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &opt, &len) < 0) 82 + error(1, errno, "getsockopt(SO_INCOMING_NAPI_ID)"); 83 + 84 + read(connfd, buf, 64); 85 + fprintf(outfile, "%d\n", opt); 86 + 87 + fclose(outfile); 88 + close(connfd); 89 + close(fd); 90 + } 91 + 92 + static void run_client(void) 93 + { 94 + int fd; 95 + char *msg = "Hello, world!"; 96 + 97 + fd = socket(AF_INET6, SOCK_STREAM, 0); 98 + if (fd == -1) 99 + error(1, errno, "socket()"); 100 + 101 + sendto(fd, msg, strlen(msg), MSG_FASTOPEN, (struct sockaddr *)&cfg_addr, sizeof(cfg_addr)); 102 + 103 + close(fd); 104 + } 105 + 106 + static void usage(const char *filepath) 107 + { 108 + error(1, 0, "Usage: %s (-s|-c) -h<server_ip> -p<port> -o<outfile> ", filepath); 109 + } 110 + 111 + static void parse_opts(int argc, char **argv) 112 + { 113 + struct sockaddr_in6 *addr6 = (void *) &cfg_addr; 114 + char *addr = NULL; 115 + int ret; 116 + int c; 117 + 118 + if (argc <= 1) 119 + usage(argv[0]); 120 + 121 + while ((c = getopt(argc, argv, "sch:p:o:")) != -1) { 122 + switch (c) { 123 + case 's': 124 + if (cfg_client) 125 + error(1, 0, "Pass one of -s or -c"); 126 + cfg_server = 1; 127 + break; 128 + case 'c': 129 + if (cfg_server) 130 + error(1, 0, "Pass one of -s or -c"); 131 + cfg_client = 1; 132 + break; 133 + case 'h': 134 + addr = optarg; 135 + break; 136 + case 'p': 137 + cfg_port = strtoul(optarg, NULL, 0); 138 + break; 139 + case 'o': 140 + cfg_outfile = strdup(optarg); 141 + if (!cfg_outfile) 142 + error(1, 0, "outfile invalid"); 143 + break; 144 + } 145 + } 146 + 147 + if (cfg_server && addr) 148 + error(1, 0, "Server cannot have -h specified"); 149 + 150 + memset(addr6, 0, sizeof(*addr6)); 151 + addr6->sin6_family = AF_INET6; 152 + addr6->sin6_port = htons(cfg_port); 153 + addr6->sin6_addr = in6addr_any; 154 + if 
(addr) { 155 + ret = parse_address(addr, cfg_port, addr6); 156 + if (ret) 157 + error(1, 0, "Client address parse error: %s", addr); 158 + } 159 + } 160 + 161 + int main(int argc, char **argv) 162 + { 163 + parse_opts(argc, argv); 164 + 165 + if (cfg_server) 166 + run_server(); 167 + else if (cfg_client) 168 + run_client(); 169 + 170 + return 0; 171 + }
+112
tools/testing/selftests/net/tfo_passive.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + source lib.sh 4 + 5 + NSIM_SV_ID=$((256 + RANDOM % 256)) 6 + NSIM_SV_SYS=/sys/bus/netdevsim/devices/netdevsim$NSIM_SV_ID 7 + NSIM_CL_ID=$((512 + RANDOM % 256)) 8 + NSIM_CL_SYS=/sys/bus/netdevsim/devices/netdevsim$NSIM_CL_ID 9 + 10 + NSIM_DEV_SYS_NEW=/sys/bus/netdevsim/new_device 11 + NSIM_DEV_SYS_DEL=/sys/bus/netdevsim/del_device 12 + NSIM_DEV_SYS_LINK=/sys/bus/netdevsim/link_device 13 + NSIM_DEV_SYS_UNLINK=/sys/bus/netdevsim/unlink_device 14 + 15 + SERVER_IP=192.168.1.1 16 + CLIENT_IP=192.168.1.2 17 + SERVER_PORT=48675 18 + 19 + setup_ns() 20 + { 21 + set -e 22 + ip netns add nssv 23 + ip netns add nscl 24 + 25 + NSIM_SV_NAME=$(find $NSIM_SV_SYS/net -maxdepth 1 -type d ! \ 26 + -path $NSIM_SV_SYS/net -exec basename {} \;) 27 + NSIM_CL_NAME=$(find $NSIM_CL_SYS/net -maxdepth 1 -type d ! \ 28 + -path $NSIM_CL_SYS/net -exec basename {} \;) 29 + 30 + ip link set $NSIM_SV_NAME netns nssv 31 + ip link set $NSIM_CL_NAME netns nscl 32 + 33 + ip netns exec nssv ip addr add "${SERVER_IP}/24" dev $NSIM_SV_NAME 34 + ip netns exec nscl ip addr add "${CLIENT_IP}/24" dev $NSIM_CL_NAME 35 + 36 + ip netns exec nssv ip link set dev $NSIM_SV_NAME up 37 + ip netns exec nscl ip link set dev $NSIM_CL_NAME up 38 + 39 + # Enable passive TFO 40 + ip netns exec nssv sysctl -w net.ipv4.tcp_fastopen=519 > /dev/null 41 + 42 + set +e 43 + } 44 + 45 + cleanup_ns() 46 + { 47 + ip netns del nscl 48 + ip netns del nssv 49 + } 50 + 51 + ### 52 + ### Code start 53 + ### 54 + 55 + modprobe netdevsim 56 + 57 + # linking 58 + 59 + echo $NSIM_SV_ID > $NSIM_DEV_SYS_NEW 60 + echo $NSIM_CL_ID > $NSIM_DEV_SYS_NEW 61 + udevadm settle 62 + 63 + setup_ns 64 + 65 + NSIM_SV_FD=$((256 + RANDOM % 256)) 66 + exec {NSIM_SV_FD}</var/run/netns/nssv 67 + NSIM_SV_IFIDX=$(ip netns exec nssv cat /sys/class/net/$NSIM_SV_NAME/ifindex) 68 + 69 + NSIM_CL_FD=$((256 + RANDOM % 256)) 70 + exec {NSIM_CL_FD}</var/run/netns/nscl 71 + NSIM_CL_IFIDX=$(ip netns exec nscl 
cat /sys/class/net/$NSIM_CL_NAME/ifindex) 72 + 73 + echo "$NSIM_SV_FD:$NSIM_SV_IFIDX $NSIM_CL_FD:$NSIM_CL_IFIDX" > \ 74 + $NSIM_DEV_SYS_LINK 75 + 76 + if [ $? -ne 0 ]; then 77 + echo "linking netdevsim1 with netdevsim2 should succeed" 78 + cleanup_ns 79 + exit 1 80 + fi 81 + 82 + out_file=$(mktemp) 83 + 84 + timeout -k 1s 30s ip netns exec nssv ./tfo \ 85 + -s \ 86 + -p ${SERVER_PORT} \ 87 + -o ${out_file}& 88 + 89 + wait_local_port_listen nssv ${SERVER_PORT} tcp 90 + 91 + ip netns exec nscl ./tfo -c -h ${SERVER_IP} -p ${SERVER_PORT} 92 + 93 + wait 94 + 95 + res=$(cat $out_file) 96 + rm $out_file 97 + 98 + if [ $res -eq 0 ]; then 99 + echo "got invalid NAPI ID from passive TFO socket" 100 + cleanup_ns 101 + exit 1 102 + fi 103 + 104 + echo "$NSIM_SV_FD:$NSIM_SV_IFIDX" > $NSIM_DEV_SYS_UNLINK 105 + 106 + echo $NSIM_CL_ID > $NSIM_DEV_SYS_DEL 107 + 108 + cleanup_ns 109 + 110 + modprobe -r netdevsim 111 + 112 + exit 0
+1 -1
tools/testing/selftests/x86/Makefile
··· 12 12 13 13 TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt test_mremap_vdso \ 14 14 check_initial_reg_state sigreturn iopl ioperm \ 15 - test_vsyscall mov_ss_trap \ 15 + test_vsyscall mov_ss_trap sigtrap_loop \ 16 16 syscall_arg_fault fsgsbase_restore sigaltstack 17 17 TARGETS_C_BOTHBITS += nx_stack 18 18 TARGETS_C_32BIT_ONLY := entry_from_vm86 test_syscall_vdso unwind_vdso \
+101
tools/testing/selftests/x86/sigtrap_loop.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (C) 2025 Intel Corporation 4 + */ 5 + #define _GNU_SOURCE 6 + 7 + #include <err.h> 8 + #include <signal.h> 9 + #include <stdio.h> 10 + #include <stdlib.h> 11 + #include <string.h> 12 + #include <sys/ucontext.h> 13 + 14 + #ifdef __x86_64__ 15 + # define REG_IP REG_RIP 16 + #else 17 + # define REG_IP REG_EIP 18 + #endif 19 + 20 + static void sethandler(int sig, void (*handler)(int, siginfo_t *, void *), int flags) 21 + { 22 + struct sigaction sa; 23 + 24 + memset(&sa, 0, sizeof(sa)); 25 + sa.sa_sigaction = handler; 26 + sa.sa_flags = SA_SIGINFO | flags; 27 + sigemptyset(&sa.sa_mask); 28 + 29 + if (sigaction(sig, &sa, 0)) 30 + err(1, "sigaction"); 31 + 32 + return; 33 + } 34 + 35 + static void sigtrap(int sig, siginfo_t *info, void *ctx_void) 36 + { 37 + ucontext_t *ctx = (ucontext_t *)ctx_void; 38 + static unsigned int loop_count_on_same_ip; 39 + static unsigned long last_trap_ip; 40 + 41 + if (last_trap_ip == ctx->uc_mcontext.gregs[REG_IP]) { 42 + printf("\tTrapped at %016lx\n", last_trap_ip); 43 + 44 + /* 45 + * If the same IP is hit more than 10 times in a row, it is 46 + * _considered_ an infinite loop. 47 + */ 48 + if (++loop_count_on_same_ip > 10) { 49 + printf("[FAIL]\tDetected SIGTRAP infinite loop\n"); 50 + exit(1); 51 + } 52 + 53 + return; 54 + } 55 + 56 + loop_count_on_same_ip = 0; 57 + last_trap_ip = ctx->uc_mcontext.gregs[REG_IP]; 58 + printf("\tTrapped at %016lx\n", last_trap_ip); 59 + } 60 + 61 + int main(int argc, char *argv[]) 62 + { 63 + sethandler(SIGTRAP, sigtrap, 0); 64 + 65 + /* 66 + * Set the Trap Flag (TF) to single-step the test code, therefore to 67 + * trigger a SIGTRAP signal after each instruction until the TF is 68 + * cleared. 69 + * 70 + * Because the arithmetic flags are not significant here, the TF is 71 + * set by pushing 0x302 onto the stack and then popping it into the 72 + * flags register. 
73 + * 74 + * Four instructions in the following asm code are executed with the 75 + * TF set, thus the SIGTRAP handler is expected to run four times. 76 + */ 77 + printf("[RUN]\tSIGTRAP infinite loop detection\n"); 78 + asm volatile( 79 + #ifdef __x86_64__ 80 + /* 81 + * Avoid clobbering the redzone 82 + * 83 + * Equivalent to "sub $128, %rsp", however -128 can be encoded 84 + * in a single byte immediate while 128 uses 4 bytes. 85 + */ 86 + "add $-128, %rsp\n\t" 87 + #endif 88 + "push $0x302\n\t" 89 + "popf\n\t" 90 + "nop\n\t" 91 + "nop\n\t" 92 + "push $0x202\n\t" 93 + "popf\n\t" 94 + #ifdef __x86_64__ 95 + "sub $-128, %rsp\n\t" 96 + #endif 97 + ); 98 + 99 + printf("[OK]\tNo SIGTRAP infinite loop detected\n"); 100 + return 0; 101 + }
+16
tools/testing/vma/vma_internal.h
··· 159 159 160 160 #define ASSERT_EXCLUSIVE_WRITER(x) 161 161 162 + /** 163 + * swap - swap values of @a and @b 164 + * @a: first value 165 + * @b: second value 166 + */ 167 + #define swap(a, b) \ 168 + do { typeof(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0) 169 + 162 170 struct kref { 163 171 refcount_t refcount; 164 172 }; ··· 1474 1466 static inline void fixup_hugetlb_reservations(struct vm_area_struct *vma) 1475 1467 { 1476 1468 (void)vma; 1469 + } 1470 + 1471 + static inline void vma_set_file(struct vm_area_struct *vma, struct file *file) 1472 + { 1473 + /* Changing an anonymous vma with this is illegal */ 1474 + get_file(file); 1475 + swap(vma->vm_file, file); 1476 + fput(file); 1477 1477 } 1478 1478 1479 1479 #endif /* __MM_VMA_INTERNAL_H */