Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

No conflicts and no adjacent changes.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>

+4770 -2558
+7
.mailmap
···
 Andrzej Hajda <andrzej.hajda@intel.com> <a.hajda@samsung.com>
 André Almeida <andrealmeid@igalia.com> <andrealmeid@collabora.com>
 Andy Adamson <andros@citi.umich.edu>
+Andy Chiu <andybnac@gmail.com> <andy.chiu@sifive.com>
+Andy Chiu <andybnac@gmail.com> <taochiu@synology.com>
 Andy Shevchenko <andy@kernel.org> <andy@smile.org.ua>
 Andy Shevchenko <andy@kernel.org> <ext-andriy.shevchenko@nokia.com>
 Anilkumar Kolli <quic_akolli@quicinc.com> <akolli@codeaurora.org>
···
 Jens Axboe <axboe@kernel.dk> <axboe@meta.com>
 Jens Osterkamp <Jens.Osterkamp@de.ibm.com>
 Jernej Skrabec <jernej.skrabec@gmail.com> <jernej.skrabec@siol.net>
+Jesper Dangaard Brouer <hawk@kernel.org> <brouer@redhat.com>
+Jesper Dangaard Brouer <hawk@kernel.org> <hawk@comx.dk>
+Jesper Dangaard Brouer <hawk@kernel.org> <jbrouer@redhat.com>
+Jesper Dangaard Brouer <hawk@kernel.org> <jdb@comx.dk>
+Jesper Dangaard Brouer <hawk@kernel.org> <netoptimizer@brouer.com>
 Jessica Zhang <quic_jesszhan@quicinc.com> <jesszhan@codeaurora.org>
 Jilai Wang <quic_jilaiw@quicinc.com> <jilaiw@codeaurora.org>
 Jiri Kosina <jikos@kernel.org> <jikos@jikos.cz>
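(For context: .mailmap entries like the ones added above are consumed by git's mailmap machinery. A quick way to check that an entry resolves as intended is `git check-mailmap`; the sketch below builds a throwaway repo rather than assuming a kernel checkout.)

```shell
# Sketch: verify a .mailmap entry resolves via git's own machinery.
# The temporary repo stands in for a kernel tree containing the entry.
tmp=$(mktemp -d)
cd "$tmp"
git init --quiet .
printf '%s\n' 'Jesper Dangaard Brouer <hawk@kernel.org> <brouer@redhat.com>' > .mailmap
git check-mailmap 'Jesper Dangaard Brouer <brouer@redhat.com>'
# prints: Jesper Dangaard Brouer <hawk@kernel.org>
```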
+5 -2
Documentation/admin-guide/LSM/ipe.rst
···
 authorization of the policies (prohibiting an attacker from gaining
 unconstrained root, and deploying an "allow all" policy). These
 policies must be signed by a certificate that chains to the
-``SYSTEM_TRUSTED_KEYRING``. With openssl, the policy can be signed by::
+``SYSTEM_TRUSTED_KEYRING``, or to the secondary and/or platform keyrings if
+``CONFIG_IPE_POLICY_SIG_SECONDARY_KEYRING`` and/or
+``CONFIG_IPE_POLICY_SIG_PLATFORM_KEYRING`` are enabled, respectively.
+With openssl, the policy can be signed by::
 
    openssl smime -sign \
       -in "$MY_POLICY" \
···
 policy. Two checks will always be performed on this policy: First, the
 ``policy_names`` must match with the updated version and the existing
 version. Second the updated policy must have a policy version greater than
-or equal to the currently-running version. This is to prevent rollback attacks.
+the currently-running version. This is to prevent rollback attacks.
 
 The ``delete`` file is used to remove a policy that is no longer needed.
 This file is write-only and accepts a value of ``1`` to delete the policy.
+30 -8
Documentation/core-api/protection-keys.rst
···
 * Intel server CPUs, Skylake and later
 * Intel client CPUs, Tiger Lake (11th Gen Core) and later
 * Future AMD CPUs
+* arm64 CPUs implementing the Permission Overlay Extension (FEAT_S1POE)
 
+x86_64
+======
 Pkeys work by dedicating 4 previously Reserved bits in each page table entry to
 a "protection key", giving 16 possible keys.
···
 theoretically space in the PAE PTEs. These permissions are enforced on data
 access only and have no effect on instruction fetches.
 
+arm64
+=====
+
+Pkeys use 3 bits in each page table entry, to encode a "protection key index",
+giving 8 possible keys.
+
+Protections for each key are defined with a per-CPU user-writable system
+register (POR_EL0). This is a 64-bit register encoding read, write and execute
+overlay permissions for each protection key index.
+
+Being a CPU register, POR_EL0 is inherently thread-local, potentially giving
+each thread a different set of protections from every other thread.
+
+Unlike x86_64, the protection key permissions also apply to instruction
+fetches.
+
 Syscalls
 ========
···
         int pkey_mprotect(unsigned long start, size_t len,
                           unsigned long prot, int pkey);
 
-Before a pkey can be used, it must first be allocated with
-pkey_alloc(). An application calls the WRPKRU instruction
-directly in order to change access permissions to memory covered
-with a key. In this example WRPKRU is wrapped by a C function
-called pkey_set().
+Before a pkey can be used, it must first be allocated with pkey_alloc(). An
+application writes to the architecture specific CPU register directly in order
+to change access permissions to memory covered with a key. In this example
+this is wrapped by a C function called pkey_set().
 
 ::
 
         int real_prot = PROT_READ|PROT_WRITE;
···
         munmap(ptr, PAGE_SIZE);
         pkey_free(pkey);
 
-.. note:: pkey_set() is a wrapper for the RDPKRU and WRPKRU instructions.
-          An example implementation can be found in
-          tools/testing/selftests/x86/protection_keys.c.
+.. note:: pkey_set() is a wrapper around writing to the CPU register.
+          Example implementations can be found in
+          tools/testing/selftests/mm/pkey-{arm64,powerpc,x86}.h
 
 Behavior
 ========
···
 The kernel will send a SIGSEGV in both cases, but si_code will be set
 to SEGV_PKERR when violating protection keys versus SEGV_ACCERR when
 the plain mprotect() permissions are violated.
+
+Note that kernel accesses from a kthread (such as io_uring) will use a default
+value for the protection key register and so will not be consistent with
+userspace's value of the register or mprotect().
+17 -36
Documentation/devicetree/bindings/iio/dac/adi,ad5686.yaml
···
 $id: http://devicetree.org/schemas/iio/dac/adi,ad5686.yaml#
 $schema: http://devicetree.org/meta-schemas/core.yaml#
 
-title: Analog Devices AD5360 and similar DACs
+title: Analog Devices AD5360 and similar SPI DACs
 
 maintainers:
   - Michael Hennerich <michael.hennerich@analog.com>
···
 
 properties:
   compatible:
-    oneOf:
-      - description: SPI devices
-        enum:
-          - adi,ad5310r
-          - adi,ad5672r
-          - adi,ad5674r
-          - adi,ad5676
-          - adi,ad5676r
-          - adi,ad5679r
-          - adi,ad5681r
-          - adi,ad5682r
-          - adi,ad5683
-          - adi,ad5683r
-          - adi,ad5684
-          - adi,ad5684r
-          - adi,ad5685r
-          - adi,ad5686
-          - adi,ad5686r
-      - description: I2C devices
-        enum:
-          - adi,ad5311r
-          - adi,ad5337r
-          - adi,ad5338r
-          - adi,ad5671r
-          - adi,ad5675r
-          - adi,ad5691r
-          - adi,ad5692r
-          - adi,ad5693
-          - adi,ad5693r
-          - adi,ad5694
-          - adi,ad5694r
-          - adi,ad5695r
-          - adi,ad5696
-          - adi,ad5696r
-
+    enum:
+      - adi,ad5310r
+      - adi,ad5672r
+      - adi,ad5674r
+      - adi,ad5676
+      - adi,ad5676r
+      - adi,ad5679r
+      - adi,ad5681r
+      - adi,ad5682r
+      - adi,ad5683
+      - adi,ad5683r
+      - adi,ad5684
+      - adi,ad5684r
+      - adi,ad5685r
+      - adi,ad5686
+      - adi,ad5686r
 
   reg:
     maxItems: 1
+2 -1
Documentation/devicetree/bindings/iio/dac/adi,ad5696.yaml
···
 $id: http://devicetree.org/schemas/iio/dac/adi,ad5696.yaml#
 $schema: http://devicetree.org/meta-schemas/core.yaml#
 
-title: Analog Devices AD5696 and similar multi-channel DACs
+title: Analog Devices AD5696 and similar I2C multi-channel DACs
 
 maintainers:
   - Michael Auchter <michael.auchter@ni.com>
···
   compatible:
     enum:
       - adi,ad5311r
+      - adi,ad5337r
       - adi,ad5338r
       - adi,ad5671r
       - adi,ad5675r
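(A hypothetical devicetree node using the `adi,ad5337r` compatible added above; the unit address and the regulator phandle are illustrative, not taken from the patch.)

```dts
/* Hypothetical example node for this I2C DAC binding. */
i2c {
    #address-cells = <1>;
    #size-cells = <0>;

    dac@c {
        compatible = "adi,ad5337r";
        reg = <0xc>;              /* illustrative bus address */
        vcc-supply = <&dac_vdd>;  /* illustrative supply phandle */
    };
};
```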
+1 -1
Documentation/filesystems/iomap/operations.rst
···
 such `reservations
 <https://lore.kernel.org/linux-xfs/20220817093627.GZ3600936@dread.disaster.area/>`_
 because writeback will not consume the reservation.
-The ``iomap_file_buffered_write_punch_delalloc`` can be called from a
+The ``iomap_write_delalloc_release`` can be called from a
 ``->iomap_end`` function to find all the clean areas of the folios
 caching a fresh (``IOMAP_F_NEW``) delalloc mapping.
 It takes the ``invalidate_lock``.
-1
Documentation/filesystems/netfs_library.rst
···
 
 .. kernel-doc:: include/linux/netfs.h
 .. kernel-doc:: fs/netfs/buffered_read.c
-.. kernel-doc:: fs/netfs/io.c
+19 -19
Documentation/mm/damon/maintainer-profile.rst
···
 section of 'MAINTAINERS' file.
 
 The mailing lists for the subsystem are damon@lists.linux.dev and
-linux-mm@kvack.org. Patches should be made against the mm-unstable `tree
-<https://git.kernel.org/akpm/mm/h/mm-unstable>` whenever possible and posted to
-the mailing lists.
+linux-mm@kvack.org. Patches should be made against the `mm-unstable tree
+<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ whenever possible and posted
+to the mailing lists.
 
 SCM Trees
 ---------
 
 There are multiple Linux trees for DAMON development. Patches under
 development or testing are queued in `damon/next
-<https://git.kernel.org/sj/h/damon/next>` by the DAMON maintainer.
+<https://git.kernel.org/sj/h/damon/next>`_ by the DAMON maintainer.
 Sufficiently reviewed patches will be queued in `mm-unstable
-<https://git.kernel.org/akpm/mm/h/mm-unstable>` by the memory management
+<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ by the memory management
 subsystem maintainer. After more sufficient tests, the patches will be queued
-in `mm-stable <https://git.kernel.org/akpm/mm/h/mm-stable>` , and finally
+in `mm-stable <https://git.kernel.org/akpm/mm/h/mm-stable>`_, and finally
 pull-requested to the mainline by the memory management subsystem maintainer.
 
-Note again the patches for mm-unstable `tree
-<https://git.kernel.org/akpm/mm/h/mm-unstable>` are queued by the memory
+Note again the patches for `mm-unstable tree
+<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ are queued by the memory
 management subsystem maintainer. If the patches requires some patches in
-damon/next `tree <https://git.kernel.org/sj/h/damon/next>` which not yet merged
+`damon/next tree <https://git.kernel.org/sj/h/damon/next>`_ which not yet merged
 in mm-unstable, please make sure the requirement is clearly specified.
 
 Submit checklist addendum
···
 - Build changes related outputs including kernel and documents.
 - Ensure the builds introduce no new errors or warnings.
 - Run and ensure no new failures for DAMON `selftests
-  <https://github.com/awslabs/damon-tests/blob/master/corr/run.sh#L49>` and
+  <https://github.com/damonitor/damon-tests/blob/master/corr/run.sh#L49>`_ and
   `kunittests
-  <https://github.com/awslabs/damon-tests/blob/master/corr/tests/kunit.sh>`.
+  <https://github.com/damonitor/damon-tests/blob/master/corr/tests/kunit.sh>`_.
 
 Further doing below and putting the results will be helpful.
 
 - Run `damon-tests/corr
-  <https://github.com/awslabs/damon-tests/tree/master/corr>` for normal
+  <https://github.com/damonitor/damon-tests/tree/master/corr>`_ for normal
   changes.
 - Run `damon-tests/perf
-  <https://github.com/awslabs/damon-tests/tree/master/perf>` for performance
+  <https://github.com/damonitor/damon-tests/tree/master/perf>`_ for performance
   changes.
 
 Key cycle dates
 ---------------
 
 Patches can be sent anytime. Key cycle dates of the `mm-unstable
-<https://git.kernel.org/akpm/mm/h/mm-unstable>` and `mm-stable
-<https://git.kernel.org/akpm/mm/h/mm-stable>` trees depend on the memory
+<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ and `mm-stable
+<https://git.kernel.org/akpm/mm/h/mm-stable>`_ trees depend on the memory
 management subsystem maintainer.
 
 Review cadence
···
 Like many other Linux kernel subsystems, DAMON uses the mailing lists
 (damon@lists.linux.dev and linux-mm@kvack.org) as the major communication
 channel. There is a simple tool called `HacKerMaiL
-<https://github.com/damonitor/hackermail>` (``hkml``), which is for people who
+<https://github.com/damonitor/hackermail>`_ (``hkml``), which is for people who
 are not very familiar with the mailing lists based communication. The tool
 could be particularly helpful for DAMON community members since it is developed
 and maintained by DAMON maintainer. The tool is also officially announced to
 support DAMON and general Linux kernel development workflow.
 
-In other words, `hkml <https://github.com/damonitor/hackermail>` is a mailing
+In other words, `hkml <https://github.com/damonitor/hackermail>`_ is a mailing
 tool for DAMON community, which DAMON maintainer is committed to support.
 Please feel free to try and report issues or feature requests for the tool to
 the maintainer.
···
 time slot, by reaching out to the maintainer.
 
 Schedules and available reservation time slots are available at the Google `doc
-<https://docs.google.com/document/d/1v43Kcj3ly4CYqmAkMaZzLiM2GEnWfgdGbZAH3mi2vpM/edit?usp=sharing>`.
+<https://docs.google.com/document/d/1v43Kcj3ly4CYqmAkMaZzLiM2GEnWfgdGbZAH3mi2vpM/edit?usp=sharing>`_.
 There is also a public Google `calendar
-<https://calendar.google.com/calendar/u/0?cid=ZDIwOTA4YTMxNjc2MDQ3NTIyMmUzYTM5ZmQyM2U4NDA0ZGIwZjBiYmJlZGQxNDM0MmY4ZTRjOTE0NjdhZDRiY0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t>`
+<https://calendar.google.com/calendar/u/0?cid=ZDIwOTA4YTMxNjc2MDQ3NTIyMmUzYTM5ZmQyM2U4NDA0ZGIwZjBiYmJlZGQxNDM0MmY4ZTRjOTE0NjdhZDRiY0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t>`_
 that has the events. Anyone can subscribe it. DAMON maintainer will also
 provide periodic reminder to the mailing list (damon@lists.linux.dev).
+37 -5
Documentation/process/maintainer-soc.rst
···
 The main SoC tree is housed on git.kernel.org:
   https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc.git/
 
+Maintainers
+-----------
+
 Clearly this is quite a wide range of topics, which no one person, or even
 small group of people are capable of maintaining. Instead, the SoC subsystem
-is comprised of many submaintainers, each taking care of individual platforms
-and driver subdirectories.
+is comprised of many submaintainers (platform maintainers), each taking care of
+individual platforms and driver subdirectories.
 In this regard, "platform" usually refers to a series of SoCs from a given
 vendor, for example, Nvidia's series of Tegra SoCs. Many submaintainers operate
 on a vendor level, responsible for multiple product lines. For several reasons,
···
 
 Most of these submaintainers have their own trees where they stage patches,
 sending pull requests to the main SoC tree. These trees are usually, but not
-always, listed in MAINTAINERS. The main SoC maintainers can be reached via the
-alias soc@kernel.org if there is no platform-specific maintainer, or if they
-are unresponsive.
+always, listed in MAINTAINERS.
 
 What the SoC tree is not, however, is a location for architecture-specific code
 changes. Each architecture has its own maintainers that are responsible for
 architectural details, CPU errata and the like.
+
+Submitting Patches for Given SoC
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+All typical platform related patches should be sent via SoC submaintainers
+(platform-specific maintainers). This includes also changes to per-platform or
+shared defconfigs (scripts/get_maintainer.pl might not provide correct
+addresses in such case).
+
+Submitting Patches to the Main SoC Maintainers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The main SoC maintainers can be reached via the alias soc@kernel.org only in
+following cases:
+
+1. There are no platform-specific maintainers.
+
+2. Platform-specific maintainers are unresponsive.
+
+3. Introducing a completely new SoC platform. Such new SoC work should be sent
+   first to common mailing lists, pointed out by scripts/get_maintainer.pl, for
+   community review. After positive community review, work should be sent to
+   soc@kernel.org in one patchset containing new arch/foo/Kconfig entry, DTS
+   files, MAINTAINERS file entry and optionally initial drivers with their
+   Devicetree bindings. The MAINTAINERS file entry should list new
+   platform-specific maintainers, who are going to be responsible for handling
+   patches for the platform from now on.
+
+Note that the soc@kernel.org is usually not the place to discuss the patches,
+thus work sent to this address should be already considered as acceptable by
+the community.
 
 Information for (new) Submaintainers
 ------------------------------------
+9 -7
Documentation/virt/kvm/api.rst
···
                                             KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT is
                                             disabled.
 
-KVM_X86_QUIRK_SLOT_ZAP_ALL                  By default, KVM invalidates all SPTEs in
-                                            fast way for memslot deletion when VM type
-                                            is KVM_X86_DEFAULT_VM.
-                                            When this quirk is disabled or when VM type
-                                            is other than KVM_X86_DEFAULT_VM, KVM zaps
-                                            only leaf SPTEs that are within the range of
-                                            the memslot being deleted.
+KVM_X86_QUIRK_SLOT_ZAP_ALL                  By default, for KVM_X86_DEFAULT_VM VMs, KVM
+                                            invalidates all SPTEs in all memslots and
+                                            address spaces when a memslot is deleted or
+                                            moved. When this quirk is disabled (or the
+                                            VM type isn't KVM_X86_DEFAULT_VM), KVM only
+                                            ensures the backing memory of the deleted
+                                            or moved memslot isn't reachable, i.e KVM
+                                            _may_ invalidate only SPTEs related to the
+                                            memslot.
 =================================== ============================================
 
 7.32 KVM_CAP_MAX_VCPU_ID
+1 -1
Documentation/virt/kvm/locking.rst
···
 to gfn. For indirect sp, we disabled fast page fault for simplicity.
 
 A solution for indirect sp could be to pin the gfn, for example via
-kvm_vcpu_gfn_to_pfn_atomic, before the cmpxchg. After the pinning:
+gfn_to_pfn_memslot_atomic, before the cmpxchg. After the pinning:
 
 - We have held the refcount of pfn; that means the pfn can not be freed and
   be reused for another gfn.
+33 -185
MAINTAINERS
···
 S:	Maintained
 F:	drivers/net/ethernet/alteon/acenic*
 
-ACER ASPIRE 1 EMBEDDED CONTROLLER DRIVER
-M:	Nikita Travkin <nikita@trvn.ru>
-S:	Maintained
-F:	Documentation/devicetree/bindings/platform/acer,aspire1-ec.yaml
-F:	drivers/platform/arm64/acer-aspire1-ec.c
-
 ACER ASPIRE ONE TEMPERATURE AND FAN DRIVER
 M:	Peter Kaestle <peter@piie.net>
 L:	platform-driver-x86@vger.kernel.org
···
 
 ALPHA PORT
 M:	Richard Henderson <richard.henderson@linaro.org>
-M:	Ivan Kokshaysky <ink@jurassic.park.msu.ru>
 M:	Matt Turner <mattst88@gmail.com>
 L:	linux-alpha@vger.kernel.org
 S:	Odd Fixes
···
 ARM AND ARM64 SoC SUB-ARCHITECTURES (COMMON PARTS)
 M:	Arnd Bergmann <arnd@arndb.de>
 M:	Olof Johansson <olof@lixom.net>
-M:	soc@kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+L:	soc@lists.linux.dev
 S:	Maintained
 P:	Documentation/process/maintainer-soc.rst
 C:	irc://irc.libera.chat/armlinux
···
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	arch/arm/mach-ep93xx/ts72xx.c
-
-ARM/CIRRUS LOGIC CLPS711X ARM ARCHITECTURE
-M:	Alexander Shiyan <shc_work@mail.ru>
-L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-S:	Odd Fixes
-N:	clps711x
 
 ARM/CIRRUS LOGIC EP93XX ARM ARCHITECTURE
 M:	Hartley Sweeten <hsweeten@visionengravers.com>
···
 F:	drivers/video/backlight/
 F:	include/linux/backlight.h
 F:	include/linux/pwm_backlight.h
-
-BAIKAL-T1 PVT HARDWARE MONITOR DRIVER
-M:	Serge Semin <fancer.lancer@gmail.com>
-L:	linux-hwmon@vger.kernel.org
-S:	Supported
-F:	Documentation/devicetree/bindings/hwmon/baikal,bt1-pvt.yaml
-F:	Documentation/hwmon/bt1-pvt.rst
-F:	drivers/hwmon/bt1-pvt.[ch]
 
 BARCO P50 GPIO DRIVER
 M:	Santosh Kumar Yadav <santoshkumar.yadav@barco.com>
···
 
 DESIGNWARE EDMA CORE IP DRIVER
 M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
-R:	Serge Semin <fancer.lancer@gmail.com>
 L:	dmaengine@vger.kernel.org
 S:	Maintained
 F:	drivers/dma/dw-edma/
···
 F:	include/uapi/linux/gpio.h
 F:	tools/gpio/
 
-GRE DEMULTIPLEXER DRIVER
-M:	Dmitry Kozlov <xeb@mail.ru>
-L:	netdev@vger.kernel.org
-S:	Maintained
-F:	include/net/gre.h
-F:	net/ipv4/gre_demux.c
-F:	net/ipv4/gre_offload.c
-
 GRETH 10/100/1G Ethernet MAC device driver
 M:	Andreas Larsson <andreas@gaisler.com>
 L:	netdev@vger.kernel.org
···
 F:	security/integrity/ima/
 
 INTEGRITY POLICY ENFORCEMENT (IPE)
-M:	Fan Wu <wufan@linux.microsoft.com>
+M:	Fan Wu <wufan@kernel.org>
 L:	linux-security-module@vger.kernel.org
 S:	Supported
-T:	git https://github.com/microsoft/ipe.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/wufan/ipe.git
 F:	Documentation/admin-guide/LSM/ipe.rst
 F:	Documentation/security/ipe.rst
 F:	scripts/ipe/
···
 F:	drivers/crypto/intel/keembay/keembay-ocs-hcu-core.c
 F:	drivers/crypto/intel/keembay/ocs-hcu.c
 F:	drivers/crypto/intel/keembay/ocs-hcu.h
+
+INTEL LA JOLLA COVE ADAPTER (LJCA) USB I/O EXPANDER DRIVERS
+M:	Wentong Wu <wentong.wu@intel.com>
+M:	Sakari Ailus <sakari.ailus@linux.intel.com>
+S:	Maintained
+F:	drivers/gpio/gpio-ljca.c
+F:	drivers/i2c/busses/i2c-ljca.c
+F:	drivers/spi/spi-ljca.c
+F:	drivers/usb/misc/usb-ljca.c
+F:	include/linux/usb/ljca.h
 
 INTEL MANAGEMENT ENGINE (mei)
 M:	Tomas Winkler <tomas.winkler@intel.com>
···
 R:	Vincenzo Frascino <vincenzo.frascino@arm.com>
 L:	kasan-dev@googlegroups.com
 S:	Maintained
+B:	https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
 F:	Documentation/dev-tools/kasan.rst
 F:	arch/*/include/asm/*kasan.h
 F:	arch/*/mm/kasan_init*
···
 R:	Andrey Konovalov <andreyknvl@gmail.com>
 L:	kasan-dev@googlegroups.com
 S:	Maintained
+B:	https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
 F:	Documentation/dev-tools/kcov.rst
 F:	include/linux/kcov.h
 F:	include/uapi/linux/kcov.h
···
 F:	drivers/ata/pata_arasan_cf.c
 F:	include/linux/pata_arasan_cf_data.h
 
-LIBATA PATA DRIVERS
-R:	Sergey Shtylyov <s.shtylyov@omp.ru>
-L:	linux-ide@vger.kernel.org
-F:	drivers/ata/ata_*.c
-F:	drivers/ata/pata_*.c
-
 LIBATA PATA FARADAY FTIDE010 AND GEMINI SATA BRIDGE DRIVERS
 M:	Linus Walleij <linus.walleij@linaro.org>
 L:	linux-ide@vger.kernel.org
···
 F:	drivers/ata/ahci_platform.c
 F:	drivers/ata/libahci_platform.c
 F:	include/linux/ahci_platform.h
-
-LIBATA SATA AHCI SYNOPSYS DWC CONTROLLER DRIVER
-M:	Serge Semin <fancer.lancer@gmail.com>
-L:	linux-ide@vger.kernel.org
-S:	Maintained
-F:	Documentation/devicetree/bindings/ata/baikal,bt1-ahci.yaml
-F:	Documentation/devicetree/bindings/ata/snps,dwc-ahci.yaml
-F:	drivers/ata/ahci_dwc.c
 
 LIBATA SATA PROMISE TX2/TX4 CONTROLLER DRIVER
 M:	Mikael Pettersson <mikpelinux@gmail.com>
···
 T:	git git://linuxtv.org/media_tree.git
 F:	drivers/media/platform/nxp/imx-pxp.[ch]
 
-MEDIA DRIVERS FOR ASCOT2E
-M:	Sergey Kozlov <serjk@netup.ru>
-M:	Abylay Ospan <aospan@netup.ru>
-L:	linux-media@vger.kernel.org
-S:	Supported
-W:	https://linuxtv.org
-W:	http://netup.tv/
-T:	git git://linuxtv.org/media_tree.git
-F:	drivers/media/dvb-frontends/ascot2e*
-
 MEDIA DRIVERS FOR CXD2099AR CI CONTROLLERS
 M:	Jasmin Jessich <jasmin@anw.at>
 L:	linux-media@vger.kernel.org
···
 W:	https://linuxtv.org
 T:	git git://linuxtv.org/media_tree.git
 F:	drivers/media/dvb-frontends/cxd2099*
-
-MEDIA DRIVERS FOR CXD2841ER
-M:	Sergey Kozlov <serjk@netup.ru>
-M:	Abylay Ospan <aospan@netup.ru>
-L:	linux-media@vger.kernel.org
-S:	Supported
-W:	https://linuxtv.org
-W:	http://netup.tv/
-T:	git git://linuxtv.org/media_tree.git
-F:	drivers/media/dvb-frontends/cxd2841er*
 
 MEDIA DRIVERS FOR CXD2880
 M:	Yasunari Takiguchi <Yasunari.Takiguchi@sony.com>
···
 F:	drivers/media/platform/nxp/imx7-media-csi.c
 F:	drivers/media/platform/nxp/imx8mq-mipi-csi2.c
 
-MEDIA DRIVERS FOR HELENE
-M:	Abylay Ospan <aospan@netup.ru>
-L:	linux-media@vger.kernel.org
-S:	Supported
-W:	https://linuxtv.org
-W:	http://netup.tv/
-T:	git git://linuxtv.org/media_tree.git
-F:	drivers/media/dvb-frontends/helene*
-
-MEDIA DRIVERS FOR HORUS3A
-M:	Sergey Kozlov <serjk@netup.ru>
-M:	Abylay Ospan <aospan@netup.ru>
-L:	linux-media@vger.kernel.org
-S:	Supported
-W:	https://linuxtv.org
-W:	http://netup.tv/
-T:	git git://linuxtv.org/media_tree.git
-F:	drivers/media/dvb-frontends/horus3a*
-
-MEDIA DRIVERS FOR LNBH25
-M:	Sergey Kozlov <serjk@netup.ru>
-M:	Abylay Ospan <aospan@netup.ru>
-L:	linux-media@vger.kernel.org
-S:	Supported
-W:	https://linuxtv.org
-W:	http://netup.tv/
-T:	git git://linuxtv.org/media_tree.git
-F:	drivers/media/dvb-frontends/lnbh25*
-
 MEDIA DRIVERS FOR MXL5XX TUNER DEMODULATORS
 L:	linux-media@vger.kernel.org
 S:	Orphan
 W:	https://linuxtv.org
 T:	git git://linuxtv.org/media_tree.git
 F:	drivers/media/dvb-frontends/mxl5xx*
-
-MEDIA DRIVERS FOR NETUP PCI UNIVERSAL DVB devices
-M:	Sergey Kozlov <serjk@netup.ru>
-M:	Abylay Ospan <aospan@netup.ru>
-L:	linux-media@vger.kernel.org
-S:	Supported
-W:	https://linuxtv.org
-W:	http://netup.tv/
-T:	git git://linuxtv.org/media_tree.git
-F:	drivers/media/pci/netup_unidvb/*
 
 MEDIA DRIVERS FOR NVIDIA TEGRA - VDE
 M:	Dmitry Osipenko <digetx@gmail.com>
···
 
 MEMORY MAPPING
 M:	Andrew Morton <akpm@linux-foundation.org>
-R:	Liam R. Howlett <Liam.Howlett@oracle.com>
+M:	Liam R. Howlett <Liam.Howlett@oracle.com>
+M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
 R:	Vlastimil Babka <vbabka@suse.cz>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Jann Horn <jannh@google.com>
 L:	linux-mm@kvack.org
 S:	Maintained
 W:	http://www.linux-mm.org
···
 F:	drivers/mtd/
 F:	include/linux/mtd/
 F:	include/uapi/mtd/
-
-MEMSENSING MICROSYSTEMS MSA311 DRIVER
-M:	Dmitry Rokosov <ddrokosov@sberdevices.ru>
-L:	linux-iio@vger.kernel.org
-S:	Maintained
-F:	Documentation/devicetree/bindings/iio/accel/memsensing,msa311.yaml
-F:	drivers/iio/accel/msa311.c
 
 MEN A21 WATCHDOG DRIVER
 M:	Johannes Thumshirn <morbidrsa@gmail.com>
···
 
 MICROCHIP POLARFIRE FPGA DRIVERS
 M:	Conor Dooley <conor.dooley@microchip.com>
-R:	Vladimir Georgiev <v.georgiev@metrotek.ru>
 L:	linux-fpga@vger.kernel.org
 S:	Supported
 F:	Documentation/devicetree/bindings/fpga/microchip,mpf-spi-fpga-mgr.yaml
···
 F:	drivers/platform/mips/
 F:	include/dt-bindings/mips/
 
-MIPS BAIKAL-T1 PLATFORM
-M:	Serge Semin <fancer.lancer@gmail.com>
-L:	linux-mips@vger.kernel.org
-S:	Supported
-F:	Documentation/devicetree/bindings/bus/baikal,bt1-*.yaml
-F:	Documentation/devicetree/bindings/clock/baikal,bt1-*.yaml
-F:	drivers/bus/bt1-*.c
-F:	drivers/clk/baikal-t1/
-F:	drivers/memory/bt1-l2-ctl.c
-F:	drivers/mtd/maps/physmap-bt1-rom.[ch]
-
 MIPS BOSTON DEVELOPMENT BOARD
 M:	Paul Burton <paulburton@kernel.org>
 L:	linux-mips@vger.kernel.org
···
 
 MIPS CORE DRIVERS
 M:	Thomas Bogendoerfer <tsbogend@alpha.franken.de>
-M:	Serge Semin <fancer.lancer@gmail.com>
 L:	linux-mips@vger.kernel.org
 S:	Supported
 F:	drivers/bus/mips_cdmm.c
···
 M:	Eric Dumazet <edumazet@google.com>
 M:	Jakub Kicinski <kuba@kernel.org>
 M:	Paolo Abeni <pabeni@redhat.com>
+R:	Simon Horman <horms@kernel.org>
 L:	netdev@vger.kernel.org
 S:	Maintained
 P:	Documentation/process/maintainer-netdev.rst
···
 F:	lib/net_utils.c
 F:	lib/random32.c
 F:	net/
+F:	samples/pktgen/
 F:	tools/net/
 F:	tools/testing/selftests/net/
 X:	Documentation/networking/mac80211-injection.rst
···
 F:	include/linux/ntb.h
 F:	include/linux/ntb_transport.h
 F:	tools/testing/selftests/ntb/
-
-NTB IDT DRIVER
-M:	Serge Semin <fancer.lancer@gmail.com>
-L:	ntb@lists.linux.dev
-S:	Supported
-F:	drivers/ntb/hw/idt/
 
 NTB INTEL DRIVER
 M:	Dave Jiang <dave.jiang@intel.com>
···
 F:	include/linux/pps*.h
 F:	include/uapi/linux/pps.h
 
-PPTP DRIVER
-M:	Dmitry Kozlov <xeb@mail.ru>
-L:	netdev@vger.kernel.org
-S:	Maintained
-W:	http://sourceforge.net/projects/accel-pptp
-F:	drivers/net/ppp/pptp.c
-
 PRESSURE STALL INFORMATION (PSI)
 M:	Johannes Weiner <hannes@cmpxchg.org>
 M:	Suren Baghdasaryan <surenb@google.com>
···
 F:	Documentation/tools/rtla/
 F:	tools/tracing/rtla/
 
+Real-time Linux (PREEMPT_RT)
+M:	Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+M:	Clark Williams <clrkwllms@kernel.org>
+M:	Steven Rostedt <rostedt@goodmis.org>
+L:	linux-rt-devel@lists.linux.dev
+S:	Supported
+K:	PREEMPT_RT
+
 REALTEK AUDIO CODECS
 M:	Oder Chiou <oder_chiou@realtek.com>
 S:	Maintained
···
 F:	Documentation/devicetree/bindings/i2c/renesas,iic-emev2.yaml
 F:	drivers/i2c/busses/i2c-emev2.c
 
-RENESAS ETHERNET AVB DRIVER
-R:	Sergey Shtylyov <s.shtylyov@omp.ru>
-L:	netdev@vger.kernel.org
-L:	linux-renesas-soc@vger.kernel.org
-F:	Documentation/devicetree/bindings/net/renesas,etheravb.yaml
-F:	drivers/net/ethernet/renesas/Kconfig
-F:	drivers/net/ethernet/renesas/Makefile
-F:	drivers/net/ethernet/renesas/ravb*
-
 RENESAS ETHERNET SWITCH DRIVER
 R:	Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
 L:	netdev@vger.kernel.org
···
 F:	Documentation/devicetree/bindings/i2c/renesas,rmobile-iic.yaml
 F:	drivers/i2c/busses/i2c-rcar.c
 F:	drivers/i2c/busses/i2c-sh_mobile.c
-
-RENESAS R-CAR SATA DRIVER
-R:	Sergey Shtylyov <s.shtylyov@omp.ru>
-L:	linux-ide@vger.kernel.org
-L:	linux-renesas-soc@vger.kernel.org
-S:	Supported
-F:	Documentation/devicetree/bindings/ata/renesas,rcar-sata.yaml
-F:	drivers/ata/sata_rcar.c
 
 RENESAS R-CAR THERMAL DRIVERS
 M:	Niklas Söderlund <niklas.soderlund@ragnatech.se>
···
 S:	Supported
 F:	Documentation/devicetree/bindings/i2c/renesas,rzv2m.yaml
 F:	drivers/i2c/busses/i2c-rzv2m.c
-
-RENESAS SUPERH ETHERNET DRIVER
-R:	Sergey Shtylyov <s.shtylyov@omp.ru>
-L:	netdev@vger.kernel.org
-L:	linux-renesas-soc@vger.kernel.org
-F:	Documentation/devicetree/bindings/net/renesas,ether.yaml
-F:	drivers/net/ethernet/renesas/Kconfig
-F:	drivers/net/ethernet/renesas/Makefile
-F:	drivers/net/ethernet/renesas/sh_eth*
-F:	include/linux/sh_eth.h
 
 RENESAS USB PHY DRIVER
 M:	Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
···
 SPEAR PLATFORM/CLOCK/PINCTRL SUPPORT
 M:	Viresh Kumar <vireshk@kernel.org>
 M:	Shiraz Hashim <shiraz.linux.kernel@gmail.com>
-M:	soc@kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+L:	soc@lists.linux.dev
 S:	Maintained
 W:	http://www.st.com/spear
 F:	arch/arm/boot/dts/st/spear*
···
 
 SYNOPSYS DESIGNWARE APB GPIO DRIVER
 M:	Hoan Tran <hoan@os.amperecomputing.com>
-M:	Serge Semin <fancer.lancer@gmail.com>
 L:	linux-gpio@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/gpio/snps,dw-apb-gpio.yaml
 F:	drivers/gpio/gpio-dwapb.c
-
-SYNOPSYS DESIGNWARE APB SSI DRIVER
-M:	Serge Semin <fancer.lancer@gmail.com>
-L:	linux-spi@vger.kernel.org
-S:	Supported
-F:	Documentation/devicetree/bindings/spi/snps,dw-apb-ssi.yaml
-F:	drivers/spi/spi-dw*
 
 SYNOPSYS DESIGNWARE AXI DMAC DRIVER
 M:	Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
···
 S:	Maintained
 F:	drivers/hid/hid-udraw-ps3.c
 
-UFS FILESYSTEM
-M:	Evgeniy Dushistov <dushistov@mail.ru>
-S:	Maintained
-F:	Documentation/admin-guide/ufs.rst
-F:	fs/ufs/
-
 UHID USERSPACE HID IO DRIVER
 M:	David Rheinsberg <david@readahead.eu>
 L:	linux-input@vger.kernel.org
···
 R:	Andrey Konovalov <andreyknvl@gmail.com>
 L:	linux-usb@vger.kernel.org
 S:	Maintained
+B:	https://github.com/xairy/raw-gadget/issues
 F:	Documentation/usb/raw-gadget.rst
 F:	drivers/usb/gadget/legacy/raw_gadget.c
 F:	include/uapi/linux/usb/raw_gadget.h
···
 
 VMA
 M:	Andrew Morton <akpm@linux-foundation.org>
-R:	Liam R. Howlett <Liam.Howlett@oracle.com>
+M:	Liam R. Howlett <Liam.Howlett@oracle.com>
+M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
 R:	Vlastimil Babka <vbabka@suse.cz>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Jann Horn <jannh@google.com>
 L:	linux-mm@kvack.org
 S:	Maintained
 W:	https://www.linux-mm.org
+1 -1
Makefile
···
2 2 VERSION = 6
3 3 PATCHLEVEL = 12
4 4 SUBLEVEL = 0
5 - EXTRAVERSION = -rc3
5 + EXTRAVERSION = -rc4
6 6 NAME = Baby Opossum Posse
7 7 
8 8 # *DOCUMENTATION*
+12 -14
arch/Kconfig
···
838 838 config CFI_ICALL_NORMALIZE_INTEGERS
839 839 bool "Normalize CFI tags for integers"
840 840 depends on CFI_CLANG
841 - depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS
841 + depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS_CLANG
842 842 help
843 843 This option normalizes the CFI tags for integer types so that all
844 844 integer types of the same size and signedness receive the same CFI
···
851 851 
852 852 This option is necessary for using CFI with Rust. If unsure, say N.
853 853 
854 - config HAVE_CFI_ICALL_NORMALIZE_INTEGERS
855 - def_bool !GCOV_KERNEL && !KASAN
856 - depends on CFI_CLANG
854 + config HAVE_CFI_ICALL_NORMALIZE_INTEGERS_CLANG
855 + def_bool y
857 856 depends on $(cc-option,-fsanitize=kcfi -fsanitize-cfi-icall-experimental-normalize-integers)
858 - help
859 - Is CFI_ICALL_NORMALIZE_INTEGERS supported with the set of compilers
860 - currently in use?
857 + # With GCOV/KASAN we need this fix: https://github.com/llvm/llvm-project/pull/104826
858 + depends on CLANG_VERSION >= 190000 || (!GCOV_KERNEL && !KASAN_GENERIC && !KASAN_SW_TAGS)
861 859 
862 - This option defaults to false if GCOV or KASAN is enabled, as there is
863 - an LLVM bug that makes normalized integers tags incompatible with
864 - KASAN and GCOV. Kconfig currently does not have the infrastructure to
865 - detect whether your rustc compiler contains the fix for this bug, so
866 - it is assumed that it doesn't. If your compiler has the fix, you can
867 - explicitly enable this option in your config file. The Kconfig logic
868 - needed to detect this will be added in a future kernel release.
860 + config HAVE_CFI_ICALL_NORMALIZE_INTEGERS_RUSTC
861 + def_bool y
862 + depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS_CLANG
863 + depends on RUSTC_VERSION >= 107900
864 + # With GCOV/KASAN we need this fix: https://github.com/rust-lang/rust/pull/129373
865 + depends on (RUSTC_LLVM_VERSION >= 190000 && RUSTC_VERSION >= 108200) || \
866 + (!GCOV_KERNEL && !KASAN_GENERIC && !KASAN_SW_TAGS)
869 867 
870 868 config CFI_PERMISSIVE
871 869 bool "Use CFI in permissive mode"
+1 -1
arch/arm/boot/dts/broadcom/bcm2837-rpi-cm3-io3.dts
···
77 77 };
78 78 
79 79 &hdmi {
80 - hpd-gpios = <&expgpio 1 GPIO_ACTIVE_LOW>;
80 + hpd-gpios = <&expgpio 0 GPIO_ACTIVE_LOW>;
81 81 power-domains = <&power RPI_POWER_DOMAIN_HDMI>;
82 82 status = "okay";
83 83 };
+1 -1
arch/arm64/boot/dts/marvell/cn9130-sr-som.dtsi
···
136 136 };
137 137 
138 138 cp0_mdio_pins: cp0-mdio-pins {
139 - marvell,pins = "mpp40", "mpp41";
139 + marvell,pins = "mpp0", "mpp1";
140 140 marvell,function = "ge";
141 141 };
142 142 
+1
arch/arm64/include/asm/kvm_asm.h
···
178 178 unsigned long hcr_el2;
179 179 unsigned long vttbr;
180 180 unsigned long vtcr;
181 + unsigned long tmp;
181 182 };
182 183 
183 184 /*
+7
arch/arm64/include/asm/kvm_host.h
···
51 51 #define KVM_REQ_RELOAD_PMU KVM_ARCH_REQ(5)
52 52 #define KVM_REQ_SUSPEND KVM_ARCH_REQ(6)
53 53 #define KVM_REQ_RESYNC_PMU_EL0 KVM_ARCH_REQ(7)
54 + #define KVM_REQ_NESTED_S2_UNMAP KVM_ARCH_REQ(8)
54 55 
55 56 #define KVM_DIRTY_LOG_MANUAL_CAPS (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
56 57 KVM_DIRTY_LOG_INITIALLY_SET)
···
211 210 * HCR_EL2.VM == 1
212 211 */
213 212 bool nested_stage2_enabled;
213 + 
214 + /*
215 + * true when this MMU needs to be unmapped before being used for a new
216 + * purpose.
217 + */
218 + bool pending_unmap;
214 219 
215 220 /*
216 221 * 0: Nobody is currently using this, check vttbr for validity
+2 -1
arch/arm64/include/asm/kvm_mmu.h
···
166 166 int create_hyp_stack(phys_addr_t phys_addr, unsigned long *haddr);
167 167 void __init free_hyp_pgds(void);
168 168 
169 - void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size);
169 + void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
170 + u64 size, bool may_block);
170 171 void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end);
171 172 void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end);
172 173 
+3 -1
arch/arm64/include/asm/kvm_nested.h
···
78 78 extern void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu);
79 79 extern void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu);
80 80 
81 + extern void check_nested_vcpu_requests(struct kvm_vcpu *vcpu);
82 + 
81 83 struct kvm_s2_trans {
82 84 phys_addr_t output;
83 85 unsigned long block_size;
···
126 124 struct kvm_s2_trans *trans);
127 125 extern int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2);
128 126 extern void kvm_nested_s2_wp(struct kvm *kvm);
129 - extern void kvm_nested_s2_unmap(struct kvm *kvm);
127 + extern void kvm_nested_s2_unmap(struct kvm *kvm, bool may_block);
130 128 extern void kvm_nested_s2_flush(struct kvm *kvm);
131 129 
132 130 unsigned long compute_tlb_inval_range(struct kvm_s2_mmu *mmu, u64 val);
+3 -5
arch/arm64/include/asm/uprobes.h
···
10 10 #include <asm/insn.h>
11 11 #include <asm/probes.h>
12 12 
13 - #define MAX_UINSN_BYTES AARCH64_INSN_SIZE
14 - 
15 13 #define UPROBE_SWBP_INSN cpu_to_le32(BRK64_OPCODE_UPROBES)
16 14 #define UPROBE_SWBP_INSN_SIZE AARCH64_INSN_SIZE
17 - #define UPROBE_XOL_SLOT_BYTES MAX_UINSN_BYTES
15 + #define UPROBE_XOL_SLOT_BYTES AARCH64_INSN_SIZE
18 16 
19 17 typedef __le32 uprobe_opcode_t;
20 18 
···
21 23 
22 24 struct arch_uprobe {
23 25 union {
24 - u8 insn[MAX_UINSN_BYTES];
25 - u8 ixol[MAX_UINSN_BYTES];
26 + __le32 insn;
27 + __le32 ixol;
26 28 };
27 29 struct arch_probe_insn api;
28 30 bool simulate;
+1
arch/arm64/kernel/asm-offsets.c
···
146 146 DEFINE(NVHE_INIT_HCR_EL2, offsetof(struct kvm_nvhe_init_params, hcr_el2));
147 147 DEFINE(NVHE_INIT_VTTBR, offsetof(struct kvm_nvhe_init_params, vttbr));
148 148 DEFINE(NVHE_INIT_VTCR, offsetof(struct kvm_nvhe_init_params, vtcr));
149 + DEFINE(NVHE_INIT_TMP, offsetof(struct kvm_nvhe_init_params, tmp));
149 150 #endif
150 151 #ifdef CONFIG_CPU_PM
151 152 DEFINE(CPU_CTX_SP, offsetof(struct cpu_suspend_ctx, sp));
+11 -5
arch/arm64/kernel/probes/decode-insn.c
···
99 99 aarch64_insn_is_blr(insn) ||
100 100 aarch64_insn_is_ret(insn)) {
101 101 api->handler = simulate_br_blr_ret;
102 - } else if (aarch64_insn_is_ldr_lit(insn)) {
103 - api->handler = simulate_ldr_literal;
104 - } else if (aarch64_insn_is_ldrsw_lit(insn)) {
105 - api->handler = simulate_ldrsw_literal;
106 102 } else {
107 103 /*
108 104 * Instruction cannot be stepped out-of-line and we don't
···
136 140 probe_opcode_t insn = le32_to_cpu(*addr);
137 141 probe_opcode_t *scan_end = NULL;
138 142 unsigned long size = 0, offset = 0;
143 + struct arch_probe_insn *api = &asi->api;
144 + 
145 + if (aarch64_insn_is_ldr_lit(insn)) {
146 + api->handler = simulate_ldr_literal;
147 + decoded = INSN_GOOD_NO_SLOT;
148 + } else if (aarch64_insn_is_ldrsw_lit(insn)) {
149 + api->handler = simulate_ldrsw_literal;
150 + decoded = INSN_GOOD_NO_SLOT;
151 + } else {
152 + decoded = arm_probe_decode_insn(insn, &asi->api);
153 + }
139 154 
140 155 /*
141 156 * If there's a symbol defined in front of and near enough to
···
164 157 else
165 158 scan_end = addr - MAX_ATOMIC_CONTEXT_SIZE;
166 159 }
167 - decoded = arm_probe_decode_insn(insn, &asi->api);
168 160 
169 161 if (decoded != INSN_REJECTED && scan_end)
170 162 if (is_probed_address_atomic(addr - 1, scan_end))
+7 -11
arch/arm64/kernel/probes/simulate-insn.c
···
171 171 void __kprobes
172 172 simulate_ldr_literal(u32 opcode, long addr, struct pt_regs *regs)
173 173 {
174 - u64 *load_addr;
174 + unsigned long load_addr;
175 175 int xn = opcode & 0x1f;
176 - int disp;
177 176 
178 - disp = ldr_displacement(opcode);
179 - load_addr = (u64 *) (addr + disp);
177 + load_addr = addr + ldr_displacement(opcode);
180 178 
181 179 if (opcode & (1 << 30)) /* x0-x30 */
182 - set_x_reg(regs, xn, *load_addr);
180 + set_x_reg(regs, xn, READ_ONCE(*(u64 *)load_addr));
183 181 else /* w0-w30 */
184 - set_w_reg(regs, xn, *load_addr);
182 + set_w_reg(regs, xn, READ_ONCE(*(u32 *)load_addr));
185 183 
186 184 instruction_pointer_set(regs, instruction_pointer(regs) + 4);
187 185 }
···
187 189 void __kprobes
188 190 simulate_ldrsw_literal(u32 opcode, long addr, struct pt_regs *regs)
189 191 {
190 - s32 *load_addr;
192 + unsigned long load_addr;
191 193 int xn = opcode & 0x1f;
192 - int disp;
193 194 
194 - disp = ldr_displacement(opcode);
195 - load_addr = (s32 *) (addr + disp);
195 + load_addr = addr + ldr_displacement(opcode);
196 196 
197 - set_x_reg(regs, xn, *load_addr);
197 + set_x_reg(regs, xn, READ_ONCE(*(s32 *)load_addr));
198 198 
199 199 instruction_pointer_set(regs, instruction_pointer(regs) + 4);
200 200 }
+2 -2
arch/arm64/kernel/probes/uprobes.c
···
42 42 else if (!IS_ALIGNED(addr, AARCH64_INSN_SIZE))
43 43 return -EINVAL;
44 44 
45 - insn = *(probe_opcode_t *)(&auprobe->insn[0]);
45 + insn = le32_to_cpu(auprobe->insn);
46 46 
47 47 switch (arm_probe_decode_insn(insn, &auprobe->api)) {
48 48 case INSN_REJECTED:
···
108 108 if (!auprobe->simulate)
109 109 return false;
110 110 
111 - insn = *(probe_opcode_t *)(&auprobe->insn[0]);
111 + insn = le32_to_cpu(auprobe->insn);
112 112 addr = instruction_pointer(regs);
113 113 
114 114 if (auprobe->api.handler)
+3
arch/arm64/kernel/process.c
···
412 412 
413 413 p->thread.cpu_context.x19 = (unsigned long)args->fn;
414 414 p->thread.cpu_context.x20 = (unsigned long)args->fn_arg;
415 + 
416 + if (system_supports_poe())
417 + p->thread.por_el0 = POR_EL0_INIT;
415 418 }
416 419 p->thread.cpu_context.pc = (unsigned long)ret_from_fork;
417 420 p->thread.cpu_context.sp = (unsigned long)childregs;
+5
arch/arm64/kvm/arm.c
···
997 997 static int check_vcpu_requests(struct kvm_vcpu *vcpu)
998 998 {
999 999 if (kvm_request_pending(vcpu)) {
1000 + if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu))
1001 + return -EIO;
1002 + 
1000 1003 if (kvm_check_request(KVM_REQ_SLEEP, vcpu))
1001 1004 kvm_vcpu_sleep(vcpu);
···
1034 1031 
1035 1032 if (kvm_dirty_ring_check_request(vcpu))
1036 1033 return 0;
1034 + 
1035 + check_nested_vcpu_requests(vcpu);
1037 1036 }
1038 1037 
1039 1038 return 1;
+29 -23
arch/arm64/kvm/hyp/nvhe/hyp-init.S
···
24 24 .align 11
25 25 
26 26 SYM_CODE_START(__kvm_hyp_init)
27 - ventry __invalid // Synchronous EL2t
28 - ventry __invalid // IRQ EL2t
29 - ventry __invalid // FIQ EL2t
30 - ventry __invalid // Error EL2t
27 + ventry . // Synchronous EL2t
28 + ventry . // IRQ EL2t
29 + ventry . // FIQ EL2t
30 + ventry . // Error EL2t
31 31 
32 - ventry __invalid // Synchronous EL2h
33 - ventry __invalid // IRQ EL2h
34 - ventry __invalid // FIQ EL2h
35 - ventry __invalid // Error EL2h
32 + ventry . // Synchronous EL2h
33 + ventry . // IRQ EL2h
34 + ventry . // FIQ EL2h
35 + ventry . // Error EL2h
36 36 
37 37 ventry __do_hyp_init // Synchronous 64-bit EL1
38 - ventry __invalid // IRQ 64-bit EL1
39 - ventry __invalid // FIQ 64-bit EL1
40 - ventry __invalid // Error 64-bit EL1
38 + ventry . // IRQ 64-bit EL1
39 + ventry . // FIQ 64-bit EL1
40 + ventry . // Error 64-bit EL1
41 41 
42 - ventry __invalid // Synchronous 32-bit EL1
43 - ventry __invalid // IRQ 32-bit EL1
44 - ventry __invalid // FIQ 32-bit EL1
45 - ventry __invalid // Error 32-bit EL1
46 - 
47 - __invalid:
48 - b .
42 + ventry . // Synchronous 32-bit EL1
43 + ventry . // IRQ 32-bit EL1
44 + ventry . // FIQ 32-bit EL1
45 + ventry . // Error 32-bit EL1
49 46 
50 47 /*
51 48 * Only uses x0..x3 so as to not clobber callee-saved SMCCC registers.
···
73 76 eret
74 77 SYM_CODE_END(__kvm_hyp_init)
75 78 
79 + SYM_CODE_START_LOCAL(__kvm_init_el2_state)
80 + /* Initialize EL2 CPU state to sane values. */
81 + init_el2_state // Clobbers x0..x2
82 + finalise_el2_state
83 + ret
84 + SYM_CODE_END(__kvm_init_el2_state)
85 + 
76 86 /*
77 87 * Initialize the hypervisor in EL2.
78 88 *
···
106 102 // TPIDR_EL2 is used to preserve x0 across the macro maze...
107 103 isb
108 104 msr tpidr_el2, x0
109 - init_el2_state
110 - finalise_el2_state
105 + str lr, [x0, #NVHE_INIT_TMP]
106 + 
107 + bl __kvm_init_el2_state
108 + 
111 109 mrs x0, tpidr_el2
110 + ldr lr, [x0, #NVHE_INIT_TMP]
112 111 
113 112 1:
114 113 ldr x1, [x0, #NVHE_INIT_TPIDR_EL2]
···
206 199 
207 200 2: msr SPsel, #1 // We want to use SP_EL{1,2}
208 201 
209 - /* Initialize EL2 CPU state to sane values. */
210 - init_el2_state // Clobbers x0..x2
211 - finalise_el2_state
202 + bl __kvm_init_el2_state
203 + 
212 204 __init_el2_nvhe_prepare_eret
213 205 
214 206 /* Enable MMU, set vectors and stack. */
+6 -6
arch/arm64/kvm/hypercalls.c
···
317 317 * to the guest, and hide SSBS so that the
318 318 * guest stays protected.
319 319 */
320 - if (cpus_have_final_cap(ARM64_SSBS))
320 + if (kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SSBS, IMP))
321 321 break;
322 322 fallthrough;
323 323 case SPECTRE_UNAFFECTED:
···
428 428 * Convert the workaround level into an easy-to-compare number, where higher
429 429 * values mean better protection.
430 430 */
431 - static int get_kernel_wa_level(u64 regid)
431 + static int get_kernel_wa_level(struct kvm_vcpu *vcpu, u64 regid)
432 432 {
433 433 switch (regid) {
434 434 case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
···
449 449 * don't have any FW mitigation if SSBS is there at
450 450 * all times.
451 451 */
452 - if (cpus_have_final_cap(ARM64_SSBS))
452 + if (kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SSBS, IMP))
453 453 return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL;
454 454 fallthrough;
455 455 case SPECTRE_UNAFFECTED:
···
486 486 case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
487 487 case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
488 488 case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3:
489 - val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
489 + val = get_kernel_wa_level(vcpu, reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
490 490 break;
491 491 case KVM_REG_ARM_STD_BMAP:
492 492 val = READ_ONCE(smccc_feat->std_bmap);
···
588 588 if (val & ~KVM_REG_FEATURE_LEVEL_MASK)
589 589 return -EINVAL;
590 590 
591 - if (get_kernel_wa_level(reg->id) < val)
591 + if (get_kernel_wa_level(vcpu, reg->id) < val)
592 592 return -EINVAL;
593 593 
594 594 return 0;
···
624 624 * We can deal with NOT_AVAIL on NOT_REQUIRED, but not the
625 625 * other way around.
626 626 */
627 - if (get_kernel_wa_level(reg->id) < wa_level)
627 + if (get_kernel_wa_level(vcpu, reg->id) < wa_level)
628 628 return -EINVAL;
629 629 
630 630 return 0;
+8 -7
arch/arm64/kvm/mmu.c
···
328 328 may_block));
329 329 }
330 330 
331 - void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
331 + void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
332 + u64 size, bool may_block)
332 333 {
333 334 __unmap_stage2_range(mmu, start, size, true);
334 335 __unmap_stage2_range(mmu, start, size, may_block);
335 336 }
336 337 
337 338 void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
···
1016 1015 
1017 1016 if (!(vma->vm_flags & VM_PFNMAP)) {
1018 1017 gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
1019 - kvm_stage2_unmap_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
1018 + kvm_stage2_unmap_range(&kvm->arch.mmu, gpa, vm_end - vm_start, true);
1020 1019 }
1021 1020 hva = vm_end;
1022 1021 } while (hva < reg_end);
···
1043 1042 kvm_for_each_memslot(memslot, bkt, slots)
1044 1043 stage2_unmap_memslot(kvm, memslot);
1045 1044 
1046 - kvm_nested_s2_unmap(kvm);
1045 + kvm_nested_s2_unmap(kvm, true);
1047 1046 
1048 1047 write_unlock(&kvm->mmu_lock);
1049 1048 mmap_read_unlock(current->mm);
···
1913 1912 (range->end - range->start) << PAGE_SHIFT,
1914 1913 range->may_block);
1915 1914 
1916 - kvm_nested_s2_unmap(kvm);
1915 + kvm_nested_s2_unmap(kvm, range->may_block);
1917 1916 return false;
1918 1917 }
···
2180 2179 phys_addr_t size = slot->npages << PAGE_SHIFT;
2181 2180 
2182 2181 write_lock(&kvm->mmu_lock);
2183 - kvm_stage2_unmap_range(&kvm->arch.mmu, gpa, size);
2184 - kvm_nested_s2_unmap(kvm);
2182 + kvm_stage2_unmap_range(&kvm->arch.mmu, gpa, size, true);
2183 + kvm_nested_s2_unmap(kvm, true);
2185 2184 write_unlock(&kvm->mmu_lock);
2186 2185 }
2187 2186 
+46 -7
arch/arm64/kvm/nested.c
···
632 632 /* Set the scene for the next search */
633 633 kvm->arch.nested_mmus_next = (i + 1) % kvm->arch.nested_mmus_size;
634 634 
635 - /* Clear the old state */
635 + /* Make sure we don't forget to do the laundry */
636 636 if (kvm_s2_mmu_valid(s2_mmu))
637 - kvm_stage2_unmap_range(s2_mmu, 0, kvm_phys_size(s2_mmu));
637 + s2_mmu->pending_unmap = true;
638 638 
639 639 /*
640 640 * The virtual VMID (modulo CnP) will be used as a key when matching
···
650 650 
651 651 out:
652 652 atomic_inc(&s2_mmu->refcnt);
653 + 
654 + /*
655 + * Set the vCPU request to perform an unmap, even if the pending unmap
656 + * originates from another vCPU. This guarantees that the MMU has been
657 + * completely unmapped before any vCPU actually uses it, and allows
658 + * multiple vCPUs to lend a hand with completing the unmap.
659 + */
660 + if (s2_mmu->pending_unmap)
661 + kvm_make_request(KVM_REQ_NESTED_S2_UNMAP, vcpu);
662 + 
653 663 return s2_mmu;
654 664 }
···
673 663 
674 664 void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
675 665 {
666 + /*
667 + * The vCPU kept its reference on the MMU after the last put, keep
668 + * rolling with it.
669 + */
670 + if (vcpu->arch.hw_mmu)
671 + return;
672 + 
676 673 if (is_hyp_ctxt(vcpu)) {
677 674 vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
678 675 } else {
···
691 674 
692 675 void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu)
693 676 {
694 - if (kvm_is_nested_s2_mmu(vcpu->kvm, vcpu->arch.hw_mmu)) {
677 + /*
678 + * Keep a reference on the associated stage-2 MMU if the vCPU is
679 + * scheduling out and not in WFI emulation, suggesting it is likely to
680 + * reuse the MMU sometime soon.
681 + */
682 + if (vcpu->scheduled_out && !vcpu_get_flag(vcpu, IN_WFI))
683 + return;
684 + 
685 + if (kvm_is_nested_s2_mmu(vcpu->kvm, vcpu->arch.hw_mmu))
695 686 atomic_dec(&vcpu->arch.hw_mmu->refcnt);
696 - vcpu->arch.hw_mmu = NULL;
697 - }
687 + 
688 + vcpu->arch.hw_mmu = NULL;
698 689 }
699 690 
700 691 /*
···
755 730 }
756 731 
757 - void kvm_nested_s2_unmap(struct kvm *kvm)
732 + void kvm_nested_s2_unmap(struct kvm *kvm, bool may_block)
758 733 {
759 734 int i;
···
765 740 struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
766 741 
767 742 if (kvm_s2_mmu_valid(mmu))
768 - kvm_stage2_unmap_range(mmu, 0, kvm_phys_size(mmu));
743 + kvm_stage2_unmap_range(mmu, 0, kvm_phys_size(mmu), may_block);
769 744 }
770 745 }
···
1208 1183 set_sysreg_masks(kvm, SCTLR_EL1, res0, res1);
1209 1184 
1210 1185 return 0;
1186 + }
1187 + 
1188 + void check_nested_vcpu_requests(struct kvm_vcpu *vcpu)
1189 + {
1190 + if (kvm_check_request(KVM_REQ_NESTED_S2_UNMAP, vcpu)) {
1191 + struct kvm_s2_mmu *mmu = vcpu->arch.hw_mmu;
1192 + 
1193 + write_lock(&vcpu->kvm->mmu_lock);
1194 + if (mmu->pending_unmap) {
1195 + kvm_stage2_unmap_range(mmu, 0, kvm_phys_size(mmu), true);
1196 + mmu->pending_unmap = false;
1197 + }
1198 + write_unlock(&vcpu->kvm->mmu_lock);
1199 + }
1211 1200 }
+70 -7
arch/arm64/kvm/sys_regs.c
···
1527 1527 val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
1528 1528 
1529 1529 val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME);
1530 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RNDR_trap);
1531 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_NMI);
1532 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac);
1533 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_GCS);
1534 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_THE);
1535 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTEX);
1536 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_DF2);
1537 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_PFAR);
1530 1538 break;
1531 1539 case SYS_ID_AA64PFR2_EL1:
1532 1540 /* We only expose FPMR */
···
1558 1550 val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
1559 1551 break;
1560 1552 case SYS_ID_AA64MMFR3_EL1:
1561 - val &= ID_AA64MMFR3_EL1_TCRX | ID_AA64MMFR3_EL1_S1POE;
1553 + val &= ID_AA64MMFR3_EL1_TCRX | ID_AA64MMFR3_EL1_S1POE |
1554 + ID_AA64MMFR3_EL1_S1PIE;
1562 1555 break;
1563 1556 case SYS_ID_MMFR4_EL1:
1564 1557 val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX);
···
1994 1985 * one cache line.
1995 1986 */
1996 1987 if (kvm_has_mte(vcpu->kvm))
1997 - clidr |= 2 << CLIDR_TTYPE_SHIFT(loc);
1988 + clidr |= 2ULL << CLIDR_TTYPE_SHIFT(loc);
1998 1989 
1999 1990 __vcpu_sys_reg(vcpu, r->reg) = clidr;
2000 1991 
···
2385 2376 ID_AA64PFR0_EL1_RAS |
2386 2377 ID_AA64PFR0_EL1_AdvSIMD |
2387 2378 ID_AA64PFR0_EL1_FP), },
2388 - ID_SANITISED(ID_AA64PFR1_EL1),
2379 + ID_WRITABLE(ID_AA64PFR1_EL1, ~(ID_AA64PFR1_EL1_PFAR |
2380 + ID_AA64PFR1_EL1_DF2 |
2381 + ID_AA64PFR1_EL1_MTEX |
2382 + ID_AA64PFR1_EL1_THE |
2383 + ID_AA64PFR1_EL1_GCS |
2384 + ID_AA64PFR1_EL1_MTE_frac |
2385 + ID_AA64PFR1_EL1_NMI |
2386 + ID_AA64PFR1_EL1_RNDR_trap |
2387 + ID_AA64PFR1_EL1_SME |
2388 + ID_AA64PFR1_EL1_RES0 |
2389 + ID_AA64PFR1_EL1_MPAM_frac |
2390 + ID_AA64PFR1_EL1_RAS_frac |
2391 + ID_AA64PFR1_EL1_MTE)),
2389 2392 ID_WRITABLE(ID_AA64PFR2_EL1, ID_AA64PFR2_EL1_FPMR),
2390 2393 ID_UNALLOCATED(4,3),
2391 2394 ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0),
···
2411 2390 .get_user = get_id_reg,
2412 2391 .set_user = set_id_aa64dfr0_el1,
2413 2392 .reset = read_sanitised_id_aa64dfr0_el1,
2414 - .val = ID_AA64DFR0_EL1_PMUVer_MASK |
2393 + /*
2394 + * Prior to FEAT_Debugv8.9, the architecture defines context-aware
2395 + * breakpoints (CTX_CMPs) as the highest numbered breakpoints (BRPs).
2396 + * KVM does not trap + emulate the breakpoint registers, and as such
2397 + * cannot support a layout that misaligns with the underlying hardware.
2398 + * While it may be possible to describe a subset that aligns with
2399 + * hardware, just prevent changes to BRPs and CTX_CMPs altogether for
2400 + * simplicity.
2401 + *
2402 + * See DDI0487K.a, section D2.8.3 Breakpoint types and linking
2403 + * of breakpoints for more details.
2404 + */
2405 + .val = ID_AA64DFR0_EL1_DoubleLock_MASK |
2406 + ID_AA64DFR0_EL1_WRPs_MASK |
2407 + ID_AA64DFR0_EL1_PMUVer_MASK |
2415 2408 ID_AA64DFR0_EL1_DebugVer_MASK, },
2416 2409 ID_SANITISED(ID_AA64DFR1_EL1),
2417 2410 ID_UNALLOCATED(5,2),
···
2468 2433 ID_AA64MMFR2_EL1_NV |
2469 2434 ID_AA64MMFR2_EL1_CCIDX)),
2470 2435 ID_WRITABLE(ID_AA64MMFR3_EL1, (ID_AA64MMFR3_EL1_TCRX |
2436 + ID_AA64MMFR3_EL1_S1PIE |
2471 2437 ID_AA64MMFR3_EL1_S1POE)),
2472 2438 ID_SANITISED(ID_AA64MMFR4_EL1),
2473 2439 ID_UNALLOCATED(7,5),
···
2939 2903 * Drop all shadow S2s, resulting in S1/S2 TLBIs for each of the
2940 2904 * corresponding VMIDs.
2941 2905 */
2942 - kvm_nested_s2_unmap(vcpu->kvm);
2906 + kvm_nested_s2_unmap(vcpu->kvm, true);
2943 2907 
2944 2908 write_unlock(&vcpu->kvm->mmu_lock);
···
2991 2955 static void s2_mmu_unmap_range(struct kvm_s2_mmu *mmu,
2992 2956 const union tlbi_info *info)
2993 2957 {
2994 - kvm_stage2_unmap_range(mmu, info->range.start, info->range.size);
2958 + /*
2959 + * The unmap operation is allowed to drop the MMU lock and block, which
2960 + * means that @mmu could be used for a different context than the one
2961 + * currently being invalidated.
2962 + *
2963 + * This behavior is still safe, as:
2964 + *
2965 + * 1) The vCPU(s) that recycled the MMU are responsible for invalidating
2966 + * the entire MMU before reusing it, which still honors the intent
2967 + * of a TLBI.
2968 + *
2969 + * 2) Until the guest TLBI instruction is 'retired' (i.e. increment PC
2970 + * and ERET to the guest), other vCPUs are allowed to use stale
2971 + * translations.
2972 + *
2973 + * 3) Accidentally unmapping an unrelated MMU context is nonfatal, and
2974 + * at worst may cause more aborts for shadow stage-2 fills.
2975 + *
2976 + * Dropping the MMU lock also implies that shadow stage-2 fills could
2977 + * happen behind the back of the TLBI. This is still safe, though, as
2978 + * the L1 needs to put its stage-2 in a consistent state before doing
2979 + * the TLBI.
2980 + */
2981 + kvm_stage2_unmap_range(mmu, info->range.start, info->range.size, true);
2995 2982 }
2996 2983 
2997 2984 static bool handle_vmalls12e1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
···
3109 3050 max_size = compute_tlb_inval_range(mmu, info->ipa.addr);
3110 3051 base_addr &= ~(max_size - 1);
3111 3052 
3112 - kvm_stage2_unmap_range(mmu, base_addr, max_size);
3053 + /*
3054 + * See comment in s2_mmu_unmap_range() for why this is allowed to
3055 + * reschedule.
3056 + */
3057 + kvm_stage2_unmap_range(mmu, base_addr, max_size, true);
3113 3058 }
3114 3059 
3115 3060 static bool handle_ipas2e1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+35 -6
arch/arm64/kvm/vgic/vgic-init.c
···
417 417 kfree(vgic_cpu->private_irqs);
418 418 vgic_cpu->private_irqs = NULL;
419 419 
420 - if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3)
420 + if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
421 + /*
422 + * If this vCPU is being destroyed because of a failed creation
423 + * then unregister the redistributor to avoid leaving behind a
424 + * dangling pointer to the vCPU struct.
425 + *
426 + * vCPUs that have been successfully created (i.e. added to
427 + * kvm->vcpu_array) get unregistered in kvm_vgic_destroy(), as
428 + * this function gets called while holding kvm->arch.config_lock
429 + * in the VM teardown path and would otherwise introduce a lock
430 + * inversion w.r.t. kvm->srcu.
431 + *
432 + * vCPUs that failed creation are torn down outside of the
433 + * kvm->arch.config_lock and do not get unregistered in
434 + * kvm_vgic_destroy(), meaning it is both safe and necessary to
435 + * do so here.
436 + */
437 + if (kvm_get_vcpu_by_id(vcpu->kvm, vcpu->vcpu_id) != vcpu)
438 + vgic_unregister_redist_iodev(vcpu);
439 + 
421 440 vgic_cpu->rd_iodev.base_addr = VGIC_ADDR_UNDEF;
441 + }
422 442 }
423 443 
424 444 void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
···
544 524 if (ret)
545 525 goto out;
546 526 
547 - dist->ready = true;
548 527 dist_base = dist->vgic_dist_base;
549 528 mutex_unlock(&kvm->arch.config_lock);
550 529 
551 530 ret = vgic_register_dist_iodev(kvm, dist_base, type);
552 - if (ret)
531 + if (ret) {
553 532 kvm_err("Unable to register VGIC dist MMIO regions\n");
533 + goto out_slots;
534 + }
554 535 
536 + /*
537 + * kvm_io_bus_register_dev() guarantees all readers see the new MMIO
538 + * registration before returning through synchronize_srcu(), which also
539 + * implies a full memory barrier. As such, marking the distributor as
540 + * 'ready' here is guaranteed to be ordered after all vCPUs having seen
541 + * a completely configured distributor.
542 + */
543 + dist->ready = true;
555 544 goto out_slots;
556 545 out:
557 546 mutex_unlock(&kvm->arch.config_lock);
558 547 out_slots:
559 - mutex_unlock(&kvm->slots_lock);
560 - 
561 548 if (ret)
562 - kvm_vgic_destroy(kvm);
549 + kvm_vm_dead(kvm);
550 + 
551 + mutex_unlock(&kvm->slots_lock);
563 552 
564 553 return ret;
565 554 }
+6 -1
arch/arm64/kvm/vgic/vgic-kvm-device.c
···
236 236 
237 237 mutex_lock(&dev->kvm->arch.config_lock);
238 238 
239 - if (vgic_ready(dev->kvm) || dev->kvm->arch.vgic.nr_spis)
239 + /*
240 + * Either userspace has already configured NR_IRQS or
241 + * the vgic has already been initialized and vgic_init()
242 + * supplied a default amount of SPIs.
243 + */
244 + if (dev->kvm->arch.vgic.nr_spis)
240 245 ret = -EBUSY;
241 246 else
242 247 dev->kvm->arch.vgic.nr_spis =
+4
arch/loongarch/include/asm/bootinfo.h
···
26 26 
27 27 #define NR_WORDS DIV_ROUND_UP(NR_CPUS, BITS_PER_LONG)
28 28 
29 + /*
30 + * The "core" of cores_per_node and cores_per_package stands for a
31 + * logical core, which means in a SMT system it stands for a thread.
32 + */
29 33 struct loongson_system_configuration {
30 34 int nr_cpus;
31 35 int nr_nodes;
+1 -1
arch/loongarch/include/asm/kasan.h
···
16 16 #define XRANGE_SHIFT (48)
17 17 
18 18 /* Valid address length */
19 - #define XRANGE_SHADOW_SHIFT (PGDIR_SHIFT + PAGE_SHIFT - 3)
19 + #define XRANGE_SHADOW_SHIFT min(cpu_vabits, VA_BITS)
20 20 /* Used for taking out the valid address */
21 21 #define XRANGE_SHADOW_MASK GENMASK_ULL(XRANGE_SHADOW_SHIFT - 1, 0)
22 22 /* One segment whole address space size */
+1 -1
arch/loongarch/include/asm/loongarch.h
···
250 250 #define CSR_ESTAT_IS_WIDTH 15
251 251 #define CSR_ESTAT_IS (_ULCAST_(0x7fff) << CSR_ESTAT_IS_SHIFT)
252 252 
253 - #define LOONGARCH_CSR_ERA 0x6 /* ERA */
253 + #define LOONGARCH_CSR_ERA 0x6 /* Exception return address */
254 254 
255 255 #define LOONGARCH_CSR_BADV 0x7 /* Bad virtual address */
256 256 
+11
arch/loongarch/include/asm/pgalloc.h
···
10 10 
11 11 #define __HAVE_ARCH_PMD_ALLOC_ONE
12 12 #define __HAVE_ARCH_PUD_ALLOC_ONE
13 + #define __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL
13 14 #include <asm-generic/pgalloc.h>
14 15 
15 16 static inline void pmd_populate_kernel(struct mm_struct *mm,
···
44 43 extern void pagetable_init(void);
45 44 
46 45 extern pgd_t *pgd_alloc(struct mm_struct *mm);
46 + 
47 + static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
48 + {
49 + pte_t *pte = __pte_alloc_one_kernel(mm);
50 + 
51 + if (pte)
52 + kernel_pte_init(pte);
53 + 
54 + return pte;
55 + }
47 56 
48 57 #define __pte_free_tlb(tlb, pte, address) \
49 58 do { \
+7 -28
arch/loongarch/include/asm/pgtable.h
···
269 269 extern void pgd_init(void *addr);
270 270 extern void pud_init(void *addr);
271 271 extern void pmd_init(void *addr);
272 + extern void kernel_pte_init(void *addr);
272 273 
273 274 /*
274 275 * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
···
326 325 {
327 326 WRITE_ONCE(*ptep, pteval);
328 327 
329 - if (pte_val(pteval) & _PAGE_GLOBAL) {
330 - pte_t *buddy = ptep_buddy(ptep);
331 - /*
332 - * Make sure the buddy is global too (if it's !none,
333 - * it better already be global)
334 - */
335 - if (pte_none(ptep_get(buddy))) {
336 328 #ifdef CONFIG_SMP
337 - /*
338 - * For SMP, multiple CPUs can race, so we need
339 - * to do this atomically.
340 - */
341 - __asm__ __volatile__(
342 - __AMOR "$zero, %[global], %[buddy] \n"
343 - : [buddy] "+ZB" (buddy->pte)
344 - : [global] "r" (_PAGE_GLOBAL)
345 - : "memory");
346 - 
347 - DBAR(0b11000); /* o_wrw = 0b11000 */
348 - #else /* !CONFIG_SMP */
349 - WRITE_ONCE(*buddy, __pte(pte_val(ptep_get(buddy)) | _PAGE_GLOBAL));
350 - #endif /* CONFIG_SMP */
351 - }
352 - }
329 + if (pte_val(pteval) & _PAGE_GLOBAL)
330 + DBAR(0b11000); /* o_wrw = 0b11000 */
331 + #endif
353 332 }
354 333 
355 334 static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
356 335 {
357 - /* Preserve global status for the pair */
358 - if (pte_val(ptep_get(ptep_buddy(ptep))) & _PAGE_GLOBAL)
359 - set_pte(ptep, __pte(_PAGE_GLOBAL));
360 - else
361 - set_pte(ptep, __pte(0));
336 + pte_t pte = ptep_get(ptep);
337 + pte_val(pte) &= _PAGE_GLOBAL;
338 + set_pte(ptep, pte);
362 339 }
363 340 
364 341 #define PGD_T_LOG2 (__builtin_ffs(sizeof(pgd_t)) - 1)
+8 -6
arch/loongarch/kernel/process.c
··· 293 293 { 294 294 unsigned long top = TASK_SIZE & PAGE_MASK; 295 295 296 - /* Space for the VDSO & data page */ 297 - top -= PAGE_ALIGN(current->thread.vdso->size); 298 - top -= VVAR_SIZE; 296 + if (current->thread.vdso) { 297 + /* Space for the VDSO & data page */ 298 + top -= PAGE_ALIGN(current->thread.vdso->size); 299 + top -= VVAR_SIZE; 299 300 300 - /* Space to randomize the VDSO base */ 301 - if (current->flags & PF_RANDOMIZE) 302 - top -= VDSO_RANDOMIZE_SIZE; 301 + /* Space to randomize the VDSO base */ 302 + if (current->flags & PF_RANDOMIZE) 303 + top -= VDSO_RANDOMIZE_SIZE; 304 + } 303 305 304 306 return top; 305 307 }
+2 -1
arch/loongarch/kernel/setup.c
··· 55 55 #define SMBIOS_FREQHIGH_OFFSET 0x17 56 56 #define SMBIOS_FREQLOW_MASK 0xFF 57 57 #define SMBIOS_CORE_PACKAGE_OFFSET 0x23 58 + #define SMBIOS_THREAD_PACKAGE_OFFSET 0x25 58 59 #define LOONGSON_EFI_ENABLE (1 << 3) 59 60 60 61 unsigned long fw_arg0, fw_arg1, fw_arg2; ··· 126 125 cpu_clock_freq = freq_temp * 1000000; 127 126 128 127 loongson_sysconf.cpuname = (void *)dmi_string_parse(dm, dmi_data[16]); 129 - loongson_sysconf.cores_per_package = *(dmi_data + SMBIOS_CORE_PACKAGE_OFFSET); 128 + loongson_sysconf.cores_per_package = *(dmi_data + SMBIOS_THREAD_PACKAGE_OFFSET); 130 129 131 130 pr_info("CpuClock = %llu\n", cpu_clock_freq); 132 131 }
+5
arch/loongarch/kernel/traps.c
··· 555 555 #else 556 556 unsigned int *pc; 557 557 558 + if (regs->csr_prmd & CSR_PRMD_PIE) 559 + local_irq_enable(); 560 + 558 561 perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, regs->csr_badvaddr); 559 562 560 563 /* ··· 582 579 die_if_kernel("Kernel ale access", regs); 583 580 force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr); 584 581 out: 582 + if (regs->csr_prmd & CSR_PRMD_PIE) 583 + local_irq_disable(); 585 584 #endif 586 585 irqentry_exit(regs, state); 587 586 }
+4 -4
arch/loongarch/kernel/vdso.c
··· 34 34 struct loongarch_vdso_data vdata; 35 35 } loongarch_vdso_data __page_aligned_data; 36 36 37 - static struct page *vdso_pages[] = { NULL }; 38 37 struct vdso_data *vdso_data = generic_vdso_data.data; 39 38 struct vdso_pcpu_data *vdso_pdata = loongarch_vdso_data.vdata.pdata; 40 39 struct vdso_rng_data *vdso_rng_data = &loongarch_vdso_data.vdata.rng_data; ··· 84 85 85 86 struct loongarch_vdso_info vdso_info = { 86 87 .vdso = vdso_start, 87 - .size = PAGE_SIZE, 88 88 .code_mapping = { 89 89 .name = "[vdso]", 90 - .pages = vdso_pages, 91 90 .mremap = vdso_mremap, 92 91 }, 93 92 .data_mapping = { ··· 100 103 unsigned long i, cpu, pfn; 101 104 102 105 BUG_ON(!PAGE_ALIGNED(vdso_info.vdso)); 103 - BUG_ON(!PAGE_ALIGNED(vdso_info.size)); 104 106 105 107 for_each_possible_cpu(cpu) 106 108 vdso_pdata[cpu].node = cpu_to_node(cpu); 109 + 110 + vdso_info.size = PAGE_ALIGN(vdso_end - vdso_start); 111 + vdso_info.code_mapping.pages = 112 + kcalloc(vdso_info.size / PAGE_SIZE, sizeof(struct page *), GFP_KERNEL); 107 113 108 114 pfn = __phys_to_pfn(__pa_symbol(vdso_info.vdso)); 109 115 for (i = 0; i < vdso_info.size / PAGE_SIZE; i++)
+4 -3
arch/loongarch/kvm/timer.c
··· 161 161 if (kvm_vcpu_is_blocking(vcpu)) { 162 162 163 163 /* 164 - * HRTIMER_MODE_PINNED is suggested since vcpu may run in 165 - * the same physical cpu in next time 164 + * HRTIMER_MODE_PINNED_HARD is suggested since vcpu may run in 165 + * the same physical cpu in next time, and the timer should run 166 + * in hardirq context even in the PREEMPT_RT case. 166 167 */ 167 - hrtimer_start(&vcpu->arch.swtimer, expire, HRTIMER_MODE_ABS_PINNED); 168 + hrtimer_start(&vcpu->arch.swtimer, expire, HRTIMER_MODE_ABS_PINNED_HARD); 168 169 } 169 170 } 170 171
+1 -1
arch/loongarch/kvm/vcpu.c
··· 1457 1457 vcpu->arch.vpid = 0; 1458 1458 vcpu->arch.flush_gpa = INVALID_GPA; 1459 1459 1460 - hrtimer_init(&vcpu->arch.swtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED); 1460 + hrtimer_init(&vcpu->arch.swtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED_HARD); 1461 1461 vcpu->arch.swtimer.function = kvm_swtimer_wakeup; 1462 1462 1463 1463 vcpu->arch.handle_exit = kvm_handle_exit;
+2
arch/loongarch/mm/init.c
··· 201 201 pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE); 202 202 if (!pte) 203 203 panic("%s: Failed to allocate memory\n", __func__); 204 + 204 205 pmd_populate_kernel(&init_mm, pmd, pte); 206 + kernel_pte_init(pte); 205 207 } 206 208 207 209 return pte_offset_kernel(pmd, addr);
+20
arch/loongarch/mm/pgtable.c
··· 116 116 EXPORT_SYMBOL_GPL(pud_init); 117 117 #endif 118 118 119 + void kernel_pte_init(void *addr) 120 + { 121 + unsigned long *p, *end; 122 + 123 + p = (unsigned long *)addr; 124 + end = p + PTRS_PER_PTE; 125 + 126 + do { 127 + p[0] = _PAGE_GLOBAL; 128 + p[1] = _PAGE_GLOBAL; 129 + p[2] = _PAGE_GLOBAL; 130 + p[3] = _PAGE_GLOBAL; 131 + p[4] = _PAGE_GLOBAL; 132 + p += 8; 133 + p[-3] = _PAGE_GLOBAL; 134 + p[-2] = _PAGE_GLOBAL; 135 + p[-1] = _PAGE_GLOBAL; 136 + } while (p != end); 137 + } 138 + 119 139 pmd_t mk_pmd(struct page *page, pgprot_t prot) 120 140 { 121 141 pmd_t pmd;
+1
arch/powerpc/platforms/powernv/opal-irqchip.c
··· 282 282 name, NULL); 283 283 if (rc) { 284 284 pr_warn("Error %d requesting OPAL irq %d\n", rc, (int)r->start); 285 + kfree(name); 285 286 continue; 286 287 } 287 288 }
+4 -4
arch/riscv/kvm/aia_imsic.c
··· 55 55 /* IMSIC SW-file */ 56 56 struct imsic_mrif *swfile; 57 57 phys_addr_t swfile_pa; 58 - spinlock_t swfile_extirq_lock; 58 + raw_spinlock_t swfile_extirq_lock; 59 59 }; 60 60 61 61 #define imsic_vs_csr_read(__c) \ ··· 622 622 * interruptions between reading topei and updating pending status. 623 623 */ 624 624 625 - spin_lock_irqsave(&imsic->swfile_extirq_lock, flags); 625 + raw_spin_lock_irqsave(&imsic->swfile_extirq_lock, flags); 626 626 627 627 if (imsic_mrif_atomic_read(mrif, &mrif->eidelivery) && 628 628 imsic_mrif_topei(mrif, imsic->nr_eix, imsic->nr_msis)) ··· 630 630 else 631 631 kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_EXT); 632 632 633 - spin_unlock_irqrestore(&imsic->swfile_extirq_lock, flags); 633 + raw_spin_unlock_irqrestore(&imsic->swfile_extirq_lock, flags); 634 634 } 635 635 636 636 static void imsic_swfile_read(struct kvm_vcpu *vcpu, bool clear, ··· 1051 1051 } 1052 1052 imsic->swfile = page_to_virt(swfile_page); 1053 1053 imsic->swfile_pa = page_to_phys(swfile_page); 1054 - spin_lock_init(&imsic->swfile_extirq_lock); 1054 + raw_spin_lock_init(&imsic->swfile_extirq_lock); 1055 1055 1056 1056 /* Setup IO device */ 1057 1057 kvm_iodevice_init(&imsic->iodev, &imsic_iodoev_ops);
+5 -3
arch/riscv/net/bpf_jit_comp64.c
··· 18 18 #define RV_MAX_REG_ARGS 8 19 19 #define RV_FENTRY_NINSNS 2 20 20 #define RV_FENTRY_NBYTES (RV_FENTRY_NINSNS * 4) 21 + #define RV_KCFI_NINSNS (IS_ENABLED(CONFIG_CFI_CLANG) ? 1 : 0) 21 22 /* imm that allows emit_imm to emit max count insns */ 22 23 #define RV_MAX_COUNT_IMM 0x7FFF7FF7FF7FF7FF 23 24 ··· 272 271 if (!is_tail_call) 273 272 emit_addiw(RV_REG_A0, RV_REG_A5, 0, ctx); 274 273 emit_jalr(RV_REG_ZERO, is_tail_call ? RV_REG_T3 : RV_REG_RA, 275 - is_tail_call ? (RV_FENTRY_NINSNS + 1) * 4 : 0, /* skip reserved nops and TCC init */ 274 + /* kcfi, fentry and TCC init insns will be skipped on tailcall */ 275 + is_tail_call ? (RV_KCFI_NINSNS + RV_FENTRY_NINSNS + 1) * 4 : 0, 276 276 ctx); 277 277 } 278 278 ··· 550 548 rv_lr_w(r0, 0, rd, 0, 0), ctx); 551 549 jmp_offset = ninsns_rvoff(8); 552 550 emit(rv_bne(RV_REG_T2, r0, jmp_offset >> 1), ctx); 553 - emit(is64 ? rv_sc_d(RV_REG_T3, rs, rd, 0, 0) : 554 - rv_sc_w(RV_REG_T3, rs, rd, 0, 0), ctx); 551 + emit(is64 ? rv_sc_d(RV_REG_T3, rs, rd, 0, 1) : 552 + rv_sc_w(RV_REG_T3, rs, rd, 0, 1), ctx); 555 553 jmp_offset = ninsns_rvoff(-6); 556 554 emit(rv_bne(RV_REG_T3, 0, jmp_offset >> 1), ctx); 557 555 emit(rv_fence(0x3, 0x3), ctx);
+11 -2
arch/s390/configs/debug_defconfig
··· 50 50 CONFIG_HZ_100=y 51 51 CONFIG_CERT_STORE=y 52 52 CONFIG_EXPOLINE=y 53 - # CONFIG_EXPOLINE_EXTERN is not set 54 53 CONFIG_EXPOLINE_AUTO=y 55 54 CONFIG_CHSC_SCH=y 56 55 CONFIG_VFIO_CCW=m ··· 94 95 CONFIG_ZSWAP=y 95 96 CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y 96 97 CONFIG_ZSMALLOC_STAT=y 98 + CONFIG_SLAB_BUCKETS=y 97 99 CONFIG_SLUB_STATS=y 98 100 # CONFIG_COMPAT_BRK is not set 99 101 CONFIG_MEMORY_HOTPLUG=y ··· 426 426 # CONFIG_FW_LOADER is not set 427 427 CONFIG_CONNECTOR=y 428 428 CONFIG_ZRAM=y 429 + CONFIG_ZRAM_BACKEND_LZ4=y 430 + CONFIG_ZRAM_BACKEND_LZ4HC=y 431 + CONFIG_ZRAM_BACKEND_ZSTD=y 432 + CONFIG_ZRAM_BACKEND_DEFLATE=y 433 + CONFIG_ZRAM_BACKEND_842=y 434 + CONFIG_ZRAM_BACKEND_LZO=y 435 + CONFIG_ZRAM_DEF_COMP_DEFLATE=y 429 436 CONFIG_BLK_DEV_LOOP=m 430 437 CONFIG_BLK_DEV_DRBD=m 431 438 CONFIG_BLK_DEV_NBD=m ··· 493 486 CONFIG_DM_FLAKEY=m 494 487 CONFIG_DM_VERITY=m 495 488 CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y 489 + CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG_PLATFORM_KEYRING=y 496 490 CONFIG_DM_SWITCH=m 497 491 CONFIG_DM_INTEGRITY=m 498 492 CONFIG_DM_VDO=m ··· 543 535 CONFIG_MLX4_EN=m 544 536 CONFIG_MLX5_CORE=m 545 537 CONFIG_MLX5_CORE_EN=y 538 + # CONFIG_NET_VENDOR_META is not set 546 539 # CONFIG_NET_VENDOR_MICREL is not set 547 540 # CONFIG_NET_VENDOR_MICROCHIP is not set 548 541 # CONFIG_NET_VENDOR_MICROSEMI is not set ··· 704 695 CONFIG_NFSD_V3_ACL=y 705 696 CONFIG_NFSD_V4=y 706 697 CONFIG_NFSD_V4_SECURITY_LABEL=y 698 + # CONFIG_NFSD_LEGACY_CLIENT_TRACKING is not set 707 699 CONFIG_CIFS=m 708 700 CONFIG_CIFS_UPCALL=y 709 701 CONFIG_CIFS_XATTR=y ··· 750 740 CONFIG_CRYPTO_ECDH=m 751 741 CONFIG_CRYPTO_ECDSA=m 752 742 CONFIG_CRYPTO_ECRDSA=m 753 - CONFIG_CRYPTO_SM2=m 754 743 CONFIG_CRYPTO_CURVE25519=m 755 744 CONFIG_CRYPTO_AES_TI=m 756 745 CONFIG_CRYPTO_ANUBIS=m
+12 -2
arch/s390/configs/defconfig
··· 48 48 CONFIG_HZ_100=y 49 49 CONFIG_CERT_STORE=y 50 50 CONFIG_EXPOLINE=y 51 - # CONFIG_EXPOLINE_EXTERN is not set 52 51 CONFIG_EXPOLINE_AUTO=y 53 52 CONFIG_CHSC_SCH=y 54 53 CONFIG_VFIO_CCW=m ··· 88 89 CONFIG_ZSWAP=y 89 90 CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y 90 91 CONFIG_ZSMALLOC_STAT=y 92 + CONFIG_SLAB_BUCKETS=y 91 93 # CONFIG_COMPAT_BRK is not set 92 94 CONFIG_MEMORY_HOTPLUG=y 93 95 CONFIG_MEMORY_HOTREMOVE=y ··· 416 416 # CONFIG_FW_LOADER is not set 417 417 CONFIG_CONNECTOR=y 418 418 CONFIG_ZRAM=y 419 + CONFIG_ZRAM_BACKEND_LZ4=y 420 + CONFIG_ZRAM_BACKEND_LZ4HC=y 421 + CONFIG_ZRAM_BACKEND_ZSTD=y 422 + CONFIG_ZRAM_BACKEND_DEFLATE=y 423 + CONFIG_ZRAM_BACKEND_842=y 424 + CONFIG_ZRAM_BACKEND_LZO=y 425 + CONFIG_ZRAM_DEF_COMP_DEFLATE=y 419 426 CONFIG_BLK_DEV_LOOP=m 420 427 CONFIG_BLK_DEV_DRBD=m 421 428 CONFIG_BLK_DEV_NBD=m ··· 483 476 CONFIG_DM_FLAKEY=m 484 477 CONFIG_DM_VERITY=m 485 478 CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y 479 + CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG_PLATFORM_KEYRING=y 486 480 CONFIG_DM_SWITCH=m 487 481 CONFIG_DM_INTEGRITY=m 488 482 CONFIG_DM_VDO=m ··· 533 525 CONFIG_MLX4_EN=m 534 526 CONFIG_MLX5_CORE=m 535 527 CONFIG_MLX5_CORE_EN=y 528 + # CONFIG_NET_VENDOR_META is not set 536 529 # CONFIG_NET_VENDOR_MICREL is not set 537 530 # CONFIG_NET_VENDOR_MICROCHIP is not set 538 531 # CONFIG_NET_VENDOR_MICROSEMI is not set ··· 691 682 CONFIG_NFSD_V3_ACL=y 692 683 CONFIG_NFSD_V4=y 693 684 CONFIG_NFSD_V4_SECURITY_LABEL=y 685 + # CONFIG_NFSD_LEGACY_CLIENT_TRACKING is not set 694 686 CONFIG_CIFS=m 695 687 CONFIG_CIFS_UPCALL=y 696 688 CONFIG_CIFS_XATTR=y ··· 736 726 CONFIG_CRYPTO_ECDH=m 737 727 CONFIG_CRYPTO_ECDSA=m 738 728 CONFIG_CRYPTO_ECRDSA=m 739 - CONFIG_CRYPTO_SM2=m 740 729 CONFIG_CRYPTO_CURVE25519=m 741 730 CONFIG_CRYPTO_AES_TI=m 742 731 CONFIG_CRYPTO_ANUBIS=m ··· 776 767 CONFIG_CRYPTO_LZ4HC=m 777 768 CONFIG_CRYPTO_ZSTD=m 778 769 CONFIG_CRYPTO_ANSI_CPRNG=m 770 + CONFIG_CRYPTO_JITTERENTROPY_OSR=1 779 771 CONFIG_CRYPTO_USER_API_HASH=m 780 772 
CONFIG_CRYPTO_USER_API_SKCIPHER=m 781 773 CONFIG_CRYPTO_USER_API_RNG=m
+1
arch/s390/configs/zfcpdump_defconfig
··· 49 49 # CONFIG_HVC_IUCV is not set 50 50 # CONFIG_HW_RANDOM_S390 is not set 51 51 # CONFIG_HMC_DRV is not set 52 + # CONFIG_S390_UV_UAPI is not set 52 53 # CONFIG_S390_TAPE is not set 53 54 # CONFIG_VMCP is not set 54 55 # CONFIG_MONWRITER is not set
+1
arch/s390/include/asm/perf_event.h
··· 49 49 }; 50 50 51 51 #define perf_arch_fetch_caller_regs(regs, __ip) do { \ 52 + (regs)->psw.mask = 0; \ 52 53 (regs)->psw.addr = (__ip); \ 53 54 (regs)->gprs[15] = (unsigned long)__builtin_frame_address(0) - \ 54 55 offsetof(struct stack_frame, back_chain); \
+1 -1
arch/s390/kvm/diag.c
··· 77 77 vcpu->stat.instruction_diagnose_258++; 78 78 if (vcpu->run->s.regs.gprs[rx] & 7) 79 79 return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION); 80 - rc = read_guest(vcpu, vcpu->run->s.regs.gprs[rx], rx, &parm, sizeof(parm)); 80 + rc = read_guest_real(vcpu, vcpu->run->s.regs.gprs[rx], &parm, sizeof(parm)); 81 81 if (rc) 82 82 return kvm_s390_inject_prog_cond(vcpu, rc); 83 83 if (parm.parm_version != 2 || parm.parm_len < 5 || parm.code != 0x258)
+4
arch/s390/kvm/gaccess.c
··· 828 828 const gfn_t gfn = gpa_to_gfn(gpa); 829 829 int rc; 830 830 831 + if (!gfn_to_memslot(kvm, gfn)) 832 + return PGM_ADDRESSING; 831 833 if (mode == GACC_STORE) 832 834 rc = kvm_write_guest_page(kvm, gfn, data, offset, len); 833 835 else ··· 987 985 gra += fragment_len; 988 986 data += fragment_len; 989 987 } 988 + if (rc > 0) 989 + vcpu->arch.pgm.code = rc; 990 990 return rc; 991 991 } 992 992
+8 -6
arch/s390/kvm/gaccess.h
··· 405 405 * @len: number of bytes to copy 406 406 * 407 407 * Copy @len bytes from @data (kernel space) to @gra (guest real address). 408 - * It is up to the caller to ensure that the entire guest memory range is 409 - * valid memory before calling this function. 410 408 * Guest low address and key protection are not checked. 411 409 * 412 - * Returns zero on success or -EFAULT on error. 410 + * Returns zero on success, -EFAULT when copying from @data failed, or 411 + * PGM_ADDRESSING in case @gra is outside a memslot. In this case, pgm check info 412 + * is also stored to allow injecting into the guest (if applicable) using 413 + * kvm_s390_inject_prog_cond(). 413 414 * 414 415 * If an error occurs data may have been copied partially to guest memory. 415 416 */ ··· 429 428 * @len: number of bytes to copy 430 429 * 431 430 * Copy @len bytes from @gra (guest real address) to @data (kernel space). 432 - * It is up to the caller to ensure that the entire guest memory range is 433 - * valid memory before calling this function. 434 431 * Guest key protection is not checked. 435 432 * 436 - * Returns zero on success or -EFAULT on error. 433 + * Returns zero on success, -EFAULT when copying to @data failed, or 434 + * PGM_ADDRESSING in case @gra is outside a memslot. In this case, pgm check info 435 + * is also stored to allow injecting into the guest (if applicable) using 436 + * kvm_s390_inject_prog_cond(). 437 437 * 438 438 * If an error occurs data may have been copied partially to kernel space. 439 439 */
··· 405 405 * @len: number of bytes to copy 406 406 * 407 407 * Copy @len bytes from @data (kernel space) to @gra (guest real address). 408 - * It is up to the caller to ensure that the entire guest memory range is 409 - * valid memory before calling this function. 410 408 * Guest low address and key protection are not checked. 411 409 * 412 - * Returns zero on success or -EFAULT on error. 410 + * Returns zero on success, -EFAULT when copying from @data failed, or 411 + * PGM_ADDRESSING in case @gra is outside a memslot. In this case, pgm check info 412 + * is also stored to allow injecting into the guest (if applicable) using 413 + * kvm_s390_inject_prog_cond(). 413 414 * 414 415 * If an error occurs data may have been copied partially to guest memory. 415 416 */ ··· 429 428 * @len: number of bytes to copy 430 429 * 431 430 * Copy @len bytes from @gra (guest real address) to @data (kernel space). 432 - * It is up to the caller to ensure that the entire guest memory range is 433 - * valid memory before calling this function. 434 431 * Guest key protection is not checked. 435 432 * 436 - * Returns zero on success or -EFAULT on error. 433 + * Returns zero on success, -EFAULT when copying to @data failed, or 434 + * PGM_ADDRESSING in case @gra is outside a memslot. In this case, pgm check info 435 + * is also stored to allow injecting into the guest (if applicable) using 436 + * kvm_s390_inject_prog_cond(). 437 437 * 438 438 * If an error occurs data may have been copied partially to kernel space. 439 439 */
+9 -8
arch/s390/pci/pci_event.c
··· 280 280 goto no_pdev; 281 281 282 282 switch (ccdf->pec) { 283 - case 0x003a: /* Service Action or Error Recovery Successful */ 283 + case 0x002a: /* Error event concerns FMB */ 284 + case 0x002b: 285 + case 0x002c: 286 + break; 287 + case 0x0040: /* Service Action or Error Recovery Failed */ 288 + case 0x003b: 289 + zpci_event_io_failure(pdev, pci_channel_io_perm_failure); 290 + break; 291 + default: /* PCI function left in the error state attempt to recover */ 284 292 ers_res = zpci_event_attempt_error_recovery(pdev); 285 293 if (ers_res != PCI_ERS_RESULT_RECOVERED) 286 294 zpci_event_io_failure(pdev, pci_channel_io_perm_failure); 287 - break; 288 - default: 289 - /* 290 - * Mark as frozen not permanently failed because the device 291 - * could be subsequently recovered by the platform. 292 - */ 293 - zpci_event_io_failure(pdev, pci_channel_io_frozen); 294 295 break; 295 296 } 296 297 pci_dev_put(pdev);
+5
arch/x86/entry/entry.S
··· 9 9 #include <asm/unwind_hints.h> 10 10 #include <asm/segment.h> 11 11 #include <asm/cache.h> 12 + #include <asm/cpufeatures.h> 13 + #include <asm/nospec-branch.h> 12 14 13 15 #include "calling.h" 14 16 ··· 21 19 movl $PRED_CMD_IBPB, %eax 22 20 xorl %edx, %edx 23 21 wrmsr 22 + 23 + /* Make sure IBPB clears return stack predictions too. */ 24 + FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_BUG_IBPB_NO_RET 24 25 RET 25 26 SYM_FUNC_END(entry_ibpb) 26 27 /* For KVM */
+4 -2
arch/x86/entry/entry_32.S
··· 871 871 872 872 /* Now ready to switch the cr3 */ 873 873 SWITCH_TO_USER_CR3 scratch_reg=%eax 874 + /* Clobbers ZF */ 875 + CLEAR_CPU_BUFFERS 874 876 875 877 /* 876 878 * Restore all flags except IF. (We restore IF separately because ··· 883 881 BUG_IF_WRONG_CR3 no_user_check=1 884 882 popfl 885 883 popl %eax 886 - CLEAR_CPU_BUFFERS 887 884 888 885 /* 889 886 * Return back to the vDSO, which will pop ecx and edx. ··· 1145 1144 1146 1145 /* Not on SYSENTER stack. */ 1147 1146 call exc_nmi 1148 - CLEAR_CPU_BUFFERS 1149 1147 jmp .Lnmi_return 1150 1148 1151 1149 .Lnmi_from_sysenter_stack: ··· 1165 1165 1166 1166 CHECK_AND_APPLY_ESPFIX 1167 1167 RESTORE_ALL_NMI cr3_reg=%edi pop=4 1168 + CLEAR_CPU_BUFFERS 1168 1169 jmp .Lirq_return 1169 1170 1170 1171 #ifdef CONFIG_X86_ESPFIX32 ··· 1207 1206 * 1 - orig_ax 1208 1207 */ 1209 1208 lss (1+5+6)*4(%esp), %esp # back to espfix stack 1209 + CLEAR_CPU_BUFFERS 1210 1210 jmp .Lirq_return 1211 1211 #endif 1212 1212 SYM_CODE_END(asm_exc_nmi)
+3 -1
arch/x86/include/asm/cpufeatures.h
··· 215 215 #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* Disable Speculative Store Bypass. */ 216 216 #define X86_FEATURE_LS_CFG_SSBD ( 7*32+24) /* AMD SSBD implementation via LS_CFG MSR */ 217 217 #define X86_FEATURE_IBRS ( 7*32+25) /* "ibrs" Indirect Branch Restricted Speculation */ 218 - #define X86_FEATURE_IBPB ( 7*32+26) /* "ibpb" Indirect Branch Prediction Barrier */ 218 + #define X86_FEATURE_IBPB ( 7*32+26) /* "ibpb" Indirect Branch Prediction Barrier without a guaranteed RSB flush */ 219 219 #define X86_FEATURE_STIBP ( 7*32+27) /* "stibp" Single Thread Indirect Branch Predictors */ 220 220 #define X86_FEATURE_ZEN ( 7*32+28) /* Generic flag for all Zen and newer */ 221 221 #define X86_FEATURE_L1TF_PTEINV ( 7*32+29) /* L1TF workaround PTE inversion */ ··· 348 348 #define X86_FEATURE_CPPC (13*32+27) /* "cppc" Collaborative Processor Performance Control */ 349 349 #define X86_FEATURE_AMD_PSFD (13*32+28) /* Predictive Store Forwarding Disable */ 350 350 #define X86_FEATURE_BTC_NO (13*32+29) /* Not vulnerable to Branch Type Confusion */ 351 + #define X86_FEATURE_AMD_IBPB_RET (13*32+30) /* IBPB clears return address predictor */ 351 352 #define X86_FEATURE_BRS (13*32+31) /* "brs" Branch Sampling available */ 352 353 353 354 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */ ··· 524 523 #define X86_BUG_DIV0 X86_BUG(1*32 + 1) /* "div0" AMD DIV0 speculation bug */ 525 524 #define X86_BUG_RFDS X86_BUG(1*32 + 2) /* "rfds" CPU is vulnerable to Register File Data Sampling */ 526 525 #define X86_BUG_BHI X86_BUG(1*32 + 3) /* "bhi" CPU is affected by Branch History Injection */ 526 + #define X86_BUG_IBPB_NO_RET X86_BUG(1*32 + 4) /* "ibpb_no_ret" IBPB omits return target predictions */ 527 527 #endif /* _ASM_X86_CPUFEATURES_H */
+10 -1
arch/x86/include/asm/nospec-branch.h
··· 323 323 * Note: Only the memory operand variant of VERW clears the CPU buffers. 324 324 */ 325 325 .macro CLEAR_CPU_BUFFERS 326 - ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF 326 + #ifdef CONFIG_X86_64 327 + ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF 328 + #else 329 + /* 330 + * In 32bit mode, the memory operand must be a %cs reference. The data 331 + * segments may not be usable (vm86 mode), and the stack segment may not 332 + * be flat (ESPFIX32). 333 + */ 334 + ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF 335 + #endif 327 336 .endm 328 337 329 338 #ifdef CONFIG_X86_64
+2
arch/x86/kernel/amd_nb.c
··· 44 44 #define PCI_DEVICE_ID_AMD_19H_M70H_DF_F4 0x14f4 45 45 #define PCI_DEVICE_ID_AMD_19H_M78H_DF_F4 0x12fc 46 46 #define PCI_DEVICE_ID_AMD_1AH_M00H_DF_F4 0x12c4 47 + #define PCI_DEVICE_ID_AMD_1AH_M20H_DF_F4 0x16fc 47 48 #define PCI_DEVICE_ID_AMD_1AH_M60H_DF_F4 0x124c 48 49 #define PCI_DEVICE_ID_AMD_1AH_M70H_DF_F4 0x12bc 49 50 #define PCI_DEVICE_ID_AMD_MI200_DF_F4 0x14d4 ··· 128 127 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M78H_DF_F4) }, 129 128 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F4) }, 130 129 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M00H_DF_F4) }, 130 + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M20H_DF_F4) }, 131 131 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M60H_DF_F4) }, 132 132 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M70H_DF_F4) }, 133 133 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_MI200_DF_F4) },
+13 -1
arch/x86/kernel/apic/apic.c
··· 440 440 v = apic_read(APIC_LVTT); 441 441 v |= (APIC_LVT_MASKED | LOCAL_TIMER_VECTOR); 442 442 apic_write(APIC_LVTT, v); 443 - apic_write(APIC_TMICT, 0); 443 + 444 + /* 445 + * Setting APIC_LVT_MASKED (above) should be enough to tell 446 + * the hardware that this timer will never fire. But AMD 447 + * erratum 411 and some Intel CPU behavior circa 2024 say 448 + * otherwise. Time for belt and suspenders programming: mask 449 + * the timer _and_ zero the counter registers: 450 + */ 451 + if (v & APIC_LVT_TIMER_TSCDEADLINE) 452 + wrmsrl(MSR_IA32_TSC_DEADLINE, 0); 453 + else 454 + apic_write(APIC_TMICT, 0); 455 + 444 456 return 0; 445 457 } 446 458
+2 -1
arch/x86/kernel/cpu/amd.c
··· 1202 1202 if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) 1203 1203 return; 1204 1204 1205 - on_each_cpu(zenbleed_check_cpu, NULL, 1); 1205 + if (cpu_feature_enabled(X86_FEATURE_ZEN2)) 1206 + on_each_cpu(zenbleed_check_cpu, NULL, 1); 1206 1207 }
+32
arch/x86/kernel/cpu/bugs.c
··· 1115 1115 1116 1116 case RETBLEED_MITIGATION_IBPB: 1117 1117 setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB); 1118 + 1119 + /* 1120 + * IBPB on entry already obviates the need for 1121 + * software-based untraining so clear those in case some 1122 + * other mitigation like SRSO has selected them. 1123 + */ 1124 + setup_clear_cpu_cap(X86_FEATURE_UNRET); 1125 + setup_clear_cpu_cap(X86_FEATURE_RETHUNK); 1126 + 1118 1127 setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT); 1119 1128 mitigate_smt = true; 1129 + 1130 + /* 1131 + * There is no need for RSB filling: entry_ibpb() ensures 1132 + * all predictions, including the RSB, are invalidated, 1133 + * regardless of IBPB implementation. 1134 + */ 1135 + setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT); 1136 + 1120 1137 break; 1121 1138 1122 1139 case RETBLEED_MITIGATION_STUFF: ··· 2644 2627 if (has_microcode) { 2645 2628 setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB); 2646 2629 srso_mitigation = SRSO_MITIGATION_IBPB; 2630 + 2631 + /* 2632 + * IBPB on entry already obviates the need for 2633 + * software-based untraining so clear those in case some 2634 + * other mitigation like Retbleed has selected them. 2635 + */ 2636 + setup_clear_cpu_cap(X86_FEATURE_UNRET); 2637 + setup_clear_cpu_cap(X86_FEATURE_RETHUNK); 2647 2638 } 2648 2639 } else { 2649 2640 pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n"); ··· 2663 2638 if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) { 2664 2639 setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT); 2665 2640 srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT; 2641 + 2642 + /* 2643 + * There is no need for RSB filling: entry_ibpb() ensures 2644 + * all predictions, including the RSB, are invalidated, 2645 + * regardless of IBPB implementation. 2646 + */ 2647 + setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT); 2666 2648 } 2667 2649 } else { 2668 2650 pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
+3
arch/x86/kernel/cpu/common.c
··· 1443 1443 boot_cpu_has(X86_FEATURE_HYPERVISOR))) 1444 1444 setup_force_cpu_bug(X86_BUG_BHI); 1445 1445 1446 + if (cpu_has(c, X86_FEATURE_AMD_IBPB) && !cpu_has(c, X86_FEATURE_AMD_IBPB_RET)) 1447 + setup_force_cpu_bug(X86_BUG_IBPB_NO_RET); 1448 + 1446 1449 if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) 1447 1450 return; 1448 1451
+2 -2
arch/x86/kernel/cpu/resctrl/core.c
··· 207 207 return false; 208 208 } 209 209 210 - static bool __get_mem_config_intel(struct rdt_resource *r) 210 + static __init bool __get_mem_config_intel(struct rdt_resource *r) 211 211 { 212 212 struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r); 213 213 union cpuid_0x10_3_eax eax; ··· 241 241 return true; 242 242 } 243 243 244 - static bool __rdt_get_mem_config_amd(struct rdt_resource *r) 244 + static __init bool __rdt_get_mem_config_amd(struct rdt_resource *r) 245 245 { 246 246 struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r); 247 247 u32 eax, ebx, ecx, edx, subleaf;
+14 -9
arch/x86/kernel/cpu/resctrl/ctrlmondata.c
··· 29 29 * hardware. The allocated bandwidth percentage is rounded to the next 30 30 * control step available on the hardware. 31 31 */ 32 - static bool bw_validate(char *buf, unsigned long *data, struct rdt_resource *r) 32 + static bool bw_validate(char *buf, u32 *data, struct rdt_resource *r) 33 33 { 34 - unsigned long bw; 35 34 int ret; 35 + u32 bw; 36 36 37 37 /* 38 38 * Only linear delay values is supported for current Intel SKUs. ··· 42 42 return false; 43 43 } 44 44 45 - ret = kstrtoul(buf, 10, &bw); 45 + ret = kstrtou32(buf, 10, &bw); 46 46 if (ret) { 47 - rdt_last_cmd_printf("Non-decimal digit in MB value %s\n", buf); 47 + rdt_last_cmd_printf("Invalid MB value %s\n", buf); 48 48 return false; 49 49 } 50 50 51 - if ((bw < r->membw.min_bw || bw > r->default_ctrl) && 52 - !is_mba_sc(r)) { 53 - rdt_last_cmd_printf("MB value %ld out of range [%d,%d]\n", bw, 54 - r->membw.min_bw, r->default_ctrl); 51 + /* Nothing else to do if software controller is enabled. */ 52 + if (is_mba_sc(r)) { 53 + *data = bw; 54 + return true; 55 + } 56 + 57 + if (bw < r->membw.min_bw || bw > r->default_ctrl) { 58 + rdt_last_cmd_printf("MB value %u out of range [%d,%d]\n", 59 + bw, r->membw.min_bw, r->default_ctrl); 55 60 return false; 56 61 } 57 62 ··· 70 65 struct resctrl_staged_config *cfg; 71 66 u32 closid = data->rdtgrp->closid; 72 67 struct rdt_resource *r = s->res; 73 - unsigned long bw_val; 68 + u32 bw_val; 74 69 75 70 cfg = &d->staged_config[s->conf_type]; 76 71 if (cfg->have_new_ctrl) {
+4
arch/x86/kernel/kvm.c
··· 37 37 #include <asm/apic.h> 38 38 #include <asm/apicdef.h> 39 39 #include <asm/hypervisor.h> 40 + #include <asm/mtrr.h> 40 41 #include <asm/tlb.h> 41 42 #include <asm/cpuidle_haltpoll.h> 42 43 #include <asm/ptrace.h> ··· 981 980 } 982 981 kvmclock_init(); 983 982 x86_platform.apic_post_init = kvm_apic_init; 983 + 984 + /* Set WB as the default cache mode for SEV-SNP and TDX */ 985 + mtrr_overwrite_state(NULL, 0, MTRR_TYPE_WRBACK); 984 986 } 985 987 986 988 #if defined(CONFIG_AMD_MEM_ENCRYPT)
+17 -10
arch/x86/kvm/mmu/mmu.c
··· 1556 1556 { 1557 1557 bool flush = false; 1558 1558 1559 + /* 1560 + * To prevent races with vCPUs faulting in a gfn using stale data, 1561 + * zapping a gfn range must be protected by mmu_invalidate_in_progress 1562 + * (and mmu_invalidate_seq). The only exception is memslot deletion; 1563 + * in that case, SRCU synchronization ensures that SPTEs are zapped 1564 + * after all vCPUs have unlocked SRCU, guaranteeing that vCPUs see the 1565 + * invalid slot. 1566 + */ 1567 + lockdep_assert_once(kvm->mmu_invalidate_in_progress || 1568 + lockdep_is_held(&kvm->slots_lock)); 1569 + 1559 1570 if (kvm_memslots_have_rmaps(kvm)) 1560 1571 flush = __kvm_rmap_zap_gfn_range(kvm, range->slot, 1561 1572 range->start, range->end, ··· 1895 1884 if (is_obsolete_sp((_kvm), (_sp))) { \ 1896 1885 } else 1897 1886 1898 - #define for_each_gfn_valid_sp(_kvm, _sp, _gfn) \ 1887 + #define for_each_gfn_valid_sp_with_gptes(_kvm, _sp, _gfn) \ 1899 1888 for_each_valid_sp(_kvm, _sp, \ 1900 1889 &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)]) \ 1901 - if ((_sp)->gfn != (_gfn)) {} else 1902 - 1903 - #define for_each_gfn_valid_sp_with_gptes(_kvm, _sp, _gfn) \ 1904 - for_each_gfn_valid_sp(_kvm, _sp, _gfn) \ 1905 - if (!sp_has_gptes(_sp)) {} else 1890 + if ((_sp)->gfn != (_gfn) || !sp_has_gptes(_sp)) {} else 1906 1891 1907 1892 static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) 1908 1893 { ··· 7070 7063 7071 7064 /* 7072 7065 * Since accounting information is stored in struct kvm_arch_memory_slot, 7073 - * shadow pages deletion (e.g. unaccount_shadowed()) requires that all 7074 - * gfns with a shadow page have a corresponding memslot. Do so before 7075 - * the memslot goes away. 7066 + * all MMU pages that are shadowing guest PTEs must be zapped before the 7067 + * memslot is deleted, as freeing such pages after the memslot is freed 7068 + * will result in use-after-free, e.g. in unaccount_shadowed(). 
7076 7069 */ 7077 7070 for (i = 0; i < slot->npages; i++) { 7078 7071 struct kvm_mmu_page *sp; 7079 7072 gfn_t gfn = slot->base_gfn + i; 7080 7073 7081 - for_each_gfn_valid_sp(kvm, sp, gfn) 7074 + for_each_gfn_valid_sp_with_gptes(kvm, sp, gfn) 7082 7075 kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list); 7083 7076 7084 7077 if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
+5 -1
arch/x86/kvm/svm/nested.c
··· 63 63 u64 pdpte; 64 64 int ret; 65 65 66 + /* 67 + * Note, nCR3 is "assumed" to be 32-byte aligned, i.e. the CPU ignores 68 + * nCR3[4:0] when loading PDPTEs from memory. 69 + */ 66 70 ret = kvm_vcpu_read_guest_page(vcpu, gpa_to_gfn(cr3), &pdpte, 67 - offset_in_page(cr3) + index * 8, 8); 71 + (cr3 & GENMASK(11, 5)) + index * 8, 8); 68 72 if (ret) 69 73 return 0; 70 74 return pdpte;
+3 -3
arch/x86/kvm/vmx/vmx.c
··· 4888 4888 vmx->hv_deadline_tsc = -1; 4889 4889 kvm_set_cr8(vcpu, 0); 4890 4890 4891 - vmx_segment_cache_clear(vmx); 4892 - kvm_register_mark_available(vcpu, VCPU_EXREG_SEGMENTS); 4893 - 4894 4891 seg_setup(VCPU_SREG_CS); 4895 4892 vmcs_write16(GUEST_CS_SELECTOR, 0xf000); 4896 4893 vmcs_writel(GUEST_CS_BASE, 0xffff0000ul); ··· 4913 4916 4914 4917 vmcs_writel(GUEST_IDTR_BASE, 0); 4915 4918 vmcs_write32(GUEST_IDTR_LIMIT, 0xffff); 4919 + 4920 + vmx_segment_cache_clear(vmx); 4921 + kvm_register_mark_available(vcpu, VCPU_EXREG_SEGMENTS); 4916 4922 4917 4923 vmcs_write32(GUEST_ACTIVITY_STATE, GUEST_ACTIVITY_ACTIVE); 4918 4924 vmcs_write32(GUEST_INTERRUPTIBILITY_INFO, 0);
+6 -2
block/blk-mq.c
··· 4310 4310 /* mark the queue as mq asap */ 4311 4311 q->mq_ops = set->ops; 4312 4312 4313 + /* 4314 + * ->tag_set has to be set up before initializing hctx, since the cpuhp 4315 + * handler needs it for checking queue mapping 4316 + */ 4317 + q->tag_set = set; 4318 + 4313 4319 if (blk_mq_alloc_ctxs(q)) 4314 4320 goto err_exit; 4315 4321 ··· 4333 4327 4334 4328 INIT_WORK(&q->timeout_work, blk_mq_timeout_work); 4335 4329 blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30 * HZ); 4336 4330 4337 - 4338 - q->tag_set = set; 4339 4331 q->queue_flags |= QUEUE_FLAG_MQ_DEFAULT; 4340 4332
+1 -1
block/blk-rq-qos.c
··· 219 219 220 220 data->got_token = true; 221 221 smp_wmb(); 222 - list_del_init(&curr->entry); 223 222 wake_up_process(data->task); 223 + list_del_init_careful(&curr->entry); 224 224 return 1; 225 225 } 226 226
+14 -7
block/elevator.c
··· 106 106 return NULL; 107 107 } 108 108 109 - static struct elevator_type *elevator_find_get(struct request_queue *q, 110 - const char *name) 109 + static struct elevator_type *elevator_find_get(const char *name) 111 110 { 112 111 struct elevator_type *e; 113 112 ··· 550 551 static inline bool elv_support_iosched(struct request_queue *q) 551 552 { 552 553 if (!queue_is_mq(q) || 553 - (q->tag_set && (q->tag_set->flags & BLK_MQ_F_NO_SCHED))) 554 + (q->tag_set->flags & BLK_MQ_F_NO_SCHED)) 554 555 return false; 555 556 return true; 556 557 } ··· 561 562 */ 562 563 static struct elevator_type *elevator_get_default(struct request_queue *q) 563 564 { 564 - if (q->tag_set && q->tag_set->flags & BLK_MQ_F_NO_SCHED_BY_DEFAULT) 565 + if (q->tag_set->flags & BLK_MQ_F_NO_SCHED_BY_DEFAULT) 565 566 return NULL; 566 567 567 568 if (q->nr_hw_queues != 1 && 568 569 !blk_mq_is_shared_tags(q->tag_set->flags)) 569 570 return NULL; 570 571 571 - return elevator_find_get(q, "mq-deadline"); 572 + return elevator_find_get("mq-deadline"); 572 573 } 573 574 574 575 /* ··· 696 697 if (q->elevator && elevator_match(q->elevator->type, elevator_name)) 697 698 return 0; 698 699 699 - e = elevator_find_get(q, elevator_name); 700 + e = elevator_find_get(elevator_name); 700 701 if (!e) 701 702 return -EINVAL; 702 703 ret = elevator_switch(q, e); ··· 708 709 size_t count) 709 710 { 710 711 char elevator_name[ELV_NAME_MAX]; 712 + struct elevator_type *found; 713 + const char *name; 711 714 712 715 if (!elv_support_iosched(disk->queue)) 713 716 return -EOPNOTSUPP; 714 717 715 718 strscpy(elevator_name, buf, sizeof(elevator_name)); 719 + name = strstrip(elevator_name); 716 720 717 - request_module("%s-iosched", strstrip(elevator_name)); 721 + spin_lock(&elv_list_lock); 722 + found = __elevator_find(name); 723 + spin_unlock(&elv_list_lock); 724 + 725 + if (!found) 726 + request_module("%s-iosched", name); 718 727 719 728 return 0; 720 729 }
+1 -1
drivers/accel/qaic/qaic_control.c
··· 496 496 nents = sgt->nents; 497 497 nents_dma = nents; 498 498 *size = QAIC_MANAGE_EXT_MSG_LENGTH - msg_hdr_len - sizeof(**out_trans); 499 - for_each_sgtable_sg(sgt, sg, i) { 499 + for_each_sgtable_dma_sg(sgt, sg, i) { 500 500 *size -= sizeof(*asp); 501 501 /* Save 1K for possible follow-up transactions. */ 502 502 if (*size < SZ_1K) {
+3 -3
drivers/accel/qaic/qaic_data.c
··· 184 184 nents = 0; 185 185 186 186 size = size ? size : PAGE_SIZE; 187 - for (sg = sgt_in->sgl; sg; sg = sg_next(sg)) { 187 + for_each_sgtable_dma_sg(sgt_in, sg, j) { 188 188 len = sg_dma_len(sg); 189 189 190 190 if (!len) ··· 221 221 222 222 /* copy relevant sg node and fix page and length */ 223 223 sgn = sgf; 224 - for_each_sgtable_sg(sgt, sg, j) { 224 + for_each_sgtable_dma_sg(sgt, sg, j) { 225 225 memcpy(sg, sgn, sizeof(*sg)); 226 226 if (sgn == sgf) { 227 227 sg_dma_address(sg) += offf; ··· 301 301 * fence. 302 302 */ 303 303 dev_addr = req->dev_addr; 304 - for_each_sgtable_sg(slice->sgt, sg, i) { 304 + for_each_sgtable_dma_sg(slice->sgt, sg, i) { 305 305 slice->reqs[i].cmd = cmd; 306 306 slice->reqs[i].src_addr = cpu_to_le64(slice->dir == DMA_TO_DEVICE ? 307 307 sg_dma_address(sg) : dev_addr);
-1
drivers/block/drbd/drbd_int.h
··· 1364 1364 1365 1365 extern struct mutex resources_mutex; 1366 1366 1367 - extern int conn_lowest_minor(struct drbd_connection *connection); 1368 1367 extern enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsigned int minor); 1369 1368 extern void drbd_destroy_device(struct kref *kref); 1370 1369 extern void drbd_delete_device(struct drbd_device *device);
-14
drivers/block/drbd/drbd_main.c
··· 471 471 wait_for_completion(&thi->stop); 472 472 } 473 473 474 - int conn_lowest_minor(struct drbd_connection *connection) 475 - { 476 - struct drbd_peer_device *peer_device; 477 - int vnr = 0, minor = -1; 478 - 479 - rcu_read_lock(); 480 - peer_device = idr_get_next(&connection->peer_devices, &vnr); 481 - if (peer_device) 482 - minor = device_to_minor(peer_device->device); 483 - rcu_read_unlock(); 484 - 485 - return minor; 486 - } 487 - 488 474 #ifdef CONFIG_SMP 489 475 /* 490 476 * drbd_calc_cpu_mask() - Generate CPU masks, spread over all CPUs
+10 -1
drivers/block/ublk_drv.c
··· 2380 2380 * TODO: provide forward progress for RECOVERY handler, so that 2381 2381 * unprivileged device can benefit from it 2382 2382 */ 2383 - if (info.flags & UBLK_F_UNPRIVILEGED_DEV) 2383 + if (info.flags & UBLK_F_UNPRIVILEGED_DEV) { 2384 2384 info.flags &= ~(UBLK_F_USER_RECOVERY_REISSUE | 2385 2385 UBLK_F_USER_RECOVERY); 2386 + 2387 + /* 2388 + * For USER_COPY, we depend on userspace to fill the request 2389 + * buffer by pwrite() to the ublk char device, which can't be 2390 + * used for an unprivileged device 2391 + */ 2392 + if (info.flags & UBLK_F_USER_COPY) 2393 + return -EINVAL; 2394 + } 2386 2395 2387 2396 /* the created device is always owned by current user */ 2388 2397 ublk_store_owner_uid_gid(&info.owner_uid, &info.owner_gid);
+9 -18
drivers/bluetooth/btusb.c
··· 1345 1345 if (!urb) 1346 1346 return -ENOMEM; 1347 1347 1348 - /* Use maximum HCI Event size so the USB stack handles 1349 - * ZPL/short-transfer automatically. 1350 - */ 1351 - size = HCI_MAX_EVENT_SIZE; 1348 + if (le16_to_cpu(data->udev->descriptor.idVendor) == 0x0a12 && 1349 + le16_to_cpu(data->udev->descriptor.idProduct) == 0x0001) 1350 + /* Fake CSR devices don't seem to support short-transfer */ 1351 + size = le16_to_cpu(data->intr_ep->wMaxPacketSize); 1352 + else 1353 + /* Use maximum HCI Event size so the USB stack handles 1354 + * ZPL/short-transfer automatically. 1355 + */ 1356 + size = HCI_MAX_EVENT_SIZE; 1352 1357 1353 1358 buf = kmalloc(size, mem_flags); 1354 1359 if (!buf) { ··· 4043 4038 static int btusb_suspend(struct usb_interface *intf, pm_message_t message) 4044 4039 { 4045 4040 struct btusb_data *data = usb_get_intfdata(intf); 4046 - int err; 4047 4041 4048 4042 BT_DBG("intf %p", intf); 4049 4043 ··· 4055 4051 if (data->suspend_count++) 4056 4052 return 0; 4057 4053 4058 - /* Notify Host stack to suspend; this has to be done before stopping 4059 - * the traffic since the hci_suspend_dev itself may generate some 4060 - * traffic. 4061 - */ 4062 - err = hci_suspend_dev(data->hdev); 4063 - if (err) { 4064 - data->suspend_count--; 4065 - return err; 4066 - } 4067 - 4068 4054 spin_lock_irq(&data->txlock); 4069 4055 if (!(PMSG_IS_AUTO(message) && data->tx_in_flight)) { 4070 4056 set_bit(BTUSB_SUSPENDING, &data->flags); ··· 4062 4068 } else { 4063 4069 spin_unlock_irq(&data->txlock); 4064 4070 data->suspend_count--; 4065 - hci_resume_dev(data->hdev); 4066 4071 return -EBUSY; 4067 4072 } 4068 4073 ··· 4181 4188 clear_bit(BTUSB_SUSPENDING, &data->flags); 4182 4189 spin_unlock_irq(&data->txlock); 4183 4190 schedule_work(&data->work); 4184 - 4185 - hci_resume_dev(data->hdev); 4186 4191 4187 4192 return 0; 4188 4193
+1 -1
drivers/cdrom/cdrom.c
··· 2313 2313 return -EINVAL; 2314 2314 2315 2315 /* Prevent arg from speculatively bypassing the length check */ 2316 - barrier_nospec(); 2316 + arg = array_index_nospec(arg, cdi->capacity); 2317 2317 2318 2318 info = kmalloc(sizeof(*info), GFP_KERNEL); 2319 2319 if (!info)
+14 -47
drivers/clk/clk_test.c
··· 473 473 &clk_dummy_rate_ops, 474 474 0); 475 475 ctx->parents_ctx[0].rate = DUMMY_CLOCK_RATE_1; 476 - ret = clk_hw_register(NULL, &ctx->parents_ctx[0].hw); 476 + ret = clk_hw_register_kunit(test, NULL, &ctx->parents_ctx[0].hw); 477 477 if (ret) 478 478 return ret; 479 479 ··· 481 481 &clk_dummy_rate_ops, 482 482 0); 483 483 ctx->parents_ctx[1].rate = DUMMY_CLOCK_RATE_2; 484 - ret = clk_hw_register(NULL, &ctx->parents_ctx[1].hw); 484 + ret = clk_hw_register_kunit(test, NULL, &ctx->parents_ctx[1].hw); 485 485 if (ret) 486 486 return ret; 487 487 ··· 489 489 ctx->hw.init = CLK_HW_INIT_PARENTS("test-mux", parents, 490 490 &clk_multiple_parents_mux_ops, 491 491 CLK_SET_RATE_PARENT); 492 - ret = clk_hw_register(NULL, &ctx->hw); 492 + ret = clk_hw_register_kunit(test, NULL, &ctx->hw); 493 493 if (ret) 494 494 return ret; 495 495 496 496 return 0; 497 - } 498 - 499 - static void 500 - clk_multiple_parents_mux_test_exit(struct kunit *test) 501 - { 502 - struct clk_multiple_parent_ctx *ctx = test->priv; 503 - 504 - clk_hw_unregister(&ctx->hw); 505 - clk_hw_unregister(&ctx->parents_ctx[0].hw); 506 - clk_hw_unregister(&ctx->parents_ctx[1].hw); 507 497 } 508 498 509 499 /* ··· 551 561 { 552 562 struct clk_multiple_parent_ctx *ctx = test->priv; 553 563 struct clk_hw *hw = &ctx->hw; 554 - struct clk *clk = clk_hw_get_clk(hw, NULL); 564 + struct clk *clk = clk_hw_get_clk_kunit(test, hw, NULL); 555 565 struct clk *parent1, *parent2; 556 566 unsigned long rate; 557 567 int ret; 558 568 559 569 kunit_skip(test, "This needs to be fixed in the core."); 560 570 561 - parent1 = clk_hw_get_clk(&ctx->parents_ctx[0].hw, NULL); 571 + parent1 = clk_hw_get_clk_kunit(test, &ctx->parents_ctx[0].hw, NULL); 562 572 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent1); 563 573 KUNIT_ASSERT_TRUE(test, clk_is_match(clk_get_parent(clk), parent1)); 564 574 565 - parent2 = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL); 575 + parent2 = clk_hw_get_clk_kunit(test, &ctx->parents_ctx[1].hw, NULL); 566 576 
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent2); 567 577 568 578 ret = clk_set_rate(parent1, DUMMY_CLOCK_RATE_1); ··· 583 593 KUNIT_ASSERT_GT(test, rate, 0); 584 594 KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1 - 1000); 585 595 KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_1 + 1000); 586 - 587 - clk_put(parent2); 588 - clk_put(parent1); 589 - clk_put(clk); 590 596 } 591 597 592 598 static struct kunit_case clk_multiple_parents_mux_test_cases[] = { ··· 603 617 clk_multiple_parents_mux_test_suite = { 604 618 .name = "clk-multiple-parents-mux-test", 605 619 .init = clk_multiple_parents_mux_test_init, 606 - .exit = clk_multiple_parents_mux_test_exit, 607 620 .test_cases = clk_multiple_parents_mux_test_cases, 608 621 }; 609 622 ··· 622 637 &clk_dummy_rate_ops, 623 638 0); 624 639 ctx->parents_ctx[1].rate = DUMMY_CLOCK_INIT_RATE; 625 - ret = clk_hw_register(NULL, &ctx->parents_ctx[1].hw); 640 + ret = clk_hw_register_kunit(test, NULL, &ctx->parents_ctx[1].hw); 626 641 if (ret) 627 642 return ret; 628 643 629 644 ctx->hw.init = CLK_HW_INIT_PARENTS("test-orphan-mux", parents, 630 645 &clk_multiple_parents_mux_ops, 631 646 CLK_SET_RATE_PARENT); 632 - ret = clk_hw_register(NULL, &ctx->hw); 647 + ret = clk_hw_register_kunit(test, NULL, &ctx->hw); 633 648 if (ret) 634 649 return ret; 635 650 636 651 return 0; 637 - } 638 - 639 - static void 640 - clk_orphan_transparent_multiple_parent_mux_test_exit(struct kunit *test) 641 - { 642 - struct clk_multiple_parent_ctx *ctx = test->priv; 643 - 644 - clk_hw_unregister(&ctx->hw); 645 - clk_hw_unregister(&ctx->parents_ctx[1].hw); 646 652 } 647 653 648 654 /* ··· 888 912 { 889 913 struct clk_multiple_parent_ctx *ctx = test->priv; 890 914 struct clk_hw *hw = &ctx->hw; 891 - struct clk *clk = clk_hw_get_clk(hw, NULL); 915 + struct clk *clk = clk_hw_get_clk_kunit(test, hw, NULL); 892 916 struct clk *parent; 893 917 unsigned long rate; 894 918 int ret; ··· 897 921 898 922 clk_hw_set_rate_range(hw, DUMMY_CLOCK_RATE_1, DUMMY_CLOCK_RATE_2); 899 923 
900 - parent = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL); 924 + parent = clk_hw_get_clk_kunit(test, &ctx->parents_ctx[1].hw, NULL); 901 925 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent); 902 926 903 927 ret = clk_set_parent(clk, parent); ··· 907 931 KUNIT_ASSERT_GT(test, rate, 0); 908 932 KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1); 909 933 KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2); 910 - 911 - clk_put(parent); 912 - clk_put(clk); 913 934 } 914 935 915 936 static struct kunit_case clk_orphan_transparent_multiple_parent_mux_test_cases[] = { ··· 934 961 static struct kunit_suite clk_orphan_transparent_multiple_parent_mux_test_suite = { 935 962 .name = "clk-orphan-transparent-multiple-parent-mux-test", 936 963 .init = clk_orphan_transparent_multiple_parent_mux_test_init, 937 - .exit = clk_orphan_transparent_multiple_parent_mux_test_exit, 938 964 .test_cases = clk_orphan_transparent_multiple_parent_mux_test_cases, 939 965 }; 940 966 ··· 958 986 &clk_dummy_rate_ops, 959 987 0); 960 988 961 - ret = clk_hw_register(NULL, &ctx->parent_ctx.hw); 989 + ret = clk_hw_register_kunit(test, NULL, &ctx->parent_ctx.hw); 962 990 if (ret) 963 991 return ret; 964 992 ··· 966 994 &clk_dummy_single_parent_ops, 967 995 CLK_SET_RATE_PARENT); 968 996 969 - ret = clk_hw_register(NULL, &ctx->hw); 997 + ret = clk_hw_register_kunit(test, NULL, &ctx->hw); 970 998 if (ret) 971 999 return ret; 972 1000 ··· 1032 1060 { 1033 1061 struct clk_single_parent_ctx *ctx = test->priv; 1034 1062 struct clk_hw *hw = &ctx->hw; 1035 - struct clk *clk = clk_hw_get_clk(hw, NULL); 1063 + struct clk *clk = clk_hw_get_clk_kunit(test, hw, NULL); 1036 1064 struct clk *parent; 1037 1065 int ret; 1038 1066 ··· 1046 1074 1047 1075 ret = clk_set_rate_range(clk, 3000, 4000); 1048 1076 KUNIT_EXPECT_LT(test, ret, 0); 1049 - 1050 - clk_put(clk); 1051 1077 } 1052 1078 1053 1079 /* ··· 1062 1092 { 1063 1093 struct clk_single_parent_ctx *ctx = test->priv; 1064 1094 struct clk_hw *hw = &ctx->hw; 1065 - struct clk *clk 
= clk_hw_get_clk(hw, NULL); 1095 + struct clk *clk = clk_hw_get_clk_kunit(test, hw, NULL); 1066 1096 struct clk *parent; 1067 1097 int ret; 1068 1098 ··· 1076 1106 1077 1107 ret = clk_set_rate_range(parent, 3000, 4000); 1078 1108 KUNIT_EXPECT_LT(test, ret, 0); 1079 - 1080 - clk_put(clk); 1081 1109 } 1082 1110 1083 1111 /* ··· 1206 1238 clk_single_parent_mux_test_suite = { 1207 1239 .name = "clk-single-parent-mux-test", 1208 1240 .init = clk_single_parent_mux_test_init, 1209 - .exit = clk_single_parent_mux_test_exit, 1210 1241 .test_cases = clk_single_parent_mux_test_cases, 1211 1242 }; 1212 1243
+1 -1
drivers/clk/rockchip/clk.c
··· 439 439 if (list->id > max) 440 440 max = list->id; 441 441 if (list->child && list->child->id > max) 442 - max = list->id; 442 + max = list->child->id; 443 443 } 444 444 445 445 return max;
+1
drivers/clk/samsung/clk-exynosautov920.c
··· 1155 1155 .compatible = "samsung,exynosautov920-cmu-peric0", 1156 1156 .data = &peric0_cmu_info, 1157 1157 }, 1158 + { } 1158 1159 }; 1159 1160 1160 1161 static struct platform_driver exynosautov920_cmu_driver __refdata = {
+24 -6
drivers/cpufreq/amd-pstate.c
··· 536 536 537 537 static int amd_pstate_update_min_max_limit(struct cpufreq_policy *policy) 538 538 { 539 - u32 max_limit_perf, min_limit_perf, lowest_perf; 539 + u32 max_limit_perf, min_limit_perf, lowest_perf, max_perf; 540 540 struct amd_cpudata *cpudata = policy->driver_data; 541 541 542 - max_limit_perf = div_u64(policy->max * cpudata->highest_perf, cpudata->max_freq); 543 - min_limit_perf = div_u64(policy->min * cpudata->highest_perf, cpudata->max_freq); 542 + if (cpudata->boost_supported && !policy->boost_enabled) 543 + max_perf = READ_ONCE(cpudata->nominal_perf); 544 + else 545 + max_perf = READ_ONCE(cpudata->highest_perf); 546 + 547 + max_limit_perf = div_u64(policy->max * max_perf, policy->cpuinfo.max_freq); 548 + min_limit_perf = div_u64(policy->min * max_perf, policy->cpuinfo.max_freq); 544 549 545 550 lowest_perf = READ_ONCE(cpudata->lowest_perf); 546 551 if (min_limit_perf < lowest_perf) ··· 1206 1201 return -EINVAL; 1207 1202 1208 1203 cppc_state = mode; 1204 + 1205 + ret = amd_pstate_enable(true); 1206 + if (ret) { 1207 + pr_err("failed to enable cppc during amd-pstate driver registration, return %d\n", 1208 + ret); 1209 + amd_pstate_driver_cleanup(); 1210 + return ret; 1211 + } 1212 + 1209 1213 ret = cpufreq_register_driver(current_pstate_driver); 1210 1214 if (ret) { 1211 1215 amd_pstate_driver_cleanup(); 1212 1216 return ret; 1213 1217 } 1218 + 1214 1219 return 0; 1215 1220 } 1216 1221 ··· 1511 1496 u64 value; 1512 1497 s16 epp; 1513 1498 1514 - max_perf = READ_ONCE(cpudata->highest_perf); 1499 + if (cpudata->boost_supported && !policy->boost_enabled) 1500 + max_perf = READ_ONCE(cpudata->nominal_perf); 1501 + else 1502 + max_perf = READ_ONCE(cpudata->highest_perf); 1515 1503 min_perf = READ_ONCE(cpudata->lowest_perf); 1516 - max_limit_perf = div_u64(policy->max * cpudata->highest_perf, cpudata->max_freq); 1517 - min_limit_perf = div_u64(policy->min * cpudata->highest_perf, cpudata->max_freq); 1504 + max_limit_perf = div_u64(policy->max * 
max_perf, policy->cpuinfo.max_freq); 1505 + min_limit_perf = div_u64(policy->min * max_perf, policy->cpuinfo.max_freq); 1518 1506 1519 1507 if (min_limit_perf < min_perf) 1520 1508 min_limit_perf = min_perf;
+6 -3
drivers/dma/ep93xx_dma.c
··· 1391 1391 INIT_LIST_HEAD(&dma_dev->channels); 1392 1392 for (i = 0; i < edma->num_channels; i++) { 1393 1393 struct ep93xx_dma_chan *edmac = &edma->channels[i]; 1394 + int len; 1394 1395 1395 1396 edmac->chan.device = dma_dev; 1396 1397 edmac->regs = devm_platform_ioremap_resource(pdev, i); 1397 1398 if (IS_ERR(edmac->regs)) 1398 - return edmac->regs; 1399 + return ERR_CAST(edmac->regs); 1399 1400 1400 1401 edmac->irq = fwnode_irq_get(dev_fwnode(dev), i); 1401 1402 if (edmac->irq < 0) ··· 1405 1404 edmac->edma = edma; 1406 1405 1407 1406 if (edma->m2m) 1408 - snprintf(dma_clk_name, sizeof(dma_clk_name), "m2m%u", i); 1407 + len = snprintf(dma_clk_name, sizeof(dma_clk_name), "m2m%u", i); 1409 1408 else 1410 - snprintf(dma_clk_name, sizeof(dma_clk_name), "m2p%u", i); 1409 + len = snprintf(dma_clk_name, sizeof(dma_clk_name), "m2p%u", i); 1410 + if (len >= sizeof(dma_clk_name)) 1411 + return ERR_PTR(-ENOBUFS); 1411 1412 1412 1413 edmac->clk = devm_clk_get(dev, dma_clk_name); 1413 1414 if (IS_ERR(edmac->clk)) {
+9 -4
drivers/firmware/arm_ffa/driver.c
··· 481 481 struct ffa_send_direct_data2 *data) 482 482 { 483 483 u32 src_dst_ids = PACK_TARGET_INFO(src_id, dst_id); 484 + union { 485 + uuid_t uuid; 486 + __le64 regs[2]; 487 + } uuid_regs = { .uuid = *uuid }; 484 488 ffa_value_t ret, args = { 485 - .a0 = FFA_MSG_SEND_DIRECT_REQ2, .a1 = src_dst_ids, 489 + .a0 = FFA_MSG_SEND_DIRECT_REQ2, 490 + .a1 = src_dst_ids, 491 + .a2 = le64_to_cpu(uuid_regs.regs[0]), 492 + .a3 = le64_to_cpu(uuid_regs.regs[1]), 486 493 }; 487 - 488 - export_uuid((u8 *)&args.a2, uuid); 489 494 memcpy((void *)&args + offsetof(ffa_value_t, a4), data, sizeof(*data)); 490 495 491 496 invoke_ffa_fn(args, &ret); ··· 501 496 return ffa_to_linux_errno((int)ret.a2); 502 497 503 498 if (ret.a0 == FFA_MSG_SEND_DIRECT_RESP2) { 504 - memcpy(data, &ret.a4, sizeof(*data)); 499 + memcpy(data, (void *)&ret + offsetof(ffa_value_t, a4), sizeof(*data)); 505 500 return 0; 506 501 } 507 502
+1 -3
drivers/firmware/arm_scmi/driver.c
··· 2976 2976 dbg->top_dentry = top_dentry; 2977 2977 2978 2978 if (devm_add_action_or_reset(info->dev, 2979 - scmi_debugfs_common_cleanup, dbg)) { 2980 - scmi_debugfs_common_cleanup(dbg); 2979 + scmi_debugfs_common_cleanup, dbg)) 2981 2980 return NULL; 2982 - } 2983 2981 2984 2982 return dbg; 2985 2983 }
+4 -2
drivers/firmware/arm_scmi/transports/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 - scmi_transport_mailbox-objs := mailbox.o 3 - obj-$(CONFIG_ARM_SCMI_TRANSPORT_MAILBOX) += scmi_transport_mailbox.o 2 + # Keep before scmi_transport_mailbox.o to allow precedence 3 + # while matching the compatible. 4 4 scmi_transport_smc-objs := smc.o 5 5 obj-$(CONFIG_ARM_SCMI_TRANSPORT_SMC) += scmi_transport_smc.o 6 + scmi_transport_mailbox-objs := mailbox.o 7 + obj-$(CONFIG_ARM_SCMI_TRANSPORT_MAILBOX) += scmi_transport_mailbox.o 6 8 scmi_transport_optee-objs := optee.o 7 9 obj-$(CONFIG_ARM_SCMI_TRANSPORT_OPTEE) += scmi_transport_optee.o 8 10 scmi_transport_virtio-objs := virtio.o
+21 -11
drivers/firmware/arm_scmi/transports/mailbox.c
··· 25 25 * @chan_platform_receiver: Optional Platform Receiver mailbox unidirectional channel 26 26 * @cinfo: SCMI channel info 27 27 * @shmem: Transmit/Receive shared memory area 28 + * @chan_lock: Lock that prevents multiple xfers from being queued 28 29 */ 29 30 struct scmi_mailbox { 30 31 struct mbox_client cl; ··· 34 33 struct mbox_chan *chan_platform_receiver; 35 34 struct scmi_chan_info *cinfo; 36 35 struct scmi_shared_mem __iomem *shmem; 36 + struct mutex chan_lock; 37 37 }; 38 38 39 39 #define client_to_scmi_mailbox(c) container_of(c, struct scmi_mailbox, cl) ··· 240 238 241 239 cinfo->transport_info = smbox; 242 240 smbox->cinfo = cinfo; 241 + mutex_init(&smbox->chan_lock); 243 242 244 243 return 0; 245 244 } ··· 270 267 struct scmi_mailbox *smbox = cinfo->transport_info; 271 268 int ret; 272 269 270 + /* 271 + * The mailbox layer has its own queue. However the mailbox queue 272 + * confuses the per message SCMI timeouts since the clock starts when 273 + * the message is submitted into the mailbox queue. So when multiple 274 + * messages are queued up the clock starts on all messages instead of 275 + * only the one inflight. 276 + */ 277 + mutex_lock(&smbox->chan_lock); 278 + 273 279 ret = mbox_send_message(smbox->chan, xfer); 280 + /* mbox_send_message returns non-negative value on success */ 281 + if (ret < 0) { 282 + mutex_unlock(&smbox->chan_lock); 283 + return ret; 284 + } 274 285 275 - /* mbox_send_message returns non-negative value on success, so reset */ 276 - if (ret > 0) 277 - ret = 0; 278 - 279 - return ret; 286 + return 0; 280 287 } 281 288 282 289 static void mailbox_mark_txdone(struct scmi_chan_info *cinfo, int ret, ··· 294 281 { 295 282 struct scmi_mailbox *smbox = cinfo->transport_info; 296 283 297 - /* 298 - * NOTE: we might prefer not to need the mailbox ticker to manage the 299 - * transfer queueing since the protocol layer queues things by itself. 
300 - * Unfortunately, we have to kick the mailbox framework after we have 301 - * received our message. 302 - */ 303 284 mbox_client_txdone(smbox->chan, ret); 285 + 286 + /* Release channel */ 287 + mutex_unlock(&smbox->chan_lock); 304 288 } 305 289 306 290 static void mailbox_fetch_response(struct scmi_chan_info *cinfo,
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
··· 265 265 266 266 /* Only a single BO list is allowed to simplify handling. */ 267 267 if (p->bo_list) 268 - ret = -EINVAL; 268 + goto free_partial_kdata; 269 269 270 270 ret = amdgpu_cs_p1_bo_handles(p, p->chunks[i].kdata); 271 271 if (ret)
+4 -7
drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
··· 1635 1635 { 1636 1636 int r; 1637 1637 1638 - if (!amdgpu_sriov_vf(adev)) { 1639 - r = device_create_file(adev->dev, &dev_attr_enforce_isolation); 1640 - if (r) 1641 - return r; 1642 - } 1638 + r = device_create_file(adev->dev, &dev_attr_enforce_isolation); 1639 + if (r) 1640 + return r; 1643 1641 1644 1642 r = device_create_file(adev->dev, &dev_attr_run_cleaner_shader); 1645 1643 if (r) ··· 1648 1650 1649 1651 void amdgpu_gfx_sysfs_isolation_shader_fini(struct amdgpu_device *adev) 1650 1652 { 1651 - if (!amdgpu_sriov_vf(adev)) 1652 - device_remove_file(adev->dev, &dev_attr_enforce_isolation); 1653 + device_remove_file(adev->dev, &dev_attr_enforce_isolation); 1653 1654 device_remove_file(adev->dev, &dev_attr_run_cleaner_shader); 1654 1655 } 1655 1656
+3 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
··· 1203 1203 1204 1204 r = amdgpu_ring_init(adev, ring, 1024, NULL, 0, 1205 1205 AMDGPU_RING_PRIO_DEFAULT, NULL); 1206 - if (r) 1206 + if (r) { 1207 + amdgpu_mes_unlock(&adev->mes); 1207 1208 goto clean_up_memory; 1209 + } 1208 1210 1209 1211 amdgpu_mes_ring_to_queue_props(adev, ring, &qprops); 1210 1212 ··· 1239 1237 amdgpu_ring_fini(ring); 1240 1238 clean_up_memory: 1241 1239 kfree(ring); 1242 - amdgpu_mes_unlock(&adev->mes); 1243 1240 return r; 1244 1241 } 1245 1242
+2 -2
drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
··· 621 621 622 622 if (amdgpu_mes_log_enable) { 623 623 mes_set_hw_res_pkt.enable_mes_event_int_logging = 1; 624 - mes_set_hw_res_pkt.event_intr_history_gpu_mc_ptr = mes->event_log_gpu_addr; 624 + mes_set_hw_res_pkt.event_intr_history_gpu_mc_ptr = mes->event_log_gpu_addr + pipe * AMDGPU_MES_LOG_BUFFER_SIZE; 625 625 } 626 626 627 627 return mes_v12_0_submit_pkt_and_poll_completion(mes, pipe, ··· 1336 1336 adev->mes.kiq_hw_fini = &mes_v12_0_kiq_hw_fini; 1337 1337 adev->mes.enable_legacy_queue_map = true; 1338 1338 1339 - adev->mes.event_log_size = AMDGPU_MES_LOG_BUFFER_SIZE; 1339 + adev->mes.event_log_size = adev->enable_uni_mes ? (AMDGPU_MAX_MES_PIPES * AMDGPU_MES_LOG_BUFFER_SIZE) : AMDGPU_MES_LOG_BUFFER_SIZE; 1340 1340 1341 1341 r = amdgpu_mes_init(adev); 1342 1342 if (r)
+3 -3
drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
··· 1148 1148 1149 1149 if (flags & KFD_IOC_ALLOC_MEM_FLAGS_AQL_QUEUE_MEM) 1150 1150 size >>= 1; 1151 - WRITE_ONCE(pdd->vram_usage, pdd->vram_usage + PAGE_ALIGN(size)); 1151 + atomic64_add(PAGE_ALIGN(size), &pdd->vram_usage); 1152 1152 } 1153 1153 1154 1154 mutex_unlock(&p->mutex); ··· 1219 1219 kfd_process_device_remove_obj_handle( 1220 1220 pdd, GET_IDR_HANDLE(args->handle)); 1221 1221 1222 - WRITE_ONCE(pdd->vram_usage, pdd->vram_usage - size); 1222 + atomic64_sub(size, &pdd->vram_usage); 1223 1223 1224 1224 err_unlock: 1225 1225 err_pdd: ··· 2347 2347 } else if (bo_bucket->alloc_flags & KFD_IOC_ALLOC_MEM_FLAGS_VRAM) { 2348 2348 bo_bucket->restored_offset = offset; 2349 2349 /* Update the VRAM usage count */ 2350 - WRITE_ONCE(pdd->vram_usage, pdd->vram_usage + bo_bucket->size); 2350 + atomic64_add(bo_bucket->size, &pdd->vram_usage); 2351 2351 } 2352 2352 return 0; 2353 2353 }
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_priv.h
··· 775 775 enum kfd_pdd_bound bound; 776 776 777 777 /* VRAM usage */ 778 - uint64_t vram_usage; 778 + atomic64_t vram_usage; 779 779 struct attribute attr_vram; 780 780 char vram_filename[MAX_SYSFS_FILENAME_LEN]; 781 781
+2 -2
drivers/gpu/drm/amd/amdkfd/kfd_process.c
··· 332 332 } else if (strncmp(attr->name, "vram_", 5) == 0) { 333 333 struct kfd_process_device *pdd = container_of(attr, struct kfd_process_device, 334 334 attr_vram); 335 - return snprintf(buffer, PAGE_SIZE, "%llu\n", READ_ONCE(pdd->vram_usage)); 335 + return snprintf(buffer, PAGE_SIZE, "%llu\n", atomic64_read(&pdd->vram_usage)); 336 336 } else if (strncmp(attr->name, "sdma_", 5) == 0) { 337 337 struct kfd_process_device *pdd = container_of(attr, struct kfd_process_device, 338 338 attr_sdma); ··· 1625 1625 pdd->bound = PDD_UNBOUND; 1626 1626 pdd->already_dequeued = false; 1627 1627 pdd->runtime_inuse = false; 1628 - pdd->vram_usage = 0; 1628 + atomic64_set(&pdd->vram_usage, 0); 1629 1629 pdd->sdma_past_activity_counter = 0; 1630 1630 pdd->user_gpu_id = dev->id; 1631 1631 atomic64_set(&pdd->evict_duration_counter, 0);
+26
drivers/gpu/drm/amd/amdkfd/kfd_svm.c
··· 405 405 spin_lock(&svm_bo->list_lock); 406 406 } 407 407 spin_unlock(&svm_bo->list_lock); 408 + 409 + if (mmget_not_zero(svm_bo->eviction_fence->mm)) { 410 + struct kfd_process_device *pdd; 411 + struct kfd_process *p; 412 + struct mm_struct *mm; 413 + 414 + mm = svm_bo->eviction_fence->mm; 415 + /* 416 + * The forked child process takes svm_bo device pages ref, svm_bo could be 417 + * released after parent process is gone. 418 + */ 419 + p = kfd_lookup_process_by_mm(mm); 420 + if (p) { 421 + pdd = kfd_get_process_device_data(svm_bo->node, p); 422 + if (pdd) 423 + atomic64_sub(amdgpu_bo_size(svm_bo->bo), &pdd->vram_usage); 424 + kfd_unref_process(p); 425 + } 426 + mmput(mm); 427 + } 428 + 408 429 if (!dma_fence_is_signaled(&svm_bo->eviction_fence->base)) 409 430 /* We're not in the eviction worker. Signal the fence. */ 410 431 dma_fence_signal(&svm_bo->eviction_fence->base); ··· 553 532 svm_range_vram_node_new(struct kfd_node *node, struct svm_range *prange, 554 533 bool clear) 555 534 { 535 + struct kfd_process_device *pdd; 556 536 struct amdgpu_bo_param bp; 557 537 struct svm_range_bo *svm_bo; 558 538 struct amdgpu_bo_user *ubo; ··· 644 622 spin_lock(&svm_bo->list_lock); 645 623 list_add(&prange->svm_bo_list, &svm_bo->range_list); 646 624 spin_unlock(&svm_bo->list_lock); 625 + 626 + pdd = svm_range_get_pdd_by_node(prange, node); 627 + if (pdd) 628 + atomic64_add(amdgpu_bo_size(bo), &pdd->vram_usage); 647 629 648 630 return 0; 649 631
+8 -4
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 1264 1264 smu->workload_prority[PP_SMC_POWER_PROFILE_VR] = 4; 1265 1265 smu->workload_prority[PP_SMC_POWER_PROFILE_COMPUTE] = 5; 1266 1266 smu->workload_prority[PP_SMC_POWER_PROFILE_CUSTOM] = 6; 1267 - smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT]; 1267 + 1268 + if (smu->is_apu) 1269 + smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT]; 1270 + else 1271 + smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D]; 1268 1272 1269 1273 smu->workload_setting[0] = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT; 1270 1274 smu->workload_setting[1] = PP_SMC_POWER_PROFILE_FULLSCREEN3D; ··· 2230 2226 static int smu_adjust_power_state_dynamic(struct smu_context *smu, 2231 2227 enum amd_dpm_forced_level level, 2232 2228 bool skip_display_settings, 2233 - bool force_update) 2229 + bool init) 2234 2230 { 2235 2231 int ret = 0; 2236 2232 int index = 0; ··· 2259 2255 } 2260 2256 } 2261 2257 2262 - if (force_update || smu_dpm_ctx->dpm_level != level) { 2258 + if (smu_dpm_ctx->dpm_level != level) { 2263 2259 ret = smu_asic_set_performance_level(smu, level); 2264 2260 if (ret) { 2265 2261 dev_err(smu->adev->dev, "Failed to set performance level!"); ··· 2276 2272 index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0; 2277 2273 workload[0] = smu->workload_setting[index]; 2278 2274 2279 - if (force_update || smu->power_profile_mode != workload[0]) 2275 + if (init || smu->power_profile_mode != workload[0]) 2280 2276 smu_bump_power_profile_mode(smu, workload, 0); 2281 2277 } 2282 2278
+10 -12
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 2555 2555 workload_mask = 1 << workload_type; 2556 2556 2557 2557 /* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */ 2558 - if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE) { 2559 - if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) && 2560 - ((smu->adev->pm.fw_version == 0x004e6601) || 2561 - (smu->adev->pm.fw_version >= 0x004e7300))) || 2562 - (amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 10) && 2563 - smu->adev->pm.fw_version >= 0x00504500)) { 2564 - workload_type = smu_cmn_to_asic_specific_index(smu, 2565 - CMN2ASIC_MAPPING_WORKLOAD, 2566 - PP_SMC_POWER_PROFILE_POWERSAVING); 2567 - if (workload_type >= 0) 2568 - workload_mask |= 1 << workload_type; 2569 - } 2558 + if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) && 2559 + ((smu->adev->pm.fw_version == 0x004e6601) || 2560 + (smu->adev->pm.fw_version >= 0x004e7300))) || 2561 + (amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 10) && 2562 + smu->adev->pm.fw_version >= 0x00504500)) { 2563 + workload_type = smu_cmn_to_asic_specific_index(smu, 2564 + CMN2ASIC_MAPPING_WORKLOAD, 2565 + PP_SMC_POWER_PROFILE_POWERSAVING); 2566 + if (workload_type >= 0) 2567 + workload_mask |= 1 << workload_type; 2570 2568 } 2571 2569 2572 2570 ret = smu_cmn_send_smc_msg_with_param(smu,
+2
drivers/gpu/drm/ast/ast_sil164.c
··· 29 29 if (ast_connector->physical_status == connector_status_connected) { 30 30 count = drm_connector_helper_get_modes(connector); 31 31 } else { 32 + drm_edid_connector_update(connector, NULL); 33 + 32 34 /* 33 35 * There's no EDID data without a connected monitor. Set BMC- 34 36 * compatible modes in this case. The XGA default resolution
+2
drivers/gpu/drm/ast/ast_vga.c
··· 29 29 if (ast_connector->physical_status == connector_status_connected) { 30 30 count = drm_connector_helper_get_modes(connector); 31 31 } else { 32 + drm_edid_connector_update(connector, NULL); 33 + 32 34 /* 33 35 * There's no EDID data without a connected monitor. Set BMC- 34 36 * compatible modes in this case. The XGA default resolution
+30 -10
drivers/gpu/drm/i915/display/intel_dp_mst.c
··· 89 89 90 90 static int intel_dp_mst_bw_overhead(const struct intel_crtc_state *crtc_state, 91 91 const struct intel_connector *connector, 92 - bool ssc, bool dsc, int bpp_x16) 92 + bool ssc, int dsc_slice_count, int bpp_x16) 93 93 { 94 94 const struct drm_display_mode *adjusted_mode = 95 95 &crtc_state->hw.adjusted_mode; 96 96 unsigned long flags = DRM_DP_BW_OVERHEAD_MST; 97 - int dsc_slice_count = 0; 98 97 int overhead; 99 98 100 99 flags |= intel_dp_is_uhbr(crtc_state) ? DRM_DP_BW_OVERHEAD_UHBR : 0; 101 100 flags |= ssc ? DRM_DP_BW_OVERHEAD_SSC_REF_CLK : 0; 102 101 flags |= crtc_state->fec_enable ? DRM_DP_BW_OVERHEAD_FEC : 0; 103 102 104 - if (dsc) { 103 + if (dsc_slice_count) 105 104 flags |= DRM_DP_BW_OVERHEAD_DSC; 106 - dsc_slice_count = intel_dp_dsc_get_slice_count(connector, 107 - adjusted_mode->clock, 108 - adjusted_mode->hdisplay, 109 - crtc_state->joiner_pipes); 110 - } 111 105 112 106 overhead = drm_dp_bw_overhead(crtc_state->lane_count, 113 107 adjusted_mode->hdisplay, ··· 147 153 return DIV_ROUND_UP(effective_data_rate * 64, 54 * 1000); 148 154 } 149 155 156 + static int intel_dp_mst_dsc_get_slice_count(const struct intel_connector *connector, 157 + const struct intel_crtc_state *crtc_state) 158 + { 159 + const struct drm_display_mode *adjusted_mode = 160 + &crtc_state->hw.adjusted_mode; 161 + int num_joined_pipes = crtc_state->joiner_pipes; 162 + 163 + return intel_dp_dsc_get_slice_count(connector, 164 + adjusted_mode->clock, 165 + adjusted_mode->hdisplay, 166 + num_joined_pipes); 167 + } 168 + 150 169 static int intel_dp_mst_find_vcpi_slots_for_bpp(struct intel_encoder *encoder, 151 170 struct intel_crtc_state *crtc_state, 152 171 int max_bpp, ··· 179 172 const struct drm_display_mode *adjusted_mode = 180 173 &crtc_state->hw.adjusted_mode; 181 174 int bpp, slots = -EINVAL; 175 + int dsc_slice_count = 0; 182 176 int max_dpt_bpp; 183 177 int ret = 0; 184 178 ··· 211 203 drm_dbg_kms(&i915->drm, "Looking for slots in range min bpp %d max bpp %d\n", 
212 204 min_bpp, max_bpp); 213 205 206 + if (dsc) { 207 + dsc_slice_count = intel_dp_mst_dsc_get_slice_count(connector, crtc_state); 208 + if (!dsc_slice_count) { 209 + drm_dbg_kms(&i915->drm, "Can't get valid DSC slice count\n"); 210 + 211 + return -ENOSPC; 212 + } 213 + } 214 + 214 215 for (bpp = max_bpp; bpp >= min_bpp; bpp -= step) { 215 216 int local_bw_overhead; 216 217 int remote_bw_overhead; ··· 233 216 intel_dp_output_bpp(crtc_state->output_format, bpp)); 234 217 235 218 local_bw_overhead = intel_dp_mst_bw_overhead(crtc_state, connector, 236 - false, dsc, link_bpp_x16); 219 + false, dsc_slice_count, link_bpp_x16); 237 220 remote_bw_overhead = intel_dp_mst_bw_overhead(crtc_state, connector, 238 - true, dsc, link_bpp_x16); 221 + true, dsc_slice_count, link_bpp_x16); 239 222 240 223 intel_dp_mst_compute_m_n(crtc_state, connector, 241 224 local_bw_overhead, ··· 464 447 return false; 465 448 466 449 if (mode_hblank_period_ns(adjusted_mode) > hblank_limit) 450 + return false; 451 + 452 + if (!intel_dp_mst_dsc_get_slice_count(connector, crtc_state)) 467 453 return false; 468 454 469 455 return true;
+13
drivers/gpu/drm/i915/display/intel_fb.c
··· 438 438 INTEL_PLANE_CAP_NEED64K_PHYS); 439 439 } 440 440 441 + /** 442 + * intel_fb_is_tile4_modifier: Check if a modifier is a tile4 modifier type 443 + * @modifier: Modifier to check 444 + * 445 + * Returns: 446 + * Returns %true if @modifier is a tile4 modifier. 447 + */ 448 + bool intel_fb_is_tile4_modifier(u64 modifier) 449 + { 450 + return plane_caps_contain_any(lookup_modifier(modifier)->plane_caps, 451 + INTEL_PLANE_CAP_TILING_4); 452 + } 453 + 441 454 static bool check_modifier_display_ver_range(const struct intel_modifier_desc *md, 442 455 u8 display_ver_from, u8 display_ver_until) 443 456 {
+1
drivers/gpu/drm/i915/display/intel_fb.h
··· 35 35 bool intel_fb_is_rc_ccs_cc_modifier(u64 modifier); 36 36 bool intel_fb_is_mc_ccs_modifier(u64 modifier); 37 37 bool intel_fb_needs_64k_phys(u64 modifier); 38 + bool intel_fb_is_tile4_modifier(u64 modifier); 38 39 39 40 bool intel_fb_is_ccs_aux_plane(const struct drm_framebuffer *fb, int color_plane); 40 41 int intel_fb_rc_ccs_cc_plane(const struct drm_framebuffer *fb);
+11
drivers/gpu/drm/i915/display/skl_universal_plane.c
··· 1591 1591 return -EINVAL; 1592 1592 } 1593 1593 1594 + /* 1595 + * Display20 onward tile4 hflip is not supported 1596 + */ 1597 + if (rotation & DRM_MODE_REFLECT_X && 1598 + intel_fb_is_tile4_modifier(fb->modifier) && 1599 + DISPLAY_VER(dev_priv) >= 20) { 1600 + drm_dbg_kms(&dev_priv->drm, 1601 + "horizontal flip is not supported with tile4 surface formats\n"); 1602 + return -EINVAL; 1603 + } 1604 + 1594 1605 if (drm_rotation_90_or_270(rotation)) { 1595 1606 if (!intel_fb_supports_90_270_rotation(to_intel_framebuffer(fb))) { 1596 1607 drm_dbg_kms(&dev_priv->drm,
-38
drivers/gpu/drm/mgag200/mgag200_drv.c
··· 18 18 #include <drm/drm_managed.h> 19 19 #include <drm/drm_module.h> 20 20 #include <drm/drm_pciids.h> 21 - #include <drm/drm_vblank.h> 22 21 23 22 #include "mgag200_drv.h" 24 23 ··· 82 83 iowrite16(orig, mem); 83 84 84 85 return offset - 65536; 85 - } 86 - 87 - static irqreturn_t mgag200_irq_handler(int irq, void *arg) 88 - { 89 - struct drm_device *dev = arg; 90 - struct mga_device *mdev = to_mga_device(dev); 91 - struct drm_crtc *crtc; 92 - u32 status, ien; 93 - 94 - status = RREG32(MGAREG_STATUS); 95 - 96 - if (status & MGAREG_STATUS_VLINEPEN) { 97 - ien = RREG32(MGAREG_IEN); 98 - if (!(ien & MGAREG_IEN_VLINEIEN)) 99 - goto out; 100 - 101 - crtc = drm_crtc_from_index(dev, 0); 102 - if (WARN_ON_ONCE(!crtc)) 103 - goto out; 104 - drm_crtc_handle_vblank(crtc); 105 - 106 - WREG32(MGAREG_ICLEAR, MGAREG_ICLEAR_VLINEICLR); 107 - 108 - return IRQ_HANDLED; 109 - } 110 - 111 - out: 112 - return IRQ_NONE; 113 86 } 114 87 115 88 /* ··· 167 196 const struct mgag200_device_funcs *funcs) 168 197 { 169 198 struct drm_device *dev = &mdev->base; 170 - struct pci_dev *pdev = to_pci_dev(dev->dev); 171 199 u8 crtcext3, misc; 172 200 int ret; 173 201 ··· 193 223 mutex_unlock(&mdev->rmmio_lock); 194 224 195 225 WREG32(MGAREG_IEN, 0); 196 - WREG32(MGAREG_ICLEAR, MGAREG_ICLEAR_VLINEICLR); 197 - 198 - ret = devm_request_irq(&pdev->dev, pdev->irq, mgag200_irq_handler, IRQF_SHARED, 199 - dev->driver->name, dev); 200 - if (ret) { 201 - drm_err(dev, "Failed to acquire interrupt, error %d\n", ret); 202 - return ret; 203 - } 204 226 205 227 return 0; 206 228 }
+2 -12
drivers/gpu/drm/mgag200/mgag200_drv.h
··· 391 391 void mgag200_crtc_helper_atomic_flush(struct drm_crtc *crtc, struct drm_atomic_state *old_state); 392 392 void mgag200_crtc_helper_atomic_enable(struct drm_crtc *crtc, struct drm_atomic_state *old_state); 393 393 void mgag200_crtc_helper_atomic_disable(struct drm_crtc *crtc, struct drm_atomic_state *old_state); 394 - bool mgag200_crtc_helper_get_scanout_position(struct drm_crtc *crtc, bool in_vblank_irq, 395 - int *vpos, int *hpos, 396 - ktime_t *stime, ktime_t *etime, 397 - const struct drm_display_mode *mode); 398 394 399 395 #define MGAG200_CRTC_HELPER_FUNCS \ 400 396 .mode_valid = mgag200_crtc_helper_mode_valid, \ 401 397 .atomic_check = mgag200_crtc_helper_atomic_check, \ 402 398 .atomic_flush = mgag200_crtc_helper_atomic_flush, \ 403 399 .atomic_enable = mgag200_crtc_helper_atomic_enable, \ 404 - .atomic_disable = mgag200_crtc_helper_atomic_disable, \ 405 - .get_scanout_position = mgag200_crtc_helper_get_scanout_position 400 + .atomic_disable = mgag200_crtc_helper_atomic_disable 406 401 407 402 void mgag200_crtc_reset(struct drm_crtc *crtc); 408 403 struct drm_crtc_state *mgag200_crtc_atomic_duplicate_state(struct drm_crtc *crtc); 409 404 void mgag200_crtc_atomic_destroy_state(struct drm_crtc *crtc, struct drm_crtc_state *crtc_state); 410 - int mgag200_crtc_enable_vblank(struct drm_crtc *crtc); 411 - void mgag200_crtc_disable_vblank(struct drm_crtc *crtc); 412 405 413 406 #define MGAG200_CRTC_FUNCS \ 414 407 .reset = mgag200_crtc_reset, \ ··· 409 416 .set_config = drm_atomic_helper_set_config, \ 410 417 .page_flip = drm_atomic_helper_page_flip, \ 411 418 .atomic_duplicate_state = mgag200_crtc_atomic_duplicate_state, \ 412 - .atomic_destroy_state = mgag200_crtc_atomic_destroy_state, \ 413 - .enable_vblank = mgag200_crtc_enable_vblank, \ 414 - .disable_vblank = mgag200_crtc_disable_vblank, \ 415 - .get_vblank_timestamp = drm_crtc_vblank_helper_get_vblank_timestamp 419 + .atomic_destroy_state = mgag200_crtc_atomic_destroy_state 416 420 417 421 void 
mgag200_set_mode_regs(struct mga_device *mdev, const struct drm_display_mode *mode, 418 422 bool set_vidrst);
-5
drivers/gpu/drm/mgag200/mgag200_g200.c
··· 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 10 #include <drm/drm_probe_helper.h> 11 - #include <drm/drm_vblank.h> 12 11 13 12 #include "mgag200_drv.h" 14 13 ··· 402 403 403 404 drm_mode_config_reset(dev); 404 405 drm_kms_helper_poll_init(dev); 405 - 406 - ret = drm_vblank_init(dev, 1); 407 - if (ret) 408 - return ERR_PTR(ret); 409 406 410 407 return mdev; 411 408 }
-5
drivers/gpu/drm/mgag200/mgag200_g200eh.c
··· 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 10 #include <drm/drm_probe_helper.h> 11 - #include <drm/drm_vblank.h> 12 11 13 12 #include "mgag200_drv.h" 14 13 ··· 274 275 275 276 drm_mode_config_reset(dev); 276 277 drm_kms_helper_poll_init(dev); 277 - 278 - ret = drm_vblank_init(dev, 1); 279 - if (ret) 280 - return ERR_PTR(ret); 281 278 282 279 return mdev; 283 280 }
-5
drivers/gpu/drm/mgag200/mgag200_g200eh3.c
··· 7 7 #include <drm/drm_drv.h> 8 8 #include <drm/drm_gem_atomic_helper.h> 9 9 #include <drm/drm_probe_helper.h> 10 - #include <drm/drm_vblank.h> 11 10 12 11 #include "mgag200_drv.h" 13 12 ··· 179 180 180 181 drm_mode_config_reset(dev); 181 182 drm_kms_helper_poll_init(dev); 182 - 183 - ret = drm_vblank_init(dev, 1); 184 - if (ret) 185 - return ERR_PTR(ret); 186 183 187 184 return mdev; 188 185 }
+1 -9
drivers/gpu/drm/mgag200/mgag200_g200er.c
··· 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 10 #include <drm/drm_probe_helper.h> 11 - #include <drm/drm_vblank.h> 12 11 13 12 #include "mgag200_drv.h" 14 13 ··· 205 206 mgag200_crtc_set_gamma_linear(mdev, format); 206 207 207 208 mgag200_enable_display(mdev); 208 - 209 - drm_crtc_vblank_on(crtc); 210 209 } 211 210 212 211 static const struct drm_crtc_helper_funcs mgag200_g200er_crtc_helper_funcs = { ··· 212 215 .atomic_check = mgag200_crtc_helper_atomic_check, 213 216 .atomic_flush = mgag200_crtc_helper_atomic_flush, 214 217 .atomic_enable = mgag200_g200er_crtc_helper_atomic_enable, 215 - .atomic_disable = mgag200_crtc_helper_atomic_disable, 216 - .get_scanout_position = mgag200_crtc_helper_get_scanout_position, 218 + .atomic_disable = mgag200_crtc_helper_atomic_disable 217 219 }; 218 220 219 221 static const struct drm_crtc_funcs mgag200_g200er_crtc_funcs = { ··· 307 311 308 312 drm_mode_config_reset(dev); 309 313 drm_kms_helper_poll_init(dev); 310 - 311 - ret = drm_vblank_init(dev, 1); 312 - if (ret) 313 - return ERR_PTR(ret); 314 314 315 315 return mdev; 316 316 }
+1 -9
drivers/gpu/drm/mgag200/mgag200_g200ev.c
··· 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 10 #include <drm/drm_probe_helper.h> 11 - #include <drm/drm_vblank.h> 12 11 13 12 #include "mgag200_drv.h" 14 13 ··· 206 207 mgag200_crtc_set_gamma_linear(mdev, format); 207 208 208 209 mgag200_enable_display(mdev); 209 - 210 - drm_crtc_vblank_on(crtc); 211 210 } 212 211 213 212 static const struct drm_crtc_helper_funcs mgag200_g200ev_crtc_helper_funcs = { ··· 213 216 .atomic_check = mgag200_crtc_helper_atomic_check, 214 217 .atomic_flush = mgag200_crtc_helper_atomic_flush, 215 218 .atomic_enable = mgag200_g200ev_crtc_helper_atomic_enable, 216 - .atomic_disable = mgag200_crtc_helper_atomic_disable, 217 - .get_scanout_position = mgag200_crtc_helper_get_scanout_position, 219 + .atomic_disable = mgag200_crtc_helper_atomic_disable 218 220 }; 219 221 220 222 static const struct drm_crtc_funcs mgag200_g200ev_crtc_funcs = { ··· 312 316 313 317 drm_mode_config_reset(dev); 314 318 drm_kms_helper_poll_init(dev); 315 - 316 - ret = drm_vblank_init(dev, 1); 317 - if (ret) 318 - return ERR_PTR(ret); 319 319 320 320 return mdev; 321 321 }
-5
drivers/gpu/drm/mgag200/mgag200_g200ew3.c
··· 7 7 #include <drm/drm_drv.h> 8 8 #include <drm/drm_gem_atomic_helper.h> 9 9 #include <drm/drm_probe_helper.h> 10 - #include <drm/drm_vblank.h> 11 10 12 11 #include "mgag200_drv.h" 13 12 ··· 197 198 198 199 drm_mode_config_reset(dev); 199 200 drm_kms_helper_poll_init(dev); 200 - 201 - ret = drm_vblank_init(dev, 1); 202 - if (ret) 203 - return ERR_PTR(ret); 204 201 205 202 return mdev; 206 203 }
+1 -9
drivers/gpu/drm/mgag200/mgag200_g200se.c
··· 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 10 #include <drm/drm_probe_helper.h> 11 - #include <drm/drm_vblank.h> 12 11 13 12 #include "mgag200_drv.h" 14 13 ··· 337 338 mgag200_crtc_set_gamma_linear(mdev, format); 338 339 339 340 mgag200_enable_display(mdev); 340 - 341 - drm_crtc_vblank_on(crtc); 342 341 } 343 342 344 343 static const struct drm_crtc_helper_funcs mgag200_g200se_crtc_helper_funcs = { ··· 344 347 .atomic_check = mgag200_crtc_helper_atomic_check, 345 348 .atomic_flush = mgag200_crtc_helper_atomic_flush, 346 349 .atomic_enable = mgag200_g200se_crtc_helper_atomic_enable, 347 - .atomic_disable = mgag200_crtc_helper_atomic_disable, 348 - .get_scanout_position = mgag200_crtc_helper_get_scanout_position, 350 + .atomic_disable = mgag200_crtc_helper_atomic_disable 349 351 }; 350 352 351 353 static const struct drm_crtc_funcs mgag200_g200se_crtc_funcs = { ··· 512 516 513 517 drm_mode_config_reset(dev); 514 518 drm_kms_helper_poll_init(dev); 515 - 516 - ret = drm_vblank_init(dev, 1); 517 - if (ret) 518 - return ERR_PTR(ret); 519 519 520 520 return mdev; 521 521 }
-5
drivers/gpu/drm/mgag200/mgag200_g200wb.c
··· 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 10 #include <drm/drm_probe_helper.h> 11 - #include <drm/drm_vblank.h> 12 11 13 12 #include "mgag200_drv.h" 14 13 ··· 321 322 322 323 drm_mode_config_reset(dev); 323 324 drm_kms_helper_poll_init(dev); 324 - 325 - ret = drm_vblank_init(dev, 1); 326 - if (ret) 327 - return ERR_PTR(ret); 328 325 329 326 return mdev; 330 327 }
+1 -76
drivers/gpu/drm/mgag200/mgag200_mode.c
··· 22 22 #include <drm/drm_gem_framebuffer_helper.h> 23 23 #include <drm/drm_panic.h> 24 24 #include <drm/drm_print.h> 25 - #include <drm/drm_vblank.h> 26 25 27 26 #include "mgag200_ddc.h" 28 27 #include "mgag200_drv.h" ··· 226 227 vblkstr = mode->crtc_vblank_start; 227 228 vblkend = vtotal + 1; 228 229 229 - /* 230 - * There's no VBLANK interrupt on Matrox chipsets, so we use 231 - * the VLINE interrupt instead. It triggers when the current 232 - * <linecomp> has been reached. For VBLANK, this is the first 233 - * non-visible line at the bottom of the screen. Therefore, 234 - * keep <linecomp> in sync with <vblkstr>. 235 - */ 236 - linecomp = vblkstr; 230 + linecomp = vdispend; 237 231 238 232 misc = RREG8(MGA_MISC_IN); 239 233 ··· 637 645 struct mgag200_crtc_state *mgag200_crtc_state = to_mgag200_crtc_state(crtc_state); 638 646 struct drm_device *dev = crtc->dev; 639 647 struct mga_device *mdev = to_mga_device(dev); 640 - struct drm_pending_vblank_event *event; 641 - unsigned long flags; 642 648 643 649 if (crtc_state->enable && crtc_state->color_mgmt_changed) { 644 650 const struct drm_format_info *format = mgag200_crtc_state->format; ··· 645 655 mgag200_crtc_set_gamma(mdev, format, crtc_state->gamma_lut->data); 646 656 else 647 657 mgag200_crtc_set_gamma_linear(mdev, format); 648 - } 649 - 650 - event = crtc->state->event; 651 - if (event) { 652 - crtc->state->event = NULL; 653 - 654 - spin_lock_irqsave(&dev->event_lock, flags); 655 - if (drm_crtc_vblank_get(crtc) != 0) 656 - drm_crtc_send_vblank_event(crtc, event); 657 - else 658 - drm_crtc_arm_vblank_event(crtc, event); 659 - spin_unlock_irqrestore(&dev->event_lock, flags); 660 658 } 661 659 } 662 660 ··· 670 692 mgag200_crtc_set_gamma_linear(mdev, format); 671 693 672 694 mgag200_enable_display(mdev); 673 - 674 - drm_crtc_vblank_on(crtc); 675 695 } 676 696 677 697 void mgag200_crtc_helper_atomic_disable(struct drm_crtc *crtc, struct drm_atomic_state *old_state) 678 698 { 679 699 struct mga_device *mdev = 
to_mga_device(crtc->dev); 680 700 681 - drm_crtc_vblank_off(crtc); 682 - 683 701 mgag200_disable_display(mdev); 684 - } 685 - 686 - bool mgag200_crtc_helper_get_scanout_position(struct drm_crtc *crtc, bool in_vblank_irq, 687 - int *vpos, int *hpos, 688 - ktime_t *stime, ktime_t *etime, 689 - const struct drm_display_mode *mode) 690 - { 691 - struct mga_device *mdev = to_mga_device(crtc->dev); 692 - u32 vcount; 693 - 694 - if (stime) 695 - *stime = ktime_get(); 696 - 697 - if (vpos) { 698 - vcount = RREG32(MGAREG_VCOUNT); 699 - *vpos = vcount & GENMASK(11, 0); 700 - } 701 - 702 - if (hpos) 703 - *hpos = mode->htotal >> 1; // near middle of scanline on average 704 - 705 - if (etime) 706 - *etime = ktime_get(); 707 - 708 - return true; 709 702 } 710 703 711 704 void mgag200_crtc_reset(struct drm_crtc *crtc) ··· 721 772 722 773 __drm_atomic_helper_crtc_destroy_state(&mgag200_crtc_state->base); 723 774 kfree(mgag200_crtc_state); 724 - } 725 - 726 - int mgag200_crtc_enable_vblank(struct drm_crtc *crtc) 727 - { 728 - struct mga_device *mdev = to_mga_device(crtc->dev); 729 - u32 ien; 730 - 731 - WREG32(MGAREG_ICLEAR, MGAREG_ICLEAR_VLINEICLR); 732 - 733 - ien = RREG32(MGAREG_IEN); 734 - ien |= MGAREG_IEN_VLINEIEN; 735 - WREG32(MGAREG_IEN, ien); 736 - 737 - return 0; 738 - } 739 - 740 - void mgag200_crtc_disable_vblank(struct drm_crtc *crtc) 741 - { 742 - struct mga_device *mdev = to_mga_device(crtc->dev); 743 - u32 ien; 744 - 745 - ien = RREG32(MGAREG_IEN); 746 - ien &= ~(MGAREG_IEN_VLINEIEN); 747 - WREG32(MGAREG_IEN, ien); 748 775 } 749 776 750 777 /*
+13 -3
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 101 101 } 102 102 103 103 static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, 104 - struct msm_ringbuffer *ring, struct msm_file_private *ctx) 104 + struct msm_ringbuffer *ring, struct msm_gem_submit *submit) 105 105 { 106 106 bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1; 107 + struct msm_file_private *ctx = submit->queue->ctx; 107 108 struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; 108 109 phys_addr_t ttbr; 109 110 u32 asid; ··· 115 114 116 115 if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid)) 117 116 return; 117 + 118 + if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) { 119 + /* Wait for previous submit to complete before continuing: */ 120 + OUT_PKT7(ring, CP_WAIT_TIMESTAMP, 4); 121 + OUT_RING(ring, 0); 122 + OUT_RING(ring, lower_32_bits(rbmemptr(ring, fence))); 123 + OUT_RING(ring, upper_32_bits(rbmemptr(ring, fence))); 124 + OUT_RING(ring, submit->seqno - 1); 125 + } 118 126 119 127 if (!sysprof) { 120 128 if (!adreno_is_a7xx(adreno_gpu)) { ··· 203 193 struct msm_ringbuffer *ring = submit->ring; 204 194 unsigned int i, ibs = 0; 205 195 206 - a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx); 196 + a6xx_set_pagetable(a6xx_gpu, ring, submit); 207 197 208 198 get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP(0), 209 199 rbmemptr_stats(ring, index, cpcycles_start)); ··· 293 283 OUT_PKT7(ring, CP_THREAD_CONTROL, 1); 294 284 OUT_RING(ring, CP_THREAD_CONTROL_0_SYNC_THREADS | CP_SET_THREAD_BR); 295 285 296 - a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx); 286 + a6xx_set_pagetable(a6xx_gpu, ring, submit); 297 287 298 288 get_stats_counter(ring, REG_A7XX_RBBM_PERFCTR_CP(0), 299 289 rbmemptr_stats(ring, index, cpcycles_start));
+13 -7
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
··· 711 711 _dpu_crtc_complete_flip(crtc); 712 712 } 713 713 714 - static void _dpu_crtc_setup_lm_bounds(struct drm_crtc *crtc, 714 + static int _dpu_crtc_check_and_setup_lm_bounds(struct drm_crtc *crtc, 715 715 struct drm_crtc_state *state) 716 716 { 717 717 struct dpu_crtc_state *cstate = to_dpu_crtc_state(state); 718 718 struct drm_display_mode *adj_mode = &state->adjusted_mode; 719 719 u32 crtc_split_width = adj_mode->hdisplay / cstate->num_mixers; 720 + struct dpu_kms *dpu_kms = _dpu_crtc_get_kms(crtc); 720 721 int i; 721 722 722 723 for (i = 0; i < cstate->num_mixers; i++) { ··· 728 727 r->y2 = adj_mode->vdisplay; 729 728 730 729 trace_dpu_crtc_setup_lm_bounds(DRMID(crtc), i, r); 730 + 731 + if (drm_rect_width(r) > dpu_kms->catalog->caps->max_mixer_width) 732 + return -E2BIG; 731 733 } 734 + 735 + return 0; 732 736 } 733 737 734 738 static void _dpu_crtc_get_pcc_coeff(struct drm_crtc_state *state, ··· 809 803 810 804 DRM_DEBUG_ATOMIC("crtc%d\n", crtc->base.id); 811 805 812 - _dpu_crtc_setup_lm_bounds(crtc, crtc->state); 806 + _dpu_crtc_check_and_setup_lm_bounds(crtc, crtc->state); 813 807 814 808 /* encoder will trigger pending mask now */ 815 809 drm_for_each_encoder_mask(encoder, crtc->dev, crtc->state->encoder_mask) ··· 1097 1091 1098 1092 dpu_core_perf_crtc_update(crtc, 0); 1099 1093 1100 - memset(cstate->mixers, 0, sizeof(cstate->mixers)); 1101 - cstate->num_mixers = 0; 1102 - 1103 1094 /* disable clk & bw control until clk & bw properties are set */ 1104 1095 cstate->bw_control = false; 1105 1096 cstate->bw_split_vote = false; ··· 1195 1192 if (crtc_state->active_changed) 1196 1193 crtc_state->mode_changed = true; 1197 1194 1198 - if (cstate->num_mixers) 1199 - _dpu_crtc_setup_lm_bounds(crtc, crtc_state); 1195 + if (cstate->num_mixers) { 1196 + rc = _dpu_crtc_check_and_setup_lm_bounds(crtc, crtc_state); 1197 + if (rc) 1198 + return rc; 1199 + } 1200 1200 1201 1201 /* FIXME: move this to dpu_plane_atomic_check? 
*/ 1202 1202 drm_atomic_crtc_state_for_each_plane_state(plane, pstate, crtc_state) {
+42 -26
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
··· 624 624 return topology; 625 625 } 626 626 627 + static void dpu_encoder_assign_crtc_resources(struct dpu_kms *dpu_kms, 628 + struct drm_encoder *drm_enc, 629 + struct dpu_global_state *global_state, 630 + struct drm_crtc_state *crtc_state) 631 + { 632 + struct dpu_crtc_state *cstate; 633 + struct dpu_hw_blk *hw_ctl[MAX_CHANNELS_PER_ENC]; 634 + struct dpu_hw_blk *hw_lm[MAX_CHANNELS_PER_ENC]; 635 + struct dpu_hw_blk *hw_dspp[MAX_CHANNELS_PER_ENC]; 636 + int num_lm, num_ctl, num_dspp, i; 637 + 638 + cstate = to_dpu_crtc_state(crtc_state); 639 + 640 + memset(cstate->mixers, 0, sizeof(cstate->mixers)); 641 + 642 + num_ctl = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, 643 + drm_enc->base.id, DPU_HW_BLK_CTL, hw_ctl, ARRAY_SIZE(hw_ctl)); 644 + num_lm = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, 645 + drm_enc->base.id, DPU_HW_BLK_LM, hw_lm, ARRAY_SIZE(hw_lm)); 646 + num_dspp = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, 647 + drm_enc->base.id, DPU_HW_BLK_DSPP, hw_dspp, 648 + ARRAY_SIZE(hw_dspp)); 649 + 650 + for (i = 0; i < num_lm; i++) { 651 + int ctl_idx = (i < num_ctl) ? i : (num_ctl-1); 652 + 653 + cstate->mixers[i].hw_lm = to_dpu_hw_mixer(hw_lm[i]); 654 + cstate->mixers[i].lm_ctl = to_dpu_hw_ctl(hw_ctl[ctl_idx]); 655 + cstate->mixers[i].hw_dspp = i < num_dspp ? 
to_dpu_hw_dspp(hw_dspp[i]) : NULL; 656 + } 657 + 658 + cstate->num_mixers = num_lm; 659 + } 660 + 627 661 static int dpu_encoder_virt_atomic_check( 628 662 struct drm_encoder *drm_enc, 629 663 struct drm_crtc_state *crtc_state, ··· 726 692 if (!crtc_state->active_changed || crtc_state->enable) 727 693 ret = dpu_rm_reserve(&dpu_kms->rm, global_state, 728 694 drm_enc, crtc_state, topology); 695 + if (!ret) 696 + dpu_encoder_assign_crtc_resources(dpu_kms, drm_enc, 697 + global_state, crtc_state); 729 698 } 730 699 731 700 trace_dpu_enc_atomic_check_flags(DRMID(drm_enc), adj_mode->flags); ··· 1130 1093 struct dpu_encoder_virt *dpu_enc; 1131 1094 struct msm_drm_private *priv; 1132 1095 struct dpu_kms *dpu_kms; 1133 - struct dpu_crtc_state *cstate; 1134 1096 struct dpu_global_state *global_state; 1135 1097 struct dpu_hw_blk *hw_pp[MAX_CHANNELS_PER_ENC]; 1136 1098 struct dpu_hw_blk *hw_ctl[MAX_CHANNELS_PER_ENC]; 1137 - struct dpu_hw_blk *hw_lm[MAX_CHANNELS_PER_ENC]; 1138 - struct dpu_hw_blk *hw_dspp[MAX_CHANNELS_PER_ENC] = { NULL }; 1139 1099 struct dpu_hw_blk *hw_dsc[MAX_CHANNELS_PER_ENC]; 1140 - int num_lm, num_ctl, num_pp, num_dsc; 1100 + int num_ctl, num_pp, num_dsc; 1141 1101 unsigned int dsc_mask = 0; 1142 1102 int i; 1143 1103 ··· 1163 1129 ARRAY_SIZE(hw_pp)); 1164 1130 num_ctl = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, 1165 1131 drm_enc->base.id, DPU_HW_BLK_CTL, hw_ctl, ARRAY_SIZE(hw_ctl)); 1166 - num_lm = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, 1167 - drm_enc->base.id, DPU_HW_BLK_LM, hw_lm, ARRAY_SIZE(hw_lm)); 1168 - dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, 1169 - drm_enc->base.id, DPU_HW_BLK_DSPP, hw_dspp, 1170 - ARRAY_SIZE(hw_dspp)); 1171 1132 1172 1133 for (i = 0; i < MAX_CHANNELS_PER_ENC; i++) 1173 1134 dpu_enc->hw_pp[i] = i < num_pp ? to_dpu_hw_pingpong(hw_pp[i]) ··· 1188 1159 dpu_enc->cur_master->hw_cdm = hw_cdm ? 
to_dpu_hw_cdm(hw_cdm) : NULL; 1189 1160 } 1190 1161 1191 - cstate = to_dpu_crtc_state(crtc_state); 1192 - 1193 - for (i = 0; i < num_lm; i++) { 1194 - int ctl_idx = (i < num_ctl) ? i : (num_ctl-1); 1195 - 1196 - cstate->mixers[i].hw_lm = to_dpu_hw_mixer(hw_lm[i]); 1197 - cstate->mixers[i].lm_ctl = to_dpu_hw_ctl(hw_ctl[ctl_idx]); 1198 - cstate->mixers[i].hw_dspp = to_dpu_hw_dspp(hw_dspp[i]); 1199 - } 1200 - 1201 - cstate->num_mixers = num_lm; 1202 - 1203 1162 for (i = 0; i < dpu_enc->num_phys_encs; i++) { 1204 1163 struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i]; 1205 1164 1206 - if (!dpu_enc->hw_pp[i]) { 1165 + phys->hw_pp = dpu_enc->hw_pp[i]; 1166 + if (!phys->hw_pp) { 1207 1167 DPU_ERROR_ENC(dpu_enc, 1208 1168 "no pp block assigned at idx: %d\n", i); 1209 1169 return; 1210 1170 } 1211 1171 1212 - if (!hw_ctl[i]) { 1172 + phys->hw_ctl = i < num_ctl ? to_dpu_hw_ctl(hw_ctl[i]) : NULL; 1173 + if (!phys->hw_ctl) { 1213 1174 DPU_ERROR_ENC(dpu_enc, 1214 1175 "no ctl block assigned at idx: %d\n", i); 1215 1176 return; 1216 1177 } 1217 - 1218 - phys->hw_pp = dpu_enc->hw_pp[i]; 1219 - phys->hw_ctl = to_dpu_hw_ctl(hw_ctl[i]); 1220 1178 1221 1179 phys->cached_mode = crtc_state->adjusted_mode; 1222 1180 if (phys->ops.atomic_mode_set)
+5 -2
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
··· 302 302 intf_cfg.stream_sel = 0; /* Don't care value for video mode */ 303 303 intf_cfg.mode_3d = dpu_encoder_helper_get_3d_blend_mode(phys_enc); 304 304 intf_cfg.dsc = dpu_encoder_helper_get_dsc(phys_enc); 305 - if (phys_enc->hw_pp->merge_3d) 305 + if (intf_cfg.mode_3d && phys_enc->hw_pp->merge_3d) 306 306 intf_cfg.merge_3d = phys_enc->hw_pp->merge_3d->idx; 307 307 308 308 spin_lock_irqsave(phys_enc->enc_spinlock, lock_flags); ··· 440 440 struct dpu_hw_ctl *ctl; 441 441 const struct msm_format *fmt; 442 442 u32 fmt_fourcc; 443 + u32 mode_3d; 443 444 444 445 ctl = phys_enc->hw_ctl; 445 446 fmt_fourcc = dpu_encoder_get_drm_fmt(phys_enc); 446 447 fmt = mdp_get_format(&phys_enc->dpu_kms->base, fmt_fourcc, 0); 448 + mode_3d = dpu_encoder_helper_get_3d_blend_mode(phys_enc); 447 449 448 450 DPU_DEBUG_VIDENC(phys_enc, "\n"); 449 451 ··· 468 466 goto skip_flush; 469 467 470 468 ctl->ops.update_pending_flush_intf(ctl, phys_enc->hw_intf->idx); 471 - if (ctl->ops.update_pending_flush_merge_3d && phys_enc->hw_pp->merge_3d) 469 + if (mode_3d && ctl->ops.update_pending_flush_merge_3d && 470 + phys_enc->hw_pp->merge_3d) 472 471 ctl->ops.update_pending_flush_merge_3d(ctl, phys_enc->hw_pp->merge_3d->idx); 473 472 474 473 if (ctl->ops.update_pending_flush_cdm && phys_enc->hw_cdm)
+4 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
··· 275 275 struct dpu_hw_pingpong *hw_pp; 276 276 struct dpu_hw_cdm *hw_cdm; 277 277 u32 pending_flush = 0; 278 + u32 mode_3d; 278 279 279 280 if (!phys_enc) 280 281 return; ··· 284 283 hw_pp = phys_enc->hw_pp; 285 284 hw_ctl = phys_enc->hw_ctl; 286 285 hw_cdm = phys_enc->hw_cdm; 286 + mode_3d = dpu_encoder_helper_get_3d_blend_mode(phys_enc); 287 287 288 288 DPU_DEBUG("[wb:%d]\n", hw_wb->idx - WB_0); 289 289 ··· 296 294 if (hw_ctl->ops.update_pending_flush_wb) 297 295 hw_ctl->ops.update_pending_flush_wb(hw_ctl, hw_wb->idx); 298 296 299 - if (hw_ctl->ops.update_pending_flush_merge_3d && hw_pp && hw_pp->merge_3d) 297 + if (mode_3d && hw_ctl->ops.update_pending_flush_merge_3d && 298 + hw_pp && hw_pp->merge_3d) 300 299 hw_ctl->ops.update_pending_flush_merge_3d(hw_ctl, 301 300 hw_pp->merge_3d->idx); 302 301
+10 -9
drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
··· 26 26 end_addr = base_addr + aligned_len; 27 27 28 28 if (!(*reg)) 29 - *reg = kzalloc(len_padded, GFP_KERNEL); 29 + *reg = kvzalloc(len_padded, GFP_KERNEL); 30 30 31 31 if (*reg) 32 32 dump_addr = *reg; ··· 48 48 } 49 49 } 50 50 51 - static void msm_disp_state_print_regs(u32 **reg, u32 len, void __iomem *base_addr, 52 - struct drm_printer *p) 51 + static void msm_disp_state_print_regs(const u32 *dump_addr, u32 len, 52 + void __iomem *base_addr, struct drm_printer *p) 53 53 { 54 54 int i; 55 - u32 *dump_addr = NULL; 56 55 void __iomem *addr; 57 56 u32 num_rows; 58 57 58 + if (!dump_addr) { 59 + drm_printf(p, "Registers not stored\n"); 60 + return; 61 + } 62 + 59 63 addr = base_addr; 60 64 num_rows = len / REG_DUMP_ALIGN; 61 - 62 - if (*reg) 63 - dump_addr = *reg; 64 65 65 66 for (i = 0; i < num_rows; i++) { 66 67 drm_printf(p, "0x%lx : %08x %08x %08x %08x\n", ··· 90 89 91 90 list_for_each_entry_safe(block, tmp, &state->blocks, node) { 92 91 drm_printf(p, "====================%s================\n", block->name); 93 - msm_disp_state_print_regs(&block->state, block->size, block->base_addr, p); 92 + msm_disp_state_print_regs(block->state, block->size, block->base_addr, p); 94 93 } 95 94 96 95 drm_printf(p, "===================dpu drm state================\n"); ··· 162 161 163 162 list_for_each_entry_safe(block, tmp, &disp_state->blocks, node) { 164 163 list_del(&block->node); 165 - kfree(block->state); 164 + kvfree(block->state); 166 165 kfree(block); 167 166 } 168 167
+2 -2
drivers/gpu/drm/msm/dsi/dsi_host.c
··· 542 542 543 543 int new_htotal = mode->htotal - mode->hdisplay + new_hdisplay; 544 544 545 - return new_htotal * mode->vtotal * drm_mode_vrefresh(mode); 545 + return mult_frac(mode->clock * 1000u, new_htotal, mode->htotal); 546 546 } 547 547 548 548 static unsigned long dsi_get_pclk_rate(const struct drm_display_mode *mode, ··· 550 550 { 551 551 unsigned long pclk_rate; 552 552 553 - pclk_rate = mode->clock * 1000; 553 + pclk_rate = mode->clock * 1000u; 554 554 555 555 if (dsc) 556 556 pclk_rate = dsi_adjust_pclk_for_compression(mode, dsc);
-9
drivers/gpu/drm/msm/hdmi/hdmi_phy_8998.c
··· 153 153 return dividend - 1; 154 154 } 155 155 156 - static inline u64 pll_cmp_to_fdata(u32 pll_cmp, unsigned long ref_clk) 157 - { 158 - u64 fdata = ((u64)pll_cmp) * ref_clk * 10; 159 - 160 - do_div(fdata, HDMI_PLL_CMP_CNT); 161 - 162 - return fdata; 163 - } 164 - 165 156 #define HDMI_REF_CLOCK_HZ ((u64)19200000) 166 157 #define HDMI_MHZ_TO_HZ ((u64)1000000) 167 158 static int pll_get_post_div(struct hdmi_8998_post_divider *pd, u64 bclk)
+6 -6
drivers/gpu/drm/panel/panel-himax-hx83102.c
··· 298 298 msleep(60); 299 299 300 300 hx83102_enable_extended_cmds(&dsi_ctx, true); 301 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83102_SETPOWER, 0x2c, 0xed, 0xed, 0x0f, 0xcf, 0x42, 301 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83102_SETPOWER, 0x2c, 0xed, 0xed, 0x27, 0xe7, 0x52, 302 302 0xf5, 0x39, 0x36, 0x36, 0x36, 0x36, 0x32, 0x8b, 0x11, 0x65, 0x00, 0x88, 303 303 0xfa, 0xff, 0xff, 0x8f, 0xff, 0x08, 0xd6, 0x33); 304 304 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83102_SETDISP, 0x00, 0x47, 0xb0, 0x80, 0x00, 0x12, ··· 343 343 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xa0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 344 344 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 345 345 0x00, 0x00, 0x00, 0x00, 0x00, 0x00); 346 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83102_SETGMA, 0x04, 0x04, 0x06, 0x0a, 0x0a, 0x05, 347 - 0x12, 0x14, 0x17, 0x13, 0x2c, 0x33, 0x39, 0x4b, 0x4c, 0x56, 0x61, 0x78, 348 - 0x7a, 0x41, 0x50, 0x68, 0x73, 0x04, 0x04, 0x06, 0x0a, 0x0a, 0x05, 0x12, 349 - 0x14, 0x17, 0x13, 0x2c, 0x33, 0x39, 0x4b, 0x4c, 0x56, 0x61, 0x78, 0x7a, 350 - 0x41, 0x50, 0x68, 0x73); 346 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83102_SETGMA, 0x00, 0x07, 0x10, 0x17, 0x1c, 0x33, 347 + 0x48, 0x50, 0x57, 0x50, 0x68, 0x6e, 0x71, 0x7f, 0x81, 0x8a, 0x8e, 0x9b, 348 + 0x9c, 0x4d, 0x56, 0x5d, 0x73, 0x00, 0x07, 0x10, 0x17, 0x1c, 0x33, 0x48, 349 + 0x50, 0x57, 0x50, 0x68, 0x6e, 0x71, 0x7f, 0x81, 0x8a, 0x8e, 0x9b, 0x9c, 350 + 0x4d, 0x56, 0x5d, 0x73); 351 351 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, HX83102_SETTP1, 0x07, 0x10, 0x10, 0x1a, 0x26, 0x9e, 352 352 0x00, 0x4f, 0xa0, 0x14, 0x14, 0x00, 0x00, 0x00, 0x00, 0x12, 0x0a, 0x02, 353 353 0x02, 0x00, 0x33, 0x02, 0x04, 0x18, 0x01);
+1 -1
drivers/gpu/drm/radeon/radeon_encoders.c
··· 43 43 struct radeon_device *rdev = dev->dev_private; 44 44 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 45 45 struct drm_encoder *clone_encoder; 46 - uint32_t index_mask = 0; 46 + uint32_t index_mask = drm_encoder_mask(encoder); 47 47 int count; 48 48 49 49 /* DIG routing gets problematic */
+2 -4
drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
··· 635 635 kunmap_atomic(d.src_addr); 636 636 if (d.dst_addr) 637 637 kunmap_atomic(d.dst_addr); 638 - if (src_pages) 639 - kvfree(src_pages); 640 - if (dst_pages) 641 - kvfree(dst_pages); 638 + kvfree(src_pages); 639 + kvfree(dst_pages); 642 640 643 641 return ret; 644 642 }
+2 -2
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 62 62 #define VMWGFX_DRIVER_MINOR 20 63 63 #define VMWGFX_DRIVER_PATCHLEVEL 0 64 64 #define VMWGFX_FIFO_STATIC_SIZE (1024*1024) 65 - #define VMWGFX_MAX_DISPLAYS 16 65 + #define VMWGFX_NUM_DISPLAY_UNITS 8 66 66 #define VMWGFX_CMD_BOUNCE_INIT_SIZE 32768 67 67 68 68 #define VMWGFX_MIN_INITIAL_WIDTH 1280 ··· 82 82 #define VMWGFX_NUM_GB_CONTEXT 256 83 83 #define VMWGFX_NUM_GB_SHADER 20000 84 84 #define VMWGFX_NUM_GB_SURFACE 32768 85 - #define VMWGFX_NUM_GB_SCREEN_TARGET VMWGFX_MAX_DISPLAYS 85 + #define VMWGFX_NUM_GB_SCREEN_TARGET VMWGFX_NUM_DISPLAY_UNITS 86 86 #define VMWGFX_NUM_DXCONTEXT 256 87 87 #define VMWGFX_NUM_DXQUERY 512 88 88 #define VMWGFX_NUM_MOB (VMWGFX_NUM_GB_CONTEXT +\
+4 -30
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 1283 1283 { 1284 1284 struct drm_device *dev = &dev_priv->drm; 1285 1285 struct vmw_framebuffer_surface *vfbs; 1286 - enum SVGA3dSurfaceFormat format; 1287 1286 struct vmw_surface *surface; 1288 1287 int ret; 1289 1288 ··· 1316 1317 surface->metadata.base_size.depth != 1)) { 1317 1318 DRM_ERROR("Incompatible surface dimensions " 1318 1319 "for requested mode.\n"); 1319 - return -EINVAL; 1320 - } 1321 - 1322 - switch (mode_cmd->pixel_format) { 1323 - case DRM_FORMAT_ARGB8888: 1324 - format = SVGA3D_A8R8G8B8; 1325 - break; 1326 - case DRM_FORMAT_XRGB8888: 1327 - format = SVGA3D_X8R8G8B8; 1328 - break; 1329 - case DRM_FORMAT_RGB565: 1330 - format = SVGA3D_R5G6B5; 1331 - break; 1332 - case DRM_FORMAT_XRGB1555: 1333 - format = SVGA3D_A1R5G5B5; 1334 - break; 1335 - default: 1336 - DRM_ERROR("Invalid pixel format: %p4cc\n", 1337 - &mode_cmd->pixel_format); 1338 - return -EINVAL; 1339 - } 1340 - 1341 - /* 1342 - * For DX, surface format validation is done when surface->scanout 1343 - is set. 1344 - */ 1345 - if (!has_sm4_context(dev_priv) && format != surface->metadata.format) { 1346 - DRM_ERROR("Invalid surface format for requested mode.\n"); 1347 1320 return -EINVAL; 1348 1321 } 1349 1322 ··· 1510 1539 DRM_ERROR("Surface size cannot exceed %dx%d\n", 1511 1540 dev_priv->texture_max_width, 1512 1541 dev_priv->texture_max_height); 1542 + ret = -EINVAL; 1513 1543 goto err_out; 1514 1544 } 1515 1545 ··· 2197 2225 struct drm_mode_config *mode_config = &dev->mode_config; 2198 2226 struct drm_vmw_update_layout_arg *arg = 2199 2227 (struct drm_vmw_update_layout_arg *)data; 2200 - void __user *user_rects; 2228 + const void __user *user_rects; 2201 2229 struct drm_vmw_rect *rects; 2202 2230 struct drm_rect *drm_rects; 2203 2231 unsigned rects_size; ··· 2209 2237 VMWGFX_MIN_INITIAL_HEIGHT}; 2210 2238 vmw_du_update_layout(dev_priv, 1, &def_rect); 2211 2239 return 0; 2240 + } else if (arg->num_outputs > VMWGFX_NUM_DISPLAY_UNITS) { 2241 + return -E2BIG; 2212 2242 } 2213 2243 2214 2244 rects_size = arg->num_outputs * sizeof(struct drm_vmw_rect);
-3
drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
··· 199 199 s32 unit_y2; 200 200 }; 201 201 202 - #define VMWGFX_NUM_DISPLAY_UNITS 8 203 - 204 - 205 202 #define vmw_framebuffer_to_vfb(x) \ 206 203 container_of(x, struct vmw_framebuffer, base) 207 204 #define vmw_framebuffer_to_vfbs(x) \
+4
drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
··· 886 886 struct drm_crtc_state *new_crtc_state; 887 887 888 888 conn_state = drm_atomic_get_connector_state(state, conn); 889 + 890 + if (IS_ERR(conn_state)) 891 + return PTR_ERR(conn_state); 892 + 889 893 du = vmw_connector_to_stdu(conn); 890 894 891 895 if (!conn_state->crtc)
+6 -3
drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
··· 2276 2276 const struct SVGA3dSurfaceDesc *desc = vmw_surface_get_desc(format); 2277 2277 SVGA3dSurfaceAllFlags flags = SVGA3D_SURFACE_HINT_TEXTURE | 2278 2278 SVGA3D_SURFACE_HINT_RENDERTARGET | 2279 - SVGA3D_SURFACE_SCREENTARGET | 2280 - SVGA3D_SURFACE_BIND_SHADER_RESOURCE | 2281 - SVGA3D_SURFACE_BIND_RENDER_TARGET; 2279 + SVGA3D_SURFACE_SCREENTARGET; 2280 + 2281 + if (vmw_surface_is_dx_screen_target_format(format)) { 2282 + flags |= SVGA3D_SURFACE_BIND_SHADER_RESOURCE | 2283 + SVGA3D_SURFACE_BIND_RENDER_TARGET; 2284 + } 2282 2285 2283 2286 /* 2284 2287 * Without mob support we're just going to use raw memory buffer
-3
drivers/gpu/drm/xe/regs/xe_gt_regs.h
··· 393 393 394 394 #define XE2_GLOBAL_INVAL XE_REG(0xb404) 395 395 396 - #define SCRATCH1LPFC XE_REG(0xb474) 397 - #define EN_L3_RW_CCS_CACHE_FLUSH REG_BIT(0) 398 - 399 396 #define XE2LPM_L3SQCREG2 XE_REG_MCR(0xb604) 400 397 401 398 #define XE2LPM_L3SQCREG3 XE_REG_MCR(0xb608)
+2 -2
drivers/gpu/drm/xe/xe_device.c
··· 980 980 return; 981 981 } 982 982 983 + xe_pm_runtime_get_noresume(xe); 984 + 983 985 if (drmm_add_action_or_reset(&xe->drm, xe_device_wedged_fini, xe)) { 984 986 drm_err(&xe->drm, "Failed to register xe_device_wedged_fini clean-up. Although device is wedged.\n"); 985 987 return; 986 988 } 987 - 988 - xe_pm_runtime_get_noresume(xe); 989 989 990 990 if (!atomic_xchg(&xe->wedged.flag, 1)) { 991 991 xe->needs_flr_on_fini = true;
+4 -8
drivers/gpu/drm/xe/xe_exec.c
··· 41 41 * user knows an exec writes to a BO and reads from the BO in the next exec, it 42 42 * is the user's responsibility to pass in / out fence between the two execs). 43 43 * 44 - * Implicit dependencies for external BOs are handled by using the dma-buf 45 - * implicit dependency uAPI (TODO: add link). To make this works each exec must 46 - * install the job's fence into the DMA_RESV_USAGE_WRITE slot of every external 47 - * BO mapped in the VM. 48 - * 49 44 * We do not allow a user to trigger a bind at exec time rather we have a VM 50 45 * bind IOCTL which uses the same in / out fence interface as exec. In that 51 46 * sense, a VM bind is basically the same operation as an exec from the user ··· 54 59 * behind any pending kernel operations on any external BOs in VM or any BOs 55 60 * private to the VM. This is accomplished by the rebinds waiting on BOs 56 61 * DMA_RESV_USAGE_KERNEL slot (kernel ops) and kernel ops waiting on all BOs 57 - * slots (inflight execs are in the DMA_RESV_USAGE_BOOKING for private BOs and 58 - * in DMA_RESV_USAGE_WRITE for external BOs). 62 + * slots (inflight execs are in the DMA_RESV_USAGE_BOOKKEEP for private BOs and 63 + * for external BOs). 59 64 * 60 65 * Rebinds / dma-resv usage applies to non-compute mode VMs only as for compute 61 66 * mode VMs we use preempt fences and a rebind worker (TODO: add link). ··· 299 304 xe_sched_job_arm(job); 300 305 if (!xe_vm_in_lr_mode(vm)) 301 306 drm_gpuvm_resv_add_fence(&vm->gpuvm, exec, &job->drm.s_fence->finished, 302 - DMA_RESV_USAGE_BOOKKEEP, DMA_RESV_USAGE_WRITE); 307 + DMA_RESV_USAGE_BOOKKEEP, 308 + DMA_RESV_USAGE_BOOKKEEP); 303 309 304 310 for (i = 0; i < num_syncs; i++) { 305 311 xe_sync_entry_signal(&syncs[i], &job->drm.s_fence->finished);
+2
drivers/gpu/drm/xe/xe_gpu_scheduler.h
··· 63 63 static inline void xe_sched_add_pending_job(struct xe_gpu_scheduler *sched, 64 64 struct xe_sched_job *job) 65 65 { 66 + spin_lock(&sched->base.job_list_lock); 66 67 list_add(&job->drm.list, &sched->base.pending_list); 68 + spin_unlock(&sched->base.job_list_lock); 67 69 } 68 70 69 71 static inline
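The xe_gpu_scheduler fix above takes the scheduler's job_list_lock around the list_add(), since the pending list is shared with other scheduler paths. A minimal userspace sketch of the pattern, with hypothetical stand-in types and a pthread mutex playing the role of the spinlock:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Hypothetical stand-ins for the xe scheduler types. */
struct job {
	struct job *next;
};

struct scheduler {
	pthread_mutex_t job_list_lock;	/* plays the role of the spinlock */
	struct job *pending_list;
};

/* As in the hunk above: mutate the shared pending list only under the lock. */
static void sched_add_pending_job(struct scheduler *sched, struct job *job)
{
	pthread_mutex_lock(&sched->job_list_lock);
	job->next = sched->pending_list;
	sched->pending_list = job;
	pthread_mutex_unlock(&sched->job_list_lock);
}

static int demo(void)
{
	struct scheduler sched = { PTHREAD_MUTEX_INITIALIZER, NULL };
	struct job a = { NULL }, b = { NULL };
	int count = 0;

	sched_add_pending_job(&sched, &a);
	sched_add_pending_job(&sched, &b);
	for (struct job *j = sched.pending_list; j; j = j->next)
		count++;
	return count;
}
```

Unlocked list mutation is exactly the kind of race the one-line kernel fix closes; the sketch only shows the locking discipline, not the race itself.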
-1
drivers/gpu/drm/xe/xe_gt.c
··· 108 108 return; 109 109 110 110 if (!xe_gt_is_media_type(gt)) { 111 - xe_mmio_write32(gt, SCRATCH1LPFC, EN_L3_RW_CCS_CACHE_FLUSH); 112 111 reg = xe_gt_mcr_unicast_read_any(gt, XE2_GAMREQSTRM_CTRL); 113 112 reg |= CG_DIS_CNTLBUS; 114 113 xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg);
+13 -16
drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
··· 37 37 return hw_tlb_timeout + 2 * delay; 38 38 } 39 39 40 + static void xe_gt_tlb_invalidation_fence_fini(struct xe_gt_tlb_invalidation_fence *fence) 41 + { 42 + if (WARN_ON_ONCE(!fence->gt)) 43 + return; 44 + 45 + xe_pm_runtime_put(gt_to_xe(fence->gt)); 46 + fence->gt = NULL; /* fini() should be called once */ 47 + } 48 + 40 49 static void 41 50 __invalidation_fence_signal(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence) 42 51 { ··· 213 204 tlb_timeout_jiffies(gt)); 214 205 } 215 206 spin_unlock_irq(&gt->tlb_invalidation.pending_lock); 216 - } else if (ret < 0) { 207 + } else { 217 208 __invalidation_fence_signal(xe, fence); 218 209 } 219 210 if (!ret) { ··· 276 267 277 268 xe_gt_tlb_invalidation_fence_init(gt, &fence, true); 278 269 ret = xe_gt_tlb_invalidation_guc(gt, &fence); 279 - if (ret < 0) { 280 - xe_gt_tlb_invalidation_fence_fini(&fence); 270 + if (ret) 281 271 return ret; 282 - } 283 272 284 273 xe_gt_tlb_invalidation_fence_wait(&fence); 285 274 } else if (xe_device_uc_enabled(xe) && !xe_device_wedged(xe)) { ··· 503 496 * @stack: fence is stack variable 504 497 * 505 498 * Initialize TLB invalidation fence for use. xe_gt_tlb_invalidation_fence_fini 506 - * must be called if fence is not signaled. 499 + * will be automatically called when fence is signalled (all fences must signal), 500 + * even on error. 507 501 */ 508 502 void xe_gt_tlb_invalidation_fence_init(struct xe_gt *gt, 509 503 struct xe_gt_tlb_invalidation_fence *fence, ··· 523 515 else 524 516 dma_fence_get(&fence->base); 525 517 fence->gt = gt; 526 - } 527 - 528 - /** 529 - * xe_gt_tlb_invalidation_fence_fini - Finalize TLB invalidation fence 530 - * @fence: TLB invalidation fence to finalize 531 - * 532 - * Drop PM ref which fence took durinig init. 533 - */ 534 - void xe_gt_tlb_invalidation_fence_fini(struct xe_gt_tlb_invalidation_fence *fence) 535 - { 536 - xe_pm_runtime_put(gt_to_xe(fence->gt)); 537 518 }
-1
drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
··· 28 28 void xe_gt_tlb_invalidation_fence_init(struct xe_gt *gt, 29 29 struct xe_gt_tlb_invalidation_fence *fence, 30 30 bool stack); 31 - void xe_gt_tlb_invalidation_fence_fini(struct xe_gt_tlb_invalidation_fence *fence); 32 31 33 32 static inline void 34 33 xe_gt_tlb_invalidation_fence_wait(struct xe_gt_tlb_invalidation_fence *fence)
+5 -2
drivers/gpu/drm/xe/xe_guc_submit.c
··· 1030 1030 1031 1031 /* 1032 1032 * TDR has fired before free job worker. Common if exec queue 1033 - * immediately closed after last fence signaled. 1033 + * immediately closed after last fence signaled. Add back to pending 1034 + * list so job can be freed and kick scheduler ensuring free job is not 1035 + * lost. 1034 1036 */ 1035 1037 if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &job->fence->flags)) { 1036 - guc_exec_queue_free_job(drm_job); 1038 + xe_sched_add_pending_job(sched, job); 1039 + xe_sched_submission_start(sched); 1037 1040 1038 1041 return DRM_GPU_SCHED_STAT_NOMINAL; 1039 1042 }
+5 -1
drivers/gpu/drm/xe/xe_query.c
··· 161 161 cpu_clock); 162 162 163 163 xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL); 164 - resp.width = 36; 164 + 165 + if (GRAPHICS_VER(xe) >= 20) 166 + resp.width = 64; 167 + else 168 + resp.width = 36; 165 169 166 170 /* Only write to the output fields of user query */ 167 171 if (put_user(resp.cpu_timestamp, &query_ptr->cpu_timestamp))
+1 -1
drivers/gpu/drm/xe/xe_sync.c
··· 58 58 if (!access_ok(ptr, sizeof(*ptr))) 59 59 return ERR_PTR(-EFAULT); 60 60 61 - ufence = kmalloc(sizeof(*ufence), GFP_KERNEL); 61 + ufence = kzalloc(sizeof(*ufence), GFP_KERNEL); 62 62 if (!ufence) 63 63 return ERR_PTR(-ENOMEM); 64 64
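The one-character xe_sync change above swaps kmalloc() for kzalloc() so the user-fence structure starts out zeroed rather than holding stale heap contents. A userspace sketch of the same guarantee, assuming a simplified stand-in struct and using calloc() as the kzalloc() analog:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for the driver's user-fence structure. */
struct user_fence {
	unsigned long flags;
	void *addr;
	unsigned long value;
};

static int demo(void)
{
	/* calloc(), like kzalloc(), returns fully zeroed memory, so fields
	 * that are never explicitly written cannot leak stale data. */
	struct user_fence *ufence = calloc(1, sizeof(*ufence));
	int ok;

	if (!ufence)
		return -1;
	ok = ufence->flags == 0 && ufence->addr == NULL && ufence->value == 0;
	free(ufence);
	return ok;
}
```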
+2 -6
drivers/gpu/drm/xe/xe_vm.c
··· 3199 3199 3200 3200 ret = xe_gt_tlb_invalidation_vma(tile->primary_gt, 3201 3201 &fence[fence_id], vma); 3202 - if (ret < 0) { 3203 - xe_gt_tlb_invalidation_fence_fini(&fence[fence_id]); 3202 + if (ret) 3204 3203 goto wait; 3205 - } 3206 3204 ++fence_id; 3207 3205 3208 3206 if (!tile->media_gt) ··· 3212 3214 3213 3215 ret = xe_gt_tlb_invalidation_vma(tile->media_gt, 3214 3216 &fence[fence_id], vma); 3215 - if (ret < 0) { 3216 - xe_gt_tlb_invalidation_fence_fini(&fence[fence_id]); 3217 + if (ret) 3217 3218 goto wait; 3218 - } 3219 3219 ++fence_id; 3220 3220 } 3221 3221 }
+4
drivers/gpu/drm/xe/xe_wa.c
··· 710 710 DIS_PARTIAL_AUTOSTRIP | 711 711 DIS_AUTOSTRIP)) 712 712 }, 713 + { XE_RTP_NAME("15016589081"), 714 + XE_RTP_RULES(GRAPHICS_VERSION(2004), ENGINE_CLASS(RENDER)), 715 + XE_RTP_ACTIONS(SET(CHICKEN_RASTER_1, DIS_CLIP_NEGATIVE_BOUNDING_BOX)) 716 + }, 713 717 714 718 /* Xe2_HPG */ 715 719 { XE_RTP_NAME("15010599737"),
-3
drivers/gpu/drm/xe/xe_wait_user_fence.c
··· 169 169 args->timeout = 0; 170 170 } 171 171 172 - if (!timeout && !(err < 0)) 173 - err = -ETIME; 174 - 175 172 if (q) 176 173 xe_exec_queue_put(q); 177 174
+1
drivers/gpu/host1x/context.c
··· 58 58 ctx->dev.parent = host1x->dev; 59 59 ctx->dev.release = host1x_memory_context_release; 60 60 61 + ctx->dev.dma_parms = &ctx->dma_parms; 61 62 dma_set_max_seg_size(&ctx->dev, UINT_MAX); 62 63 63 64 err = device_add(&ctx->dev);
+8 -10
drivers/gpu/host1x/dev.c
··· 625 625 goto free_contexts; 626 626 } 627 627 628 - err = host1x_intr_init(host); 629 - if (err) { 630 - dev_err(&pdev->dev, "failed to initialize interrupts\n"); 631 - goto deinit_syncpt; 632 - } 633 - 634 628 pm_runtime_enable(&pdev->dev); 635 629 636 630 err = devm_tegra_core_dev_init_opp_table_common(&pdev->dev); ··· 635 641 err = pm_runtime_resume_and_get(&pdev->dev); 636 642 if (err) 637 643 goto pm_disable; 644 + 645 + err = host1x_intr_init(host); 646 + if (err) { 647 + dev_err(&pdev->dev, "failed to initialize interrupts\n"); 648 + goto pm_put; 649 + } 638 650 639 651 host1x_debug_init(host); 640 652 ··· 658 658 host1x_unregister(host); 659 659 deinit_debugfs: 660 660 host1x_debug_deinit(host); 661 - 661 + host1x_intr_deinit(host); 662 + pm_put: 662 663 pm_runtime_put_sync_suspend(&pdev->dev); 663 664 pm_disable: 664 665 pm_runtime_disable(&pdev->dev); 665 - 666 - host1x_intr_deinit(host); 667 - deinit_syncpt: 668 666 host1x_syncpt_deinit(host); 669 667 free_contexts: 670 668 host1x_memory_context_list_free(&host->context_list);
+2
drivers/hid/hid-ids.h
··· 509 509 #define I2C_DEVICE_ID_GOODIX_01E8 0x01e8 510 510 #define I2C_DEVICE_ID_GOODIX_01E9 0x01e9 511 511 #define I2C_DEVICE_ID_GOODIX_01F0 0x01f0 512 + #define I2C_DEVICE_ID_GOODIX_0D42 0x0d42 512 513 513 514 #define USB_VENDOR_ID_GOODTOUCH 0x1aad 514 515 #define USB_DEVICE_ID_GOODTOUCH_000f 0x000f ··· 869 868 #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1 0xc539 870 869 #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1_1 0xc53f 871 870 #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_POWERPLAY 0xc53a 871 + #define USB_DEVICE_ID_LOGITECH_BOLT_RECEIVER 0xc548 872 872 #define USB_DEVICE_ID_SPACETRAVELLER 0xc623 873 873 #define USB_DEVICE_ID_SPACENAVIGATOR 0xc626 874 874 #define USB_DEVICE_ID_DINOVO_DESKTOP 0xc704
+8
drivers/hid/hid-lenovo.c
··· 473 473 return lenovo_input_mapping_tp10_ultrabook_kbd(hdev, hi, field, 474 474 usage, bit, max); 475 475 case USB_DEVICE_ID_LENOVO_X1_TAB: 476 + case USB_DEVICE_ID_LENOVO_X1_TAB3: 476 477 return lenovo_input_mapping_x1_tab_kbd(hdev, hi, field, usage, bit, max); 477 478 default: 478 479 return 0; ··· 584 583 break; 585 584 case USB_DEVICE_ID_LENOVO_TP10UBKBD: 586 585 case USB_DEVICE_ID_LENOVO_X1_TAB: 586 + case USB_DEVICE_ID_LENOVO_X1_TAB3: 587 587 ret = lenovo_led_set_tp10ubkbd(hdev, TP10UBKBD_FN_LOCK_LED, value); 588 588 if (ret) 589 589 return ret; ··· 778 776 return lenovo_event_cptkbd(hdev, field, usage, value); 779 777 case USB_DEVICE_ID_LENOVO_TP10UBKBD: 780 778 case USB_DEVICE_ID_LENOVO_X1_TAB: 779 + case USB_DEVICE_ID_LENOVO_X1_TAB3: 781 780 return lenovo_event_tp10ubkbd(hdev, field, usage, value); 782 781 default: 783 782 return 0; ··· 1059 1056 break; 1060 1057 case USB_DEVICE_ID_LENOVO_TP10UBKBD: 1061 1058 case USB_DEVICE_ID_LENOVO_X1_TAB: 1059 + case USB_DEVICE_ID_LENOVO_X1_TAB3: 1062 1060 ret = lenovo_led_set_tp10ubkbd(hdev, tp10ubkbd_led[led_nr], value); 1063 1061 break; 1064 1062 } ··· 1290 1286 break; 1291 1287 case USB_DEVICE_ID_LENOVO_TP10UBKBD: 1292 1288 case USB_DEVICE_ID_LENOVO_X1_TAB: 1289 + case USB_DEVICE_ID_LENOVO_X1_TAB3: 1293 1290 ret = lenovo_probe_tp10ubkbd(hdev); 1294 1291 break; 1295 1292 default: ··· 1377 1372 break; 1378 1373 case USB_DEVICE_ID_LENOVO_TP10UBKBD: 1379 1374 case USB_DEVICE_ID_LENOVO_X1_TAB: 1375 + case USB_DEVICE_ID_LENOVO_X1_TAB3: 1380 1376 lenovo_remove_tp10ubkbd(hdev); 1381 1377 break; 1382 1378 } ··· 1427 1421 */ 1428 1422 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 1429 1423 USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB) }, 1424 + { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 1425 + USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB3) }, 1430 1426 { } 1431 1427 }; 1432 1428
+4
drivers/hid/hid-multitouch.c
··· 2146 2146 HID_DEVICE(BUS_BLUETOOTH, HID_GROUP_MULTITOUCH_WIN_8, 2147 2147 USB_VENDOR_ID_LOGITECH, 2148 2148 USB_DEVICE_ID_LOGITECH_CASA_TOUCHPAD) }, 2149 + { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU, 2150 + HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8, 2151 + USB_VENDOR_ID_LOGITECH, 2152 + USB_DEVICE_ID_LOGITECH_BOLT_RECEIVER) }, 2149 2153 2150 2154 /* MosArt panels */ 2151 2155 { .driver_data = MT_CLS_CONFIDENCE_MINUS_ONE,
+10
drivers/hid/i2c-hid/i2c-hid-core.c
··· 50 50 #define I2C_HID_QUIRK_BAD_INPUT_SIZE BIT(3) 51 51 #define I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET BIT(4) 52 52 #define I2C_HID_QUIRK_NO_SLEEP_ON_SUSPEND BIT(5) 53 + #define I2C_HID_QUIRK_DELAY_WAKEUP_AFTER_RESUME BIT(6) 53 54 54 55 /* Command opcodes */ 55 56 #define I2C_HID_OPCODE_RESET 0x01 ··· 141 140 { USB_VENDOR_ID_ELAN, HID_ANY_ID, 142 141 I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET | 143 142 I2C_HID_QUIRK_BOGUS_IRQ }, 143 + { I2C_VENDOR_ID_GOODIX, I2C_DEVICE_ID_GOODIX_0D42, 144 + I2C_HID_QUIRK_DELAY_WAKEUP_AFTER_RESUME }, 144 145 { 0, 0 } 145 146 }; 146 147 ··· 983 980 ret); 984 981 return -ENXIO; 985 982 } 983 + 984 + /* On Goodix 27c6:0d42 wait extra time before device wakeup. 985 + * It's not clear why but if we send wakeup too early, the device will 986 + * never trigger input interrupts. 987 + */ 988 + if (ihid->quirks & I2C_HID_QUIRK_DELAY_WAKEUP_AFTER_RESUME) 989 + msleep(1500); 986 990 987 991 /* Instead of resetting device, simply powers the device on. This 988 992 * solves "incomplete reports" on Raydium devices 2386:3118 and
+1 -1
drivers/hwmon/jc42.c
··· 417 417 return -ENODEV; 418 418 419 419 if ((devid & TSE2004_DEVID_MASK) == TSE2004_DEVID && 420 - (cap & 0x00e7) != 0x00e7) 420 + (cap & 0x0062) != 0x0062) 421 421 return -ENODEV; 422 422 423 423 for (i = 0; i < ARRAY_SIZE(jc42_chips); i++) {
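The jc42 change above relaxes the capability test: `(cap & mask) != mask` rejects a device only when one of the *required* bits is clear, so shrinking the mask from 0x00e7 to 0x0062 stops rejecting chips that lack the no-longer-required bits. The idiom in isolation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* "(cap & required) == required" is true iff every required bit is set;
 * extra capability bits in cap are ignored. */
static bool has_all_caps(uint16_t cap, uint16_t required)
{
	return (cap & required) == required;
}
```

A device advertising 0x00e7 still passes the 0x0062 check, while one advertising only 0x0022 fails it.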
+2
drivers/iio/accel/Kconfig
··· 447 447 448 448 config IIO_KX022A 449 449 tristate 450 + select IIO_BUFFER 451 + select IIO_TRIGGERED_BUFFER 450 452 451 453 config IIO_KX022A_SPI 452 454 tristate "Kionix KX022A tri-axis digital accelerometer SPI interface"
+2 -1
drivers/iio/accel/bma400_core.c
··· 1218 1218 static int bma400_tap_event_en(struct bma400_data *data, 1219 1219 enum iio_event_direction dir, int state) 1220 1220 { 1221 - unsigned int mask, field_value; 1221 + unsigned int mask; 1222 + unsigned int field_value = 0; 1222 1223 int ret; 1223 1224 1224 1225 /*
+11
drivers/iio/adc/Kconfig
··· 52 52 tristate "Analog Device AD4695 ADC Driver" 53 53 depends on SPI 54 54 select REGMAP_SPI 55 + select IIO_BUFFER 56 + select IIO_TRIGGERED_BUFFER 55 57 help 56 58 Say yes here to build support for Analog Devices AD4695 and similar 57 59 analog to digital converters (ADC). ··· 330 328 config AD7944 331 329 tristate "Analog Devices AD7944 and similar ADCs driver" 332 330 depends on SPI 331 + select IIO_BUFFER 332 + select IIO_TRIGGERED_BUFFER 333 333 help 334 334 Say yes here to build support for Analog Devices 335 335 AD7944, AD7985, AD7986 ADCs. ··· 1485 1481 config TI_ADS8688 1486 1482 tristate "Texas Instruments ADS8688" 1487 1483 depends on SPI 1484 + select IIO_BUFFER 1485 + select IIO_TRIGGERED_BUFFER 1488 1486 help 1489 1487 If you say yes here you get support for Texas Instruments ADS8684 and 1490 1488 and ADS8688 ADC chips ··· 1497 1491 config TI_ADS124S08 1498 1492 tristate "Texas Instruments ADS124S08" 1499 1493 depends on SPI 1494 + select IIO_BUFFER 1495 + select IIO_TRIGGERED_BUFFER 1500 1496 help 1501 1497 If you say yes here you get support for Texas Instruments ADS124S08 1502 1498 and ADS124S06 ADC chips ··· 1533 1525 config TI_LMP92064 1534 1526 tristate "Texas Instruments LMP92064 ADC driver" 1535 1527 depends on SPI 1528 + select REGMAP_SPI 1529 + select IIO_BUFFER 1530 + select IIO_TRIGGERED_BUFFER 1536 1531 help 1537 1532 Say yes here to build support for the LMP92064 Precision Current and Voltage 1538 1533 sensor.
+1
drivers/iio/amplifiers/Kconfig
··· 27 27 config ADA4250 28 28 tristate "Analog Devices ADA4250 Instrumentation Amplifier" 29 29 depends on SPI 30 + select REGMAP_SPI 30 31 help 31 32 Say yes here to build support for Analog Devices ADA4250 32 33 SPI Amplifier's support. The driver provides direct access via
+2
drivers/iio/chemical/Kconfig
··· 80 80 tristate "ScioSense ENS160 sensor driver" 81 81 depends on (I2C || SPI) 82 82 select REGMAP 83 + select IIO_BUFFER 84 + select IIO_TRIGGERED_BUFFER 83 85 select ENS160_I2C if I2C 84 86 select ENS160_SPI if SPI 85 87 help
+1 -1
drivers/iio/common/hid-sensors/hid-sensor-trigger.c
··· 32 32 latency = integer * 1000 + fract / 1000; 33 33 ret = hid_sensor_set_report_latency(attrb, latency); 34 34 if (ret < 0) 35 - return len; 35 + return ret; 36 36 37 37 attrb->latency_ms = hid_sensor_get_report_latency(attrb); 38 38
+7
drivers/iio/dac/Kconfig
··· 9 9 config AD3552R 10 10 tristate "Analog Devices AD3552R DAC driver" 11 11 depends on SPI_MASTER 12 + select IIO_BUFFER 13 + select IIO_TRIGGERED_BUFFER 12 14 help 13 15 Say yes here to build support for Analog Devices AD3552R 14 16 Digital to Analog Converter. ··· 254 252 config AD5766 255 253 tristate "Analog Devices AD5766/AD5767 DAC driver" 256 254 depends on SPI_MASTER 255 + select IIO_BUFFER 256 + select IIO_TRIGGERED_BUFFER 257 257 help 258 258 Say yes here to build support for Analog Devices AD5766, AD5767 259 259 Digital to Analog Converter. ··· 266 262 config AD5770R 267 263 tristate "Analog Devices AD5770R IDAC driver" 268 264 depends on SPI_MASTER 265 + select REGMAP_SPI 269 266 help 270 267 Say yes here to build support for Analog Devices AD5770R Digital to 271 268 Analog Converter. ··· 358 353 config LTC1660 359 354 tristate "Linear Technology LTC1660/LTC1665 DAC SPI driver" 360 355 depends on SPI 356 + select REGMAP_SPI 361 357 help 362 358 Say yes here to build support for Linear Technology 363 359 LTC1660 and LTC1665 Digital to Analog Converters. ··· 489 483 490 484 config STM32_DAC_CORE 491 485 tristate 486 + select REGMAP_MMIO 492 487 493 488 config TI_DAC082S085 494 489 tristate "Texas Instruments 8/10/12-bit 2/4-channel DAC driver"
+9 -8
drivers/iio/dac/ltc2664.c
··· 516 516 const struct ltc2664_chip_info *chip_info = st->chip_info; 517 517 struct device *dev = &st->spi->dev; 518 518 u32 reg, tmp[2], mspan; 519 - int ret, span = 0; 519 + int ret; 520 520 521 521 mspan = LTC2664_MSPAN_SOFTSPAN; 522 522 ret = device_property_read_u32(dev, "adi,manual-span-operation-config", ··· 579 579 ret = fwnode_property_read_u32_array(child, "output-range-microvolt", 580 580 tmp, ARRAY_SIZE(tmp)); 581 581 if (!ret && mspan == LTC2664_MSPAN_SOFTSPAN) { 582 - chan->span = ltc2664_set_span(st, tmp[0] / 1000, 583 - tmp[1] / 1000, reg); 584 - if (span < 0) 585 - return dev_err_probe(dev, span, 582 + ret = ltc2664_set_span(st, tmp[0] / 1000, tmp[1] / 1000, reg); 583 + if (ret < 0) 584 + return dev_err_probe(dev, ret, 586 585 "Failed to set span\n"); 586 + chan->span = ret; 587 587 } 588 588 589 589 ret = fwnode_property_read_u32_array(child, "output-range-microamp", 590 590 tmp, ARRAY_SIZE(tmp)); 591 591 if (!ret) { 592 - chan->span = ltc2664_set_span(st, 0, tmp[1] / 1000, reg); 593 - if (span < 0) 594 - return dev_err_probe(dev, span, 592 + ret = ltc2664_set_span(st, 0, tmp[1] / 1000, reg); 593 + if (ret < 0) 594 + return dev_err_probe(dev, ret, 595 595 "Failed to set span\n"); 596 + chan->span = ret; 596 597 } 597 598 } 598 599
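The ltc2664 fix above corrects a copy-paste bug: the code called ltc2664_set_span() but then error-checked a stale `span` variable that was never assigned, so failures were silently stored into `chan->span`. A minimal sketch of the corrected pattern, with a hypothetical stand-in for the set-span call (negative errno-style code on failure, non-negative span code on success):

```c
#include <assert.h>

/* Hypothetical stand-in: returns a negative errno-style code on failure,
 * otherwise a non-negative span code. */
static int set_span(int min_uv, int max_uv)
{
	if (min_uv >= max_uv)
		return -22;		/* -EINVAL analog */
	return max_uv - min_uv;
}

/* Corrected pattern from the hunk above: test the value actually returned
 * by the call, and only store it on success. */
static int apply_span(int min_uv, int max_uv, int *chan_span)
{
	int ret = set_span(min_uv, max_uv);

	if (ret < 0)
		return ret;	/* propagate; *chan_span left untouched */
	*chan_span = ret;
	return 0;
}

static int demo(void)
{
	int span = 0;

	if (apply_span(0, 5000, &span) != 0 || span != 5000)
		return -1;
	if (apply_span(5000, 0, &span) != -22 || span != 5000)
		return -2;	/* an error must not clobber the stored span */
	return 1;
}
```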
+17 -15
drivers/iio/frequency/Kconfig
··· 53 53 config ADF4377 54 54 tristate "Analog Devices ADF4377 Microwave Wideband Synthesizer" 55 55 depends on SPI && COMMON_CLK 56 + select REGMAP_SPI 56 57 help 57 58 Say yes here to build support for Analog Devices ADF4377 Microwave 58 59 Wideband Synthesizer. ··· 92 91 module will be called admv1014. 93 92 94 93 config ADMV4420 95 - tristate "Analog Devices ADMV4420 K Band Downconverter" 96 - depends on SPI 97 - help 98 - Say yes here to build support for Analog Devices K Band 99 - Downconverter with integrated Fractional-N PLL and VCO. 94 + tristate "Analog Devices ADMV4420 K Band Downconverter" 95 + depends on SPI 96 + select REGMAP_SPI 97 + help 98 + Say yes here to build support for Analog Devices K Band 99 + Downconverter with integrated Fractional-N PLL and VCO. 100 100 101 - To compile this driver as a module, choose M here: the 102 - module will be called admv4420. 101 + To compile this driver as a module, choose M here: the 102 + module will be called admv4420. 103 103 104 104 config ADRF6780 105 - tristate "Analog Devices ADRF6780 Microwave Upconverter" 106 - depends on SPI 107 - depends on COMMON_CLK 108 - help 109 - Say yes here to build support for Analog Devices ADRF6780 110 - 5.9 GHz to 23.6 GHz, Wideband, Microwave Upconverter. 105 + tristate "Analog Devices ADRF6780 Microwave Upconverter" 106 + depends on SPI 107 + depends on COMMON_CLK 108 + help 109 + Say yes here to build support for Analog Devices ADRF6780 110 + 5.9 GHz to 23.6 GHz, Wideband, Microwave Upconverter. 111 111 112 - To compile this driver as a module, choose M here: the 113 - module will be called adrf6780. 112 + To compile this driver as a module, choose M here: the 113 + module will be called adrf6780. 114 114 115 115 endmenu 116 116 endmenu
+11 -12
drivers/iio/imu/bmi323/bmi323_core.c
··· 2172 2172 } 2173 2173 EXPORT_SYMBOL_NS_GPL(bmi323_core_probe, IIO_BMI323); 2174 2174 2175 - #if defined(CONFIG_PM) 2176 2175 static int bmi323_core_runtime_suspend(struct device *dev) 2177 2176 { 2178 2177 struct iio_dev *indio_dev = dev_get_drvdata(dev); ··· 2198 2199 } 2199 2200 2200 2201 for (unsigned int i = 0; i < ARRAY_SIZE(bmi323_ext_reg_savestate); i++) { 2201 - ret = bmi323_read_ext_reg(data, bmi323_reg_savestate[i], 2202 - &savestate->reg_settings[i]); 2202 + ret = bmi323_read_ext_reg(data, bmi323_ext_reg_savestate[i], 2203 + &savestate->ext_reg_settings[i]); 2203 2204 if (ret) { 2204 2205 dev_err(data->dev, 2205 2206 "Error reading bmi323 external reg 0x%x: %d\n", 2206 - bmi323_reg_savestate[i], ret); 2207 + bmi323_ext_reg_savestate[i], ret); 2207 2208 return ret; 2208 2209 } 2209 2210 } ··· 2231 2232 * after being reset in the lower power state by runtime-pm. 2232 2233 */ 2233 2234 ret = bmi323_init(data); 2234 - if (!ret) 2235 + if (ret) { 2236 + dev_err(data->dev, "Device power-on and init failed: %d", ret); 2235 2237 return ret; 2238 + } 2236 2239 2237 2240 /* Register must be cleared before changing an active config */ 2238 2241 ret = regmap_write(data->regmap, BMI323_FEAT_IO0_REG, 0); ··· 2244 2243 } 2245 2244 2246 2245 for (unsigned int i = 0; i < ARRAY_SIZE(bmi323_ext_reg_savestate); i++) { 2247 - ret = bmi323_write_ext_reg(data, bmi323_reg_savestate[i], 2248 - savestate->reg_settings[i]); 2246 + ret = bmi323_write_ext_reg(data, bmi323_ext_reg_savestate[i], 2247 + savestate->ext_reg_settings[i]); 2249 2248 if (ret) { 2250 2249 dev_err(data->dev, 2251 2250 "Error writing bmi323 external reg 0x%x: %d\n", 2252 - bmi323_reg_savestate[i], ret); 2251 + bmi323_ext_reg_savestate[i], ret); 2253 2252 return ret; 2254 2253 } 2255 2254 } ··· 2294 2293 return iio_device_resume_triggering(indio_dev); 2295 2294 } 2296 2295 2297 - #endif 2298 - 2299 2296 const struct dev_pm_ops bmi323_core_pm_ops = { 2300 - SET_RUNTIME_PM_OPS(bmi323_core_runtime_suspend, 
2301 - bmi323_core_runtime_resume, NULL) 2297 + RUNTIME_PM_OPS(bmi323_core_runtime_suspend, 2298 + bmi323_core_runtime_resume, NULL) 2302 2299 }; 2303 2300 EXPORT_SYMBOL_NS_GPL(bmi323_core_pm_ops, IIO_BMI323); 2304 2301
+2
drivers/iio/light/Kconfig
··· 335 335 depends on I2C 336 336 select REGMAP_I2C 337 337 select IIO_GTS_HELPER 338 + select IIO_BUFFER 339 + select IIO_TRIGGERED_BUFFER 338 340 help 339 341 Enable support for the ROHM BU27008 color sensor. 340 342 The ROHM BU27008 is a sensor with 5 photodiodes (red, green,
+4
drivers/iio/light/opt3001.c
··· 139 139 .val2 = 400000, 140 140 }, 141 141 { 142 + .val = 41932, 143 + .val2 = 800000, 144 + }, 145 + { 142 146 .val = 83865, 143 147 .val2 = 600000, 144 148 },
+2 -3
drivers/iio/light/veml6030.c
··· 99 99 static ssize_t in_illuminance_period_available_show(struct device *dev, 100 100 struct device_attribute *attr, char *buf) 101 101 { 102 + struct veml6030_data *data = iio_priv(dev_to_iio_dev(dev)); 102 103 int ret, reg, x; 103 - struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev)); 104 - struct veml6030_data *data = iio_priv(indio_dev); 105 104 106 105 ret = regmap_read(data->regmap, VEML6030_REG_ALS_CONF, &reg); 107 106 if (ret) { ··· 779 780 780 781 /* Cache currently active measurement parameters */ 781 782 data->cur_gain = 3; 782 - data->cur_resolution = 4608; 783 + data->cur_resolution = 5376; 783 784 data->cur_integration_time = 3; 784 785 785 786 return ret;
+2
drivers/iio/magnetometer/Kconfig
··· 11 11 depends on I2C 12 12 depends on OF 13 13 select REGMAP_I2C 14 + select IIO_BUFFER 15 + select IIO_TRIGGERED_BUFFER 14 16 help 15 17 Say yes here to build support for Voltafield AF8133J I2C-based 16 18 3-axis magnetometer chip.
+4
drivers/iio/pressure/Kconfig
··· 19 19 config ROHM_BM1390 20 20 tristate "ROHM BM1390GLV-Z pressure sensor driver" 21 21 depends on I2C 22 + select REGMAP_I2C 23 + select IIO_BUFFER 24 + select IIO_TRIGGERED_BUFFER 22 25 help 23 26 Support for the ROHM BM1390 pressure sensor. The BM1390GLV-Z 24 27 can measure pressures ranging from 300 hPa to 1300 hPa with ··· 256 253 config SDP500 257 254 tristate "Sensirion SDP500 differential pressure sensor I2C driver" 258 255 depends on I2C 256 + select CRC8 259 257 help 260 258 Say Y here to build support for Sensirion SDP500 differential pressure 261 259 sensor I2C driver.
+2
drivers/iio/proximity/Kconfig
··· 86 86 config MB1232 87 87 tristate "MaxSonar I2CXL family ultrasonic sensors" 88 88 depends on I2C 89 + select IIO_BUFFER 90 + select IIO_TRIGGERED_BUFFER 89 91 help 90 92 Say Y to build a driver for the ultrasonic sensors I2CXL of 91 93 MaxBotix which have an i2c interface. It can be used to measure
+3
drivers/iio/resolver/Kconfig
··· 31 31 depends on SPI 32 32 depends on COMMON_CLK 33 33 depends on GPIOLIB || COMPILE_TEST 34 + select REGMAP 35 + select IIO_BUFFER 36 + select IIO_TRIGGERED_BUFFER 34 37 help 35 38 Say yes here to build support for Analog Devices spi resolver 36 39 to digital converters, ad2s1210, provides direct access via sysfs.
+3
drivers/input/joystick/xpad.c
··· 218 218 { 0x0c12, 0x8810, "Zeroplus Xbox Controller", 0, XTYPE_XBOX }, 219 219 { 0x0c12, 0x9902, "HAMA VibraX - *FAULTY HARDWARE*", 0, XTYPE_XBOX }, 220 220 { 0x0d2f, 0x0002, "Andamiro Pump It Up pad", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX }, 221 + { 0x0db0, 0x1901, "Micro Star International Xbox360 Controller for Windows", 0, XTYPE_XBOX360 }, 221 222 { 0x0e4c, 0x1097, "Radica Gamester Controller", 0, XTYPE_XBOX }, 222 223 { 0x0e4c, 0x1103, "Radica Gamester Reflex", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX }, 223 224 { 0x0e4c, 0x2390, "Radica Games Jtech Controller", 0, XTYPE_XBOX }, ··· 374 373 { 0x294b, 0x3404, "Snakebyte GAMEPAD RGB X", 0, XTYPE_XBOXONE }, 375 374 { 0x2dc8, 0x2000, "8BitDo Pro 2 Wired Controller fox Xbox", 0, XTYPE_XBOXONE }, 376 375 { 0x2dc8, 0x3106, "8BitDo Pro 2 Wired Controller", 0, XTYPE_XBOX360 }, 376 + { 0x2dc8, 0x310a, "8BitDo Ultimate 2C Wireless Controller", 0, XTYPE_XBOX360 }, 377 377 { 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE }, 378 378 { 0x31e3, 0x1100, "Wooting One", 0, XTYPE_XBOX360 }, 379 379 { 0x31e3, 0x1200, "Wooting Two", 0, XTYPE_XBOX360 }, ··· 494 492 XPAD_XBOX360_VENDOR(0x07ff), /* Mad Catz Gamepad */ 495 493 XPAD_XBOXONE_VENDOR(0x0b05), /* ASUS controllers */ 496 494 XPAD_XBOX360_VENDOR(0x0c12), /* Zeroplus X-Box 360 controllers */ 495 + XPAD_XBOX360_VENDOR(0x0db0), /* Micro Star International X-Box 360 controllers */ 497 496 XPAD_XBOX360_VENDOR(0x0e6f), /* 0x0e6f Xbox 360 controllers */ 498 497 XPAD_XBOXONE_VENDOR(0x0e6f), /* 0x0e6f Xbox One controllers */ 499 498 XPAD_XBOX360_VENDOR(0x0f0d), /* Hori controllers */
+22 -12
drivers/input/touchscreen/zinitix.c
··· 645 645 return error; 646 646 } 647 647 648 - bt541->num_keycodes = device_property_count_u32(&client->dev, "linux,keycodes"); 649 - if (bt541->num_keycodes > ARRAY_SIZE(bt541->keycodes)) { 650 - dev_err(&client->dev, "too many keys defined (%d)\n", bt541->num_keycodes); 651 - return -EINVAL; 652 - } 648 + if (device_property_present(&client->dev, "linux,keycodes")) { 649 + bt541->num_keycodes = device_property_count_u32(&client->dev, 650 + "linux,keycodes"); 651 + if (bt541->num_keycodes < 0) { 652 + dev_err(&client->dev, "Failed to count keys (%d)\n", 653 + bt541->num_keycodes); 654 + return bt541->num_keycodes; 655 + } else if (bt541->num_keycodes > ARRAY_SIZE(bt541->keycodes)) { 656 + dev_err(&client->dev, "Too many keys defined (%d)\n", 657 + bt541->num_keycodes); 658 + return -EINVAL; 659 + } 653 660 654 - error = device_property_read_u32_array(&client->dev, "linux,keycodes", 655 - bt541->keycodes, 656 - bt541->num_keycodes); 657 - if (error) { 658 - dev_err(&client->dev, 659 - "Unable to parse \"linux,keycodes\" property: %d\n", error); 660 - return error; 661 + error = device_property_read_u32_array(&client->dev, 662 + "linux,keycodes", 663 + bt541->keycodes, 664 + bt541->num_keycodes); 665 + if (error) { 666 + dev_err(&client->dev, 667 + "Unable to parse \"linux,keycodes\" property: %d\n", 668 + error); 669 + return error; 670 + } 661 671 } 662 672 663 673 error = zinitix_init_input_dev(bt541);
+2 -2
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
··· 1420 1420 cd_table->s1fmt = STRTAB_STE_0_S1FMT_LINEAR; 1421 1421 cd_table->linear.num_ents = max_contexts; 1422 1422 1423 - l1size = max_contexts * sizeof(struct arm_smmu_cd), 1423 + l1size = max_contexts * sizeof(struct arm_smmu_cd); 1424 1424 cd_table->linear.table = dma_alloc_coherent(smmu->dev, l1size, 1425 1425 &cd_table->cdtab_dma, 1426 1426 GFP_KERNEL); ··· 3625 3625 u32 l1size; 3626 3626 struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg; 3627 3627 unsigned int last_sid_idx = 3628 - arm_smmu_strtab_l1_idx((1 << smmu->sid_bits) - 1); 3628 + arm_smmu_strtab_l1_idx((1ULL << smmu->sid_bits) - 1); 3629 3629 3630 3630 /* Calculate the L1 size, capped to the SIDSIZE. */ 3631 3631 cfg->l2.num_l1_ents = min(last_sid_idx + 1, STRTAB_MAX_L1_ENTRIES);
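The second arm-smmu-v3 hunk above is an integer-width fix: with sid_bits as large as 32, `1 << smmu->sid_bits` shifts a 32-bit int by its full width, which is undefined behaviour in C, while the `1ULL` literal makes the shift 64-bit and well-defined. The corrected form in isolation:

```c
#include <assert.h>
#include <stdint.h>

/* Highest stream ID expressible in sid_bits bits. Using 1ULL keeps the
 * shift well-defined for sid_bits up to 63; a plain "1 << 32" would be
 * undefined behaviour on a 32-bit int. */
static uint64_t max_sid(unsigned int sid_bits)
{
	return (1ULL << sid_bits) - 1;
}
```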
+2 -2
drivers/iommu/arm/arm-smmu/arm-smmu-impl.c
··· 130 130 131 131 /* 132 132 * Disable MMU-500's not-particularly-beneficial next-page 133 - * prefetcher for the sake of errata #841119 and #826419. 133 + * prefetcher for the sake of at least 5 known errata. 134 134 */ 135 135 for (i = 0; i < smmu->num_context_banks; ++i) { 136 136 reg = arm_smmu_cb_read(smmu, i, ARM_SMMU_CB_ACTLR); ··· 138 138 arm_smmu_cb_write(smmu, i, ARM_SMMU_CB_ACTLR, reg); 139 139 reg = arm_smmu_cb_read(smmu, i, ARM_SMMU_CB_ACTLR); 140 140 if (reg & ARM_MMU500_ACTLR_CPRE) 141 - dev_warn_once(smmu->dev, "Failed to disable prefetcher [errata #841119 and #826419], check ACR.CACHE_LOCK\n"); 141 + dev_warn_once(smmu->dev, "Failed to disable prefetcher for errata workarounds, check SACR.CACHE_LOCK\n"); 142 142 } 143 143 144 144 return 0;
+3 -1
drivers/iommu/intel/iommu.c
··· 3340 3340 */ 3341 3341 static void domain_context_clear(struct device_domain_info *info) 3342 3342 { 3343 - if (!dev_is_pci(info->dev)) 3343 + if (!dev_is_pci(info->dev)) { 3344 3344 domain_context_clear_one(info, info->bus, info->devfn); 3345 + return; 3346 + } 3345 3347 3346 3348 pci_for_each_dma_alias(to_pci_dev(info->dev), 3347 3349 &domain_context_clear_one_cb, info);
-7
drivers/irqchip/Kconfig
··· 45 45 select IRQ_MSI_LIB 46 46 default ARM_GIC_V3 47 47 48 - config ARM_GIC_V3_ITS_PCI 49 - bool 50 - depends on ARM_GIC_V3_ITS 51 - depends on PCI 52 - depends on PCI_MSI 53 - default ARM_GIC_V3_ITS 54 - 55 48 config ARM_GIC_V3_ITS_FSL_MC 56 49 bool 57 50 depends on ARM_GIC_V3_ITS
+12 -6
drivers/irqchip/irq-gic-v3-its.c
··· 797 797 its_encode_valid(cmd, desc->its_vmapp_cmd.valid); 798 798 799 799 if (!desc->its_vmapp_cmd.valid) { 800 + alloc = !atomic_dec_return(&desc->its_vmapp_cmd.vpe->vmapp_count); 800 801 if (is_v4_1(its)) { 801 - alloc = !atomic_dec_return(&desc->its_vmapp_cmd.vpe->vmapp_count); 802 802 its_encode_alloc(cmd, alloc); 803 803 /* 804 804 * Unmapping a VPE is self-synchronizing on GICv4.1, ··· 817 817 its_encode_vpt_addr(cmd, vpt_addr); 818 818 its_encode_vpt_size(cmd, LPI_NRBITS - 1); 819 819 820 + alloc = !atomic_fetch_inc(&desc->its_vmapp_cmd.vpe->vmapp_count); 821 + 820 822 if (!is_v4_1(its)) 821 823 goto out; 822 824 823 825 vconf_addr = virt_to_phys(page_address(desc->its_vmapp_cmd.vpe->its_vm->vprop_page)); 824 - 825 - alloc = !atomic_fetch_inc(&desc->its_vmapp_cmd.vpe->vmapp_count); 826 826 827 827 its_encode_alloc(cmd, alloc); 828 828 ··· 3807 3807 unsigned long flags; 3808 3808 3809 3809 /* 3810 + * Check if we're racing against a VPE being destroyed, for 3811 + * which we don't want to allow a VMOVP. 3812 + */ 3813 + if (!atomic_read(&vpe->vmapp_count)) 3814 + return -EINVAL; 3815 + 3816 + /* 3810 3817 * Changing affinity is mega expensive, so let's be as lazy as 3811 3818 * we can and only do it if we really have to. Also, if mapped 3812 3819 * into the proxy device, we need to move the doorbell ··· 4470 4463 raw_spin_lock_init(&vpe->vpe_lock); 4471 4464 vpe->vpe_id = vpe_id; 4472 4465 vpe->vpt_page = vpt_page; 4473 - if (gic_rdists->has_rvpeid) 4474 - atomic_set(&vpe->vmapp_count, 0); 4475 - else 4466 + atomic_set(&vpe->vmapp_count, 0); 4467 + if (!gic_rdists->has_rvpeid) 4476 4468 vpe->vpe_proxy_event = -1; 4477 4469 4478 4470 return 0;
+8 -2
drivers/irqchip/irq-mscc-ocelot.c
··· 37 37 .reg_off_ena_clr = 0x1c, 38 38 .reg_off_ena_set = 0x20, 39 39 .reg_off_ident = 0x38, 40 - .reg_off_trigger = 0x5c, 40 + .reg_off_trigger = 0x4, 41 41 .n_irq = 24, 42 42 }; 43 43 ··· 70 70 .reg_off_ena_clr = 0x1c, 71 71 .reg_off_ena_set = 0x20, 72 72 .reg_off_ident = 0x38, 73 - .reg_off_trigger = 0x5c, 73 + .reg_off_trigger = 0x4, 74 74 .n_irq = 29, 75 75 }; 76 76 ··· 84 84 u32 val; 85 85 86 86 irq_gc_lock(gc); 87 + /* 88 + * Clear sticky bits for edge mode interrupts. 89 + * Serval has only one trigger register replication, but the adjacent 90 + * register is always read as zero, so there's no need to handle this 91 + * case separately. 92 + */ 87 93 val = irq_reg_readl(gc, ICPU_CFG_INTR_INTR_TRIGGER(p, 0)) | 88 94 irq_reg_readl(gc, ICPU_CFG_INTR_INTR_TRIGGER(p, 1)); 89 95 if (!(val & mask))
+14 -2
drivers/irqchip/irq-renesas-rzg2l.c
··· 8 8 */ 9 9 10 10 #include <linux/bitfield.h> 11 + #include <linux/cleanup.h> 11 12 #include <linux/clk.h> 12 13 #include <linux/err.h> 13 14 #include <linux/io.h> ··· 531 530 static int rzg2l_irqc_common_init(struct device_node *node, struct device_node *parent, 532 531 const struct irq_chip *irq_chip) 533 532 { 533 + struct platform_device *pdev = of_find_device_by_node(node); 534 + struct device *dev __free(put_device) = pdev ? &pdev->dev : NULL; 534 535 struct irq_domain *irq_domain, *parent_domain; 535 - struct platform_device *pdev; 536 536 struct reset_control *resetn; 537 537 int ret; 538 538 539 - pdev = of_find_device_by_node(node); 540 539 if (!pdev) 541 540 return -ENODEV; 542 541 ··· 591 590 } 592 591 593 592 register_syscore_ops(&rzg2l_irqc_syscore_ops); 593 + 594 + /* 595 + * Prevent the cleanup function from invoking put_device by assigning 596 + * NULL to dev. 597 + * 598 + * make coccicheck will complain about missing put_device calls, but 599 + * those are false positives, as dev will be automatically "put" via 600 + * __free_put_device on the failing path. 601 + * On the successful path we don't actually want to "put" dev. 602 + */ 603 + dev = NULL; 594 604 595 605 return 0; 596 606
+1 -1
drivers/irqchip/irq-riscv-imsic-platform.c
··· 341 341 imsic->fwnode, global->hart_index_bits, global->guest_index_bits); 342 342 pr_info("%pfwP: group-index-bits: %d, group-index-shift: %d\n", 343 343 imsic->fwnode, global->group_index_bits, global->group_index_shift); 344 - pr_info("%pfwP: per-CPU IDs %d at base PPN %pa\n", 344 + pr_info("%pfwP: per-CPU IDs %d at base address %pa\n", 345 345 imsic->fwnode, global->nr_ids, &global->base_addr); 346 346 pr_info("%pfwP: total %d interrupts available\n", 347 347 imsic->fwnode, num_possible_cpus() * (global->nr_ids - 1));
+18 -1
drivers/irqchip/irq-riscv-intc.c
··· 265 265 }; 266 266 267 267 static u32 nr_rintc; 268 - static struct rintc_data *rintc_acpi_data[NR_CPUS]; 268 + static struct rintc_data **rintc_acpi_data; 269 269 270 270 #define for_each_matching_plic(_plic_id) \ 271 271 unsigned int _plic; \ ··· 329 329 return 0; 330 330 } 331 331 332 + static int __init riscv_intc_acpi_match(union acpi_subtable_headers *header, 333 + const unsigned long end) 334 + { 335 + return 0; 336 + } 337 + 332 338 static int __init riscv_intc_acpi_init(union acpi_subtable_headers *header, 333 339 const unsigned long end) 334 340 { 335 341 struct acpi_madt_rintc *rintc; 336 342 struct fwnode_handle *fn; 343 + int count; 337 344 int rc; 345 + 346 + if (!rintc_acpi_data) { 347 + count = acpi_table_parse_madt(ACPI_MADT_TYPE_RINTC, riscv_intc_acpi_match, 0); 348 + if (count <= 0) 349 + return -EINVAL; 350 + 351 + rintc_acpi_data = kcalloc(count, sizeof(*rintc_acpi_data), GFP_KERNEL); 352 + if (!rintc_acpi_data) 353 + return -ENOMEM; 354 + } 338 355 339 356 rintc = (struct acpi_madt_rintc *)header; 340 357 rintc_acpi_data[nr_rintc] = kzalloc(sizeof(*rintc_acpi_data[0]), GFP_KERNEL);
+17 -12
drivers/irqchip/irq-sifive-plic.c
··· 126 126 } 127 127 } 128 128 129 - static void plic_irq_enable(struct irq_data *d) 130 - { 131 - plic_irq_toggle(irq_data_get_effective_affinity_mask(d), d, 1); 132 - } 133 - 134 - static void plic_irq_disable(struct irq_data *d) 135 - { 136 - plic_irq_toggle(irq_data_get_effective_affinity_mask(d), d, 0); 137 - } 138 - 139 129 static void plic_irq_unmask(struct irq_data *d) 140 130 { 141 131 struct plic_priv *priv = irq_data_get_irq_chip_data(d); ··· 138 148 struct plic_priv *priv = irq_data_get_irq_chip_data(d); 139 149 140 150 writel(0, priv->regs + PRIORITY_BASE + d->hwirq * PRIORITY_PER_ID); 151 + } 152 + 153 + static void plic_irq_enable(struct irq_data *d) 154 + { 155 + plic_irq_toggle(irq_data_get_effective_affinity_mask(d), d, 1); 156 + plic_irq_unmask(d); 157 + } 158 + 159 + static void plic_irq_disable(struct irq_data *d) 160 + { 161 + plic_irq_toggle(irq_data_get_effective_affinity_mask(d), d, 0); 141 162 } 142 163 143 164 static void plic_irq_eoi(struct irq_data *d) ··· 627 626 628 627 handler->enable_save = kcalloc(DIV_ROUND_UP(nr_irqs, 32), 629 628 sizeof(*handler->enable_save), GFP_KERNEL); 630 - if (!handler->enable_save) 629 + if (!handler->enable_save) { 630 + error = -ENOMEM; 631 631 goto fail_cleanup_contexts; 632 + } 632 633 done: 633 634 for (hwirq = 1; hwirq <= nr_irqs; hwirq++) { 634 635 plic_toggle(handler, hwirq, 0); ··· 642 639 643 640 priv->irqdomain = irq_domain_create_linear(fwnode, nr_irqs + 1, 644 641 &plic_irqdomain_ops, priv); 645 - if (WARN_ON(!priv->irqdomain)) 642 + if (WARN_ON(!priv->irqdomain)) { 643 + error = -ENOMEM; 646 644 goto fail_cleanup_contexts; 645 + } 647 646 648 647 /* 649 648 * We can have multiple PLIC instances so setup global state
+2 -1
drivers/misc/cardreader/Kconfig
··· 16 16 select MFD_CORE 17 17 help 18 18 This supports for Realtek PCI-Express card reader including rts5209, 19 - rts5227, rts522A, rts5229, rts5249, rts524A, rts525A, rtl8411, rts5260. 19 + rts5227, rts5228, rts522A, rts5229, rts5249, rts524A, rts525A, rtl8411, 20 + rts5260, rts5261, rts5264. 20 21 Realtek card readers support access to many types of memory cards, 21 22 such as Memory Stick, Memory Stick Pro, Secure Digital and 22 23 MultiMediaCard.
+2
drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_otpe2p.c
··· 364 364 if (is_eeprom_responsive(priv)) { 365 365 priv->nvmem_config_eeprom.type = NVMEM_TYPE_EEPROM; 366 366 priv->nvmem_config_eeprom.name = EEPROM_NAME; 367 + priv->nvmem_config_eeprom.id = NVMEM_DEVID_AUTO; 367 368 priv->nvmem_config_eeprom.dev = &aux_dev->dev; 368 369 priv->nvmem_config_eeprom.owner = THIS_MODULE; 369 370 priv->nvmem_config_eeprom.reg_read = pci1xxxx_eeprom_read; ··· 384 383 385 384 priv->nvmem_config_otp.type = NVMEM_TYPE_OTP; 386 385 priv->nvmem_config_otp.name = OTP_NAME; 386 + priv->nvmem_config_otp.id = NVMEM_DEVID_AUTO; 387 387 priv->nvmem_config_otp.dev = &aux_dev->dev; 388 388 priv->nvmem_config_otp.owner = THIS_MODULE; 389 389 priv->nvmem_config_otp.reg_read = pci1xxxx_otp_read;
+11 -10
drivers/net/dsa/microchip/ksz_common.c
··· 2733 2733 return MICREL_KSZ8_P1_ERRATA; 2734 2734 break; 2735 2735 case KSZ8567_CHIP_ID: 2736 + /* KSZ8567R Errata DS80000752C Module 4 */ 2737 + case KSZ8765_CHIP_ID: 2738 + case KSZ8794_CHIP_ID: 2739 + case KSZ8795_CHIP_ID: 2740 + /* KSZ879x/KSZ877x/KSZ876x Errata DS80000687C Module 2 */ 2736 2741 case KSZ9477_CHIP_ID: 2742 + /* KSZ9477S Errata DS80000754A Module 4 */ 2737 2743 case KSZ9567_CHIP_ID: 2744 + /* KSZ9567S Errata DS80000756A Module 4 */ 2738 2745 case KSZ9896_CHIP_ID: 2746 + /* KSZ9896C Errata DS80000757A Module 3 */ 2739 2747 case KSZ9897_CHIP_ID: 2740 - /* KSZ9477 Errata DS80000754C 2741 - * 2742 - * Module 4: Energy Efficient Ethernet (EEE) feature select must 2743 - * be manually disabled 2748 + /* KSZ9897R Errata DS80000758C Module 4 */ 2749 + /* Energy Efficient Ethernet (EEE) feature select must be manually disabled 2744 2750 * The EEE feature is enabled by default, but it is not fully 2745 2751 * operational. It must be manually disabled through register 2746 2752 * controls. If not disabled, the PHY ports can auto-negotiate 2747 2753 * to enable EEE, and this feature can cause link drops when 2748 2754 * linked to another device supporting EEE. 2749 2755 * 2750 - * The same item appears in the errata for the KSZ9567, KSZ9896, 2751 - * and KSZ9897. 2752 - * 2753 - * A similar item appears in the errata for the KSZ8567, but 2754 - * provides an alternative workaround. For now, use the simple 2755 - * workaround of disabling the EEE feature for this device too. 2756 + * The same item appears in the errata for all switches above. 2756 2757 */ 2757 2758 return MICREL_NO_EEE; 2758 2759 }
+2 -4
drivers/net/dsa/mv88e6xxx/chip.h
··· 208 208 struct mv88e6xxx_avb_ops; 209 209 struct mv88e6xxx_ptp_ops; 210 210 struct mv88e6xxx_pcs_ops; 211 + struct mv88e6xxx_cc_coeffs; 211 212 212 213 struct mv88e6xxx_irq { 213 214 u16 masked; ··· 417 416 struct cyclecounter tstamp_cc; 418 417 struct timecounter tstamp_tc; 419 418 struct delayed_work overflow_work; 419 + const struct mv88e6xxx_cc_coeffs *cc_coeffs; 420 420 421 421 struct ptp_clock *ptp_clock; 422 422 struct ptp_clock_info ptp_clock_info; ··· 747 745 int arr1_sts_reg; 748 746 int dep_sts_reg; 749 747 u32 rx_filters; 750 - u32 cc_shift; 751 - u32 cc_mult; 752 - u32 cc_mult_num; 753 - u32 cc_mult_dem; 754 748 }; 755 749 756 750 struct mv88e6xxx_pcs_ops {
+1
drivers/net/dsa/mv88e6xxx/port.c
··· 1714 1714 ptr = shift / 8; 1715 1715 shift %= 8; 1716 1716 mask >>= ptr * 8; 1717 + ptr <<= 8; 1717 1718 1718 1719 err = mv88e6393x_port_policy_read(chip, port, ptr, &reg); 1719 1720 if (err)
+75 -33
drivers/net/dsa/mv88e6xxx/ptp.c
··· 18 18 19 19 #define MV88E6XXX_MAX_ADJ_PPB 1000000 20 20 21 + struct mv88e6xxx_cc_coeffs { 22 + u32 cc_shift; 23 + u32 cc_mult; 24 + u32 cc_mult_num; 25 + u32 cc_mult_dem; 26 + }; 27 + 21 28 /* Family MV88E6250: 22 29 * Raw timestamps are in units of 10-ns clock periods. 23 30 * ··· 32 25 * simplifies to 33 26 * clkadj = scaled_ppm * 2^7 / 5^5 34 27 */ 35 - #define MV88E6250_CC_SHIFT 28 36 - #define MV88E6250_CC_MULT (10 << MV88E6250_CC_SHIFT) 37 - #define MV88E6250_CC_MULT_NUM (1 << 7) 38 - #define MV88E6250_CC_MULT_DEM 3125ULL 28 + #define MV88E6XXX_CC_10NS_SHIFT 28 29 + static const struct mv88e6xxx_cc_coeffs mv88e6xxx_cc_10ns_coeffs = { 30 + .cc_shift = MV88E6XXX_CC_10NS_SHIFT, 31 + .cc_mult = 10 << MV88E6XXX_CC_10NS_SHIFT, 32 + .cc_mult_num = 1 << 7, 33 + .cc_mult_dem = 3125ULL, 34 + }; 39 35 40 - /* Other families: 36 + /* Other families except MV88E6393X in internal clock mode: 41 37 * Raw timestamps are in units of 8-ns clock periods. 42 38 * 43 39 * clkadj = scaled_ppm * 8*2^28 / (10^6 * 2^16) 44 40 * simplifies to 45 41 * clkadj = scaled_ppm * 2^9 / 5^6 46 42 */ 47 - #define MV88E6XXX_CC_SHIFT 28 48 - #define MV88E6XXX_CC_MULT (8 << MV88E6XXX_CC_SHIFT) 49 - #define MV88E6XXX_CC_MULT_NUM (1 << 9) 50 - #define MV88E6XXX_CC_MULT_DEM 15625ULL 43 + #define MV88E6XXX_CC_8NS_SHIFT 28 44 + static const struct mv88e6xxx_cc_coeffs mv88e6xxx_cc_8ns_coeffs = { 45 + .cc_shift = MV88E6XXX_CC_8NS_SHIFT, 46 + .cc_mult = 8 << MV88E6XXX_CC_8NS_SHIFT, 47 + .cc_mult_num = 1 << 9, 48 + .cc_mult_dem = 15625ULL 49 + }; 50 + 51 + /* Family MV88E6393X using internal clock: 52 + * Raw timestamps are in units of 4-ns clock periods. 
53 + * 54 + * clkadj = scaled_ppm * 4*2^28 / (10^6 * 2^16) 55 + * simplifies to 56 + * clkadj = scaled_ppm * 2^8 / 5^6 57 + */ 58 + #define MV88E6XXX_CC_4NS_SHIFT 28 59 + static const struct mv88e6xxx_cc_coeffs mv88e6xxx_cc_4ns_coeffs = { 60 + .cc_shift = MV88E6XXX_CC_4NS_SHIFT, 61 + .cc_mult = 4 << MV88E6XXX_CC_4NS_SHIFT, 62 + .cc_mult_num = 1 << 8, 63 + .cc_mult_dem = 15625ULL 64 + }; 51 65 52 66 #define TAI_EVENT_WORK_INTERVAL msecs_to_jiffies(100) 53 67 ··· 109 81 return err; 110 82 111 83 return chip->info->ops->gpio_ops->set_pctl(chip, pin, func); 84 + } 85 + 86 + static const struct mv88e6xxx_cc_coeffs * 87 + mv88e6xxx_cc_coeff_get(struct mv88e6xxx_chip *chip) 88 + { 89 + u16 period_ps; 90 + int err; 91 + 92 + err = mv88e6xxx_tai_read(chip, MV88E6XXX_TAI_CLOCK_PERIOD, &period_ps, 1); 93 + if (err) { 94 + dev_err(chip->dev, "failed to read cycle counter period: %d\n", 95 + err); 96 + return ERR_PTR(err); 97 + } 98 + 99 + switch (period_ps) { 100 + case 4000: 101 + return &mv88e6xxx_cc_4ns_coeffs; 102 + case 8000: 103 + return &mv88e6xxx_cc_8ns_coeffs; 104 + case 10000: 105 + return &mv88e6xxx_cc_10ns_coeffs; 106 + default: 107 + dev_err(chip->dev, "unexpected cycle counter period of %u ps\n", 108 + period_ps); 109 + return ERR_PTR(-ENODEV); 110 + } 112 111 } 113 112 114 113 static u64 mv88e6352_ptp_clock_read(const struct cyclecounter *cc) ··· 259 204 static int mv88e6xxx_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm) 260 205 { 261 206 struct mv88e6xxx_chip *chip = ptp_to_chip(ptp); 262 - const struct mv88e6xxx_ptp_ops *ptp_ops = chip->info->ops->ptp_ops; 263 207 int neg_adj = 0; 264 208 u32 diff, mult; 265 209 u64 adj; ··· 268 214 scaled_ppm = -scaled_ppm; 269 215 } 270 216 271 - mult = ptp_ops->cc_mult; 272 - adj = ptp_ops->cc_mult_num; 217 + mult = chip->cc_coeffs->cc_mult; 218 + adj = chip->cc_coeffs->cc_mult_num; 273 219 adj *= scaled_ppm; 274 - diff = div_u64(adj, ptp_ops->cc_mult_dem); 220 + diff = div_u64(adj, chip->cc_coeffs->cc_mult_dem); 
275 221 276 222 mv88e6xxx_reg_lock(chip); 277 223 ··· 418 364 (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) | 419 365 (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) | 420 366 (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ), 421 - .cc_shift = MV88E6XXX_CC_SHIFT, 422 - .cc_mult = MV88E6XXX_CC_MULT, 423 - .cc_mult_num = MV88E6XXX_CC_MULT_NUM, 424 - .cc_mult_dem = MV88E6XXX_CC_MULT_DEM, 425 367 }; 426 368 427 369 const struct mv88e6xxx_ptp_ops mv88e6250_ptp_ops = { ··· 441 391 (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) | 442 392 (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) | 443 393 (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ), 444 - .cc_shift = MV88E6250_CC_SHIFT, 445 - .cc_mult = MV88E6250_CC_MULT, 446 - .cc_mult_num = MV88E6250_CC_MULT_NUM, 447 - .cc_mult_dem = MV88E6250_CC_MULT_DEM, 448 394 }; 449 395 450 396 const struct mv88e6xxx_ptp_ops mv88e6352_ptp_ops = { ··· 464 418 (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) | 465 419 (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) | 466 420 (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ), 467 - .cc_shift = MV88E6XXX_CC_SHIFT, 468 - .cc_mult = MV88E6XXX_CC_MULT, 469 - .cc_mult_num = MV88E6XXX_CC_MULT_NUM, 470 - .cc_mult_dem = MV88E6XXX_CC_MULT_DEM, 471 421 }; 472 422 473 423 const struct mv88e6xxx_ptp_ops mv88e6390_ptp_ops = { ··· 488 446 (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) | 489 447 (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) | 490 448 (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ), 491 - .cc_shift = MV88E6XXX_CC_SHIFT, 492 - .cc_mult = MV88E6XXX_CC_MULT, 493 - .cc_mult_num = MV88E6XXX_CC_MULT_NUM, 494 - .cc_mult_dem = MV88E6XXX_CC_MULT_DEM, 495 449 }; 496 450 497 451 static u64 mv88e6xxx_ptp_clock_read(const struct cyclecounter *cc) ··· 500 462 return 0; 501 463 } 502 464 503 - /* With a 125MHz input clock, the 32-bit timestamp counter overflows in ~34.3 465 + /* With a 250MHz input clock, the 32-bit timestamp counter overflows in ~17.2 504 466 * seconds; this task forces periodic reads so that we don't miss any. 
505 467 */ 506 - #define MV88E6XXX_TAI_OVERFLOW_PERIOD (HZ * 16) 468 + #define MV88E6XXX_TAI_OVERFLOW_PERIOD (HZ * 8) 507 469 static void mv88e6xxx_ptp_overflow_check(struct work_struct *work) 508 470 { 509 471 struct delayed_work *dw = to_delayed_work(work); ··· 522 484 int i; 523 485 524 486 /* Set up the cycle counter */ 487 + chip->cc_coeffs = mv88e6xxx_cc_coeff_get(chip); 488 + if (IS_ERR(chip->cc_coeffs)) 489 + return PTR_ERR(chip->cc_coeffs); 490 + 525 491 memset(&chip->tstamp_cc, 0, sizeof(chip->tstamp_cc)); 526 492 chip->tstamp_cc.read = mv88e6xxx_ptp_clock_read; 527 493 chip->tstamp_cc.mask = CYCLECOUNTER_MASK(32); 528 - chip->tstamp_cc.mult = ptp_ops->cc_mult; 529 - chip->tstamp_cc.shift = ptp_ops->cc_shift; 494 + chip->tstamp_cc.mult = chip->cc_coeffs->cc_mult; 495 + chip->tstamp_cc.shift = chip->cc_coeffs->cc_shift; 530 496 531 497 timecounter_init(&chip->tstamp_tc, &chip->tstamp_cc, 532 498 ktime_to_ns(ktime_get_real()));
+14 -8
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 2254 2254 2255 2255 if (!bnxt_get_rx_ts_p5(bp, &ts, cmpl_ts)) { 2256 2256 struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; 2257 + unsigned long flags; 2257 2258 2258 - spin_lock_bh(&ptp->ptp_lock); 2259 + spin_lock_irqsave(&ptp->ptp_lock, flags); 2259 2260 ns = timecounter_cyc2time(&ptp->tc, ts); 2260 - spin_unlock_bh(&ptp->ptp_lock); 2261 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 2261 2262 memset(skb_hwtstamps(skb), 0, 2262 2263 sizeof(*skb_hwtstamps(skb))); 2263 2264 skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(ns); ··· 2758 2757 case ASYNC_EVENT_CMPL_PHC_UPDATE_EVENT_DATA1_FLAGS_PHC_RTC_UPDATE: 2759 2758 if (BNXT_PTP_USE_RTC(bp)) { 2760 2759 struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; 2760 + unsigned long flags; 2761 2761 u64 ns; 2762 2762 2763 2763 if (!ptp) 2764 2764 goto async_event_process_exit; 2765 2765 2766 - spin_lock_bh(&ptp->ptp_lock); 2766 + spin_lock_irqsave(&ptp->ptp_lock, flags); 2767 2767 bnxt_ptp_update_current_time(bp); 2768 2768 ns = (((u64)BNXT_EVENT_PHC_RTC_UPDATE(data1) << 2769 2769 BNXT_PHC_BITS) | ptp->current_time); 2770 2770 bnxt_ptp_rtc_timecounter_init(ptp, ns); 2771 - spin_unlock_bh(&ptp->ptp_lock); 2771 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 2772 2772 } 2773 2773 break; 2774 2774 } ··· 13497 13495 return; 13498 13496 13499 13497 if (ptp) { 13500 - spin_lock_bh(&ptp->ptp_lock); 13498 + unsigned long flags; 13499 + 13500 + spin_lock_irqsave(&ptp->ptp_lock, flags); 13501 13501 set_bit(BNXT_STATE_IN_FW_RESET, &bp->state); 13502 - spin_unlock_bh(&ptp->ptp_lock); 13502 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 13503 13503 } else { 13504 13504 set_bit(BNXT_STATE_IN_FW_RESET, &bp->state); 13505 13505 } ··· 13566 13562 int n = 0, tmo; 13567 13563 13568 13564 if (ptp) { 13569 - spin_lock_bh(&ptp->ptp_lock); 13565 + unsigned long flags; 13566 + 13567 + spin_lock_irqsave(&ptp->ptp_lock, flags); 13570 13568 set_bit(BNXT_STATE_IN_FW_RESET, &bp->state); 13571 - spin_unlock_bh(&ptp->ptp_lock); 13569 + 
spin_unlock_irqrestore(&ptp->ptp_lock, flags); 13572 13570 } else { 13573 13571 set_bit(BNXT_STATE_IN_FW_RESET, &bp->state); 13574 13572 }
+42 -28
drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
··· 62 62 struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg, 63 63 ptp_info); 64 64 u64 ns = timespec64_to_ns(ts); 65 + unsigned long flags; 65 66 66 67 if (BNXT_PTP_USE_RTC(ptp->bp)) 67 68 return bnxt_ptp_cfg_settime(ptp->bp, ns); 68 69 69 - spin_lock_bh(&ptp->ptp_lock); 70 + spin_lock_irqsave(&ptp->ptp_lock, flags); 70 71 timecounter_init(&ptp->tc, &ptp->cc, ns); 71 - spin_unlock_bh(&ptp->ptp_lock); 72 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 72 73 return 0; 73 74 } 74 75 ··· 101 100 static void bnxt_ptp_get_current_time(struct bnxt *bp) 102 101 { 103 102 struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; 103 + unsigned long flags; 104 104 105 105 if (!ptp) 106 106 return; 107 - spin_lock_bh(&ptp->ptp_lock); 107 + spin_lock_irqsave(&ptp->ptp_lock, flags); 108 108 WRITE_ONCE(ptp->old_time, ptp->current_time); 109 109 bnxt_refclk_read(bp, NULL, &ptp->current_time); 110 - spin_unlock_bh(&ptp->ptp_lock); 110 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 111 111 } 112 112 113 113 static int bnxt_hwrm_port_ts_query(struct bnxt *bp, u32 flags, u64 *ts, ··· 151 149 { 152 150 struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg, 153 151 ptp_info); 152 + unsigned long flags; 154 153 u64 ns, cycles; 155 154 int rc; 156 155 157 - spin_lock_bh(&ptp->ptp_lock); 156 + spin_lock_irqsave(&ptp->ptp_lock, flags); 158 157 rc = bnxt_refclk_read(ptp->bp, sts, &cycles); 159 158 if (rc) { 160 - spin_unlock_bh(&ptp->ptp_lock); 159 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 161 160 return rc; 162 161 } 163 162 ns = timecounter_cyc2time(&ptp->tc, cycles); 164 - spin_unlock_bh(&ptp->ptp_lock); 163 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 165 164 *ts = ns_to_timespec64(ns); 166 165 167 166 return 0; ··· 180 177 static int bnxt_ptp_adjphc(struct bnxt_ptp_cfg *ptp, s64 delta) 181 178 { 182 179 struct hwrm_port_mac_cfg_input *req; 180 + unsigned long flags; 183 181 int rc; 184 182 185 183 rc = hwrm_req_init(ptp->bp, req, HWRM_PORT_MAC_CFG); ··· 
194 190 if (rc) { 195 191 netdev_err(ptp->bp->dev, "ptp adjphc failed. rc = %x\n", rc); 196 192 } else { 197 - spin_lock_bh(&ptp->ptp_lock); 193 + spin_lock_irqsave(&ptp->ptp_lock, flags); 198 194 bnxt_ptp_update_current_time(ptp->bp); 199 - spin_unlock_bh(&ptp->ptp_lock); 195 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 200 196 } 201 197 202 198 return rc; ··· 206 202 { 207 203 struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg, 208 204 ptp_info); 205 + unsigned long flags; 209 206 210 207 if (BNXT_PTP_USE_RTC(ptp->bp)) 211 208 return bnxt_ptp_adjphc(ptp, delta); 212 209 213 - spin_lock_bh(&ptp->ptp_lock); 210 + spin_lock_irqsave(&ptp->ptp_lock, flags); 214 211 timecounter_adjtime(&ptp->tc, delta); 215 - spin_unlock_bh(&ptp->ptp_lock); 212 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 216 213 return 0; 217 214 } 218 215 ··· 241 236 struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg, 242 237 ptp_info); 243 238 struct bnxt *bp = ptp->bp; 239 + unsigned long flags; 244 240 245 241 if (!BNXT_MH(bp)) 246 242 return bnxt_ptp_adjfine_rtc(bp, scaled_ppm); 247 243 248 - spin_lock_bh(&ptp->ptp_lock); 244 + spin_lock_irqsave(&ptp->ptp_lock, flags); 249 245 timecounter_read(&ptp->tc); 250 246 ptp->cc.mult = adjust_by_scaled_ppm(ptp->cmult, scaled_ppm); 251 - spin_unlock_bh(&ptp->ptp_lock); 247 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 252 248 return 0; 253 249 } 254 250 ··· 257 251 { 258 252 struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; 259 253 struct ptp_clock_event event; 254 + unsigned long flags; 260 255 u64 ns, pps_ts; 261 256 262 257 pps_ts = EVENT_PPS_TS(data2, data1); 263 - spin_lock_bh(&ptp->ptp_lock); 258 + spin_lock_irqsave(&ptp->ptp_lock, flags); 264 259 ns = timecounter_cyc2time(&ptp->tc, pps_ts); 265 - spin_unlock_bh(&ptp->ptp_lock); 260 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 266 261 267 262 switch (EVENT_DATA2_PPS_EVENT_TYPE(data2)) { 268 263 case 
ASYNC_EVENT_CMPL_PPS_TIMESTAMP_EVENT_DATA2_EVENT_TYPE_INTERNAL: ··· 400 393 { 401 394 u64 cycles_now; 402 395 u64 nsec_now, nsec_delta; 396 + unsigned long flags; 403 397 int rc; 404 398 405 - spin_lock_bh(&ptp->ptp_lock); 399 + spin_lock_irqsave(&ptp->ptp_lock, flags); 406 400 rc = bnxt_refclk_read(ptp->bp, NULL, &cycles_now); 407 401 if (rc) { 408 - spin_unlock_bh(&ptp->ptp_lock); 402 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 409 403 return rc; 410 404 } 411 405 nsec_now = timecounter_cyc2time(&ptp->tc, cycles_now); 412 - spin_unlock_bh(&ptp->ptp_lock); 406 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 413 407 414 408 nsec_delta = target_ns - nsec_now; 415 409 *cycles_delta = div64_u64(nsec_delta << ptp->cc.shift, ptp->cc.mult); ··· 697 689 struct skb_shared_hwtstamps timestamp; 698 690 struct bnxt_ptp_tx_req *txts_req; 699 691 unsigned long now = jiffies; 692 + unsigned long flags; 700 693 u64 ts = 0, ns = 0; 701 694 u32 tmo = 0; 702 695 int rc; ··· 711 702 tmo, slot); 712 703 if (!rc) { 713 704 memset(&timestamp, 0, sizeof(timestamp)); 714 - spin_lock_bh(&ptp->ptp_lock); 705 + spin_lock_irqsave(&ptp->ptp_lock, flags); 715 706 ns = timecounter_cyc2time(&ptp->tc, ts); 716 - spin_unlock_bh(&ptp->ptp_lock); 707 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 717 708 timestamp.hwtstamp = ns_to_ktime(ns); 718 709 skb_tstamp_tx(txts_req->tx_skb, &timestamp); 719 710 ptp->stats.ts_pkts++; ··· 739 730 unsigned long now = jiffies; 740 731 struct bnxt *bp = ptp->bp; 741 732 u16 cons = ptp->txts_cons; 733 + unsigned long flags; 742 734 u32 num_requests; 743 735 int rc = 0; 744 736 ··· 767 757 bnxt_ptp_get_current_time(bp); 768 758 ptp->next_period = now + HZ; 769 759 if (time_after_eq(now, ptp->next_overflow_check)) { 770 - spin_lock_bh(&ptp->ptp_lock); 760 + spin_lock_irqsave(&ptp->ptp_lock, flags); 771 761 timecounter_read(&ptp->tc); 772 - spin_unlock_bh(&ptp->ptp_lock); 762 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 773 763 ptp->next_overflow_check = 
now + BNXT_PHC_OVERFLOW_PERIOD; 774 764 } 775 765 if (rc == -EAGAIN) ··· 829 819 u32 opaque = tscmp->tx_ts_cmp_opaque; 830 820 struct bnxt_tx_ring_info *txr; 831 821 struct bnxt_sw_tx_bd *tx_buf; 822 + unsigned long flags; 832 823 u64 ts, ns; 833 824 u16 cons; 834 825 ··· 844 833 le32_to_cpu(tscmp->tx_ts_cmp_flags_type), 845 834 le32_to_cpu(tscmp->tx_ts_cmp_errors_v)); 846 835 } else { 847 - spin_lock_bh(&ptp->ptp_lock); 836 + spin_lock_irqsave(&ptp->ptp_lock, flags); 848 837 ns = timecounter_cyc2time(&ptp->tc, ts); 849 - spin_unlock_bh(&ptp->ptp_lock); 838 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 850 839 timestamp.hwtstamp = ns_to_ktime(ns); 851 840 skb_tstamp_tx(tx_buf->skb, &timestamp); 852 841 } ··· 986 975 int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg) 987 976 { 988 977 struct timespec64 tsp; 978 + unsigned long flags; 989 979 u64 ns; 990 980 int rc; 991 981 ··· 1005 993 if (rc) 1006 994 return rc; 1007 995 } 1008 - spin_lock_bh(&bp->ptp_cfg->ptp_lock); 996 + spin_lock_irqsave(&bp->ptp_cfg->ptp_lock, flags); 1009 997 bnxt_ptp_rtc_timecounter_init(bp->ptp_cfg, ns); 1010 - spin_unlock_bh(&bp->ptp_cfg->ptp_lock); 998 + spin_unlock_irqrestore(&bp->ptp_cfg->ptp_lock, flags); 1011 999 1012 1000 return 0; 1013 1001 } ··· 1075 1063 atomic64_set(&ptp->stats.ts_err, 0); 1076 1064 1077 1065 if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { 1078 - spin_lock_bh(&ptp->ptp_lock); 1066 + unsigned long flags; 1067 + 1068 + spin_lock_irqsave(&ptp->ptp_lock, flags); 1079 1069 bnxt_refclk_read(bp, NULL, &ptp->current_time); 1080 1070 WRITE_ONCE(ptp->old_time, ptp->current_time); 1081 - spin_unlock_bh(&ptp->ptp_lock); 1071 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 1082 1072 ptp_schedule_worker(ptp->ptp_clock, 0); 1083 1073 } 1084 1074 ptp->txts_tmo = BNXT_PTP_DFLT_TX_TMO;
+7 -5
drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
··· 146 146 }; 147 147 148 148 #if BITS_PER_LONG == 32 149 - #define BNXT_READ_TIME64(ptp, dst, src) \ 150 - do { \ 151 - spin_lock_bh(&(ptp)->ptp_lock); \ 152 - (dst) = (src); \ 153 - spin_unlock_bh(&(ptp)->ptp_lock); \ 149 + #define BNXT_READ_TIME64(ptp, dst, src) \ 150 + do { \ 151 + unsigned long flags; \ 152 + \ 153 + spin_lock_irqsave(&(ptp)->ptp_lock, flags); \ 154 + (dst) = (src); \ 155 + spin_unlock_irqrestore(&(ptp)->ptp_lock, flags); \ 154 156 } while (0) 155 157 #else 156 158 #define BNXT_READ_TIME64(ptp, dst, src) \
+5 -5
drivers/net/ethernet/emulex/benet/be_main.c
··· 1381 1381 be_get_wrb_params_from_skb(adapter, skb, &wrb_params); 1382 1382 1383 1383 wrb_cnt = be_xmit_enqueue(adapter, txo, skb, &wrb_params); 1384 - if (unlikely(!wrb_cnt)) { 1385 - dev_kfree_skb_any(skb); 1386 - goto drop; 1387 - } 1384 + if (unlikely(!wrb_cnt)) 1385 + goto drop_skb; 1388 1386 1389 1387 /* if os2bmc is enabled and if the pkt is destined to bmc, 1390 1388 * enqueue the pkt a 2nd time with mgmt bit set. ··· 1391 1393 BE_WRB_F_SET(wrb_params.features, OS2BMC, 1); 1392 1394 wrb_cnt = be_xmit_enqueue(adapter, txo, skb, &wrb_params); 1393 1395 if (unlikely(!wrb_cnt)) 1394 - goto drop; 1396 + goto drop_skb; 1395 1397 else 1396 1398 skb_get(skb); 1397 1399 } ··· 1405 1407 be_xmit_flush(adapter, txo); 1406 1408 1407 1409 return NETDEV_TX_OK; 1410 + drop_skb: 1411 + dev_kfree_skb_any(skb); 1408 1412 drop: 1409 1413 tx_stats(txo)->tx_drv_drops++; 1410 1414 /* Flush the already enqueued tx requests */
+51 -17
drivers/net/ethernet/freescale/fman/mac.c
··· 155 155 err = -EINVAL; 156 156 goto _return_of_node_put; 157 157 } 158 + mac_dev->fman_dev = &of_dev->dev; 158 159 159 160 /* Get the FMan cell-index */ 160 161 err = of_property_read_u32(dev_node, "cell-index", &val); 161 162 if (err) { 162 163 dev_err(dev, "failed to read cell-index for %pOF\n", dev_node); 163 164 err = -EINVAL; 164 - goto _return_of_node_put; 165 + goto _return_dev_put; 165 166 } 166 167 /* cell-index 0 => FMan id 1 */ 167 168 fman_id = (u8)(val + 1); 168 169 169 - priv->fman = fman_bind(&of_dev->dev); 170 + priv->fman = fman_bind(mac_dev->fman_dev); 170 171 if (!priv->fman) { 171 172 dev_err(dev, "fman_bind(%pOF) failed\n", dev_node); 172 173 err = -ENODEV; 173 - goto _return_of_node_put; 174 + goto _return_dev_put; 174 175 } 175 176 177 + /* Two references have been taken in of_find_device_by_node() 178 + * and fman_bind(). Release one of them here. The second one 179 + * will be released in mac_remove(). 180 + */ 181 + put_device(mac_dev->fman_dev); 176 182 of_node_put(dev_node); 183 + dev_node = NULL; 177 184 178 185 /* Get the address of the memory mapped registers */ 179 186 mac_dev->res = platform_get_mem_or_io(_of_dev, 0); 180 187 if (!mac_dev->res) { 181 188 dev_err(dev, "could not get registers\n"); 182 - return -EINVAL; 189 + err = -EINVAL; 190 + goto _return_dev_put; 183 191 } 184 192 185 193 err = devm_request_resource(dev, fman_get_mem_region(priv->fman), 186 194 mac_dev->res); 187 195 if (err) { 188 196 dev_err_probe(dev, err, "could not request resource\n"); 189 - return err; 197 + goto _return_dev_put; 190 198 } 191 199 192 200 mac_dev->vaddr = devm_ioremap(dev, mac_dev->res->start, 193 201 resource_size(mac_dev->res)); 194 202 if (!mac_dev->vaddr) { 195 203 dev_err(dev, "devm_ioremap() failed\n"); 196 - return -EIO; 204 + err = -EIO; 205 + goto _return_dev_put; 197 206 } 198 207 199 - if (!of_device_is_available(mac_node)) 200 - return -ENODEV; 208 + if (!of_device_is_available(mac_node)) { 209 + err = -ENODEV; 210 + goto 
_return_dev_put; 211 + } 201 212 202 213 /* Get the cell-index */ 203 214 err = of_property_read_u32(mac_node, "cell-index", &val); 204 215 if (err) { 205 216 dev_err(dev, "failed to read cell-index for %pOF\n", mac_node); 206 - return -EINVAL; 217 + err = -EINVAL; 218 + goto _return_dev_put; 207 219 } 208 220 priv->cell_index = (u8)val; 209 221 ··· 229 217 if (unlikely(nph < 0)) { 230 218 dev_err(dev, "of_count_phandle_with_args(%pOF, fsl,fman-ports) failed\n", 231 219 mac_node); 232 - return nph; 220 + err = nph; 221 + goto _return_dev_put; 233 222 } 234 223 235 224 if (nph != ARRAY_SIZE(mac_dev->port)) { 236 225 dev_err(dev, "Not supported number of fman-ports handles of mac node %pOF from device tree\n", 237 226 mac_node); 238 - return -EINVAL; 227 + err = -EINVAL; 228 + goto _return_dev_put; 239 229 } 240 230 241 - for (i = 0; i < ARRAY_SIZE(mac_dev->port); i++) { 231 + /* PORT_NUM determines the size of the port array */ 232 + for (i = 0; i < PORT_NUM; i++) { 242 233 /* Find the port node */ 243 234 dev_node = of_parse_phandle(mac_node, "fsl,fman-ports", i); 244 235 if (!dev_node) { 245 236 dev_err(dev, "of_parse_phandle(%pOF, fsl,fman-ports) failed\n", 246 237 mac_node); 247 - return -EINVAL; 238 + err = -EINVAL; 239 + goto _return_dev_arr_put; 248 240 } 249 241 250 242 of_dev = of_find_device_by_node(dev_node); ··· 256 240 dev_err(dev, "of_find_device_by_node(%pOF) failed\n", 257 241 dev_node); 258 242 err = -EINVAL; 259 - goto _return_of_node_put; 243 + goto _return_dev_arr_put; 260 244 } 245 + mac_dev->fman_port_devs[i] = &of_dev->dev; 261 246 262 - mac_dev->port[i] = fman_port_bind(&of_dev->dev); 247 + mac_dev->port[i] = fman_port_bind(mac_dev->fman_port_devs[i]); 263 248 if (!mac_dev->port[i]) { 264 249 dev_err(dev, "dev_get_drvdata(%pOF) failed\n", 265 250 dev_node); 266 251 err = -EINVAL; 267 - goto _return_of_node_put; 252 + goto _return_dev_arr_put; 268 253 } 254 + /* Two references have been taken in of_find_device_by_node() 255 + * and 
fman_port_bind(). Release one of them here. The second 256 + * one will be released in mac_remove(). 257 + */ 258 + put_device(mac_dev->fman_port_devs[i]); 269 259 of_node_put(dev_node); 260 + dev_node = NULL; 270 261 } 271 262 272 263 /* Get the PHY connection type */ ··· 293 270 294 271 err = init(mac_dev, mac_node, &params); 295 272 if (err < 0) 296 - return err; 273 + goto _return_dev_arr_put; 297 274 298 275 if (!is_zero_ether_addr(mac_dev->addr)) 299 276 dev_info(dev, "FMan MAC address: %pM\n", mac_dev->addr); ··· 308 285 309 286 return err; 310 287 288 + _return_dev_arr_put: 289 + /* mac_dev is kzalloc'ed */ 290 + for (i = 0; i < PORT_NUM; i++) 291 + put_device(mac_dev->fman_port_devs[i]); 292 + _return_dev_put: 293 + put_device(mac_dev->fman_dev); 311 294 _return_of_node_put: 312 295 of_node_put(dev_node); 313 296 return err; ··· 322 293 static void mac_remove(struct platform_device *pdev) 323 294 { 324 295 struct mac_device *mac_dev = platform_get_drvdata(pdev); 296 + int i; 297 + 298 + for (i = 0; i < PORT_NUM; i++) 299 + put_device(mac_dev->fman_port_devs[i]); 300 + put_device(mac_dev->fman_dev); 325 301 326 302 platform_device_unregister(mac_dev->priv->eth_dev); 327 303 }
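The mac.c hunk above balances device reference counts: `of_find_device_by_node()` and `fman_bind()` each take a reference, so one is dropped immediately and the other at `mac_remove()`. A minimal user-space sketch of that pattern (all names here are stand-ins, not kernel code):

```c
#include <assert.h>

/* Illustrative refcount model of the fman/mac.c fix: two lookups each take
 * a reference, probe drops the duplicate right away, remove drops the one
 * kept for the device's lifetime. */
struct devref {
	int refcount;
};

static void get_device_sketch(struct devref *d) { d->refcount++; }
static void put_device_sketch(struct devref *d) { d->refcount--; }

static void mac_probe_sketch(struct devref *d)
{
	get_device_sketch(d);	/* of_find_device_by_node() analogue */
	get_device_sketch(d);	/* fman_bind() analogue */
	put_device_sketch(d);	/* release the duplicate immediately */
}

static void mac_remove_sketch(struct devref *d)
{
	put_device_sketch(d);	/* drop the reference kept since probe */
}

static int demo_refcount(void)
{
	struct devref d = { 1 };	/* one reference held by the DT core */

	mac_probe_sketch(&d);
	mac_remove_sketch(&d);
	return d.refcount;		/* back to the core's reference only */
}
```

The error-path labels `_return_dev_arr_put` and `_return_dev_put` in the real patch exist to unwind exactly these kept references.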
+5 -1
drivers/net/ethernet/freescale/fman/mac.h
··· 19 19 struct fman_mac; 20 20 struct mac_priv_s; 21 21 22 + #define PORT_NUM 2 22 23 struct mac_device { 23 24 void __iomem *vaddr; 24 25 struct device *dev; 25 26 struct resource *res; 26 27 u8 addr[ETH_ALEN]; 27 - struct fman_port *port[2]; 28 + struct fman_port *port[PORT_NUM]; 28 29 struct phylink *phylink; 29 30 struct phylink_config phylink_config; 30 31 phy_interface_t phy_if; ··· 51 50 52 51 struct fman_mac *fman_mac; 53 52 struct mac_priv_s *priv; 53 + 54 + struct device *fman_dev; 55 + struct device *fman_port_devs[PORT_NUM]; 54 56 }; 55 57 56 58 static inline struct mac_device
+1
drivers/net/ethernet/i825xx/sun3_82586.c
··· 1012 1012 if(skb->len > XMIT_BUFF_SIZE) 1013 1013 { 1014 1014 printk("%s: Sorry, max. framelength is %d bytes. The length of your frame is %d bytes.\n",dev->name,XMIT_BUFF_SIZE,skb->len); 1015 + dev_kfree_skb(skb); 1015 1016 return NETDEV_TX_OK; 1016 1017 } 1017 1018
+59 -23
drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
··· 337 337 } 338 338 339 339 /** 340 + * octep_oq_next_pkt() - Move to the next packet in Rx queue. 341 + * 342 + * @oq: Octeon Rx queue data structure. 343 + * @buff_info: Current packet buffer info. 344 + * @read_idx: Current packet index in the ring. 345 + * @desc_used: Current packet descriptor number. 346 + * 347 + * Free the resources associated with a packet. 348 + * Increment packet index in the ring and packet descriptor number. 349 + */ 350 + static void octep_oq_next_pkt(struct octep_oq *oq, 351 + struct octep_rx_buffer *buff_info, 352 + u32 *read_idx, u32 *desc_used) 353 + { 354 + dma_unmap_page(oq->dev, oq->desc_ring[*read_idx].buffer_ptr, 355 + PAGE_SIZE, DMA_FROM_DEVICE); 356 + buff_info->page = NULL; 357 + (*read_idx)++; 358 + (*desc_used)++; 359 + if (*read_idx == oq->max_count) 360 + *read_idx = 0; 361 + } 362 + 363 + /** 364 + * octep_oq_drop_rx() - Free the resources associated with a packet. 365 + * 366 + * @oq: Octeon Rx queue data structure. 367 + * @buff_info: Current packet buffer info. 368 + * @read_idx: Current packet index in the ring. 369 + * @desc_used: Current packet descriptor number. 370 + * 371 + */ 372 + static void octep_oq_drop_rx(struct octep_oq *oq, 373 + struct octep_rx_buffer *buff_info, 374 + u32 *read_idx, u32 *desc_used) 375 + { 376 + int data_len = buff_info->len - oq->max_single_buffer_size; 377 + 378 + while (data_len > 0) { 379 + octep_oq_next_pkt(oq, buff_info, read_idx, desc_used); 380 + data_len -= oq->buffer_size; 381 + }; 382 + } 383 + 384 + /** 340 385 * __octep_oq_process_rx() - Process hardware Rx queue and push to stack. 341 386 * 342 387 * @oct: Octeon device private data structure. 
··· 412 367 desc_used = 0; 413 368 for (pkt = 0; pkt < pkts_to_process; pkt++) { 414 369 buff_info = (struct octep_rx_buffer *)&oq->buff_info[read_idx]; 415 - dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr, 416 - PAGE_SIZE, DMA_FROM_DEVICE); 417 370 resp_hw = page_address(buff_info->page); 418 - buff_info->page = NULL; 419 371 420 372 /* Swap the length field that is in Big-Endian to CPU */ 421 373 buff_info->len = be64_to_cpu(resp_hw->length); ··· 436 394 data_offset = OCTEP_OQ_RESP_HW_SIZE; 437 395 rx_ol_flags = 0; 438 396 } 397 + 398 + octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used); 399 + 400 + skb = build_skb((void *)resp_hw, PAGE_SIZE); 401 + if (!skb) { 402 + octep_oq_drop_rx(oq, buff_info, 403 + &read_idx, &desc_used); 404 + oq->stats.alloc_failures++; 405 + continue; 406 + } 407 + skb_reserve(skb, data_offset); 408 + 439 409 rx_bytes += buff_info->len; 440 410 441 411 if (buff_info->len <= oq->max_single_buffer_size) { 442 - skb = build_skb((void *)resp_hw, PAGE_SIZE); 443 - skb_reserve(skb, data_offset); 444 412 skb_put(skb, buff_info->len); 445 - read_idx++; 446 - desc_used++; 447 - if (read_idx == oq->max_count) 448 - read_idx = 0; 449 413 } else { 450 414 struct skb_shared_info *shinfo; 451 415 u16 data_len; 452 416 453 - skb = build_skb((void *)resp_hw, PAGE_SIZE); 454 - skb_reserve(skb, data_offset); 455 417 /* Head fragment includes response header(s); 456 418 * subsequent fragments contains only data. 
457 419 */ 458 420 skb_put(skb, oq->max_single_buffer_size); 459 - read_idx++; 460 - desc_used++; 461 - if (read_idx == oq->max_count) 462 - read_idx = 0; 463 - 464 421 shinfo = skb_shinfo(skb); 465 422 data_len = buff_info->len - oq->max_single_buffer_size; 466 423 while (data_len) { 467 - dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr, 468 - PAGE_SIZE, DMA_FROM_DEVICE); 469 424 buff_info = (struct octep_rx_buffer *) 470 425 &oq->buff_info[read_idx]; 471 426 if (data_len < oq->buffer_size) { ··· 477 438 buff_info->page, 0, 478 439 buff_info->len, 479 440 buff_info->len); 480 - buff_info->page = NULL; 481 - read_idx++; 482 - desc_used++; 483 - if (read_idx == oq->max_count) 484 - read_idx = 0; 441 + 442 + octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used); 485 443 } 486 444 } 487 445
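The octep_rx.c hunk factors repeated index bookkeeping into `octep_oq_next_pkt()`. Stripped of the DMA unmap and buffer bookkeeping, the helper's core is just a wrapping ring-index advance, sketched here:

```c
#include <assert.h>

/* Sketch of the bookkeeping factored into octep_oq_next_pkt(): advance the
 * Rx ring read index with wraparound and count a consumed descriptor. */
static void next_pkt_sketch(unsigned int *read_idx, unsigned int *desc_used,
			    unsigned int max_count)
{
	(*read_idx)++;
	(*desc_used)++;
	if (*read_idx == max_count)
		*read_idx = 0;		/* wrap at the end of the ring */
}

static unsigned int demo_wraparound(void)
{
	unsigned int read_idx = 3, desc_used = 0;

	next_pkt_sketch(&read_idx, &desc_used, 4);	/* 3 wraps to 0 */
	return read_idx;
}
```

Centralizing this is what lets the new `octep_oq_drop_rx()` path release a multi-descriptor packet's buffers on `build_skb()` failure without duplicating the wrap logic.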
+3 -6
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
··· 3197 3197 { 3198 3198 struct mlxsw_sp_nexthop_group *nh_grp = nh->nhgi->nh_grp; 3199 3199 struct mlxsw_sp_nexthop_counter *nhct; 3200 - void *ptr; 3201 3200 int err; 3202 3201 3203 3202 nhct = xa_load(&nh_grp->nhgi->nexthop_counters, nh->id); ··· 3209 3210 if (IS_ERR(nhct)) 3210 3211 return nhct; 3211 3212 3212 - ptr = xa_store(&nh_grp->nhgi->nexthop_counters, nh->id, nhct, 3213 - GFP_KERNEL); 3214 - if (IS_ERR(ptr)) { 3215 - err = PTR_ERR(ptr); 3213 + err = xa_err(xa_store(&nh_grp->nhgi->nexthop_counters, nh->id, nhct, 3214 + GFP_KERNEL)); 3215 + if (err) 3216 3216 goto err_store; 3217 - } 3218 3217 3219 3218 return nhct; 3220 3219
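The mlxsw hunk replaces an `IS_ERR()` check on `xa_store()`'s return with `xa_err()`, which folds a pointer-encoded error down to 0 or a negative errno. A user-space sketch of that convention (the bit layout below only loosely mirrors the kernel's internal tagging and is an assumption):

```c
#include <assert.h>
#include <errno.h>

/* Sketch of pointer-encoded errors as used by the XArray: an error is a
 * pointer with its low bits tagged, and xa_err() unfolds it. */
static void *xa_mk_err_sketch(long err)
{
	return (void *)((err << 2) | 2);	/* tag low bits as "error" */
}

static int xa_err_sketch(const void *entry)
{
	long v = (long)entry;

	/* plain (aligned or NULL) entries decode to 0, i.e. no error */
	return (v & 3) == 2 ? (int)(v >> 2) : 0;
}
```

Wrapping the store directly in `xa_err()` is the idiomatic form; the dropped `void *ptr` temporary existed only to feed `IS_ERR()`/`PTR_ERR()`.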
+3 -1
drivers/net/ethernet/realtek/r8169_main.c
··· 4749 4749 if ((status & 0xffff) == 0xffff || !(status & tp->irq_mask)) 4750 4750 return IRQ_NONE; 4751 4751 4752 - if (unlikely(status & SYSErr)) { 4752 + /* At least RTL8168fp may unexpectedly set the SYSErr bit */ 4753 + if (unlikely(status & SYSErr && 4754 + tp->mac_version <= RTL_GIGA_MAC_VER_06)) { 4753 4755 rtl8169_pcierr_interrupt(tp->dev); 4754 4756 goto out; 4755 4757 }
+30
drivers/net/hyperv/netvsc_drv.c
··· 2798 2798 }, 2799 2799 }; 2800 2800 2801 + /* Set VF's namespace same as the synthetic NIC */ 2802 + static void netvsc_event_set_vf_ns(struct net_device *ndev) 2803 + { 2804 + struct net_device_context *ndev_ctx = netdev_priv(ndev); 2805 + struct net_device *vf_netdev; 2806 + int ret; 2807 + 2808 + vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev); 2809 + if (!vf_netdev) 2810 + return; 2811 + 2812 + if (!net_eq(dev_net(ndev), dev_net(vf_netdev))) { 2813 + ret = dev_change_net_namespace(vf_netdev, dev_net(ndev), 2814 + "eth%d"); 2815 + if (ret) 2816 + netdev_err(vf_netdev, 2817 + "Cannot move to same namespace as %s: %d\n", 2818 + ndev->name, ret); 2819 + else 2820 + netdev_info(vf_netdev, 2821 + "Moved VF to namespace with: %s\n", 2822 + ndev->name); 2823 + } 2824 + } 2825 + 2801 2826 /* 2802 2827 * On Hyper-V, every VF interface is matched with a corresponding 2803 2828 * synthetic interface. The synthetic interface is presented first ··· 2834 2809 { 2835 2810 struct net_device *event_dev = netdev_notifier_info_to_dev(ptr); 2836 2811 int ret = 0; 2812 + 2813 + if (event_dev->netdev_ops == &device_ops && event == NETDEV_REGISTER) { 2814 + netvsc_event_set_vf_ns(event_dev); 2815 + return NOTIFY_DONE; 2816 + } 2837 2817 2838 2818 ret = check_dev_is_matching_vf(event_dev); 2839 2819 if (ret != 0)
+2 -2
drivers/net/phy/dp83822.c
··· 45 45 /* Control Register 2 bits */ 46 46 #define DP83822_FX_ENABLE BIT(14) 47 47 48 - #define DP83822_HW_RESET BIT(15) 49 - #define DP83822_SW_RESET BIT(14) 48 + #define DP83822_SW_RESET BIT(15) 49 + #define DP83822_DIG_RESTART BIT(14) 50 50 51 51 /* PHY STS bits */ 52 52 #define DP83822_PHYSTS_DUPLEX BIT(2)
+1 -1
drivers/net/plip/plip.c
··· 815 815 return HS_TIMEOUT; 816 816 } 817 817 } 818 - break; 818 + fallthrough; 819 819 820 820 case PLIP_PK_LENGTH_LSB: 821 821 if (plip_send(nibble_timeout, dev,
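The plip.c one-liner converts a `break` into `fallthrough`: after sending the length MSB, the state machine is meant to continue straight into the LSB case in the same pass. A condensed illustration of the difference (state names are illustrative, not the driver's):

```c
#include <assert.h>

/* Fallthrough in a switch: entering at SEND_MSB executes both the MSB and
 * LSB steps, which a break after the first case would prevent. */
enum send_state { SEND_MSB, SEND_LSB, SEND_DONE };

static int steps_executed(enum send_state s)
{
	int steps = 0;

	switch (s) {
	case SEND_MSB:
		steps++;
		/* fallthrough */
	case SEND_LSB:
		steps++;
		/* fallthrough */
	case SEND_DONE:
		break;
	}
	return steps;
}
```

In kernel code the `fallthrough` pseudo-keyword also documents the intent to static checkers, which is why the patch uses it rather than a bare comment.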
+2 -2
drivers/net/pse-pd/pse_core.c
··· 113 113 { 114 114 int i; 115 115 116 - for (i = 0; i <= pcdev->nr_lines; i++) { 116 + for (i = 0; i < pcdev->nr_lines; i++) { 117 117 of_node_put(pcdev->pi[i].pairset[0].np); 118 118 of_node_put(pcdev->pi[i].pairset[1].np); 119 119 of_node_put(pcdev->pi[i].np); ··· 647 647 { 648 648 int i; 649 649 650 - for (i = 0; i <= pcdev->nr_lines; i++) { 650 + for (i = 0; i < pcdev->nr_lines; i++) { 651 651 if (pcdev->pi[i].np == np) 652 652 return i; 653 653 }
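The pse_core change is a classic off-by-one: with `nr_lines` entries indexed `0` to `nr_lines - 1`, a `<=` bound walks one element past the end of the `pi` array. In miniature:

```c
#include <assert.h>

/* Off-by-one loop bound: `i < nr_lines` visits exactly nr_lines entries;
 * the old `i <= nr_lines` visited one past the end of the array. */
static int entries_visited(int nr_lines)
{
	int i, visited = 0;

	for (i = 0; i < nr_lines; i++)	/* was: i <= nr_lines */
		visited++;
	return visited;
}
```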
+1
drivers/net/usb/qmi_wwan.c
··· 1426 1426 {QMI_FIXED_INTF(0x2c7c, 0x0296, 4)}, /* Quectel BG96 */ 1427 1427 {QMI_QUIRK_SET_DTR(0x2c7c, 0x030e, 4)}, /* Quectel EM05GV2 */ 1428 1428 {QMI_QUIRK_SET_DTR(0x2cb7, 0x0104, 4)}, /* Fibocom NL678 series */ 1429 + {QMI_QUIRK_SET_DTR(0x2cb7, 0x0112, 0)}, /* Fibocom FG132 */ 1429 1430 {QMI_FIXED_INTF(0x0489, 0xe0b4, 0)}, /* Foxconn T77W968 LTE */ 1430 1431 {QMI_FIXED_INTF(0x0489, 0xe0b5, 0)}, /* Foxconn T77W968 LTE with eSIM support*/ 1431 1432 {QMI_FIXED_INTF(0x2692, 0x9025, 4)}, /* Cellient MPL200 (rebranded Qualcomm 05c6:9025) */
+2 -1
drivers/net/usb/usbnet.c
··· 1767 1767 // can rename the link if it knows better. 1768 1768 if ((dev->driver_info->flags & FLAG_ETHER) != 0 && 1769 1769 ((dev->driver_info->flags & FLAG_POINTTOPOINT) == 0 || 1770 - (net->dev_addr [0] & 0x02) == 0)) 1770 + /* somebody touched it*/ 1771 + !is_zero_ether_addr(net->dev_addr))) 1771 1772 strscpy(net->name, "eth%d", sizeof(net->name)); 1772 1773 /* WLAN devices should always be named "wlan%d" */ 1773 1774 if ((dev->driver_info->flags & FLAG_WLAN) != 0)
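The usbnet hunk changes how "somebody touched the MAC address" is detected: instead of peeking at the locally-administered bit (`addr[0] & 0x02`), it checks that the address is non-zero, as `is_zero_ether_addr()` does in the kernel. A sketch of the swapped-in predicate:

```c
#include <assert.h>
#include <string.h>

/* Sketch of is_zero_ether_addr(): an address counts as "set" when any of
 * its six bytes is non-zero, regardless of the locally-administered bit. */
static int is_zero_ether_addr_sketch(const unsigned char addr[6])
{
	static const unsigned char zero[6];

	return memcmp(addr, zero, 6) == 0;
}

static int demo_predicate(void)
{
	const unsigned char unset[6] = { 0 };
	/* locally administered (bit 0x02 set) but non-zero: "touched" now */
	const unsigned char set[6] = { 0x02, 0x00, 0x00, 0x11, 0x22, 0x33 };

	return is_zero_ether_addr_sketch(unset) &&
	       !is_zero_ether_addr_sketch(set);
}
```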
+1 -1
drivers/net/virtio_net.c
··· 4155 4155 u32 desc_num[3]; 4156 4156 4157 4157 /* The actual supported stat types. */ 4158 - u32 bitmap[3]; 4158 + u64 bitmap[3]; 4159 4159 4160 4160 /* Used to calculate the reply buffer size. */ 4161 4161 u32 size[3];
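The virtio_net one-liner widens a field: the device reports a 64-bit bitmap of supported stat types, and storing it in a `u32` silently drops types 32 through 63. The truncation in miniature:

```c
#include <assert.h>
#include <stdint.h>

/* Storing a 64-bit bitmap in a 32-bit field drops the upper bits. */
static uint64_t store_in_u32(uint64_t bitmap)
{
	uint32_t truncated = (uint32_t)bitmap;	/* old field width */

	return truncated;
}

static uint64_t store_in_u64(uint64_t bitmap)
{
	return bitmap;				/* fixed field width */
}
```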
+1 -1
drivers/net/wwan/wwan_core.c
··· 1038 1038 1039 1039 static struct rtnl_link_ops wwan_rtnl_link_ops __read_mostly = { 1040 1040 .kind = "wwan", 1041 - .maxtype = __IFLA_WWAN_MAX, 1041 + .maxtype = IFLA_WWAN_MAX, 1042 1042 .alloc = wwan_rtnl_alloc, 1043 1043 .validate = wwan_rtnl_validate, 1044 1044 .newlink = wwan_rtnl_newlink,
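The wwan_core fix follows the netlink attribute convention: `__IFLA_WWAN_MAX` is a sentinel one past the last valid attribute, and a policy's `.maxtype` must be `IFLA_WWAN_MAX`, defined as the sentinel minus one. The enum values below follow the uapi header:

```c
#include <assert.h>

/* Netlink attribute enums end with a __MAX sentinel; the valid maximum is
 * the sentinel minus one, which is what .maxtype must reference. */
enum {
	IFLA_WWAN_UNSPEC,
	IFLA_WWAN_LINK_ID,
	__IFLA_WWAN_MAX
};
#define IFLA_WWAN_MAX (__IFLA_WWAN_MAX - 1)
```

Passing `__IFLA_WWAN_MAX` as `.maxtype` over-sizes the policy by one slot, which is what the one-line change corrects.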
+17 -24
drivers/nvme/host/core.c
··· 1292 1292 queue_delayed_work(nvme_wq, &ctrl->ka_work, delay); 1293 1293 } 1294 1294 1295 - static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq, 1296 - blk_status_t status) 1295 + static void nvme_keep_alive_finish(struct request *rq, 1296 + blk_status_t status, struct nvme_ctrl *ctrl) 1297 1297 { 1298 - struct nvme_ctrl *ctrl = rq->end_io_data; 1299 - unsigned long flags; 1300 - bool startka = false; 1301 1298 unsigned long rtt = jiffies - (rq->deadline - rq->timeout); 1302 1299 unsigned long delay = nvme_keep_alive_work_period(ctrl); 1300 + enum nvme_ctrl_state state = nvme_ctrl_state(ctrl); 1303 1301 1304 1302 /* 1305 1303 * Subtract off the keepalive RTT so nvme_keep_alive_work runs ··· 1311 1313 delay = 0; 1312 1314 } 1313 1315 1314 - blk_mq_free_request(rq); 1315 - 1316 1316 if (status) { 1317 1317 dev_err(ctrl->device, 1318 1318 "failed nvme_keep_alive_end_io error=%d\n", 1319 1319 status); 1320 - return RQ_END_IO_NONE; 1320 + return; 1321 1321 } 1322 1322 1323 1323 ctrl->ka_last_check_time = jiffies; 1324 1324 ctrl->comp_seen = false; 1325 - spin_lock_irqsave(&ctrl->lock, flags); 1326 - if (ctrl->state == NVME_CTRL_LIVE || 1327 - ctrl->state == NVME_CTRL_CONNECTING) 1328 - startka = true; 1329 - spin_unlock_irqrestore(&ctrl->lock, flags); 1330 - if (startka) 1325 + if (state == NVME_CTRL_LIVE || state == NVME_CTRL_CONNECTING) 1331 1326 queue_delayed_work(nvme_wq, &ctrl->ka_work, delay); 1332 - return RQ_END_IO_NONE; 1333 1327 } 1334 1328 1335 1329 static void nvme_keep_alive_work(struct work_struct *work) ··· 1330 1340 struct nvme_ctrl, ka_work); 1331 1341 bool comp_seen = ctrl->comp_seen; 1332 1342 struct request *rq; 1343 + blk_status_t status; 1333 1344 1334 1345 ctrl->ka_last_check_time = jiffies; 1335 1346 ··· 1353 1362 nvme_init_request(rq, &ctrl->ka_cmd); 1354 1363 1355 1364 rq->timeout = ctrl->kato * HZ; 1356 - rq->end_io = nvme_keep_alive_end_io; 1357 - rq->end_io_data = ctrl; 1358 - blk_execute_rq_nowait(rq, false); 1365 + 
status = blk_execute_rq(rq, false); 1366 + nvme_keep_alive_finish(rq, status, ctrl); 1367 + blk_mq_free_request(rq); 1359 1368 } 1360 1369 1361 1370 static void nvme_start_keep_alive(struct nvme_ctrl *ctrl) ··· 2449 2458 else 2450 2459 ctrl->ctrl_config = NVME_CC_CSS_NVM; 2451 2460 2452 - if (ctrl->cap & NVME_CAP_CRMS_CRWMS && ctrl->cap & NVME_CAP_CRMS_CRIMS) 2453 - ctrl->ctrl_config |= NVME_CC_CRIME; 2461 + /* 2462 + * Setting CRIME results in CSTS.RDY before the media is ready. This 2463 + * makes it possible for media related commands to return the error 2464 + * NVME_SC_ADMIN_COMMAND_MEDIA_NOT_READY. Until the driver is 2465 + * restructured to handle retries, disable CC.CRIME. 2466 + */ 2467 + ctrl->ctrl_config &= ~NVME_CC_CRIME; 2454 2468 2455 2469 ctrl->ctrl_config |= (NVME_CTRL_PAGE_SHIFT - 12) << NVME_CC_MPS_SHIFT; 2456 2470 ctrl->ctrl_config |= NVME_CC_AMS_RR | NVME_CC_SHN_NONE; ··· 2485 2489 * devices are known to get this wrong. Use the larger of the 2486 2490 * two values. 2487 2491 */ 2488 - if (ctrl->ctrl_config & NVME_CC_CRIME) 2489 - ready_timeout = NVME_CRTO_CRIMT(crto); 2490 - else 2491 - ready_timeout = NVME_CRTO_CRWMT(crto); 2492 + ready_timeout = NVME_CRTO_CRWMT(crto); 2492 2493 2493 2494 if (ready_timeout < timeout) 2494 2495 dev_warn_once(ctrl->device, "bad crto:%x cap:%llx\n",
+33 -7
drivers/nvme/host/multipath.c
··· 431 431 case NVME_CTRL_LIVE: 432 432 case NVME_CTRL_RESETTING: 433 433 case NVME_CTRL_CONNECTING: 434 - /* fallthru */ 435 434 return true; 436 435 default: 437 436 break; ··· 579 580 return ret; 580 581 } 581 582 583 + static void nvme_partition_scan_work(struct work_struct *work) 584 + { 585 + struct nvme_ns_head *head = 586 + container_of(work, struct nvme_ns_head, partition_scan_work); 587 + 588 + if (WARN_ON_ONCE(!test_and_clear_bit(GD_SUPPRESS_PART_SCAN, 589 + &head->disk->state))) 590 + return; 591 + 592 + mutex_lock(&head->disk->open_mutex); 593 + bdev_disk_changed(head->disk, false); 594 + mutex_unlock(&head->disk->open_mutex); 595 + } 596 + 582 597 static void nvme_requeue_work(struct work_struct *work) 583 598 { 584 599 struct nvme_ns_head *head = ··· 619 606 bio_list_init(&head->requeue_list); 620 607 spin_lock_init(&head->requeue_lock); 621 608 INIT_WORK(&head->requeue_work, nvme_requeue_work); 609 + INIT_WORK(&head->partition_scan_work, nvme_partition_scan_work); 622 610 623 611 /* 624 612 * Add a multipath node if the subsystems supports multiple controllers. ··· 643 629 return PTR_ERR(head->disk); 644 630 head->disk->fops = &nvme_ns_head_ops; 645 631 head->disk->private_data = head; 632 + 633 + /* 634 + * We need to suppress the partition scan from occuring within the 635 + * controller's scan_work context. If a path error occurs here, the IO 636 + * will wait until a path becomes available or all paths are torn down, 637 + * but that action also occurs within scan_work, so it would deadlock. 638 + * Defer the partion scan to a different context that does not block 639 + * scan_work. 
640 + */ 641 + set_bit(GD_SUPPRESS_PART_SCAN, &head->disk->state); 646 642 sprintf(head->disk->disk_name, "nvme%dn%d", 647 643 ctrl->subsys->instance, head->instance); 648 644 return 0; ··· 679 655 return; 680 656 } 681 657 nvme_add_ns_head_cdev(head); 658 + kblockd_schedule_work(&head->partition_scan_work); 682 659 } 683 660 684 661 mutex_lock(&head->lock); ··· 999 974 return; 1000 975 if (test_and_clear_bit(NVME_NSHEAD_DISK_LIVE, &head->flags)) { 1001 976 nvme_cdev_del(&head->cdev, &head->cdev_device); 977 + /* 978 + * requeue I/O after NVME_NSHEAD_DISK_LIVE has been cleared 979 + * to allow multipath to fail all I/O. 980 + */ 981 + synchronize_srcu(&head->srcu); 982 + kblockd_schedule_work(&head->requeue_work); 1002 983 del_gendisk(head->disk); 1003 984 } 1004 - /* 1005 - * requeue I/O after NVME_NSHEAD_DISK_LIVE has been cleared 1006 - * to allow multipath to fail all I/O. 1007 - */ 1008 - synchronize_srcu(&head->srcu); 1009 - kblockd_schedule_work(&head->requeue_work); 1010 985 } 1011 986 1012 987 void nvme_mpath_remove_disk(struct nvme_ns_head *head) ··· 1016 991 /* make sure all pending bios are cleaned up */ 1017 992 kblockd_schedule_work(&head->requeue_work); 1018 993 flush_work(&head->requeue_work); 994 + flush_work(&head->partition_scan_work); 1019 995 put_disk(head->disk); 1020 996 } 1021 997
+1
drivers/nvme/host/nvme.h
··· 494 494 struct bio_list requeue_list; 495 495 spinlock_t requeue_lock; 496 496 struct work_struct requeue_work; 497 + struct work_struct partition_scan_work; 497 498 struct mutex lock; 498 499 unsigned long flags; 499 500 #define NVME_NSHEAD_DISK_LIVE 0
+16 -3
drivers/nvme/host/pci.c
··· 2506 2506 return 1; 2507 2507 } 2508 2508 2509 - static void nvme_pci_update_nr_queues(struct nvme_dev *dev) 2509 + static bool nvme_pci_update_nr_queues(struct nvme_dev *dev) 2510 2510 { 2511 2511 if (!dev->ctrl.tagset) { 2512 2512 nvme_alloc_io_tag_set(&dev->ctrl, &dev->tagset, &nvme_mq_ops, 2513 2513 nvme_pci_nr_maps(dev), sizeof(struct nvme_iod)); 2514 - return; 2514 + return true; 2515 + } 2516 + 2517 + /* Give up if we are racing with nvme_dev_disable() */ 2518 + if (!mutex_trylock(&dev->shutdown_lock)) 2519 + return false; 2520 + 2521 + /* Check if nvme_dev_disable() has been executed already */ 2522 + if (!dev->online_queues) { 2523 + mutex_unlock(&dev->shutdown_lock); 2524 + return false; 2515 2525 } 2516 2526 2517 2527 blk_mq_update_nr_hw_queues(&dev->tagset, dev->online_queues - 1); 2518 2528 /* free previously allocated queues that are no longer usable */ 2519 2529 nvme_free_queues(dev, dev->online_queues); 2530 + mutex_unlock(&dev->shutdown_lock); 2531 + return true; 2520 2532 } 2521 2533 2522 2534 static int nvme_pci_enable(struct nvme_dev *dev) ··· 2809 2797 nvme_dbbuf_set(dev); 2810 2798 nvme_unquiesce_io_queues(&dev->ctrl); 2811 2799 nvme_wait_freeze(&dev->ctrl); 2812 - nvme_pci_update_nr_queues(dev); 2800 + if (!nvme_pci_update_nr_queues(dev)) 2801 + goto out; 2813 2802 nvme_unfreeze(&dev->ctrl); 2814 2803 } else { 2815 2804 dev_warn(dev->ctrl.device, "IO queues lost\n");
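The nvme-pci hunk guards `nvme_pci_update_nr_queues()` against racing with `nvme_dev_disable()`: try the shutdown lock instead of blocking, and bail out if it is busy or the queues are already gone. A lock-free model of that control flow (the "lock" here is a plain flag standing in for `mutex_trylock()`):

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the trylock guard: give up rather than block when the disable
 * path holds the lock or has already torn the queues down. */
static bool trylock_sketch(int *locked)
{
	if (*locked)
		return false;
	*locked = 1;
	return true;
}

static bool update_nr_queues_sketch(int *shutdown_lock, int online_queues)
{
	if (!trylock_sketch(shutdown_lock))
		return false;		/* racing with nvme_dev_disable() */
	if (online_queues == 0) {
		*shutdown_lock = 0;
		return false;		/* disable already ran */
	}
	/* ... blk_mq_update_nr_hw_queues() would run here, under the lock ... */
	*shutdown_lock = 0;
	return true;
}

static bool demo_update_paths(void)
{
	int lock_free = 0, lock_held = 1;

	return update_nr_queues_sketch(&lock_free, 4) &&	/* normal path */
	       !update_nr_queues_sketch(&lock_free, 0) &&	/* queues gone */
	       !update_nr_queues_sketch(&lock_held, 4);		/* lock busy */
}
```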
+4 -3
drivers/nvme/host/tcp.c
··· 2644 2644 2645 2645 len = nvmf_get_address(ctrl, buf, size); 2646 2646 2647 + if (!test_bit(NVME_TCP_Q_LIVE, &queue->flags)) 2648 + return len; 2649 + 2647 2650 mutex_lock(&queue->queue_lock); 2648 2651 2649 - if (!test_bit(NVME_TCP_Q_LIVE, &queue->flags)) 2650 - goto done; 2651 2652 ret = kernel_getsockname(queue->sock, (struct sockaddr *)&src_addr); 2652 2653 if (ret > 0) { 2653 2654 if (len > 0) ··· 2656 2655 len += scnprintf(buf + len, size - len, "%ssrc_addr=%pISc\n", 2657 2656 (len) ? "," : "", &src_addr); 2658 2657 } 2659 - done: 2658 + 2660 2659 mutex_unlock(&queue->queue_lock); 2661 2660 2662 2661 return len;
+13
drivers/nvme/target/loop.c
··· 265 265 { 266 266 if (!test_and_clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags)) 267 267 return; 268 + /* 269 + * It's possible that some requests might have been added 270 + * after admin queue is stopped/quiesced. So now start the 271 + * queue to flush these requests to the completion. 272 + */ 273 + nvme_unquiesce_admin_queue(&ctrl->ctrl); 274 + 268 275 nvmet_sq_destroy(&ctrl->queues[0].nvme_sq); 269 276 nvme_remove_admin_tag_set(&ctrl->ctrl); 270 277 } ··· 304 297 nvmet_sq_destroy(&ctrl->queues[i].nvme_sq); 305 298 } 306 299 ctrl->ctrl.queue_count = 1; 300 + /* 301 + * It's possible that some requests might have been added 302 + * after io queue is stopped/quiesced. So now start the 303 + * queue to flush these requests to the completion. 304 + */ 305 + nvme_unquiesce_io_queues(&ctrl->ctrl); 307 306 } 308 307 309 308 static int nvme_loop_init_io_queues(struct nvme_loop_ctrl *ctrl)
+2 -4
drivers/nvme/target/passthru.c
··· 535 535 break; 536 536 case nvme_admin_identify: 537 537 switch (req->cmd->identify.cns) { 538 - case NVME_ID_CNS_CTRL: 539 - req->execute = nvmet_passthru_execute_cmd; 540 - req->p.use_workqueue = true; 541 - return NVME_SC_SUCCESS; 542 538 case NVME_ID_CNS_CS_CTRL: 543 539 switch (req->cmd->identify.csi) { 544 540 case NVME_CSI_ZNS: ··· 543 547 return NVME_SC_SUCCESS; 544 548 } 545 549 return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR; 550 + case NVME_ID_CNS_CTRL: 546 551 case NVME_ID_CNS_NS: 552 + case NVME_ID_CNS_NS_DESC_LIST: 547 553 req->execute = nvmet_passthru_execute_cmd; 548 554 req->p.use_workqueue = true; 549 555 return NVME_SC_SUCCESS;
+27 -29
drivers/nvme/target/rdma.c
··· 39 39 40 40 #define NVMET_RDMA_BACKLOG 128 41 41 42 + #define NVMET_RDMA_DISCRETE_RSP_TAG -1 43 + 42 44 struct nvmet_rdma_srq; 43 45 44 46 struct nvmet_rdma_cmd { ··· 77 75 u32 invalidate_rkey; 78 76 79 77 struct list_head wait_list; 80 - struct list_head free_list; 78 + int tag; 81 79 }; 82 80 83 81 enum nvmet_rdma_queue_state { ··· 100 98 struct nvmet_sq nvme_sq; 101 99 102 100 struct nvmet_rdma_rsp *rsps; 103 - struct list_head free_rsps; 104 - spinlock_t rsps_lock; 101 + struct sbitmap rsp_tags; 105 102 struct nvmet_rdma_cmd *cmds; 106 103 107 104 struct work_struct release_work; ··· 173 172 static void nvmet_rdma_free_rsp(struct nvmet_rdma_device *ndev, 174 173 struct nvmet_rdma_rsp *r); 175 174 static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev, 176 - struct nvmet_rdma_rsp *r); 175 + struct nvmet_rdma_rsp *r, 176 + int tag); 177 177 178 178 static const struct nvmet_fabrics_ops nvmet_rdma_ops; 179 179 ··· 212 210 static inline struct nvmet_rdma_rsp * 213 211 nvmet_rdma_get_rsp(struct nvmet_rdma_queue *queue) 214 212 { 215 - struct nvmet_rdma_rsp *rsp; 216 - unsigned long flags; 213 + struct nvmet_rdma_rsp *rsp = NULL; 214 + int tag; 217 215 218 - spin_lock_irqsave(&queue->rsps_lock, flags); 219 - rsp = list_first_entry_or_null(&queue->free_rsps, 220 - struct nvmet_rdma_rsp, free_list); 221 - if (likely(rsp)) 222 - list_del(&rsp->free_list); 223 - spin_unlock_irqrestore(&queue->rsps_lock, flags); 216 + tag = sbitmap_get(&queue->rsp_tags); 217 + if (tag >= 0) 218 + rsp = &queue->rsps[tag]; 224 219 225 220 if (unlikely(!rsp)) { 226 221 int ret; ··· 225 226 rsp = kzalloc(sizeof(*rsp), GFP_KERNEL); 226 227 if (unlikely(!rsp)) 227 228 return NULL; 228 - ret = nvmet_rdma_alloc_rsp(queue->dev, rsp); 229 + ret = nvmet_rdma_alloc_rsp(queue->dev, rsp, 230 + NVMET_RDMA_DISCRETE_RSP_TAG); 229 231 if (unlikely(ret)) { 230 232 kfree(rsp); 231 233 return NULL; 232 234 } 233 - 234 - rsp->allocated = true; 235 235 } 236 236 237 237 return rsp; ··· 239 241 
static inline void 240 242 nvmet_rdma_put_rsp(struct nvmet_rdma_rsp *rsp) 241 243 { 242 - unsigned long flags; 243 - 244 - if (unlikely(rsp->allocated)) { 244 + if (unlikely(rsp->tag == NVMET_RDMA_DISCRETE_RSP_TAG)) { 245 245 nvmet_rdma_free_rsp(rsp->queue->dev, rsp); 246 246 kfree(rsp); 247 247 return; 248 248 } 249 249 250 - spin_lock_irqsave(&rsp->queue->rsps_lock, flags); 251 - list_add_tail(&rsp->free_list, &rsp->queue->free_rsps); 252 - spin_unlock_irqrestore(&rsp->queue->rsps_lock, flags); 250 + sbitmap_clear_bit(&rsp->queue->rsp_tags, rsp->tag); 253 251 } 254 252 255 253 static void nvmet_rdma_free_inline_pages(struct nvmet_rdma_device *ndev, ··· 398 404 } 399 405 400 406 static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev, 401 - struct nvmet_rdma_rsp *r) 407 + struct nvmet_rdma_rsp *r, int tag) 402 408 { 403 409 /* NVMe CQE / RDMA SEND */ 404 410 r->req.cqe = kmalloc(sizeof(*r->req.cqe), GFP_KERNEL); ··· 426 432 r->read_cqe.done = nvmet_rdma_read_data_done; 427 433 /* Data Out / RDMA WRITE */ 428 434 r->write_cqe.done = nvmet_rdma_write_data_done; 435 + r->tag = tag; 429 436 430 437 return 0; 431 438 ··· 449 454 { 450 455 struct nvmet_rdma_device *ndev = queue->dev; 451 456 int nr_rsps = queue->recv_queue_size * 2; 452 - int ret = -EINVAL, i; 457 + int ret = -ENOMEM, i; 458 + 459 + if (sbitmap_init_node(&queue->rsp_tags, nr_rsps, -1, GFP_KERNEL, 460 + NUMA_NO_NODE, false, true)) 461 + goto out; 453 462 454 463 queue->rsps = kcalloc(nr_rsps, sizeof(struct nvmet_rdma_rsp), 455 464 GFP_KERNEL); 456 465 if (!queue->rsps) 457 - goto out; 466 + goto out_free_sbitmap; 458 467 459 468 for (i = 0; i < nr_rsps; i++) { 460 469 struct nvmet_rdma_rsp *rsp = &queue->rsps[i]; 461 470 462 - ret = nvmet_rdma_alloc_rsp(ndev, rsp); 471 + ret = nvmet_rdma_alloc_rsp(ndev, rsp, i); 463 472 if (ret) 464 473 goto out_free; 465 - 466 - list_add_tail(&rsp->free_list, &queue->free_rsps); 467 474 } 468 475 469 476 return 0; ··· 474 477 while (--i >= 0) 475 478 
nvmet_rdma_free_rsp(ndev, &queue->rsps[i]); 476 479 kfree(queue->rsps); 480 + out_free_sbitmap: 481 + sbitmap_free(&queue->rsp_tags); 477 482 out: 478 483 return ret; 479 484 } ··· 488 489 for (i = 0; i < nr_rsps; i++) 489 490 nvmet_rdma_free_rsp(ndev, &queue->rsps[i]); 490 491 kfree(queue->rsps); 492 + sbitmap_free(&queue->rsp_tags); 491 493 } 492 494 493 495 static int nvmet_rdma_post_recv(struct nvmet_rdma_device *ndev, ··· 1447 1447 INIT_LIST_HEAD(&queue->rsp_wait_list); 1448 1448 INIT_LIST_HEAD(&queue->rsp_wr_wait_list); 1449 1449 spin_lock_init(&queue->rsp_wr_wait_lock); 1450 - INIT_LIST_HEAD(&queue->free_rsps); 1451 - spin_lock_init(&queue->rsps_lock); 1452 1450 INIT_LIST_HEAD(&queue->queue_list); 1453 1451 1454 1452 queue->idx = ida_alloc(&nvmet_rdma_queue_ida, GFP_KERNEL);
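The nvmet-rdma hunk replaces a spinlock-protected free list with sbitmap-based tags: a response is allocated by claiming the first clear bit and released by clearing it, with a discrete `kzalloc()` fallback on exhaustion (`NVMET_RDMA_DISCRETE_RSP_TAG`). A plain 64-bit word stands in for the sbitmap in this sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Bitmap tag allocator sketch: bit set = response in use. The kernel's
 * sbitmap does this scalably and without a global word; the logic is the
 * same. */
static uint64_t tag_map;

static int rsp_tag_get(void)
{
	int i;

	for (i = 0; i < 64; i++) {
		if (!(tag_map & (1ULL << i))) {
			tag_map |= 1ULL << i;
			return i;
		}
	}
	return -1;	/* pool exhausted: fall back to a discrete allocation */
}

static void rsp_tag_put(int tag)
{
	tag_map &= ~(1ULL << tag);
}

static int demo_tags(void)
{
	int a = rsp_tag_get();		/* first free tag */
	int b = rsp_tag_get();		/* next free tag */

	rsp_tag_put(a);
	return a == 0 && b == 1 && rsp_tag_get() == a;	/* a is reusable */
}
```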
+11 -11
drivers/parport/procfs.c
··· 51 51 52 52 for (dev = port->devices; dev ; dev = dev->next) { 53 53 if(dev == port->cad) { 54 - len += snprintf(buffer, sizeof(buffer), "%s\n", dev->name); 54 + len += scnprintf(buffer, sizeof(buffer), "%s\n", dev->name); 55 55 } 56 56 } 57 57 58 58 if(!len) { 59 - len += snprintf(buffer, sizeof(buffer), "%s\n", "none"); 59 + len += scnprintf(buffer, sizeof(buffer), "%s\n", "none"); 60 60 } 61 61 62 62 if (len > *lenp) ··· 87 87 } 88 88 89 89 if ((str = info->class_name) != NULL) 90 - len += snprintf (buffer + len, sizeof(buffer) - len, "CLASS:%s;\n", str); 90 + len += scnprintf (buffer + len, sizeof(buffer) - len, "CLASS:%s;\n", str); 91 91 92 92 if ((str = info->model) != NULL) 93 - len += snprintf (buffer + len, sizeof(buffer) - len, "MODEL:%s;\n", str); 93 + len += scnprintf (buffer + len, sizeof(buffer) - len, "MODEL:%s;\n", str); 94 94 95 95 if ((str = info->mfr) != NULL) 96 - len += snprintf (buffer + len, sizeof(buffer) - len, "MANUFACTURER:%s;\n", str); 96 + len += scnprintf (buffer + len, sizeof(buffer) - len, "MANUFACTURER:%s;\n", str); 97 97 98 98 if ((str = info->description) != NULL) 99 - len += snprintf (buffer + len, sizeof(buffer) - len, "DESCRIPTION:%s;\n", str); 99 + len += scnprintf (buffer + len, sizeof(buffer) - len, "DESCRIPTION:%s;\n", str); 100 100 101 101 if ((str = info->cmdset) != NULL) 102 - len += snprintf (buffer + len, sizeof(buffer) - len, "COMMAND SET:%s;\n", str); 102 + len += scnprintf (buffer + len, sizeof(buffer) - len, "COMMAND SET:%s;\n", str); 103 103 104 104 if (len > *lenp) 105 105 len = *lenp; ··· 128 128 if (write) /* permissions prevent this anyway */ 129 129 return -EACCES; 130 130 131 - len += snprintf (buffer, sizeof(buffer), "%lu\t%lu\n", port->base, port->base_hi); 131 + len += scnprintf (buffer, sizeof(buffer), "%lu\t%lu\n", port->base, port->base_hi); 132 132 133 133 if (len > *lenp) 134 134 len = *lenp; ··· 155 155 if (write) /* permissions prevent this anyway */ 156 156 return -EACCES; 157 157 158 - len += 
snprintf (buffer, sizeof(buffer), "%d\n", port->irq); 158 + len += scnprintf (buffer, sizeof(buffer), "%d\n", port->irq); 159 159 160 160 if (len > *lenp) 161 161 len = *lenp; ··· 182 182 if (write) /* permissions prevent this anyway */ 183 183 return -EACCES; 184 184 185 - len += snprintf (buffer, sizeof(buffer), "%d\n", port->dma); 185 + len += scnprintf (buffer, sizeof(buffer), "%d\n", port->dma); 186 186 187 187 if (len > *lenp) 188 188 len = *lenp; ··· 213 213 #define printmode(x) \ 214 214 do { \ 215 215 if (port->modes & PARPORT_MODE_##x) \ 216 - len += snprintf(buffer + len, sizeof(buffer) - len, "%s%s", f++ ? "," : "", #x); \ 216 + len += scnprintf(buffer + len, sizeof(buffer) - len, "%s%s", f++ ? "," : "", #x); \ 217 217 } while (0) 218 218 int f = 0; 219 219 printmode(PCSPP);
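The parport change matters because `snprintf()` returns the length the output *would* have had, so the chained `len += snprintf(buffer + len, sizeof(buffer) - len, ...)` pattern can push `len` past the buffer once truncation starts. `scnprintf()` is a kernel helper, not libc; a user-space approximation:

```c
#include <assert.h>
#include <stdio.h>

/* scnprintf() approximation: return the number of characters actually
 * stored (excluding the NUL), never the would-be length. */
static int scnprintf_sketch(char *buf, size_t size, const char *s)
{
	int ret = snprintf(buf, size, "%s", s);

	if (ret < 0)
		return 0;
	if ((size_t)ret >= size)
		return size ? (int)size - 1 : 0;	/* truncated */
	return ret;
}

static int demo_snprintf_overshoot(void)
{
	char buf[8];

	return snprintf(buf, sizeof(buf), "%s", "0123456789");
}

static int demo_scnprintf_bounded(void)
{
	char buf[8];

	return scnprintf_sketch(buf, sizeof(buf), "0123456789");
}
```

With an 8-byte buffer and a 10-character string, `snprintf()` reports 10 while only 7 characters fit; the sketch reports 7, so chained `len +=` accounting stays inside the buffer.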
+1
drivers/pinctrl/intel/Kconfig
··· 46 46 of Intel PCH pins and using them as GPIOs. Currently the following 47 47 Intel SoCs / platforms require this to be functional: 48 48 - Lunar Lake 49 + - Panther Lake 49 50 50 51 config PINCTRL_ALDERLAKE 51 52 tristate "Intel Alder Lake pinctrl and GPIO driver"
+2 -3
drivers/pinctrl/intel/pinctrl-intel-platform.c
··· 90 90 struct intel_community *community, 91 91 struct intel_platform_pins *pins) 92 92 { 93 - struct fwnode_handle *child; 94 93 struct intel_padgroup *gpps; 95 94 unsigned int group; 96 95 size_t ngpps; ··· 130 131 return -ENOMEM; 131 132 132 133 group = 0; 133 - device_for_each_child_node(dev, child) { 134 + device_for_each_child_node_scoped(dev, child) { 134 135 struct intel_padgroup *gpp = &gpps[group]; 135 136 136 137 gpp->reg_num = group; ··· 158 159 int ret; 159 160 160 161 /* Version 1.0 of the specification assumes only a single community per device node */ 161 - ncommunities = 1, 162 + ncommunities = 1; 162 163 communities = devm_kcalloc(dev, ncommunities, sizeof(*communities), GFP_KERNEL); 163 164 if (!communities) 164 165 return -ENOMEM;
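The `ncommunities = 1,` line in pinctrl-intel-platform.c compiled only because the trailing comma made the next statement the right-hand operand of the comma operator. The fix ends the statement with a semicolon. A condensed illustration of why the comma operator is easy to misread:

```c
#include <assert.h>

/* The comma operator evaluates both operands and yields the right one,
 * which can silently differ from what a stray comma-for-semicolon typo
 * intended. */
static int comma_operator_value(void)
{
	int a;

	a = (1, 2);	/* yields the right operand: a == 2, not 1 */
	return a;
}
```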
+1 -1
drivers/pinctrl/nuvoton/pinctrl-ma35.c
··· 218 218 } 219 219 220 220 map_num += grp->npins; 221 - new_map = devm_kcalloc(pctldev->dev, map_num, sizeof(*new_map), GFP_KERNEL); 221 + new_map = kcalloc(map_num, sizeof(*new_map), GFP_KERNEL); 222 222 if (!new_map) 223 223 return -ENOMEM; 224 224
+3
drivers/pinctrl/pinctrl-apple-gpio.c
··· 474 474 for (i = 0; i < npins; i++) { 475 475 pins[i].number = i; 476 476 pins[i].name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "PIN%u", i); 477 + if (!pins[i].name) 478 + return -ENOMEM; 479 + 477 480 pins[i].drv_data = pctl; 478 481 pin_names[i] = pins[i].name; 479 482 pin_nums[i] = i;
+4 -2
drivers/pinctrl/pinctrl-aw9523.c
··· 987 987 lockdep_set_subclass(&awi->i2c_lock, i2c_adapter_depth(client->adapter)); 988 988 989 989 pdesc = devm_kzalloc(dev, sizeof(*pdesc), GFP_KERNEL); 990 - if (!pdesc) 991 - return -ENOMEM; 990 + if (!pdesc) { 991 + ret = -ENOMEM; 992 + goto err_disable_vregs; 993 + } 992 994 993 995 ret = aw9523_hw_init(awi); 994 996 if (ret)
+4 -4
drivers/pinctrl/pinctrl-ocelot.c
··· 1955 1955 unsigned int reg = 0, irq, i; 1956 1956 unsigned long irqs; 1957 1957 1958 + chained_irq_enter(parent_chip, desc); 1959 + 1958 1960 for (i = 0; i < info->stride; i++) { 1959 1961 regmap_read(info->map, id_reg + 4 * i, &reg); 1960 1962 if (!reg) 1961 1963 continue; 1962 - 1963 - chained_irq_enter(parent_chip, desc); 1964 1964 1965 1965 irqs = reg; 1966 1966 1967 1967 for_each_set_bit(irq, &irqs, 1968 1968 min(32U, info->desc->npins - 32 * i)) 1969 1969 generic_handle_domain_irq(chip->irq.domain, irq + 32 * i); 1970 - 1971 - chained_irq_exit(parent_chip, desc); 1972 1970 } 1971 + 1972 + chained_irq_exit(parent_chip, desc); 1973 1973 } 1974 1974 1975 1975 static int ocelot_gpiochip_register(struct platform_device *pdev,
+1 -1
drivers/pinctrl/sophgo/pinctrl-cv18xx.c
··· 221 221 if (!grpnames) 222 222 return -ENOMEM; 223 223 224 - map = devm_kcalloc(dev, ngroups * 2, sizeof(*map), GFP_KERNEL); 224 + map = kcalloc(ngroups * 2, sizeof(*map), GFP_KERNEL); 225 225 if (!map) 226 226 return -ENOMEM; 227 227
+7 -2
drivers/pinctrl/stm32/pinctrl-stm32.c
··· 1374 1374 1375 1375 for (i = 0; i < npins; i++) { 1376 1376 stm32_pin = stm32_pctrl_get_desc_pin_from_gpio(pctl, bank, i); 1377 - if (stm32_pin && stm32_pin->pin.name) 1377 + if (stm32_pin && stm32_pin->pin.name) { 1378 1378 names[i] = devm_kasprintf(dev, GFP_KERNEL, "%s", stm32_pin->pin.name); 1379 - else 1379 + if (!names[i]) { 1380 + err = -ENOMEM; 1381 + goto err_clk; 1382 + } 1383 + } else { 1380 1384 names[i] = NULL; 1385 + } 1381 1386 } 1382 1387 1383 1388 bank->gpio_chip.names = (const char * const *)names;
+1
drivers/powercap/intel_rapl_msr.c
··· 148 148 X86_MATCH_VFM(INTEL_METEORLAKE, NULL), 149 149 X86_MATCH_VFM(INTEL_METEORLAKE_L, NULL), 150 150 X86_MATCH_VFM(INTEL_ARROWLAKE_U, NULL), 151 + X86_MATCH_VFM(INTEL_ARROWLAKE_H, NULL), 151 152 {} 152 153 }; 153 154
+2 -2
drivers/reset/reset-npcm.c
··· 405 405 if (!of_property_read_u32(pdev->dev.of_node, "nuvoton,sw-reset-number", 406 406 &rc->sw_reset_number)) { 407 407 if (rc->sw_reset_number && rc->sw_reset_number < 5) { 408 - rc->restart_nb.priority = 192, 409 - rc->restart_nb.notifier_call = npcm_rc_restart, 408 + rc->restart_nb.priority = 192; 409 + rc->restart_nb.notifier_call = npcm_rc_restart; 410 410 ret = register_restart_handler(&rc->restart_nb); 411 411 if (ret) 412 412 dev_warn(&pdev->dev, "failed to register restart handler\n");
+3
drivers/reset/starfive/reset-starfive-jh71x0.c
··· 94 94 void __iomem *reg_status = data->status + offset * sizeof(u32); 95 95 u32 value = readl(reg_status); 96 96 97 + if (!data->asserted) 98 + return !(value & mask); 99 + 97 100 return !((value ^ data->asserted[offset]) & mask); 98 101 } 99 102
+2 -1
drivers/s390/char/sclp.c
··· 1195 1195 } 1196 1196 1197 1197 static struct notifier_block sclp_reboot_notifier = { 1198 - .notifier_call = sclp_reboot_event 1198 + .notifier_call = sclp_reboot_event, 1199 + .priority = INT_MIN, 1199 1200 }; 1200 1201 1201 1202 static ssize_t con_pages_show(struct device_driver *dev, char *buf)
+2 -2
drivers/s390/char/sclp_vt220.c
··· 319 319 buffer = (void *) ((addr_t) sccb + sccb->header.length); 320 320 321 321 if (convertlf) { 322 - /* Perform Linefeed conversion (0x0a -> 0x0a 0x0d)*/ 322 + /* Perform Linefeed conversion (0x0a -> 0x0d 0x0a)*/ 323 323 for (from=0, to=0; 324 324 (from < count) && (to < sclp_vt220_space_left(request)); 325 325 from++) { ··· 328 328 /* Perform conversion */ 329 329 if (c == 0x0a) { 330 330 if (to + 1 < sclp_vt220_space_left(request)) { 331 - ((unsigned char *) buffer)[to++] = c; 332 331 ((unsigned char *) buffer)[to++] = 0x0d; 332 + ((unsigned char *) buffer)[to++] = c; 333 333 } else 334 334 break; 335 335
+1 -2
drivers/s390/crypto/ap_bus.c
··· 1864 1864 } 1865 1865 /* if no queue device exists, create a new one */ 1866 1866 if (!aq) { 1867 - aq = ap_queue_create(qid, ac->ap_dev.device_type); 1867 + aq = ap_queue_create(qid, ac); 1868 1868 if (!aq) { 1869 1869 AP_DBF_WARN("%s(%d,%d) ap_queue_create() failed\n", 1870 1870 __func__, ac->id, dom); 1871 1871 continue; 1872 1872 } 1873 - aq->card = ac; 1874 1873 aq->config = !decfg; 1875 1874 aq->chkstop = chkstop; 1876 1875 aq->se_bstate = hwinfo.bs;
+1 -1
drivers/s390/crypto/ap_bus.h
··· 272 272 int ap_test_config_ctrl_domain(unsigned int domain); 273 273 274 274 void ap_queue_init_reply(struct ap_queue *aq, struct ap_message *ap_msg); 275 - struct ap_queue *ap_queue_create(ap_qid_t qid, int device_type); 275 + struct ap_queue *ap_queue_create(ap_qid_t qid, struct ap_card *ac); 276 276 void ap_queue_prepare_remove(struct ap_queue *aq); 277 277 void ap_queue_remove(struct ap_queue *aq); 278 278 void ap_queue_init_state(struct ap_queue *aq);
+20 -8
drivers/s390/crypto/ap_queue.c
··· 22 22 * some AP queue helper functions 23 23 */ 24 24 25 + static inline bool ap_q_supported_in_se(struct ap_queue *aq) 26 + { 27 + return aq->card->hwinfo.ep11 || aq->card->hwinfo.accel; 28 + } 29 + 25 30 static inline bool ap_q_supports_bind(struct ap_queue *aq) 26 31 { 27 32 return aq->card->hwinfo.ep11 || aq->card->hwinfo.accel; ··· 1109 1104 kfree(aq); 1110 1105 } 1111 1106 1112 - struct ap_queue *ap_queue_create(ap_qid_t qid, int device_type) 1107 + struct ap_queue *ap_queue_create(ap_qid_t qid, struct ap_card *ac) 1113 1108 { 1114 1109 struct ap_queue *aq; 1115 1110 1116 1111 aq = kzalloc(sizeof(*aq), GFP_KERNEL); 1117 1112 if (!aq) 1118 1113 return NULL; 1114 + aq->card = ac; 1119 1115 aq->ap_dev.device.release = ap_queue_device_release; 1120 1116 aq->ap_dev.device.type = &ap_queue_type; 1121 - aq->ap_dev.device_type = device_type; 1122 - // add optional SE secure binding attributes group 1123 - if (ap_sb_available() && is_prot_virt_guest()) 1117 + aq->ap_dev.device_type = ac->ap_dev.device_type; 1118 + /* in SE environment add bind/associate attributes group */ 1119 + if (ap_is_se_guest() && ap_q_supported_in_se(aq)) 1124 1120 aq->ap_dev.device.groups = ap_queue_dev_sb_attr_groups; 1125 1121 aq->qid = qid; 1126 1122 spin_lock_init(&aq->lock); ··· 1202 1196 } 1203 1197 1204 1198 /* SE guest's queues additionally need to be bound */ 1205 - if (ap_q_needs_bind(aq) && 1206 - !(aq->se_bstate == AP_BS_Q_USABLE || 1207 - aq->se_bstate == AP_BS_Q_USABLE_NO_SECURE_KEY)) 1208 - rc = false; 1199 + if (ap_is_se_guest()) { 1200 + if (!ap_q_supported_in_se(aq)) { 1201 + rc = false; 1202 + goto unlock_and_out; 1203 + } 1204 + if (ap_q_needs_bind(aq) && 1205 + !(aq->se_bstate == AP_BS_Q_USABLE || 1206 + aq->se_bstate == AP_BS_Q_USABLE_NO_SECURE_KEY)) 1207 + rc = false; 1208 + } 1209 1209 1210 1210 unlock_and_out: 1211 1211 spin_unlock_bh(&aq->lock);
+1
drivers/s390/crypto/pkey_pckmo.c
··· 324 324 memcpy(protkey, t->protkey, t->len); 325 325 *protkeylen = t->len; 326 326 *protkeytype = t->keytype; 327 + rc = 0; 327 328 break; 328 329 } 329 330 case TOKVER_CLEAR_KEY: {
+2 -2
drivers/scsi/mpi3mr/mpi3mr.h
··· 542 542 * @port_list: List of ports belonging to a SAS node 543 543 * @num_phys: Number of phys associated with port 544 544 * @marked_responding: used while refresing the sas ports 545 - * @lowest_phy: lowest phy ID of current sas port 546 - * @phy_mask: phy_mask of current sas port 545 + * @lowest_phy: lowest phy ID of current sas port, valid for controller port 546 + * @phy_mask: phy_mask of current sas port, valid for controller port 547 547 * @hba_port: HBA port entry 548 548 * @remote_identify: Attached device identification 549 549 * @rphy: SAS transport layer rphy object
+27 -15
drivers/scsi/mpi3mr/mpi3mr_transport.c
··· 590 590 * @mrioc: Adapter instance reference 591 591 * @mr_sas_port: Internal Port object 592 592 * @mr_sas_phy: Internal Phy object 593 + * @host_node: Flag to indicate this is a host_node 593 594 * 594 595 * Return: None. 595 596 */ 596 597 static void mpi3mr_delete_sas_phy(struct mpi3mr_ioc *mrioc, 597 598 struct mpi3mr_sas_port *mr_sas_port, 598 - struct mpi3mr_sas_phy *mr_sas_phy) 599 + struct mpi3mr_sas_phy *mr_sas_phy, u8 host_node) 599 600 { 600 601 u64 sas_address = mr_sas_port->remote_identify.sas_address; 601 602 ··· 606 605 607 606 list_del(&mr_sas_phy->port_siblings); 608 607 mr_sas_port->num_phys--; 609 - mr_sas_port->phy_mask &= ~(1 << mr_sas_phy->phy_id); 610 - if (mr_sas_port->lowest_phy == mr_sas_phy->phy_id) 611 - mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1; 608 + 609 + if (host_node) { 610 + mr_sas_port->phy_mask &= ~(1 << mr_sas_phy->phy_id); 611 + 612 + if (mr_sas_port->lowest_phy == mr_sas_phy->phy_id) 613 + mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1; 614 + } 612 615 sas_port_delete_phy(mr_sas_port->port, mr_sas_phy->phy); 613 616 mr_sas_phy->phy_belongs_to_port = 0; 614 617 } ··· 622 617 * @mrioc: Adapter instance reference 623 618 * @mr_sas_port: Internal Port object 624 619 * @mr_sas_phy: Internal Phy object 620 + * @host_node: Flag to indicate this is a host_node 625 621 * 626 622 * Return: None. 
627 623 */ 628 624 static void mpi3mr_add_sas_phy(struct mpi3mr_ioc *mrioc, 629 625 struct mpi3mr_sas_port *mr_sas_port, 630 - struct mpi3mr_sas_phy *mr_sas_phy) 626 + struct mpi3mr_sas_phy *mr_sas_phy, u8 host_node) 631 627 { 632 628 u64 sas_address = mr_sas_port->remote_identify.sas_address; 633 629 ··· 638 632 639 633 list_add_tail(&mr_sas_phy->port_siblings, &mr_sas_port->phy_list); 640 634 mr_sas_port->num_phys++; 641 - mr_sas_port->phy_mask |= (1 << mr_sas_phy->phy_id); 642 - if (mr_sas_phy->phy_id < mr_sas_port->lowest_phy) 643 - mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1; 635 + if (host_node) { 636 + mr_sas_port->phy_mask |= (1 << mr_sas_phy->phy_id); 637 + 638 + if (mr_sas_phy->phy_id < mr_sas_port->lowest_phy) 639 + mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1; 640 + } 644 641 sas_port_add_phy(mr_sas_port->port, mr_sas_phy->phy); 645 642 mr_sas_phy->phy_belongs_to_port = 1; 646 643 } ··· 684 675 if (srch_phy == mr_sas_phy) 685 676 return; 686 677 } 687 - mpi3mr_add_sas_phy(mrioc, mr_sas_port, mr_sas_phy); 678 + mpi3mr_add_sas_phy(mrioc, mr_sas_port, mr_sas_phy, mr_sas_node->host_node); 688 679 return; 689 680 } 690 681 } ··· 745 736 mpi3mr_delete_sas_port(mrioc, mr_sas_port); 746 737 else 747 738 mpi3mr_delete_sas_phy(mrioc, mr_sas_port, 748 - mr_sas_phy); 739 + mr_sas_phy, mr_sas_node->host_node); 749 740 return; 750 741 } 751 742 } ··· 1037 1028 /** 1038 1029 * mpi3mr_get_hba_port_by_id - find hba port by id 1039 1030 * @mrioc: Adapter instance reference 1040 - * @port_id - Port ID to search 1031 + * @port_id: Port ID to search 1041 1032 * 1042 1033 * Return: mpi3mr_hba_port reference for the matched port 1043 1034 */ ··· 1376 1367 mpi3mr_sas_port_sanity_check(mrioc, mr_sas_node, 1377 1368 mr_sas_port->remote_identify.sas_address, hba_port); 1378 1369 1379 - if (mr_sas_node->num_phys >= sizeof(mr_sas_port->phy_mask) * 8) 1370 + if (mr_sas_node->host_node && mr_sas_node->num_phys >= 1371 + sizeof(mr_sas_port->phy_mask) * 8) 1380 
1372 ioc_info(mrioc, "max port count %u could be too high\n", 1381 1373 mr_sas_node->num_phys); 1382 1374 ··· 1387 1377 (mr_sas_node->phy[i].hba_port != hba_port)) 1388 1378 continue; 1389 1379 1390 - if (i >= sizeof(mr_sas_port->phy_mask) * 8) { 1380 + if (mr_sas_node->host_node && (i >= sizeof(mr_sas_port->phy_mask) * 8)) { 1391 1381 ioc_warn(mrioc, "skipping port %u, max allowed value is %zu\n", 1392 1382 i, sizeof(mr_sas_port->phy_mask) * 8); 1393 1383 goto out_fail; ··· 1395 1385 list_add_tail(&mr_sas_node->phy[i].port_siblings, 1396 1386 &mr_sas_port->phy_list); 1397 1387 mr_sas_port->num_phys++; 1398 - mr_sas_port->phy_mask |= (1 << i); 1388 + if (mr_sas_node->host_node) 1389 + mr_sas_port->phy_mask |= (1 << i); 1399 1390 } 1400 1391 1401 1392 if (!mr_sas_port->num_phys) { ··· 1405 1394 goto out_fail; 1406 1395 } 1407 1396 1408 - mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1; 1397 + if (mr_sas_node->host_node) 1398 + mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1; 1409 1399 1410 1400 if (mr_sas_port->remote_identify.device_type == SAS_END_DEVICE) { 1411 1401 tgtdev = mpi3mr_get_tgtdev_by_addr(mrioc,
+5 -6
drivers/soc/fsl/qe/qmc.c
··· 1761 1761 */ 1762 1762 info = devm_qe_muram_alloc(qmc->dev, UCC_SLOW_PRAM_SIZE + 2 * 64, 1763 1763 ALIGNMENT_OF_UCC_SLOW_PRAM); 1764 - if (IS_ERR_VALUE(info)) { 1765 - dev_err(qmc->dev, "cannot allocate MURAM for PRAM"); 1766 - return -ENOMEM; 1767 - } 1764 + if (info < 0) 1765 + return info; 1766 + 1768 1767 if (!qe_issue_cmd(QE_ASSIGN_PAGE_TO_DEVICE, qmc->qe_subblock, 1769 1768 QE_CR_PROTOCOL_UNSPECIFIED, info)) { 1770 1769 dev_err(qmc->dev, "QE_ASSIGN_PAGE_TO_DEVICE cmd failed"); ··· 2055 2056 qmc_exit_xcc(qmc); 2056 2057 } 2057 2058 2058 - static const struct qmc_data qmc_data_cpm1 = { 2059 + static const struct qmc_data qmc_data_cpm1 __maybe_unused = { 2059 2060 .version = QMC_CPM1, 2060 2061 .tstate = 0x30000000, 2061 2062 .rstate = 0x31000000, ··· 2065 2066 .rpack = 0x00000000, 2066 2067 }; 2067 2068 2068 - static const struct qmc_data qmc_data_qe = { 2069 + static const struct qmc_data qmc_data_qe __maybe_unused = { 2069 2070 .version = QMC_QE, 2070 2071 .tstate = 0x30000000, 2071 2072 .rstate = 0x30000000,
+1 -1
drivers/target/target_core_device.c
··· 691 691 692 692 dev->queues = kcalloc(nr_cpu_ids, sizeof(*dev->queues), GFP_KERNEL); 693 693 if (!dev->queues) { 694 - dev->transport->free_device(dev); 694 + hba->backend->ops->free_device(dev); 695 695 return NULL; 696 696 } 697 697
+2
drivers/tty/n_gsm.c
··· 3157 3157 mutex_unlock(&gsm->mutex); 3158 3158 /* Now wipe the queues */ 3159 3159 tty_ldisc_flush(gsm->tty); 3160 + 3161 + guard(spinlock_irqsave)(&gsm->tx_lock); 3160 3162 list_for_each_entry_safe(txq, ntxq, &gsm->tx_ctrl_list, list) 3161 3163 kfree(txq); 3162 3164 INIT_LIST_HEAD(&gsm->tx_ctrl_list);
+15
drivers/tty/serial/imx.c
··· 762 762 763 763 imx_uart_writel(sport, USR1_RTSD, USR1); 764 764 usr1 = imx_uart_readl(sport, USR1) & USR1_RTSS; 765 + /* 766 + * Update sport->old_status here, so any follow-up calls to 767 + * imx_uart_mctrl_check() will be able to recognize that RTS 768 + * state changed since last imx_uart_mctrl_check() call. 769 + * 770 + * In case RTS has been detected as asserted here and later on 771 + * deasserted by the time imx_uart_mctrl_check() was called, 772 + * imx_uart_mctrl_check() can detect the RTS state change and 773 + * trigger uart_handle_cts_change() to unblock the port for 774 + * further TX transfers. 775 + */ 776 + if (usr1 & USR1_RTSS) 777 + sport->old_status |= TIOCM_CTS; 778 + else 779 + sport->old_status &= ~TIOCM_CTS; 765 780 uart_handle_cts_change(&sport->port, usr1); 766 781 wake_up_interruptible(&sport->port.state->port.delta_msr_wait); 767 782
+48 -55
drivers/tty/serial/qcom_geni_serial.c
··· 147 147 148 148 static void __qcom_geni_serial_cancel_tx_cmd(struct uart_port *uport); 149 149 static void qcom_geni_serial_cancel_tx_cmd(struct uart_port *uport); 150 + static int qcom_geni_serial_port_setup(struct uart_port *uport); 150 151 151 152 static inline struct qcom_geni_serial_port *to_dev_port(struct uart_port *uport) 152 153 { ··· 396 395 writel(c, uport->membase + SE_GENI_TX_FIFOn); 397 396 qcom_geni_serial_poll_tx_done(uport); 398 397 } 398 + 399 + static int qcom_geni_serial_poll_init(struct uart_port *uport) 400 + { 401 + struct qcom_geni_serial_port *port = to_dev_port(uport); 402 + int ret; 403 + 404 + if (!port->setup) { 405 + ret = qcom_geni_serial_port_setup(uport); 406 + if (ret) 407 + return ret; 408 + } 409 + 410 + if (!qcom_geni_serial_secondary_active(uport)) 411 + geni_se_setup_s_cmd(&port->se, UART_START_READ, 0); 412 + 413 + return 0; 414 + } 399 415 #endif 400 416 401 417 #ifdef CONFIG_SERIAL_QCOM_GENI_CONSOLE ··· 580 562 } 581 563 #endif /* CONFIG_SERIAL_QCOM_GENI_CONSOLE */ 582 564 583 - static void handle_rx_uart(struct uart_port *uport, u32 bytes, bool drop) 565 + static void handle_rx_uart(struct uart_port *uport, u32 bytes) 584 566 { 585 567 struct qcom_geni_serial_port *port = to_dev_port(uport); 586 568 struct tty_port *tport = &uport->state->port; ··· 588 570 589 571 ret = tty_insert_flip_string(tport, port->rx_buf, bytes); 590 572 if (ret != bytes) { 591 - dev_err(uport->dev, "%s:Unable to push data ret %d_bytes %d\n", 592 - __func__, ret, bytes); 593 - WARN_ON_ONCE(1); 573 + dev_err_ratelimited(uport->dev, "failed to push data (%d < %u)\n", 574 + ret, bytes); 594 575 } 595 576 uport->icount.rx += ret; 596 577 tty_flip_buffer_push(tport); ··· 804 787 static void qcom_geni_serial_stop_rx_dma(struct uart_port *uport) 805 788 { 806 789 struct qcom_geni_serial_port *port = to_dev_port(uport); 790 + bool done; 807 791 808 792 if (!qcom_geni_serial_secondary_active(uport)) 809 793 return; 810 794 811 795 
geni_se_cancel_s_cmd(&port->se); 812 - qcom_geni_serial_poll_bit(uport, SE_GENI_S_IRQ_STATUS, 813 - S_CMD_CANCEL_EN, true); 814 - 815 - if (qcom_geni_serial_secondary_active(uport)) 796 + done = qcom_geni_serial_poll_bit(uport, SE_DMA_RX_IRQ_STAT, 797 + RX_EOT, true); 798 + if (done) { 799 + writel(RX_EOT | RX_DMA_DONE, 800 + uport->membase + SE_DMA_RX_IRQ_CLR); 801 + } else { 816 802 qcom_geni_serial_abort_rx(uport); 803 + 804 + writel(1, uport->membase + SE_DMA_RX_FSM_RST); 805 + qcom_geni_serial_poll_bit(uport, SE_DMA_RX_IRQ_STAT, 806 + RX_RESET_DONE, true); 807 + writel(RX_RESET_DONE | RX_DMA_DONE, 808 + uport->membase + SE_DMA_RX_IRQ_CLR); 809 + } 817 810 818 811 if (port->rx_dma_addr) { 819 812 geni_se_rx_dma_unprep(&port->se, port->rx_dma_addr, ··· 873 846 } 874 847 875 848 if (!drop) 876 - handle_rx_uart(uport, rx_in, drop); 849 + handle_rx_uart(uport, rx_in); 877 850 878 851 ret = geni_se_rx_dma_prep(&port->se, port->rx_buf, 879 852 DMA_RX_BUF_SIZE, ··· 1123 1096 { 1124 1097 disable_irq(uport->irq); 1125 1098 1099 + uart_port_lock_irq(uport); 1126 1100 qcom_geni_serial_stop_tx(uport); 1127 1101 qcom_geni_serial_stop_rx(uport); 1128 1102 1129 1103 qcom_geni_serial_cancel_tx_cmd(uport); 1104 + uart_port_unlock_irq(uport); 1130 1105 } 1131 1106 1132 1107 static void qcom_geni_serial_flush_buffer(struct uart_port *uport) ··· 1181 1152 false, true, true); 1182 1153 geni_se_init(&port->se, UART_RX_WM, port->rx_fifo_depth - 2); 1183 1154 geni_se_select_mode(&port->se, port->dev_data->mode); 1184 - qcom_geni_serial_start_rx(uport); 1185 1155 port->setup = true; 1186 1156 1187 1157 return 0; ··· 1196 1168 if (ret) 1197 1169 return ret; 1198 1170 } 1171 + 1172 + uart_port_lock_irq(uport); 1173 + qcom_geni_serial_start_rx(uport); 1174 + uart_port_unlock_irq(uport); 1175 + 1199 1176 enable_irq(uport->irq); 1200 1177 1201 1178 return 0; ··· 1286 1253 unsigned int avg_bw_core; 1287 1254 unsigned long timeout; 1288 1255 1289 - qcom_geni_serial_stop_rx(uport); 1290 1256 
/* baud rate */ 1291 1257 baud = uart_get_baud_rate(uport, termios, old, 300, 4000000); 1292 1258 ··· 1301 1269 dev_err(port->se.dev, 1302 1270 "Couldn't find suitable clock rate for %u\n", 1303 1271 baud * sampling_rate); 1304 - goto out_restart_rx; 1272 + return; 1305 1273 } 1306 1274 1307 1275 dev_dbg(port->se.dev, "desired_rate = %u, clk_rate = %lu, clk_div = %u\n", ··· 1392 1360 writel(stop_bit_len, uport->membase + SE_UART_TX_STOP_BIT_LEN); 1393 1361 writel(ser_clk_cfg, uport->membase + GENI_SER_M_CLK_CFG); 1394 1362 writel(ser_clk_cfg, uport->membase + GENI_SER_S_CLK_CFG); 1395 - out_restart_rx: 1396 - qcom_geni_serial_start_rx(uport); 1397 1363 } 1398 1364 1399 1365 #ifdef CONFIG_SERIAL_QCOM_GENI_CONSOLE ··· 1612 1582 #ifdef CONFIG_CONSOLE_POLL 1613 1583 .poll_get_char = qcom_geni_serial_get_char, 1614 1584 .poll_put_char = qcom_geni_serial_poll_put_char, 1615 - .poll_init = qcom_geni_serial_port_setup, 1585 + .poll_init = qcom_geni_serial_poll_init, 1616 1586 #endif 1617 1587 .pm = qcom_geni_serial_pm, 1618 1588 }; ··· 1779 1749 uart_remove_one_port(drv, &port->uport); 1780 1750 } 1781 1751 1782 - static int qcom_geni_serial_sys_suspend(struct device *dev) 1752 + static int qcom_geni_serial_suspend(struct device *dev) 1783 1753 { 1784 1754 struct qcom_geni_serial_port *port = dev_get_drvdata(dev); 1785 1755 struct uart_port *uport = &port->uport; ··· 1796 1766 return uart_suspend_port(private_data->drv, uport); 1797 1767 } 1798 1768 1799 - static int qcom_geni_serial_sys_resume(struct device *dev) 1769 + static int qcom_geni_serial_resume(struct device *dev) 1800 1770 { 1801 1771 int ret; 1802 1772 struct qcom_geni_serial_port *port = dev_get_drvdata(dev); ··· 1807 1777 if (uart_console(uport)) { 1808 1778 geni_icc_set_tag(&port->se, QCOM_ICC_TAG_ALWAYS); 1809 1779 geni_icc_set_bw(&port->se); 1810 - } 1811 - return ret; 1812 - } 1813 - 1814 - static int qcom_geni_serial_sys_hib_resume(struct device *dev) 1815 - { 1816 - int ret = 0; 1817 - struct uart_port 
*uport; 1818 - struct qcom_geni_private_data *private_data; 1819 - struct qcom_geni_serial_port *port = dev_get_drvdata(dev); 1820 - 1821 - uport = &port->uport; 1822 - private_data = uport->private_data; 1823 - 1824 - if (uart_console(uport)) { 1825 - geni_icc_set_tag(&port->se, QCOM_ICC_TAG_ALWAYS); 1826 - geni_icc_set_bw(&port->se); 1827 - ret = uart_resume_port(private_data->drv, uport); 1828 - /* 1829 - * For hibernation usecase clients for 1830 - * console UART won't call port setup during restore, 1831 - * hence call port setup for console uart. 1832 - */ 1833 - qcom_geni_serial_port_setup(uport); 1834 - } else { 1835 - /* 1836 - * Peripheral register settings are lost during hibernation. 1837 - * Update setup flag such that port setup happens again 1838 - * during next session. Clients of HS-UART will close and 1839 - * open the port during hibernation. 1840 - */ 1841 - port->setup = false; 1842 1780 } 1843 1781 return ret; 1844 1782 } ··· 1822 1824 }; 1823 1825 1824 1826 static const struct dev_pm_ops qcom_geni_serial_pm_ops = { 1825 - .suspend = pm_sleep_ptr(qcom_geni_serial_sys_suspend), 1826 - .resume = pm_sleep_ptr(qcom_geni_serial_sys_resume), 1827 - .freeze = pm_sleep_ptr(qcom_geni_serial_sys_suspend), 1828 - .poweroff = pm_sleep_ptr(qcom_geni_serial_sys_suspend), 1829 - .restore = pm_sleep_ptr(qcom_geni_serial_sys_hib_resume), 1830 - .thaw = pm_sleep_ptr(qcom_geni_serial_sys_hib_resume), 1827 + SYSTEM_SLEEP_PM_OPS(qcom_geni_serial_suspend, qcom_geni_serial_resume) 1831 1828 }; 1832 1829 1833 1830 static const struct of_device_id qcom_geni_serial_match_table[] = {
+1 -1
drivers/tty/vt/vt.c
··· 4726 4726 return -EINVAL; 4727 4727 4728 4728 if (op->data) { 4729 - font.data = kvmalloc(max_font_size, GFP_KERNEL); 4729 + font.data = kvzalloc(max_font_size, GFP_KERNEL); 4730 4730 if (!font.data) 4731 4731 return -ENOMEM; 4732 4732 } else
+8 -7
drivers/ufs/core/ufs-mcq.c
··· 539 539 struct scsi_cmnd *cmd = lrbp->cmd; 540 540 struct ufs_hw_queue *hwq; 541 541 void __iomem *reg, *opr_sqd_base; 542 - u32 nexus, id, val; 542 + u32 nexus, id, val, rtc; 543 543 int err; 544 544 545 545 if (hba->quirks & UFSHCD_QUIRK_MCQ_BROKEN_RTC) ··· 569 569 opr_sqd_base = mcq_opr_base(hba, OPR_SQD, id); 570 570 writel(nexus, opr_sqd_base + REG_SQCTI); 571 571 572 - /* SQRTCy.ICU = 1 */ 573 - writel(SQ_ICU, opr_sqd_base + REG_SQRTC); 572 + /* Initiate Cleanup */ 573 + writel(readl(opr_sqd_base + REG_SQRTC) | SQ_ICU, 574 + opr_sqd_base + REG_SQRTC); 574 575 575 576 /* Poll SQRTSy.CUS = 1. Return result from SQRTSy.RTC */ 576 577 reg = opr_sqd_base + REG_SQRTS; 577 578 err = read_poll_timeout(readl, val, val & SQ_CUS, 20, 578 579 MCQ_POLL_US, false, reg); 579 - if (err) 580 - dev_err(hba->dev, "%s: failed. hwq=%d, tag=%d err=%ld\n", 581 - __func__, id, task_tag, 582 - FIELD_GET(SQ_ICU_ERR_CODE_MASK, readl(reg))); 580 + rtc = FIELD_GET(SQ_ICU_ERR_CODE_MASK, readl(reg)); 581 + if (err || rtc) 582 + dev_err(hba->dev, "%s: failed. hwq=%d, tag=%d err=%d RTC=%d\n", 583 + __func__, id, task_tag, err, rtc); 583 584 584 585 if (ufshcd_mcq_sq_start(hba, hwq)) 585 586 err = -ETIMEDOUT;
+7 -17
drivers/ufs/core/ufshcd.c
··· 5416 5416 } 5417 5417 break; 5418 5418 case OCS_ABORTED: 5419 - result |= DID_ABORT << 16; 5420 - break; 5421 5419 case OCS_INVALID_COMMAND_STATUS: 5422 5420 result |= DID_REQUEUE << 16; 5421 + dev_warn(hba->dev, 5422 + "OCS %s from controller for tag %d\n", 5423 + (ocs == OCS_ABORTED ? "aborted" : "invalid"), 5424 + lrbp->task_tag); 5423 5425 break; 5424 5426 case OCS_INVALID_CMD_TABLE_ATTR: 5425 5427 case OCS_INVALID_PRDT_ATTR: ··· 6467 6465 struct scsi_device *sdev = cmd->device; 6468 6466 struct Scsi_Host *shost = sdev->host; 6469 6467 struct ufs_hba *hba = shost_priv(shost); 6470 - struct ufshcd_lrb *lrbp = &hba->lrb[tag]; 6471 - struct ufs_hw_queue *hwq; 6472 - unsigned long flags; 6473 6468 6474 6469 *ret = ufshcd_try_to_abort_task(hba, tag); 6475 6470 dev_err(hba->dev, "Aborting tag %d / CDB %#02x %s\n", tag, 6476 6471 hba->lrb[tag].cmd ? hba->lrb[tag].cmd->cmnd[0] : -1, 6477 6472 *ret ? "failed" : "succeeded"); 6478 - 6479 - /* Release cmd in MCQ mode if abort succeeds */ 6480 - if (hba->mcq_enabled && (*ret == 0)) { 6481 - hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(lrbp->cmd)); 6482 - if (!hwq) 6483 - return 0; 6484 - spin_lock_irqsave(&hwq->cq_lock, flags); 6485 - if (ufshcd_cmd_inflight(lrbp->cmd)) 6486 - ufshcd_release_scsi_cmd(hba, lrbp); 6487 - spin_unlock_irqrestore(&hwq->cq_lock, flags); 6488 - } 6489 6473 6490 6474 return *ret == 0; 6491 6475 } ··· 10197 10209 shost_for_each_device(sdev, hba->host) { 10198 10210 if (sdev == hba->ufs_device_wlun) 10199 10211 continue; 10200 - scsi_device_quiesce(sdev); 10212 + mutex_lock(&sdev->state_mutex); 10213 + scsi_device_set_state(sdev, SDEV_OFFLINE); 10214 + mutex_unlock(&sdev->state_mutex); 10201 10215 } 10202 10216 __ufshcd_wl_suspend(hba, UFS_SHUTDOWN_PM); 10203 10217
+19
drivers/usb/dwc3/core.c
··· 2342 2342 u32 reg; 2343 2343 int i; 2344 2344 2345 + dwc->susphy_state = (dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)) & 2346 + DWC3_GUSB2PHYCFG_SUSPHY) || 2347 + (dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0)) & 2348 + DWC3_GUSB3PIPECTL_SUSPHY); 2349 + 2345 2350 switch (dwc->current_dr_role) { 2346 2351 case DWC3_GCTL_PRTCAP_DEVICE: 2347 2352 if (pm_runtime_suspended(dwc->dev)) ··· 2396 2391 default: 2397 2392 /* do nothing */ 2398 2393 break; 2394 + } 2395 + 2396 + if (!PMSG_IS_AUTO(msg)) { 2397 + /* 2398 + * TI AM62 platform requires SUSPHY to be 2399 + * enabled for system suspend to work. 2400 + */ 2401 + if (!dwc->susphy_state) 2402 + dwc3_enable_susphy(dwc, true); 2399 2403 } 2400 2404 2401 2405 return 0; ··· 2472 2458 default: 2473 2459 /* do nothing */ 2474 2460 break; 2461 + } 2462 + 2463 + if (!PMSG_IS_AUTO(msg)) { 2464 + /* restore SUSPHY state to that before system suspend. */ 2465 + dwc3_enable_susphy(dwc, dwc->susphy_state); 2475 2466 } 2476 2467 2477 2468 return 0;
+3
drivers/usb/dwc3/core.h
··· 1150 1150 * @sys_wakeup: set if the device may do system wakeup. 1151 1151 * @wakeup_configured: set if the device is configured for remote wakeup. 1152 1152 * @suspended: set to track suspend event due to U3/L2. 1153 + * @susphy_state: state of DWC3_GUSB2PHYCFG_SUSPHY + DWC3_GUSB3PIPECTL_SUSPHY 1154 + * before PM suspend. 1153 1155 * @imod_interval: set the interrupt moderation interval in 250ns 1154 1156 * increments or 0 to disable. 1155 1157 * @max_cfg_eps: current max number of IN eps used across all USB configs. ··· 1384 1382 unsigned sys_wakeup:1; 1385 1383 unsigned wakeup_configured:1; 1386 1384 unsigned suspended:1; 1385 + unsigned susphy_state:1; 1387 1386 1388 1387 u16 imod_interval; 1389 1388
+6 -4
drivers/usb/dwc3/gadget.c
··· 438 438 dwc3_gadget_ep_get_transfer_index(dep); 439 439 } 440 440 441 + if (DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_ENDTRANSFER && 442 + !(cmd & DWC3_DEPCMD_CMDIOC)) 443 + mdelay(1); 444 + 441 445 if (saved_config) { 442 446 reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)); 443 447 reg |= saved_config; ··· 1719 1715 WARN_ON_ONCE(ret); 1720 1716 dep->resource_index = 0; 1721 1717 1722 - if (!interrupt) { 1723 - mdelay(1); 1718 + if (!interrupt) 1724 1719 dep->flags &= ~DWC3_EP_TRANSFER_STARTED; 1725 - } else if (!ret) { 1720 + else if (!ret) 1726 1721 dep->flags |= DWC3_EP_END_TRANSFER_PENDING; 1727 - } 1728 1722 1729 1723 dep->flags &= ~DWC3_EP_DELAY_STOP; 1730 1724 return ret;
+3 -3
drivers/usb/gadget/function/f_uac2.c
··· 2061 2061 const char *page, size_t len) \ 2062 2062 { \ 2063 2063 struct f_uac2_opts *opts = to_f_uac2_opts(item); \ 2064 - int ret = 0; \ 2064 + int ret = len; \ 2065 2065 \ 2066 2066 mutex_lock(&opts->lock); \ 2067 2067 if (opts->refcnt) { \ ··· 2072 2072 if (len && page[len - 1] == '\n') \ 2073 2073 len--; \ 2074 2074 \ 2075 - ret = scnprintf(opts->name, min(sizeof(opts->name), len + 1), \ 2076 - "%s", page); \ 2075 + scnprintf(opts->name, min(sizeof(opts->name), len + 1), \ 2076 + "%s", page); \ 2077 2077 \ 2078 2078 end: \ 2079 2079 mutex_unlock(&opts->lock); \
+15 -5
drivers/usb/gadget/udc/dummy_hcd.c
··· 254 254 u32 stream_en_ep; 255 255 u8 num_stream[30 / 2]; 256 256 257 + unsigned timer_pending:1; 257 258 unsigned active:1; 258 259 unsigned old_active:1; 259 260 unsigned resuming:1; ··· 1304 1303 urb->error_count = 1; /* mark as a new urb */ 1305 1304 1306 1305 /* kick the scheduler, it'll do the rest */ 1307 - if (!hrtimer_active(&dum_hcd->timer)) 1306 + if (!dum_hcd->timer_pending) { 1307 + dum_hcd->timer_pending = 1; 1308 1308 hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS), 1309 1309 HRTIMER_MODE_REL_SOFT); 1310 + } 1310 1311 1311 1312 done: 1312 1313 spin_unlock_irqrestore(&dum_hcd->dum->lock, flags); ··· 1327 1324 spin_lock_irqsave(&dum_hcd->dum->lock, flags); 1328 1325 1329 1326 rc = usb_hcd_check_unlink_urb(hcd, urb, status); 1330 - if (!rc && dum_hcd->rh_state != DUMMY_RH_RUNNING && 1331 - !list_empty(&dum_hcd->urbp_list)) 1327 + if (rc == 0 && !dum_hcd->timer_pending) { 1328 + dum_hcd->timer_pending = 1; 1332 1329 hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL_SOFT); 1330 + } 1333 1331 1334 1332 spin_unlock_irqrestore(&dum_hcd->dum->lock, flags); 1335 1333 return rc; ··· 1817 1813 1818 1814 /* look at each urb queued by the host side driver */ 1819 1815 spin_lock_irqsave(&dum->lock, flags); 1816 + dum_hcd->timer_pending = 0; 1820 1817 1821 1818 if (!dum_hcd->udev) { 1822 1819 dev_err(dummy_dev(dum_hcd), ··· 1999 1994 if (list_empty(&dum_hcd->urbp_list)) { 2000 1995 usb_put_dev(dum_hcd->udev); 2001 1996 dum_hcd->udev = NULL; 2002 - } else if (dum_hcd->rh_state == DUMMY_RH_RUNNING) { 1997 + } else if (!dum_hcd->timer_pending && 1998 + dum_hcd->rh_state == DUMMY_RH_RUNNING) { 2003 1999 /* want a 1 msec delay here */ 2000 + dum_hcd->timer_pending = 1; 2004 2001 hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS), 2005 2002 HRTIMER_MODE_REL_SOFT); 2006 2003 } ··· 2397 2390 } else { 2398 2391 dum_hcd->rh_state = DUMMY_RH_RUNNING; 2399 2392 set_link_state(dum_hcd); 2400 - if 
(!list_empty(&dum_hcd->urbp_list)) 2393 + if (!list_empty(&dum_hcd->urbp_list)) { 2394 + dum_hcd->timer_pending = 1; 2401 2395 hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL_SOFT); 2396 + } 2402 2397 hcd->state = HC_STATE_RUNNING; 2403 2398 } 2404 2399 spin_unlock_irq(&dum_hcd->dum->lock); ··· 2531 2522 struct dummy_hcd *dum_hcd = hcd_to_dummy_hcd(hcd); 2532 2523 2533 2524 hrtimer_cancel(&dum_hcd->timer); 2525 + dum_hcd->timer_pending = 0; 2534 2526 device_remove_file(dummy_dev(dum_hcd), &dev_attr_urbs); 2535 2527 dev_info(dummy_dev(dum_hcd), "stopped\n"); 2536 2528 }
+1
drivers/usb/host/xhci-dbgcap.h
··· 110 110 struct tasklet_struct push; 111 111 112 112 struct list_head write_pool; 113 + unsigned int tx_boundary; 113 114 114 115 bool registered; 115 116 };
+50 -5
drivers/usb/host/xhci-dbgtty.c
··· 24 24 return dbc->priv; 25 25 } 26 26 27 + static unsigned int 28 + dbc_kfifo_to_req(struct dbc_port *port, char *packet) 29 + { 30 + unsigned int len; 31 + 32 + len = kfifo_len(&port->port.xmit_fifo); 33 + 34 + if (len == 0) 35 + return 0; 36 + 37 + len = min(len, DBC_MAX_PACKET); 38 + 39 + if (port->tx_boundary) 40 + len = min(port->tx_boundary, len); 41 + 42 + len = kfifo_out(&port->port.xmit_fifo, packet, len); 43 + 44 + if (port->tx_boundary) 45 + port->tx_boundary -= len; 46 + 47 + return len; 48 + } 49 + 27 50 static int dbc_start_tx(struct dbc_port *port) 28 51 __releases(&port->port_lock) 29 52 __acquires(&port->port_lock) ··· 59 36 60 37 while (!list_empty(pool)) { 61 38 req = list_entry(pool->next, struct dbc_request, list_pool); 62 - len = kfifo_out(&port->port.xmit_fifo, req->buf, DBC_MAX_PACKET); 39 + len = dbc_kfifo_to_req(port, req->buf); 63 40 if (len == 0) 64 41 break; 65 42 do_tty_wake = true; ··· 223 200 { 224 201 struct dbc_port *port = tty->driver_data; 225 202 unsigned long flags; 203 + unsigned int written = 0; 226 204 227 205 spin_lock_irqsave(&port->port_lock, flags); 228 - if (count) 229 - count = kfifo_in(&port->port.xmit_fifo, buf, count); 230 - dbc_start_tx(port); 206 + 207 + /* 208 + * Treat tty write as one usb transfer. Make sure the writes are turned 209 + * into TRB request having the same size boundaries as the tty writes. 
210 + * Don't add data to kfifo before previous write is turned into TRBs 211 + */ 212 + if (port->tx_boundary) { 213 + spin_unlock_irqrestore(&port->port_lock, flags); 214 + return 0; 215 + } 216 + 217 + if (count) { 218 + written = kfifo_in(&port->port.xmit_fifo, buf, count); 219 + 220 + if (written == count) 221 + port->tx_boundary = kfifo_len(&port->port.xmit_fifo); 222 + 223 + dbc_start_tx(port); 224 + } 225 + 231 226 spin_unlock_irqrestore(&port->port_lock, flags); 232 227 233 - return count; 228 + return written; 234 229 } 235 230 236 231 static int dbc_tty_put_char(struct tty_struct *tty, u8 ch) ··· 282 241 283 242 spin_lock_irqsave(&port->port_lock, flags); 284 243 room = kfifo_avail(&port->port.xmit_fifo); 244 + 245 + if (port->tx_boundary) 246 + room = 0; 247 + 285 248 spin_unlock_irqrestore(&port->port_lock, flags); 286 249 287 250 return room;
+30 -38
drivers/usb/host/xhci-ring.c
··· 1023 1023 td_to_noop(xhci, ring, cached_td, false); 1024 1024 cached_td->cancel_status = TD_CLEARED; 1025 1025 } 1026 - 1026 + td_to_noop(xhci, ring, td, false); 1027 1027 td->cancel_status = TD_CLEARING_CACHE; 1028 1028 cached_td = td; 1029 1029 break; ··· 2775 2775 return 0; 2776 2776 } 2777 2777 2778 + /* 2779 + * xhci 4.10.2 states isoc endpoints should continue 2780 + * processing the next TD if there was an error mid TD. 2781 + * So host like NEC don't generate an event for the last 2782 + * isoc TRB even if the IOC flag is set. 2783 + * xhci 4.9.1 states that if there are errors in mult-TRB 2784 + * TDs xHC should generate an error for that TRB, and if xHC 2785 + * proceeds to the next TD it should genete an event for 2786 + * any TRB with IOC flag on the way. Other host follow this. 2787 + * 2788 + * We wait for the final IOC event, but if we get an event 2789 + * anywhere outside this TD, just give it back already. 2790 + */ 2791 + td = list_first_entry_or_null(&ep_ring->td_list, struct xhci_td, td_list); 2792 + 2793 + if (td && td->error_mid_td && !trb_in_td(xhci, td, ep_trb_dma, false)) { 2794 + xhci_dbg(xhci, "Missing TD completion event after mid TD error\n"); 2795 + ep_ring->dequeue = td->last_trb; 2796 + ep_ring->deq_seg = td->last_trb_seg; 2797 + inc_deq(xhci, ep_ring); 2798 + xhci_td_cleanup(xhci, td, ep_ring, td->status); 2799 + } 2800 + 2778 2801 if (list_empty(&ep_ring->td_list)) { 2779 2802 /* 2780 2803 * Don't print wanings if ring is empty due to a stopped endpoint generating an ··· 2859 2836 return 0; 2860 2837 } 2861 2838 2862 - /* 2863 - * xhci 4.10.2 states isoc endpoints should continue 2864 - * processing the next TD if there was an error mid TD. 2865 - * So host like NEC don't generate an event for the last 2866 - * isoc TRB even if the IOC flag is set. 
2867 - * xhci 4.9.1 states that if there are errors in mult-TRB 2868 - * TDs xHC should generate an error for that TRB, and if xHC 2869 - * proceeds to the next TD it should genete an event for 2870 - * any TRB with IOC flag on the way. Other host follow this. 2871 - * So this event might be for the next TD. 2872 - */ 2873 - if (td->error_mid_td && 2874 - !list_is_last(&td->td_list, &ep_ring->td_list)) { 2875 - struct xhci_td *td_next = list_next_entry(td, td_list); 2839 + /* HC is busted, give up! */ 2840 + xhci_err(xhci, 2841 + "ERROR Transfer event TRB DMA ptr not part of current TD ep_index %d comp_code %u\n", 2842 + ep_index, trb_comp_code); 2843 + trb_in_td(xhci, td, ep_trb_dma, true); 2876 2844 2877 - ep_seg = trb_in_td(xhci, td_next, ep_trb_dma, false); 2878 - if (ep_seg) { 2879 - /* give back previous TD, start handling new */ 2880 - xhci_dbg(xhci, "Missing TD completion event after mid TD error\n"); 2881 - ep_ring->dequeue = td->last_trb; 2882 - ep_ring->deq_seg = td->last_trb_seg; 2883 - inc_deq(xhci, ep_ring); 2884 - xhci_td_cleanup(xhci, td, ep_ring, td->status); 2885 - td = td_next; 2886 - } 2887 - } 2888 - 2889 - if (!ep_seg) { 2890 - /* HC is busted, give up! */ 2891 - xhci_err(xhci, 2892 - "ERROR Transfer event TRB DMA ptr not " 2893 - "part of current TD ep_index %d " 2894 - "comp_code %u\n", ep_index, 2895 - trb_comp_code); 2896 - trb_in_td(xhci, td, ep_trb_dma, true); 2897 - 2898 - return -ESHUTDOWN; 2899 - } 2845 + return -ESHUTDOWN; 2900 2846 } 2901 2847 2902 2848 if (ep->skip) {
+1 -1
drivers/usb/host/xhci-tegra.c
··· 2183 2183 goto out; 2184 2184 } 2185 2185 2186 - for (i = 0; i < tegra->num_usb_phys; i++) { 2186 + for (i = 0; i < xhci->usb2_rhub.num_ports; i++) { 2187 2187 if (!xhci->usb2_rhub.ports[i]) 2188 2188 continue; 2189 2189 portsc = readl(xhci->usb2_rhub.ports[i]->addr);
+1 -1
drivers/usb/host/xhci.h
··· 1001 1001 /* Set TR Dequeue Pointer command TRB fields, 6.4.3.9 */ 1002 1002 #define TRB_TO_STREAM_ID(p) ((((p) & (0xffff << 16)) >> 16)) 1003 1003 #define STREAM_ID_FOR_TRB(p) ((((p)) & 0xffff) << 16) 1004 - #define SCT_FOR_TRB(p) (((p) << 1) & 0x7) 1004 + #define SCT_FOR_TRB(p) (((p) & 0x7) << 1) 1005 1005 1006 1006 /* Link TRB specific fields */ 1007 1007 #define TRB_TC (1<<1)
+8
drivers/usb/serial/option.c
··· 279 279 #define QUECTEL_PRODUCT_EG912Y 0x6001 280 280 #define QUECTEL_PRODUCT_EC200S_CN 0x6002 281 281 #define QUECTEL_PRODUCT_EC200A 0x6005 282 + #define QUECTEL_PRODUCT_EG916Q 0x6007 282 283 #define QUECTEL_PRODUCT_EM061K_LWW 0x6008 283 284 #define QUECTEL_PRODUCT_EM061K_LCN 0x6009 284 285 #define QUECTEL_PRODUCT_EC200T 0x6026 ··· 1271 1270 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) }, 1272 1271 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) }, 1273 1272 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG912Y, 0xff, 0, 0) }, 1273 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG916Q, 0xff, 0x00, 0x00) }, 1274 1274 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) }, 1275 1275 1276 1276 { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) }, ··· 1382 1380 .driver_info = NCTRL(0) | RSVD(1) }, 1383 1381 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a0, 0xff), /* Telit FN20C04 (rmnet) */ 1384 1382 .driver_info = RSVD(0) | NCTRL(3) }, 1383 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a2, 0xff), /* Telit FN920C04 (MBIM) */ 1384 + .driver_info = NCTRL(4) }, 1385 1385 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a4, 0xff), /* Telit FN20C04 (rmnet) */ 1386 1386 .driver_info = RSVD(0) | NCTRL(3) }, 1387 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a7, 0xff), /* Telit FN920C04 (MBIM) */ 1388 + .driver_info = NCTRL(4) }, 1387 1389 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a9, 0xff), /* Telit FN20C04 (rmnet) */ 1388 1390 .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) }, 1391 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10aa, 0xff), /* Telit FN920C04 (MBIM) */ 1392 + .driver_info = NCTRL(3) | RSVD(4) | RSVD(5) }, 1389 1393 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), 1390 1394 .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
1391 1395 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+3
drivers/usb/typec/class.c
··· 519 519 typec_altmode_put_partner(alt); 520 520 521 521 altmode_id_remove(alt->adev.dev.parent, alt->id); 522 + put_device(alt->adev.dev.parent); 522 523 kfree(alt); 523 524 } 524 525 ··· 568 567 alt->adev.dev.groups = alt->groups; 569 568 alt->adev.dev.type = &typec_altmode_dev_type; 570 569 dev_set_name(&alt->adev.dev, "%s.%u", dev_name(parent), id); 570 + 571 + get_device(alt->adev.dev.parent); 571 572 572 573 /* Link partners and plugs with the ports */ 573 574 if (!is_port)
-1
drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_port.c
··· 432 432 val = TYPEC_CC_RP_DEF; 433 433 break; 434 434 } 435 - val = TYPEC_CC_RP_DEF; 436 435 } 437 436 438 437 if (misc & CC_ORIENTATION)
-1
drivers/xen/Kconfig
··· 261 261 config XEN_PRIVCMD 262 262 tristate "Xen hypercall passthrough driver" 263 263 depends on XEN 264 - imply XEN_PCIDEV_BACKEND 265 264 default m 266 265 help 267 266 The hypercall passthrough driver allows privileged user programs to
+24
drivers/xen/acpi.c
··· 125 125 return 0; 126 126 } 127 127 EXPORT_SYMBOL_GPL(xen_acpi_get_gsi_info); 128 + 129 + static get_gsi_from_sbdf_t get_gsi_from_sbdf; 130 + static DEFINE_RWLOCK(get_gsi_from_sbdf_lock); 131 + 132 + void xen_acpi_register_get_gsi_func(get_gsi_from_sbdf_t func) 133 + { 134 + write_lock(&get_gsi_from_sbdf_lock); 135 + get_gsi_from_sbdf = func; 136 + write_unlock(&get_gsi_from_sbdf_lock); 137 + } 138 + EXPORT_SYMBOL_GPL(xen_acpi_register_get_gsi_func); 139 + 140 + int xen_acpi_get_gsi_from_sbdf(u32 sbdf) 141 + { 142 + int ret = -EOPNOTSUPP; 143 + 144 + read_lock(&get_gsi_from_sbdf_lock); 145 + if (get_gsi_from_sbdf) 146 + ret = get_gsi_from_sbdf(sbdf); 147 + read_unlock(&get_gsi_from_sbdf_lock); 148 + 149 + return ret; 150 + } 151 + EXPORT_SYMBOL_GPL(xen_acpi_get_gsi_from_sbdf);
+2 -4
drivers/xen/privcmd.c
··· 850 850 static long privcmd_ioctl_pcidev_get_gsi(struct file *file, void __user *udata) 851 851 { 852 852 #if defined(CONFIG_XEN_ACPI) 853 - int rc = -EINVAL; 853 + int rc; 854 854 struct privcmd_pcidev_get_gsi kdata; 855 855 856 856 if (copy_from_user(&kdata, udata, sizeof(kdata))) 857 857 return -EFAULT; 858 858 859 - if (IS_REACHABLE(CONFIG_XEN_PCIDEV_BACKEND)) 860 - rc = pcistub_get_gsi_from_sbdf(kdata.sbdf); 861 - 859 + rc = xen_acpi_get_gsi_from_sbdf(kdata.sbdf); 862 860 if (rc < 0) 863 861 return rc; 864 862
+9 -2
drivers/xen/xen-pciback/pci_stub.c
··· 227 227 } 228 228 229 229 #ifdef CONFIG_XEN_ACPI 230 - int pcistub_get_gsi_from_sbdf(unsigned int sbdf) 230 + static int pcistub_get_gsi_from_sbdf(unsigned int sbdf) 231 231 { 232 232 struct pcistub_device *psdev; 233 233 int domain = (sbdf >> 16) & 0xffff; ··· 242 242 243 243 return psdev->gsi; 244 244 } 245 - EXPORT_SYMBOL_GPL(pcistub_get_gsi_from_sbdf); 246 245 #endif 247 246 248 247 struct pci_dev *pcistub_get_pci_dev_by_slot(struct xen_pcibk_device *pdev, ··· 1756 1757 bus_register_notifier(&pci_bus_type, &pci_stub_nb); 1757 1758 #endif 1758 1759 1760 + #ifdef CONFIG_XEN_ACPI 1761 + xen_acpi_register_get_gsi_func(pcistub_get_gsi_from_sbdf); 1762 + #endif 1763 + 1759 1764 return err; 1760 1765 } 1761 1766 1762 1767 static void __exit xen_pcibk_cleanup(void) 1763 1768 { 1769 + #ifdef CONFIG_XEN_ACPI 1770 + xen_acpi_register_get_gsi_func(NULL); 1771 + #endif 1772 + 1764 1773 #ifdef CONFIG_PCI_IOV 1765 1774 bus_unregister_notifier(&pci_bus_type, &pci_stub_nb); 1766 1775 #endif
+2 -3
fs/9p/fid.c
··· 131 131 } 132 132 } 133 133 spin_unlock(&dentry->d_lock); 134 - } else { 135 - if (dentry->d_inode) 136 - ret = v9fs_fid_find_inode(dentry->d_inode, false, uid, any); 137 134 } 135 + if (!ret && dentry->d_inode) 136 + ret = v9fs_fid_find_inode(dentry->d_inode, false, uid, any); 138 137 139 138 return ret; 140 139 }
+2
fs/afs/internal.h
··· 130 130 wait_queue_head_t waitq; /* processes awaiting completion */ 131 131 struct work_struct async_work; /* async I/O processor */ 132 132 struct work_struct work; /* actual work processor */ 133 + struct work_struct free_work; /* Deferred free processor */ 133 134 struct rxrpc_call *rxcall; /* RxRPC call handle */ 134 135 struct rxrpc_peer *peer; /* Remote endpoint */ 135 136 struct key *key; /* security for this call */ ··· 1332 1331 extern void __net_exit afs_close_socket(struct afs_net *); 1333 1332 extern void afs_charge_preallocation(struct work_struct *); 1334 1333 extern void afs_put_call(struct afs_call *); 1334 + void afs_deferred_put_call(struct afs_call *call); 1335 1335 void afs_make_call(struct afs_call *call, gfp_t gfp); 1336 1336 void afs_wait_for_call_to_complete(struct afs_call *call); 1337 1337 extern struct afs_call *afs_alloc_flat_call(struct afs_net *,
+59 -24
fs/afs/rxrpc.c
··· 18 18 19 19 struct workqueue_struct *afs_async_calls; 20 20 21 + static void afs_deferred_free_worker(struct work_struct *work); 21 22 static void afs_wake_up_call_waiter(struct sock *, struct rxrpc_call *, unsigned long); 22 23 static void afs_wake_up_async_call(struct sock *, struct rxrpc_call *, unsigned long); 23 24 static void afs_process_async_call(struct work_struct *); ··· 150 149 call->debug_id = atomic_inc_return(&rxrpc_debug_id); 151 150 refcount_set(&call->ref, 1); 152 151 INIT_WORK(&call->async_work, afs_process_async_call); 152 + INIT_WORK(&call->free_work, afs_deferred_free_worker); 153 153 init_waitqueue_head(&call->waitq); 154 154 spin_lock_init(&call->state_lock); 155 155 call->iter = &call->def_iter; ··· 159 157 trace_afs_call(call->debug_id, afs_call_trace_alloc, 1, o, 160 158 __builtin_return_address(0)); 161 159 return call; 160 + } 161 + 162 + static void afs_free_call(struct afs_call *call) 163 + { 164 + struct afs_net *net = call->net; 165 + int o; 166 + 167 + ASSERT(!work_pending(&call->async_work)); 168 + 169 + rxrpc_kernel_put_peer(call->peer); 170 + 171 + if (call->rxcall) { 172 + rxrpc_kernel_shutdown_call(net->socket, call->rxcall); 173 + rxrpc_kernel_put_call(net->socket, call->rxcall); 174 + call->rxcall = NULL; 175 + } 176 + if (call->type->destructor) 177 + call->type->destructor(call); 178 + 179 + afs_unuse_server_notime(call->net, call->server, afs_server_trace_put_call); 180 + kfree(call->request); 181 + 182 + o = atomic_read(&net->nr_outstanding_calls); 183 + trace_afs_call(call->debug_id, afs_call_trace_free, 0, o, 184 + __builtin_return_address(0)); 185 + kfree(call); 186 + 187 + o = atomic_dec_return(&net->nr_outstanding_calls); 188 + if (o == 0) 189 + wake_up_var(&net->nr_outstanding_calls); 162 190 } 163 191 164 192 /* ··· 205 173 o = atomic_read(&net->nr_outstanding_calls); 206 174 trace_afs_call(debug_id, afs_call_trace_put, r - 1, o, 207 175 __builtin_return_address(0)); 176 + if (zero) 177 + afs_free_call(call); 
178 + } 208 179 209 - if (zero) { 210 - ASSERT(!work_pending(&call->async_work)); 211 - ASSERT(call->type->name != NULL); 180 + static void afs_deferred_free_worker(struct work_struct *work) 181 + { 182 + struct afs_call *call = container_of(work, struct afs_call, free_work); 212 183 213 - rxrpc_kernel_put_peer(call->peer); 184 + afs_free_call(call); 185 + } 214 186 215 - if (call->rxcall) { 216 - rxrpc_kernel_shutdown_call(net->socket, call->rxcall); 217 - rxrpc_kernel_put_call(net->socket, call->rxcall); 218 - call->rxcall = NULL; 219 - } 220 - if (call->type->destructor) 221 - call->type->destructor(call); 187 + /* 188 + * Dispose of a reference on a call, deferring the cleanup to a workqueue 189 + * to avoid lock recursion. 190 + */ 191 + void afs_deferred_put_call(struct afs_call *call) 192 + { 193 + struct afs_net *net = call->net; 194 + unsigned int debug_id = call->debug_id; 195 + bool zero; 196 + int r, o; 222 197 223 - afs_unuse_server_notime(call->net, call->server, afs_server_trace_put_call); 224 - kfree(call->request); 225 - 226 - trace_afs_call(call->debug_id, afs_call_trace_free, 0, o, 227 - __builtin_return_address(0)); 228 - kfree(call); 229 - 230 - o = atomic_dec_return(&net->nr_outstanding_calls); 231 - if (o == 0) 232 - wake_up_var(&net->nr_outstanding_calls); 233 - } 198 + zero = __refcount_dec_and_test(&call->ref, &r); 199 + o = atomic_read(&net->nr_outstanding_calls); 200 + trace_afs_call(debug_id, afs_call_trace_put, r - 1, o, 201 + __builtin_return_address(0)); 202 + if (zero) 203 + schedule_work(&call->free_work); 234 204 } 235 205 236 206 static struct afs_call *afs_get_call(struct afs_call *call, ··· 674 640 } 675 641 676 642 /* 677 - * wake up an asynchronous call 643 + * Wake up an asynchronous call. The caller is holding the call notify 644 + * spinlock around this, so we can't call afs_put_call(). 
678 645 */ 679 646 static void afs_wake_up_async_call(struct sock *sk, struct rxrpc_call *rxcall, 680 647 unsigned long call_user_ID) ··· 692 657 __builtin_return_address(0)); 693 658 694 659 if (!queue_work(afs_async_calls, &call->async_work)) 695 - afs_put_call(call); 660 + afs_deferred_put_call(call); 696 661 } 697 662 } 698 663
+21 -16
fs/bcachefs/alloc_background.c
··· 1977 1977 ca->mi.bucket_size, 1978 1978 GFP_KERNEL); 1979 1979 1980 - int ret = bch2_trans_do(c, NULL, NULL, 1980 + int ret = bch2_trans_commit_do(c, NULL, NULL, 1981 1981 BCH_WATERMARK_btree| 1982 1982 BCH_TRANS_COMMIT_no_enospc, 1983 1983 bch2_clear_bucket_needs_discard(trans, POS(ca->dev_idx, bucket))); ··· 2137 2137 2138 2138 struct bkey_s_c k = next_lru_key(trans, &iter, ca, &wrapped); 2139 2139 ret = bkey_err(k); 2140 - if (bch2_err_matches(ret, BCH_ERR_transaction_restart)) 2141 - continue; 2142 2140 if (ret) 2143 - break; 2141 + goto restart_err; 2144 2142 if (!k.k) 2145 2143 break; 2146 2144 2147 2145 ret = invalidate_one_bucket(trans, &iter, k, &nr_to_invalidate); 2146 + restart_err: 2147 + if (bch2_err_matches(ret, BCH_ERR_transaction_restart)) 2148 + continue; 2148 2149 if (ret) 2149 2150 break; 2150 2151 ··· 2351 2350 2352 2351 /* Bucket IO clocks: */ 2353 2352 2354 - int bch2_bucket_io_time_reset(struct btree_trans *trans, unsigned dev, 2355 - size_t bucket_nr, int rw) 2353 + static int __bch2_bucket_io_time_reset(struct btree_trans *trans, unsigned dev, 2354 + size_t bucket_nr, int rw) 2356 2355 { 2357 2356 struct bch_fs *c = trans->c; 2357 + 2358 2358 struct btree_iter iter; 2359 - struct bkey_i_alloc_v4 *a; 2360 - u64 now; 2361 - int ret = 0; 2362 - 2363 - if (bch2_trans_relock(trans)) 2364 - bch2_trans_begin(trans); 2365 - 2366 - a = bch2_trans_start_alloc_update_noupdate(trans, &iter, POS(dev, bucket_nr)); 2367 - ret = PTR_ERR_OR_ZERO(a); 2359 + struct bkey_i_alloc_v4 *a = 2360 + bch2_trans_start_alloc_update_noupdate(trans, &iter, POS(dev, bucket_nr)); 2361 + int ret = PTR_ERR_OR_ZERO(a); 2368 2362 if (ret) 2369 2363 return ret; 2370 2364 2371 - now = bch2_current_io_time(c, rw); 2365 + u64 now = bch2_current_io_time(c, rw); 2372 2366 if (a->v.io_time[rw] == now) 2373 2367 goto out; 2374 2368 ··· 2374 2378 out: 2375 2379 bch2_trans_iter_exit(trans, &iter); 2376 2380 return ret; 2381 + } 2382 +
2383 + int bch2_bucket_io_time_reset(struct btree_trans *trans, unsigned dev, 2384 + size_t bucket_nr, int rw) 2385 + { 2386 + if (bch2_trans_relock(trans)) 2387 + bch2_trans_begin(trans); 2388 + 2389 + return nested_lockrestart_do(trans, __bch2_bucket_io_time_reset(trans, dev, bucket_nr, rw)); 2377 2390 } 2378 2391 2379 2392 /* Startup/shutdown (ro/rw): */
+1 -1
fs/bcachefs/alloc_foreground.c
··· 684 684 struct bch_dev_usage usage; 685 685 struct open_bucket *ob; 686 686 687 - bch2_trans_do(c, NULL, NULL, 0, 687 + bch2_trans_do(c, 688 688 PTR_ERR_OR_ZERO(ob = bch2_bucket_alloc_trans(trans, ca, watermark, 689 689 data_type, cl, false, &usage))); 690 690 return ob;
+11 -1
fs/bcachefs/btree_gc.c
··· 820 820 * fix that here: 821 821 */ 822 822 alloc_data_type_set(&gc, gc.data_type); 823 - 824 823 if (gc.data_type != old_gc.data_type || 825 824 gc.dirty_sectors != old_gc.dirty_sectors) { 826 825 ret = bch2_alloc_key_to_dev_counters(trans, ca, &old_gc, &gc, BTREE_TRIGGER_gc); 827 826 if (ret) 828 827 return ret; 828 + 829 + /* 830 + * Ugly: alloc_key_to_dev_counters(..., BTREE_TRIGGER_gc) is not 831 + * safe w.r.t. transaction restarts, so fixup the gc_bucket so 832 + * we don't run it twice: 833 + */ 834 + percpu_down_read(&c->mark_lock); 835 + struct bucket *gc_m = gc_bucket(ca, iter->pos.offset); 836 + gc_m->data_type = gc.data_type; 837 + gc_m->dirty_sectors = gc.dirty_sectors; 838 + percpu_up_read(&c->mark_lock); 829 839 } 830 840 831 841 if (fsck_err_on(new.data_type != gc.data_type,
+1 -1
fs/bcachefs/btree_io.c
··· 1871 1871 1872 1872 } 1873 1873 } else { 1874 - ret = bch2_trans_do(c, NULL, NULL, 0, 1874 + ret = bch2_trans_do(c, 1875 1875 bch2_btree_node_update_key_get_iter(trans, b, &wbio->key, 1876 1876 BCH_WATERMARK_interior_updates| 1877 1877 BCH_TRANS_COMMIT_journal_reclaim|
+2
fs/bcachefs/btree_iter.h
··· 912 912 _ret; \ 913 913 }) 914 914 915 + #define bch2_trans_do(_c, _do) bch2_trans_run(_c, lockrestart_do(trans, _do)) 916 + 915 917 struct btree_trans *__bch2_trans_get(struct bch_fs *, unsigned); 916 918 void bch2_trans_put(struct btree_trans *); 917 919
+2 -2
fs/bcachefs/btree_update.c
··· 668 668 struct disk_reservation *disk_res, int flags, 669 669 enum btree_iter_update_trigger_flags iter_flags) 670 670 { 671 - return bch2_trans_do(c, disk_res, NULL, flags, 671 + return bch2_trans_commit_do(c, disk_res, NULL, flags, 672 672 bch2_btree_insert_trans(trans, id, k, iter_flags)); 673 673 } 674 674 ··· 865 865 memcpy(l->d, buf.buf, buf.pos); 866 866 c->journal.early_journal_entries.nr += jset_u64s(u64s); 867 867 } else { 868 - ret = bch2_trans_do(c, NULL, NULL, 868 + ret = bch2_trans_commit_do(c, NULL, NULL, 869 869 BCH_TRANS_COMMIT_lazy_rw|commit_flags, 870 870 __bch2_trans_log_msg(trans, &buf, u64s)); 871 871 }
+1 -1
fs/bcachefs/btree_update.h
··· 192 192 nested_lockrestart_do(_trans, _do ?: bch2_trans_commit(_trans, (_disk_res),\ 193 193 (_journal_seq), (_flags))) 194 194 195 - #define bch2_trans_do(_c, _disk_res, _journal_seq, _flags, _do) \ 195 + #define bch2_trans_commit_do(_c, _disk_res, _journal_seq, _flags, _do) \ 196 196 bch2_trans_run(_c, commit_do(trans, _disk_res, _journal_seq, _flags, _do)) 197 197 198 198 #define trans_for_each_update(_trans, _i) \
+1 -3
fs/bcachefs/btree_update_interior.c
··· 2239 2239 struct async_btree_rewrite *a = 2240 2240 container_of(work, struct async_btree_rewrite, work); 2241 2241 struct bch_fs *c = a->c; 2242 - int ret; 2243 2242 2244 - ret = bch2_trans_do(c, NULL, NULL, 0, 2245 - async_btree_node_rewrite_trans(trans, a)); 2243 + int ret = bch2_trans_do(c, async_btree_node_rewrite_trans(trans, a)); 2246 2244 bch_err_fn_ratelimited(c, ret); 2247 2245 bch2_write_ref_put(c, BCH_WRITE_REF_node_rewrite); 2248 2246 kfree(a);
+5 -2
fs/bcachefs/buckets.c
··· 1160 1160 #define SECTORS_CACHE 1024 1161 1161 1162 1162 int __bch2_disk_reservation_add(struct bch_fs *c, struct disk_reservation *res, 1163 - u64 sectors, int flags) 1163 + u64 sectors, enum bch_reservation_flags flags) 1164 1164 { 1165 1165 struct bch_fs_pcpu *pcpu; 1166 1166 u64 old, get; 1167 - s64 sectors_available; 1167 + u64 sectors_available; 1168 1168 int ret; 1169 1169 1170 1170 percpu_down_read(&c->mark_lock); ··· 1201 1201 1202 1202 percpu_u64_set(&c->pcpu->sectors_available, 0); 1203 1203 sectors_available = avail_factor(__bch2_fs_usage_read_short(c).free); 1204 + 1205 + if (sectors_available && (flags & BCH_DISK_RESERVATION_PARTIAL)) 1206 + sectors = min(sectors, sectors_available); 1204 1207 1205 1208 if (sectors <= sectors_available || 1206 1209 (flags & BCH_DISK_RESERVATION_NOFAIL)) {
+7 -5
fs/bcachefs/buckets.h
··· 344 344 } 345 345 } 346 346 347 - #define BCH_DISK_RESERVATION_NOFAIL (1 << 0) 347 + enum bch_reservation_flags { 348 + BCH_DISK_RESERVATION_NOFAIL = 1 << 0, 349 + BCH_DISK_RESERVATION_PARTIAL = 1 << 1, 350 + }; 348 351 349 - int __bch2_disk_reservation_add(struct bch_fs *, 350 - struct disk_reservation *, 351 - u64, int); 352 + int __bch2_disk_reservation_add(struct bch_fs *, struct disk_reservation *, 353 + u64, enum bch_reservation_flags); 352 354 353 355 static inline int bch2_disk_reservation_add(struct bch_fs *c, struct disk_reservation *res, 354 - u64 sectors, int flags) 356 + u64 sectors, enum bch_reservation_flags flags) 355 357 { 356 358 #ifdef __KERNEL__ 357 359 u64 old, new;
+1
fs/bcachefs/chardev.c
··· 225 225 226 226 opt_set(thr->opts, stdio, (u64)(unsigned long)&thr->thr.stdio); 227 227 opt_set(thr->opts, read_only, 1); 228 + opt_set(thr->opts, ratelimit_errors, 0); 228 229 229 230 /* We need request_key() to be called before we punt to kthread: */ 230 231 opt_set(thr->opts, nostart, true);
+14 -1
fs/bcachefs/darray.c
··· 2 2 3 3 #include <linux/log2.h> 4 4 #include <linux/slab.h> 5 + #include <linux/vmalloc.h> 5 6 #include "darray.h" 6 7 7 8 int __bch2_darray_resize_noprof(darray_char *d, size_t element_size, size_t new_size, gfp_t gfp) ··· 10 9 if (new_size > d->size) { 11 10 new_size = roundup_pow_of_two(new_size); 12 11 13 - void *data = kvmalloc_array_noprof(new_size, element_size, gfp); 12 + /* 13 + * This is a workaround: kvmalloc() doesn't support > INT_MAX 14 + * allocations, but vmalloc() does. 15 + * The limit needs to be lifted from kvmalloc, and when it does 16 + * we'll go back to just using that. 17 + */ 18 + size_t bytes; 19 + if (unlikely(check_mul_overflow(new_size, element_size, &bytes))) 20 + return -ENOMEM; 21 + 22 + void *data = likely(bytes < INT_MAX) 23 + ? kvmalloc_noprof(bytes, gfp) 24 + : vmalloc_noprof(bytes); 14 25 if (!data) 15 26 return -ENOMEM; 16 27
-7
fs/bcachefs/dirent.c
··· 250 250 return ret; 251 251 } 252 252 253 - static void dirent_copy_target(struct bkey_i_dirent *dst, 254 - struct bkey_s_c_dirent src) 255 - { 256 - dst->v.d_inum = src.v->d_inum; 257 - dst->v.d_type = src.v->d_type; 258 - } 259 - 260 253 int bch2_dirent_read_target(struct btree_trans *trans, subvol_inum dir, 261 254 struct bkey_s_c_dirent d, subvol_inum *target) 262 255 {
+7
fs/bcachefs/dirent.h
··· 34 34 int bch2_dirent_read_target(struct btree_trans *, subvol_inum, 35 35 struct bkey_s_c_dirent, subvol_inum *); 36 36 37 + static inline void dirent_copy_target(struct bkey_i_dirent *dst, 38 + struct bkey_s_c_dirent src) 39 + { 40 + dst->v.d_inum = src.v->d_inum; 41 + dst->v.d_type = src.v->d_type; 42 + } 43 + 37 44 int bch2_dirent_create_snapshot(struct btree_trans *, u32, u64, u32, 38 45 const struct bch_hash_info *, u8, 39 46 const struct qstr *, u64, u64 *,
+4 -2
fs/bcachefs/disk_accounting.c
··· 856 856 }; 857 857 u64 v[3] = { ca->mi.nbuckets - ca->mi.first_bucket, 0, 0 }; 858 858 859 - int ret = bch2_trans_do(c, NULL, NULL, 0, 860 - bch2_disk_accounting_mod(trans, &acc, v, ARRAY_SIZE(v), gc)); 859 + int ret = bch2_trans_do(c, ({ 860 + bch2_disk_accounting_mod(trans, &acc, v, ARRAY_SIZE(v), gc) ?: 861 + (!gc ? bch2_trans_commit(trans, NULL, NULL, 0) : 0); 862 + })); 861 863 bch_err_fn(c, ret); 862 864 return ret; 863 865 }
+11 -11
fs/bcachefs/ec.c
··· 266 266 if (!deleting) { 267 267 a->stripe = s.k->p.offset; 268 268 a->stripe_redundancy = s.v->nr_redundant; 269 + alloc_data_type_set(a, data_type); 269 270 } else { 270 271 a->stripe = 0; 271 272 a->stripe_redundancy = 0; 273 + alloc_data_type_set(a, BCH_DATA_user); 272 274 } 273 - 274 - alloc_data_type_set(a, data_type); 275 275 err: 276 276 printbuf_exit(&buf); 277 277 return ret; ··· 1186 1186 if (!idx) 1187 1187 break; 1188 1188 1189 - int ret = bch2_trans_do(c, NULL, NULL, BCH_TRANS_COMMIT_no_enospc, 1189 + int ret = bch2_trans_commit_do(c, NULL, NULL, BCH_TRANS_COMMIT_no_enospc, 1190 1190 ec_stripe_delete(trans, idx)); 1191 1191 bch_err_fn(c, ret); 1192 1192 if (ret) ··· 1519 1519 goto err; 1520 1520 } 1521 1521 1522 - ret = bch2_trans_do(c, &s->res, NULL, 1523 - BCH_TRANS_COMMIT_no_check_rw| 1524 - BCH_TRANS_COMMIT_no_enospc, 1525 - ec_stripe_key_update(trans, 1526 - s->have_existing_stripe 1527 - ? bkey_i_to_stripe(&s->existing_stripe.key) 1528 - : NULL, 1529 - bkey_i_to_stripe(&s->new_stripe.key))); 1522 + ret = bch2_trans_commit_do(c, &s->res, NULL, 1523 + BCH_TRANS_COMMIT_no_check_rw| 1524 + BCH_TRANS_COMMIT_no_enospc, 1525 + ec_stripe_key_update(trans, 1526 + s->have_existing_stripe 1527 + ? bkey_i_to_stripe(&s->existing_stripe.key) 1528 + : NULL, 1529 + bkey_i_to_stripe(&s->new_stripe.key))); 1530 1530 bch_err_msg(c, ret, "creating stripe key"); 1531 1531 if (ret) { 1532 1532 goto err;
+4 -1
fs/bcachefs/error.c
··· 251 251 * delete the key) 252 252 * - and we don't need to warn if we're not prompting 253 253 */ 254 - WARN_ON(!(flags & FSCK_AUTOFIX) && !trans && bch2_current_has_btree_trans(c)); 254 + WARN_ON((flags & FSCK_CAN_FIX) && 255 + !(flags & FSCK_AUTOFIX) && 256 + !trans && 257 + bch2_current_has_btree_trans(c)); 255 258 256 259 if ((flags & FSCK_CAN_FIX) && 257 260 test_bit(err, c->sb.errors_silent))
+6
fs/bcachefs/fs-io-buffered.c
··· 856 856 folios_trunc(&fs, fi); 857 857 end = min(end, folio_end_pos(darray_last(fs))); 858 858 } else { 859 + if (!folio_test_uptodate(f)) { 860 + ret = bch2_read_single_folio(f, mapping); 861 + if (ret) 862 + goto out; 863 + } 864 + 859 865 folios_trunc(&fs, fi + 1); 860 866 end = f_pos + f_reserved; 861 867 }
+45 -25
fs/bcachefs/fs-io-pagecache.c
··· 399 399 bch2_quota_reservation_put(c, inode, &res->quota); 400 400 } 401 401 402 - int bch2_folio_reservation_get(struct bch_fs *c, 402 + static int __bch2_folio_reservation_get(struct bch_fs *c, 403 403 struct bch_inode_info *inode, 404 404 struct folio *folio, 405 405 struct bch2_folio_reservation *res, 406 - size_t offset, size_t len) 406 + size_t offset, size_t len, 407 + bool partial) 407 408 { 408 409 struct bch_folio *s = bch2_folio_create(folio, 0); 409 410 unsigned i, disk_sectors = 0, quota_sectors = 0; 411 + struct disk_reservation disk_res = {}; 412 + size_t reserved = len; 410 413 int ret; 411 414 412 415 if (!s) ··· 425 422 } 426 423 427 424 if (disk_sectors) { 428 - ret = bch2_disk_reservation_add(c, &res->disk, disk_sectors, 0); 425 + ret = bch2_disk_reservation_add(c, &disk_res, disk_sectors, 426 + partial ? BCH_DISK_RESERVATION_PARTIAL : 0); 429 427 if (unlikely(ret)) 430 428 return ret; 429 + 430 + if (unlikely(disk_res.sectors != disk_sectors)) { 431 + disk_sectors = quota_sectors = 0; 432 + 433 + for (i = round_down(offset, block_bytes(c)) >> 9; 434 + i < round_up(offset + len, block_bytes(c)) >> 9; 435 + i++) { 436 + disk_sectors += sectors_to_reserve(&s->s[i], res->disk.nr_replicas); 437 + if (disk_sectors > disk_res.sectors) { 438 + /* 439 + * Make sure to get a reservation that's 440 + * aligned to the filesystem blocksize: 441 + */ 442 + unsigned reserved_offset = round_down(i << 9, block_bytes(c)); 443 + reserved = clamp(reserved_offset, offset, offset + len) - offset; 444 + 445 + if (!reserved) { 446 + bch2_disk_reservation_put(c, &disk_res); 447 + return -BCH_ERR_ENOSPC_disk_reservation; 448 + } 449 + break; 450 + } 451 + quota_sectors += s->s[i].state == SECTOR_unallocated; 452 + } 453 + } 431 454 } 432 455 433 456 if (quota_sectors) { 434 457 ret = bch2_quota_reservation_add(c, inode, &res->quota, quota_sectors, true); 435 458 if (unlikely(ret)) { 436 - struct disk_reservation tmp = { .sectors = disk_sectors }; 437 -
438 - bch2_disk_reservation_put(c, &tmp); 439 - res->disk.sectors -= disk_sectors; 459 + bch2_disk_reservation_put(c, &disk_res); 440 460 return ret; 441 461 } 442 462 } 443 463 444 - return 0; 464 + res->disk.sectors += disk_res.sectors; 465 + return partial ? reserved : 0; 466 + } 467 + 468 + int bch2_folio_reservation_get(struct bch_fs *c, 469 + struct bch_inode_info *inode, 470 + struct folio *folio, 471 + struct bch2_folio_reservation *res, 472 + size_t offset, size_t len) 473 + { 474 + return __bch2_folio_reservation_get(c, inode, folio, res, offset, len, false); 445 475 } 446 476 447 477 ssize_t bch2_folio_reservation_get_partial(struct bch_fs *c, ··· 483 447 struct bch2_folio_reservation *res, 484 448 size_t offset, size_t len) 485 449 { 486 - size_t l, reserved = 0; 487 - int ret; 488 - 489 - while ((l = len - reserved)) { 490 - while ((ret = bch2_folio_reservation_get(c, inode, folio, res, offset, l))) { 491 - if ((offset & (block_bytes(c) - 1)) + l <= block_bytes(c)) 492 - return reserved ?: ret; 493 - 494 - len = reserved + l; 495 - l /= 2; 496 - } 497 - 498 - offset += l; 499 - reserved += l; 500 - } 501 - 502 - return reserved; 450 + return __bch2_folio_reservation_get(c, inode, folio, res, offset, len, true); 503 451 } 504 452 505 453 static void bch2_clear_folio_bits(struct folio *folio)
+1 -1
fs/bcachefs/fs-io.c
··· 182 182 183 183 struct bch_inode_unpacked u; 184 184 int ret = bch2_inode_find_by_inum(c, inode_inum(inode), &u) ?: 185 - bch2_journal_flush_seq(&c->journal, u.bi_journal_seq) ?: 185 + bch2_journal_flush_seq(&c->journal, u.bi_journal_seq, TASK_INTERRUPTIBLE) ?: 186 186 bch2_inode_flush_nocow_writes(c, inode); 187 187 bch2_write_ref_put(c, BCH_WRITE_REF_fsync); 188 188 return ret;
+7 -11
fs/bcachefs/fs.c
··· 656 656 struct bch_hash_info hash = bch2_hash_info_init(c, &dir->ei_inode); 657 657 658 658 struct bch_inode_info *inode; 659 - bch2_trans_do(c, NULL, NULL, 0, 659 + bch2_trans_do(c, 660 660 PTR_ERR_OR_ZERO(inode = bch2_lookup_trans(trans, inode_inum(dir), 661 661 &hash, &dentry->d_name))); 662 662 if (IS_ERR(inode)) ··· 869 869 ret = bch2_subvol_is_ro_trans(trans, src_dir->ei_inum.subvol) ?: 870 870 bch2_subvol_is_ro_trans(trans, dst_dir->ei_inum.subvol); 871 871 if (ret) 872 - goto err; 872 + goto err_tx_restart; 873 873 874 874 if (inode_attr_changing(dst_dir, src_inode, Inode_opt_project)) { 875 875 ret = bch2_fs_quota_transfer(c, src_inode, ··· 1266 1266 bch2_trans_iter_init(trans, &iter, BTREE_ID_extents, 1267 1267 POS(ei->v.i_ino, start), 0); 1268 1268 1269 - while (true) { 1269 + while (!ret || bch2_err_matches(ret, BCH_ERR_transaction_restart)) { 1270 1270 enum btree_id data_btree = BTREE_ID_extents; 1271 1271 1272 1272 bch2_trans_begin(trans); ··· 1274 1274 u32 snapshot; 1275 1275 ret = bch2_subvolume_get_snapshot(trans, ei->ei_inum.subvol, &snapshot); 1276 1276 if (ret) 1277 - goto err; 1277 + continue; 1278 1278 1279 1279 bch2_btree_iter_set_snapshot(&iter, snapshot); 1280 1280 1281 1281 k = bch2_btree_iter_peek_upto(&iter, end); 1282 1282 ret = bkey_err(k); 1283 1283 if (ret) 1284 - goto err; 1284 + continue; 1285 1285 1286 1286 if (!k.k) 1287 1287 break; ··· 1301 1301 ret = bch2_read_indirect_extent(trans, &data_btree, 1302 1302 &offset_into_extent, &cur); 1303 1303 if (ret) 1304 - break; 1304 + continue; 1305 1305 1306 1306 k = bkey_i_to_s_c(cur.k); 1307 1307 bch2_bkey_buf_realloc(&prev, c, k.k->u64s); ··· 1329 1329 1330 1330 bch2_btree_iter_set_pos(&iter, 1331 1331 POS(iter.pos.inode, iter.pos.offset + sectors)); 1332 - err: 1333 - if (ret && 1334 - !bch2_err_matches(ret, BCH_ERR_transaction_restart)) 1335 - break; 1336 1332 } 1337 1333 bch2_trans_iter_exit(trans, &iter); 1338 1334 ··· 2036 2040 bch2_opts_to_text(&buf, c->opts, c, c->disk_sb.sb, 
2037 2041 OPT_MOUNT, OPT_HIDDEN, OPT_SHOW_MOUNT_STYLE); 2038 2042 printbuf_nul_terminate(&buf); 2039 - seq_puts(seq, buf.buf); 2043 + seq_printf(seq, ",%s", buf.buf); 2040 2044 2041 2045 int ret = buf.allocation_failure ? -ENOMEM : 0; 2042 2046 printbuf_exit(&buf);
+223 -50
fs/bcachefs/fsck.c
··· 929 929 return ret; 930 930 } 931 931 932 - static int hash_redo_key(struct btree_trans *trans, 933 - const struct bch_hash_desc desc, 934 - struct bch_hash_info *hash_info, 935 - struct btree_iter *k_iter, struct bkey_s_c k) 932 + static int dirent_has_target(struct btree_trans *trans, struct bkey_s_c_dirent d) 936 933 { 937 - struct bkey_i *delete; 938 - struct bkey_i *tmp; 934 + if (d.v->d_type == DT_SUBVOL) { 935 + u32 snap; 936 + u64 inum; 937 + int ret = subvol_lookup(trans, le32_to_cpu(d.v->d_child_subvol), &snap, &inum); 938 + if (ret && !bch2_err_matches(ret, ENOENT)) 939 + return ret; 940 + return !ret; 941 + } else { 942 + struct btree_iter iter; 943 + struct bkey_s_c k = bch2_bkey_get_iter(trans, &iter, BTREE_ID_inodes, 944 + SPOS(0, le64_to_cpu(d.v->d_inum), d.k->p.snapshot), 0); 945 + int ret = bkey_err(k); 946 + if (ret) 947 + return ret; 939 948 940 - delete = bch2_trans_kmalloc(trans, sizeof(*delete)); 941 - if (IS_ERR(delete)) 942 - return PTR_ERR(delete); 949 + ret = bkey_is_inode(k.k); 950 + bch2_trans_iter_exit(trans, &iter); 951 + return ret; 952 + } 953 + } 943 954 944 - tmp = bch2_bkey_make_mut_noupdate(trans, k); 945 - if (IS_ERR(tmp)) 946 - return PTR_ERR(tmp); 955 + /* 956 + * Prefer to delete the first one, since that will be the one at the wrong 957 + * offset: 958 + * return value: 0 -> delete k1, 1 -> delete k2 959 + */ 960 + static int hash_pick_winner(struct btree_trans *trans, 961 + const struct bch_hash_desc desc, 962 + struct bch_hash_info *hash_info, 963 + struct bkey_s_c k1, 964 + struct bkey_s_c k2) 965 + { 966 + if (bkey_val_bytes(k1.k) == bkey_val_bytes(k2.k) && 967 + !memcmp(k1.v, k2.v, bkey_val_bytes(k1.k))) 968 + return 0; 947 969 948 - bkey_init(&delete->k); 949 - delete->k.p = k_iter->pos; 950 - return bch2_btree_iter_traverse(k_iter) ?: 951 - bch2_trans_update(trans, k_iter, delete, 0) ?: 952 - bch2_hash_set_in_snapshot(trans, desc, hash_info, 953 - (subvol_inum) { 0, k.k->p.inode }, 954 - k.k->p.snapshot, tmp, 955 
- STR_HASH_must_create| 956 - BTREE_UPDATE_internal_snapshot_node) ?: 957 - bch2_trans_commit(trans, NULL, NULL, BCH_TRANS_COMMIT_no_enospc); 970 + switch (desc.btree_id) { 971 + case BTREE_ID_dirents: { 972 + int ret = dirent_has_target(trans, bkey_s_c_to_dirent(k1)); 973 + if (ret < 0) 974 + return ret; 975 + if (!ret) 976 + return 0; 977 + 978 + ret = dirent_has_target(trans, bkey_s_c_to_dirent(k2)); 979 + if (ret < 0) 980 + return ret; 981 + if (!ret) 982 + return 1; 983 + return 2; 984 + } 985 + default: 986 + return 0; 987 + } 988 + } 989 + 990 + static int fsck_update_backpointers(struct btree_trans *trans, 991 + struct snapshots_seen *s, 992 + const struct bch_hash_desc desc, 993 + struct bch_hash_info *hash_info, 994 + struct bkey_i *new) 995 + { 996 + if (new->k.type != KEY_TYPE_dirent) 997 + return 0; 998 + 999 + struct bkey_i_dirent *d = bkey_i_to_dirent(new); 1000 + struct inode_walker target = inode_walker_init(); 1001 + int ret = 0; 1002 + 1003 + if (d->v.d_type == DT_SUBVOL) { 1004 + BUG(); 1005 + } else { 1006 + ret = get_visible_inodes(trans, &target, s, le64_to_cpu(d->v.d_inum)); 1007 + if (ret) 1008 + goto err; 1009 + 1010 + darray_for_each(target.inodes, i) { 1011 + i->inode.bi_dir_offset = d->k.p.offset; 1012 + ret = __bch2_fsck_write_inode(trans, &i->inode); 1013 + if (ret) 1014 + goto err; 1015 + } 1016 + } 1017 + err: 1018 + inode_walker_exit(&target); 1019 + return ret; 1020 + } 1021 + 1022 + static int fsck_rename_dirent(struct btree_trans *trans, 1023 + struct snapshots_seen *s, 1024 + const struct bch_hash_desc desc, 1025 + struct bch_hash_info *hash_info, 1026 + struct bkey_s_c_dirent old) 1027 + { 1028 + struct qstr old_name = bch2_dirent_get_name(old); 1029 + struct bkey_i_dirent *new = bch2_trans_kmalloc(trans, bkey_bytes(old.k) + 32); 1030 + int ret = PTR_ERR_OR_ZERO(new); 1031 + if (ret) 1032 + return ret; 1033 + 1034 + bkey_dirent_init(&new->k_i); 1035 + dirent_copy_target(new, old); 1036 + new->k.p = old.k->p; 1037 + 1038 + for 
(unsigned i = 0; i < 1000; i++) { 1039 + unsigned len = sprintf(new->v.d_name, "%.*s.fsck_renamed-%u", 1040 + old_name.len, old_name.name, i); 1041 + unsigned u64s = BKEY_U64s + dirent_val_u64s(len); 1042 + 1043 + if (u64s > U8_MAX) 1044 + return -EINVAL; 1045 + 1046 + new->k.u64s = u64s; 1047 + 1048 + ret = bch2_hash_set_in_snapshot(trans, bch2_dirent_hash_desc, hash_info, 1049 + (subvol_inum) { 0, old.k->p.inode }, 1050 + old.k->p.snapshot, &new->k_i, 1051 + BTREE_UPDATE_internal_snapshot_node); 1052 + if (!bch2_err_matches(ret, EEXIST)) 1053 + break; 1054 + } 1055 + 1056 + if (ret) 1057 + return ret; 1058 + 1059 + return fsck_update_backpointers(trans, s, desc, hash_info, &new->k_i); 958 1060 } 959 1061 960 1062 static int hash_check_key(struct btree_trans *trans, 1063 + struct snapshots_seen *s, 961 1064 const struct bch_hash_desc desc, 962 1065 struct bch_hash_info *hash_info, 963 1066 struct btree_iter *k_iter, struct bkey_s_c hash_k) ··· 1089 986 if (bkey_eq(k.k->p, hash_k.k->p)) 1090 987 break; 1091 988 1092 - if (fsck_err_on(k.k->type == desc.key_type && 1093 - !desc.cmp_bkey(k, hash_k), 1094 - trans, hash_table_key_duplicate, 1095 - "duplicate hash table keys:\n%s", 1096 - (printbuf_reset(&buf), 1097 - bch2_bkey_val_to_text(&buf, c, hash_k), 1098 - buf.buf))) { 1099 - ret = bch2_hash_delete_at(trans, desc, hash_info, k_iter, 0) ?: 1; 1100 - break; 1101 - } 989 + if (k.k->type == desc.key_type && 990 + !desc.cmp_bkey(k, hash_k)) 991 + goto duplicate_entries; 1102 992 1103 993 if (bkey_deleted(k.k)) { 1104 994 bch2_trans_iter_exit(trans, &iter); ··· 1104 1008 return ret; 1105 1009 bad_hash: 1106 1010 if (fsck_err(trans, hash_table_key_wrong_offset, 1107 - "hash table key at wrong offset: btree %s inode %llu offset %llu, hashed to %llu\n%s", 1011 + "hash table key at wrong offset: btree %s inode %llu offset %llu, hashed to %llu\n %s", 1108 1012 bch2_btree_id_str(desc.btree_id), hash_k.k->p.inode, hash_k.k->p.offset, hash, 1109 1013 (printbuf_reset(&buf), 
1110 1014 bch2_bkey_val_to_text(&buf, c, hash_k), buf.buf))) { 1111 - ret = hash_redo_key(trans, desc, hash_info, k_iter, hash_k); 1112 - bch_err_fn(c, ret); 1015 + struct bkey_i *new = bch2_bkey_make_mut_noupdate(trans, hash_k); 1016 + if (IS_ERR(new)) 1017 + return PTR_ERR(new); 1018 + 1019 + k = bch2_hash_set_or_get_in_snapshot(trans, &iter, desc, hash_info, 1020 + (subvol_inum) { 0, hash_k.k->p.inode }, 1021 + hash_k.k->p.snapshot, new, 1022 + STR_HASH_must_create| 1023 + BTREE_ITER_with_updates| 1024 + BTREE_UPDATE_internal_snapshot_node); 1025 + ret = bkey_err(k); 1113 1026 if (ret) 1114 - return ret; 1115 - ret = -BCH_ERR_transaction_restart_nested; 1027 + goto out; 1028 + if (k.k) 1029 + goto duplicate_entries; 1030 + 1031 + ret = bch2_hash_delete_at(trans, desc, hash_info, k_iter, 1032 + BTREE_UPDATE_internal_snapshot_node) ?: 1033 + fsck_update_backpointers(trans, s, desc, hash_info, new) ?: 1034 + bch2_trans_commit(trans, NULL, NULL, BCH_TRANS_COMMIT_no_enospc) ?: 1035 + -BCH_ERR_transaction_restart_nested; 1036 + goto out; 1116 1037 } 1117 1038 fsck_err: 1039 + goto out; 1040 + duplicate_entries: 1041 + ret = hash_pick_winner(trans, desc, hash_info, hash_k, k); 1042 + if (ret < 0) 1043 + goto out; 1044 + 1045 + if (!fsck_err(trans, hash_table_key_duplicate, 1046 + "duplicate hash table keys%s:\n%s", 1047 + ret != 2 ? 
"" : ", both point to valid inodes", 1048 + (printbuf_reset(&buf), 1049 + bch2_bkey_val_to_text(&buf, c, hash_k), 1050 + prt_newline(&buf), 1051 + bch2_bkey_val_to_text(&buf, c, k), 1052 + buf.buf))) 1053 + goto out; 1054 + 1055 + switch (ret) { 1056 + case 0: 1057 + ret = bch2_hash_delete_at(trans, desc, hash_info, k_iter, 0); 1058 + break; 1059 + case 1: 1060 + ret = bch2_hash_delete_at(trans, desc, hash_info, &iter, 0); 1061 + break; 1062 + case 2: 1063 + ret = fsck_rename_dirent(trans, s, desc, hash_info, bkey_s_c_to_dirent(hash_k)) ?: 1064 + bch2_hash_delete_at(trans, desc, hash_info, k_iter, 0); 1065 + goto out; 1066 + } 1067 + 1068 + ret = bch2_trans_commit(trans, NULL, NULL, 0) ?: 1069 + -BCH_ERR_transaction_restart_nested; 1118 1070 goto out; 1119 1071 } 1120 1072 ··· 1240 1096 return ret; 1241 1097 } 1242 1098 1099 + static int get_snapshot_root_inode(struct btree_trans *trans, 1100 + struct bch_inode_unpacked *root, 1101 + u64 inum) 1102 + { 1103 + struct btree_iter iter; 1104 + struct bkey_s_c k; 1105 + int ret = 0; 1106 + 1107 + for_each_btree_key_reverse_norestart(trans, iter, BTREE_ID_inodes, 1108 + SPOS(0, inum, U32_MAX), 1109 + BTREE_ITER_all_snapshots, k, ret) { 1110 + if (k.k->p.offset != inum) 1111 + break; 1112 + if (bkey_is_inode(k.k)) 1113 + goto found_root; 1114 + } 1115 + if (ret) 1116 + goto err; 1117 + BUG(); 1118 + found_root: 1119 + BUG_ON(bch2_inode_unpack(k, root)); 1120 + err: 1121 + bch2_trans_iter_exit(trans, &iter); 1122 + return ret; 1123 + } 1124 + 1243 1125 static int check_inode(struct btree_trans *trans, 1244 1126 struct btree_iter *iter, 1245 1127 struct bkey_s_c k, 1246 - struct bch_inode_unpacked *prev, 1128 + struct bch_inode_unpacked *snapshot_root, 1247 1129 struct snapshots_seen *s) 1248 1130 { 1249 1131 struct bch_fs *c = trans->c; ··· 1293 1123 1294 1124 BUG_ON(bch2_inode_unpack(k, &u)); 1295 1125 1296 - if (prev->bi_inum != u.bi_inum) 1297 - *prev = u; 1126 + if (snapshot_root->bi_inum != u.bi_inum) { 1127 + ret = 
get_snapshot_root_inode(trans, snapshot_root, u.bi_inum); 1128 + if (ret) 1129 + goto err; 1130 + } 1298 1131 1299 - if (fsck_err_on(prev->bi_hash_seed != u.bi_hash_seed || 1300 - inode_d_type(prev) != inode_d_type(&u), 1132 + if (fsck_err_on(u.bi_hash_seed != snapshot_root->bi_hash_seed || 1133 + INODE_STR_HASH(&u) != INODE_STR_HASH(snapshot_root), 1301 1134 trans, inode_snapshot_mismatch, 1302 1135 "inodes in different snapshots don't match")) { 1303 - bch_err(c, "repair not implemented yet"); 1304 - ret = -BCH_ERR_fsck_repair_unimplemented; 1305 - goto err_noprint; 1136 + u.bi_hash_seed = snapshot_root->bi_hash_seed; 1137 + SET_INODE_STR_HASH(&u, INODE_STR_HASH(snapshot_root)); 1138 + do_update = true; 1306 1139 } 1307 1140 1308 1141 if (u.bi_dir || u.bi_dir_offset) { ··· 1458 1285 1459 1286 int bch2_check_inodes(struct bch_fs *c) 1460 1287 { 1461 - struct bch_inode_unpacked prev = { 0 }; 1288 + struct bch_inode_unpacked snapshot_root = {}; 1462 1289 struct snapshots_seen s; 1463 1290 1464 1291 snapshots_seen_init(&s); ··· 1468 1295 POS_MIN, 1469 1296 BTREE_ITER_prefetch|BTREE_ITER_all_snapshots, k, 1470 1297 NULL, NULL, BCH_TRANS_COMMIT_no_enospc, 1471 - check_inode(trans, &iter, k, &prev, &s))); 1298 + check_inode(trans, &iter, k, &snapshot_root, &s))); 1472 1299 1473 1300 snapshots_seen_exit(&s); 1474 1301 bch_err_fn(c, ret); ··· 2480 2307 *hash_info = bch2_hash_info_init(c, &i->inode); 2481 2308 dir->first_this_inode = false; 2482 2309 2483 - ret = hash_check_key(trans, bch2_dirent_hash_desc, hash_info, iter, k); 2310 + ret = hash_check_key(trans, s, bch2_dirent_hash_desc, hash_info, iter, k); 2484 2311 if (ret < 0) 2485 2312 goto err; 2486 2313 if (ret) { ··· 2594 2421 *hash_info = bch2_hash_info_init(c, &i->inode); 2595 2422 inode->first_this_inode = false; 2596 2423 2597 - ret = hash_check_key(trans, bch2_xattr_hash_desc, hash_info, iter, k); 2424 + ret = hash_check_key(trans, NULL, bch2_xattr_hash_desc, hash_info, iter, k); 2598 2425 bch_err_fn(c, ret); 
2599 2426 return ret; 2600 2427 } ··· 2682 2509 /* Get root directory, create if it doesn't exist: */ 2683 2510 int bch2_check_root(struct bch_fs *c) 2684 2511 { 2685 - int ret = bch2_trans_do(c, NULL, NULL, BCH_TRANS_COMMIT_no_enospc, 2512 + int ret = bch2_trans_commit_do(c, NULL, NULL, BCH_TRANS_COMMIT_no_enospc, 2686 2513 check_root_trans(trans)); 2687 2514 bch_err_fn(c, ret); 2688 2515 return ret;
+15 -12
fs/bcachefs/inode.c
··· 163 163 unsigned fieldnr = 0, field_bits; 164 164 int ret; 165 165 166 - #define x(_name, _bits) \ 167 - if (fieldnr++ == INODE_NR_FIELDS(inode.v)) { \ 166 + #define x(_name, _bits) \ 167 + if (fieldnr++ == INODEv1_NR_FIELDS(inode.v)) { \ 168 168 unsigned offset = offsetof(struct bch_inode_unpacked, _name);\ 169 169 memset((void *) unpacked + offset, 0, \ 170 170 sizeof(*unpacked) - offset); \ ··· 283 283 { 284 284 memset(unpacked, 0, sizeof(*unpacked)); 285 285 286 + unpacked->bi_snapshot = k.k->p.snapshot; 287 + 286 288 switch (k.k->type) { 287 289 case KEY_TYPE_inode: { 288 290 struct bkey_s_c_inode inode = bkey_s_c_to_inode(k); ··· 295 293 unpacked->bi_flags = le32_to_cpu(inode.v->bi_flags); 296 294 unpacked->bi_mode = le16_to_cpu(inode.v->bi_mode); 297 295 298 - if (INODE_NEW_VARINT(inode.v)) { 296 + if (INODEv1_NEW_VARINT(inode.v)) { 299 297 return bch2_inode_unpack_v2(unpacked, inode.v->fields, 300 298 bkey_val_end(inode), 301 - INODE_NR_FIELDS(inode.v)); 299 + INODEv1_NR_FIELDS(inode.v)); 302 300 } else { 303 301 return bch2_inode_unpack_v1(inode, unpacked); 304 302 } ··· 473 471 struct bkey_s_c_inode inode = bkey_s_c_to_inode(k); 474 472 int ret = 0; 475 473 476 - bkey_fsck_err_on(INODE_STR_HASH(inode.v) >= BCH_STR_HASH_NR, 474 + bkey_fsck_err_on(INODEv1_STR_HASH(inode.v) >= BCH_STR_HASH_NR, 477 475 c, inode_str_hash_invalid, 478 476 "invalid str hash type (%llu >= %u)", 479 - INODE_STR_HASH(inode.v), BCH_STR_HASH_NR); 477 + INODEv1_STR_HASH(inode.v), BCH_STR_HASH_NR); 480 478 481 479 ret = __bch2_inode_validate(c, k, flags); 482 480 fsck_err: ··· 535 533 prt_printf(out, "(%x)\n", inode->bi_flags); 536 534 537 535 prt_printf(out, "journal_seq=%llu\n", inode->bi_journal_seq); 536 + prt_printf(out, "hash_seed=%llx\n", inode->bi_hash_seed); 537 + prt_printf(out, "hash_type="); 538 + bch2_prt_str_hash_type(out, INODE_STR_HASH(inode)); 539 + prt_newline(out); 538 540 prt_printf(out, "bi_size=%llu\n", inode->bi_size); 539 541 prt_printf(out, 
"bi_sectors=%llu\n", inode->bi_sectors); 540 542 prt_printf(out, "bi_version=%llu\n", inode->bi_version); ··· 806 800 807 801 memset(inode_u, 0, sizeof(*inode_u)); 808 802 809 - /* ick */ 810 - inode_u->bi_flags |= str_hash << INODE_STR_HASH_OFFSET; 811 - get_random_bytes(&inode_u->bi_hash_seed, 812 - sizeof(inode_u->bi_hash_seed)); 803 + SET_INODE_STR_HASH(inode_u, str_hash); 804 + get_random_bytes(&inode_u->bi_hash_seed, sizeof(inode_u->bi_hash_seed)); 813 805 } 814 806 815 807 void bch2_inode_init_late(struct bch_inode_unpacked *inode_u, u64 now, ··· 1091 1087 int bch2_inode_find_by_inum(struct bch_fs *c, subvol_inum inum, 1092 1088 struct bch_inode_unpacked *inode) 1093 1089 { 1094 - return bch2_trans_do(c, NULL, NULL, 0, 1095 - bch2_inode_find_by_inum_trans(trans, inum, inode)); 1090 + return bch2_trans_do(c, bch2_inode_find_by_inum_trans(trans, inum, inode)); 1096 1091 } 1097 1092 1098 1093 int bch2_inode_nlink_inc(struct bch_inode_unpacked *bi)
+1
fs/bcachefs/inode.h
··· 92 92 BCH_INODE_FIELDS_v3() 93 93 #undef x 94 94 }; 95 + BITMASK(INODE_STR_HASH, struct bch_inode_unpacked, bi_flags, 20, 24); 95 96 96 97 struct bkey_inode_buf { 97 98 struct bkey_i_inode_v3 inode;
+3 -3
fs/bcachefs/inode_format.h
··· 150 150 #undef x 151 151 }; 152 152 153 - LE32_BITMASK(INODE_STR_HASH, struct bch_inode, bi_flags, 20, 24); 154 - LE32_BITMASK(INODE_NR_FIELDS, struct bch_inode, bi_flags, 24, 31); 155 - LE32_BITMASK(INODE_NEW_VARINT, struct bch_inode, bi_flags, 31, 32); 153 + LE32_BITMASK(INODEv1_STR_HASH, struct bch_inode, bi_flags, 20, 24); 154 + LE32_BITMASK(INODEv1_NR_FIELDS, struct bch_inode, bi_flags, 24, 31); 155 + LE32_BITMASK(INODEv1_NEW_VARINT,struct bch_inode, bi_flags, 31, 32); 156 156 157 157 LE64_BITMASK(INODEv2_STR_HASH, struct bch_inode_v2, bi_flags, 20, 24); 158 158 LE64_BITMASK(INODEv2_NR_FIELDS, struct bch_inode_v2, bi_flags, 24, 31);
+1 -1
fs/bcachefs/io_misc.c
··· 377 377 * check for missing subvolume before fpunch, as in resume we don't want 378 378 * it to be a fatal error 379 379 */ 380 - ret = __bch2_subvolume_get_snapshot(trans, inum.subvol, &snapshot, warn_errors); 380 + ret = lockrestart_do(trans, __bch2_subvolume_get_snapshot(trans, inum.subvol, &snapshot, warn_errors)); 381 381 if (ret) 382 382 return ret; 383 383
+4 -4
fs/bcachefs/io_read.c
··· 409 409 bch2_trans_begin(trans); 410 410 rbio->bio.bi_status = 0; 411 411 412 - k = bch2_btree_iter_peek_slot(&iter); 413 - if (bkey_err(k)) 412 + ret = lockrestart_do(trans, bkey_err(k = bch2_btree_iter_peek_slot(&iter))); 413 + if (ret) 414 414 goto err; 415 415 416 416 bch2_bkey_buf_reassemble(&sk, c, k); ··· 557 557 558 558 static noinline void bch2_rbio_narrow_crcs(struct bch_read_bio *rbio) 559 559 { 560 - bch2_trans_do(rbio->c, NULL, NULL, BCH_TRANS_COMMIT_no_enospc, 561 - __bch2_rbio_narrow_crcs(trans, rbio)); 560 + bch2_trans_commit_do(rbio->c, NULL, NULL, BCH_TRANS_COMMIT_no_enospc, 561 + __bch2_rbio_narrow_crcs(trans, rbio)); 562 562 } 563 563 564 564 /* Inner part that may run in process context */
+2 -2
fs/bcachefs/io_write.c
··· 1437 1437 * freeing up space on specific disks, which means that 1438 1438 * allocations for specific disks may hang arbitrarily long: 1439 1439 */ 1440 - ret = bch2_trans_do(c, NULL, NULL, 0, 1440 + ret = bch2_trans_run(c, lockrestart_do(trans, 1441 1441 bch2_alloc_sectors_start_trans(trans, 1442 1442 op->target, 1443 1443 op->opts.erasure_code && !(op->flags & BCH_WRITE_CACHED), ··· 1447 1447 op->nr_replicas_required, 1448 1448 op->watermark, 1449 1449 op->flags, 1450 - &op->cl, &wp)); 1450 + &op->cl, &wp))); 1451 1451 if (unlikely(ret)) { 1452 1452 if (bch2_err_matches(ret, BCH_ERR_operation_blocked)) 1453 1453 break;
+6 -4
fs/bcachefs/journal.c
··· 758 758 return ret; 759 759 } 760 760 761 - int bch2_journal_flush_seq(struct journal *j, u64 seq) 761 + int bch2_journal_flush_seq(struct journal *j, u64 seq, unsigned task_state) 762 762 { 763 763 u64 start_time = local_clock(); 764 764 int ret, ret2; ··· 769 769 if (seq <= j->flushed_seq_ondisk) 770 770 return 0; 771 771 772 - ret = wait_event_interruptible(j->wait, (ret2 = bch2_journal_flush_seq_async(j, seq, NULL))); 772 + ret = wait_event_state(j->wait, 773 + (ret2 = bch2_journal_flush_seq_async(j, seq, NULL)), 774 + task_state); 773 775 774 776 if (!ret) 775 777 bch2_time_stats_update(j->flush_seq_time, start_time); ··· 790 788 791 789 int bch2_journal_flush(struct journal *j) 792 790 { 793 - return bch2_journal_flush_seq(j, atomic64_read(&j->seq)); 791 + return bch2_journal_flush_seq(j, atomic64_read(&j->seq), TASK_UNINTERRUPTIBLE); 794 792 } 795 793 796 794 /* ··· 853 851 854 852 bch2_journal_res_put(j, &res); 855 853 856 - return bch2_journal_flush_seq(j, res.seq); 854 + return bch2_journal_flush_seq(j, res.seq, TASK_UNINTERRUPTIBLE); 857 855 } 858 856 859 857 /* block/unlock the journal: */
+1 -1
fs/bcachefs/journal.h
··· 401 401 int bch2_journal_flush_seq_async(struct journal *, u64, struct closure *); 402 402 void bch2_journal_flush_async(struct journal *, struct closure *); 403 403 404 - int bch2_journal_flush_seq(struct journal *, u64); 404 + int bch2_journal_flush_seq(struct journal *, u64, unsigned); 405 405 int bch2_journal_flush(struct journal *); 406 406 bool bch2_journal_noflush_seq(struct journal *, u64); 407 407 int bch2_journal_meta(struct journal *);
+5 -1
fs/bcachefs/opts.c
··· 63 63 NULL 64 64 }; 65 65 66 - const char * const bch2_str_hash_types[] = { 66 + const char * const __bch2_str_hash_types[] = { 67 67 BCH_STR_HASH_TYPES() 68 68 NULL 69 69 }; ··· 115 115 PRT_STR_OPT_BOUNDSCHECKED(data_type, enum bch_data_type); 116 116 PRT_STR_OPT_BOUNDSCHECKED(csum_type, enum bch_csum_type); 117 117 PRT_STR_OPT_BOUNDSCHECKED(compression_type, enum bch_compression_type); 118 + PRT_STR_OPT_BOUNDSCHECKED(str_hash_type, enum bch_str_hash_type); 118 119 119 120 static int bch2_opt_fix_errors_parse(struct bch_fs *c, const char *val, u64 *res, 120 121 struct printbuf *err) ··· 597 596 copied_opts_start = copied_opts; 598 597 599 598 while ((opt = strsep(&copied_opts, ",")) != NULL) { 599 + if (!*opt) 600 + continue; 601 + 600 602 name = strsep(&opt, "="); 601 603 val = opt; 602 604
+2 -1
fs/bcachefs/opts.h
··· 18 18 extern const char * const __bch2_btree_ids[]; 19 19 extern const char * const bch2_csum_opts[]; 20 20 extern const char * const bch2_compression_opts[]; 21 - extern const char * const bch2_str_hash_types[]; 21 + extern const char * const __bch2_str_hash_types[]; 22 22 extern const char * const bch2_str_hash_opts[]; 23 23 extern const char * const __bch2_data_types[]; 24 24 extern const char * const bch2_member_states[]; ··· 29 29 void bch2_prt_data_type(struct printbuf *, enum bch_data_type); 30 30 void bch2_prt_csum_type(struct printbuf *, enum bch_csum_type); 31 31 void bch2_prt_compression_type(struct printbuf *, enum bch_compression_type); 32 + void bch2_prt_str_hash_type(struct printbuf *, enum bch_str_hash_type); 32 33 33 34 static inline const char *bch2_d_type_str(unsigned d_type) 34 35 {
+1 -1
fs/bcachefs/quota.c
··· 869 869 bkey_quota_init(&new_quota.k_i); 870 870 new_quota.k.p = POS(qid.type, from_kqid(&init_user_ns, qid)); 871 871 872 - ret = bch2_trans_do(c, NULL, NULL, 0, 872 + ret = bch2_trans_commit_do(c, NULL, NULL, 0, 873 873 bch2_set_quota_trans(trans, &new_quota, qdq)) ?: 874 874 __bch2_quota_set(c, bkey_i_to_s_c(&new_quota.k_i), qdq); 875 875
+3 -1
fs/bcachefs/rebalance.c
··· 70 70 71 71 int bch2_set_rebalance_needs_scan(struct bch_fs *c, u64 inum) 72 72 { 73 - int ret = bch2_trans_do(c, NULL, NULL, BCH_TRANS_COMMIT_no_enospc|BCH_TRANS_COMMIT_lazy_rw, 73 + int ret = bch2_trans_commit_do(c, NULL, NULL, 74 + BCH_TRANS_COMMIT_no_enospc| 75 + BCH_TRANS_COMMIT_lazy_rw, 74 76 __bch2_set_rebalance_needs_scan(trans, inum)); 75 77 rebalance_wakeup(c); 76 78 return ret;
+1 -1
fs/bcachefs/recovery.c
··· 1091 1091 1092 1092 bch2_inode_init_early(c, &lostfound_inode); 1093 1093 1094 - ret = bch2_trans_do(c, NULL, NULL, 0, 1094 + ret = bch2_trans_commit_do(c, NULL, NULL, 0, 1095 1095 bch2_create_trans(trans, 1096 1096 BCACHEFS_ROOT_SUBVOL_INUM, 1097 1097 &root_inode, &lostfound_inode,
+2 -2
fs/bcachefs/sb-errors_format.h
··· 267 267 x(journal_entry_dup_same_device, 246, 0) \ 268 268 x(inode_bi_subvol_missing, 247, 0) \ 269 269 x(inode_bi_subvol_wrong, 248, 0) \ 270 - x(inode_points_to_missing_dirent, 249, 0) \ 271 - x(inode_points_to_wrong_dirent, 250, 0) \ 270 + x(inode_points_to_missing_dirent, 249, FSCK_AUTOFIX) \ 271 + x(inode_points_to_wrong_dirent, 250, FSCK_AUTOFIX) \ 272 272 x(inode_bi_parent_nonzero, 251, 0) \ 273 273 x(dirent_to_missing_parent_subvol, 252, 0) \ 274 274 x(dirent_not_visible_in_parent_subvol, 253, 0) \
+42 -18
fs/bcachefs/str_hash.h
··· 46 46 { 47 47 /* XXX ick */ 48 48 struct bch_hash_info info = { 49 - .type = (bi->bi_flags >> INODE_STR_HASH_OFFSET) & 50 - ~(~0U << INODE_STR_HASH_BITS), 49 + .type = INODE_STR_HASH(bi), 51 50 .siphash_key = { .k0 = bi->bi_hash_seed } 52 51 }; 53 52 ··· 252 253 } 253 254 254 255 static __always_inline 255 - int bch2_hash_set_in_snapshot(struct btree_trans *trans, 256 + struct bkey_s_c bch2_hash_set_or_get_in_snapshot(struct btree_trans *trans, 257 + struct btree_iter *iter, 256 258 const struct bch_hash_desc desc, 257 259 const struct bch_hash_info *info, 258 260 subvol_inum inum, u32 snapshot, 259 261 struct bkey_i *insert, 260 262 enum btree_iter_update_trigger_flags flags) 261 263 { 262 - struct btree_iter iter, slot = { NULL }; 264 + struct btree_iter slot = {}; 263 265 struct bkey_s_c k; 264 266 bool found = false; 265 267 int ret; 266 268 267 - for_each_btree_key_upto_norestart(trans, iter, desc.btree_id, 269 + for_each_btree_key_upto_norestart(trans, *iter, desc.btree_id, 268 270 SPOS(insert->k.p.inode, 269 271 desc.hash_bkey(info, bkey_i_to_s_c(insert)), 270 272 snapshot), ··· 280 280 } 281 281 282 282 if (!slot.path && !(flags & STR_HASH_must_replace)) 283 - bch2_trans_copy_iter(&slot, &iter); 283 + bch2_trans_copy_iter(&slot, iter); 284 284 285 285 if (k.k->type != KEY_TYPE_hash_whiteout) 286 286 goto not_found; ··· 290 290 ret = -BCH_ERR_ENOSPC_str_hash_create; 291 291 out: 292 292 bch2_trans_iter_exit(trans, &slot); 293 - bch2_trans_iter_exit(trans, &iter); 294 - 295 - return ret; 293 + bch2_trans_iter_exit(trans, iter); 294 + return ret ? 
bkey_s_c_err(ret) : bkey_s_c_null; 296 295 found: 297 296 found = true; 298 297 not_found: 299 - 300 - if (!found && (flags & STR_HASH_must_replace)) { 298 + if (found && (flags & STR_HASH_must_create)) { 299 + bch2_trans_iter_exit(trans, &slot); 300 + return k; 301 + } else if (!found && (flags & STR_HASH_must_replace)) { 301 302 ret = -BCH_ERR_ENOENT_str_hash_set_must_replace; 302 - } else if (found && (flags & STR_HASH_must_create)) { 303 - ret = -BCH_ERR_EEXIST_str_hash_set; 304 303 } else { 305 304 if (!found && slot.path) 306 - swap(iter, slot); 305 + swap(*iter, slot); 307 306 308 - insert->k.p = iter.pos; 309 - ret = bch2_trans_update(trans, &iter, insert, flags); 307 + insert->k.p = iter->pos; 308 + ret = bch2_trans_update(trans, iter, insert, flags); 310 309 } 311 310 312 311 goto out; 312 + } 313 + 314 + static __always_inline 315 + int bch2_hash_set_in_snapshot(struct btree_trans *trans, 316 + const struct bch_hash_desc desc, 317 + const struct bch_hash_info *info, 318 + subvol_inum inum, u32 snapshot, 319 + struct bkey_i *insert, 320 + enum btree_iter_update_trigger_flags flags) 321 + { 322 + struct btree_iter iter; 323 + struct bkey_s_c k = bch2_hash_set_or_get_in_snapshot(trans, &iter, desc, info, inum, 324 + snapshot, insert, flags); 325 + int ret = bkey_err(k); 326 + if (ret) 327 + return ret; 328 + if (k.k) { 329 + bch2_trans_iter_exit(trans, &iter); 330 + return -BCH_ERR_EEXIST_str_hash_set; 331 + } 332 + 333 + return 0; 313 334 } 314 335 315 336 static __always_inline ··· 384 363 struct btree_iter iter; 385 364 struct bkey_s_c k = bch2_hash_lookup(trans, &iter, desc, info, inum, key, 386 365 BTREE_ITER_intent); 387 - int ret = bkey_err(k) ?: 388 - bch2_hash_delete_at(trans, desc, info, &iter, 0); 366 + int ret = bkey_err(k); 367 + if (ret) 368 + return ret; 369 + 370 + ret = bch2_hash_delete_at(trans, desc, info, &iter, 0); 389 371 bch2_trans_iter_exit(trans, &iter); 390 372 return ret; 391 373 }
+3 -4
fs/bcachefs/subvolume.c
··· 319 319 320 320 int bch2_subvol_is_ro(struct bch_fs *c, u32 subvol) 321 321 { 322 - return bch2_trans_do(c, NULL, NULL, 0, 323 - bch2_subvol_is_ro_trans(trans, subvol)); 322 + return bch2_trans_do(c, bch2_subvol_is_ro_trans(trans, subvol)); 324 323 } 325 324 326 325 int bch2_snapshot_get_subvol(struct btree_trans *trans, u32 snapshot, ··· 675 676 /* set bi_subvol on root inode */ 676 677 int bch2_fs_upgrade_for_subvolumes(struct bch_fs *c) 677 678 { 678 - int ret = bch2_trans_do(c, NULL, NULL, BCH_TRANS_COMMIT_lazy_rw, 679 - __bch2_fs_upgrade_for_subvolumes(trans)); 679 + int ret = bch2_trans_commit_do(c, NULL, NULL, BCH_TRANS_COMMIT_lazy_rw, 680 + __bch2_fs_upgrade_for_subvolumes(trans)); 680 681 bch_err_fn(c, ret); 681 682 return ret; 682 683 }
+1 -1
fs/bcachefs/super.c
··· 1972 1972 }; 1973 1973 u64 v[3] = { nbuckets - old_nbuckets, 0, 0 }; 1974 1974 1975 - ret = bch2_trans_do(ca->fs, NULL, NULL, 0, 1975 + ret = bch2_trans_commit_do(ca->fs, NULL, NULL, 0, 1976 1976 bch2_disk_accounting_mod(trans, &acc, v, ARRAY_SIZE(v), false)) ?: 1977 1977 bch2_dev_freespace_init(c, ca, old_nbuckets, nbuckets); 1978 1978 if (ret)
+2 -2
fs/bcachefs/tests.c
··· 450 450 k.k_i.k.p.snapshot = snapid; 451 451 k.k_i.k.size = len; 452 452 453 - ret = bch2_trans_do(c, NULL, NULL, 0, 453 + ret = bch2_trans_commit_do(c, NULL, NULL, 0, 454 454 bch2_btree_insert_nonextent(trans, BTREE_ID_extents, &k.k_i, 455 455 BTREE_UPDATE_internal_snapshot_node)); 456 456 bch_err_fn(c, ret); ··· 510 510 if (ret) 511 511 return ret; 512 512 513 - ret = bch2_trans_do(c, NULL, NULL, 0, 513 + ret = bch2_trans_commit_do(c, NULL, NULL, 0, 514 514 bch2_snapshot_node_create(trans, U32_MAX, 515 515 snapids, 516 516 snapid_subvols,
+1 -1
fs/bcachefs/xattr.c
··· 330 330 { 331 331 struct bch_inode_info *inode = to_bch_ei(vinode); 332 332 struct bch_fs *c = inode->v.i_sb->s_fs_info; 333 - int ret = bch2_trans_do(c, NULL, NULL, 0, 333 + int ret = bch2_trans_do(c, 334 334 bch2_xattr_get_trans(trans, inode, name, buffer, size, handler->flags)); 335 335 336 336 if (ret < 0 && bch2_err_matches(ret, ENOENT))
+2
fs/btrfs/block-group.c
··· 3819 3819 spin_lock(&cache->lock); 3820 3820 if (cache->ro) 3821 3821 space_info->bytes_readonly += num_bytes; 3822 + else if (btrfs_is_zoned(cache->fs_info)) 3823 + space_info->bytes_zone_unusable += num_bytes; 3822 3824 cache->reserved -= num_bytes; 3823 3825 space_info->bytes_reserved -= num_bytes; 3824 3826 space_info->max_extent_size = 0;
+2 -2
fs/btrfs/dir-item.c
··· 347 347 return di; 348 348 } 349 349 /* Adjust return code if the key was not found in the next leaf. */ 350 - if (ret > 0) 351 - ret = 0; 350 + if (ret >= 0) 351 + ret = -ENOENT; 352 352 353 353 return ERR_PTR(ret); 354 354 }
+1 -1
fs/btrfs/disk-io.c
··· 1959 1959 fs_info->qgroup_seq = 1; 1960 1960 fs_info->qgroup_ulist = NULL; 1961 1961 fs_info->qgroup_rescan_running = false; 1962 - fs_info->qgroup_drop_subtree_thres = BTRFS_MAX_LEVEL; 1962 + fs_info->qgroup_drop_subtree_thres = BTRFS_QGROUP_DROP_SUBTREE_THRES_DEFAULT; 1963 1963 mutex_init(&fs_info->qgroup_rescan_lock); 1964 1964 } 1965 1965
+9 -8
fs/btrfs/extent_io.c
··· 262 262 263 263 for (i = 0; i < found_folios; i++) { 264 264 struct folio *folio = fbatch.folios[i]; 265 - u32 len = end + 1 - start; 265 + u64 range_start; 266 + u32 range_len; 266 267 267 268 if (folio == locked_folio) 268 269 continue; 269 270 270 - if (btrfs_folio_start_writer_lock(fs_info, folio, start, 271 - len)) 272 - goto out; 273 - 271 + folio_lock(folio); 274 272 if (!folio_test_dirty(folio) || folio->mapping != mapping) { 275 - btrfs_folio_end_writer_lock(fs_info, folio, start, 276 - len); 273 + folio_unlock(folio); 277 274 goto out; 278 275 } 276 + range_start = max_t(u64, folio_pos(folio), start); 277 + range_len = min_t(u64, folio_pos(folio) + folio_size(folio), 278 + end + 1) - range_start; 279 + btrfs_folio_set_writer_lock(fs_info, folio, range_start, range_len); 279 280 280 - processed_end = folio_pos(folio) + folio_size(folio) - 1; 281 + processed_end = range_start + range_len - 1; 281 282 } 282 283 folio_batch_release(&fbatch); 283 284 cond_resched();
+16 -15
fs/btrfs/extent_map.c
··· 243 243 /*
244 244 * Handle the on-disk data extents merge for @prev and @next.
245 245 *
246 + * @prev: left extent to merge
247 + * @next: right extent to merge
248 + * @merged: the extent we will not discard after the merge; updated with new values
249 + *
250 + * After this, one of the two extents is the new merged extent and the other is
251 + * removed from the tree and likely freed. Note that @merged is one of @prev/@next
252 + * so there is const/non-const aliasing occurring here.
253 + *
246 254 * Only touches disk_bytenr/disk_num_bytes/offset/ram_bytes.
247 255 * For now only uncompressed regular extent can be merged.
248 - *
249 - * @prev and @next will be both updated to point to the new merged range.
250 - * Thus one of them should be removed by the caller.
251 256 */
252 - static void merge_ondisk_extents(struct extent_map *prev, struct extent_map *next)
257 + static void merge_ondisk_extents(const struct extent_map *prev, const struct extent_map *next,
258 + struct extent_map *merged)
253 259 {
254 260 u64 new_disk_bytenr;
255 261 u64 new_disk_num_bytes;
··· 290 284 new_disk_bytenr;
291 285 new_offset = prev->disk_bytenr + prev->offset - new_disk_bytenr;
292 286 
293 - prev->disk_bytenr = new_disk_bytenr;
294 - prev->disk_num_bytes = new_disk_num_bytes;
295 - prev->ram_bytes = new_disk_num_bytes;
296 - prev->offset = new_offset;
297 - 
298 - next->disk_bytenr = new_disk_bytenr;
299 - next->disk_num_bytes = new_disk_num_bytes;
300 - next->ram_bytes = new_disk_num_bytes;
301 - next->offset = new_offset;
287 + merged->disk_bytenr = new_disk_bytenr;
288 + merged->disk_num_bytes = new_disk_num_bytes;
289 + merged->ram_bytes = new_disk_num_bytes;
290 + merged->offset = new_offset;
302 291 }
303 292 
304 293 static void dump_extent_map(struct btrfs_fs_info *fs_info, const char *prefix,
··· 362 361 em->generation = max(em->generation, merge->generation);
363 362 
364 363 if (em->disk_bytenr < EXTENT_MAP_LAST_BYTE)
365 - merge_ondisk_extents(merge, em);
364 + merge_ondisk_extents(merge, em, em);
366 365 em->flags |= EXTENT_FLAG_MERGED;
367 366 
368 367 validate_extent_map(fs_info, em);
··· 379 378 if (rb && can_merge_extent_map(merge) && mergeable_maps(em, merge)) {
380 379 em->len += merge->len;
381 380 if (em->disk_bytenr < EXTENT_MAP_LAST_BYTE)
382 - merge_ondisk_extents(em, merge);
381 + merge_ondisk_extents(em, merge, em);
383 382 validate_extent_map(fs_info, em);
384 383 rb_erase(&merge->rb_node, &tree->root);
385 384 RB_CLEAR_NODE(&merge->rb_node);
+2 -5
fs/btrfs/inode.c
··· 4368 4368 */
4369 4369 if (btrfs_ino(inode) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID) {
4370 4370 di = btrfs_search_dir_index_item(root, path, dir_ino, &fname.disk_name);
4371 - if (IS_ERR_OR_NULL(di)) {
4372 - if (!di)
4373 - ret = -ENOENT;
4374 - else
4375 - ret = PTR_ERR(di);
4371 + if (IS_ERR(di)) {
4372 + ret = PTR_ERR(di);
4376 4373 btrfs_abort_transaction(trans, ret);
4377 4374 goto out;
4378 4375 }
+1 -1
fs/btrfs/qgroup.c
··· 1407 1407 fs_info->quota_root = NULL;
1408 1408 fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_ON;
1409 1409 fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_SIMPLE_MODE;
1410 - fs_info->qgroup_drop_subtree_thres = BTRFS_MAX_LEVEL;
1410 + fs_info->qgroup_drop_subtree_thres = BTRFS_QGROUP_DROP_SUBTREE_THRES_DEFAULT;
1411 1411 spin_unlock(&fs_info->qgroup_lock);
1412 1412 
1413 1413 btrfs_free_qgroup_config(fs_info);
+2
fs/btrfs/qgroup.h
··· 121 121 #define BTRFS_QGROUP_RUNTIME_FLAG_CANCEL_RESCAN (1ULL << 63)
122 122 #define BTRFS_QGROUP_RUNTIME_FLAG_NO_ACCOUNTING (1ULL << 62)
123 123 
124 + #define BTRFS_QGROUP_DROP_SUBTREE_THRES_DEFAULT (3)
125 + 
124 126 /*
125 127 * Record a dirty extent, and info qgroup to update quota on it
126 128 */
+10 -2
fs/btrfs/super.c
··· 340 340 fallthrough;
341 341 case Opt_compress:
342 342 case Opt_compress_type:
343 + /*
344 + * Provide the same semantics as older kernels that don't use fs
345 + * context, specifying the "compress" option clears
346 + * "force-compress" without the need to pass
347 + * "compress-force=[no|none]" before specifying "compress".
348 + */
349 + if (opt != Opt_compress_force && opt != Opt_compress_force_type)
350 + btrfs_clear_opt(ctx->mount_opt, FORCE_COMPRESS);
351 + 
343 352 if (opt == Opt_compress || opt == Opt_compress_force) {
344 353 ctx->compress_type = BTRFS_COMPRESS_ZLIB;
345 354 ctx->compress_level = BTRFS_ZLIB_DEFAULT_LEVEL;
··· 1507 1498 sync_filesystem(sb);
1508 1499 set_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state);
1509 1500 
1510 - if (!mount_reconfigure &&
1511 - !btrfs_check_options(fs_info, &ctx->mount_opt, fc->sb_flags))
1501 + if (!btrfs_check_options(fs_info, &ctx->mount_opt, fc->sb_flags))
1512 1502 return -EINVAL;
1513 1503 
1514 1504 ret = btrfs_check_features(fs_info, !(fc->sb_flags & SB_RDONLY));
+1 -1
fs/fat/namei_vfat.c
··· 1037 1037 if (corrupt < 0) {
1038 1038 fat_fs_error(new_dir->i_sb,
1039 1039 "%s: Filesystem corrupted (i_pos %lld)",
1040 - __func__, sinfo.i_pos);
1040 + __func__, new_i_pos);
1041 1041 }
1042 1042 goto out;
1043 1043 }
+36 -75
fs/iomap/buffered-io.c
··· 1145 1145 }
1146 1146 
1147 1147 /*
1148 + * When a short write occurs, the filesystem might need to use ->iomap_end
1149 + * to remove space reservations created in ->iomap_begin.
1150 + *
1151 + * For filesystems that use delayed allocation, there can be dirty pages over
1152 + * the delalloc extent outside the range of a short write but still within the
1153 + * delalloc extent allocated for this iomap if the write raced with page
1154 + * faults.
1155 + *
1148 1156 * Punch out all the delalloc blocks in the range given except for those that
1149 1157 * have dirty data still pending in the page cache - those are going to be
1150 1158 * written and so must still retain the delalloc backing for writeback.
1159 + *
1160 + * The punch() callback *must* only punch delalloc extents in the range passed
1161 + * to it. It must skip over all other types of extents in the range and leave
1162 + * them completely unchanged. It must do this punch atomically with respect to
1163 + * other extent modifications.
1164 + *
1165 + * The punch() callback may be called with a folio locked to prevent writeback
1166 + * extent allocation racing at the edge of the range we are currently punching.
1167 + * The locked folio may or may not cover the range being punched, so it is not
1168 + * safe for the punch() callback to lock folios itself.
1169 + *
1170 + * Lock order is:
1171 + *
1172 + * inode->i_rwsem (shared or exclusive)
1173 + * inode->i_mapping->invalidate_lock (exclusive)
1174 + * folio_lock()
1175 + * ->punch
1176 + * internal filesystem allocation lock
1151 1177 *
1152 1178 * As we are scanning the page cache for data, we don't need to reimplement the
1153 1179 * wheel - mapping_seek_hole_data() does exactly what we need to identify the
··· 1203 1177 * require sprinkling this code with magic "+ 1" and "- 1" arithmetic and expose
1204 1178 the code to subtle off-by-one bugs....
1205 1179 */
1206 - static void iomap_write_delalloc_release(struct inode *inode, loff_t start_byte,
1180 + void iomap_write_delalloc_release(struct inode *inode, loff_t start_byte,
1207 1181 loff_t end_byte, unsigned flags, struct iomap *iomap,
1208 1182 iomap_punch_t punch)
1209 1183 {
··· 1211 1185 loff_t scan_end_byte = min(i_size_read(inode), end_byte);
1212 1186 
1213 1187 /*
1214 - * Lock the mapping to avoid races with page faults re-instantiating
1215 - * folios and dirtying them via ->page_mkwrite whilst we walk the
1216 - * cache and perform delalloc extent removal. Failing to do this can
1217 - * leave dirty pages with no space reservation in the cache.
1188 + * The caller must hold invalidate_lock to avoid races with page faults
1189 + * re-instantiating folios and dirtying them via ->page_mkwrite whilst
1190 + * we walk the cache and perform delalloc extent removal. Failing to do
1191 + * this can leave dirty pages with no space reservation in the cache.
1218 1192 */
1219 - filemap_invalidate_lock(inode->i_mapping);
1193 + lockdep_assert_held_write(&inode->i_mapping->invalidate_lock);
1194 + 
1220 1195 while (start_byte < scan_end_byte) {
1221 1196 loff_t data_end;
1222 1197 
··· 1234 1207 if (start_byte == -ENXIO || start_byte == scan_end_byte)
1235 1208 break;
1236 1209 if (WARN_ON_ONCE(start_byte < 0))
1237 - goto out_unlock;
1210 + return;
1238 1211 WARN_ON_ONCE(start_byte < punch_start_byte);
1239 1212 WARN_ON_ONCE(start_byte > scan_end_byte);
1240 1213 
··· 1245 1218 data_end = mapping_seek_hole_data(inode->i_mapping, start_byte,
1246 1219 scan_end_byte, SEEK_HOLE);
1247 1220 if (WARN_ON_ONCE(data_end < 0))
1248 - goto out_unlock;
1221 + return;
1249 1222 
1250 1223 /*
1251 1224 * If we race with post-direct I/O invalidation of the page cache,
··· 1267 1240 if (punch_start_byte < end_byte)
1268 1241 punch(inode, punch_start_byte, end_byte - punch_start_byte,
1269 1242 iomap);
1270 - out_unlock:
1271 - filemap_invalidate_unlock(inode->i_mapping);
1272 1243 }
1273 - 
1274 - /*
1275 - * When a short write occurs, the filesystem may need to remove reserved space
1276 - * that was allocated in ->iomap_begin from it's ->iomap_end method. For
1277 - * filesystems that use delayed allocation, we need to punch out delalloc
1278 - * extents from the range that are not dirty in the page cache. As the write can
1279 - * race with page faults, there can be dirty pages over the delalloc extent
1280 - * outside the range of a short write but still within the delalloc extent
1281 - * allocated for this iomap.
1282 - *
1283 - * This function uses [start_byte, end_byte) intervals (i.e. open ended) to
1284 - * simplify range iterations.
1285 - *
1286 - * The punch() callback *must* only punch delalloc extents in the range passed
1287 - * to it. It must skip over all other types of extents in the range and leave
1288 - * them completely unchanged. It must do this punch atomically with respect to
1289 - * other extent modifications.
1290 - *
1291 - * The punch() callback may be called with a folio locked to prevent writeback
1292 - * extent allocation racing at the edge of the range we are currently punching.
1293 - * The locked folio may or may not cover the range being punched, so it is not
1294 - * safe for the punch() callback to lock folios itself.
1295 - *
1296 - * Lock order is:
1297 - *
1298 - * inode->i_rwsem (shared or exclusive)
1299 - * inode->i_mapping->invalidate_lock (exclusive)
1300 - * folio_lock()
1301 - * ->punch
1302 - * internal filesystem allocation lock
1303 - */
1304 - void iomap_file_buffered_write_punch_delalloc(struct inode *inode,
1305 - loff_t pos, loff_t length, ssize_t written, unsigned flags,
1306 - struct iomap *iomap, iomap_punch_t punch)
1307 - {
1308 - loff_t start_byte;
1309 - loff_t end_byte;
1310 - unsigned int blocksize = i_blocksize(inode);
1311 - 
1312 - if (iomap->type != IOMAP_DELALLOC)
1313 - return;
1314 - 
1315 - /* If we didn't reserve the blocks, we're not allowed to punch them. */
1316 - if (!(iomap->flags & IOMAP_F_NEW))
1317 - return;
1318 - 
1319 - /*
1320 - * start_byte refers to the first unused block after a short write. If
1321 - * nothing was written, round offset down to point at the first block in
1322 - * the range.
1323 - */
1324 - if (unlikely(!written))
1325 - start_byte = round_down(pos, blocksize);
1326 - else
1327 - start_byte = round_up(pos + written, blocksize);
1328 - end_byte = round_up(pos + length, blocksize);
1329 - 
1330 - /* Nothing to do if we've written the entire delalloc extent */
1331 - if (start_byte >= end_byte)
1332 - return;
1333 - 
1334 - iomap_write_delalloc_release(inode, start_byte, end_byte, flags, iomap,
1335 - punch);
1336 - }
1337 - EXPORT_SYMBOL_GPL(iomap_file_buffered_write_punch_delalloc);
1244 + EXPORT_SYMBOL_GPL(iomap_write_delalloc_release);
1338 1245 
1339 1246 static loff_t iomap_unshare_iter(struct iomap_iter *iter)
1340 1247 {
+1 -1
fs/jfs/jfs_dmap.c
··· 187 187 }
188 188 
189 189 bmp->db_numag = le32_to_cpu(dbmp_le->dn_numag);
190 - if (!bmp->db_numag || bmp->db_numag >= MAXAG) {
190 + if (!bmp->db_numag || bmp->db_numag > MAXAG) {
191 191 err = -EINVAL;
192 192 goto err_release_metapage;
193 193 }
+3 -1
fs/namespace.c
··· 3944 3944 new = copy_tree(old, old->mnt.mnt_root, copy_flags);
3945 3945 if (IS_ERR(new)) {
3946 3946 namespace_unlock();
3947 - free_mnt_ns(new_ns);
3947 + ns_free_inum(&new_ns->ns);
3948 + dec_mnt_namespaces(new_ns->ucounts);
3949 + mnt_ns_release(new_ns);
3948 3950 return ERR_CAST(new);
3949 3951 }
3950 3952 if (user_ns != ns->user_ns) {
+14 -33
fs/netfs/buffered_read.c
··· 67 67 * Decant the list of folios to read into a rolling buffer.
68 68 */
69 69 static size_t netfs_load_buffer_from_ra(struct netfs_io_request *rreq,
70 - struct folio_queue *folioq)
70 + struct folio_queue *folioq,
71 + struct folio_batch *put_batch)
71 72 {
72 73 unsigned int order, nr;
73 74 size_t size = 0;
··· 83 82 order = folio_order(folio);
84 83 folioq->orders[i] = order;
85 84 size += PAGE_SIZE << order;
85 + 
86 + if (!folio_batch_add(put_batch, folio))
87 + folio_batch_release(put_batch);
86 88 
87 89 for (int i = nr; i < folioq_nr_slots(folioq); i++)
··· 124 120 * that we will need to release later - but we don't want to do
125 121 * that until after we've started the I/O.
126 122 */
123 + struct folio_batch put_batch;
124 + 
125 + folio_batch_init(&put_batch);
127 126 while (rreq->submitted < subreq->start + rsize) {
128 127 struct folio_queue *tail = rreq->buffer_tail, *new;
129 128 size_t added;
··· 139 132 new->prev = tail;
140 133 tail->next = new;
141 134 rreq->buffer_tail = new;
142 - added = netfs_load_buffer_from_ra(rreq, new);
135 + added = netfs_load_buffer_from_ra(rreq, new, &put_batch);
143 136 rreq->iter.count += added;
144 137 rreq->submitted += added;
145 138 }
139 + folio_batch_release(&put_batch);
146 140 }
147 141 
148 142 subreq->len = rsize;
··· 356 348 static int netfs_prime_buffer(struct netfs_io_request *rreq)
357 349 {
358 350 struct folio_queue *folioq;
351 + struct folio_batch put_batch;
359 352 size_t added;
360 353 
361 354 folioq = kmalloc(sizeof(*folioq), GFP_KERNEL);
··· 369 360 rreq->submitted = rreq->start;
370 361 iov_iter_folio_queue(&rreq->iter, ITER_DEST, folioq, 0, 0, 0);
371 362 
372 - added = netfs_load_buffer_from_ra(rreq, folioq);
363 + folio_batch_init(&put_batch);
364 + added = netfs_load_buffer_from_ra(rreq, folioq, &put_batch);
365 + folio_batch_release(&put_batch);
373 366 rreq->iter.count += added;
374 367 rreq->submitted += added;
375 368 return 0;
376 - }
377 - 
378 - /*
379 - * Drop the ref on each folio that we inherited from the VM readahead code. We
380 - * still have the folio locks to pin the page until we complete the I/O.
381 - *
382 - * Note that we can't just release the batch in each queue struct as we use the
383 - * occupancy count in other places.
384 - */
385 - static void netfs_put_ra_refs(struct folio_queue *folioq)
386 - {
387 - struct folio_batch fbatch;
388 - 
389 - folio_batch_init(&fbatch);
390 - while (folioq) {
391 - for (unsigned int slot = 0; slot < folioq_count(folioq); slot++) {
392 - struct folio *folio = folioq_folio(folioq, slot);
393 - if (!folio)
394 - continue;
395 - trace_netfs_folio(folio, netfs_folio_trace_read_put);
396 - if (!folio_batch_add(&fbatch, folio))
397 - folio_batch_release(&fbatch);
398 - }
399 - folioq = folioq->next;
400 - }
401 - 
402 - folio_batch_release(&fbatch);
403 369 }
404 370 
405 371 /**
··· 419 435 if (netfs_prime_buffer(rreq) < 0)
420 436 goto cleanup_free;
421 437 netfs_read_to_pagecache(rreq);
422 - 
423 - /* Release the folio refs whilst we're waiting for the I/O. */
424 - netfs_put_ra_refs(rreq->buffer);
425 438 
426 439 netfs_put_request(rreq, true, netfs_rreq_trace_put_return);
427 440 return;
+2 -1
fs/netfs/locking.c
··· 109 109 up_write(&inode->i_rwsem);
110 110 return -ERESTARTSYS;
111 111 }
112 + downgrade_write(&inode->i_rwsem);
112 113 return 0;
113 114 }
114 115 EXPORT_SYMBOL(netfs_start_io_write);
··· 124 123 void netfs_end_io_write(struct inode *inode)
125 124 __releases(inode->i_rwsem)
126 125 {
127 - up_write(&inode->i_rwsem);
126 + up_read(&inode->i_rwsem);
128 127 }
129 128 EXPORT_SYMBOL(netfs_end_io_write);
130 129 
+2
fs/netfs/read_collect.c
··· 77 77 folio_unlock(folio);
78 78 }
79 79 }
80 + 
81 + folioq_clear(folioq, slot);
80 82 }
81 83 
82 84 /*
+25 -23
fs/nilfs2/dir.c
··· 289 289 * The folio is mapped and unlocked. When the caller is finished with
290 290 * the entry, it should call folio_release_kmap().
291 291 *
292 - * On failure, returns NULL and the caller should ignore foliop.
292 + * On failure, returns an error pointer and the caller should ignore foliop.
293 293 */
294 294 struct nilfs_dir_entry *nilfs_find_entry(struct inode *dir,
295 295 const struct qstr *qstr, struct folio **foliop)
··· 312 312 do {
313 313 char *kaddr = nilfs_get_folio(dir, n, foliop);
314 314 
315 - if (!IS_ERR(kaddr)) {
316 - de = (struct nilfs_dir_entry *)kaddr;
317 - kaddr += nilfs_last_byte(dir, n) - reclen;
318 - while ((char *) de <= kaddr) {
319 - if (de->rec_len == 0) {
320 - nilfs_error(dir->i_sb,
321 - "zero-length directory entry");
322 - folio_release_kmap(*foliop, kaddr);
323 - goto out;
324 - }
325 - if (nilfs_match(namelen, name, de))
326 - goto found;
327 - de = nilfs_next_entry(de);
315 + if (IS_ERR(kaddr))
316 + return ERR_CAST(kaddr);
317 + 
318 + de = (struct nilfs_dir_entry *)kaddr;
319 + kaddr += nilfs_last_byte(dir, n) - reclen;
320 + while ((char *)de <= kaddr) {
321 + if (de->rec_len == 0) {
322 + nilfs_error(dir->i_sb,
323 + "zero-length directory entry");
324 + folio_release_kmap(*foliop, kaddr);
325 + goto out;
328 326 }
329 - folio_release_kmap(*foliop, kaddr);
327 + if (nilfs_match(namelen, name, de))
328 + goto found;
329 + de = nilfs_next_entry(de);
330 330 }
331 + folio_release_kmap(*foliop, kaddr);
332 + 
331 333 if (++n >= npages)
332 334 n = 0;
333 335 /* next folio is past the blocks we've got */
··· 342 340 }
343 341 } while (n != start);
344 342 out:
345 - return NULL;
343 + return ERR_PTR(-ENOENT);
346 344 
347 345 found:
348 346 ei->i_dir_start_lookup = n;
··· 386 384 return NULL;
387 385 }
388 386 
389 - ino_t nilfs_inode_by_name(struct inode *dir, const struct qstr *qstr)
387 + int nilfs_inode_by_name(struct inode *dir, const struct qstr *qstr, ino_t *ino)
390 388 {
391 - ino_t res = 0;
392 389 struct nilfs_dir_entry *de;
393 390 struct folio *folio;
394 391 
395 392 de = nilfs_find_entry(dir, qstr, &folio);
396 - if (de) {
397 - res = le64_to_cpu(de->inode);
398 - folio_release_kmap(folio, de);
399 - }
400 - return res;
393 + if (IS_ERR(de))
394 + return PTR_ERR(de);
395 + 
396 + *ino = le64_to_cpu(de->inode);
397 + folio_release_kmap(folio, de);
398 + return 0;
401 399 }
402 400 
403 401 void nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de,
+26 -13
fs/nilfs2/namei.c
··· 55 55 {
56 56 struct inode *inode;
57 57 ino_t ino;
58 + int res;
58 59 
59 60 if (dentry->d_name.len > NILFS_NAME_LEN)
60 61 return ERR_PTR(-ENAMETOOLONG);
61 62 
62 - ino = nilfs_inode_by_name(dir, &dentry->d_name);
63 - inode = ino ? nilfs_iget(dir->i_sb, NILFS_I(dir)->i_root, ino) : NULL;
63 + res = nilfs_inode_by_name(dir, &dentry->d_name, &ino);
64 + if (res) {
65 + if (res != -ENOENT)
66 + return ERR_PTR(res);
67 + inode = NULL;
68 + } else {
69 + inode = nilfs_iget(dir->i_sb, NILFS_I(dir)->i_root, ino);
70 + }
71 + 
64 72 return d_splice_alias(inode, dentry);
65 73 }
66 74 
··· 271 263 struct folio *folio;
272 264 int err;
273 265 
274 - err = -ENOENT;
275 266 de = nilfs_find_entry(dir, &dentry->d_name, &folio);
276 - if (!de)
267 + if (IS_ERR(de)) {
268 + err = PTR_ERR(de);
277 269 goto out;
270 + }
278 271 
279 272 inode = d_inode(dentry);
280 273 err = -EIO;
··· 371 362 if (unlikely(err))
372 363 return err;
373 364 
374 - err = -ENOENT;
375 365 old_de = nilfs_find_entry(old_dir, &old_dentry->d_name, &old_folio);
376 - if (!old_de)
366 + if (IS_ERR(old_de)) {
367 + err = PTR_ERR(old_de);
377 368 goto out;
369 + }
378 370 
379 371 if (S_ISDIR(old_inode->i_mode)) {
380 372 err = -EIO;
··· 392 382 if (dir_de && !nilfs_empty_dir(new_inode))
393 383 goto out_dir;
394 384 
395 - err = -ENOENT;
396 - new_de = nilfs_find_entry(new_dir, &new_dentry->d_name, &new_folio);
397 - if (!new_de)
385 + new_de = nilfs_find_entry(new_dir, &new_dentry->d_name,
386 + &new_folio);
387 + if (IS_ERR(new_de)) {
388 + err = PTR_ERR(new_de);
398 389 goto out_dir;
390 + }
399 391 nilfs_set_link(new_dir, new_de, new_folio, old_inode);
400 392 folio_release_kmap(new_folio, new_de);
401 393 nilfs_mark_inode_dirty(new_dir);
··· 452 440 */
453 441 static struct dentry *nilfs_get_parent(struct dentry *child)
454 442 {
455 - unsigned long ino;
443 + ino_t ino;
444 + int res;
456 445 struct nilfs_root *root;
457 446 
458 - ino = nilfs_inode_by_name(d_inode(child), &dotdot_name);
459 - if (!ino)
460 - return ERR_PTR(-ENOENT);
447 + res = nilfs_inode_by_name(d_inode(child), &dotdot_name, &ino);
448 + if (res)
449 + return ERR_PTR(res);
461 450 
462 451 root = NILFS_I(d_inode(child))->i_root;
463 452 
+1 -1
fs/nilfs2/nilfs.h
··· 254 254 
255 255 /* dir.c */
256 256 int nilfs_add_link(struct dentry *, struct inode *);
257 - ino_t nilfs_inode_by_name(struct inode *, const struct qstr *);
257 + int nilfs_inode_by_name(struct inode *dir, const struct qstr *qstr, ino_t *ino);
258 258 int nilfs_make_empty(struct inode *, struct inode *);
259 259 struct nilfs_dir_entry *nilfs_find_entry(struct inode *, const struct qstr *,
260 260 struct folio **);
+4 -2
fs/nilfs2/page.c
··· 77 77 const unsigned long clear_bits =
78 78 (BIT(BH_Uptodate) | BIT(BH_Dirty) | BIT(BH_Mapped) |
79 79 BIT(BH_Async_Write) | BIT(BH_NILFS_Volatile) |
80 - BIT(BH_NILFS_Checked) | BIT(BH_NILFS_Redirected));
80 + BIT(BH_NILFS_Checked) | BIT(BH_NILFS_Redirected) |
81 + BIT(BH_Delay));
81 82 
82 83 lock_buffer(bh);
83 84 set_mask_bits(&bh->b_state, clear_bits, 0);
··· 407 406 const unsigned long clear_bits =
408 407 (BIT(BH_Uptodate) | BIT(BH_Dirty) | BIT(BH_Mapped) |
409 408 BIT(BH_Async_Write) | BIT(BH_NILFS_Volatile) |
410 - BIT(BH_NILFS_Checked) | BIT(BH_NILFS_Redirected));
409 + BIT(BH_NILFS_Checked) | BIT(BH_NILFS_Redirected) |
410 + BIT(BH_Delay));
411 411 
412 412 bh = head;
413 413 do {
+6 -3
fs/ocfs2/file.c
··· 1129 1129 trace_ocfs2_setattr(inode, dentry,
1130 1130 (unsigned long long)OCFS2_I(inode)->ip_blkno,
1131 1131 dentry->d_name.len, dentry->d_name.name,
1132 - attr->ia_valid, attr->ia_mode,
1133 - from_kuid(&init_user_ns, attr->ia_uid),
1134 - from_kgid(&init_user_ns, attr->ia_gid));
1132 + attr->ia_valid,
1133 + attr->ia_valid & ATTR_MODE ? attr->ia_mode : 0,
1134 + attr->ia_valid & ATTR_UID ?
1135 + from_kuid(&init_user_ns, attr->ia_uid) : 0,
1136 + attr->ia_valid & ATTR_GID ?
1137 + from_kgid(&init_user_ns, attr->ia_gid) : 0);
1135 1138 
1136 1139 /* ensuring we don't even attempt to truncate a symlink */
1137 1140 if (S_ISLNK(inode->i_mode))
+2
fs/open.c
··· 1457 1457 
1458 1458 if (unlikely(usize < OPEN_HOW_SIZE_VER0))
1459 1459 return -EINVAL;
1460 + if (unlikely(usize > PAGE_SIZE))
1461 + return -E2BIG;
1460 1462 
1461 1463 err = copy_struct_from_user(&tmp, sizeof(tmp), how, usize);
1462 1464 if (err)
+1 -1
fs/proc/fd.c
··· 77 77 return single_open(file, seq_show, inode);
78 78 }
79 79 
80 - /**
80 + /*
81 81 * Shared /proc/pid/fdinfo and /proc/pid/fdinfo/fd permission helper to ensure
82 82 * that the current task has PTRACE_MODE_READ in addition to the normal
83 83 * POSIX-like checks.
+10 -6
fs/proc/task_mmu.c
··· 909 909 {
910 910 /*
911 911 * Don't forget to update Documentation/ on changes.
912 + *
913 + * The length of the second argument of mnemonics[]
914 + * needs to be 3 instead of previously set 2
915 + * (i.e. from [BITS_PER_LONG][2] to [BITS_PER_LONG][3])
916 + * to avoid spurious
917 + * -Werror=unterminated-string-initialization warning
918 + * with GCC 15
912 919 */
913 - static const char mnemonics[BITS_PER_LONG][2] = {
920 + static const char mnemonics[BITS_PER_LONG][3] = {
914 921 /*
915 922 * In case if we meet a flag we don't know about.
916 923 */
··· 994 987 for (i = 0; i < BITS_PER_LONG; i++) {
995 988 if (!mnemonics[i][0])
996 989 continue;
997 - if (vma->vm_flags & (1UL << i)) {
998 - seq_putc(m, mnemonics[i][0]);
999 - seq_putc(m, mnemonics[i][1]);
1000 - seq_putc(m, ' ');
1001 - }
990 + if (vma->vm_flags & (1UL << i))
991 + seq_printf(m, "%s ", mnemonics[i]);
1002 992 }
1003 993 seq_putc(m, '\n');
1004 994 }
-9
fs/smb/client/cifsproto.h
··· 252 252 unsigned int to_read);
253 253 extern ssize_t cifs_discard_from_socket(struct TCP_Server_Info *server,
254 254 size_t to_read);
255 - extern int cifs_read_page_from_socket(struct TCP_Server_Info *server,
256 - struct page *page,
257 - unsigned int page_offset,
258 - unsigned int to_read);
259 255 int cifs_read_iter_from_socket(struct TCP_Server_Info *server,
260 256 struct iov_iter *iter,
261 257 unsigned int to_read);
··· 619 623 int cifs_alloc_hash(const char *name, struct shash_desc **sdesc);
620 624 void cifs_free_hash(struct shash_desc **sdesc);
621 625 
622 - struct cifs_chan *
623 - cifs_ses_find_chan(struct cifs_ses *ses, struct TCP_Server_Info *server);
624 626 int cifs_try_adding_channels(struct cifs_ses *ses);
625 627 bool is_server_using_iface(struct TCP_Server_Info *server,
626 628 struct cifs_server_iface *iface);
··· 634 640 void
635 641 cifs_chan_clear_in_reconnect(struct cifs_ses *ses,
636 642 struct TCP_Server_Info *server);
637 - bool
638 - cifs_chan_in_reconnect(struct cifs_ses *ses,
639 - struct TCP_Server_Info *server);
640 643 void
641 644 cifs_chan_set_need_reconnect(struct cifs_ses *ses,
642 645 struct TCP_Server_Info *server);
-4
fs/smb/client/compress.c
··· 166 166 loff_t start = iter->xarray_start + iter->iov_offset;
167 167 pgoff_t last, index = start / PAGE_SIZE;
168 168 size_t len, off, foff;
169 - ssize_t ret = 0;
170 169 void *p;
171 170 int s = 0;
172 171 
··· 191 192 p = kmap_local_page(folio_page(folio, j));
192 193 memcpy(&sample[s], p, len2);
193 194 kunmap_local(p);
194 - 
195 - if (ret < 0)
196 - return ret;
197 195 
198 196 s += len2;
199 197 
-12
fs/smb/client/connect.c
··· 795 795 }
796 796 
797 797 int
798 - cifs_read_page_from_socket(struct TCP_Server_Info *server, struct page *page,
799 - unsigned int page_offset, unsigned int to_read)
800 - {
801 - struct msghdr smb_msg = {};
802 - struct bio_vec bv;
803 - 
804 - bvec_set_page(&bv, page, to_read, page_offset);
805 - iov_iter_bvec(&smb_msg.msg_iter, ITER_DEST, &bv, 1, to_read);
806 - return cifs_readv_from_socket(server, &smb_msg);
807 - }
808 - 
809 - int
810 798 cifs_read_iter_from_socket(struct TCP_Server_Info *server, struct iov_iter *iter,
811 799 unsigned int to_read)
812 800 {
-32
fs/smb/client/sess.c
··· 115 115 ses->chans[chan_index].in_reconnect = false;
116 116 }
117 117 
118 - bool
119 - cifs_chan_in_reconnect(struct cifs_ses *ses,
120 - struct TCP_Server_Info *server)
121 - {
122 - unsigned int chan_index = cifs_ses_get_chan_index(ses, server);
123 - 
124 - if (chan_index == CIFS_INVAL_CHAN_INDEX)
125 - return true; /* err on the safer side */
126 - 
127 - return CIFS_CHAN_IN_RECONNECT(ses, chan_index);
128 - }
129 - 
130 118 void
131 119 cifs_chan_set_need_reconnect(struct cifs_ses *ses,
132 120 struct TCP_Server_Info *server)
··· 473 485 
474 486 ses->chans[chan_index].iface = iface;
475 487 spin_unlock(&ses->chan_lock);
476 - }
477 - 
478 - /*
479 - * If server is a channel of ses, return the corresponding enclosing
480 - * cifs_chan otherwise return NULL.
481 - */
482 - struct cifs_chan *
483 - cifs_ses_find_chan(struct cifs_ses *ses, struct TCP_Server_Info *server)
484 - {
485 - int i;
486 - 
487 - spin_lock(&ses->chan_lock);
488 - for (i = 0; i < ses->chan_count; i++) {
489 - if (ses->chans[i].server == server) {
490 - spin_unlock(&ses->chan_lock);
491 - return &ses->chans[i];
492 - }
493 - }
494 - spin_unlock(&ses->chan_lock);
495 - return NULL;
496 488 }
497 489 
498 490 static int
+2 -1
fs/smb/client/smb2ops.c
··· 1158 1158 struct cifs_fid fid;
1159 1159 unsigned int size[1];
1160 1160 void *data[1];
1161 - struct smb2_file_full_ea_info *ea = NULL;
1161 + struct smb2_file_full_ea_info *ea;
1162 1162 struct smb2_query_info_rsp *rsp;
1163 1163 int rc, used_len = 0;
1164 1164 int retries = 0, cur_sleep = 1;
··· 1179 1179 if (!utf16_path)
1180 1180 return -ENOMEM;
1181 1181 
1182 + ea = NULL;
1182 1183 resp_buftype[0] = resp_buftype[1] = resp_buftype[2] = CIFS_NO_BUFFER;
1183 1184 vars = kzalloc(sizeof(*vars), GFP_KERNEL);
1184 1185 if (!vars) {
+9
fs/smb/client/smb2pdu.c
··· 3313 3313 return rc;
3314 3314 
3315 3315 if (indatalen) {
3316 + unsigned int len;
3317 + 
3318 + if (WARN_ON_ONCE(smb3_encryption_required(tcon) &&
3319 + (check_add_overflow(total_len - 1,
3320 + ALIGN(indatalen, 8), &len) ||
3321 + len > MAX_CIFS_SMALL_BUFFER_SIZE))) {
3322 + cifs_small_buf_release(req);
3323 + return -EIO;
3324 + }
3316 3325 /*
3317 3326 * indatalen is usually small at a couple of bytes max, so
3318 3327 * just allocate through generic pool
+1 -1
fs/xfs/scrub/bmap_repair.c
··· 801 801 {
802 802 struct xrep_bmap *rb;
803 803 char *descr;
804 - unsigned int max_bmbt_recs;
804 + xfs_extnum_t max_bmbt_recs;
805 805 bool large_extcount;
806 806 int error = 0;
807 807 
+2 -2
fs/xfs/xfs_aops.c
··· 116 116 if (unlikely(error)) {
117 117 if (ioend->io_flags & IOMAP_F_SHARED) {
118 118 xfs_reflink_cancel_cow_range(ip, offset, size, true);
119 - xfs_bmap_punch_delalloc_range(ip, offset,
119 + xfs_bmap_punch_delalloc_range(ip, XFS_DATA_FORK, offset,
120 120 offset + size);
121 121 }
122 122 goto done;
··· 456 456 * byte of the next folio. Hence the end offset is only dependent on the
457 457 * folio itself and not the start offset that is passed in.
458 458 */
459 - xfs_bmap_punch_delalloc_range(ip, pos,
459 + xfs_bmap_punch_delalloc_range(ip, XFS_DATA_FORK, pos,
460 460 folio_pos(folio) + folio_size(folio));
461 461 }
462 462 
+7 -3
fs/xfs/xfs_bmap_util.c
··· 442 442 void
443 443 xfs_bmap_punch_delalloc_range(
444 444 struct xfs_inode *ip,
445 + int whichfork,
445 446 xfs_off_t start_byte,
446 447 xfs_off_t end_byte)
447 448 {
448 449 struct xfs_mount *mp = ip->i_mount;
449 - struct xfs_ifork *ifp = &ip->i_df;
450 + struct xfs_ifork *ifp = xfs_ifork_ptr(ip, whichfork);
450 451 xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, start_byte);
451 452 xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, end_byte);
452 453 struct xfs_bmbt_irec got, del;
··· 475 474 continue;
476 475 }
477 476 
478 - xfs_bmap_del_extent_delay(ip, XFS_DATA_FORK, &icur, &got, &del);
477 + xfs_bmap_del_extent_delay(ip, whichfork, &icur, &got, &del);
479 478 if (!xfs_iext_get_extent(ifp, &icur, &got))
480 479 break;
481 480 }
482 + 
483 + if (whichfork == XFS_COW_FORK && !ifp->if_bytes)
484 + xfs_inode_clear_cowblocks_tag(ip);
482 484 
483 485 out_unlock:
484 486 xfs_iunlock(ip, XFS_ILOCK_EXCL);
··· 584 580 */
585 581 if (ip->i_diflags & (XFS_DIFLAG_PREALLOC | XFS_DIFLAG_APPEND)) {
586 582 if (ip->i_delayed_blks) {
587 - xfs_bmap_punch_delalloc_range(ip,
583 + xfs_bmap_punch_delalloc_range(ip, XFS_DATA_FORK,
588 584 round_up(XFS_ISIZE(ip), mp->m_sb.sb_blocksize),
589 585 LLONG_MAX);
590 586 }
+1 -1
fs/xfs/xfs_bmap_util.h
··· 30 30 }
31 31 #endif /* CONFIG_XFS_RT */
32 32 
33 - void xfs_bmap_punch_delalloc_range(struct xfs_inode *ip,
33 + void xfs_bmap_punch_delalloc_range(struct xfs_inode *ip, int whichfork,
34 34 xfs_off_t start_byte, xfs_off_t end_byte);
35 35 
36 36 struct kgetbmap {
+88 -58
fs/xfs/xfs_file.c
··· 348 348 } 349 349 350 350 /* 351 + * Take care of zeroing post-EOF blocks when they might exist. 352 + * 353 + * Returns 0 if successfully, a negative error for a failure, or 1 if this 354 + * function dropped the iolock and reacquired it exclusively and the caller 355 + * needs to restart the write sanity checks. 356 + */ 357 + static ssize_t 358 + xfs_file_write_zero_eof( 359 + struct kiocb *iocb, 360 + struct iov_iter *from, 361 + unsigned int *iolock, 362 + size_t count, 363 + bool *drained_dio) 364 + { 365 + struct xfs_inode *ip = XFS_I(iocb->ki_filp->f_mapping->host); 366 + loff_t isize; 367 + int error; 368 + 369 + /* 370 + * We need to serialise against EOF updates that occur in IO completions 371 + * here. We want to make sure that nobody is changing the size while 372 + * we do this check until we have placed an IO barrier (i.e. hold 373 + * XFS_IOLOCK_EXCL) that prevents new IO from being dispatched. The 374 + * spinlock effectively forms a memory barrier once we have 375 + * XFS_IOLOCK_EXCL so we are guaranteed to see the latest EOF value and 376 + * hence be able to correctly determine if we need to run zeroing. 377 + */ 378 + spin_lock(&ip->i_flags_lock); 379 + isize = i_size_read(VFS_I(ip)); 380 + if (iocb->ki_pos <= isize) { 381 + spin_unlock(&ip->i_flags_lock); 382 + return 0; 383 + } 384 + spin_unlock(&ip->i_flags_lock); 385 + 386 + if (iocb->ki_flags & IOCB_NOWAIT) 387 + return -EAGAIN; 388 + 389 + if (!*drained_dio) { 390 + /* 391 + * If zeroing is needed and we are currently holding the iolock 392 + * shared, we need to update it to exclusive which implies 393 + * having to redo all checks before. 
394 + */ 395 + if (*iolock == XFS_IOLOCK_SHARED) { 396 + xfs_iunlock(ip, *iolock); 397 + *iolock = XFS_IOLOCK_EXCL; 398 + xfs_ilock(ip, *iolock); 399 + iov_iter_reexpand(from, count); 400 + } 401 + 402 + /* 403 + * We now have an IO submission barrier in place, but AIO can do 404 + * EOF updates during IO completion and hence we now need to 405 + * wait for all of them to drain. Non-AIO DIO will have drained 406 + * before we are given the XFS_IOLOCK_EXCL, and so for most 407 + * cases this wait is a no-op. 408 + */ 409 + inode_dio_wait(VFS_I(ip)); 410 + *drained_dio = true; 411 + return 1; 412 + } 413 + 414 + trace_xfs_zero_eof(ip, isize, iocb->ki_pos - isize); 415 + 416 + xfs_ilock(ip, XFS_MMAPLOCK_EXCL); 417 + error = xfs_zero_range(ip, isize, iocb->ki_pos - isize, NULL); 418 + xfs_iunlock(ip, XFS_MMAPLOCK_EXCL); 419 + 420 + return error; 421 + } 422 + 423 + /* 351 424 * Common pre-write limit and setup checks. 352 425 * 353 - * Called with the iolocked held either shared and exclusive according to 426 + * Called with the iolock held either shared or exclusive according to 354 427 * @iolock, and returns with it held. Might upgrade the iolock to exclusive 355 428 * if called for a direct write beyond i_size. 356 429 */ ··· 433 360 struct iov_iter *from, 434 361 unsigned int *iolock) 435 362 { 436 - struct file *file = iocb->ki_filp; 437 - struct inode *inode = file->f_mapping->host; 438 - struct xfs_inode *ip = XFS_I(inode); 439 - ssize_t error = 0; 363 + struct inode *inode = iocb->ki_filp->f_mapping->host; 440 364 size_t count = iov_iter_count(from); 441 365 bool drained_dio = false; 442 - loff_t isize; 366 + ssize_t error; 443 367 444 368 restart: 445 369 error = generic_write_checks(iocb, from);
460 390 */ 461 391 if (*iolock == XFS_IOLOCK_SHARED && !IS_NOSEC(inode)) { 462 - xfs_iunlock(ip, *iolock); 392 + xfs_iunlock(XFS_I(inode), *iolock); 463 393 *iolock = XFS_IOLOCK_EXCL; 464 394 error = xfs_ilock_iocb(iocb, *iolock); 465 395 if (error) { ··· 470 400 } 471 401 472 402 /* 473 - * If the offset is beyond the size of the file, we need to zero any 403 + * If the offset is beyond the size of the file, we need to zero all 474 404 * blocks that fall between the existing EOF and the start of this 475 - * write. If zeroing is needed and we are currently holding the iolock 476 - * shared, we need to update it to exclusive which implies having to 477 - * redo all checks before. 405 + * write. 478 406 * 479 - * We need to serialise against EOF updates that occur in IO completions 480 - * here. We want to make sure that nobody is changing the size while we 481 - * do this check until we have placed an IO barrier (i.e. hold the 482 - * XFS_IOLOCK_EXCL) that prevents new IO from being dispatched. The 483 - * spinlock effectively forms a memory barrier once we have the 484 - * XFS_IOLOCK_EXCL so we are guaranteed to see the latest EOF value and 485 - * hence be able to correctly determine if we need to run zeroing. 486 - * 487 - * We can do an unlocked check here safely as IO completion can only 488 - * extend EOF. Truncate is locked out at this point, so the EOF can 489 - * not move backwards, only forwards. Hence we only need to take the 490 - * slow path and spin locks when we are at or beyond the current EOF. 407 + * We can do an unlocked check for i_size here safely as I/O completion 408 + * can only extend EOF. Truncate is locked out at this point, so the 409 + * EOF can not move backwards, only forwards. Hence we only need to take 410 + * the slow path when we are at or beyond the current EOF. 
491 411 */ 492 - if (iocb->ki_pos <= i_size_read(inode)) 493 - goto out; 494 - 495 - spin_lock(&ip->i_flags_lock); 496 - isize = i_size_read(inode); 497 - if (iocb->ki_pos > isize) { 498 - spin_unlock(&ip->i_flags_lock); 499 - 500 - if (iocb->ki_flags & IOCB_NOWAIT) 501 - return -EAGAIN; 502 - 503 - if (!drained_dio) { 504 - if (*iolock == XFS_IOLOCK_SHARED) { 505 - xfs_iunlock(ip, *iolock); 506 - *iolock = XFS_IOLOCK_EXCL; 507 - xfs_ilock(ip, *iolock); 508 - iov_iter_reexpand(from, count); 509 - } 510 - /* 511 - * We now have an IO submission barrier in place, but 512 - * AIO can do EOF updates during IO completion and hence 513 - * we now need to wait for all of them to drain. Non-AIO 514 - * DIO will have drained before we are given the 515 - * XFS_IOLOCK_EXCL, and so for most cases this wait is a 516 - * no-op. 517 - */ 518 - inode_dio_wait(inode); 519 - drained_dio = true; 412 + if (iocb->ki_pos > i_size_read(inode)) { 413 + error = xfs_file_write_zero_eof(iocb, from, iolock, count, 414 + &drained_dio); 415 + if (error == 1) 520 416 goto restart; 521 - } 522 - 523 - trace_xfs_zero_eof(ip, isize, iocb->ki_pos - isize); 524 - error = xfs_zero_range(ip, isize, iocb->ki_pos - isize, NULL); 525 417 if (error) 526 418 return error; 527 - } else 528 - spin_unlock(&ip->i_flags_lock); 419 + } 529 420 530 - out: 531 421 return kiocb_modified(iocb); 532 422 } 533 423
+46 -21
fs/xfs/xfs_iomap.c
··· 975 975 int allocfork = XFS_DATA_FORK; 976 976 int error = 0; 977 977 unsigned int lockmode = XFS_ILOCK_EXCL; 978 + unsigned int iomap_flags = 0; 978 979 u64 seq; 979 980 980 981 if (xfs_is_shutdown(mp)) ··· 1146 1145 } 1147 1146 } 1148 1147 1148 + /* 1149 + * Flag newly allocated delalloc blocks with IOMAP_F_NEW so we punch 1150 + * them out if the write happens to fail. 1151 + */ 1152 + iomap_flags |= IOMAP_F_NEW; 1149 1153 if (allocfork == XFS_COW_FORK) { 1150 1154 error = xfs_bmapi_reserve_delalloc(ip, allocfork, offset_fsb, 1151 1155 end_fsb - offset_fsb, prealloc_blocks, &cmap, ··· 1168 1162 if (error) 1169 1163 goto out_unlock; 1170 1164 1171 - /* 1172 - * Flag newly allocated delalloc blocks with IOMAP_F_NEW so we punch 1173 - * them out if the write happens to fail. 1174 - */ 1175 - seq = xfs_iomap_inode_sequence(ip, IOMAP_F_NEW); 1176 - xfs_iunlock(ip, lockmode); 1177 1165 trace_xfs_iomap_alloc(ip, offset, count, allocfork, &imap); 1178 - return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, IOMAP_F_NEW, seq); 1179 - 1180 1166 found_imap: 1181 - seq = xfs_iomap_inode_sequence(ip, 0); 1167 + seq = xfs_iomap_inode_sequence(ip, iomap_flags); 1182 1168 xfs_iunlock(ip, lockmode); 1183 - return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0, seq); 1169 + return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, iomap_flags, seq); 1184 1170 1185 1171 convert_delay: 1186 1172 xfs_iunlock(ip, lockmode); ··· 1186 1188 return 0; 1187 1189 1188 1190 found_cow: 1189 - seq = xfs_iomap_inode_sequence(ip, 0); 1190 1191 if (imap.br_startoff <= offset_fsb) { 1191 - error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0, seq); 1192 + error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0, 1193 + xfs_iomap_inode_sequence(ip, 0)); 1192 1194 if (error) 1193 1195 goto out_unlock; 1194 - seq = xfs_iomap_inode_sequence(ip, IOMAP_F_SHARED); 1195 - xfs_iunlock(ip, lockmode); 1196 - return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, 1197 - IOMAP_F_SHARED, seq); 1196 + } else { 1197 + xfs_trim_extent(&cmap, offset_fsb, 1198 + imap.br_startoff - offset_fsb);
1198 1199 } 1199 1200 1200 - xfs_trim_extent(&cmap, offset_fsb, imap.br_startoff - offset_fsb); 1201 + iomap_flags |= IOMAP_F_SHARED; 1202 + seq = xfs_iomap_inode_sequence(ip, iomap_flags); 1201 1203 xfs_iunlock(ip, lockmode); 1202 - return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, 0, seq); 1204 + return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, iomap_flags, seq); 1203 1205 1204 1206 out_unlock: 1205 1207 xfs_iunlock(ip, lockmode); ··· 1213 1215 loff_t length, 1214 1216 struct iomap *iomap) 1215 1217 { 1216 - xfs_bmap_punch_delalloc_range(XFS_I(inode), offset, offset + length); 1218 + xfs_bmap_punch_delalloc_range(XFS_I(inode), 1219 + (iomap->flags & IOMAP_F_SHARED) ? 1220 + XFS_COW_FORK : XFS_DATA_FORK, 1221 + offset, offset + length); 1217 1222 } 1218 1223 1219 1224 static int ··· 1228 1227 unsigned flags, 1229 1228 struct iomap *iomap) 1230 1229 { 1231 - iomap_file_buffered_write_punch_delalloc(inode, offset, length, written, 1232 - flags, iomap, &xfs_buffered_write_delalloc_punch); 1230 + loff_t start_byte, end_byte; 1231 + 1232 + /* If we didn't reserve the blocks, we're not allowed to punch them. */ 1233 + if (iomap->type != IOMAP_DELALLOC || !(iomap->flags & IOMAP_F_NEW)) 1234 + return 0; 1235 + 1236 + /* Nothing to do if we've written the entire delalloc extent */ 1237 + start_byte = iomap_last_written_block(inode, offset, written); 1238 + end_byte = round_up(offset + length, i_blocksize(inode)); 1239 + if (start_byte >= end_byte) 1240 + return 0; 1241 + 1242 + /* For zeroing operations the callers already hold invalidate_lock. */
1243 + if (flags & (IOMAP_UNSHARE | IOMAP_ZERO)) { 1244 + rwsem_assert_held_write(&inode->i_mapping->invalidate_lock); 1245 + iomap_write_delalloc_release(inode, start_byte, end_byte, flags, 1246 + iomap, xfs_buffered_write_delalloc_punch); 1247 + } else { 1248 + filemap_invalidate_lock(inode->i_mapping); 1249 + iomap_write_delalloc_release(inode, start_byte, end_byte, flags, 1250 + iomap, xfs_buffered_write_delalloc_punch); 1251 + filemap_invalidate_unlock(inode->i_mapping); 1252 + } 1253 + 1233 1254 return 0; 1234 1255 } 1235 1256 ··· 1457 1434 bool *did_zero) 1458 1435 { 1459 1436 struct inode *inode = VFS_I(ip); 1437 + 1438 + xfs_assert_ilocked(ip, XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL); 1460 1439 1461 1440 if (IS_DAX(inode)) 1462 1441 return dax_zero_range(inode, pos, len, did_zero,
+1
include/linux/host1x.h
··· 466 466 refcount_t ref; 467 467 struct pid *owner; 468 468 469 + struct device_dma_parameters dma_parms; 469 470 struct device dev; 470 471 u64 dma_mask; 471 472 u32 stream_id;
+18
include/linux/huge_mm.h
··· 322 322 (transparent_hugepage_flags & \ 323 323 (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG)) 324 324 325 + static inline bool vma_thp_disabled(struct vm_area_struct *vma, 326 + unsigned long vm_flags) 327 + { 328 + /* 329 + * Explicitly disabled through madvise or prctl, or some 330 + * architectures may disable THP for some mappings, for 331 + * example, s390 kvm. 332 + */ 333 + return (vm_flags & VM_NOHUGEPAGE) || 334 + test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags); 335 + } 336 + 337 + static inline bool thp_disabled_by_hw(void) 338 + { 339 + /* If the hardware/firmware marked hugepage support disabled. */ 340 + return transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED); 341 + } 342 + 325 343 unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr, 326 344 unsigned long len, unsigned long pgoff, unsigned long flags); 327 345 unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
+17 -3
include/linux/iomap.h
··· 256 256 return &i->iomap; 257 257 } 258 258 259 + /* 260 + * Return the file offset for the first unchanged block after a short write. 261 + * 262 + * If nothing was written, round @pos down to point at the first block in 263 + * the range, else round up to include the partially written block. 264 + */ 265 + static inline loff_t iomap_last_written_block(struct inode *inode, loff_t pos, 266 + ssize_t written) 267 + { 268 + if (unlikely(!written)) 269 + return round_down(pos, i_blocksize(inode)); 270 + return round_up(pos + written, i_blocksize(inode)); 271 + } 272 + 259 273 ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from, 260 274 const struct iomap_ops *ops, void *private); 261 275 int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops); ··· 290 276 291 277 typedef void (*iomap_punch_t)(struct inode *inode, loff_t offset, loff_t length, 292 278 struct iomap *iomap); 293 - void iomap_file_buffered_write_punch_delalloc(struct inode *inode, loff_t pos, 294 - loff_t length, ssize_t written, unsigned flag, 295 - struct iomap *iomap, iomap_punch_t punch); 279 + void iomap_write_delalloc_release(struct inode *inode, loff_t start_byte, 280 + loff_t end_byte, unsigned flags, struct iomap *iomap, 281 + iomap_punch_t punch); 296 282 297 283 int iomap_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, 298 284 u64 start, u64 len, const struct iomap_ops *ops);
+3 -1
include/linux/irqchip/arm-gic-v4.h
··· 66 66 bool enabled; 67 67 bool group; 68 68 } sgi_config[16]; 69 - atomic_t vmapp_count; 70 69 }; 71 70 }; 71 + 72 + /* Track the VPE being mapped */ 73 + atomic_t vmapp_count; 72 74 73 75 /* 74 76 * Ensures mutual exclusion between affinity setting of the
-2
include/linux/kvm_host.h
··· 1313 1313 1314 1314 struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu); 1315 1315 struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn); 1316 - kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn); 1317 - kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn); 1318 1316 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map); 1319 1317 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty); 1320 1318 unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn);
+2 -1
include/linux/mm.h
··· 3818 3818 struct page * __populate_section_memmap(unsigned long pfn, 3819 3819 unsigned long nr_pages, int nid, struct vmem_altmap *altmap, 3820 3820 struct dev_pagemap *pgmap); 3821 - void pmd_init(void *addr); 3822 3821 void pud_init(void *addr); 3822 + void pmd_init(void *addr); 3823 + void kernel_pte_init(void *addr); 3823 3824 pgd_t *vmemmap_pgd_populate(unsigned long addr, int node); 3824 3825 p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node); 3825 3826 pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
+12
include/linux/netdevice.h
··· 3384 3384 3385 3385 static __always_inline void netif_tx_stop_queue(struct netdev_queue *dev_queue) 3386 3386 { 3387 + /* Paired with READ_ONCE() from dev_watchdog() */ 3388 + WRITE_ONCE(dev_queue->trans_start, jiffies); 3389 + 3390 + /* This barrier is paired with smp_mb() from dev_watchdog() */ 3391 + smp_mb__before_atomic(); 3392 + 3387 3393 /* Must be an atomic op see netif_txq_try_stop() */ 3388 3394 set_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state); 3389 3395 } ··· 3515 3509 3516 3510 if (likely(dql_avail(&dev_queue->dql) >= 0)) 3517 3511 return; 3512 + 3513 + /* Paired with READ_ONCE() from dev_watchdog() */ 3514 + WRITE_ONCE(dev_queue->trans_start, jiffies); 3515 + 3516 + /* This barrier is paired with smp_mb() from dev_watchdog() */ 3517 + smp_mb__before_atomic(); 3518 3518 3519 3519 set_bit(__QUEUE_STATE_STACK_XOFF, &dev_queue->state); 3520 3520
+5 -1
include/linux/percpu.h
··· 41 41 PCPU_MIN_ALLOC_SHIFT) 42 42 43 43 #ifdef CONFIG_RANDOM_KMALLOC_CACHES 44 - #define PERCPU_DYNAMIC_SIZE_SHIFT 12 44 + # if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PAGE_SIZE_4KB) 45 + # define PERCPU_DYNAMIC_SIZE_SHIFT 13 46 + # else 47 + # define PERCPU_DYNAMIC_SIZE_SHIFT 12 48 + #endif /* LOCKDEP and PAGE_SIZE > 4KiB */ 45 49 #else 46 50 #define PERCPU_DYNAMIC_SIZE_SHIFT 10 47 51 #endif
+5
include/linux/sched.h
··· 2133 2133 2134 2134 #endif /* CONFIG_SMP */ 2135 2135 2136 + static inline bool task_is_runnable(struct task_struct *p) 2137 + { 2138 + return p->on_rq && !p->se.sched_delayed; 2139 + } 2140 + 2136 2141 extern bool sched_task_on_rq(struct task_struct *p); 2137 2142 extern unsigned long get_wchan(struct task_struct *p); 2138 2143 extern struct task_struct *cpu_curr_snapshot(int cpu);
+1 -1
include/linux/soc/qcom/geni-se.h
··· 258 258 #define RX_DMA_PARITY_ERR BIT(5) 259 259 #define RX_DMA_BREAK GENMASK(8, 7) 260 260 #define RX_GENI_GP_IRQ GENMASK(10, 5) 261 - #define RX_GENI_CANCEL_IRQ BIT(11) 262 261 #define RX_GENI_GP_IRQ_EXT GENMASK(13, 12) 262 + #define RX_GENI_CANCEL_IRQ BIT(14) 263 263 264 264 /* SE_HW_PARAM_0 fields */ 265 265 #define TX_FIFO_WIDTH_MSK GENMASK(29, 24)
+1 -1
include/linux/soundwire/sdw_intel.h
··· 227 227 /** 228 228 * struct sdw_intel_acpi_info - Soundwire Intel information found in ACPI tables 229 229 * @handle: ACPI controller handle 230 - * @count: link count found with "sdw-master-count" property 230 + * @count: link count found with "sdw-master-count" or "sdw-manager-list" property 231 231 * @link_mask: bit-wise mask listing links enabled by BIOS menu 232 232 * 233 233 * this structure could be expanded to e.g. provide all the _ADR
+4 -1
include/linux/task_work.h
··· 14 14 } 15 15 16 16 enum task_work_notify_mode { 17 - TWA_NONE, 17 + TWA_NONE = 0, 18 18 TWA_RESUME, 19 19 TWA_SIGNAL, 20 20 TWA_SIGNAL_NO_IPI, 21 21 TWA_NMI_CURRENT, 22 + 23 + TWA_FLAGS = 0xff00, 24 + TWAF_NO_ALLOC = 0x0100, 22 25 }; 23 26 24 27 static inline bool task_work_pending(struct task_struct *task)
+1
include/net/bluetooth/bluetooth.h
··· 403 403 void bt_sock_unregister(int proto); 404 404 void bt_sock_link(struct bt_sock_list *l, struct sock *s); 405 405 void bt_sock_unlink(struct bt_sock_list *l, struct sock *s); 406 + bool bt_sock_linked(struct bt_sock_list *l, struct sock *s); 406 407 struct sock *bt_sock_alloc(struct net *net, struct socket *sock, 407 408 struct proto *prot, int proto, gfp_t prio, int kern); 408 409 int bt_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
-1
include/net/netns/xfrm.h
··· 51 51 struct hlist_head *policy_byidx; 52 52 unsigned int policy_idx_hmask; 53 53 unsigned int idx_generator; 54 - struct hlist_head policy_inexact[XFRM_POLICY_MAX]; 55 54 struct xfrm_policy_hash policy_bydst[XFRM_POLICY_MAX]; 56 55 unsigned int policy_count[XFRM_POLICY_MAX * 2]; 57 56 struct work_struct policy_hash_work;
+5
include/net/sock.h
··· 2742 2742 return sk->sk_family == AF_UNIX && sk->sk_type == SOCK_STREAM; 2743 2743 } 2744 2744 2745 + static inline bool sk_is_vsock(const struct sock *sk) 2746 + { 2747 + return sk->sk_family == AF_VSOCK; 2748 + } 2749 + 2745 2750 /** 2746 2751 * sk_eat_skb - Release a skb if it is no longer needed 2747 2752 * @sk: socket to eat this skb from
+15 -13
include/net/xfrm.h
··· 349 349 void xfrm_if_register_cb(const struct xfrm_if_cb *ifcb); 350 350 void xfrm_if_unregister_cb(void); 351 351 352 + struct xfrm_dst_lookup_params { 353 + struct net *net; 354 + int tos; 355 + int oif; 356 + xfrm_address_t *saddr; 357 + xfrm_address_t *daddr; 358 + u32 mark; 359 + __u8 ipproto; 360 + union flowi_uli uli; 361 + }; 362 + 352 363 struct net_device; 353 364 struct xfrm_type; 354 365 struct xfrm_dst; 355 366 struct xfrm_policy_afinfo { 356 367 struct dst_ops *dst_ops; 357 - struct dst_entry *(*dst_lookup)(struct net *net, 358 - int tos, int oif, 359 - const xfrm_address_t *saddr, 360 - const xfrm_address_t *daddr, 361 - u32 mark); 362 - int (*get_saddr)(struct net *net, int oif, 363 - xfrm_address_t *saddr, 364 - xfrm_address_t *daddr, 365 - u32 mark); 368 + struct dst_entry *(*dst_lookup)(const struct xfrm_dst_lookup_params *params); 369 + int (*get_saddr)(xfrm_address_t *saddr, 370 + const struct xfrm_dst_lookup_params *params); 366 371 int (*fill_dst)(struct xfrm_dst *xdst, 367 372 struct net_device *dev, 368 373 const struct flowi *fl); ··· 1769 1764 } 1770 1765 #endif 1771 1766 1772 - struct dst_entry *__xfrm_dst_lookup(struct net *net, int tos, int oif, 1773 - const xfrm_address_t *saddr, 1774 - const xfrm_address_t *daddr, 1775 - int family, u32 mark); 1767 + struct dst_entry *__xfrm_dst_lookup(int family, const struct xfrm_dst_lookup_params *params); 1776 1768 1777 1769 struct xfrm_policy *xfrm_policy_alloc(struct net *net, gfp_t gfp); 1778 1770
+8 -8
include/trace/events/dma.h
··· 121 121 122 122 TP_STRUCT__entry( 123 123 __string(device, dev_name(dev)) 124 - __field(u64, phys_addr) 124 + __field(void *, virt_addr) 125 125 __field(u64, dma_addr) 126 126 __field(size_t, size) 127 127 __field(gfp_t, flags) ··· 130 130 131 131 TP_fast_assign( 132 132 __assign_str(device); 133 - __entry->phys_addr = virt_to_phys(virt_addr); 133 + __entry->virt_addr = virt_addr; 134 134 __entry->dma_addr = dma_addr; 135 135 __entry->size = size; 136 136 __entry->flags = flags; 137 137 __entry->attrs = attrs; 138 138 ), 139 139 140 - TP_printk("%s dma_addr=%llx size=%zu phys_addr=%llx flags=%s attrs=%s", 140 + TP_printk("%s dma_addr=%llx size=%zu virt_addr=%p flags=%s attrs=%s", 141 141 __get_str(device), 142 142 __entry->dma_addr, 143 143 __entry->size, 144 - __entry->phys_addr, 144 + __entry->virt_addr, 145 145 show_gfp_flags(__entry->flags), 146 146 decode_dma_attrs(__entry->attrs)) 147 147 ); ··· 153 153 154 154 TP_STRUCT__entry( 155 155 __string(device, dev_name(dev)) 156 - __field(u64, phys_addr) 156 + __field(void *, virt_addr) 157 157 __field(u64, dma_addr) 158 158 __field(size_t, size) 159 159 __field(unsigned long, attrs) ··· 161 161 162 162 TP_fast_assign( 163 163 __assign_str(device); 164 - __entry->phys_addr = virt_to_phys(virt_addr); 164 + __entry->virt_addr = virt_addr; 165 165 __entry->dma_addr = dma_addr; 166 166 __entry->size = size; 167 167 __entry->attrs = attrs; 168 168 ), 169 169 170 - TP_printk("%s dma_addr=%llx size=%zu phys_addr=%llx attrs=%s", 170 + TP_printk("%s dma_addr=%llx size=%zu virt_addr=%p attrs=%s", 171 171 __get_str(device), 172 172 __entry->dma_addr, 173 173 __entry->size, 174 - __entry->phys_addr, 174 + __entry->virt_addr, 175 175 decode_dma_attrs(__entry->attrs)) 176 176 ); 177 177
+2 -2
include/trace/events/huge_memory.h
··· 208 208 209 209 TRACE_EVENT(mm_khugepaged_collapse_file, 210 210 TP_PROTO(struct mm_struct *mm, struct folio *new_folio, pgoff_t index, 211 - bool is_shmem, unsigned long addr, struct file *file, 211 + unsigned long addr, bool is_shmem, struct file *file, 212 212 int nr, int result), 213 213 TP_ARGS(mm, new_folio, index, addr, is_shmem, file, nr, result), 214 214 TP_STRUCT__entry( ··· 233 233 __entry->result = result; 234 234 ), 235 235 236 - TP_printk("mm=%p, hpage_pfn=0x%lx, index=%ld, addr=%ld, is_shmem=%d, filename=%s, nr=%d, result=%s", 236 + TP_printk("mm=%p, hpage_pfn=0x%lx, index=%ld, addr=%lx, is_shmem=%d, filename=%s, nr=%d, result=%s", 237 237 __entry->mm, 238 238 __entry->hpfn, 239 239 __entry->index,
-1
include/trace/events/netfs.h
··· 172 172 EM(netfs_folio_trace_read, "read") \ 173 173 EM(netfs_folio_trace_read_done, "read-done") \ 174 174 EM(netfs_folio_trace_read_gaps, "read-gaps") \ 175 - EM(netfs_folio_trace_read_put, "read-put") \ 176 175 EM(netfs_folio_trace_read_unlock, "read-unlock") \ 177 176 EM(netfs_folio_trace_redirtied, "redirtied") \ 178 177 EM(netfs_folio_trace_store, "store") \
+5 -8
include/uapi/linux/bpf.h
··· 6047 6047 BPF_F_MARK_ENFORCE = (1ULL << 6), 6048 6048 }; 6049 6049 6050 - /* BPF_FUNC_clone_redirect and BPF_FUNC_redirect flags. */ 6051 - enum { 6052 - BPF_F_INGRESS = (1ULL << 0), 6053 - }; 6054 - 6055 6050 /* BPF_FUNC_skb_set_tunnel_key and BPF_FUNC_skb_get_tunnel_key flags. */ 6056 6051 enum { 6057 6052 BPF_F_TUNINFO_IPV6 = (1ULL << 0), ··· 6193 6198 BPF_F_BPRM_SECUREEXEC = (1ULL << 0), 6194 6199 }; 6195 6200 6196 - /* Flags for bpf_redirect_map helper */ 6201 + /* Flags for bpf_redirect and bpf_redirect_map helpers */ 6197 6202 enum { 6198 - BPF_F_BROADCAST = (1ULL << 3), 6199 - BPF_F_EXCLUDE_INGRESS = (1ULL << 4), 6203 + BPF_F_INGRESS = (1ULL << 0), /* used for skb path */ 6204 + BPF_F_BROADCAST = (1ULL << 3), /* used for XDP path */ 6205 + BPF_F_EXCLUDE_INGRESS = (1ULL << 4), /* used for XDP path */ 6206 + #define BPF_F_REDIRECT_FLAGS (BPF_F_INGRESS | BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS) 6200 6207 }; 6201 6208 6202 6209 #define __bpf_md_ptr(type, name) \
+7 -1
include/uapi/linux/ublk_cmd.h
··· 175 175 /* use ioctl encoding for uring command */ 176 176 #define UBLK_F_CMD_IOCTL_ENCODE (1UL << 6) 177 177 178 - /* Copy between request and user buffer by pread()/pwrite() */ 178 + /* 179 + * Copy between request and user buffer by pread()/pwrite() 180 + * 181 + * Not available for UBLK_F_UNPRIVILEGED_DEV, otherwise userspace may 182 + * deceive us by not filling request buffer, then kernel uninitialized 183 + * data may be leaked. 184 + */ 179 185 #define UBLK_F_USER_COPY (1UL << 7) 180 186 181 187 /*
+9 -5
include/xen/acpi.h
··· 35 35 36 36 #include <linux/types.h> 37 37 38 + typedef int (*get_gsi_from_sbdf_t)(u32 sbdf); 39 + 38 40 #ifdef CONFIG_XEN_DOM0 39 41 #include <asm/xen/hypervisor.h> 40 42 #include <xen/xen.h> ··· 74 72 int *gsi_out, 75 73 int *trigger_out, 76 74 int *polarity_out); 75 + void xen_acpi_register_get_gsi_func(get_gsi_from_sbdf_t func); 76 + int xen_acpi_get_gsi_from_sbdf(u32 sbdf); 77 77 #else 78 78 static inline void xen_acpi_sleep_register(void) 79 79 { ··· 93 89 { 94 90 return -1; 95 91 } 96 - #endif 97 92 98 - #ifdef CONFIG_XEN_PCI_STUB 99 - int pcistub_get_gsi_from_sbdf(unsigned int sbdf); 100 - #else 101 - static inline int pcistub_get_gsi_from_sbdf(unsigned int sbdf) 93 + static inline void xen_acpi_register_get_gsi_func(get_gsi_from_sbdf_t func) 94 + { 95 + } 96 + 97 + static inline int xen_acpi_get_gsi_from_sbdf(u32 sbdf) 102 98 { 103 99 return -1; 104 100 }
+6 -2
init/Kconfig
··· 62 62 63 63 config RUSTC_VERSION 64 64 int 65 - default $(shell,$(srctree)/scripts/rustc-version.sh $(RUSTC)) 65 + default $(rustc-version) 66 66 help 67 67 It does not depend on `RUST` since that one may need to use the version 68 68 in a `depends on`. ··· 77 77 78 78 In particular, the Makefile target 'rustavailable' is useful to check 79 79 why the Rust toolchain is not being detected. 80 + 81 + config RUSTC_LLVM_VERSION 82 + int 83 + default $(rustc-llvm-version) 80 84 81 85 config CC_CAN_LINK 82 86 bool ··· 1950 1946 depends on !GCC_PLUGIN_RANDSTRUCT 1951 1947 depends on !RANDSTRUCT 1952 1948 depends on !DEBUG_INFO_BTF || PAHOLE_HAS_LANG_EXCLUDE 1953 - depends on !CFI_CLANG || RUSTC_VERSION >= 107900 && HAVE_CFI_ICALL_NORMALIZE_INTEGERS 1949 + depends on !CFI_CLANG || HAVE_CFI_ICALL_NORMALIZE_INTEGERS_RUSTC 1954 1950 select CFI_ICALL_NORMALIZE_INTEGERS if CFI_CLANG 1955 1951 depends on !CALL_PADDING || RUSTC_VERSION >= 108100 1956 1952 depends on !KASAN_SW_TAGS
+9 -1
io_uring/io_uring.h
··· 284 284 { 285 285 struct io_rings *r = ctx->rings; 286 286 287 - return READ_ONCE(r->sq.tail) - ctx->cached_sq_head == ctx->sq_entries; 287 + /* 288 + * SQPOLL must use the actual sqring head, as using the cached_sq_head 289 + * is race prone if the SQPOLL thread has grabbed entries but not yet 290 + * committed them to the ring. For !SQPOLL, this doesn't matter, but 291 + * since this helper is just used for SQPOLL sqring waits (or POLLOUT), 292 + * just read the actual sqring head unconditionally. 293 + */ 294 + return READ_ONCE(r->sq.tail) - READ_ONCE(r->sq.head) == ctx->sq_entries; 288 295 } 289 296 290 297 static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx) ··· 327 320 if (current->io_uring) { 328 321 unsigned int count = 0; 329 322 323 + __set_current_state(TASK_RUNNING); 330 324 tctx_task_work_run(current->io_uring, UINT_MAX, &count); 331 325 if (count) 332 326 ret = true;
+2 -1
io_uring/rsrc.c
··· 1176 1176 for (i = 0; i < nbufs; i++) { 1177 1177 struct io_mapped_ubuf *src = src_ctx->user_bufs[i]; 1178 1178 1179 - refcount_inc(&src->refs); 1179 + if (src != &dummy_ubuf) 1180 + refcount_inc(&src->refs); 1180 1181 user_bufs[i] = src; 1181 1182 } 1182 1183
+1 -1
io_uring/rw.c
··· 807 807 * reliably. If not, or it IOCB_NOWAIT is set, don't retry. 808 808 */ 809 809 if (kiocb->ki_flags & IOCB_NOWAIT || 810 - ((file->f_flags & O_NONBLOCK && (req->flags & REQ_F_SUPPORT_NOWAIT)))) 810 + ((file->f_flags & O_NONBLOCK && !(req->flags & REQ_F_SUPPORT_NOWAIT)))) 811 811 req->flags |= REQ_F_NOWAIT; 812 812 813 813 if (ctx->flags & IORING_SETUP_IOPOLL) {
-4
kernel/bpf/bpf_lsm.c
··· 339 339 BTF_ID(func, bpf_lsm_path_chown) 340 340 #endif /* CONFIG_SECURITY_PATH */ 341 341 342 - #ifdef CONFIG_KEYS 343 - BTF_ID(func, bpf_lsm_key_free) 344 - #endif /* CONFIG_KEYS */ 345 - 346 342 BTF_ID(func, bpf_lsm_mmap_file) 347 343 BTF_ID(func, bpf_lsm_netlink_send) 348 344 BTF_ID(func, bpf_lsm_path_notify)
+11 -4
kernel/bpf/btf.c
··· 3523 3523 * (i + 1) * elem_size 3524 3524 * where i is the repeat index and elem_size is the size of an element. 3525 3525 */ 3526 - static int btf_repeat_fields(struct btf_field_info *info, 3526 + static int btf_repeat_fields(struct btf_field_info *info, int info_cnt, 3527 3527 u32 field_cnt, u32 repeat_cnt, u32 elem_size) 3528 3528 { 3529 3529 u32 i, j; ··· 3542 3542 return -EINVAL; 3543 3543 } 3544 3544 } 3545 + 3546 + /* The type of struct size or variable size is u32, 3547 + * so the multiplication will not overflow. 3548 + */ 3549 + if (field_cnt * (repeat_cnt + 1) > info_cnt) 3550 + return -E2BIG; 3545 3551 3546 3552 cur = field_cnt; 3547 3553 for (i = 0; i < repeat_cnt; i++) { ··· 3593 3587 info[i].off += off; 3594 3588 3595 3589 if (nelems > 1) { 3596 - err = btf_repeat_fields(info, ret, nelems - 1, t->size); 3590 + err = btf_repeat_fields(info, info_cnt, ret, nelems - 1, t->size); 3597 3591 if (err == 0) 3598 3592 ret *= nelems; 3599 3593 else ··· 3687 3681 3688 3682 if (ret == BTF_FIELD_IGNORE) 3689 3683 return 0; 3690 - if (nelems > info_cnt) 3684 + if (!info_cnt) 3691 3685 return -E2BIG; 3692 3686 if (nelems > 1) { 3693 - ret = btf_repeat_fields(info, 1, nelems - 1, sz); 3687 + ret = btf_repeat_fields(info, info_cnt, 1, nelems - 1, sz); 3694 3688 if (ret < 0) 3695 3689 return ret; 3696 3690 } ··· 8967 8961 if (!type) { 8968 8962 bpf_log(ctx->log, "relo #%u: bad type id %u\n", 8969 8963 relo_idx, relo->type_id); 8964 + kfree(specs); 8970 8965 return -EINVAL; 8971 8966 } 8972 8967
+7 -4
kernel/bpf/devmap.c
··· 333 333 334 334 static int dev_map_bpf_prog_run(struct bpf_prog *xdp_prog, 335 335 struct xdp_frame **frames, int n, 336 - struct net_device *dev) 336 + struct net_device *tx_dev, 337 + struct net_device *rx_dev) 337 338 { 338 - struct xdp_txq_info txq = { .dev = dev }; 339 + struct xdp_txq_info txq = { .dev = tx_dev }; 340 + struct xdp_rxq_info rxq = { .dev = rx_dev }; 339 341 struct xdp_buff xdp; 340 342 int i, nframes = 0; 341 343 ··· 348 346 349 347 xdp_convert_frame_to_buff(xdpf, &xdp); 350 348 xdp.txq = &txq; 349 + xdp.rxq = &rxq; 351 350 352 351 act = bpf_prog_run_xdp(xdp_prog, &xdp); 353 352 switch (act) { ··· 363 360 bpf_warn_invalid_xdp_action(NULL, xdp_prog, act); 364 361 fallthrough; 365 362 case XDP_ABORTED: 366 - trace_xdp_exception(dev, xdp_prog, act); 363 + trace_xdp_exception(tx_dev, xdp_prog, act); 367 364 fallthrough; 368 365 case XDP_DROP: 369 366 xdp_return_frame_rx_napi(xdpf); ··· 391 388 } 392 389 393 390 if (bq->xdp_prog) { 394 - to_send = dev_map_bpf_prog_run(bq->xdp_prog, bq->q, cnt, dev); 391 + to_send = dev_map_bpf_prog_run(bq->xdp_prog, bq->q, cnt, dev, bq->dev_rx); 395 392 if (!to_send) 396 393 goto out; 397 394 }
+1 -2
kernel/bpf/log.c
··· 688 688 if (t == SCALAR_VALUE && reg->precise) 689 689 verbose(env, "P"); 690 690 if (t == SCALAR_VALUE && tnum_is_const(reg->var_off)) { 691 - /* reg->off should be 0 for SCALAR_VALUE */ 692 - verbose_snum(env, reg->var_off.value + reg->off); 691 + verbose_snum(env, reg->var_off.value); 693 692 return; 694 693 } 695 694
+6 -6
kernel/bpf/ringbuf.c
··· 29 29 u64 mask; 30 30 struct page **pages; 31 31 int nr_pages; 32 - spinlock_t spinlock ____cacheline_aligned_in_smp; 32 + raw_spinlock_t spinlock ____cacheline_aligned_in_smp; 33 33 /* For user-space producer ring buffers, an atomic_t busy bit is used 34 34 * to synchronize access to the ring buffers in the kernel, rather than 35 35 * the spinlock that is used for kernel-producer ring buffers. This is ··· 173 173 if (!rb) 174 174 return NULL; 175 175 176 - spin_lock_init(&rb->spinlock); 176 + raw_spin_lock_init(&rb->spinlock); 177 177 atomic_set(&rb->busy, 0); 178 178 init_waitqueue_head(&rb->waitq); 179 179 init_irq_work(&rb->work, bpf_ringbuf_notify); ··· 421 421 cons_pos = smp_load_acquire(&rb->consumer_pos); 422 422 423 423 if (in_nmi()) { 424 - if (!spin_trylock_irqsave(&rb->spinlock, flags)) 424 + if (!raw_spin_trylock_irqsave(&rb->spinlock, flags)) 425 425 return NULL; 426 426 } else { 427 - spin_lock_irqsave(&rb->spinlock, flags); 427 + raw_spin_lock_irqsave(&rb->spinlock, flags); 428 428 } 429 429 430 430 pend_pos = rb->pending_pos; ··· 450 450 */ 451 451 if (new_prod_pos - cons_pos > rb->mask || 452 452 new_prod_pos - pend_pos > rb->mask) { 453 - spin_unlock_irqrestore(&rb->spinlock, flags); 453 + raw_spin_unlock_irqrestore(&rb->spinlock, flags); 454 454 return NULL; 455 455 } 456 456 ··· 462 462 /* pairs with consumer's smp_load_acquire() */ 463 463 smp_store_release(&rb->producer_pos, new_prod_pos); 464 464 465 - spin_unlock_irqrestore(&rb->spinlock, flags); 465 + raw_spin_unlock_irqrestore(&rb->spinlock, flags); 466 466 467 467 return (void *)hdr + BPF_RINGBUF_HDR_SZ; 468 468 }
+23 -8
kernel/bpf/syscall.c
··· 3565 3565 } 3566 3566 3567 3567 static int bpf_perf_link_fill_common(const struct perf_event *event, 3568 - char __user *uname, u32 ulen, 3568 + char __user *uname, u32 *ulenp, 3569 3569 u64 *probe_offset, u64 *probe_addr, 3570 3570 u32 *fd_type, unsigned long *missed) 3571 3571 { 3572 3572 const char *buf; 3573 - u32 prog_id; 3573 + u32 prog_id, ulen; 3574 3574 size_t len; 3575 3575 int err; 3576 3576 3577 + ulen = *ulenp; 3577 3578 if (!ulen ^ !uname) 3578 3579 return -EINVAL; 3579 3580 ··· 3582 3581 probe_offset, probe_addr, missed); 3583 3582 if (err) 3584 3583 return err; 3585 - if (!uname) 3586 - return 0; 3584 + 3587 3585 if (buf) { 3588 3586 len = strlen(buf); 3587 + *ulenp = len + 1; 3588 + } else { 3589 + *ulenp = 1; 3590 + } 3591 + if (!uname) 3592 + return 0; 3593 + 3594 + if (buf) { 3589 3595 err = bpf_copy_to_user(uname, buf, ulen, len); 3590 3596 if (err) 3591 3597 return err; ··· 3617 3609 3618 3610 uname = u64_to_user_ptr(info->perf_event.kprobe.func_name); 3619 3611 ulen = info->perf_event.kprobe.name_len; 3620 - err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr, 3612 + err = bpf_perf_link_fill_common(event, uname, &ulen, &offset, &addr, 3621 3613 &type, &missed); 3622 3614 if (err) 3623 3615 return err; ··· 3625 3617 info->perf_event.type = BPF_PERF_EVENT_KRETPROBE; 3626 3618 else 3627 3619 info->perf_event.type = BPF_PERF_EVENT_KPROBE; 3628 - 3620 + info->perf_event.kprobe.name_len = ulen; 3629 3621 info->perf_event.kprobe.offset = offset; 3630 3622 info->perf_event.kprobe.missed = missed; 3631 3623 if (!kallsyms_show_value(current_cred())) ··· 3647 3639 3648 3640 uname = u64_to_user_ptr(info->perf_event.uprobe.file_name); 3649 3641 ulen = info->perf_event.uprobe.name_len; 3650 - err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr, 3642 + err = bpf_perf_link_fill_common(event, uname, &ulen, &offset, &addr, 3651 3643 &type, NULL); 3652 3644 if (err) 3653 3645 return err; ··· 3656 3648 info->perf_event.type = 
BPF_PERF_EVENT_URETPROBE; 3657 3649 else 3658 3650 info->perf_event.type = BPF_PERF_EVENT_UPROBE; 3651 + info->perf_event.uprobe.name_len = ulen; 3659 3652 info->perf_event.uprobe.offset = offset; 3660 3653 info->perf_event.uprobe.cookie = event->bpf_cookie; 3661 3654 return 0; ··· 3682 3673 { 3683 3674 char __user *uname; 3684 3675 u32 ulen; 3676 + int err; 3685 3677 3686 3678 uname = u64_to_user_ptr(info->perf_event.tracepoint.tp_name); 3687 3679 ulen = info->perf_event.tracepoint.name_len; 3680 + err = bpf_perf_link_fill_common(event, uname, &ulen, NULL, NULL, NULL, NULL); 3681 + if (err) 3682 + return err; 3683 + 3688 3684 info->perf_event.type = BPF_PERF_EVENT_TRACEPOINT; 3685 + info->perf_event.tracepoint.name_len = ulen; 3689 3686 info->perf_event.tracepoint.cookie = event->bpf_cookie; 3690 - return bpf_perf_link_fill_common(event, uname, ulen, NULL, NULL, NULL, NULL); 3687 + return 0; 3691 3688 } 3692 3689 3693 3690 static int bpf_perf_link_fill_perf_event(const struct perf_event *event,
+1 -1
kernel/bpf/task_iter.c
··· 99 99 rcu_read_lock(); 100 100 pid = find_pid_ns(common->pid, common->ns); 101 101 if (pid) { 102 - task = get_pid_task(pid, PIDTYPE_TGID); 102 + task = get_pid_task(pid, PIDTYPE_PID); 103 103 *tid = common->pid; 104 104 } 105 105 rcu_read_unlock();
+24 -12
kernel/bpf/verifier.c
··· 2750 2750 b->module = mod; 2751 2751 b->offset = offset; 2752 2752 2753 + /* sort() reorders entries by value, so b may no longer point 2754 + * to the right entry after this 2755 + */ 2753 2756 sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]), 2754 2757 kfunc_btf_cmp_by_off, NULL); 2758 + } else { 2759 + btf = b->btf; 2755 2760 } 2756 - return b->btf; 2761 + 2762 + return btf; 2757 2763 } 2758 2764 2759 2765 void bpf_free_kfunc_btf_tab(struct bpf_kfunc_btf_tab *tab) ··· 6339 6333 6340 6334 /* both of s64_max/s64_min positive or negative */ 6341 6335 if ((s64_max >= 0) == (s64_min >= 0)) { 6342 - reg->smin_value = reg->s32_min_value = s64_min; 6343 - reg->smax_value = reg->s32_max_value = s64_max; 6344 - reg->umin_value = reg->u32_min_value = s64_min; 6345 - reg->umax_value = reg->u32_max_value = s64_max; 6336 + reg->s32_min_value = reg->smin_value = s64_min; 6337 + reg->s32_max_value = reg->smax_value = s64_max; 6338 + reg->u32_min_value = reg->umin_value = s64_min; 6339 + reg->u32_max_value = reg->umax_value = s64_max; 6346 6340 reg->var_off = tnum_range(s64_min, s64_max); 6347 6341 return; 6348 6342 } ··· 14270 14264 * r1 += 0x1 14271 14265 * if r2 < 1000 goto ... 14272 14266 * use r1 in memory access 14273 - * So remember constant delta between r2 and r1 and update r1 after 14274 - * 'if' condition. 14267 + * So for 64-bit alu remember constant delta between r2 and r1 and 14268 + * update r1 after 'if' condition. 
14275 14269 */ 14276 - if (env->bpf_capable && BPF_OP(insn->code) == BPF_ADD && 14277 - dst_reg->id && is_reg_const(src_reg, alu32)) { 14278 - u64 val = reg_const_value(src_reg, alu32); 14270 + if (env->bpf_capable && 14271 + BPF_OP(insn->code) == BPF_ADD && !alu32 && 14272 + dst_reg->id && is_reg_const(src_reg, false)) { 14273 + u64 val = reg_const_value(src_reg, false); 14279 14274 14280 14275 if ((dst_reg->id & BPF_ADD_CONST) || 14281 14276 /* prevent overflow in sync_linked_regs() later */ ··· 15333 15326 continue; 15334 15327 if ((!(reg->id & BPF_ADD_CONST) && !(known_reg->id & BPF_ADD_CONST)) || 15335 15328 reg->off == known_reg->off) { 15329 + s32 saved_subreg_def = reg->subreg_def; 15330 + 15336 15331 copy_register_state(reg, known_reg); 15332 + reg->subreg_def = saved_subreg_def; 15337 15333 } else { 15334 + s32 saved_subreg_def = reg->subreg_def; 15338 15335 s32 saved_off = reg->off; 15339 15336 15340 15337 fake_reg.type = SCALAR_VALUE; ··· 15351 15340 * otherwise another sync_linked_regs() will be incorrect. 15352 15341 */ 15353 15342 reg->off = saved_off; 15343 + reg->subreg_def = saved_subreg_def; 15354 15344 15355 15345 scalar32_min_max_add(reg, &fake_reg); 15356 15346 scalar_min_max_add(reg, &fake_reg); ··· 22322 22310 /* 'struct bpf_verifier_env' can be global, but since it's not small, 22323 22311 * allocate/free it every time bpf_check() is called 22324 22312 */ 22325 - env = kzalloc(sizeof(struct bpf_verifier_env), GFP_KERNEL); 22313 + env = kvzalloc(sizeof(struct bpf_verifier_env), GFP_KERNEL); 22326 22314 if (!env) 22327 22315 return -ENOMEM; 22328 22316 ··· 22558 22546 mutex_unlock(&bpf_verifier_lock); 22559 22547 vfree(env->insn_aux_data); 22560 22548 err_free_env: 22561 - kfree(env); 22549 + kvfree(env); 22562 22550 return ret; 22563 22551 }
+1 -1
kernel/events/core.c
··· 9251 9251 }, 9252 9252 }; 9253 9253 9254 - if (!sched_in && task->on_rq) { 9254 + if (!sched_in && task_is_runnable(task)) { 9255 9255 switch_event.event_id.header.misc |= 9256 9256 PERF_RECORD_MISC_SWITCH_OUT_PREEMPT; 9257 9257 }
+6 -1
kernel/freezer.c
··· 109 109 { 110 110 unsigned int state = READ_ONCE(p->__state); 111 111 112 - if (p->on_rq) 112 + /* 113 + * Allow freezing the sched_delayed tasks; they will not execute until 114 + * ttwu() fixes them up, so it is safe to swap their state now, instead 115 + * of waiting for them to get fully dequeued. 116 + */ 117 + if (task_is_runnable(p)) 113 118 return 0; 114 119 115 120 if (p != current && task_curr(p))
+9
kernel/rcu/tasks.h
··· 986 986 return false; 987 987 988 988 /* 989 + * t->on_rq && !t->se.sched_delayed *could* be considered sleeping but 990 + * since it is a spurious state (it will transition into the 991 + * traditional blocked state or get woken up without outside 992 + * dependencies), not considering it such should only affect timing. 993 + * 994 + * Be conservative for now and not include it. 995 + */ 996 + 997 + /* 989 998 * Idle tasks (or idle injection) within the idle loop are RCU-tasks 990 999 * quiescent states. But CPU boot code performed by the idle task 991 1000 * isn't a quiescent state.
+41 -24
kernel/sched/core.c
··· 548 548 * ON_RQ_MIGRATING state is used for migration without holding both 549 549 * rq->locks. It indicates task_cpu() is not stable, see task_rq_lock(). 550 550 * 551 + * Additionally it is possible to be ->on_rq but still be considered not 552 + * runnable when p->se.sched_delayed is true. These tasks are on the runqueue 553 + * but will be dequeued as soon as they get picked again. See the 554 + * task_is_runnable() helper. 555 + * 551 556 * p->on_cpu <- { 0, 1 }: 552 557 * 553 558 * is set by prepare_task() and cleared by finish_task() such that it will be ··· 2017 2012 if (!(flags & ENQUEUE_NOCLOCK)) 2018 2013 update_rq_clock(rq); 2019 2014 2020 - if (!(flags & ENQUEUE_RESTORE)) { 2021 - sched_info_enqueue(rq, p); 2022 - psi_enqueue(p, (flags & ENQUEUE_WAKEUP) && !(flags & ENQUEUE_MIGRATED)); 2023 - } 2024 - 2025 2015 p->sched_class->enqueue_task(rq, p, flags); 2026 2016 /* 2027 2017 * Must be after ->enqueue_task() because ENQUEUE_DELAYED can clear 2028 2018 * ->sched_delayed. 2029 2019 */ 2030 2020 uclamp_rq_inc(rq, p); 2021 + 2022 + if (!(flags & ENQUEUE_RESTORE)) { 2023 + sched_info_enqueue(rq, p); 2024 + psi_enqueue(p, flags & ENQUEUE_MIGRATED); 2025 + } 2031 2026 2032 2027 if (sched_core_enabled(rq)) 2033 2028 sched_core_enqueue(rq, p); ··· 2046 2041 2047 2042 if (!(flags & DEQUEUE_SAVE)) { 2048 2043 sched_info_dequeue(rq, p); 2049 - psi_dequeue(p, flags & DEQUEUE_SLEEP); 2044 + psi_dequeue(p, !(flags & DEQUEUE_SLEEP)); 2050 2045 } 2051 2046 2052 2047 /* ··· 4328 4323 * @arg: Argument to function. 4329 4324 * 4330 4325 * Fix the task in it's current state by avoiding wakeups and or rq operations 4331 - * and call @func(@arg) on it. This function can use ->on_rq and task_curr() 4332 - * to work out what the state is, if required. Given that @func can be invoked 4333 - * with a runqueue lock held, it had better be quite lightweight. 4326 + * and call @func(@arg) on it. 
This function can use task_is_runnable() and 4327 + * task_curr() to work out what the state is, if required. Given that @func 4328 + * can be invoked with a runqueue lock held, it had better be quite 4329 + * lightweight. 4334 4330 * 4335 4331 * Returns: 4336 4332 * Whatever @func returns ··· 6550 6544 * as a preemption by schedule_debug() and RCU. 6551 6545 */ 6552 6546 bool preempt = sched_mode > SM_NONE; 6547 + bool block = false; 6553 6548 unsigned long *switch_count; 6554 6549 unsigned long prev_state; 6555 6550 struct rq_flags rf; ··· 6636 6629 * After this, schedule() must not care about p->state any more. 6637 6630 */ 6638 6631 block_task(rq, prev, flags); 6632 + block = true; 6639 6633 } 6640 6634 switch_count = &prev->nvcsw; 6641 6635 } ··· 6682 6674 6683 6675 migrate_disable_switch(rq, prev); 6684 6676 psi_account_irqtime(rq, prev, next); 6685 - psi_sched_switch(prev, next, !task_on_rq_queued(prev)); 6677 + psi_sched_switch(prev, next, block); 6686 6678 6687 6679 trace_sched_switch(preempt, prev, next, prev_state); 6688 6680 ··· 7025 7017 } 7026 7018 EXPORT_SYMBOL(default_wake_function); 7027 7019 7028 - void __setscheduler_prio(struct task_struct *p, int prio) 7020 + const struct sched_class *__setscheduler_class(struct task_struct *p, int prio) 7029 7021 { 7030 7022 if (dl_prio(prio)) 7031 - p->sched_class = &dl_sched_class; 7032 - else if (rt_prio(prio)) 7033 - p->sched_class = &rt_sched_class; 7034 - #ifdef CONFIG_SCHED_CLASS_EXT 7035 - else if (task_should_scx(p)) 7036 - p->sched_class = &ext_sched_class; 7037 - #endif 7038 - else 7039 - p->sched_class = &fair_sched_class; 7023 + return &dl_sched_class; 7040 7024 7041 - p->prio = prio; 7025 + if (rt_prio(prio)) 7026 + return &rt_sched_class; 7027 + 7028 + #ifdef CONFIG_SCHED_CLASS_EXT 7029 + if (task_should_scx(p)) 7030 + return &ext_sched_class; 7031 + #endif 7032 + 7033 + return &fair_sched_class; 7042 7034 } 7043 7035 7044 7036 #ifdef CONFIG_RT_MUTEXES ··· 7084 7076 { 7085 7077 int prio, 
oldprio, queued, running, queue_flag = 7086 7078 DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK; 7087 - const struct sched_class *prev_class; 7079 + const struct sched_class *prev_class, *next_class; 7088 7080 struct rq_flags rf; 7089 7081 struct rq *rq; 7090 7082 ··· 7142 7134 queue_flag &= ~DEQUEUE_MOVE; 7143 7135 7144 7136 prev_class = p->sched_class; 7137 + next_class = __setscheduler_class(p, prio); 7138 + 7139 + if (prev_class != next_class && p->se.sched_delayed) 7140 + dequeue_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_DELAYED | DEQUEUE_NOCLOCK); 7141 + 7145 7142 queued = task_on_rq_queued(p); 7146 7143 running = task_current(rq, p); 7147 7144 if (queued) ··· 7184 7171 p->rt.timeout = 0; 7185 7172 } 7186 7173 7187 - __setscheduler_prio(p, prio); 7174 + p->sched_class = next_class; 7175 + p->prio = prio; 7176 + 7188 7177 check_class_changing(rq, p, prev_class); 7189 7178 7190 7179 if (queued) ··· 10480 10465 return; 10481 10466 if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan))) 10482 10467 return; 10483 - task_work_add(curr, work, TWA_RESUME); 10468 + 10469 + /* No page allocation under rq lock */ 10470 + task_work_add(curr, work, TWA_RESUME | TWAF_NO_ALLOC); 10484 10471 } 10485 10472 10486 10473 void sched_mm_cid_exit_signals(struct task_struct *t)
+1 -1
kernel/sched/deadline.c
··· 2385 2385 2386 2386 deadline_queue_push_tasks(rq); 2387 2387 2388 - if (hrtick_enabled(rq)) 2388 + if (hrtick_enabled_dl(rq)) 2389 2389 start_hrtick_dl(rq, &p->dl); 2390 2390 } 2391 2391
+2 -2
kernel/sched/ext.c
··· 4493 4493 4494 4494 sched_deq_and_put_task(p, DEQUEUE_SAVE | DEQUEUE_MOVE, &ctx); 4495 4495 4496 - __setscheduler_prio(p, p->prio); 4496 + p->sched_class = __setscheduler_class(p, p->prio); 4497 4497 check_class_changing(task_rq(p), p, old_class); 4498 4498 4499 4499 sched_enq_and_set_task(&ctx); ··· 5204 5204 sched_deq_and_put_task(p, DEQUEUE_SAVE | DEQUEUE_MOVE, &ctx); 5205 5205 5206 5206 p->scx.slice = SCX_SLICE_DFL; 5207 - __setscheduler_prio(p, p->prio); 5207 + p->sched_class = __setscheduler_class(p, p->prio); 5208 5208 check_class_changing(task_rq(p), p, old_class); 5209 5209 5210 5210 sched_enq_and_set_task(&ctx);
+7 -20
kernel/sched/fair.c
··· 1247 1247 1248 1248 account_cfs_rq_runtime(cfs_rq, delta_exec); 1249 1249 1250 - if (rq->nr_running == 1) 1250 + if (cfs_rq->nr_running == 1) 1251 1251 return; 1252 1252 1253 1253 if (resched || did_preempt_short(cfs_rq, curr)) { ··· 6058 6058 for_each_sched_entity(se) { 6059 6059 struct cfs_rq *qcfs_rq = cfs_rq_of(se); 6060 6060 6061 - if (se->on_rq) { 6062 - SCHED_WARN_ON(se->sched_delayed); 6061 + /* Handle any unfinished DELAY_DEQUEUE business first. */ 6062 + if (se->sched_delayed) { 6063 + int flags = DEQUEUE_SLEEP | DEQUEUE_DELAYED; 6064 + 6065 + dequeue_entity(qcfs_rq, se, flags); 6066 + } else if (se->on_rq) 6063 6067 break; 6064 - } 6065 6068 enqueue_entity(qcfs_rq, se, ENQUEUE_WAKEUP); 6066 6069 6067 6070 if (cfs_rq_is_idle(group_cfs_rq(se))) ··· 13177 13174 static void switched_from_fair(struct rq *rq, struct task_struct *p) 13178 13175 { 13179 13176 detach_task_cfs_rq(p); 13180 - /* 13181 - * Since this is called after changing class, this is a little weird 13182 - * and we cannot use DEQUEUE_DELAYED. 13183 - */ 13184 - if (p->se.sched_delayed) { 13185 - /* First, dequeue it from its new class' structures */ 13186 - dequeue_task(rq, p, DEQUEUE_NOCLOCK | DEQUEUE_SLEEP); 13187 - /* 13188 - * Now, clean up the fair_sched_class side of things 13189 - * related to sched_delayed being true and that wasn't done 13190 - * due to the generic dequeue not using DEQUEUE_DELAYED. 13191 - */ 13192 - finish_delayed_dequeue_entity(&p->se); 13193 - p->se.rel_deadline = 0; 13194 - __block_task(rq, p); 13195 - } 13196 13177 } 13197 13178 13198 13179 static void switched_to_fair(struct rq *rq, struct task_struct *p)
+1 -1
kernel/sched/sched.h
··· 3800 3800 3801 3801 extern int __sched_setscheduler(struct task_struct *p, const struct sched_attr *attr, bool user, bool pi); 3802 3802 extern int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx); 3803 - extern void __setscheduler_prio(struct task_struct *p, int prio); 3803 + extern const struct sched_class *__setscheduler_class(struct task_struct *p, int prio); 3804 3804 extern void set_load_weight(struct task_struct *p, bool update_load); 3805 3805 extern void enqueue_task(struct rq *rq, struct task_struct *p, int flags); 3806 3806 extern bool dequeue_task(struct rq *rq, struct task_struct *p, int flags);
+33 -15
kernel/sched/stats.h
··· 119 119 /* 120 120 * PSI tracks state that persists across sleeps, such as iowaits and 121 121 * memory stalls. As a result, it has to distinguish between sleeps, 122 - * where a task's runnable state changes, and requeues, where a task 123 - * and its state are being moved between CPUs and runqueues. 122 + * where a task's runnable state changes, and migrations, where a task 123 + * and its runnable state are being moved between CPUs and runqueues. 124 + * 125 + * A notable case is a task whose dequeue is delayed. PSI considers 126 + * those sleeping, but because they are still on the runqueue they can 127 + * go through migration requeues. In this case, *sleeping* states need 128 + * to be transferred. 124 129 */ 125 - static inline void psi_enqueue(struct task_struct *p, bool wakeup) 130 + static inline void psi_enqueue(struct task_struct *p, bool migrate) 126 131 { 127 - int clear = 0, set = TSK_RUNNING; 132 + int clear = 0, set = 0; 128 133 129 134 if (static_branch_likely(&psi_disabled)) 130 135 return; 131 136 132 - if (p->in_memstall) 133 - set |= TSK_MEMSTALL_RUNNING; 134 - 135 - if (!wakeup) { 137 + if (p->se.sched_delayed) { 138 + /* CPU migration of "sleeping" task */ 139 + SCHED_WARN_ON(!migrate); 136 140 if (p->in_memstall) 137 141 set |= TSK_MEMSTALL; 142 + if (p->in_iowait) 143 + set |= TSK_IOWAIT; 144 + } else if (migrate) { 145 + /* CPU migration of runnable task */ 146 + set = TSK_RUNNING; 147 + if (p->in_memstall) 148 + set |= TSK_MEMSTALL | TSK_MEMSTALL_RUNNING; 138 149 } else { 150 + /* Wakeup of new or sleeping task */ 139 151 if (p->in_iowait) 140 152 clear |= TSK_IOWAIT; 153 + set = TSK_RUNNING; 154 + if (p->in_memstall) 155 + set |= TSK_MEMSTALL_RUNNING; 141 156 } 142 157 143 158 psi_task_change(p, clear, set); 144 159 } 145 160 146 - static inline void psi_dequeue(struct task_struct *p, bool sleep) 161 + static inline void psi_dequeue(struct task_struct *p, bool migrate) 147 162 { 148 163 if (static_branch_likely(&psi_disabled)) 149 
164 return; 165 + 166 + /* 167 + * When migrating a task to another CPU, clear all psi 168 + * state. The enqueue callback above will work it out. 169 + */ 170 + if (migrate) 171 + psi_task_change(p, p->psi_flags, 0); 150 172 151 173 /* 152 174 * A voluntary sleep is a dequeue followed by a task switch. To ··· 176 154 * TSK_RUNNING and TSK_IOWAIT for us when it moves TSK_ONCPU. 177 155 * Do nothing here. 178 156 */ 179 - if (sleep) 180 - return; 181 - 182 - psi_task_change(p, p->psi_flags, 0); 183 157 } 184 158 185 159 static inline void psi_ttwu_dequeue(struct task_struct *p) ··· 208 190 } 209 191 210 192 #else /* CONFIG_PSI */ 211 - static inline void psi_enqueue(struct task_struct *p, bool wakeup) {} 212 - static inline void psi_dequeue(struct task_struct *p, bool sleep) {} 193 + static inline void psi_enqueue(struct task_struct *p, bool migrate) {} 194 + static inline void psi_dequeue(struct task_struct *p, bool migrate) {} 213 195 static inline void psi_ttwu_dequeue(struct task_struct *p) {} 214 196 static inline void psi_sched_switch(struct task_struct *prev, 215 197 struct task_struct *next,
+9 -4
kernel/sched/syscalls.c
··· 529 529 { 530 530 int oldpolicy = -1, policy = attr->sched_policy; 531 531 int retval, oldprio, newprio, queued, running; 532 - const struct sched_class *prev_class; 532 + const struct sched_class *prev_class, *next_class; 533 533 struct balance_callback *head; 534 534 struct rq_flags rf; 535 535 int reset_on_fork; ··· 706 706 queue_flags &= ~DEQUEUE_MOVE; 707 707 } 708 708 709 + prev_class = p->sched_class; 710 + next_class = __setscheduler_class(p, newprio); 711 + 712 + if (prev_class != next_class && p->se.sched_delayed) 713 + dequeue_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_DELAYED | DEQUEUE_NOCLOCK); 714 + 709 715 queued = task_on_rq_queued(p); 710 716 running = task_current(rq, p); 711 717 if (queued) ··· 719 713 if (running) 720 714 put_prev_task(rq, p); 721 715 722 - prev_class = p->sched_class; 723 - 724 716 if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) { 725 717 __setscheduler_params(p, attr); 726 - __setscheduler_prio(p, newprio); 718 + p->sched_class = next_class; 719 + p->prio = newprio; 727 720 } 728 721 __setscheduler_uclamp(p, attr); 729 722 check_class_changing(rq, p, prev_class);
+13 -2
kernel/task_work.c
··· 55 55 enum task_work_notify_mode notify) 56 56 { 57 57 struct callback_head *head; 58 + int flags = notify & TWA_FLAGS; 58 59 60 + notify &= ~TWA_FLAGS; 59 61 if (notify == TWA_NMI_CURRENT) { 60 62 if (WARN_ON_ONCE(task != current)) 61 63 return -EINVAL; 62 64 if (!IS_ENABLED(CONFIG_IRQ_WORK)) 63 65 return -EINVAL; 64 66 } else { 65 - /* record the work call stack in order to print it in KASAN reports */ 66 - kasan_record_aux_stack(work); 67 + /* 68 + * Record the work call stack in order to print it in KASAN 69 + * reports. 70 + * 71 + * Note that stack allocation can fail if TWAF_NO_ALLOC flag 72 + * is set and new page is needed to expand the stack buffer. 73 + */ 74 + if (flags & TWAF_NO_ALLOC) 75 + kasan_record_aux_stack_noalloc(work); 76 + else 77 + kasan_record_aux_stack(work); 67 78 } 68 79 69 80 head = READ_ONCE(task->task_works);
+3 -3
kernel/time/posix-clock.c
··· 309 309 struct posix_clock_desc cd; 310 310 int err; 311 311 312 + if (!timespec64_valid_strict(ts)) 313 + return -EINVAL; 314 + 312 315 err = get_clock_desc(id, &cd); 313 316 if (err) 314 317 return err; ··· 320 317 err = -EACCES; 321 318 goto out; 322 319 } 323 - 324 - if (!timespec64_valid_strict(ts)) 325 - return -EINVAL; 326 320 327 321 if (cd.clk->ops.clock_settime) 328 322 err = cd.clk->ops.clock_settime(cd.clk, ts);
+6
kernel/time/tick-sched.c
··· 434 434 * smp_mb__after_spin_lock() 435 435 * tick_nohz_task_switch() 436 436 * LOAD p->tick_dep_mask 437 + * 438 + * XXX given a task picks up the dependency on schedule(), should we 439 + * only care about tasks that are currently on the CPU instead of all 440 + * that are on the runqueue? 441 + * 442 + * That is, does this want to be: task_on_cpu() / task_curr()? 437 443 */ 438 444 if (!sched_task_on_rq(tsk)) 439 445 return;
+17 -19
kernel/trace/bpf_trace.c
··· 3133 3133 struct bpf_uprobe_multi_link *umulti_link; 3134 3134 u32 ucount = info->uprobe_multi.count; 3135 3135 int err = 0, i; 3136 - long left; 3136 + char *p, *buf; 3137 + long left = 0; 3137 3138 3138 3139 if (!upath ^ !upath_size) 3139 3140 return -EINVAL; ··· 3148 3147 info->uprobe_multi.pid = umulti_link->task ? 3149 3148 task_pid_nr_ns(umulti_link->task, task_active_pid_ns(current)) : 0; 3150 3149 3151 - if (upath) { 3152 - char *p, *buf; 3153 - 3154 - upath_size = min_t(u32, upath_size, PATH_MAX); 3155 - 3156 - buf = kmalloc(upath_size, GFP_KERNEL); 3157 - if (!buf) 3158 - return -ENOMEM; 3159 - p = d_path(&umulti_link->path, buf, upath_size); 3160 - if (IS_ERR(p)) { 3161 - kfree(buf); 3162 - return PTR_ERR(p); 3163 - } 3164 - upath_size = buf + upath_size - p; 3165 - left = copy_to_user(upath, p, upath_size); 3150 + upath_size = upath_size ? min_t(u32, upath_size, PATH_MAX) : PATH_MAX; 3151 + buf = kmalloc(upath_size, GFP_KERNEL); 3152 + if (!buf) 3153 + return -ENOMEM; 3154 + p = d_path(&umulti_link->path, buf, upath_size); 3155 + if (IS_ERR(p)) { 3166 3156 kfree(buf); 3167 - if (left) 3168 - return -EFAULT; 3169 - info->uprobe_multi.path_size = upath_size; 3157 + return PTR_ERR(p); 3170 3158 } 3159 + upath_size = buf + upath_size - p; 3160 + 3161 + if (upath) 3162 + left = copy_to_user(upath, p, upath_size); 3163 + kfree(buf); 3164 + if (left) 3165 + return -EFAULT; 3166 + info->uprobe_multi.path_size = upath_size; 3171 3167 3172 3168 if (!uoffsets && !ucookies && !uref_ctr_offsets) 3173 3169 return 0;
+23 -8
kernel/trace/fgraph.c
··· 1160 1160 static int start_graph_tracing(void) 1161 1161 { 1162 1162 unsigned long **ret_stack_list; 1163 - int ret, cpu; 1163 + int ret; 1164 1164 1165 - ret_stack_list = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL); 1165 + ret_stack_list = kcalloc(FTRACE_RETSTACK_ALLOC_SIZE, 1166 + sizeof(*ret_stack_list), GFP_KERNEL); 1166 1167 1167 1168 if (!ret_stack_list) 1168 1169 return -ENOMEM; 1169 - 1170 - /* The cpu_boot init_task->ret_stack will never be freed */ 1171 - for_each_online_cpu(cpu) { 1172 - if (!idle_task(cpu)->ret_stack) 1173 - ftrace_graph_init_idle_task(idle_task(cpu), cpu); 1174 - } 1175 1170 1176 1171 do { 1177 1172 ret = alloc_retstack_tasklist(ret_stack_list); ··· 1237 1242 fgraph_direct_gops = &fgraph_stub; 1238 1243 } 1239 1244 1245 + /* The cpu_boot init_task->ret_stack will never be freed */ 1246 + static int fgraph_cpu_init(unsigned int cpu) 1247 + { 1248 + if (!idle_task(cpu)->ret_stack) 1249 + ftrace_graph_init_idle_task(idle_task(cpu), cpu); 1250 + return 0; 1251 + } 1252 + 1240 1253 int register_ftrace_graph(struct fgraph_ops *gops) 1241 1254 { 1255 + static bool fgraph_initialized; 1242 1256 int command = 0; 1243 1257 int ret = 0; 1244 1258 int i = -1; 1245 1259 1246 1260 mutex_lock(&ftrace_lock); 1261 + 1262 + if (!fgraph_initialized) { 1263 + ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "fgraph_idle_init", 1264 + fgraph_cpu_init, NULL); 1265 + if (ret < 0) { 1266 + pr_warn("fgraph: Error to init cpu hotplug support\n"); 1267 + return ret; 1268 + } 1269 + fgraph_initialized = true; 1270 + ret = 0; 1271 + } 1247 1272 1248 1273 if (!fgraph_array[0]) { 1249 1274 /* The array must always have real data on it */
+6 -1
kernel/trace/trace_eprobe.c
··· 912 912 } 913 913 } 914 914 915 + if (argc - 2 > MAX_TRACE_ARGS) { 916 + ret = -E2BIG; 917 + goto error; 918 + } 919 + 915 920 mutex_lock(&event_mutex); 916 921 event_call = find_and_get_event(sys_name, sys_event); 917 922 ep = alloc_event_probe(group, event, event_call, argc - 2); ··· 942 937 943 938 argc -= 2; argv += 2; 944 939 /* parse arguments */ 945 - for (i = 0; i < argc && i < MAX_TRACE_ARGS; i++) { 940 + for (i = 0; i < argc; i++) { 946 941 trace_probe_log_set_index(i + 2); 947 942 ret = trace_eprobe_tp_update_arg(ep, argv, i); 948 943 if (ret)
+5 -1
kernel/trace/trace_fprobe.c
··· 1187 1187 argc = new_argc; 1188 1188 argv = new_argv; 1189 1189 } 1190 + if (argc > MAX_TRACE_ARGS) { 1191 + ret = -E2BIG; 1192 + goto out; 1193 + } 1190 1194 1191 1195 ret = traceprobe_expand_dentry_args(argc, argv, &dbuf); 1192 1196 if (ret) ··· 1207 1203 } 1208 1204 1209 1205 /* parse arguments */ 1210 - for (i = 0; i < argc && i < MAX_TRACE_ARGS; i++) { 1206 + for (i = 0; i < argc; i++) { 1211 1207 trace_probe_log_set_index(i + 2); 1212 1208 ctx.offset = 0; 1213 1209 ret = traceprobe_parse_probe_arg(&tf->tp, i, argv[i], &ctx);
+5 -1
kernel/trace/trace_kprobe.c
··· 1013 1013 argc = new_argc; 1014 1014 argv = new_argv; 1015 1015 } 1016 + if (argc > MAX_TRACE_ARGS) { 1017 + ret = -E2BIG; 1018 + goto out; 1019 + } 1016 1020 1017 1021 ret = traceprobe_expand_dentry_args(argc, argv, &dbuf); 1018 1022 if (ret) ··· 1033 1029 } 1034 1030 1035 1031 /* parse arguments */ 1036 - for (i = 0; i < argc && i < MAX_TRACE_ARGS; i++) { 1032 + for (i = 0; i < argc; i++) { 1037 1033 trace_probe_log_set_index(i + 2); 1038 1034 ctx.offset = 0; 1039 1035 ret = traceprobe_parse_probe_arg(&tk->tp, i, argv[i], &ctx);
+1 -1
kernel/trace/trace_probe.c
··· 276 276 } 277 277 trace_probe_log_err(offset, NO_EVENT_NAME); 278 278 return -EINVAL; 279 - } else if (len > MAX_EVENT_NAME_LEN) { 279 + } else if (len >= MAX_EVENT_NAME_LEN) { 280 280 trace_probe_log_err(offset, EVENT_TOO_LONG); 281 281 return -EINVAL; 282 282 }
+1 -1
kernel/trace/trace_selftest.c
··· 1485 1485 /* reset the max latency */ 1486 1486 tr->max_latency = 0; 1487 1487 1488 - while (p->on_rq) { 1488 + while (task_is_runnable(p)) { 1489 1489 /* 1490 1490 * Sleep to make sure the -deadline thread is asleep too. 1491 1491 * On virtual machines we can't rely on timings,
+9 -4
kernel/trace/trace_uprobe.c
··· 565 565 566 566 if (argc < 2) 567 567 return -ECANCELED; 568 + if (argc - 2 > MAX_TRACE_ARGS) 569 + return -E2BIG; 568 570 569 571 if (argv[0][1] == ':') 570 572 event = &argv[0][2]; ··· 692 690 tu->filename = filename; 693 691 694 692 /* parse arguments */ 695 - for (i = 0; i < argc && i < MAX_TRACE_ARGS; i++) { 693 + for (i = 0; i < argc; i++) { 696 694 struct traceprobe_parse_context ctx = { 697 695 .flags = (is_return ? TPARG_FL_RETURN : 0) | TPARG_FL_USER, 698 696 }; ··· 877 875 }; 878 876 static struct uprobe_cpu_buffer __percpu *uprobe_cpu_buffer; 879 877 static int uprobe_buffer_refcnt; 878 + #define MAX_UCB_BUFFER_SIZE PAGE_SIZE 880 879 881 880 static int uprobe_buffer_init(void) 882 881 { ··· 982 979 ucb = uprobe_buffer_get(); 983 980 ucb->dsize = tu->tp.size + dsize; 984 981 982 + if (WARN_ON_ONCE(ucb->dsize > MAX_UCB_BUFFER_SIZE)) { 983 + ucb->dsize = MAX_UCB_BUFFER_SIZE; 984 + dsize = MAX_UCB_BUFFER_SIZE - tu->tp.size; 985 + } 986 + 985 987 store_trace_args(ucb->buf, &tu->tp, regs, NULL, esize, dsize); 986 988 987 989 *ucbp = ucb; ··· 1005 997 struct trace_event_call *call = trace_probe_event_call(&tu->tp); 1006 998 1007 999 WARN_ON(call != trace_file->event_call); 1008 - 1009 - if (WARN_ON_ONCE(ucb->dsize > PAGE_SIZE)) 1010 - return; 1011 1000 1012 1001 if (trace_trigger_soft_disabled(trace_file)) 1013 1002 return;
+1 -1
lib/Kconfig.debug
··· 3060 3060 bool "Allow unoptimized build-time assertions" 3061 3061 depends on RUST 3062 3062 help 3063 - Controls how are `build_error!` and `build_assert!` handled during build. 3063 + Controls how `build_error!` and `build_assert!` are handled during the build. 3064 3064 3065 3065 If calls to them exist in the binary, it may indicate a violated invariant 3066 3066 or that the optimizer failed to verify the invariant during compilation.
+5 -2
lib/Kconfig.kasan
··· 22 22 config CC_HAS_KASAN_GENERIC 23 23 def_bool $(cc-option, -fsanitize=kernel-address) 24 24 25 + # GCC appears to ignore no_sanitize_address when -fsanitize=kernel-hwaddress 26 + # is passed. See https://bugzilla.kernel.org/show_bug.cgi?id=218854 (and 27 + # the linked LKML thread) for more details. 25 28 config CC_HAS_KASAN_SW_TAGS 26 - def_bool $(cc-option, -fsanitize=kernel-hwaddress) 29 + def_bool !CC_IS_GCC && $(cc-option, -fsanitize=kernel-hwaddress) 27 30 28 31 # This option is only required for software KASAN modes. 29 32 # Old GCC versions do not have proper support for no_sanitize_address. ··· 101 98 help 102 99 Enables Software Tag-Based KASAN. 103 100 104 - Requires GCC 11+ or Clang. 101 + Requires Clang. 105 102 106 103 Supported only on arm64 CPUs and relies on Top Byte Ignore. 107 104
+5
lib/buildid.c
··· 5 5 #include <linux/elf.h> 6 6 #include <linux/kernel.h> 7 7 #include <linux/pagemap.h> 8 + #include <linux/secretmem.h> 8 9 9 10 #define BUILD_ID 3 10 11 ··· 64 63 return 0; 65 64 66 65 freader_put_folio(r); 66 + 67 + /* reject secretmem folios created with memfd_secret() */ 68 + if (secretmem_mapping(r->file->f_mapping)) 69 + return -EFAULT; 67 70 68 71 r->folio = filemap_get_folio(r->file->f_mapping, file_off >> PAGE_SHIFT); 69 72
+3
lib/codetag.c
··· 228 228 if (!mod) 229 229 return true; 230 230 231 + /* await any module's kfree_rcu() operations to complete */ 232 + kvfree_rcu_barrier(); 233 + 231 234 mutex_lock(&codetag_lock); 232 235 list_for_each_entry(cttype, &codetag_types, link) { 233 236 struct codetag_module *found = NULL;
+1 -1
lib/crypto/mpi/mpi-mul.c
··· 21 21 int usign, vsign, sign_product; 22 22 int assign_wp = 0; 23 23 mpi_ptr_t tmp_limb = NULL; 24 - int err; 24 + int err = 0; 25 25 26 26 if (u->nlimbs < v->nlimbs) { 27 27 /* Swap U and V. */
+7 -7
lib/maple_tree.c
··· 2196 2196 2197 2197 /* 2198 2198 * mas_wr_node_walk() - Find the correct offset for the index in the @mas. 2199 + * If @mas->index cannot be found within the containing 2200 + * node, we traverse to the last entry in the node. 2199 2201 * @wr_mas: The maple write state 2200 2202 * 2201 2203 * Uses mas_slot_locked() and does not need to worry about dead nodes. ··· 3534 3532 return true; 3535 3533 } 3536 3534 3537 - static bool mas_wr_walk_index(struct ma_wr_state *wr_mas) 3535 + static void mas_wr_walk_index(struct ma_wr_state *wr_mas) 3538 3536 { 3539 3537 struct ma_state *mas = wr_mas->mas; 3540 3538 ··· 3543 3541 wr_mas->content = mas_slot_locked(mas, wr_mas->slots, 3544 3542 mas->offset); 3545 3543 if (ma_is_leaf(wr_mas->type)) 3546 - return true; 3544 + return; 3547 3545 mas_wr_walk_traverse(wr_mas); 3548 - 3549 3546 } 3550 - return true; 3551 3547 } 3552 3548 /* 3553 3549 * mas_extend_spanning_null() - Extend a store of a %NULL to include surrounding %NULLs. ··· 3765 3765 memset(&b_node, 0, sizeof(struct maple_big_node)); 3766 3766 /* Copy l_mas and store the value in b_node. */ 3767 3767 mas_store_b_node(&l_wr_mas, &b_node, l_mas.end); 3768 - /* Copy r_mas into b_node. */ 3769 - if (r_mas.offset <= r_mas.end) 3768 + /* Copy r_mas into b_node if there is anything to copy. */ 3769 + if (r_mas.max > r_mas.last) 3770 3770 mas_mab_cp(&r_mas, r_mas.offset, r_mas.end, 3771 3771 &b_node, b_node.b_end + 1); 3772 3772 else ··· 4218 4218 4219 4219 /* Potential spanning rebalance collapsing a node */ 4220 4220 if (new_end < mt_min_slots[wr_mas->type]) { 4221 - if (!mte_is_root(mas->node)) { 4221 + if (!mte_is_root(mas->node) && !(mas->mas_flags & MA_STATE_BULK)) { 4222 4222 mas->store_type = wr_rebalance; 4223 4223 return; 4224 4224 }
+1 -1
lib/objpool.c
··· 76 76 * mimimal size of vmalloc is one page since vmalloc would 77 77 * always align the requested size to page size 78 78 */ 79 - if (pool->gfp & GFP_ATOMIC) 79 + if ((pool->gfp & GFP_ATOMIC) == GFP_ATOMIC) 80 80 slot = kmalloc_node(size, pool->gfp, cpu_to_node(i)); 81 81 else 82 82 slot = __vmalloc_node(size, sizeof(void *), pool->gfp,
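The objpool fix above matters because GFP_ATOMIC is a composite mask (several `__GFP_*` bits OR'd together), so a plain bitwise AND is also truthy for any gfp value that merely shares one constituent bit, routing those callers down the wrong allocation path. A minimal userspace sketch of the difference, using made-up `DEMO_*` flag values rather than the kernel's real GFP bit layout:

```c
#include <stdbool.h>

/* Illustrative flag bits only -- NOT the kernel's real GFP layout. */
#define DEMO_KSWAPD_RECLAIM 0x01u
#define DEMO_HIGH           0x02u
#define DEMO_IO             0x04u
#define DEMO_GFP_ATOMIC (DEMO_HIGH | DEMO_KSWAPD_RECLAIM) /* composite mask */
#define DEMO_GFP_KERNEL (DEMO_KSWAPD_RECLAIM | DEMO_IO)   /* shares one bit */

/* Old test: true when ANY bit of the composite mask is present. */
bool gfp_any_bit(unsigned int gfp)
{
	return (gfp & DEMO_GFP_ATOMIC) != 0;
}

/* Fixed test: true only when EVERY bit of the composite mask is present. */
bool gfp_all_bits(unsigned int gfp)
{
	return (gfp & DEMO_GFP_ATOMIC) == DEMO_GFP_ATOMIC;
}
```

With these demo values, `gfp_any_bit(DEMO_GFP_KERNEL)` misfires because the two composite masks share a bit, while `gfp_all_bits()` matches only a genuine atomic request.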
+1
mm/damon/tests/sysfs-kunit.h
··· 67 67 damon_destroy_ctx(ctx); 68 68 kfree(sysfs_targets->targets_arr); 69 69 kfree(sysfs_targets); 70 + kfree(sysfs_target->regions); 70 71 kfree(sysfs_target); 71 72 } 72 73
+1 -12
mm/huge_memory.c
··· 109 109 if (!vma->vm_mm) /* vdso */ 110 110 return 0; 111 111 112 - /* 113 - * Explicitly disabled through madvise or prctl, or some 114 - * architectures may disable THP for some mappings, for 115 - * example, s390 kvm. 116 - * */ 117 - if ((vm_flags & VM_NOHUGEPAGE) || 118 - test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)) 119 - return 0; 120 - /* 121 - * If the hardware/firmware marked hugepage support disabled. 122 - */ 123 - if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED)) 112 + if (thp_disabled_by_hw() || vma_thp_disabled(vma, vm_flags)) 124 113 return 0; 125 114 126 115 /* khugepaged doesn't collapse DAX vma, but page fault is fine. */
+7 -1
mm/kasan/init.c
··· 106 106 } 107 107 } 108 108 109 + void __weak __meminit kernel_pte_init(void *addr) 110 + { 111 + } 112 + 109 113 static int __ref zero_pmd_populate(pud_t *pud, unsigned long addr, 110 114 unsigned long end) 111 115 { ··· 130 126 131 127 if (slab_is_available()) 132 128 p = pte_alloc_one_kernel(&init_mm); 133 - else 129 + else { 134 130 p = early_alloc(PAGE_SIZE, NUMA_NO_NODE); 131 + kernel_pte_init(p); 132 + } 135 133 if (!p) 136 134 return -ENOMEM; 137 135
+3 -3
mm/khugepaged.c
··· 2227 2227 folio_put(new_folio); 2228 2228 out: 2229 2229 VM_BUG_ON(!list_empty(&pagelist)); 2230 - trace_mm_khugepaged_collapse_file(mm, new_folio, index, is_shmem, addr, file, HPAGE_PMD_NR, result); 2230 + trace_mm_khugepaged_collapse_file(mm, new_folio, index, addr, is_shmem, file, HPAGE_PMD_NR, result); 2231 2231 return result; 2232 2232 } 2233 2233 ··· 2252 2252 continue; 2253 2253 2254 2254 if (xa_is_value(folio)) { 2255 - ++swap; 2255 + swap += 1 << xas_get_order(&xas); 2256 2256 if (cc->is_khugepaged && 2257 2257 swap > khugepaged_max_ptes_swap) { 2258 2258 result = SCAN_EXCEED_SWAP_PTE; ··· 2299 2299 * is just too costly... 2300 2300 */ 2301 2301 2302 - present++; 2302 + present += folio_nr_pages(folio); 2303 2303 2304 2304 if (need_resched()) { 2305 2305 xas_pause(&xas);
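The khugepaged hunk above fixes scan accounting for large folios: a single XArray entry may now stand for 2^order base pages, so `present` and `swap` must advance by the folio size rather than by one per entry. A small sketch of that arithmetic (the function names and the example orders are illustrative, not kernel API):

```c
/* A folio of order n spans (1 << n) base pages -- the quantity
 * folio_nr_pages() / xas_get_order() report in the hunk above. */
unsigned int demo_folio_nr_pages(unsigned int order)
{
	return 1u << order;
}

/* Per-page counting over a scan of folio orders; counting one per
 * entry instead would undercount whenever any order is nonzero. */
unsigned int demo_count_present(const unsigned int *orders, unsigned int n)
{
	unsigned int present = 0;

	for (unsigned int i = 0; i < n; i++)
		present += demo_folio_nr_pages(orders[i]);
	return present;
}

/* Example scan: two base pages plus one order-4 (16-page) folio. */
unsigned int demo_example_scan(void)
{
	unsigned int orders[] = { 0, 0, 4 };

	return demo_count_present(orders, 3);
}
```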
+11 -6
mm/memory.c
··· 4181 4181 return __alloc_swap_folio(vmf); 4182 4182 } 4183 4183 #else /* !CONFIG_TRANSPARENT_HUGEPAGE */ 4184 - static inline bool can_swapin_thp(struct vm_fault *vmf, pte_t *ptep, int nr_pages) 4185 - { 4186 - return false; 4187 - } 4188 - 4189 4184 static struct folio *alloc_swap_folio(struct vm_fault *vmf) 4190 4185 { 4191 4186 return __alloc_swap_folio(vmf); ··· 4919 4924 unsigned long haddr = vmf->address & HPAGE_PMD_MASK; 4920 4925 pmd_t entry; 4921 4926 vm_fault_t ret = VM_FAULT_FALLBACK; 4927 + 4928 + /* 4929 + * It is too late to allocate a small folio, we already have a large 4930 + * folio in the pagecache: especially s390 KVM cannot tolerate any 4931 + * PMD mappings, but PTE-mapped THP are fine. So let's simply refuse any 4932 + * PMD mappings if THPs are disabled. 4933 + */ 4934 + if (thp_disabled_by_hw() || vma_thp_disabled(vma, vma->vm_flags)) 4935 + return ret; 4922 4936 4923 4937 if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER)) 4924 4938 return ret; ··· 6350 6346 static inline void pfnmap_lockdep_assert(struct vm_area_struct *vma) 6351 6347 { 6352 6348 #ifdef CONFIG_LOCKDEP 6353 - struct address_space *mapping = vma->vm_file->f_mapping; 6349 + struct file *file = vma->vm_file; 6350 + struct address_space *mapping = file ? file->f_mapping : NULL; 6354 6351 6355 6352 if (mapping) 6356 6353 lockdep_assert(lockdep_is_held(&vma->vm_file->f_mapping->i_mmap_rwsem) ||
+21 -11
mm/mmap.c
··· 1371 1371 struct maple_tree mt_detach; 1372 1372 unsigned long end = addr + len; 1373 1373 bool writable_file_mapping = false; 1374 - int error = -ENOMEM; 1374 + int error; 1375 1375 VMA_ITERATOR(vmi, mm, addr); 1376 1376 VMG_STATE(vmg, mm, &vmi, addr, end, vm_flags, pgoff); 1377 1377 ··· 1396 1396 } 1397 1397 1398 1398 /* Check against address space limit. */ 1399 - if (!may_expand_vm(mm, vm_flags, pglen - vms.nr_pages)) 1399 + if (!may_expand_vm(mm, vm_flags, pglen - vms.nr_pages)) { 1400 + error = -ENOMEM; 1400 1401 goto abort_munmap; 1402 + } 1401 1403 1402 1404 /* 1403 1405 * Private writable mapping: check memory availability ··· 1407 1405 if (accountable_mapping(file, vm_flags)) { 1408 1406 charged = pglen; 1409 1407 charged -= vms.nr_accounted; 1410 - if (charged && security_vm_enough_memory_mm(mm, charged)) 1411 - goto abort_munmap; 1408 + if (charged) { 1409 + error = security_vm_enough_memory_mm(mm, charged); 1410 + if (error) 1411 + goto abort_munmap; 1412 + } 1412 1413 1413 1414 vms.nr_accounted = 0; 1414 1415 vm_flags |= VM_ACCOUNT; ··· 1427 1422 * not unmapped, but the maps are removed from the list. 1428 1423 */ 1429 1424 vma = vm_area_alloc(mm); 1430 - if (!vma) 1425 + if (!vma) { 1426 + error = -ENOMEM; 1431 1427 goto unacct_error; 1428 + } 1432 1429 1433 1430 vma_iter_config(&vmi, addr, end); 1434 1431 vma_set_range(vma, addr, end, pgoff); ··· 1460 1453 * Expansion is handled above, merging is handled below. Drivers should not alter the address of the VMA. 1462 1455 */ 1463 - error = -EINVAL; 1464 - if (WARN_ON((addr != vma->vm_start))) 1456 + if (WARN_ON((addr != vma->vm_start))) { 1457 + error = -EINVAL; 1465 1458 goto close_and_free_vma; 1459 + } 1466 1460 1467 1461 vma_iter_config(&vmi, addr, end); 1468 1462 /* ··· 1508 1500 } 1509 1501 1510 1502 /* Allow architectures to sanity-check the vm_flags */ 1511 - error = -EINVAL; 1512 - if (!arch_validate_flags(vma->vm_flags)) 1503 + if (!arch_validate_flags(vma->vm_flags)) { 1504 + error = -EINVAL; 1513 1505 goto close_and_free_vma; 1506 + } 1514 1507 1515 - error = -ENOMEM; 1516 - if (vma_iter_prealloc(&vmi, vma)) 1508 + if (vma_iter_prealloc(&vmi, vma)) { 1509 + error = -ENOMEM; 1517 1510 goto close_and_free_vma; 1511 + } 1518 1512 1519 1513 /* Lock the VMA since it is modified after insertion into VMA tree */ 1520 1514 vma_start_write(vma);
+9 -2
mm/mremap.c
··· 238 238 { 239 239 spinlock_t *old_ptl, *new_ptl; 240 240 struct mm_struct *mm = vma->vm_mm; 241 + bool res = false; 241 242 pmd_t pmd; 242 243 243 244 if (!arch_supports_page_table_move()) ··· 278 277 if (new_ptl != old_ptl) 279 278 spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING); 280 279 281 - /* Clear the pmd */ 282 280 pmd = *old_pmd; 281 + 282 + /* Racing with collapse? */ 283 + if (unlikely(!pmd_present(pmd) || pmd_leaf(pmd))) 284 + goto out_unlock; 285 + /* Clear the pmd */ 283 286 pmd_clear(old_pmd); 287 + res = true; 284 288 285 289 VM_BUG_ON(!pmd_none(*new_pmd)); 286 290 287 291 pmd_populate(mm, new_pmd, pmd_pgtable(pmd)); 288 292 flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE); 293 + out_unlock: 289 294 if (new_ptl != old_ptl) 290 295 spin_unlock(new_ptl); 291 296 spin_unlock(old_ptl); 292 297 293 - return true; 298 + return res; 294 299 } 295 300 #else 296 301 static inline bool move_normal_pmd(struct vm_area_struct *vma,
+1 -6
mm/shmem.c
··· 1664 1664 loff_t i_size; 1665 1665 int order; 1666 1666 1667 - if (vma && ((vm_flags & VM_NOHUGEPAGE) || 1668 - test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))) 1669 - return 0; 1670 - 1671 - /* If the hardware/firmware marked hugepage support disabled. */ 1672 - if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED)) 1667 + if (thp_disabled_by_hw() || (vma && vma_thp_disabled(vma, vm_flags))) 1673 1668 return 0; 1674 1669 1675 1670 global_huge = shmem_huge_global_enabled(inode, index, write_end,
+5
mm/sparse-vmemmap.c
··· 184 184 return p; 185 185 } 186 186 187 + void __weak __meminit kernel_pte_init(void *addr) 188 + { 189 + } 190 + 187 191 pmd_t * __meminit vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node) 188 192 { 189 193 pmd_t *pmd = pmd_offset(pud, addr); ··· 195 191 void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node); 196 192 if (!p) 197 193 return NULL; 194 + kernel_pte_init(p); 198 195 pmd_populate_kernel(&init_mm, pmd, p); 199 196 } 200 197 return pmd;
+5 -4
mm/swapfile.c
··· 194 194 if (IS_ERR(folio)) 195 195 return 0; 196 196 197 - /* offset could point to the middle of a large folio */ 198 - entry = folio->swap; 199 - offset = swp_offset(entry); 200 197 nr_pages = folio_nr_pages(folio); 201 198 ret = -nr_pages; 202 199 ··· 206 209 */ 207 210 if (!folio_trylock(folio)) 208 211 goto out; 212 + 213 + /* offset could point to the middle of a large folio */ 214 + entry = folio->swap; 215 + offset = swp_offset(entry); 209 216 210 217 need_reclaim = ((flags & TTRS_ANYWAY) || 211 218 ((flags & TTRS_UNMAPPED) && !folio_mapped(folio)) || ··· 2313 2312 2314 2313 mmap_read_lock(mm); 2315 2314 for_each_vma(vmi, vma) { 2316 - if (vma->anon_vma) { 2315 + if (vma->anon_vma && !is_vm_hugetlb_page(vma)) { 2317 2316 ret = unuse_vma(vma, type); 2318 2317 if (ret) 2319 2318 break;
+2 -2
mm/vmscan.c
··· 4963 4963 4964 4964 blk_finish_plug(&plug); 4965 4965 done: 4966 - /* kswapd should never fail */ 4967 - pgdat->kswapd_failures = 0; 4966 + if (sc->nr_reclaimed > reclaimed) 4967 + pgdat->kswapd_failures = 0; 4968 4968 } 4969 4969 4970 4970 /******************************************************************************
+11 -1
net/9p/client.c
··· 977 977 struct p9_client *p9_client_create(const char *dev_name, char *options) 978 978 { 979 979 int err; 980 + static atomic_t seqno = ATOMIC_INIT(0); 980 981 struct p9_client *clnt; 981 982 char *client_id; 983 + char *cache_name; 982 984 983 985 clnt = kmalloc(sizeof(*clnt), GFP_KERNEL); 984 986 if (!clnt) ··· 1037 1035 if (err) 1038 1036 goto close_trans; 1039 1037 1038 + cache_name = kasprintf(GFP_KERNEL, 1039 + "9p-fcall-cache-%u", atomic_inc_return(&seqno)); 1040 + if (!cache_name) { 1041 + err = -ENOMEM; 1042 + goto close_trans; 1043 + } 1044 + 1040 1045 /* P9_HDRSZ + 4 is the smallest packet header we can have that is 1041 1046 * followed by data accessed from userspace by read 1042 1047 */ 1043 1048 clnt->fcall_cache = 1044 - kmem_cache_create_usercopy("9p-fcall-cache", clnt->msize, 1049 + kmem_cache_create_usercopy(cache_name, clnt->msize, 1045 1050 0, 0, P9_HDRSZ + 4, 1046 1051 clnt->msize - (P9_HDRSZ + 4), 1047 1052 NULL); 1048 1053 1054 + kfree(cache_name); 1049 1055 return clnt; 1050 1056 1051 1057 close_trans:
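The 9p change above gives each client's fcall slab cache a unique name by appending an atomic sequence number, since multiple clients can negotiate different msize values while sharing one cache name. A userspace sketch of the naming scheme, with a C11 atomic counter standing in for the kernel's `atomic_inc_return()` (function names here are invented for illustration):

```c
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

/* Each call yields a distinct "9p-fcall-cache-<seq>" name. */
int demo_make_cache_name(char *buf, size_t len)
{
	static atomic_uint seqno;

	return snprintf(buf, len, "9p-fcall-cache-%u",
			atomic_fetch_add(&seqno, 1) + 1);
}

/* Two successive generated names must differ. */
int demo_names_unique(void)
{
	char a[32], b[32];

	demo_make_cache_name(a, sizeof(a));
	demo_make_cache_name(b, sizeof(b));
	return strcmp(a, b) != 0;
}
```

Note the hunk also frees `cache_name` after `kmem_cache_create_usercopy()` returns, because the slab allocator keeps its own copy of the name.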
+25
net/bluetooth/af_bluetooth.c
··· 185 185 } 186 186 EXPORT_SYMBOL(bt_sock_unlink); 187 187 188 + bool bt_sock_linked(struct bt_sock_list *l, struct sock *s) 189 + { 190 + struct sock *sk; 191 + 192 + if (!l || !s) 193 + return false; 194 + 195 + read_lock(&l->lock); 196 + 197 + sk_for_each(sk, &l->head) { 198 + if (s == sk) { 199 + read_unlock(&l->lock); 200 + return true; 201 + } 202 + } 203 + 204 + read_unlock(&l->lock); 205 + 206 + return false; 207 + } 208 + EXPORT_SYMBOL(bt_sock_linked); 209 + 188 210 void bt_accept_enqueue(struct sock *parent, struct sock *sk, bool bh) 189 211 { 190 212 const struct cred *old_cred; ··· 847 825 bt_sysfs_cleanup(); 848 826 cleanup_led: 849 827 bt_leds_cleanup(); 828 + debugfs_remove_recursive(bt_debugfs); 850 829 return err; 851 830 } 852 831 853 832 static void __exit bt_exit(void) 854 833 { 834 + iso_exit(); 835 + 855 836 mgmt_exit(); 856 837 857 838 sco_exit();
+1 -2
net/bluetooth/bnep/core.c
··· 745 745 if (flt[0]) 746 746 BT_INFO("BNEP filters: %s", flt); 747 747 748 - bnep_sock_init(); 749 - return 0; 748 + return bnep_sock_init(); 750 749 } 751 750 752 751 static void __exit bnep_exit(void)
+15 -9
net/bluetooth/hci_core.c
··· 1644 1644 struct adv_info *adv_instance, *n; 1645 1645 1646 1646 if (hdev->adv_instance_timeout) { 1647 - cancel_delayed_work(&hdev->adv_instance_expire); 1647 + disable_delayed_work(&hdev->adv_instance_expire); 1648 1648 hdev->adv_instance_timeout = 0; 1649 1649 } 1650 1650 1651 1651 list_for_each_entry_safe(adv_instance, n, &hdev->adv_instances, list) { 1652 - cancel_delayed_work_sync(&adv_instance->rpa_expired_cb); 1652 + disable_delayed_work_sync(&adv_instance->rpa_expired_cb); 1653 1653 list_del(&adv_instance->list); 1654 1654 kfree(adv_instance); 1655 1655 } ··· 2685 2685 list_del(&hdev->list); 2686 2686 write_unlock(&hci_dev_list_lock); 2687 2687 2688 - cancel_work_sync(&hdev->rx_work); 2689 - cancel_work_sync(&hdev->cmd_work); 2690 - cancel_work_sync(&hdev->tx_work); 2691 - cancel_work_sync(&hdev->power_on); 2692 - cancel_work_sync(&hdev->error_reset); 2688 + disable_work_sync(&hdev->rx_work); 2689 + disable_work_sync(&hdev->cmd_work); 2690 + disable_work_sync(&hdev->tx_work); 2691 + disable_work_sync(&hdev->power_on); 2692 + disable_work_sync(&hdev->error_reset); 2693 2693 2694 2694 hci_cmd_sync_clear(hdev); 2695 2695 ··· 2796 2796 { 2797 2797 bt_dev_dbg(hdev, "err 0x%2.2x", err); 2798 2798 2799 - cancel_delayed_work_sync(&hdev->cmd_timer); 2800 - cancel_delayed_work_sync(&hdev->ncmd_timer); 2799 + if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) { 2800 + disable_delayed_work_sync(&hdev->cmd_timer); 2801 + disable_delayed_work_sync(&hdev->ncmd_timer); 2802 + } else { 2803 + cancel_delayed_work_sync(&hdev->cmd_timer); 2804 + cancel_delayed_work_sync(&hdev->ncmd_timer); 2805 + } 2806 + 2801 2807 atomic_set(&hdev->cmd_cnt, 1); 2802 2808 2803 2809 hci_cmd_sync_cancel_sync(hdev, err);
+9 -3
net/bluetooth/hci_sync.c
··· 5131 5131 5132 5132 bt_dev_dbg(hdev, ""); 5133 5133 5134 - cancel_delayed_work(&hdev->power_off); 5135 - cancel_delayed_work(&hdev->ncmd_timer); 5136 - cancel_delayed_work(&hdev->le_scan_disable); 5134 + if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) { 5135 + disable_delayed_work(&hdev->power_off); 5136 + disable_delayed_work(&hdev->ncmd_timer); 5137 + disable_delayed_work(&hdev->le_scan_disable); 5138 + } else { 5139 + cancel_delayed_work(&hdev->power_off); 5140 + cancel_delayed_work(&hdev->ncmd_timer); 5141 + cancel_delayed_work(&hdev->le_scan_disable); 5142 + } 5137 5143 5138 5144 hci_cmd_sync_cancel_sync(hdev, ENODEV); 5139 5145
+13 -11
net/bluetooth/iso.c
··· 93 93 #define ISO_CONN_TIMEOUT (HZ * 40) 94 94 #define ISO_DISCONN_TIMEOUT (HZ * 2) 95 95 96 + static struct sock *iso_sock_hold(struct iso_conn *conn) 97 + { 98 + if (!conn || !bt_sock_linked(&iso_sk_list, conn->sk)) 99 + return NULL; 100 + 101 + sock_hold(conn->sk); 102 + 103 + return conn->sk; 104 + } 105 + 96 106 static void iso_sock_timeout(struct work_struct *work) 97 107 { 98 108 struct iso_conn *conn = container_of(work, struct iso_conn, ··· 110 100 struct sock *sk; 111 101 112 102 iso_conn_lock(conn); 113 - sk = conn->sk; 114 - if (sk) 115 - sock_hold(sk); 103 + sk = iso_sock_hold(conn); 116 104 iso_conn_unlock(conn); 117 105 118 106 if (!sk) ··· 217 209 218 210 /* Kill socket */ 219 211 iso_conn_lock(conn); 220 - sk = conn->sk; 221 - if (sk) 222 - sock_hold(sk); 212 + sk = iso_sock_hold(conn); 223 213 iso_conn_unlock(conn); 224 214 225 215 if (sk) { ··· 2307 2301 2308 2302 hci_register_cb(&iso_cb); 2309 2303 2310 - if (IS_ERR_OR_NULL(bt_debugfs)) 2311 - return 0; 2312 - 2313 - if (!iso_debugfs) { 2304 + if (!IS_ERR_OR_NULL(bt_debugfs)) 2314 2305 iso_debugfs = debugfs_create_file("iso", 0444, bt_debugfs, 2315 2306 NULL, &iso_debugfs_fops); 2316 - } 2317 2307 2318 2308 iso_inited = true; 2319 2309
+12 -6
net/bluetooth/sco.c
··· 76 76 #define SCO_CONN_TIMEOUT (HZ * 40) 77 77 #define SCO_DISCONN_TIMEOUT (HZ * 2) 78 78 79 + static struct sock *sco_sock_hold(struct sco_conn *conn) 80 + { 81 + if (!conn || !bt_sock_linked(&sco_sk_list, conn->sk)) 82 + return NULL; 83 + 84 + sock_hold(conn->sk); 85 + 86 + return conn->sk; 87 + } 88 + 79 89 static void sco_sock_timeout(struct work_struct *work) 80 90 { 81 91 struct sco_conn *conn = container_of(work, struct sco_conn, ··· 97 87 sco_conn_unlock(conn); 98 88 return; 99 89 } 100 - sk = conn->sk; 101 - if (sk) 102 - sock_hold(sk); 90 + sk = sco_sock_hold(conn); 103 91 sco_conn_unlock(conn); 104 92 105 93 if (!sk) ··· 202 194 203 195 /* Kill socket */ 204 196 sco_conn_lock(conn); 205 - sk = conn->sk; 206 - if (sk) 207 - sock_hold(sk); 197 + sk = sco_sock_hold(conn); 208 198 sco_conn_unlock(conn); 209 199 210 200 if (sk) {
+5 -3
net/core/filter.c
··· 2438 2438 2439 2439 /* Internal, non-exposed redirect flags. */ 2440 2440 enum { 2441 - BPF_F_NEIGH = (1ULL << 1), 2442 - BPF_F_PEER = (1ULL << 2), 2443 - BPF_F_NEXTHOP = (1ULL << 3), 2441 + BPF_F_NEIGH = (1ULL << 16), 2442 + BPF_F_PEER = (1ULL << 17), 2443 + BPF_F_NEXTHOP = (1ULL << 18), 2444 2444 #define BPF_F_REDIRECT_INTERNAL (BPF_F_NEIGH | BPF_F_PEER | BPF_F_NEXTHOP) 2445 2445 }; 2446 2446 ··· 2449 2449 struct net_device *dev; 2450 2450 struct sk_buff *clone; 2451 2451 int ret; 2452 + 2453 + BUILD_BUG_ON(BPF_F_REDIRECT_INTERNAL & BPF_F_REDIRECT_FLAGS); 2452 2454 2453 2455 if (unlikely(flags & (~(BPF_F_INGRESS) | BPF_F_REDIRECT_INTERNAL))) 2454 2456 return -EINVAL;
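The filter.c hunk moves the kernel-internal redirect flags from bits 1-3 up to bits 16-18 so they can never alias user-visible flag bits, and the added `BUILD_BUG_ON()` turns that disjointness into a compile-time check. The same idea in standard C11, with a hypothetical UAPI mask (`DEMO_UAPI_MASK` is illustrative, not the real `BPF_F_REDIRECT_FLAGS` value):

```c
/* Hypothetical split: UAPI flags in the low 16 bits, internal above. */
#define DEMO_UAPI_MASK 0x0000ffffu
#define DEMO_F_NEIGH   (1u << 16)
#define DEMO_F_PEER    (1u << 17)
#define DEMO_F_NEXTHOP (1u << 18)
#define DEMO_INTERNAL  (DEMO_F_NEIGH | DEMO_F_PEER | DEMO_F_NEXTHOP)

/* Userspace analogue of BUILD_BUG_ON(): fails the build on overlap. */
_Static_assert((DEMO_INTERNAL & DEMO_UAPI_MASK) == 0,
	       "internal redirect flags must not alias UAPI flags");

/* Reject any caller-supplied flag outside the allowed set, as the
 * helper's "flags & ~allowed" validation does. */
int demo_flags_valid(unsigned long long flags, unsigned long long allowed)
{
	return (flags & ~allowed) == 0;
}
```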
+8
net/core/sock_map.c
··· 647 647 sk = __sock_map_lookup_elem(map, key); 648 648 if (unlikely(!sk || !sock_map_redirect_allowed(sk))) 649 649 return SK_DROP; 650 + if ((flags & BPF_F_INGRESS) && sk_is_vsock(sk)) 651 + return SK_DROP; 650 652 651 653 skb_bpf_set_redir(skb, sk, flags & BPF_F_INGRESS); 652 654 return SK_PASS; ··· 676 674 if (unlikely(!sk || !sock_map_redirect_allowed(sk))) 677 675 return SK_DROP; 678 676 if (!(flags & BPF_F_INGRESS) && !sk_is_tcp(sk)) 677 + return SK_DROP; 678 + if (sk_is_vsock(sk)) 679 679 return SK_DROP; 680 680 681 681 msg->flags = flags; ··· 1253 1249 sk = __sock_hash_lookup_elem(map, key); 1254 1250 if (unlikely(!sk || !sock_map_redirect_allowed(sk))) 1255 1251 return SK_DROP; 1252 + if ((flags & BPF_F_INGRESS) && sk_is_vsock(sk)) 1253 + return SK_DROP; 1256 1254 1257 1255 skb_bpf_set_redir(skb, sk, flags & BPF_F_INGRESS); 1258 1256 return SK_PASS; ··· 1282 1276 if (unlikely(!sk || !sock_map_redirect_allowed(sk))) 1283 1277 return SK_DROP; 1284 1278 if (!(flags & BPF_F_INGRESS) && !sk_is_tcp(sk)) 1279 + return SK_DROP; 1280 + if (sk_is_vsock(sk)) 1285 1281 return SK_DROP; 1286 1282 1287 1283 msg->flags = flags;
+17 -21
net/ipv4/xfrm4_policy.c
··· 17 17 #include <net/ip.h> 18 18 #include <net/l3mdev.h> 19 19 20 - static struct dst_entry *__xfrm4_dst_lookup(struct net *net, struct flowi4 *fl4, 21 - int tos, int oif, 22 - const xfrm_address_t *saddr, 23 - const xfrm_address_t *daddr, 24 - u32 mark) 20 + static struct dst_entry *__xfrm4_dst_lookup(struct flowi4 *fl4, 21 + const struct xfrm_dst_lookup_params *params) 25 22 { 26 23 struct rtable *rt; 27 24 28 25 memset(fl4, 0, sizeof(*fl4)); 29 - fl4->daddr = daddr->a4; 30 - fl4->flowi4_tos = tos; 31 - fl4->flowi4_l3mdev = l3mdev_master_ifindex_by_index(net, oif); 32 - fl4->flowi4_mark = mark; 33 - if (saddr) 34 - fl4->saddr = saddr->a4; 26 + fl4->daddr = params->daddr->a4; 27 + fl4->flowi4_tos = params->tos; 28 + fl4->flowi4_l3mdev = l3mdev_master_ifindex_by_index(params->net, 29 + params->oif); 30 + fl4->flowi4_mark = params->mark; 31 + if (params->saddr) 32 + fl4->saddr = params->saddr->a4; 33 + fl4->flowi4_proto = params->ipproto; 34 + fl4->uli = params->uli; 35 35 36 - rt = __ip_route_output_key(net, fl4); 36 + rt = __ip_route_output_key(params->net, fl4); 37 37 if (!IS_ERR(rt)) 38 38 return &rt->dst; 39 39 40 40 return ERR_CAST(rt); 41 41 } 42 42 43 - static struct dst_entry *xfrm4_dst_lookup(struct net *net, int tos, int oif, 44 - const xfrm_address_t *saddr, 45 - const xfrm_address_t *daddr, 46 - u32 mark) 43 + static struct dst_entry *xfrm4_dst_lookup(const struct xfrm_dst_lookup_params *params) 47 44 { 48 45 struct flowi4 fl4; 49 46 50 47 return __xfrm4_dst_lookup(&fl4, params); 51 48 } 52 49 53 - static int xfrm4_get_saddr(struct net *net, int oif, 54 - xfrm_address_t *saddr, xfrm_address_t *daddr, 55 - u32 mark) 50 + static int xfrm4_get_saddr(xfrm_address_t *saddr, 51 + const struct xfrm_dst_lookup_params *params) 56 52 { 57 53 struct dst_entry *dst; 58 54 struct flowi4 fl4; 59 55 60 56 dst = __xfrm4_dst_lookup(&fl4, params); 61 57 if (IS_ERR(dst)) 62 58 return -EHOSTUNREACH; 63 59
+16 -15
net/ipv6/xfrm6_policy.c
··· 23 23 #include <net/ip6_route.h> 24 24 #include <net/l3mdev.h> 25 25 26 - static struct dst_entry *xfrm6_dst_lookup(struct net *net, int tos, int oif, 27 - const xfrm_address_t *saddr, 28 - const xfrm_address_t *daddr, 29 - u32 mark) 26 + static struct dst_entry *xfrm6_dst_lookup(const struct xfrm_dst_lookup_params *params) 30 27 { 31 28 struct flowi6 fl6; 32 29 struct dst_entry *dst; 33 30 int err; 34 31 35 32 memset(&fl6, 0, sizeof(fl6)); 36 - fl6.flowi6_l3mdev = l3mdev_master_ifindex_by_index(net, oif); 37 - fl6.flowi6_mark = mark; 38 - memcpy(&fl6.daddr, daddr, sizeof(fl6.daddr)); 39 - if (saddr) 40 - memcpy(&fl6.saddr, saddr, sizeof(fl6.saddr)); 33 + fl6.flowi6_l3mdev = l3mdev_master_ifindex_by_index(params->net, 34 + params->oif); 35 + fl6.flowi6_mark = params->mark; 36 + memcpy(&fl6.daddr, params->daddr, sizeof(fl6.daddr)); 37 + if (params->saddr) 38 + memcpy(&fl6.saddr, params->saddr, sizeof(fl6.saddr)); 41 39 42 - dst = ip6_route_output(net, NULL, &fl6); 40 + fl6.flowi4_proto = params->ipproto; 41 + fl6.uli = params->uli; 42 + 43 + dst = ip6_route_output(params->net, NULL, &fl6); 43 44 44 45 err = dst->error; 45 46 if (dst->error) { ··· 51 50 return dst; 52 51 } 53 52 54 - static int xfrm6_get_saddr(struct net *net, int oif, 55 - xfrm_address_t *saddr, xfrm_address_t *daddr, 56 - u32 mark) 53 + static int xfrm6_get_saddr(xfrm_address_t *saddr, 54 + const struct xfrm_dst_lookup_params *params) 57 55 { 58 56 struct dst_entry *dst; 59 57 struct net_device *dev; 60 58 struct inet6_dev *idev; 61 59 62 - dst = xfrm6_dst_lookup(net, 0, oif, NULL, daddr, mark); 60 + dst = xfrm6_dst_lookup(params); 63 61 if (IS_ERR(dst)) 64 62 return -EHOSTUNREACH; 65 63 ··· 68 68 return -EHOSTUNREACH; 69 69 } 70 70 dev = idev->dev; 71 - ipv6_dev_get_saddr(dev_net(dev), dev, &daddr->in6, 0, &saddr->in6); 71 + ipv6_dev_get_saddr(dev_net(dev), dev, &params->daddr->in6, 0, 72 + &saddr->in6); 72 73 dst_release(dst); 73 74 return 0; 74 75 }
+6 -1
net/netfilter/nf_bpf_link.c
··· 23 23 struct bpf_nf_link { 24 24 struct bpf_link link; 25 25 struct nf_hook_ops hook_ops; 26 + netns_tracker ns_tracker; 26 27 struct net *net; 27 28 u32 dead; 28 29 const struct nf_defrag_hook *defrag_hook; ··· 121 120 if (!cmpxchg(&nf_link->dead, 0, 1)) { 122 121 nf_unregister_net_hook(nf_link->net, &nf_link->hook_ops); 123 122 bpf_nf_disable_defrag(nf_link); 123 + put_net_track(nf_link->net, &nf_link->ns_tracker); 124 124 } 125 125 } 126 126 ··· 152 150 struct bpf_link_info *info) 153 151 { 154 152 struct bpf_nf_link *nf_link = container_of(link, struct bpf_nf_link, link); 153 + const struct nf_defrag_hook *hook = nf_link->defrag_hook; 155 154 156 155 info->netfilter.pf = nf_link->hook_ops.pf; 157 156 info->netfilter.hooknum = nf_link->hook_ops.hooknum; 158 157 info->netfilter.priority = nf_link->hook_ops.priority; 159 - info->netfilter.flags = 0; 158 + info->netfilter.flags = hook ? BPF_F_NETFILTER_IP_DEFRAG : 0; 160 159 161 160 return 0; 162 161 } ··· 259 256 bpf_link_cleanup(&link_primer); 260 257 return err; 261 258 } 259 + 260 + get_net_track(net, &link->ns_tracker, GFP_KERNEL); 262 261 263 262 return bpf_link_settle(&link_primer); 264 263 }
+1 -1
net/netfilter/xt_NFLOG.c
··· 79 79 { 80 80 .name = "NFLOG", 81 81 .revision = 0, 82 - .family = NFPROTO_IPV4, 82 + .family = NFPROTO_IPV6, 83 83 .checkentry = nflog_tg_check, 84 84 .destroy = nflog_tg_destroy, 85 85 .target = nflog_tg,
+1
net/netfilter/xt_TRACE.c
··· 49 49 .target = trace_tg, 50 50 .checkentry = trace_tg_check, 51 51 .destroy = trace_tg_destroy, 52 + .me = THIS_MODULE, 52 53 }, 53 54 #endif 54 55 };
+1 -1
net/netfilter/xt_mark.c
··· 62 62 { 63 63 .name = "MARK", 64 64 .revision = 2, 65 - .family = NFPROTO_IPV4, 65 + .family = NFPROTO_IPV6, 66 66 .target = mark_tg, 67 67 .targetsize = sizeof(struct xt_mark_tginfo2), 68 68 .me = THIS_MODULE,
+22 -1
net/sched/act_api.c
··· 1497 1497 bool skip_sw = tc_skip_sw(fl_flags); 1498 1498 bool skip_hw = tc_skip_hw(fl_flags); 1499 1499 1500 - if (tc_act_bind(act->tcfa_flags)) 1500 + if (tc_act_bind(act->tcfa_flags)) { 1501 + /* Action is created by classifier and is not 1502 + * standalone. Check that the user did not set 1503 + * any action flags different than the 1504 + * classifier flags, and inherit the flags from 1505 + * the classifier for the compatibility case 1506 + * where no flags were specified at all. 1507 + */ 1508 + if ((tc_act_skip_sw(act->tcfa_flags) && !skip_sw) || 1509 + (tc_act_skip_hw(act->tcfa_flags) && !skip_hw)) { 1510 + NL_SET_ERR_MSG(extack, 1511 + "Mismatch between action and filter offload flags"); 1512 + err = -EINVAL; 1513 + goto err; 1514 + } 1515 + if (skip_sw) 1516 + act->tcfa_flags |= TCA_ACT_FLAGS_SKIP_SW; 1517 + if (skip_hw) 1518 + act->tcfa_flags |= TCA_ACT_FLAGS_SKIP_HW; 1501 1519 continue; 1520 + } 1521 + 1522 + /* Action is standalone */ 1502 1523 if (skip_sw != tc_act_skip_sw(act->tcfa_flags) || 1503 1524 skip_hw != tc_act_skip_hw(act->tcfa_flags)) { 1504 1525 NL_SET_ERR_MSG(extack,
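The act_api change above distinguishes classifier-bound actions from standalone ones: a bound action may not carry a skip_sw/skip_hw restriction its classifier lacks, and if it specified no flags at all it inherits the classifier's. A condensed sketch of that policy (the flag values and function name are illustrative):

```c
#include <stdbool.h>

#define DEMO_SKIP_SW 0x1u
#define DEMO_SKIP_HW 0x2u

/* Bound-action check, mirroring the hunk above: reject an action whose
 * offload restriction the classifier does not share, otherwise return
 * the action flags with the classifier's flags inherited. -1 = mismatch. */
int demo_inherit_skip_flags(unsigned int act_flags, unsigned int filter_flags)
{
	bool skip_sw = filter_flags & DEMO_SKIP_SW;
	bool skip_hw = filter_flags & DEMO_SKIP_HW;

	if (((act_flags & DEMO_SKIP_SW) && !skip_sw) ||
	    ((act_flags & DEMO_SKIP_HW) && !skip_hw))
		return -1;                    /* mismatched offload flags */
	return act_flags | filter_flags;      /* inherit from classifier */
}
```

Standalone actions, by contrast, keep the stricter equality check that follows in the hunk.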
+7 -1
net/sched/sch_generic.c
··· 512 512 struct netdev_queue *txq; 513 513 514 514 txq = netdev_get_tx_queue(dev, i); 515 - trans_start = READ_ONCE(txq->trans_start); 516 515 if (!netif_xmit_stopped(txq)) 517 516 continue; 517 + 518 + /* Paired with WRITE_ONCE() + smp_mb...() in 519 + * netdev_tx_sent_queue() and netif_tx_stop_queue(). 520 + */ 521 + smp_mb(); 522 + trans_start = READ_ONCE(txq->trans_start); 523 + 518 524 if (time_after(jiffies, trans_start + dev->watchdog_timeo)) { 519 525 timedout_ms = jiffies_to_msecs(jiffies - trans_start); 520 526 atomic_long_inc(&txq->trans_timeout);
+14 -7
net/sched/sch_taprio.c
··· 1965 1965 1966 1966 taprio_start_sched(sch, start, new_admin); 1967 1967 1968 - rcu_assign_pointer(q->admin_sched, new_admin); 1968 + admin = rcu_replace_pointer(q->admin_sched, new_admin, 1969 + lockdep_rtnl_is_held()); 1969 1970 if (admin) 1970 1971 call_rcu(&admin->rcu, taprio_free_sched_cb); 1971 1972 ··· 2374 2373 struct tc_mqprio_qopt opt = { 0 }; 2375 2374 struct nlattr *nest, *sched_nest; 2376 2375 2377 - oper = rtnl_dereference(q->oper_sched); 2378 - admin = rtnl_dereference(q->admin_sched); 2379 - 2380 2376 mqprio_qopt_reconstruct(dev, &opt); 2381 2377 2382 2378 nest = nla_nest_start_noflag(skb, TCA_OPTIONS); ··· 2394 2396 nla_put_u32(skb, TCA_TAPRIO_ATTR_TXTIME_DELAY, q->txtime_delay)) 2395 2397 goto options_error; 2396 2398 2399 + rcu_read_lock(); 2400 + 2401 + oper = rtnl_dereference(q->oper_sched); 2402 + admin = rtnl_dereference(q->admin_sched); 2403 + 2397 2404 if (oper && taprio_dump_tc_entries(skb, q, oper)) 2398 - goto options_error; 2405 + goto options_error_rcu; 2399 2406 2400 2407 if (oper && dump_schedule(skb, oper)) 2401 - goto options_error; 2408 + goto options_error_rcu; 2402 2409 2403 2410 if (!admin) 2404 2411 goto done; 2405 2412 2406 2413 sched_nest = nla_nest_start_noflag(skb, TCA_TAPRIO_ATTR_ADMIN_SCHED); 2407 2414 if (!sched_nest) 2408 - goto options_error; 2415 + goto options_error_rcu; 2409 2416 2410 2417 if (dump_schedule(skb, admin)) 2411 2418 goto admin_error; ··· 2418 2415 nla_nest_end(skb, sched_nest); 2419 2416 2420 2417 done: 2418 + rcu_read_unlock(); 2421 2419 return nla_nest_end(skb, nest); 2422 2420 2423 2421 admin_error: 2424 2422 nla_nest_cancel(skb, sched_nest); 2423 + 2424 + options_error_rcu: 2425 + rcu_read_unlock(); 2425 2426 2426 2427 options_error: 2427 2428 nla_nest_cancel(skb, nest);
+12 -2
net/vmw_vsock/virtio_transport_common.c
··· 1707 1707 { 1708 1708 struct virtio_vsock_sock *vvs = vsk->trans; 1709 1709 struct sock *sk = sk_vsock(vsk); 1710 + struct virtio_vsock_hdr *hdr; 1710 1711 struct sk_buff *skb; 1711 1712 int off = 0; 1712 1713 int err; ··· 1717 1716 * works for types other than dgrams. 1718 1717 */ 1719 1718 skb = __skb_recv_datagram(sk, &vvs->rx_queue, MSG_DONTWAIT, &off, &err); 1719 + if (!skb) { 1720 + spin_unlock_bh(&vvs->rx_lock); 1721 + return err; 1722 + } 1723 + 1724 + hdr = virtio_vsock_hdr(skb); 1725 + if (le32_to_cpu(hdr->flags) & VIRTIO_VSOCK_SEQ_EOM) 1726 + vvs->msg_count--; 1727 + 1728 + virtio_transport_dec_rx_pkt(vvs, le32_to_cpu(hdr->len)); 1720 1729 spin_unlock_bh(&vvs->rx_lock); 1721 1730 1722 - if (!skb) 1723 - return err; 1731 + virtio_transport_send_credit_update(vsk); 1724 1732 1725 1733 return recv_actor(sk, skb); 1726 1734 }
-8
net/vmw_vsock/vsock_bpf.c
··· 114 114 return copied; 115 115 } 116 116 117 - /* Copy of original proto with updated sock_map methods */ 118 - static struct proto vsock_bpf_prot = { 119 - .close = sock_map_close, 120 - .recvmsg = vsock_bpf_recvmsg, 121 - .sock_is_readable = sk_msg_is_readable, 122 - .unhash = sock_map_unhash, 123 - }; 124 - 125 117 static void vsock_bpf_rebuild_protos(struct proto *prot, const struct proto *base) 126 118 { 127 119 *prot = *base;
+8 -3
net/xfrm/xfrm_device.c
··· 269 269 270 270 dev = dev_get_by_index(net, xuo->ifindex); 271 271 if (!dev) { 272 + struct xfrm_dst_lookup_params params; 273 + 272 274 if (!(xuo->flags & XFRM_OFFLOAD_INBOUND)) { 273 275 saddr = &x->props.saddr; 274 276 daddr = &x->id.daddr; ··· 279 277 daddr = &x->props.saddr; 280 278 } 281 279 282 - dst = __xfrm_dst_lookup(net, 0, 0, saddr, daddr, 283 - x->props.family, 284 - xfrm_smark_get(0, x)); 280 + memset(&params, 0, sizeof(params)); 281 + params.net = net; 282 + params.saddr = saddr; 283 + params.daddr = daddr; 284 + params.mark = xfrm_smark_get(0, x); 285 + dst = __xfrm_dst_lookup(x->props.family, &params); 285 286 if (IS_ERR(dst)) 286 287 return (is_packet_offload) ? -EINVAL : 0; 287 288
+38 -15
net/xfrm/xfrm_policy.c
··· 270 270 return rcu_dereference(xfrm_if_cb); 271 271 } 272 272 273 - struct dst_entry *__xfrm_dst_lookup(struct net *net, int tos, int oif, 274 - const xfrm_address_t *saddr, 275 - const xfrm_address_t *daddr, 276 - int family, u32 mark) 273 + struct dst_entry *__xfrm_dst_lookup(int family, 274 + const struct xfrm_dst_lookup_params *params) 277 275 { 278 276 const struct xfrm_policy_afinfo *afinfo; 279 277 struct dst_entry *dst; ··· 280 282 if (unlikely(afinfo == NULL)) 281 283 return ERR_PTR(-EAFNOSUPPORT); 282 284 283 - dst = afinfo->dst_lookup(net, tos, oif, saddr, daddr, mark); 285 + dst = afinfo->dst_lookup(params); 284 286 285 287 rcu_read_unlock(); 286 288 ··· 294 296 xfrm_address_t *prev_daddr, 295 297 int family, u32 mark) 296 298 { 299 + struct xfrm_dst_lookup_params params; 297 300 struct net *net = xs_net(x); 298 301 xfrm_address_t *saddr = &x->props.saddr; 299 302 xfrm_address_t *daddr = &x->id.daddr; ··· 309 310 daddr = x->coaddr; 310 311 } 311 312 312 - dst = __xfrm_dst_lookup(net, tos, oif, saddr, daddr, family, mark); 313 + params.net = net; 314 + params.saddr = saddr; 315 + params.daddr = daddr; 316 + params.tos = tos; 317 + params.oif = oif; 318 + params.mark = mark; 319 + params.ipproto = x->id.proto; 320 + if (x->encap) { 321 + switch (x->encap->encap_type) { 322 + case UDP_ENCAP_ESPINUDP: 323 + params.ipproto = IPPROTO_UDP; 324 + params.uli.ports.sport = x->encap->encap_sport; 325 + params.uli.ports.dport = x->encap->encap_dport; 326 + break; 327 + case TCP_ENCAP_ESPINTCP: 328 + params.ipproto = IPPROTO_TCP; 329 + params.uli.ports.sport = x->encap->encap_sport; 330 + params.uli.ports.dport = x->encap->encap_dport; 331 + break; 332 + } 333 + } 334 + 335 + dst = __xfrm_dst_lookup(family, &params); 313 336 314 337 if (!IS_ERR(dst)) { 315 338 if (prev_saddr != saddr) ··· 2453 2432 } 2454 2433 2455 2434 static int 2456 - xfrm_get_saddr(struct net *net, int oif, xfrm_address_t *local, 2457 - xfrm_address_t *remote, unsigned short family, u32 mark) 2435 + xfrm_get_saddr(unsigned short family, xfrm_address_t *saddr, 2436 + const struct xfrm_dst_lookup_params *params) 2458 2437 { 2459 2438 int err; 2460 2439 const struct xfrm_policy_afinfo *afinfo = xfrm_policy_get_afinfo(family); 2461 2440 2462 2441 if (unlikely(afinfo == NULL)) 2463 2442 return -EINVAL; 2464 - err = afinfo->get_saddr(net, oif, local, remote, mark); 2443 + err = afinfo->get_saddr(saddr, params); 2465 2444 rcu_read_unlock(); 2466 2445 return err; 2467 2446 } ··· 2490 2469 remote = &tmpl->id.daddr; 2491 2470 local = &tmpl->saddr; 2492 2471 if (xfrm_addr_any(local, tmpl->encap_family)) { 2493 - error = xfrm_get_saddr(net, fl->flowi_oif, 2494 - &tmp, remote, 2495 - tmpl->encap_family, 0); 2472 + struct xfrm_dst_lookup_params params; 2473 + 2474 + memset(&params, 0, sizeof(params)); 2475 + params.net = net; 2476 + params.oif = fl->flowi_oif; 2477 + params.daddr = remote; 2478 + error = xfrm_get_saddr(tmpl->encap_family, &tmp, 2479 + &params); 2496 2480 if (error) 2497 2481 goto fail; 2498 2482 local = &tmp; ··· 4206 4180 4207 4181 net->xfrm.policy_count[dir] = 0; 4208 4182 net->xfrm.policy_count[XFRM_POLICY_MAX + dir] = 0; 4209 - INIT_HLIST_HEAD(&net->xfrm.policy_inexact[dir]); 4210 4183 4211 4184 htab = &net->xfrm.policy_bydst[dir]; 4212 4185 htab->table = xfrm_hash_alloc(sz); ··· 4258 4233 4259 4234 for (dir = 0; dir < XFRM_POLICY_MAX; dir++) { 4260 4235 struct xfrm_policy_hash *htab; 4261 - 4262 - WARN_ON(!hlist_empty(&net->xfrm.policy_inexact[dir])); 4263 4236 4264 4237 htab = &net->xfrm.policy_bydst[dir]; 4265 4238 sz = (htab->hmask + 1) * sizeof(struct hlist_head);
+8 -2
net/xfrm/xfrm_user.c
··· 201 201 { 202 202 int err; 203 203 u8 sa_dir = attrs[XFRMA_SA_DIR] ? nla_get_u8(attrs[XFRMA_SA_DIR]) : 0; 204 + u16 family = p->sel.family; 204 205 205 206 err = -EINVAL; 206 207 switch (p->family) { ··· 222 221 goto out; 223 222 } 224 223 225 - switch (p->sel.family) { 224 + if (!family && !(p->flags & XFRM_STATE_AF_UNSPEC)) 225 + family = p->family; 226 + 227 + switch (family) { 226 228 case AF_UNSPEC: 227 229 break; 228 230 ··· 1102 1098 if (!nla) 1103 1099 return -EMSGSIZE; 1104 1100 ap = nla_data(nla); 1105 - memcpy(ap, auth, sizeof(struct xfrm_algo_auth)); 1101 + strscpy_pad(ap->alg_name, auth->alg_name, sizeof(ap->alg_name)); 1102 + ap->alg_key_len = auth->alg_key_len; 1103 + ap->alg_trunc_len = auth->alg_trunc_len; 1106 1104 if (redact_secret && auth->alg_key_len) 1107 1105 memset(ap->alg_key, 0, (auth->alg_key_len + 7) / 8); 1108 1106 else
+3
scripts/Kconfig.include
··· 65 65 m32-flag := $(cc-option-bit,-m32) 66 66 m64-flag := $(cc-option-bit,-m64) 67 67 68 + rustc-version := $(shell,$(srctree)/scripts/rustc-version.sh $(RUSTC)) 69 + rustc-llvm-version := $(shell,$(srctree)/scripts/rustc-llvm-version.sh $(RUSTC)) 70 + 68 71 # $(rustc-option,<flag>) 69 72 # Return y if the Rust compiler supports <flag>, n otherwise 70 73 # Calls to this should be guarded so that they are not evaluated if
+7 -7
scripts/Makefile.compiler
··· 53 53 54 54 # cc-option-yn 55 55 # Usage: flag := $(call cc-option-yn,-march=winchip-c6) 56 - cc-option-yn = $(call try-run,\ 57 - $(CC) -Werror $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -x c /dev/null -o "$$TMP",y,n) 56 + cc-option-yn = $(if $(call cc-option,$1),y,n) 58 57 59 58 # cc-disable-warning 60 59 # Usage: cflags-y += $(call cc-disable-warning,unused-but-set-variable) 61 - cc-disable-warning = $(call try-run,\ 62 - $(CC) -Werror $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) -W$(strip $(1)) -c -x c /dev/null -o "$$TMP",-Wno-$(strip $(1))) 60 + cc-disable-warning = $(if $(call cc-option,-W$(strip $1)),-Wno-$(strip $1)) 63 61 64 62 # gcc-min-version 65 63 # Usage: cflags-$(call gcc-min-version, 70100) += -foo ··· 73 75 74 76 # __rustc-option 75 77 # Usage: MY_RUSTFLAGS += $(call __rustc-option,$(RUSTC),$(MY_RUSTFLAGS),-Cinstrument-coverage,-Zinstrument-coverage) 78 + # TODO: remove RUSTC_BOOTSTRAP=1 when we raise the minimum GNU Make version to 4.4 76 79 __rustc-option = $(call try-run,\ 77 - $(1) $(2) $(3) --crate-type=rlib /dev/null --out-dir=$$TMPOUT -o "$$TMP",$(3),$(4)) 80 + echo '#![allow(missing_docs)]#![feature(no_core)]#![no_core]' | RUSTC_BOOTSTRAP=1\ 81 + $(1) --sysroot=/dev/null $(filter-out --sysroot=/dev/null,$(2)) $(3)\ 82 + --crate-type=rlib --out-dir=$(TMPOUT) --emit=obj=- - >/dev/null,$(3),$(4)) 78 83 79 84 # rustc-option 80 85 # Usage: rustflags-y += $(call rustc-option,-Cinstrument-coverage,-Zinstrument-coverage) ··· 86 85 87 86 # rustc-option-yn 88 87 # Usage: flag := $(call rustc-option-yn,-Cinstrument-coverage) 89 - rustc-option-yn = $(call try-run,\ 90 - $(RUSTC) $(KBUILD_RUSTFLAGS) $(1) --crate-type=rlib /dev/null --out-dir=$$TMPOUT -o "$$TMP",y,n) 88 + rustc-option-yn = $(if $(call rustc-option,$1),y,n)
+22
scripts/rustc-llvm-version.sh
··· 1 + #!/bin/sh 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Usage: $ ./rustc-llvm-version.sh rustc 5 + # 6 + # Print the LLVM version that the Rust compiler uses in a 6 digit form. 7 + 8 + # Convert the version string x.y.z to a canonical up-to-6-digits form. 9 + get_canonical_version() 10 + { 11 + IFS=. 12 + set -- $1 13 + echo $((10000 * $1 + 100 * $2 + $3)) 14 + } 15 + 16 + if output=$("$@" --version --verbose 2>/dev/null | grep LLVM); then 17 + set -- $output 18 + get_canonical_version $3 19 + else 20 + echo 0 21 + exit 1 22 + fi
+19
security/ipe/Kconfig
··· 31 31 32 32 If unsure, leave blank. 33 33 34 + config IPE_POLICY_SIG_SECONDARY_KEYRING 35 + bool "IPE policy update verification with secondary keyring" 36 + default y 37 + depends on SECONDARY_TRUSTED_KEYRING 38 + help 39 + Also allow the secondary trusted keyring to verify IPE policy 40 + updates. 41 + 42 + If unsure, answer Y. 43 + 44 + config IPE_POLICY_SIG_PLATFORM_KEYRING 45 + bool "IPE policy update verification with platform keyring" 46 + default y 47 + depends on INTEGRITY_PLATFORM_KEYRING 48 + help 49 + Also allow the platform keyring to verify IPE policy updates. 50 + 51 + If unsure, answer Y. 52 + 34 53 menu "IPE Trust Providers" 35 54 36 55 config IPE_PROP_DM_VERITY
+15 -3
security/ipe/policy.c
··· 106 106 goto err; 107 107 } 108 108 109 - if (ver_to_u64(old) > ver_to_u64(new)) { 110 - rc = -EINVAL; 109 + if (ver_to_u64(old) >= ver_to_u64(new)) { 110 + rc = -ESTALE; 111 111 goto err; 112 112 } 113 113 ··· 169 169 goto err; 170 170 } 171 171 172 - rc = verify_pkcs7_signature(NULL, 0, new->pkcs7, pkcs7len, NULL, 172 + rc = verify_pkcs7_signature(NULL, 0, new->pkcs7, pkcs7len, 173 + #ifdef CONFIG_IPE_POLICY_SIG_SECONDARY_KEYRING 174 + VERIFY_USE_SECONDARY_KEYRING, 175 + #else 176 + NULL, 177 + #endif 173 178 VERIFYING_UNSPECIFIED_SIGNATURE, 174 179 set_pkcs7_data, new); 180 + #ifdef CONFIG_IPE_POLICY_SIG_PLATFORM_KEYRING 181 + if (rc == -ENOKEY || rc == -EKEYREJECTED) 182 + rc = verify_pkcs7_signature(NULL, 0, new->pkcs7, pkcs7len, 183 + VERIFY_USE_PLATFORM_KEYRING, 184 + VERIFYING_UNSPECIFIED_SIGNATURE, 185 + set_pkcs7_data, new); 186 + #endif 175 187 if (rc) 176 188 goto err; 177 189 } else {
+1 -1
sound/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 menuconfig SOUND 3 3 tristate "Sound card support" 4 - depends on HAS_IOMEM || UML 4 + depends on HAS_IOMEM || INDIRECT_IOMEM 5 5 help 6 6 If you have a sound card in your computer, i.e. if it can say more 7 7 than an occasional beep, say Y.
+22 -11
sound/hda/intel-sdw-acpi.c
··· 56 56 sdw_intel_scan_controller(struct sdw_intel_acpi_info *info) 57 57 { 58 58 struct acpi_device *adev = acpi_fetch_acpi_dev(info->handle); 59 - u8 count, i; 59 + struct fwnode_handle *fwnode; 60 + unsigned long list; 61 + unsigned int i; 62 + u32 count; 63 + u32 tmp; 60 64 int ret; 61 65 62 66 if (!adev) 63 67 return -EINVAL; 64 68 65 - /* Found controller, find links supported */ 66 - count = 0; 67 - ret = fwnode_property_read_u8_array(acpi_fwnode_handle(adev), 68 - "mipi-sdw-master-count", &count, 1); 69 + fwnode = acpi_fwnode_handle(adev); 69 70 70 71 /* 72 + * Found controller, find links supported 73 + * 71 74 * In theory we could check the number of links supported in 72 75 * hardware, but in that step we cannot assume SoundWire IP is 73 76 * powered. ··· 81 78 * 82 79 * We will check the hardware capabilities in the startup() step 83 80 */ 84 - 81 + ret = fwnode_property_read_u32(fwnode, "mipi-sdw-manager-list", &tmp); 85 82 if (ret) { 86 - dev_err(&adev->dev, 87 - "Failed to read mipi-sdw-master-count: %d\n", ret); 88 - return -EINVAL; 83 + ret = fwnode_property_read_u32(fwnode, "mipi-sdw-master-count", &count); 84 + if (ret) { 85 + dev_err(&adev->dev, 86 + "Failed to read mipi-sdw-master-count: %d\n", 87 + ret); 88 + return ret; 89 + } 90 + list = GENMASK(count - 1, 0); 91 + } else { 92 + list = tmp; 93 + count = hweight32(list); 89 94 } 90 95 91 96 /* Check count is within bounds */ ··· 112 101 info->count = count; 113 102 info->link_mask = 0; 114 103 115 - for (i = 0; i < count; i++) { 104 + for_each_set_bit(i, &list, SDW_INTEL_MAX_LINKS) { 116 105 if (ctrl_link_mask && !(ctrl_link_mask & BIT(i))) { 117 106 dev_dbg(&adev->dev, 118 107 "Link %d masked, will not be enabled\n", i); 119 108 continue; 120 109 } 121 110 122 - if (!is_link_enabled(acpi_fwnode_handle(adev), i)) { 111 + if (!is_link_enabled(fwnode, i)) { 123 112 dev_dbg(&adev->dev, 124 113 "Link %d not selected in firmware\n", i); 125 114 continue;
+19
sound/pci/hda/patch_conexant.c
··· 303 303 CXT_FIXUP_HP_SPECTRE, 304 304 CXT_FIXUP_HP_GATE_MIC, 305 305 CXT_FIXUP_MUTE_LED_GPIO, 306 + CXT_FIXUP_HP_ELITEONE_OUT_DIS, 306 307 CXT_FIXUP_HP_ZBOOK_MUTE_LED, 307 308 CXT_FIXUP_HEADSET_MIC, 308 309 CXT_FIXUP_HP_MIC_NO_PRESENCE, ··· 319 318 { 320 319 struct conexant_spec *spec = codec->spec; 321 320 spec->gen.inv_dmic_split = 1; 321 + } 322 + 323 + /* fix widget control pin settings */ 324 + static void cxt_fixup_update_pinctl(struct hda_codec *codec, 325 + const struct hda_fixup *fix, int action) 326 + { 327 + if (action == HDA_FIXUP_ACT_PROBE) { 328 + /* Unset OUT_EN for this Node pin, leaving only HP_EN. 329 + * This is the value stored in the codec register after 330 + * the correct initialization of the previous windows boot. 331 + */ 332 + snd_hda_set_pin_ctl_cache(codec, 0x1d, AC_PINCTL_HP_EN); 333 + } 322 334 } 323 335 324 336 static void cxt5066_increase_mic_boost(struct hda_codec *codec, ··· 985 971 .type = HDA_FIXUP_FUNC, 986 972 .v.func = cxt_fixup_mute_led_gpio, 987 973 }, 974 + [CXT_FIXUP_HP_ELITEONE_OUT_DIS] = { 975 + .type = HDA_FIXUP_FUNC, 976 + .v.func = cxt_fixup_update_pinctl, 977 + }, 988 978 [CXT_FIXUP_HP_ZBOOK_MUTE_LED] = { 989 979 .type = HDA_FIXUP_FUNC, 990 980 .v.func = cxt_fixup_hp_zbook_mute_led, ··· 1079 1061 SND_PCI_QUIRK(0x103c, 0x83b2, "HP EliteBook 840 G5", CXT_FIXUP_HP_DOCK), 1080 1062 SND_PCI_QUIRK(0x103c, 0x83b3, "HP EliteBook 830 G5", CXT_FIXUP_HP_DOCK), 1081 1063 SND_PCI_QUIRK(0x103c, 0x83d3, "HP ProBook 640 G4", CXT_FIXUP_HP_DOCK), 1064 + SND_PCI_QUIRK(0x103c, 0x83e5, "HP EliteOne 1000 G2", CXT_FIXUP_HP_ELITEONE_OUT_DIS), 1082 1065 SND_PCI_QUIRK(0x103c, 0x8402, "HP ProBook 645 G4", CXT_FIXUP_MUTE_LED_GPIO), 1083 1066 SND_PCI_QUIRK(0x103c, 0x8427, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED), 1084 1067 SND_PCI_QUIRK(0x103c, 0x844f, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED),
+3 -2
sound/pci/hda/patch_cs8409.c
··· 1403 1403 kctrl = snd_hda_gen_add_kctl(&spec->gen, "Line Out Playback Volume", 1404 1404 &cs42l42_dac_volume_mixer); 1405 1405 /* Update Line Out kcontrol template */ 1406 - kctrl->private_value = HDA_COMPOSE_AMP_VAL_OFS(DOLPHIN_HP_PIN_NID, 3, CS8409_CODEC1, 1407 - HDA_OUTPUT, CS42L42_VOL_DAC) | HDA_AMP_VAL_MIN_MUTE; 1406 + if (kctrl) 1407 + kctrl->private_value = HDA_COMPOSE_AMP_VAL_OFS(DOLPHIN_HP_PIN_NID, 3, CS8409_CODEC1, 1408 + HDA_OUTPUT, CS42L42_VOL_DAC) | HDA_AMP_VAL_MIN_MUTE; 1408 1409 cs8409_enable_ur(codec, 0); 1409 1410 snd_hda_codec_set_name(codec, "CS8409/CS42L42"); 1410 1411 break;
+78 -1
sound/pci/hda/patch_realtek.c
··· 7403 7403 alc245_fixup_hp_gpio_led(codec, fix, action); 7404 7404 } 7405 7405 7406 + /* some changes for Spectre x360 16, 2024 model */ 7407 + static void alc245_fixup_hp_spectre_x360_16_aa0xxx(struct hda_codec *codec, 7408 + const struct hda_fixup *fix, int action) 7409 + { 7410 + /* 7411 + * The Pin Complex 0x14 for the treble speakers is wrongly reported as 7412 + * unconnected. 7413 + * The Pin Complex 0x17 for the bass speakers has the lowest association 7414 + * and sequence values so shift it up a bit to squeeze 0x14 in. 7415 + */ 7416 + struct alc_spec *spec = codec->spec; 7417 + static const struct hda_pintbl pincfgs[] = { 7418 + { 0x14, 0x90170110 }, // top/treble 7419 + { 0x17, 0x90170111 }, // bottom/bass 7420 + { } 7421 + }; 7422 + 7423 + /* 7424 + * Force DAC 0x02 for the bass speakers 0x17. 7425 + */ 7426 + static const hda_nid_t conn[] = { 0x02 }; 7427 + 7428 + switch (action) { 7429 + case HDA_FIXUP_ACT_PRE_PROBE: 7430 + /* needed for amp of back speakers */ 7431 + spec->gpio_mask |= 0x01; 7432 + spec->gpio_dir |= 0x01; 7433 + snd_hda_apply_pincfgs(codec, pincfgs); 7434 + snd_hda_override_conn_list(codec, 0x17, ARRAY_SIZE(conn), conn); 7435 + break; 7436 + case HDA_FIXUP_ACT_INIT: 7437 + /* need to toggle GPIO to enable the amp of back speakers */ 7438 + alc_update_gpio_data(codec, 0x01, true); 7439 + msleep(100); 7440 + alc_update_gpio_data(codec, 0x01, false); 7441 + break; 7442 + } 7443 + 7444 + cs35l41_fixup_i2c_two(codec, fix, action); 7445 + alc245_fixup_hp_mute_led_coefbit(codec, fix, action); 7446 + alc245_fixup_hp_gpio_led(codec, fix, action); 7447 + } 7448 + 7406 7449 /* 7407 7450 * ALC287 PCM hooks 7408 7451 */ ··· 7768 7725 ALC256_FIXUP_ACER_SFG16_MICMUTE_LED, 7769 7726 ALC256_FIXUP_HEADPHONE_AMP_VOL, 7770 7727 ALC245_FIXUP_HP_SPECTRE_X360_EU0XXX, 7728 + ALC245_FIXUP_HP_SPECTRE_X360_16_AA0XXX, 7771 7729 ALC285_FIXUP_ASUS_GA403U, 7772 7730 ALC285_FIXUP_ASUS_GA403U_HEADSET_MIC, 7773 7731 ALC285_FIXUP_ASUS_GA403U_I2C_SPEAKER2_TO_DAC1, 
··· 10055 10011 .type = HDA_FIXUP_FUNC, 10056 10012 .v.func = alc245_fixup_hp_spectre_x360_eu0xxx, 10057 10013 }, 10014 + [ALC245_FIXUP_HP_SPECTRE_X360_16_AA0XXX] = { 10015 + .type = HDA_FIXUP_FUNC, 10016 + .v.func = alc245_fixup_hp_spectre_x360_16_aa0xxx, 10017 + }, 10058 10018 [ALC285_FIXUP_ASUS_GA403U] = { 10059 10019 .type = HDA_FIXUP_FUNC, 10060 10020 .v.func = alc285_fixup_asus_ga403u, ··· 10246 10198 SND_PCI_QUIRK(0x1028, 0x0c1e, "Dell Precision 3540", ALC236_FIXUP_DELL_DUAL_CODECS), 10247 10199 SND_PCI_QUIRK(0x1028, 0x0c28, "Dell Inspiron 16 Plus 7630", ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS), 10248 10200 SND_PCI_QUIRK(0x1028, 0x0c4d, "Dell", ALC287_FIXUP_CS35L41_I2C_4), 10201 + SND_PCI_QUIRK(0x1028, 0x0c94, "Dell Polaris 3 metal", ALC287_FIXUP_TAS2781_I2C), 10202 + SND_PCI_QUIRK(0x1028, 0x0c96, "Dell Polaris 2in1", ALC287_FIXUP_TAS2781_I2C), 10249 10203 SND_PCI_QUIRK(0x1028, 0x0cbd, "Dell Oasis 13 CS MTL-U", ALC289_FIXUP_DELL_CS35L41_SPI_2), 10250 10204 SND_PCI_QUIRK(0x1028, 0x0cbe, "Dell Oasis 13 2-IN-1 MTL-U", ALC289_FIXUP_DELL_CS35L41_SPI_2), 10251 10205 SND_PCI_QUIRK(0x1028, 0x0cbf, "Dell Oasis 13 Low Weight MTU-L", ALC289_FIXUP_DELL_CS35L41_SPI_2), ··· 10498 10448 SND_PCI_QUIRK(0x103c, 0x8be9, "HP Envy 15", ALC287_FIXUP_CS35L41_I2C_2), 10499 10449 SND_PCI_QUIRK(0x103c, 0x8bf0, "HP", ALC236_FIXUP_HP_GPIO_LED), 10500 10450 SND_PCI_QUIRK(0x103c, 0x8c15, "HP Spectre x360 2-in-1 Laptop 14-eu0xxx", ALC245_FIXUP_HP_SPECTRE_X360_EU0XXX), 10501 - SND_PCI_QUIRK(0x103c, 0x8c16, "HP Spectre 16", ALC287_FIXUP_CS35L41_I2C_2), 10451 + SND_PCI_QUIRK(0x103c, 0x8c16, "HP Spectre x360 2-in-1 Laptop 16-aa0xxx", ALC245_FIXUP_HP_SPECTRE_X360_16_AA0XXX), 10502 10452 SND_PCI_QUIRK(0x103c, 0x8c17, "HP Spectre 16", ALC287_FIXUP_CS35L41_I2C_2), 10503 10453 SND_PCI_QUIRK(0x103c, 0x8c21, "HP Pavilion Plus Laptop 14-ey0XXX", ALC245_FIXUP_HP_X360_MUTE_LEDS), 10504 10454 SND_PCI_QUIRK(0x103c, 0x8c30, "HP Victus 15-fb1xxx", ALC245_FIXUP_HP_MUTE_LED_COEFBIT), ··· 10551 10501 
SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), 10552 10502 SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 10553 10503 SND_PCI_QUIRK(0x1043, 0x10a1, "ASUS UX391UA", ALC294_FIXUP_ASUS_SPK), 10504 + SND_PCI_QUIRK(0x1043, 0x10a4, "ASUS TP3407SA", ALC287_FIXUP_TAS2781_I2C), 10554 10505 SND_PCI_QUIRK(0x1043, 0x10c0, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), 10555 10506 SND_PCI_QUIRK(0x1043, 0x10d0, "ASUS X540LA/X540LJ", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE), 10556 10507 SND_PCI_QUIRK(0x1043, 0x10d3, "ASUS K6500ZC", ALC294_FIXUP_ASUS_SPK), 10508 + SND_PCI_QUIRK(0x1043, 0x1154, "ASUS TP3607SH", ALC287_FIXUP_TAS2781_I2C), 10557 10509 SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 10558 10510 SND_PCI_QUIRK(0x1043, 0x11c0, "ASUS X556UR", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE), 10511 + SND_PCI_QUIRK(0x1043, 0x1204, "ASUS Strix G615JHR_JMR_JPR", ALC287_FIXUP_TAS2781_I2C), 10512 + SND_PCI_QUIRK(0x1043, 0x1214, "ASUS Strix G615LH_LM_LP", ALC287_FIXUP_TAS2781_I2C), 10559 10513 SND_PCI_QUIRK(0x1043, 0x125e, "ASUS Q524UQK", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE), 10560 10514 SND_PCI_QUIRK(0x1043, 0x1271, "ASUS X430UN", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 10561 10515 SND_PCI_QUIRK(0x1043, 0x1290, "ASUS X441SA", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE), ··· 10637 10583 SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS), 10638 10584 SND_PCI_QUIRK(0x1043, 0x1e5e, "ASUS ROG Strix G513", ALC294_FIXUP_ASUS_G513_PINS), 10639 10585 SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401), 10586 + SND_PCI_QUIRK(0x1043, 0x1eb3, "ASUS Ally RCLA72", ALC287_FIXUP_TAS2781_I2C), 10640 10587 SND_PCI_QUIRK(0x1043, 0x1ed3, "ASUS HN7306W", ALC287_FIXUP_CS35L41_I2C_2), 10641 10588 SND_PCI_QUIRK(0x1043, 0x1ee2, "ASUS UM6702RA/RC", ALC287_FIXUP_CS35L41_I2C_2), 10642 10589 SND_PCI_QUIRK(0x1043, 0x1c52, "ASUS Zephyrus G15 2022", ALC289_FIXUP_ASUS_GA401), ··· 10652 10597 
SND_PCI_QUIRK(0x1043, 0x3a40, "ASUS G814JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS), 10653 10598 SND_PCI_QUIRK(0x1043, 0x3a50, "ASUS G834JYR/JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS), 10654 10599 SND_PCI_QUIRK(0x1043, 0x3a60, "ASUS G634JYR/JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS), 10600 + SND_PCI_QUIRK(0x1043, 0x3e30, "ASUS TP3607SA", ALC287_FIXUP_TAS2781_I2C), 10601 + SND_PCI_QUIRK(0x1043, 0x3ee0, "ASUS Strix G815_JHR_JMR_JPR", ALC287_FIXUP_TAS2781_I2C), 10602 + SND_PCI_QUIRK(0x1043, 0x3ef0, "ASUS Strix G635LR_LW_LX", ALC287_FIXUP_TAS2781_I2C), 10603 + SND_PCI_QUIRK(0x1043, 0x3f00, "ASUS Strix G815LH_LM_LP", ALC287_FIXUP_TAS2781_I2C), 10604 + SND_PCI_QUIRK(0x1043, 0x3f10, "ASUS Strix G835LR_LW_LX", ALC287_FIXUP_TAS2781_I2C), 10605 + SND_PCI_QUIRK(0x1043, 0x3f20, "ASUS Strix G615LR_LW", ALC287_FIXUP_TAS2781_I2C), 10606 + SND_PCI_QUIRK(0x1043, 0x3f30, "ASUS Strix G815LR_LW", ALC287_FIXUP_TAS2781_I2C), 10655 10607 SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC), 10656 10608 SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC), 10657 10609 SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC), ··· 10881 10819 SND_PCI_QUIRK(0x17aa, 0x3878, "Lenovo Legion 7 Slim 16ARHA7", ALC287_FIXUP_CS35L41_I2C_2), 10882 10820 SND_PCI_QUIRK(0x17aa, 0x387d, "Yoga S780-16 pro Quad AAC", ALC287_FIXUP_TAS2781_I2C), 10883 10821 SND_PCI_QUIRK(0x17aa, 0x387e, "Yoga S780-16 pro Quad YC", ALC287_FIXUP_TAS2781_I2C), 10822 + SND_PCI_QUIRK(0x17aa, 0x387f, "Yoga S780-16 pro dual LX", ALC287_FIXUP_TAS2781_I2C), 10823 + SND_PCI_QUIRK(0x17aa, 0x3880, "Yoga S780-16 pro dual YC", ALC287_FIXUP_TAS2781_I2C), 10884 10824 SND_PCI_QUIRK(0x17aa, 0x3881, "YB9 dual power mode2 YC", ALC287_FIXUP_TAS2781_I2C), 10885 10825 SND_PCI_QUIRK(0x17aa, 0x3882, "Lenovo Yoga Pro 7 14APH8", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), 10886 10826 SND_PCI_QUIRK(0x17aa, 0x3884, "Y780 YG DUAL", ALC287_FIXUP_TAS2781_I2C), 10887 10827 SND_PCI_QUIRK(0x17aa, 0x3886, "Y780 VECO DUAL", ALC287_FIXUP_TAS2781_I2C), 10888 10828 SND_PCI_QUIRK(0x17aa, 0x3891, "Lenovo Yoga Pro 7 14AHP9", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), 10829 + SND_PCI_QUIRK(0x17aa, 0x38a5, "Y580P AMD dual", ALC287_FIXUP_TAS2781_I2C), 10889 10830 SND_PCI_QUIRK(0x17aa, 0x38a7, "Y780P AMD YG dual", ALC287_FIXUP_TAS2781_I2C), 10890 10831 SND_PCI_QUIRK(0x17aa, 0x38a8, "Y780P AMD VECO dual", ALC287_FIXUP_TAS2781_I2C), 10891 10832 SND_PCI_QUIRK(0x17aa, 0x38a9, "Thinkbook 16P", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD), ··· 10897 10832 SND_PCI_QUIRK(0x17aa, 0x38b5, "Legion Slim 7 16IRH8", ALC287_FIXUP_CS35L41_I2C_2), 10898 10833 SND_PCI_QUIRK(0x17aa, 0x38b6, "Legion Slim 7 16APH8", ALC287_FIXUP_CS35L41_I2C_2), 10899 10834 SND_PCI_QUIRK(0x17aa, 0x38b7, "Legion Slim 7 16APH8", ALC287_FIXUP_CS35L41_I2C_2), 10835 + SND_PCI_QUIRK(0x17aa, 0x38b8, "Yoga S780-14.5 proX AMD YC Dual", ALC287_FIXUP_TAS2781_I2C), 10836 + SND_PCI_QUIRK(0x17aa, 0x38b9, "Yoga S780-14.5 proX AMD LX Dual", ALC287_FIXUP_TAS2781_I2C), 10900 10837 SND_PCI_QUIRK(0x17aa, 0x38ba, "Yoga S780-14.5 Air AMD quad YC", ALC287_FIXUP_TAS2781_I2C), 10901 10838 SND_PCI_QUIRK(0x17aa, 0x38bb, "Yoga S780-14.5 Air AMD quad AAC", ALC287_FIXUP_TAS2781_I2C), 10902 10839 SND_PCI_QUIRK(0x17aa, 0x38be, "Yoga S980-14.5 proX YC Dual", ALC287_FIXUP_TAS2781_I2C), ··· 10909 10842 SND_PCI_QUIRK(0x17aa, 0x38cb, "Y790 YG DUAL", ALC287_FIXUP_TAS2781_I2C), 10910 10843 SND_PCI_QUIRK(0x17aa, 0x38cd, "Y790 VECO DUAL", ALC287_FIXUP_TAS2781_I2C), 10911 10844 SND_PCI_QUIRK(0x17aa, 0x38d2, "Lenovo Yoga 9 14IMH9", ALC287_FIXUP_YOGA9_14IMH9_BASS_SPK_PIN), 10845 + SND_PCI_QUIRK(0x17aa, 0x38d3, "Yoga S990-16 Pro IMH YC Dual", ALC287_FIXUP_TAS2781_I2C), 10846 + SND_PCI_QUIRK(0x17aa, 0x38d4, "Yoga S990-16 Pro IMH VECO Dual", ALC287_FIXUP_TAS2781_I2C), 10847 + SND_PCI_QUIRK(0x17aa, 0x38d5, "Yoga S990-16 Pro IMH YC Quad", ALC287_FIXUP_TAS2781_I2C), 10848 + SND_PCI_QUIRK(0x17aa, 0x38d6, "Yoga S990-16 Pro IMH VECO Quad", ALC287_FIXUP_TAS2781_I2C), 10912 
10849 SND_PCI_QUIRK(0x17aa, 0x38d7, "Lenovo Yoga 9 14IMH9", ALC287_FIXUP_YOGA9_14IMH9_BASS_SPK_PIN), 10850 + SND_PCI_QUIRK(0x17aa, 0x38df, "Yoga Y990 Intel YC Dual", ALC287_FIXUP_TAS2781_I2C), 10851 + SND_PCI_QUIRK(0x17aa, 0x38e0, "Yoga Y990 Intel VECO Dual", ALC287_FIXUP_TAS2781_I2C), 10852 + SND_PCI_QUIRK(0x17aa, 0x38f8, "Yoga Book 9i", ALC287_FIXUP_TAS2781_I2C), 10913 10853 SND_PCI_QUIRK(0x17aa, 0x38df, "Y990 YG DUAL", ALC287_FIXUP_TAS2781_I2C), 10914 10854 SND_PCI_QUIRK(0x17aa, 0x38f9, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2), 10915 10855 SND_PCI_QUIRK(0x17aa, 0x38fa, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2), 10856 + SND_PCI_QUIRK(0x17aa, 0x38fd, "ThinkBook plus Gen5 Hybrid", ALC287_FIXUP_TAS2781_I2C), 10916 10857 SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), 10917 10858 SND_PCI_QUIRK(0x17aa, 0x3913, "Lenovo 145", ALC236_FIXUP_LENOVO_INV_DMIC), 10859 + SND_PCI_QUIRK(0x17aa, 0x391f, "Yoga S990-16 pro Quad YC Quad", ALC287_FIXUP_TAS2781_I2C), 10860 + SND_PCI_QUIRK(0x17aa, 0x3920, "Yoga S990-16 pro Quad VECO Quad", ALC287_FIXUP_TAS2781_I2C), 10918 10861 SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), 10919 10862 SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI), 10920 10863 SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+1 -1
sound/usb/line6/capture.c
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 */ 7 7 8 8 #include <linux/slab.h>
+1 -1
sound/usb/line6/capture.h
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 */ 7 7 8 8 #ifndef CAPTURE_H
+2 -2
sound/usb/line6/driver.c
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 */ 7 7 8 8 #include <linux/kernel.h> ··· 20 20 #include "midi.h" 21 21 #include "playback.h" 22 22 23 - #define DRIVER_AUTHOR "Markus Grabner <grabner@icg.tugraz.at>" 23 + #define DRIVER_AUTHOR "Markus Grabner <line6@grabner-graz.at>" 24 24 #define DRIVER_DESC "Line 6 USB Driver" 25 25 26 26 /*
+1 -1
sound/usb/line6/driver.h
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 */ 7 7 8 8 #ifndef DRIVER_H
+1 -1
sound/usb/line6/midi.c
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 */ 7 7 8 8 #include <linux/slab.h>
+1 -1
sound/usb/line6/midi.h
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 */ 7 7 8 8 #ifndef MIDI_H
+1 -1
sound/usb/line6/midibuf.c
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 */ 7 7 8 8 #include <linux/slab.h>
+1 -1
sound/usb/line6/midibuf.h
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 */ 7 7 8 8 #ifndef MIDIBUF_H
+1 -1
sound/usb/line6/pcm.c
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 */ 7 7 8 8 #include <linux/slab.h>
+1 -1
sound/usb/line6/pcm.h
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 */ 7 7 8 8 /*
+1 -1
sound/usb/line6/playback.c
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 */ 7 7 8 8 #include <linux/slab.h>
+1 -1
sound/usb/line6/playback.h
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 */ 7 7 8 8 #ifndef PLAYBACK_H
+1 -1
sound/usb/line6/pod.c
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 */ 7 7 8 8 #include <linux/slab.h>
+1 -1
sound/usb/line6/toneport.c
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 * Emil Myhrman (emil.myhrman@gmail.com) 7 7 */ 8 8
+1 -1
sound/usb/line6/variax.c
··· 2 2 /* 3 3 * Line 6 Linux USB driver 4 4 * 5 - * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at) 5 + * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) 6 6 */ 7 7 8 8 #include <linux/slab.h>
+2
sound/usb/mixer_scarlett2.c
··· 5613 5613 info->peq_flt_total_count * 5614 5614 SCARLETT2_BIQUAD_COEFFS, 5615 5615 peq_flt_values); 5616 + if (err < 0) 5617 + return err; 5616 5618 5617 5619 for (i = 0, dst_idx = 0; i < info->dsp_input_count; i++) { 5618 5620 src_idx = i *
+1
sound/usb/stream.c
··· 1067 1067 UAC3_BADD_PD_ID10 : UAC3_BADD_PD_ID11; 1068 1068 pd->pd_d1d0_rec = UAC3_BADD_PD_RECOVER_D1D0; 1069 1069 pd->pd_d2d0_rec = UAC3_BADD_PD_RECOVER_D2D0; 1070 + pd->ctrl_iface = ctrl_intf; 1070 1071 1071 1072 } else { 1072 1073 fp->attributes = parse_uac_endpoint_attributes(chip, alts,
+10 -12
tools/include/uapi/linux/bpf.h
··· 5519 5519 * **-EOPNOTSUPP** if the hash calculation failed or **-EINVAL** if 5520 5520 * invalid arguments are passed. 5521 5521 * 5522 - * void *bpf_kptr_xchg(void *map_value, void *ptr) 5522 + * void *bpf_kptr_xchg(void *dst, void *ptr) 5523 5523 * Description 5524 - * Exchange kptr at pointer *map_value* with *ptr*, and return the 5525 - * old value. *ptr* can be NULL, otherwise it must be a referenced 5526 - * pointer which will be released when this helper is called. 5524 + * Exchange kptr at pointer *dst* with *ptr*, and return the old value. 5525 + * *dst* can be map value or local kptr. *ptr* can be NULL, otherwise 5526 + * it must be a referenced pointer which will be released when this helper 5527 + * is called. 5527 5528 * Return 5528 5529 * The old value of kptr (which can be NULL). The returned pointer 5529 5530 * if not NULL, is a reference which must be released using its ··· 6047 6046 BPF_F_MARK_ENFORCE = (1ULL << 6), 6048 6047 }; 6049 6048 6050 - /* BPF_FUNC_clone_redirect and BPF_FUNC_redirect flags. */ 6051 - enum { 6052 - BPF_F_INGRESS = (1ULL << 0), 6053 - }; 6054 - 6055 6049 /* BPF_FUNC_skb_set_tunnel_key and BPF_FUNC_skb_get_tunnel_key flags. */ 6056 6050 enum { 6057 6051 BPF_F_TUNINFO_IPV6 = (1ULL << 0), ··· 6193 6197 BPF_F_BPRM_SECUREEXEC = (1ULL << 0), 6194 6198 }; 6195 6199 6196 - /* Flags for bpf_redirect_map helper */ 6200 + /* Flags for bpf_redirect and bpf_redirect_map helpers */ 6197 6201 enum { 6198 - BPF_F_BROADCAST = (1ULL << 3), 6199 - BPF_F_EXCLUDE_INGRESS = (1ULL << 4), 6202 + BPF_F_INGRESS = (1ULL << 0), /* used for skb path */ 6203 + BPF_F_BROADCAST = (1ULL << 3), /* used for XDP path */ 6204 + BPF_F_EXCLUDE_INGRESS = (1ULL << 4), /* used for XDP path */ 6205 + #define BPF_F_REDIRECT_FLAGS (BPF_F_INGRESS | BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS) 6200 6206 }; 6201 6207 6202 6208 #define __bpf_md_ptr(type, name) \
+110
tools/testing/radix-tree/maple.c
··· 36317 36317 return 0; 36318 36318 } 36319 36319 36320 + /* 36321 + * test to check that bulk stores do not use wr_rebalance as the store 36322 + * type. 36323 + */ 36324 + static inline void check_bulk_rebalance(struct maple_tree *mt) 36325 + { 36326 + MA_STATE(mas, mt, ULONG_MAX, ULONG_MAX); 36327 + int max = 10; 36328 + 36329 + build_full_tree(mt, 0, 2); 36330 + 36331 + /* erase every entry in the tree */ 36332 + do { 36333 + /* set up bulk store mode */ 36334 + mas_expected_entries(&mas, max); 36335 + mas_erase(&mas); 36336 + MT_BUG_ON(mt, mas.store_type == wr_rebalance); 36337 + } while (mas_prev(&mas, 0) != NULL); 36338 + 36339 + mas_destroy(&mas); 36340 + } 36341 + 36320 36342 void farmer_tests(void) 36321 36343 { 36322 36344 struct maple_node *node; ··· 36348 36326 36349 36327 mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN | MT_FLAGS_USE_RCU); 36350 36328 check_vma_modification(&tree); 36329 + mtree_destroy(&tree); 36330 + 36331 + mt_init(&tree); 36332 + check_bulk_rebalance(&tree); 36351 36333 mtree_destroy(&tree); 36352 36334 36353 36335 tree.ma_root = xa_mk_value(0); ··· 36432 36406 check_nomem(&tree); 36433 36407 } 36434 36408 36409 + static unsigned long get_last_index(struct ma_state *mas) 36410 + { 36411 + struct maple_node *node = mas_mn(mas); 36412 + enum maple_type mt = mte_node_type(mas->node); 36413 + unsigned long *pivots = ma_pivots(node, mt); 36414 + unsigned long last_index = mas_data_end(mas); 36415 + 36416 + BUG_ON(last_index == 0); 36417 + 36418 + return pivots[last_index - 1] + 1; 36419 + } 36420 + 36421 + /* 36422 + * Assert that we handle spanning stores that consume the entirety of the right 36423 + * leaf node correctly. 36424 + */ 36425 + static void test_spanning_store_regression(void) 36426 + { 36427 + unsigned long from = 0, to = 0; 36428 + DEFINE_MTREE(tree); 36429 + MA_STATE(mas, &tree, 0, 0); 36430 + 36431 + /* 36432 + * Build a 3-level tree. 
We require a parent node below the root node 36433 + * and 2 leaf nodes under it, so we can span the entirety of the right 36434 + * hand node. 36435 + */ 36436 + build_full_tree(&tree, 0, 3); 36437 + 36438 + /* Descend into position at depth 2. */ 36439 + mas_reset(&mas); 36440 + mas_start(&mas); 36441 + mas_descend(&mas); 36442 + mas_descend(&mas); 36443 + 36444 + /* 36445 + * We need to establish a tree like the below. 36446 + * 36447 + * Then we can try a store in [from, to] which results in a spanned 36448 + * store across nodes B and C, with the maple state at the time of the 36449 + * write being such that only the subtree at A and below is considered. 36450 + * 36451 + * Height 36452 + * 0 Root Node 36453 + * / \ 36454 + * pivot = to / \ pivot = ULONG_MAX 36455 + * / \ 36456 + * 1 A [-----] ... 36457 + * / \ 36458 + * pivot = from / \ pivot = to 36459 + * / \ 36460 + * 2 (LEAVES) B [-----] [-----] C 36461 + * ^--- Last pivot to. 36462 + */ 36463 + while (true) { 36464 + unsigned long tmp = get_last_index(&mas); 36465 + 36466 + if (mas_next_sibling(&mas)) { 36467 + from = tmp; 36468 + to = mas.max; 36469 + } else { 36470 + break; 36471 + } 36472 + } 36473 + 36474 + BUG_ON(from == 0 && to == 0); 36475 + 36476 + /* Perform the store. */ 36477 + mas_set_range(&mas, from, to); 36478 + mas_store_gfp(&mas, xa_mk_value(0xdead), GFP_KERNEL); 36479 + 36480 + /* If the regression occurs, the validation will fail. */ 36481 + mt_validate(&tree); 36482 + 36483 + /* Cleanup. */ 36484 + __mt_destroy(&tree); 36485 + } 36486 + 36487 + static void regression_tests(void) 36488 + { 36489 + test_spanning_store_regression(); 36490 + } 36491 + 36435 36492 void maple_tree_tests(void) 36436 36493 { 36437 36494 #if !defined(BENCH) 36495 + regression_tests(); 36438 36496 farmer_tests(); 36439 36497 #endif 36440 36498 maple_tree_seed();
+20 -2
tools/testing/selftests/bpf/Makefile
··· 157 157 flow_dissector_load test_flow_dissector test_tcp_check_syncookie_user \ 158 158 test_lirc_mode2_user xdping test_cpp runqslower bench bpf_testmod.ko \ 159 159 xskxceiver xdp_redirect_multi xdp_synproxy veristat xdp_hw_metadata \ 160 - xdp_features bpf_test_no_cfi.ko 160 + xdp_features bpf_test_no_cfi.ko bpf_test_modorder_x.ko \ 161 + bpf_test_modorder_y.ko 161 162 162 163 TEST_GEN_FILES += liburandom_read.so urandom_read sign-file uprobe_multi 163 164 ··· 264 263 ifeq ($(SRCARCH),$(filter $(SRCARCH),x86 riscv)) 265 264 LLD := lld 266 265 else 267 - LLD := ld 266 + LLD := $(shell command -v $(LD)) 268 267 endif 269 268 270 269 # Filter out -static for liburandom_read.so and its dependent targets so that static builds ··· 303 302 $(Q)$(RM) bpf_test_no_cfi/bpf_test_no_cfi.ko # force re-compilation 304 303 $(Q)$(MAKE) $(submake_extras) RESOLVE_BTFIDS=$(RESOLVE_BTFIDS) -C bpf_test_no_cfi 305 304 $(Q)cp bpf_test_no_cfi/bpf_test_no_cfi.ko $@ 305 + 306 + $(OUTPUT)/bpf_test_modorder_x.ko: $(VMLINUX_BTF) $(RESOLVE_BTFIDS) $(wildcard bpf_test_modorder_x/Makefile bpf_test_modorder_x/*.[ch]) 307 + $(call msg,MOD,,$@) 308 + $(Q)$(RM) bpf_test_modorder_x/bpf_test_modorder_x.ko # force re-compilation 309 + $(Q)$(MAKE) $(submake_extras) RESOLVE_BTFIDS=$(RESOLVE_BTFIDS) -C bpf_test_modorder_x 310 + $(Q)cp bpf_test_modorder_x/bpf_test_modorder_x.ko $@ 311 + 312 + $(OUTPUT)/bpf_test_modorder_y.ko: $(VMLINUX_BTF) $(RESOLVE_BTFIDS) $(wildcard bpf_test_modorder_y/Makefile bpf_test_modorder_y/*.[ch]) 313 + $(call msg,MOD,,$@) 314 + $(Q)$(RM) bpf_test_modorder_y/bpf_test_modorder_y.ko # force re-compilation 315 + $(Q)$(MAKE) $(submake_extras) RESOLVE_BTFIDS=$(RESOLVE_BTFIDS) -C bpf_test_modorder_y 316 + $(Q)cp bpf_test_modorder_y/bpf_test_modorder_y.ko $@ 317 + 306 318 307 319 DEFAULT_BPFTOOL := $(HOST_SCRATCH_DIR)/sbin/bpftool 308 320 ifneq ($(CROSS_COMPILE),) ··· 736 722 ip_check_defrag_frags.h 737 723 TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read $(OUTPUT)/bpf_testmod.ko \ 
738 724 $(OUTPUT)/bpf_test_no_cfi.ko \ 725 + $(OUTPUT)/bpf_test_modorder_x.ko \ 726 + $(OUTPUT)/bpf_test_modorder_y.ko \ 739 727 $(OUTPUT)/liburandom_read.so \ 740 728 $(OUTPUT)/xdp_synproxy \ 741 729 $(OUTPUT)/sign-file \ ··· 872 856 $(addprefix $(OUTPUT)/,*.o *.d *.skel.h *.lskel.h *.subskel.h \ 873 857 no_alu32 cpuv4 bpf_gcc bpf_testmod.ko \ 874 858 bpf_test_no_cfi.ko \ 859 + bpf_test_modorder_x.ko \ 860 + bpf_test_modorder_y.ko \ 875 861 liburandom_read.so) \ 876 862 $(OUTPUT)/FEATURE-DUMP.selftests 877 863
+19
tools/testing/selftests/bpf/bpf_test_modorder_x/Makefile
··· 1 + BPF_TESTMOD_DIR := $(realpath $(dir $(abspath $(lastword $(MAKEFILE_LIST))))) 2 + KDIR ?= $(abspath $(BPF_TESTMOD_DIR)/../../../../..) 3 + 4 + ifeq ($(V),1) 5 + Q = 6 + else 7 + Q = @ 8 + endif 9 + 10 + MODULES = bpf_test_modorder_x.ko 11 + 12 + obj-m += bpf_test_modorder_x.o 13 + 14 + all: 15 + +$(Q)make -C $(KDIR) M=$(BPF_TESTMOD_DIR) modules 16 + 17 + clean: 18 + +$(Q)make -C $(KDIR) M=$(BPF_TESTMOD_DIR) clean 19 +
+39
tools/testing/selftests/bpf/bpf_test_modorder_x/bpf_test_modorder_x.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/bpf.h> 3 + #include <linux/btf.h> 4 + #include <linux/module.h> 5 + #include <linux/init.h> 6 + 7 + __bpf_kfunc_start_defs(); 8 + 9 + __bpf_kfunc int bpf_test_modorder_retx(void) 10 + { 11 + return 'x'; 12 + } 13 + 14 + __bpf_kfunc_end_defs(); 15 + 16 + BTF_KFUNCS_START(bpf_test_modorder_kfunc_x_ids) 17 + BTF_ID_FLAGS(func, bpf_test_modorder_retx); 18 + BTF_KFUNCS_END(bpf_test_modorder_kfunc_x_ids) 19 + 20 + static const struct btf_kfunc_id_set bpf_test_modorder_x_set = { 21 + .owner = THIS_MODULE, 22 + .set = &bpf_test_modorder_kfunc_x_ids, 23 + }; 24 + 25 + static int __init bpf_test_modorder_x_init(void) 26 + { 27 + return register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, 28 + &bpf_test_modorder_x_set); 29 + } 30 + 31 + static void __exit bpf_test_modorder_x_exit(void) 32 + { 33 + } 34 + 35 + module_init(bpf_test_modorder_x_init); 36 + module_exit(bpf_test_modorder_x_exit); 37 + 38 + MODULE_DESCRIPTION("BPF selftest ordertest module X"); 39 + MODULE_LICENSE("GPL");
+19
tools/testing/selftests/bpf/bpf_test_modorder_y/Makefile
··· 1 + BPF_TESTMOD_DIR := $(realpath $(dir $(abspath $(lastword $(MAKEFILE_LIST))))) 2 + KDIR ?= $(abspath $(BPF_TESTMOD_DIR)/../../../../..) 3 + 4 + ifeq ($(V),1) 5 + Q = 6 + else 7 + Q = @ 8 + endif 9 + 10 + MODULES = bpf_test_modorder_y.ko 11 + 12 + obj-m += bpf_test_modorder_y.o 13 + 14 + all: 15 + +$(Q)make -C $(KDIR) M=$(BPF_TESTMOD_DIR) modules 16 + 17 + clean: 18 + +$(Q)make -C $(KDIR) M=$(BPF_TESTMOD_DIR) clean 19 +
+39
tools/testing/selftests/bpf/bpf_test_modorder_y/bpf_test_modorder_y.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/bpf.h> 3 + #include <linux/btf.h> 4 + #include <linux/module.h> 5 + #include <linux/init.h> 6 + 7 + __bpf_kfunc_start_defs(); 8 + 9 + __bpf_kfunc int bpf_test_modorder_rety(void) 10 + { 11 + return 'y'; 12 + } 13 + 14 + __bpf_kfunc_end_defs(); 15 + 16 + BTF_KFUNCS_START(bpf_test_modorder_kfunc_y_ids) 17 + BTF_ID_FLAGS(func, bpf_test_modorder_rety); 18 + BTF_KFUNCS_END(bpf_test_modorder_kfunc_y_ids) 19 + 20 + static const struct btf_kfunc_id_set bpf_test_modorder_y_set = { 21 + .owner = THIS_MODULE, 22 + .set = &bpf_test_modorder_kfunc_y_ids, 23 + }; 24 + 25 + static int __init bpf_test_modorder_y_init(void) 26 + { 27 + return register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, 28 + &bpf_test_modorder_y_set); 29 + } 30 + 31 + static void __exit bpf_test_modorder_y_exit(void) 32 + { 33 + } 34 + 35 + module_init(bpf_test_modorder_y_init); 36 + module_exit(bpf_test_modorder_y_exit); 37 + 38 + MODULE_DESCRIPTION("BPF selftest ordertest module Y"); 39 + MODULE_LICENSE("GPL");
+22 -5
tools/testing/selftests/bpf/prog_tests/bpf_iter.c
··· 226 226 ASSERT_OK(pthread_create(&thread_id, NULL, &do_nothing_wait, NULL), 227 227 "pthread_create"); 228 228 229 - skel->bss->tid = getpid(); 229 + skel->bss->tid = gettid(); 230 230 231 231 do_dummy_read_opts(skel->progs.dump_task, opts); 232 232 ··· 249 249 ASSERT_EQ(num_known_tid, num_known, "check_num_known_tid"); 250 250 } 251 251 252 - static void test_task_tid(void) 252 + static void *run_test_task_tid(void *arg) 253 253 { 254 254 LIBBPF_OPTS(bpf_iter_attach_opts, opts); 255 255 union bpf_iter_link_info linfo; 256 256 int num_unknown_tid, num_known_tid; 257 257 258 + ASSERT_NEQ(getpid(), gettid(), "check_new_thread_id"); 259 + 258 260 memset(&linfo, 0, sizeof(linfo)); 259 - linfo.task.tid = getpid(); 261 + linfo.task.tid = gettid(); 260 262 opts.link_info = &linfo; 261 263 opts.link_info_len = sizeof(linfo); 262 264 test_task_common(&opts, 0, 1); 263 265 264 266 linfo.task.tid = 0; 265 267 linfo.task.pid = getpid(); 266 - test_task_common(&opts, 1, 1); 268 + /* This includes the parent thread, this thread, 269 + * and the do_nothing_wait thread 270 + */ 271 + test_task_common(&opts, 2, 1); 267 272 268 273 test_task_common_nocheck(NULL, &num_unknown_tid, &num_known_tid); 269 - ASSERT_GT(num_unknown_tid, 1, "check_num_unknown_tid"); 274 + ASSERT_GT(num_unknown_tid, 2, "check_num_unknown_tid"); 270 275 ASSERT_EQ(num_known_tid, 1, "check_num_known_tid"); 276 + 277 + return NULL; 278 + } 279 + 280 + static void test_task_tid(void) 281 + { 282 + pthread_t thread_id; 283 + 284 + /* Create a new thread so pid and tid aren't the same */ 285 + ASSERT_OK(pthread_create(&thread_id, NULL, &run_test_task_tid, NULL), 286 + "pthread_create"); 287 + ASSERT_FALSE(pthread_join(thread_id, NULL), "pthread_join"); 271 288 } 272 289 273 290 static void test_task_pid(void)
+1 -1
tools/testing/selftests/bpf/prog_tests/cgroup_ancestor.c
··· 35 35 if (!ASSERT_OK_FD(sock, "create socket")) 36 36 return sock; 37 37 38 - if (!ASSERT_OK(connect(sock, &addr, sizeof(addr)), "connect")) { 38 + if (!ASSERT_OK(connect(sock, (struct sockaddr *)&addr, sizeof(addr)), "connect")) { 39 39 close(sock); 40 40 return -1; 41 41 }
+1
tools/testing/selftests/bpf/prog_tests/cpumask.c
··· 23 23 "test_global_mask_array_l2_rcu", 24 24 "test_global_mask_nested_rcu", 25 25 "test_global_mask_nested_deep_rcu", 26 + "test_global_mask_nested_deep_array_rcu", 26 27 "test_cpumask_weight", 27 28 }; 28 29
+55
tools/testing/selftests/bpf/prog_tests/kfunc_module_order.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <test_progs.h> 3 + #include <testing_helpers.h> 4 + 5 + #include "kfunc_module_order.skel.h" 6 + 7 + static int test_run_prog(const struct bpf_program *prog, 8 + struct bpf_test_run_opts *opts) 9 + { 10 + int err; 11 + 12 + err = bpf_prog_test_run_opts(bpf_program__fd(prog), opts); 13 + if (!ASSERT_OK(err, "bpf_prog_test_run_opts")) 14 + return err; 15 + 16 + if (!ASSERT_EQ((int)opts->retval, 0, bpf_program__name(prog))) 17 + return -EINVAL; 18 + 19 + return 0; 20 + } 21 + 22 + void test_kfunc_module_order(void) 23 + { 24 + struct kfunc_module_order *skel; 25 + char pkt_data[64] = {}; 26 + int err = 0; 27 + 28 + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, test_opts, .data_in = pkt_data, 29 + .data_size_in = sizeof(pkt_data)); 30 + 31 + err = load_module("bpf_test_modorder_x.ko", 32 + env_verbosity > VERBOSE_NONE); 33 + if (!ASSERT_OK(err, "load bpf_test_modorder_x.ko")) 34 + return; 35 + 36 + err = load_module("bpf_test_modorder_y.ko", 37 + env_verbosity > VERBOSE_NONE); 38 + if (!ASSERT_OK(err, "load bpf_test_modorder_y.ko")) 39 + goto exit_modx; 40 + 41 + skel = kfunc_module_order__open_and_load(); 42 + if (!ASSERT_OK_PTR(skel, "kfunc_module_order__open_and_load()")) { 43 + err = -EINVAL; 44 + goto exit_mods; 45 + } 46 + 47 + test_run_prog(skel->progs.call_kfunc_xy, &test_opts); 48 + test_run_prog(skel->progs.call_kfunc_yx, &test_opts); 49 + 50 + kfunc_module_order__destroy(skel); 51 + exit_mods: 52 + unload_module("bpf_test_modorder_y", env_verbosity > VERBOSE_NONE); 53 + exit_modx: 54 + unload_module("bpf_test_modorder_x", env_verbosity > VERBOSE_NONE); 55 + }
+2
tools/testing/selftests/bpf/prog_tests/verifier.c
··· 44 44 #include "verifier_ld_ind.skel.h" 45 45 #include "verifier_ldsx.skel.h" 46 46 #include "verifier_leak_ptr.skel.h" 47 + #include "verifier_linked_scalars.skel.h" 47 48 #include "verifier_loops1.skel.h" 48 49 #include "verifier_lwt.skel.h" 49 50 #include "verifier_map_in_map.skel.h" ··· 171 170 void test_verifier_ld_ind(void) { RUN(verifier_ld_ind); } 172 171 void test_verifier_ldsx(void) { RUN(verifier_ldsx); } 173 172 void test_verifier_leak_ptr(void) { RUN(verifier_leak_ptr); } 173 + void test_verifier_linked_scalars(void) { RUN(verifier_linked_scalars); } 174 174 void test_verifier_loops1(void) { RUN(verifier_loops1); } 175 175 void test_verifier_lwt(void) { RUN(verifier_lwt); } 176 176 void test_verifier_map_in_map(void) { RUN(verifier_map_in_map); }
+118 -9
tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 + #include <arpa/inet.h> 2 3 #include <uapi/linux/bpf.h> 3 4 #include <linux/if_link.h> 5 + #include <network_helpers.h> 6 + #include <net/if.h> 4 7 #include <test_progs.h> 5 8 6 9 #include "test_xdp_devmap_helpers.skel.h" ··· 11 8 #include "test_xdp_with_devmap_helpers.skel.h" 12 9 13 10 #define IFINDEX_LO 1 11 + #define TEST_NS "devmap_attach_ns" 14 12 15 13 static void test_xdp_with_devmap_helpers(void) 16 14 { 17 - struct test_xdp_with_devmap_helpers *skel; 15 + struct test_xdp_with_devmap_helpers *skel = NULL; 18 16 struct bpf_prog_info info = {}; 19 17 struct bpf_devmap_val val = { 20 18 .ifindex = IFINDEX_LO, 21 19 }; 22 20 __u32 len = sizeof(info); 23 - int err, dm_fd, map_fd; 21 + int err, dm_fd, dm_fd_redir, map_fd; 22 + struct nstoken *nstoken = NULL; 23 + char data[10] = {}; 24 24 __u32 idx = 0; 25 25 26 + SYS(out_close, "ip netns add %s", TEST_NS); 27 + nstoken = open_netns(TEST_NS); 28 + if (!ASSERT_OK_PTR(nstoken, "open_netns")) 29 + goto out_close; 30 + SYS(out_close, "ip link set dev lo up"); 26 31 27 32 skel = test_xdp_with_devmap_helpers__open_and_load(); 28 33 if (!ASSERT_OK_PTR(skel, "test_xdp_with_devmap_helpers__open_and_load")) 29 - return; 30 - 31 - dm_fd = bpf_program__fd(skel->progs.xdp_redir_prog); 32 - err = bpf_xdp_attach(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE, NULL); 33 - if (!ASSERT_OK(err, "Generic attach of program with 8-byte devmap")) 34 34 goto out_close; 35 35 36 - err = bpf_xdp_detach(IFINDEX_LO, XDP_FLAGS_SKB_MODE, NULL); 37 - ASSERT_OK(err, "XDP program detach"); 36 + dm_fd_redir = bpf_program__fd(skel->progs.xdp_redir_prog); 37 + err = bpf_xdp_attach(IFINDEX_LO, dm_fd_redir, XDP_FLAGS_SKB_MODE, NULL); 38 + if (!ASSERT_OK(err, "Generic attach of program with 8-byte devmap")) 39 + goto out_close; 38 40 39 41 dm_fd = bpf_program__fd(skel->progs.xdp_dummy_dm); 40 42 map_fd = bpf_map__fd(skel->maps.dm_ports); ··· 54 46 err = bpf_map_lookup_elem(map_fd, &idx, &val); 55 47 
ASSERT_OK(err, "Read devmap entry"); 56 48 ASSERT_EQ(info.id, val.bpf_prog.id, "Match program id to devmap entry prog_id"); 49 + 50 + /* send a packet to trigger any potential bugs in there */ 51 + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts, 52 + .data_in = &data, 53 + .data_size_in = 10, 54 + .flags = BPF_F_TEST_XDP_LIVE_FRAMES, 55 + .repeat = 1, 56 + ); 57 + err = bpf_prog_test_run_opts(dm_fd_redir, &opts); 58 + ASSERT_OK(err, "XDP test run"); 59 + 60 + /* wait for the packets to be flushed */ 61 + kern_sync_rcu(); 62 + 63 + err = bpf_xdp_detach(IFINDEX_LO, XDP_FLAGS_SKB_MODE, NULL); 64 + ASSERT_OK(err, "XDP program detach"); 57 65 58 66 /* can not attach BPF_XDP_DEVMAP program to a device */ 59 67 err = bpf_xdp_attach(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE, NULL); ··· 91 67 ASSERT_NEQ(err, 0, "Add BPF_XDP program with frags to devmap entry"); 92 68 93 69 out_close: 70 + close_netns(nstoken); 71 + SYS_NOFAIL("ip netns del %s", TEST_NS); 94 72 test_xdp_with_devmap_helpers__destroy(skel); 95 73 } 96 74 ··· 150 124 test_xdp_with_devmap_frags_helpers__destroy(skel); 151 125 } 152 126 127 + static void test_xdp_with_devmap_helpers_veth(void) 128 + { 129 + struct test_xdp_with_devmap_helpers *skel = NULL; 130 + struct bpf_prog_info info = {}; 131 + struct bpf_devmap_val val = {}; 132 + struct nstoken *nstoken = NULL; 133 + __u32 len = sizeof(info); 134 + int err, dm_fd, dm_fd_redir, map_fd, ifindex_dst; 135 + char data[10] = {}; 136 + __u32 idx = 0; 137 + 138 + SYS(out_close, "ip netns add %s", TEST_NS); 139 + nstoken = open_netns(TEST_NS); 140 + if (!ASSERT_OK_PTR(nstoken, "open_netns")) 141 + goto out_close; 142 + 143 + SYS(out_close, "ip link add veth_src type veth peer name veth_dst"); 144 + SYS(out_close, "ip link set dev veth_src up"); 145 + SYS(out_close, "ip link set dev veth_dst up"); 146 + 147 + val.ifindex = if_nametoindex("veth_src"); 148 + ifindex_dst = if_nametoindex("veth_dst"); 149 + if (!ASSERT_NEQ(val.ifindex, 0, "val.ifindex") || 150 + 
!ASSERT_NEQ(ifindex_dst, 0, "ifindex_dst")) 151 + goto out_close; 152 + 153 + skel = test_xdp_with_devmap_helpers__open_and_load(); 154 + if (!ASSERT_OK_PTR(skel, "test_xdp_with_devmap_helpers__open_and_load")) 155 + goto out_close; 156 + 157 + dm_fd_redir = bpf_program__fd(skel->progs.xdp_redir_prog); 158 + err = bpf_xdp_attach(val.ifindex, dm_fd_redir, XDP_FLAGS_DRV_MODE, NULL); 159 + if (!ASSERT_OK(err, "Attach of program with 8-byte devmap")) 160 + goto out_close; 161 + 162 + dm_fd = bpf_program__fd(skel->progs.xdp_dummy_dm); 163 + map_fd = bpf_map__fd(skel->maps.dm_ports); 164 + err = bpf_prog_get_info_by_fd(dm_fd, &info, &len); 165 + if (!ASSERT_OK(err, "bpf_prog_get_info_by_fd")) 166 + goto out_close; 167 + 168 + val.bpf_prog.fd = dm_fd; 169 + err = bpf_map_update_elem(map_fd, &idx, &val, 0); 170 + ASSERT_OK(err, "Add program to devmap entry"); 171 + 172 + err = bpf_map_lookup_elem(map_fd, &idx, &val); 173 + ASSERT_OK(err, "Read devmap entry"); 174 + ASSERT_EQ(info.id, val.bpf_prog.id, "Match program id to devmap entry prog_id"); 175 + 176 + /* attach dummy to other side to enable reception */ 177 + dm_fd = bpf_program__fd(skel->progs.xdp_dummy_prog); 178 + err = bpf_xdp_attach(ifindex_dst, dm_fd, XDP_FLAGS_DRV_MODE, NULL); 179 + if (!ASSERT_OK(err, "Attach of dummy XDP")) 180 + goto out_close; 181 + 182 + /* send a packet to trigger any potential bugs in there */ 183 + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts, 184 + .data_in = &data, 185 + .data_size_in = 10, 186 + .flags = BPF_F_TEST_XDP_LIVE_FRAMES, 187 + .repeat = 1, 188 + ); 189 + err = bpf_prog_test_run_opts(dm_fd_redir, &opts); 190 + ASSERT_OK(err, "XDP test run"); 191 + 192 + /* wait for the packets to be flushed */ 193 + kern_sync_rcu(); 194 + 195 + err = bpf_xdp_detach(val.ifindex, XDP_FLAGS_DRV_MODE, NULL); 196 + ASSERT_OK(err, "XDP program detach"); 197 + 198 + err = bpf_xdp_detach(ifindex_dst, XDP_FLAGS_DRV_MODE, NULL); 199 + ASSERT_OK(err, "XDP program detach"); 200 + 201 + out_close: 202 + 
close_netns(nstoken); 203 + SYS_NOFAIL("ip netns del %s", TEST_NS); 204 + test_xdp_with_devmap_helpers__destroy(skel); 205 + } 206 + 153 207 void serial_test_xdp_devmap_attach(void) 154 208 { 155 209 if (test__start_subtest("DEVMAP with programs in entries")) ··· 240 134 241 135 if (test__start_subtest("Verifier check of DEVMAP programs")) 242 136 test_neg_xdp_devmap_helpers(); 137 + 138 + if (test__start_subtest("DEVMAP with programs in entries on veth")) 139 + test_xdp_with_devmap_helpers_veth(); 243 140 }
+5
tools/testing/selftests/bpf/progs/cpumask_common.h
··· 7 7 #include "errno.h" 8 8 #include <stdbool.h> 9 9 10 + /* Should use BTF_FIELDS_MAX, but it is not always available in vmlinux.h, 11 + * so use the hard-coded number as a workaround. 12 + */ 13 + #define CPUMASK_KPTR_FIELDS_MAX 11 14 + 10 15 int err; 11 16 12 17 #define private(name) SEC(".bss." #name) __attribute__((aligned(8)))
+35
tools/testing/selftests/bpf/progs/cpumask_failure.c
··· 10 10 11 11 char _license[] SEC("license") = "GPL"; 12 12 13 + struct kptr_nested_array_2 { 14 + struct bpf_cpumask __kptr * mask; 15 + }; 16 + 17 + struct kptr_nested_array_1 { 18 + /* Make btf_parse_fields() in map_create() return -E2BIG */ 19 + struct kptr_nested_array_2 d_2[CPUMASK_KPTR_FIELDS_MAX + 1]; 20 + }; 21 + 22 + struct kptr_nested_array { 23 + struct kptr_nested_array_1 d_1; 24 + }; 25 + 26 + private(MASK_NESTED) static struct kptr_nested_array global_mask_nested_arr; 27 + 13 28 /* Prototype for all of the program trace events below: 14 29 * 15 30 * TRACE_EVENT(task_newtask, ··· 199 184 bpf_rcu_read_unlock(); 200 185 if (prev) 201 186 bpf_cpumask_release(prev); 187 + 188 + return 0; 189 + } 190 + 191 + SEC("tp_btf/task_newtask") 192 + __failure __msg("has no valid kptr") 193 + int BPF_PROG(test_invalid_nested_array, struct task_struct *task, u64 clone_flags) 194 + { 195 + struct bpf_cpumask *local, *prev; 196 + 197 + local = create_cpumask(); 198 + if (!local) 199 + return 0; 200 + 201 + prev = bpf_kptr_xchg(&global_mask_nested_arr.d_1.d_2[CPUMASK_KPTR_FIELDS_MAX].mask, local); 202 + if (prev) { 203 + bpf_cpumask_release(prev); 204 + err = 3; 205 + return 0; 206 + } 202 207 203 208 return 0; 204 209 }
+76 -2
tools/testing/selftests/bpf/progs/cpumask_success.c
··· 31 31 struct kptr_nested_pair ptr_pairs[3]; 32 32 }; 33 33 34 + struct kptr_nested_deep_array_1_2 { 35 + int dummy; 36 + struct bpf_cpumask __kptr * mask[CPUMASK_KPTR_FIELDS_MAX]; 37 + }; 38 + 39 + struct kptr_nested_deep_array_1_1 { 40 + int dummy; 41 + struct kptr_nested_deep_array_1_2 d_2; 42 + }; 43 + 44 + struct kptr_nested_deep_array_1 { 45 + long dummy; 46 + struct kptr_nested_deep_array_1_1 d_1; 47 + }; 48 + 49 + struct kptr_nested_deep_array_2_2 { 50 + long dummy[2]; 51 + struct bpf_cpumask __kptr * mask; 52 + }; 53 + 54 + struct kptr_nested_deep_array_2_1 { 55 + int dummy; 56 + struct kptr_nested_deep_array_2_2 d_2[CPUMASK_KPTR_FIELDS_MAX]; 57 + }; 58 + 59 + struct kptr_nested_deep_array_2 { 60 + long dummy; 61 + struct kptr_nested_deep_array_2_1 d_1; 62 + }; 63 + 64 + struct kptr_nested_deep_array_3_2 { 65 + long dummy[2]; 66 + struct bpf_cpumask __kptr * mask; 67 + }; 68 + 69 + struct kptr_nested_deep_array_3_1 { 70 + int dummy; 71 + struct kptr_nested_deep_array_3_2 d_2; 72 + }; 73 + 74 + struct kptr_nested_deep_array_3 { 75 + long dummy; 76 + struct kptr_nested_deep_array_3_1 d_1[CPUMASK_KPTR_FIELDS_MAX]; 77 + }; 78 + 34 79 private(MASK) static struct bpf_cpumask __kptr * global_mask_array[2]; 35 80 private(MASK) static struct bpf_cpumask __kptr * global_mask_array_l2[2][1]; 36 81 private(MASK) static struct bpf_cpumask __kptr * global_mask_array_one[1]; 37 82 private(MASK) static struct kptr_nested global_mask_nested[2]; 38 83 private(MASK_DEEP) static struct kptr_nested_deep global_mask_nested_deep; 84 + private(MASK_1) static struct kptr_nested_deep_array_1 global_mask_nested_deep_array_1; 85 + private(MASK_2) static struct kptr_nested_deep_array_2 global_mask_nested_deep_array_2; 86 + private(MASK_3) static struct kptr_nested_deep_array_3 global_mask_nested_deep_array_3; 39 87 40 88 static bool is_test_task(void) 41 89 { ··· 591 543 goto err_exit; 592 544 } 593 545 594 - /* [<mask 0>, NULL] */ 595 - if (!*mask0 || *mask1) { 546 + /* [<mask 0>, 
*] */ 547 + if (!*mask0) { 596 548 err = 2; 549 + goto err_exit; 550 + } 551 + 552 + if (!mask1) 553 + goto err_exit; 554 + 555 + /* [*, NULL] */ 556 + if (*mask1) { 557 + err = 3; 597 558 goto err_exit; 598 559 } 599 560 ··· 685 628 if (r) 686 629 return r; 687 630 } 631 + return 0; 632 + } 633 + 634 + SEC("tp_btf/task_newtask") 635 + int BPF_PROG(test_global_mask_nested_deep_array_rcu, struct task_struct *task, u64 clone_flags) 636 + { 637 + int i; 638 + 639 + for (i = 0; i < CPUMASK_KPTR_FIELDS_MAX; i++) 640 + _global_mask_array_rcu(&global_mask_nested_deep_array_1.d_1.d_2.mask[i], NULL); 641 + 642 + for (i = 0; i < CPUMASK_KPTR_FIELDS_MAX; i++) 643 + _global_mask_array_rcu(&global_mask_nested_deep_array_2.d_1.d_2[i].mask, NULL); 644 + 645 + for (i = 0; i < CPUMASK_KPTR_FIELDS_MAX; i++) 646 + _global_mask_array_rcu(&global_mask_nested_deep_array_3.d_1[i].d_2.mask, NULL); 647 + 688 648 return 0; 689 649 } 690 650
+30
tools/testing/selftests/bpf/progs/kfunc_module_order.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/bpf.h> 3 + #include <bpf/bpf_helpers.h> 4 + 5 + extern int bpf_test_modorder_retx(void) __ksym; 6 + extern int bpf_test_modorder_rety(void) __ksym; 7 + 8 + SEC("classifier") 9 + int call_kfunc_xy(struct __sk_buff *skb) 10 + { 11 + int ret1, ret2; 12 + 13 + ret1 = bpf_test_modorder_retx(); 14 + ret2 = bpf_test_modorder_rety(); 15 + 16 + return ret1 == 'x' && ret2 == 'y' ? 0 : -1; 17 + } 18 + 19 + SEC("classifier") 20 + int call_kfunc_yx(struct __sk_buff *skb) 21 + { 22 + int ret1, ret2; 23 + 24 + ret1 = bpf_test_modorder_rety(); 25 + ret2 = bpf_test_modorder_retx(); 26 + 27 + return ret1 == 'y' && ret2 == 'x' ? 0 : -1; 28 + } 29 + 30 + char _license[] SEC("license") = "GPL";
+1 -1
tools/testing/selftests/bpf/progs/test_xdp_with_devmap_helpers.c
··· 12 12 SEC("xdp") 13 13 int xdp_redir_prog(struct xdp_md *ctx) 14 14 { 15 - return bpf_redirect_map(&dm_ports, 1, 0); 15 + return bpf_redirect_map(&dm_ports, 0, 0); 16 16 } 17 17 18 18 /* invalid program on DEVMAP entry;
+34
tools/testing/selftests/bpf/progs/verifier_linked_scalars.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/bpf.h> 4 + #include <bpf/bpf_helpers.h> 5 + #include "bpf_misc.h" 6 + 7 + SEC("socket") 8 + __description("scalars: find linked scalars") 9 + __failure 10 + __msg("math between fp pointer and 2147483647 is not allowed") 11 + __naked void scalars(void) 12 + { 13 + asm volatile (" \ 14 + r0 = 0; \ 15 + r1 = 0x80000001 ll; \ 16 + r1 /= 1; \ 17 + r2 = r1; \ 18 + r4 = r1; \ 19 + w2 += 0x7FFFFFFF; \ 20 + w4 += 0; \ 21 + if r2 == 0 goto l1; \ 22 + exit; \ 23 + l1: \ 24 + r4 >>= 63; \ 25 + r3 = 1; \ 26 + r3 -= r4; \ 27 + r3 *= 0x7FFFFFFF; \ 28 + r3 += r10; \ 29 + *(u8*)(r3 - 1) = r0; \ 30 + exit; \ 31 + " ::: __clobber_all); 32 + } 33 + 34 + char _license[] SEC("license") = "GPL";
+40
tools/testing/selftests/bpf/progs/verifier_movsx.c
··· 287 287 : __clobber_all); 288 288 } 289 289 290 + SEC("socket") 291 + __description("MOV64SX, S8, unsigned range_check") 292 + __success __retval(0) 293 + __naked void mov64sx_s8_range_check(void) 294 + { 295 + asm volatile (" \ 296 + call %[bpf_get_prandom_u32]; \ 297 + r0 &= 0x1; \ 298 + r0 += 0xfe; \ 299 + r0 = (s8)r0; \ 300 + if r0 < 0xfffffffffffffffe goto label_%=; \ 301 + r0 = 0; \ 302 + exit; \ 303 + label_%=: \ 304 + exit; \ 305 + " : 306 + : __imm(bpf_get_prandom_u32) 307 + : __clobber_all); 308 + } 309 + 310 + SEC("socket") 311 + __description("MOV32SX, S8, unsigned range_check") 312 + __success __retval(0) 313 + __naked void mov32sx_s8_range_check(void) 314 + { 315 + asm volatile (" \ 316 + call %[bpf_get_prandom_u32]; \ 317 + w0 &= 0x1; \ 318 + w0 += 0xfe; \ 319 + w0 = (s8)w0; \ 320 + if w0 < 0xfffffffe goto label_%=; \ 321 + r0 = 0; \ 322 + exit; \ 323 + label_%=: \ 324 + exit; \ 325 + " : 326 + : __imm(bpf_get_prandom_u32) 327 + : __clobber_all); 328 + } 329 + 290 330 #else 291 331 292 332 SEC("socket")
+67
tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
··· 760 760 : __clobber_all); 761 761 } 762 762 763 + SEC("socket") 764 + /* Note the flag, see verifier.c:opt_subreg_zext_lo32_rnd_hi32() */ 765 + __flag(BPF_F_TEST_RND_HI32) 766 + __success 767 + /* This test was added because of a bug in verifier.c:sync_linked_regs(), 768 + * upon range propagation it destroyed subreg_def marks for registers. 769 + * The subreg_def mark is used to decide whether zero extension instructions 770 + * are needed when register is read. When BPF_F_TEST_RND_HI32 is set it 771 + * also causes generation of statements to randomize upper halves of 772 + * read registers. 773 + * 774 + * The test is written in a way to return an upper half of a register 775 + * that is affected by range propagation and must have it's subreg_def 776 + * preserved. This gives a return value of 0 and leads to undefined 777 + * return value if subreg_def mark is not preserved. 778 + */ 779 + __retval(0) 780 + /* Check that verifier believes r1/r0 are zero at exit */ 781 + __log_level(2) 782 + __msg("4: (77) r1 >>= 32 ; R1_w=0") 783 + __msg("5: (bf) r0 = r1 ; R0_w=0 R1_w=0") 784 + __msg("6: (95) exit") 785 + __msg("from 3 to 4") 786 + __msg("4: (77) r1 >>= 32 ; R1_w=0") 787 + __msg("5: (bf) r0 = r1 ; R0_w=0 R1_w=0") 788 + __msg("6: (95) exit") 789 + /* Verify that statements to randomize upper half of r1 had not been 790 + * generated. 791 + */ 792 + __xlated("call unknown") 793 + __xlated("r0 &= 2147483647") 794 + __xlated("w1 = w0") 795 + /* This is how disasm.c prints BPF_ZEXT_REG at the moment, x86 and arm 796 + * are the only CI archs that do not need zero extension for subregs. 
797 + */ 798 + #if !defined(__TARGET_ARCH_x86) && !defined(__TARGET_ARCH_arm64) 799 + __xlated("w1 = w1") 800 + #endif 801 + __xlated("if w0 < 0xa goto pc+0") 802 + __xlated("r1 >>= 32") 803 + __xlated("r0 = r1") 804 + __xlated("exit") 805 + __naked void linked_regs_and_subreg_def(void) 806 + { 807 + asm volatile ( 808 + "call %[bpf_ktime_get_ns];" 809 + /* make sure r0 is in 32-bit range, otherwise w1 = w0 won't 810 + * assign same IDs to registers. 811 + */ 812 + "r0 &= 0x7fffffff;" 813 + /* link w1 and w0 via ID */ 814 + "w1 = w0;" 815 + /* 'if' statement propagates range info from w0 to w1, 816 + * but should not affect w1->subreg_def property. 817 + */ 818 + "if w0 < 10 goto +0;" 819 + /* r1 is read here, on archs that require subreg zero 820 + * extension this would cause zext patch generation. 821 + */ 822 + "r1 >>= 32;" 823 + "r0 = r1;" 824 + "exit;" 825 + : 826 + : __imm(bpf_ktime_get_ns) 827 + : __clobber_all); 828 + } 829 + 763 830 char _license[] SEC("license") = "GPL";
+22 -12
tools/testing/selftests/bpf/testing_helpers.c
··· 367 367 return syscall(__NR_delete_module, name, flags); 368 368 } 369 369 370 - int unload_bpf_testmod(bool verbose) 370 + int unload_module(const char *name, bool verbose) 371 371 { 372 372 int ret, cnt = 0; 373 373 ··· 375 375 fprintf(stdout, "Failed to trigger kernel-side RCU sync!\n"); 376 376 377 377 for (;;) { 378 - ret = delete_module("bpf_testmod", 0); 378 + ret = delete_module(name, 0); 379 379 if (!ret || errno != EAGAIN) 380 380 break; 381 381 if (++cnt > 10000) { 382 - fprintf(stdout, "Unload of bpf_testmod timed out\n"); 382 + fprintf(stdout, "Unload of %s timed out\n", name); 383 383 break; 384 384 } 385 385 usleep(100); ··· 388 388 if (ret) { 389 389 if (errno == ENOENT) { 390 390 if (verbose) 391 - fprintf(stdout, "bpf_testmod.ko is already unloaded.\n"); 391 + fprintf(stdout, "%s.ko is already unloaded.\n", name); 392 392 return -1; 393 393 } 394 - fprintf(stdout, "Failed to unload bpf_testmod.ko from kernel: %d\n", -errno); 394 + fprintf(stdout, "Failed to unload %s.ko from kernel: %d\n", name, -errno); 395 395 return -1; 396 396 } 397 397 if (verbose) 398 - fprintf(stdout, "Successfully unloaded bpf_testmod.ko.\n"); 398 + fprintf(stdout, "Successfully unloaded %s.ko.\n", name); 399 399 return 0; 400 400 } 401 401 402 - int load_bpf_testmod(bool verbose) 402 + int load_module(const char *path, bool verbose) 403 403 { 404 404 int fd; 405 405 406 406 if (verbose) 407 - fprintf(stdout, "Loading bpf_testmod.ko...\n"); 407 + fprintf(stdout, "Loading %s...\n", path); 408 408 409 - fd = open("bpf_testmod.ko", O_RDONLY); 409 + fd = open(path, O_RDONLY); 410 410 if (fd < 0) { 411 - fprintf(stdout, "Can't find bpf_testmod.ko kernel module: %d\n", -errno); 411 + fprintf(stdout, "Can't find %s kernel module: %d\n", path, -errno); 412 412 return -ENOENT; 413 413 } 414 414 if (finit_module(fd, "", 0)) { 415 - fprintf(stdout, "Failed to load bpf_testmod.ko into the kernel: %d\n", -errno); 415 + fprintf(stdout, "Failed to load %s into the kernel: %d\n", 
path, -errno); 416 416 close(fd); 417 417 return -EINVAL; 418 418 } 419 419 close(fd); 420 420 421 421 if (verbose) 422 - fprintf(stdout, "Successfully loaded bpf_testmod.ko.\n"); 422 + fprintf(stdout, "Successfully loaded %s.\n", path); 423 423 return 0; 424 + } 425 + 426 + int unload_bpf_testmod(bool verbose) 427 + { 428 + return unload_module("bpf_testmod", verbose); 429 + } 430 + 431 + int load_bpf_testmod(bool verbose) 432 + { 433 + return load_module("bpf_testmod.ko", verbose); 424 434 } 425 435 426 436 /*
+2
tools/testing/selftests/bpf/testing_helpers.h
··· 38 38 int kern_sync_rcu(void); 39 39 int finit_module(int fd, const char *param_values, int flags); 40 40 int delete_module(const char *name, int flags); 41 + int load_module(const char *path, bool verbose); 42 + int unload_module(const char *name, bool verbose); 41 43 42 44 static inline __u64 get_time_ns(void) 43 45 {
+1
tools/testing/selftests/hid/Makefile
··· 18 18 TEST_PROGS += hid-wacom.sh 19 19 20 20 TEST_FILES := run-hid-tools-tests.sh 21 + TEST_FILES += tests 21 22 22 23 CXX ?= $(CROSS_COMPILE)g++ 23 24
+3
tools/testing/selftests/kvm/Makefile
··· 248 248 ifeq ($(ARCH),s390) 249 249 CFLAGS += -march=z10 250 250 endif 251 + ifeq ($(ARCH),x86) 252 + CFLAGS += -march=x86-64-v2 253 + endif 251 254 ifeq ($(ARCH),arm64) 252 255 tools_dir := $(top_srcdir)/tools 253 256 arm64_tools_dir := $(tools_dir)/arch/arm64/tools/
+13 -3
tools/testing/selftests/kvm/aarch64/set_id_regs.c
··· 68 68 } 69 69 70 70 static const struct reg_ftr_bits ftr_id_aa64dfr0_el1[] = { 71 + S_REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64DFR0_EL1, DoubleLock, 0), 72 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64DFR0_EL1, WRPs, 0), 71 73 S_REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64DFR0_EL1, PMUVer, 0), 72 74 REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64DFR0_EL1, DebugVer, ID_AA64DFR0_EL1_DebugVer_IMP), 73 75 REG_FTR_END, ··· 133 131 REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL2, 0), 134 132 REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL1, 0), 135 133 REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL0, 0), 134 + REG_FTR_END, 135 + }; 136 + 137 + static const struct reg_ftr_bits ftr_id_aa64pfr1_el1[] = { 138 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, CSV2_frac, 0), 139 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, SSBS, ID_AA64PFR1_EL1_SSBS_NI), 140 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, BT, 0), 136 141 REG_FTR_END, 137 142 }; 138 143 ··· 209 200 TEST_REG(SYS_ID_AA64ISAR1_EL1, ftr_id_aa64isar1_el1), 210 201 TEST_REG(SYS_ID_AA64ISAR2_EL1, ftr_id_aa64isar2_el1), 211 202 TEST_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0_el1), 203 + TEST_REG(SYS_ID_AA64PFR1_EL1, ftr_id_aa64pfr1_el1), 212 204 TEST_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0_el1), 213 205 TEST_REG(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1_el1), 214 206 TEST_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2_el1), ··· 579 569 test_cnt = ARRAY_SIZE(ftr_id_aa64dfr0_el1) + ARRAY_SIZE(ftr_id_dfr0_el1) + 580 570 ARRAY_SIZE(ftr_id_aa64isar0_el1) + ARRAY_SIZE(ftr_id_aa64isar1_el1) + 581 571 ARRAY_SIZE(ftr_id_aa64isar2_el1) + ARRAY_SIZE(ftr_id_aa64pfr0_el1) + 582 - ARRAY_SIZE(ftr_id_aa64mmfr0_el1) + ARRAY_SIZE(ftr_id_aa64mmfr1_el1) + 583 - ARRAY_SIZE(ftr_id_aa64mmfr2_el1) + ARRAY_SIZE(ftr_id_aa64zfr0_el1) - 584 - ARRAY_SIZE(test_regs) + 2; 572 + ARRAY_SIZE(ftr_id_aa64pfr1_el1) + ARRAY_SIZE(ftr_id_aa64mmfr0_el1) + 573 + ARRAY_SIZE(ftr_id_aa64mmfr1_el1) + ARRAY_SIZE(ftr_id_aa64mmfr2_el1) + 574 + ARRAY_SIZE(ftr_id_aa64zfr0_el1) - ARRAY_SIZE(test_regs) + 2; 585 575 586 576 ksft_set_plan(test_cnt); 587 577
+1 -1
tools/testing/selftests/kvm/x86_64/cpuid_test.c
··· 60 60 { 61 61 int i; 62 62 63 - for (i = 0; i < sizeof(mangled_cpuids); i++) { 63 + for (i = 0; i < ARRAY_SIZE(mangled_cpuids); i++) { 64 64 if (mangled_cpuids[i].function == entrie->function && 65 65 mangled_cpuids[i].index == entrie->index) 66 66 return true;
+1 -1
tools/testing/selftests/mm/khugepaged.c
··· 1091 1091 fprintf(stderr, "\n\t\"file,all\" mem_type requires kernel built with\n"); 1092 1092 fprintf(stderr, "\tCONFIG_READ_ONLY_THP_FOR_FS=y\n"); 1093 1093 fprintf(stderr, "\n\tif [dir] is a (sub)directory of a tmpfs mount, tmpfs must be\n"); 1094 - fprintf(stderr, "\tmounted with huge=madvise option for khugepaged tests to work\n"); 1094 + fprintf(stderr, "\tmounted with huge=advise option for khugepaged tests to work\n"); 1095 1095 fprintf(stderr, "\n\tSupported Options:\n"); 1096 1096 fprintf(stderr, "\t\t-h: This help message.\n"); 1097 1097 fprintf(stderr, "\t\t-s: mTHP size, expressed as page order.\n");
+3 -2
tools/testing/selftests/mm/uffd-common.c
··· 18 18 unsigned long long *count_verify; 19 19 uffd_test_ops_t *uffd_test_ops; 20 20 uffd_test_case_ops_t *uffd_test_case_ops; 21 - atomic_bool ready_for_fork; 21 + pthread_barrier_t ready_for_fork; 22 22 23 23 static int uffd_mem_fd_create(off_t mem_size, bool hugetlb) 24 24 { ··· 519 519 pollfd[1].fd = pipefd[cpu*2]; 520 520 pollfd[1].events = POLLIN; 521 521 522 - ready_for_fork = true; 522 + /* Ready for parent thread to fork */ 523 + pthread_barrier_wait(&ready_for_fork); 523 524 524 525 for (;;) { 525 526 ret = poll(pollfd, 2, -1);
+1 -2
tools/testing/selftests/mm/uffd-common.h
··· 33 33 #include <inttypes.h> 34 34 #include <stdint.h> 35 35 #include <sys/random.h> 36 - #include <stdatomic.h> 37 36 38 37 #include "../kselftest.h" 39 38 #include "vm_util.h" ··· 104 105 extern bool test_uffdio_wp; 105 106 extern unsigned long long *count_verify; 106 107 extern volatile bool test_uffdio_copy_eexist; 107 - extern atomic_bool ready_for_fork; 108 + extern pthread_barrier_t ready_for_fork; 108 109 109 110 extern uffd_test_ops_t anon_uffd_test_ops; 110 111 extern uffd_test_ops_t shmem_uffd_test_ops;
+15 -6
tools/testing/selftests/mm/uffd-unit-tests.c
··· 241 241 fork_event_args *args = data; 242 242 struct uffd_msg msg = { 0 }; 243 243 244 + /* Ready for parent thread to fork */ 245 + pthread_barrier_wait(&ready_for_fork); 246 + 244 247 /* Read until a full msg received */ 245 248 while (uffd_read_msg(args->parent_uffd, &msg)); 246 249 ··· 311 308 312 309 /* Prepare a thread to resolve EVENT_FORK */ 313 310 if (with_event) { 311 + pthread_barrier_init(&ready_for_fork, NULL, 2); 314 312 if (pthread_create(&thread, NULL, fork_event_consumer, &args)) 315 313 err("pthread_create()"); 314 + /* Wait for child thread to start before forking */ 315 + pthread_barrier_wait(&ready_for_fork); 316 + pthread_barrier_destroy(&ready_for_fork); 316 317 } 317 318 318 319 child = fork(); ··· 781 774 char c; 782 775 struct uffd_args args = { 0 }; 783 776 784 - ready_for_fork = false; 777 + pthread_barrier_init(&ready_for_fork, NULL, 2); 785 778 786 779 fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK); 787 780 ··· 798 791 if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args)) 799 792 err("uffd_poll_thread create"); 800 793 801 - while (!ready_for_fork) 802 - ; /* Wait for the poll_thread to start executing before forking */ 794 + /* Wait for child thread to start before forking */ 795 + pthread_barrier_wait(&ready_for_fork); 796 + pthread_barrier_destroy(&ready_for_fork); 803 797 804 798 pid = fork(); 805 799 if (pid < 0) ··· 841 833 char c; 842 834 struct uffd_args args = { 0 }; 843 835 844 - ready_for_fork = false; 836 + pthread_barrier_init(&ready_for_fork, NULL, 2); 845 837 846 838 fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK); 847 839 if (uffd_register(uffd, area_dst, nr_pages * page_size, ··· 852 844 if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args)) 853 845 err("uffd_poll_thread create"); 854 846 855 - while (!ready_for_fork) 856 - ; /* Wait for the poll_thread to start executing before forking */ 847 + /* Wait for child thread to start before forking */ 848 + pthread_barrier_wait(&ready_for_fork); 849 + pthread_barrier_destroy(&ready_for_fork); 857 850 858 851 pid = fork(); 859 852 if (pid < 0)
+1 -13
virt/kvm/kvm_main.c
··· 3035 3035 } 3036 3036 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic); 3037 3037 3038 - kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn) 3039 - { 3040 - return gfn_to_pfn_memslot_atomic(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn); 3041 - } 3042 - EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_atomic); 3043 - 3044 3038 kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn) 3045 3039 { 3046 3040 return gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn); 3047 3041 } 3048 3042 EXPORT_SYMBOL_GPL(gfn_to_pfn); 3049 - 3050 - kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn) 3051 - { 3052 - return gfn_to_pfn_memslot(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn); 3053 - } 3054 - EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn); 3055 3043 3056 3044 int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn, 3057 3045 struct page **pages, int nr_pages) ··· 6375 6387 6376 6388 WRITE_ONCE(vcpu->scheduled_out, true); 6377 6389 6378 - if (current->on_rq && vcpu->wants_to_run) { 6390 + if (task_is_runnable(current) && vcpu->wants_to_run) { 6379 6391 WRITE_ONCE(vcpu->preempted, true); 6380 6392 WRITE_ONCE(vcpu->ready, true); 6381 6393 }