···
 authorization of the policies (prohibiting an attacker from gaining
 unconstrained root, and deploying an "allow all" policy). These
 policies must be signed by a certificate that chains to the
-``SYSTEM_TRUSTED_KEYRING``. With openssl, the policy can be signed by::
+``SYSTEM_TRUSTED_KEYRING``, or to the secondary and/or platform keyrings if
+``CONFIG_IPE_POLICY_SIG_SECONDARY_KEYRING`` and/or
+``CONFIG_IPE_POLICY_SIG_PLATFORM_KEYRING`` are enabled, respectively.
+With openssl, the policy can be signed by::

    openssl smime -sign \
       -in "$MY_POLICY" \
···
 policy. Two checks will always be performed on this policy: First, the
 ``policy_names`` must match with the updated version and the existing
 version. Second the updated policy must have a policy version greater than
-or equal to the currently-running version. This is to prevent rollback attacks.
+the currently-running version. This is to prevent rollback attacks.

 The ``delete`` file is used to remove a policy that is no longer needed.
 This file is write-only and accepts a value of ``1`` to delete the policy.
+30-8
Documentation/core-api/protection-keys.rst
···
  * Intel server CPUs, Skylake and later
  * Intel client CPUs, Tiger Lake (11th Gen Core) and later
  * Future AMD CPUs
+ * arm64 CPUs implementing the Permission Overlay Extension (FEAT_S1POE)

+x86_64
+======
 Pkeys work by dedicating 4 previously Reserved bits in each page table entry to
 a "protection key", giving 16 possible keys.
···
 theoretically space in the PAE PTEs.  These permissions are enforced on data
 access only and have no effect on instruction fetches.

+arm64
+=====
+
+Pkeys use 3 bits in each page table entry, to encode a "protection key index",
+giving 8 possible keys.
+
+Protections for each key are defined with a per-CPU user-writable system
+register (POR_EL0). This is a 64-bit register encoding read, write and execute
+overlay permissions for each protection key index.
+
+Being a CPU register, POR_EL0 is inherently thread-local, potentially giving
+each thread a different set of protections from every other thread.
+
+Unlike x86_64, the protection key permissions also apply to instruction
+fetches.
+
 Syscalls
 ========
···
         int pkey_mprotect(unsigned long start, size_t len,
                           unsigned long prot, int pkey);

-Before a pkey can be used, it must first be allocated with
-pkey_alloc().  An application calls the WRPKRU instruction
-directly in order to change access permissions to memory covered
-with a key.  In this example WRPKRU is wrapped by a C function
-called pkey_set().
+Before a pkey can be used, it must first be allocated with pkey_alloc(). An
+application writes to the architecture specific CPU register directly in order
+to change access permissions to memory covered with a key. In this example
+this is wrapped by a C function called pkey_set().

 ::

         int real_prot = PROT_READ|PROT_WRITE;
···
         munmap(ptr, PAGE_SIZE);
         pkey_free(pkey);

-.. note:: pkey_set() is a wrapper for the RDPKRU and WRPKRU instructions.
-          An example implementation can be found in
-          tools/testing/selftests/x86/protection_keys.c.
+.. note:: pkey_set() is a wrapper around writing to the CPU register.
+          Example implementations can be found in
+          tools/testing/selftests/mm/pkey-{arm64,powerpc,x86}.h

 Behavior
 ========
···
 The kernel will send a SIGSEGV in both cases, but si_code will be set
 to SEGV_PKERR when violating protection keys versus SEGV_ACCERR when
 the plain mprotect() permissions are violated.
+
+Note that kernel accesses from a kthread (such as io_uring) will use a default
+value for the protection key register and so will not be consistent with
+userspace's value of the register or mprotect().
···
 $id: http://devicetree.org/schemas/iio/dac/adi,ad5696.yaml#
 $schema: http://devicetree.org/meta-schemas/core.yaml#

-title: Analog Devices AD5696 and similar multi-channel DACs
+title: Analog Devices AD5696 and similar I2C multi-channel DACs

 maintainers:
   - Michael Auchter <michael.auchter@ni.com>
···
   compatible:
     enum:
       - adi,ad5311r
+      - adi,ad5337r
       - adi,ad5338r
       - adi,ad5671r
       - adi,ad5675r
+1-1
Documentation/filesystems/iomap/operations.rst
···
 such `reservations
 <https://lore.kernel.org/linux-xfs/20220817093627.GZ3600936@dread.disaster.area/>`_
 because writeback will not consume the reservation.
-The ``iomap_file_buffered_write_punch_delalloc`` can be called from a
+The ``iomap_write_delalloc_release`` can be called from a
 ``->iomap_end`` function to find all the clean areas of the folios
 caching a fresh (``IOMAP_F_NEW``) delalloc mapping.
 It takes the ``invalidate_lock``.
···
 section of 'MAINTAINERS' file.

 The mailing lists for the subsystem are damon@lists.linux.dev and
-linux-mm@kvack.org. Patches should be made against the mm-unstable `tree
-<https://git.kernel.org/akpm/mm/h/mm-unstable>` whenever possible and posted to
-the mailing lists.
+linux-mm@kvack.org. Patches should be made against the `mm-unstable tree
+<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ whenever possible and posted
+to the mailing lists.

 SCM Trees
 ---------

 There are multiple Linux trees for DAMON development.  Patches under
 development or testing are queued in `damon/next
-<https://git.kernel.org/sj/h/damon/next>` by the DAMON maintainer.
+<https://git.kernel.org/sj/h/damon/next>`_ by the DAMON maintainer.
 Sufficiently reviewed patches will be queued in `mm-unstable
-<https://git.kernel.org/akpm/mm/h/mm-unstable>` by the memory management
+<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ by the memory management
 subsystem maintainer.  After more sufficient tests, the patches will be queued
-in `mm-stable <https://git.kernel.org/akpm/mm/h/mm-stable>` , and finally
+in `mm-stable <https://git.kernel.org/akpm/mm/h/mm-stable>`_, and finally
 pull-requested to the mainline by the memory management subsystem maintainer.

-Note again the patches for mm-unstable `tree
-<https://git.kernel.org/akpm/mm/h/mm-unstable>` are queued by the memory
+Note again the patches for `mm-unstable tree
+<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ are queued by the memory
 management subsystem maintainer.  If the patches requires some patches in
-damon/next `tree <https://git.kernel.org/sj/h/damon/next>` which not yet merged
+`damon/next tree <https://git.kernel.org/sj/h/damon/next>`_ which not yet merged
 in mm-unstable, please make sure the requirement is clearly specified.

 Submit checklist addendum
···
 - Build changes related outputs including kernel and documents.
 - Ensure the builds introduce no new errors or warnings.
 - Run and ensure no new failures for DAMON `selftests
-  <https://github.com/awslabs/damon-tests/blob/master/corr/run.sh#L49>` and
+  <https://github.com/damonitor/damon-tests/blob/master/corr/run.sh#L49>`_ and
   `kunittests
-  <https://github.com/awslabs/damon-tests/blob/master/corr/tests/kunit.sh>`.
+  <https://github.com/damonitor/damon-tests/blob/master/corr/tests/kunit.sh>`_.

 Further doing below and putting the results will be helpful.

 - Run `damon-tests/corr
-  <https://github.com/awslabs/damon-tests/tree/master/corr>` for normal
+  <https://github.com/damonitor/damon-tests/tree/master/corr>`_ for normal
   changes.
 - Run `damon-tests/perf
-  <https://github.com/awslabs/damon-tests/tree/master/perf>` for performance
+  <https://github.com/damonitor/damon-tests/tree/master/perf>`_ for performance
   changes.

 Key cycle dates
 ---------------

 Patches can be sent anytime. Key cycle dates of the `mm-unstable
-<https://git.kernel.org/akpm/mm/h/mm-unstable>` and `mm-stable
-<https://git.kernel.org/akpm/mm/h/mm-stable>` trees depend on the memory
+<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ and `mm-stable
+<https://git.kernel.org/akpm/mm/h/mm-stable>`_ trees depend on the memory
 management subsystem maintainer.

 Review cadence
···
 Like many other Linux kernel subsystems, DAMON uses the mailing lists
 (damon@lists.linux.dev and linux-mm@kvack.org) as the major communication
 channel.  There is a simple tool called `HacKerMaiL
-<https://github.com/damonitor/hackermail>` (``hkml``), which is for people who
+<https://github.com/damonitor/hackermail>`_ (``hkml``), which is for people who
 are not very familiar with the mailing lists based communication.  The tool
 could be particularly helpful for DAMON community members since it is developed
 and maintained by DAMON maintainer.  The tool is also officially announced to
 support DAMON and general Linux kernel development workflow.

-In other words, `hkml <https://github.com/damonitor/hackermail>` is a mailing
+In other words, `hkml <https://github.com/damonitor/hackermail>`_ is a mailing
 tool for DAMON community, which DAMON maintainer is committed to support.
 Please feel free to try and report issues or feature requests for the tool to
 the maintainer.
···
 time slot, by reaching out to the maintainer.

 Schedules and available reservation time slots are available at the Google `doc
-<https://docs.google.com/document/d/1v43Kcj3ly4CYqmAkMaZzLiM2GEnWfgdGbZAH3mi2vpM/edit?usp=sharing>`.
+<https://docs.google.com/document/d/1v43Kcj3ly4CYqmAkMaZzLiM2GEnWfgdGbZAH3mi2vpM/edit?usp=sharing>`_.
 There is also a public Google `calendar
-<https://calendar.google.com/calendar/u/0?cid=ZDIwOTA4YTMxNjc2MDQ3NTIyMmUzYTM5ZmQyM2U4NDA0ZGIwZjBiYmJlZGQxNDM0MmY4ZTRjOTE0NjdhZDRiY0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t>`
+<https://calendar.google.com/calendar/u/0?cid=ZDIwOTA4YTMxNjc2MDQ3NTIyMmUzYTM5ZmQyM2U4NDA0ZGIwZjBiYmJlZGQxNDM0MmY4ZTRjOTE0NjdhZDRiY0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t>`_
 that has the events. Anyone can subscribe it. DAMON maintainer will also
 provide periodic reminder to the mailing list (damon@lists.linux.dev).
+37-5
Documentation/process/maintainer-soc.rst
···
 The main SoC tree is housed on git.kernel.org:
   https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc.git/

+Maintainers
+-----------
+
 Clearly this is quite a wide range of topics, which no one person, or even
 small group of people are capable of maintaining.  Instead, the SoC subsystem
-is comprised of many submaintainers, each taking care of individual platforms
-and driver subdirectories.
+is comprised of many submaintainers (platform maintainers), each taking care of
+individual platforms and driver subdirectories.
 In this regard, "platform" usually refers to a series of SoCs from a given
 vendor, for example, Nvidia's series of Tegra SoCs. Many submaintainers operate
 on a vendor level, responsible for multiple product lines. For several reasons,
···
 Most of these submaintainers have their own trees where they stage patches,
 sending pull requests to the main SoC tree. These trees are usually, but not
-always, listed in MAINTAINERS. The main SoC maintainers can be reached via the
-alias soc@kernel.org if there is no platform-specific maintainer, or if they
-are unresponsive.
+always, listed in MAINTAINERS.

 What the SoC tree is not, however, is a location for architecture-specific code
 changes. Each architecture has its own maintainers that are responsible for
 architectural details, CPU errata and the like.
+
+Submitting Patches for Given SoC
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+All typical platform related patches should be sent via SoC submaintainers
+(platform-specific maintainers).  This includes also changes to per-platform or
+shared defconfigs (scripts/get_maintainer.pl might not provide correct
+addresses in such case).
+
+Submitting Patches to the Main SoC Maintainers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The main SoC maintainers can be reached via the alias soc@kernel.org only in
+following cases:
+
+1. There are no platform-specific maintainers.
+
+2. Platform-specific maintainers are unresponsive.
+
+3. Introducing a completely new SoC platform. Such new SoC work should be sent
+   first to common mailing lists, pointed out by scripts/get_maintainer.pl, for
+   community review. After positive community review, work should be sent to
+   soc@kernel.org in one patchset containing new arch/foo/Kconfig entry, DTS
+   files, MAINTAINERS file entry and optionally initial drivers with their
+   Devicetree bindings. The MAINTAINERS file entry should list new
+   platform-specific maintainers, who are going to be responsible for handling
+   patches for the platform from now on.
+
+Note that the soc@kernel.org is usually not the place to discuss the patches,
+thus work sent to this address should be already considered as acceptable by
+the community.

 Information for (new) Submaintainers
 ------------------------------------
+9-7
Documentation/virt/kvm/api.rst
···
                                     KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT is
                                     disabled.

-KVM_X86_QUIRK_SLOT_ZAP_ALL          By default, KVM invalidates all SPTEs in
-                                    fast way for memslot deletion when VM type
-                                    is KVM_X86_DEFAULT_VM.
-                                    When this quirk is disabled or when VM type
-                                    is other than KVM_X86_DEFAULT_VM, KVM zaps
-                                    only leaf SPTEs that are within the range of
-                                    the memslot being deleted.
+KVM_X86_QUIRK_SLOT_ZAP_ALL          By default, for KVM_X86_DEFAULT_VM VMs, KVM
+                                    invalidates all SPTEs in all memslots and
+                                    address spaces when a memslot is deleted or
+                                    moved.  When this quirk is disabled (or the
+                                    VM type isn't KVM_X86_DEFAULT_VM), KVM only
+                                    ensures the backing memory of the deleted
+                                    or moved memslot isn't reachable, i.e KVM
+                                    _may_ invalidate only SPTEs related to the
+                                    memslot.
 =================================== ============================================

 7.32 KVM_CAP_MAX_VCPU_ID
+1-1
Documentation/virt/kvm/locking.rst
···
 to gfn. For indirect sp, we disabled fast page fault for simplicity.

 A solution for indirect sp could be to pin the gfn, for example via
-kvm_vcpu_gfn_to_pfn_atomic, before the cmpxchg. After the pinning:
+gfn_to_pfn_memslot_atomic, before the cmpxchg. After the pinning:

 - We have held the refcount of pfn; that means the pfn can not be freed and
   be reused for another gfn.
+33-185
MAINTAINERS
···
 S:	Maintained
 F:	drivers/net/ethernet/alteon/acenic*

-ACER ASPIRE 1 EMBEDDED CONTROLLER DRIVER
-M:	Nikita Travkin <nikita@trvn.ru>
-S:	Maintained
-F:	Documentation/devicetree/bindings/platform/acer,aspire1-ec.yaml
-F:	drivers/platform/arm64/acer-aspire1-ec.c
-
 ACER ASPIRE ONE TEMPERATURE AND FAN DRIVER
 M:	Peter Kaestle <peter@piie.net>
 L:	platform-driver-x86@vger.kernel.org
···
 ALPHA PORT
 M:	Richard Henderson <richard.henderson@linaro.org>
-M:	Ivan Kokshaysky <ink@jurassic.park.msu.ru>
 M:	Matt Turner <mattst88@gmail.com>
 L:	linux-alpha@vger.kernel.org
 S:	Odd Fixes
···
 ARM AND ARM64 SoC SUB-ARCHITECTURES (COMMON PARTS)
 M:	Arnd Bergmann <arnd@arndb.de>
 M:	Olof Johansson <olof@lixom.net>
-M:	soc@kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+L:	soc@lists.linux.dev
 S:	Maintained
 P:	Documentation/process/maintainer-soc.rst
 C:	irc://irc.libera.chat/armlinux
···
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	arch/arm/mach-ep93xx/ts72xx.c
-
-ARM/CIRRUS LOGIC CLPS711X ARM ARCHITECTURE
-M:	Alexander Shiyan <shc_work@mail.ru>
-L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-S:	Odd Fixes
-N:	clps711x

 ARM/CIRRUS LOGIC EP93XX ARM ARCHITECTURE
 M:	Hartley Sweeten <hsweeten@visionengravers.com>
···
 F:	drivers/video/backlight/
 F:	include/linux/backlight.h
 F:	include/linux/pwm_backlight.h
-
-BAIKAL-T1 PVT HARDWARE MONITOR DRIVER
-M:	Serge Semin <fancer.lancer@gmail.com>
-L:	linux-hwmon@vger.kernel.org
-S:	Supported
-F:	Documentation/devicetree/bindings/hwmon/baikal,bt1-pvt.yaml
-F:	Documentation/hwmon/bt1-pvt.rst
-F:	drivers/hwmon/bt1-pvt.[ch]

 BARCO P50 GPIO DRIVER
 M:	Santosh Kumar Yadav <santoshkumar.yadav@barco.com>
···
 DESIGNWARE EDMA CORE IP DRIVER
 M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
-R:	Serge Semin <fancer.lancer@gmail.com>
 L:	dmaengine@vger.kernel.org
 S:	Maintained
 F:	drivers/dma/dw-edma/
···
 F:	include/uapi/linux/gpio.h
 F:	tools/gpio/

-GRE DEMULTIPLEXER DRIVER
-M:	Dmitry Kozlov <xeb@mail.ru>
-L:	netdev@vger.kernel.org
-S:	Maintained
-F:	include/net/gre.h
-F:	net/ipv4/gre_demux.c
-F:	net/ipv4/gre_offload.c
-
 GRETH 10/100/1G Ethernet MAC device driver
 M:	Andreas Larsson <andreas@gaisler.com>
 L:	netdev@vger.kernel.org
···
 F:	security/integrity/ima/

 INTEGRITY POLICY ENFORCEMENT (IPE)
-M:	Fan Wu <wufan@linux.microsoft.com>
+M:	Fan Wu <wufan@kernel.org>
 L:	linux-security-module@vger.kernel.org
 S:	Supported
-T:	git https://github.com/microsoft/ipe.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/wufan/ipe.git
 F:	Documentation/admin-guide/LSM/ipe.rst
 F:	Documentation/security/ipe.rst
 F:	scripts/ipe/
···
 F:	drivers/crypto/intel/keembay/keembay-ocs-hcu-core.c
 F:	drivers/crypto/intel/keembay/ocs-hcu.c
 F:	drivers/crypto/intel/keembay/ocs-hcu.h
+
+INTEL LA JOLLA COVE ADAPTER (LJCA) USB I/O EXPANDER DRIVERS
+M:	Wentong Wu <wentong.wu@intel.com>
+M:	Sakari Ailus <sakari.ailus@linux.intel.com>
+S:	Maintained
+F:	drivers/gpio/gpio-ljca.c
+F:	drivers/i2c/busses/i2c-ljca.c
+F:	drivers/spi/spi-ljca.c
+F:	drivers/usb/misc/usb-ljca.c
+F:	include/linux/usb/ljca.h

 INTEL MANAGEMENT ENGINE (mei)
 M:	Tomas Winkler <tomas.winkler@intel.com>
···
 R:	Vincenzo Frascino <vincenzo.frascino@arm.com>
 L:	kasan-dev@googlegroups.com
 S:	Maintained
+B:	https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
 F:	Documentation/dev-tools/kasan.rst
 F:	arch/*/include/asm/*kasan.h
 F:	arch/*/mm/kasan_init*
···
 R:	Andrey Konovalov <andreyknvl@gmail.com>
 L:	kasan-dev@googlegroups.com
 S:	Maintained
+B:	https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
 F:	Documentation/dev-tools/kcov.rst
 F:	include/linux/kcov.h
 F:	include/uapi/linux/kcov.h
···
 F:	drivers/ata/pata_arasan_cf.c
 F:	include/linux/pata_arasan_cf_data.h

-LIBATA PATA DRIVERS
-R:	Sergey Shtylyov <s.shtylyov@omp.ru>
-L:	linux-ide@vger.kernel.org
-F:	drivers/ata/ata_*.c
-F:	drivers/ata/pata_*.c
-
 LIBATA PATA FARADAY FTIDE010 AND GEMINI SATA BRIDGE DRIVERS
 M:	Linus Walleij <linus.walleij@linaro.org>
 L:	linux-ide@vger.kernel.org
···
 F:	drivers/ata/ahci_platform.c
 F:	drivers/ata/libahci_platform.c
 F:	include/linux/ahci_platform.h
-
-LIBATA SATA AHCI SYNOPSYS DWC CONTROLLER DRIVER
-M:	Serge Semin <fancer.lancer@gmail.com>
-L:	linux-ide@vger.kernel.org
-S:	Maintained
-F:	Documentation/devicetree/bindings/ata/baikal,bt1-ahci.yaml
-F:	Documentation/devicetree/bindings/ata/snps,dwc-ahci.yaml
-F:	drivers/ata/ahci_dwc.c

 LIBATA SATA PROMISE TX2/TX4 CONTROLLER DRIVER
 M:	Mikael Pettersson <mikpelinux@gmail.com>
···
 T:	git git://linuxtv.org/media_tree.git
 F:	drivers/media/platform/nxp/imx-pxp.[ch]

-MEDIA DRIVERS FOR ASCOT2E
-M:	Sergey Kozlov <serjk@netup.ru>
-M:	Abylay Ospan <aospan@netup.ru>
-L:	linux-media@vger.kernel.org
-S:	Supported
-W:	https://linuxtv.org
-W:	http://netup.tv/
-T:	git git://linuxtv.org/media_tree.git
-F:	drivers/media/dvb-frontends/ascot2e*
-
 MEDIA DRIVERS FOR CXD2099AR CI CONTROLLERS
 M:	Jasmin Jessich <jasmin@anw.at>
 L:	linux-media@vger.kernel.org
···
 W:	https://linuxtv.org
 T:	git git://linuxtv.org/media_tree.git
 F:	drivers/media/dvb-frontends/cxd2099*
-
-MEDIA DRIVERS FOR CXD2841ER
-M:	Sergey Kozlov <serjk@netup.ru>
-M:	Abylay Ospan <aospan@netup.ru>
-L:	linux-media@vger.kernel.org
-S:	Supported
-W:	https://linuxtv.org
-W:	http://netup.tv/
-T:	git git://linuxtv.org/media_tree.git
-F:	drivers/media/dvb-frontends/cxd2841er*

 MEDIA DRIVERS FOR CXD2880
 M:	Yasunari Takiguchi <Yasunari.Takiguchi@sony.com>
···
 F:	drivers/media/platform/nxp/imx7-media-csi.c
 F:	drivers/media/platform/nxp/imx8mq-mipi-csi2.c

-MEDIA DRIVERS FOR HELENE
-M:	Abylay Ospan <aospan@netup.ru>
-L:	linux-media@vger.kernel.org
-S:	Supported
-W:	https://linuxtv.org
-W:	http://netup.tv/
-T:	git git://linuxtv.org/media_tree.git
-F:	drivers/media/dvb-frontends/helene*
-
-MEDIA DRIVERS FOR HORUS3A
-M:	Sergey Kozlov <serjk@netup.ru>
-M:	Abylay Ospan <aospan@netup.ru>
-L:	linux-media@vger.kernel.org
-S:	Supported
-W:	https://linuxtv.org
-W:	http://netup.tv/
-T:	git git://linuxtv.org/media_tree.git
-F:	drivers/media/dvb-frontends/horus3a*
-
-MEDIA DRIVERS FOR LNBH25
-M:	Sergey Kozlov <serjk@netup.ru>
-M:	Abylay Ospan <aospan@netup.ru>
-L:	linux-media@vger.kernel.org
-S:	Supported
-W:	https://linuxtv.org
-W:	http://netup.tv/
-T:	git git://linuxtv.org/media_tree.git
-F:	drivers/media/dvb-frontends/lnbh25*
-
 MEDIA DRIVERS FOR MXL5XX TUNER DEMODULATORS
 L:	linux-media@vger.kernel.org
 S:	Orphan
 W:	https://linuxtv.org
 T:	git git://linuxtv.org/media_tree.git
 F:	drivers/media/dvb-frontends/mxl5xx*
-
-MEDIA DRIVERS FOR NETUP PCI UNIVERSAL DVB devices
-M:	Sergey Kozlov <serjk@netup.ru>
-M:	Abylay Ospan <aospan@netup.ru>
-L:	linux-media@vger.kernel.org
-S:	Supported
-W:	https://linuxtv.org
-W:	http://netup.tv/
-T:	git git://linuxtv.org/media_tree.git
-F:	drivers/media/pci/netup_unidvb/*

 MEDIA DRIVERS FOR NVIDIA TEGRA - VDE
 M:	Dmitry Osipenko <digetx@gmail.com>
···
 MEMORY MAPPING
 M:	Andrew Morton <akpm@linux-foundation.org>
-R:	Liam R. Howlett <Liam.Howlett@oracle.com>
+M:	Liam R. Howlett <Liam.Howlett@oracle.com>
+M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
 R:	Vlastimil Babka <vbabka@suse.cz>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Jann Horn <jannh@google.com>
 L:	linux-mm@kvack.org
 S:	Maintained
 W:	http://www.linux-mm.org
···
 F:	drivers/mtd/
 F:	include/linux/mtd/
 F:	include/uapi/mtd/
-
-MEMSENSING MICROSYSTEMS MSA311 DRIVER
-M:	Dmitry Rokosov <ddrokosov@sberdevices.ru>
-L:	linux-iio@vger.kernel.org
-S:	Maintained
-F:	Documentation/devicetree/bindings/iio/accel/memsensing,msa311.yaml
-F:	drivers/iio/accel/msa311.c

 MEN A21 WATCHDOG DRIVER
 M:	Johannes Thumshirn <morbidrsa@gmail.com>
···
 MICROCHIP POLARFIRE FPGA DRIVERS
 M:	Conor Dooley <conor.dooley@microchip.com>
-R:	Vladimir Georgiev <v.georgiev@metrotek.ru>
 L:	linux-fpga@vger.kernel.org
 S:	Supported
 F:	Documentation/devicetree/bindings/fpga/microchip,mpf-spi-fpga-mgr.yaml
···
 F:	drivers/platform/mips/
 F:	include/dt-bindings/mips/

-MIPS BAIKAL-T1 PLATFORM
-M:	Serge Semin <fancer.lancer@gmail.com>
-L:	linux-mips@vger.kernel.org
-S:	Supported
-F:	Documentation/devicetree/bindings/bus/baikal,bt1-*.yaml
-F:	Documentation/devicetree/bindings/clock/baikal,bt1-*.yaml
-F:	drivers/bus/bt1-*.c
-F:	drivers/clk/baikal-t1/
-F:	drivers/memory/bt1-l2-ctl.c
-F:	drivers/mtd/maps/physmap-bt1-rom.[ch]
-
 MIPS BOSTON DEVELOPMENT BOARD
 M:	Paul Burton <paulburton@kernel.org>
 L:	linux-mips@vger.kernel.org
···

 MIPS CORE DRIVERS
 M:	Thomas Bogendoerfer <tsbogend@alpha.franken.de>
-M:	Serge Semin <fancer.lancer@gmail.com>
 L:	linux-mips@vger.kernel.org
 S:	Supported
 F:	drivers/bus/mips_cdmm.c
···
 M:	Eric Dumazet <edumazet@google.com>
 M:	Jakub Kicinski <kuba@kernel.org>
 M:	Paolo Abeni <pabeni@redhat.com>
+R:	Simon Horman <horms@kernel.org>
 L:	netdev@vger.kernel.org
 S:	Maintained
 P:	Documentation/process/maintainer-netdev.rst
···
 F:	lib/net_utils.c
 F:	lib/random32.c
 F:	net/
+F:	samples/pktgen/
 F:	tools/net/
 F:	tools/testing/selftests/net/
 X:	Documentation/networking/mac80211-injection.rst
···
 F:	include/linux/ntb.h
 F:	include/linux/ntb_transport.h
 F:	tools/testing/selftests/ntb/
-
-NTB IDT DRIVER
-M:	Serge Semin <fancer.lancer@gmail.com>
-L:	ntb@lists.linux.dev
-S:	Supported
-F:	drivers/ntb/hw/idt/

 NTB INTEL DRIVER
 M:	Dave Jiang <dave.jiang@intel.com>
···
 F:	include/linux/pps*.h
 F:	include/uapi/linux/pps.h

-PPTP DRIVER
-M:	Dmitry Kozlov <xeb@mail.ru>
-L:	netdev@vger.kernel.org
-S:	Maintained
-W:	http://sourceforge.net/projects/accel-pptp
-F:	drivers/net/ppp/pptp.c
-
 PRESSURE STALL INFORMATION (PSI)
 M:	Johannes Weiner <hannes@cmpxchg.org>
 M:	Suren Baghdasaryan <surenb@google.com>
···
 F:	Documentation/tools/rtla/
 F:	tools/tracing/rtla/

+Real-time Linux (PREEMPT_RT)
+M:	Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+M:	Clark Williams <clrkwllms@kernel.org>
+M:	Steven Rostedt <rostedt@goodmis.org>
+L:	linux-rt-devel@lists.linux.dev
+S:	Supported
+K:	PREEMPT_RT
+
 REALTEK AUDIO CODECS
 M:	Oder Chiou <oder_chiou@realtek.com>
 S:	Maintained
···
 F:	Documentation/devicetree/bindings/i2c/renesas,iic-emev2.yaml
 F:	drivers/i2c/busses/i2c-emev2.c

-RENESAS ETHERNET AVB DRIVER
-R:	Sergey Shtylyov <s.shtylyov@omp.ru>
-L:	netdev@vger.kernel.org
-L:	linux-renesas-soc@vger.kernel.org
-F:	Documentation/devicetree/bindings/net/renesas,etheravb.yaml
-F:	drivers/net/ethernet/renesas/Kconfig
-F:	drivers/net/ethernet/renesas/Makefile
-F:	drivers/net/ethernet/renesas/ravb*
-
 RENESAS ETHERNET SWITCH DRIVER
 R:	Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
 L:	netdev@vger.kernel.org
···
 F:	Documentation/devicetree/bindings/i2c/renesas,rmobile-iic.yaml
 F:	drivers/i2c/busses/i2c-rcar.c
 F:	drivers/i2c/busses/i2c-sh_mobile.c
-
-RENESAS R-CAR SATA DRIVER
-R:	Sergey Shtylyov <s.shtylyov@omp.ru>
-L:	linux-ide@vger.kernel.org
-L:	linux-renesas-soc@vger.kernel.org
-S:	Supported
-F:	Documentation/devicetree/bindings/ata/renesas,rcar-sata.yaml
-F:	drivers/ata/sata_rcar.c

 RENESAS R-CAR THERMAL DRIVERS
 M:	Niklas Söderlund <niklas.soderlund@ragnatech.se>
···
 S:	Supported
 F:	Documentation/devicetree/bindings/i2c/renesas,rzv2m.yaml
 F:	drivers/i2c/busses/i2c-rzv2m.c
-
-RENESAS SUPERH ETHERNET DRIVER
-R:	Sergey Shtylyov <s.shtylyov@omp.ru>
-L:	netdev@vger.kernel.org
-L:	linux-renesas-soc@vger.kernel.org
-F:	Documentation/devicetree/bindings/net/renesas,ether.yaml
-F:	drivers/net/ethernet/renesas/Kconfig
-F:	drivers/net/ethernet/renesas/Makefile
-F:	drivers/net/ethernet/renesas/sh_eth*
-F:	include/linux/sh_eth.h

 RENESAS USB PHY DRIVER
 M:	Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
···
 SPEAR PLATFORM/CLOCK/PINCTRL SUPPORT
 M:	Viresh Kumar <vireshk@kernel.org>
 M:	Shiraz Hashim <shiraz.linux.kernel@gmail.com>
-M:	soc@kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+L:	soc@lists.linux.dev
 S:	Maintained
 W:	http://www.st.com/spear
 F:	arch/arm/boot/dts/st/spear*
···

 SYNOPSYS DESIGNWARE APB GPIO DRIVER
 M:	Hoan Tran <hoan@os.amperecomputing.com>
-M:	Serge Semin <fancer.lancer@gmail.com>
 L:	linux-gpio@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/gpio/snps,dw-apb-gpio.yaml
 F:	drivers/gpio/gpio-dwapb.c
-
-SYNOPSYS DESIGNWARE APB SSI DRIVER
-M:	Serge Semin <fancer.lancer@gmail.com>
-L:	linux-spi@vger.kernel.org
-S:	Supported
-F:	Documentation/devicetree/bindings/spi/snps,dw-apb-ssi.yaml
-F:	drivers/spi/spi-dw*

 SYNOPSYS DESIGNWARE AXI DMAC DRIVER
 M:	Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
···
 S:	Maintained
 F:	drivers/hid/hid-udraw-ps3.c

-UFS FILESYSTEM
-M:	Evgeniy Dushistov <dushistov@mail.ru>
-S:	Maintained
-F:	Documentation/admin-guide/ufs.rst
-F:	fs/ufs/
-
 UHID USERSPACE HID IO DRIVER
 M:	David Rheinsberg <david@readahead.eu>
 L:	linux-input@vger.kernel.org
···
 R:	Andrey Konovalov <andreyknvl@gmail.com>
 L:	linux-usb@vger.kernel.org
 S:	Maintained
+B:	https://github.com/xairy/raw-gadget/issues
 F:	Documentation/usb/raw-gadget.rst
 F:	drivers/usb/gadget/legacy/raw_gadget.c
 F:	include/uapi/linux/usb/raw_gadget.h
···

 VMA
 M:	Andrew Morton <akpm@linux-foundation.org>
-R:	Liam R. Howlett <Liam.Howlett@oracle.com>
+M:	Liam R. Howlett <Liam.Howlett@oracle.com>
+M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
 R:	Vlastimil Babka <vbabka@suse.cz>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Jann Horn <jannh@google.com>
 L:	linux-mm@kvack.org
 S:	Maintained
 W:	https://www.linux-mm.org
···
 config CFI_ICALL_NORMALIZE_INTEGERS
 	bool "Normalize CFI tags for integers"
 	depends on CFI_CLANG
-	depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS
+	depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS_CLANG
 	help
 	  This option normalizes the CFI tags for integer types so that all
 	  integer types of the same size and signedness receive the same CFI
···
 	  This option is necessary for using CFI with Rust. If unsure, say N.

-config HAVE_CFI_ICALL_NORMALIZE_INTEGERS
-	def_bool !GCOV_KERNEL && !KASAN
-	depends on CFI_CLANG
+config HAVE_CFI_ICALL_NORMALIZE_INTEGERS_CLANG
+	def_bool y
 	depends on $(cc-option,-fsanitize=kcfi -fsanitize-cfi-icall-experimental-normalize-integers)
-	help
-	  Is CFI_ICALL_NORMALIZE_INTEGERS supported with the set of compilers
-	  currently in use?
+	# With GCOV/KASAN we need this fix: https://github.com/llvm/llvm-project/pull/104826
+	depends on CLANG_VERSION >= 190000 || (!GCOV_KERNEL && !KASAN_GENERIC && !KASAN_SW_TAGS)

-	  This option defaults to false if GCOV or KASAN is enabled, as there is
-	  an LLVM bug that makes normalized integers tags incompatible with
-	  KASAN and GCOV. Kconfig currently does not have the infrastructure to
-	  detect whether your rustc compiler contains the fix for this bug, so
-	  it is assumed that it doesn't. If your compiler has the fix, you can
-	  explicitly enable this option in your config file. The Kconfig logic
-	  needed to detect this will be added in a future kernel release.
+config HAVE_CFI_ICALL_NORMALIZE_INTEGERS_RUSTC
+	def_bool y
+	depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS_CLANG
+	depends on RUSTC_VERSION >= 107900
+	# With GCOV/KASAN we need this fix: https://github.com/rust-lang/rust/pull/129373
+	depends on (RUSTC_LLVM_VERSION >= 190000 && RUSTC_VERSION >= 108200) || \
+		   (!GCOV_KERNEL && !KASAN_GENERIC && !KASAN_SW_TAGS)

 config CFI_PERMISSIVE
 	bool "Use CFI in permissive mode"
···
 #define KVM_REQ_RELOAD_PMU	KVM_ARCH_REQ(5)
 #define KVM_REQ_SUSPEND		KVM_ARCH_REQ(6)
 #define KVM_REQ_RESYNC_PMU_EL0	KVM_ARCH_REQ(7)
+#define KVM_REQ_NESTED_S2_UNMAP	KVM_ARCH_REQ(8)
 
 #define KVM_DIRTY_LOG_MANUAL_CAPS   (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
 				     KVM_DIRTY_LOG_INITIALLY_SET)
···
 	 * HCR_EL2.VM == 1
 	 */
 	bool	nested_stage2_enabled;
+
+	/*
+	 * true when this MMU needs to be unmapped before being used for a new
+	 * purpose.
+	 */
+	bool	pending_unmap;
 
 	/*
 	 *  0: Nobody is currently using this, check vttbr for validity
···
 static int check_vcpu_requests(struct kvm_vcpu *vcpu)
 {
 	if (kvm_request_pending(vcpu)) {
+		if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu))
+			return -EIO;
+
 		if (kvm_check_request(KVM_REQ_SLEEP, vcpu))
 			kvm_vcpu_sleep(vcpu);
···
 
 		if (kvm_dirty_ring_check_request(vcpu))
 			return 0;
+
+		check_nested_vcpu_requests(vcpu);
 	}
 
 	return 1;
···
 		 * to the guest, and hide SSBS so that the
 		 * guest stays protected.
 		 */
-		if (cpus_have_final_cap(ARM64_SSBS))
+		if (kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SSBS, IMP))
 			break;
 		fallthrough;
 	case SPECTRE_UNAFFECTED:
···
 * Convert the workaround level into an easy-to-compare number, where higher
 * values mean better protection.
 */
-static int get_kernel_wa_level(u64 regid)
+static int get_kernel_wa_level(struct kvm_vcpu *vcpu, u64 regid)
 {
 	switch (regid) {
 	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
···
 		 * don't have any FW mitigation if SSBS is there at
 		 * all times.
 		 */
-		if (cpus_have_final_cap(ARM64_SSBS))
+		if (kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SSBS, IMP))
 			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL;
 		fallthrough;
 	case SPECTRE_UNAFFECTED:
···
 	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
 	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
 	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3:
-		val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
+		val = get_kernel_wa_level(vcpu, reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
 		break;
 	case KVM_REG_ARM_STD_BMAP:
 		val = READ_ONCE(smccc_feat->std_bmap);
···
 	if (val & ~KVM_REG_FEATURE_LEVEL_MASK)
 		return -EINVAL;
 
-	if (get_kernel_wa_level(reg->id) < val)
+	if (get_kernel_wa_level(vcpu, reg->id) < val)
 		return -EINVAL;
 
 	return 0;
···
 	 * We can deal with NOT_AVAIL on NOT_REQUIRED, but not the
 	 * other way around.
 	 */
-	if (get_kernel_wa_level(reg->id) < wa_level)
+	if (get_kernel_wa_level(vcpu, reg->id) < wa_level)
 		return -EINVAL;
 
 	return 0;
···
 	/* Set the scene for the next search */
 	kvm->arch.nested_mmus_next = (i + 1) % kvm->arch.nested_mmus_size;
 
-	/* Clear the old state */
+	/* Make sure we don't forget to do the laundry */
 	if (kvm_s2_mmu_valid(s2_mmu))
-		kvm_stage2_unmap_range(s2_mmu, 0, kvm_phys_size(s2_mmu));
+		s2_mmu->pending_unmap = true;
 
 	/*
 	 * The virtual VMID (modulo CnP) will be used as a key when matching
···
 
 out:
 	atomic_inc(&s2_mmu->refcnt);
+
+	/*
+	 * Set the vCPU request to perform an unmap, even if the pending unmap
+	 * originates from another vCPU. This guarantees that the MMU has been
+	 * completely unmapped before any vCPU actually uses it, and allows
+	 * multiple vCPUs to lend a hand with completing the unmap.
+	 */
+	if (s2_mmu->pending_unmap)
+		kvm_make_request(KVM_REQ_NESTED_S2_UNMAP, vcpu);
+
 	return s2_mmu;
 }
···
 
 void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
 {
+	/*
+	 * The vCPU kept its reference on the MMU after the last put, keep
+	 * rolling with it.
+	 */
+	if (vcpu->arch.hw_mmu)
+		return;
+
 	if (is_hyp_ctxt(vcpu)) {
 		vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
 	} else {
···
 
 void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu)
 {
-	if (kvm_is_nested_s2_mmu(vcpu->kvm, vcpu->arch.hw_mmu)) {
+	/*
+	 * Keep a reference on the associated stage-2 MMU if the vCPU is
+	 * scheduling out and not in WFI emulation, suggesting it is likely to
+	 * reuse the MMU sometime soon.
+	 */
+	if (vcpu->scheduled_out && !vcpu_get_flag(vcpu, IN_WFI))
+		return;
+
+	if (kvm_is_nested_s2_mmu(vcpu->kvm, vcpu->arch.hw_mmu))
 		atomic_dec(&vcpu->arch.hw_mmu->refcnt);
-		vcpu->arch.hw_mmu = NULL;
-	}
+
+	vcpu->arch.hw_mmu = NULL;
 }
 
 /*
···
 	}
 }
 
-void kvm_nested_s2_unmap(struct kvm *kvm)
+void kvm_nested_s2_unmap(struct kvm *kvm, bool may_block)
 {
 	int i;
···
 		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
 
 		if (kvm_s2_mmu_valid(mmu))
-			kvm_stage2_unmap_range(mmu, 0, kvm_phys_size(mmu));
+			kvm_stage2_unmap_range(mmu, 0, kvm_phys_size(mmu), may_block);
 	}
 }
···
 	set_sysreg_masks(kvm, SCTLR_EL1, res0, res1);
 
 	return 0;
+}
+
+void check_nested_vcpu_requests(struct kvm_vcpu *vcpu)
+{
+	if (kvm_check_request(KVM_REQ_NESTED_S2_UNMAP, vcpu)) {
+		struct kvm_s2_mmu *mmu = vcpu->arch.hw_mmu;
+
+		write_lock(&vcpu->kvm->mmu_lock);
+		if (mmu->pending_unmap) {
+			kvm_stage2_unmap_range(mmu, 0, kvm_phys_size(mmu), true);
+			mmu->pending_unmap = false;
+		}
+		write_unlock(&vcpu->kvm->mmu_lock);
+	}
 }
+70-7
arch/arm64/kvm/sys_regs.c
···
 		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
 
 		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RNDR_trap);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_NMI);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_GCS);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_THE);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTEX);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_DF2);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_PFAR);
 		break;
 	case SYS_ID_AA64PFR2_EL1:
 		/* We only expose FPMR */
···
 		val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
 		break;
 	case SYS_ID_AA64MMFR3_EL1:
-		val &= ID_AA64MMFR3_EL1_TCRX | ID_AA64MMFR3_EL1_S1POE;
+		val &= ID_AA64MMFR3_EL1_TCRX | ID_AA64MMFR3_EL1_S1POE |
+			ID_AA64MMFR3_EL1_S1PIE;
 		break;
 	case SYS_ID_MMFR4_EL1:
 		val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX);
···
 	 * one cache line.
 	 */
 	if (kvm_has_mte(vcpu->kvm))
-		clidr |= 2 << CLIDR_TTYPE_SHIFT(loc);
+		clidr |= 2ULL << CLIDR_TTYPE_SHIFT(loc);
 
 	__vcpu_sys_reg(vcpu, r->reg) = clidr;
···
 				   ID_AA64PFR0_EL1_RAS |
 				   ID_AA64PFR0_EL1_AdvSIMD |
 				   ID_AA64PFR0_EL1_FP), },
-	ID_SANITISED(ID_AA64PFR1_EL1),
+	ID_WRITABLE(ID_AA64PFR1_EL1, ~(ID_AA64PFR1_EL1_PFAR |
+				       ID_AA64PFR1_EL1_DF2 |
+				       ID_AA64PFR1_EL1_MTEX |
+				       ID_AA64PFR1_EL1_THE |
+				       ID_AA64PFR1_EL1_GCS |
+				       ID_AA64PFR1_EL1_MTE_frac |
+				       ID_AA64PFR1_EL1_NMI |
+				       ID_AA64PFR1_EL1_RNDR_trap |
+				       ID_AA64PFR1_EL1_SME |
+				       ID_AA64PFR1_EL1_RES0 |
+				       ID_AA64PFR1_EL1_MPAM_frac |
+				       ID_AA64PFR1_EL1_RAS_frac |
+				       ID_AA64PFR1_EL1_MTE)),
 	ID_WRITABLE(ID_AA64PFR2_EL1, ID_AA64PFR2_EL1_FPMR),
 	ID_UNALLOCATED(4,3),
 	ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0),
···
 	  .get_user = get_id_reg,
 	  .set_user = set_id_aa64dfr0_el1,
 	  .reset = read_sanitised_id_aa64dfr0_el1,
-	  .val = ID_AA64DFR0_EL1_PMUVer_MASK |
+	/*
+	 * Prior to FEAT_Debugv8.9, the architecture defines context-aware
+	 * breakpoints (CTX_CMPs) as the highest numbered breakpoints (BRPs).
+	 * KVM does not trap + emulate the breakpoint registers, and as such
+	 * cannot support a layout that misaligns with the underlying hardware.
+	 * While it may be possible to describe a subset that aligns with
+	 * hardware, just prevent changes to BRPs and CTX_CMPs altogether for
+	 * simplicity.
+	 *
+	 * See DDI0487K.a, section D2.8.3 Breakpoint types and linking
+	 * of breakpoints for more details.
+	 */
+	  .val = ID_AA64DFR0_EL1_DoubleLock_MASK |
+		 ID_AA64DFR0_EL1_WRPs_MASK |
+		 ID_AA64DFR0_EL1_PMUVer_MASK |
 		 ID_AA64DFR0_EL1_DebugVer_MASK, },
 	ID_SANITISED(ID_AA64DFR1_EL1),
 	ID_UNALLOCATED(5,2),
···
 					ID_AA64MMFR2_EL1_NV |
 					ID_AA64MMFR2_EL1_CCIDX)),
 	ID_WRITABLE(ID_AA64MMFR3_EL1, (ID_AA64MMFR3_EL1_TCRX |
+				       ID_AA64MMFR3_EL1_S1PIE |
 				       ID_AA64MMFR3_EL1_S1POE)),
 	ID_SANITISED(ID_AA64MMFR4_EL1),
 	ID_UNALLOCATED(7,5),
···
 	 * Drop all shadow S2s, resulting in S1/S2 TLBIs for each of the
 	 * corresponding VMIDs.
 	 */
-	kvm_nested_s2_unmap(vcpu->kvm);
+	kvm_nested_s2_unmap(vcpu->kvm, true);
 
 	write_unlock(&vcpu->kvm->mmu_lock);
···
 static void s2_mmu_unmap_range(struct kvm_s2_mmu *mmu,
 			       const union tlbi_info *info)
 {
-	kvm_stage2_unmap_range(mmu, info->range.start, info->range.size);
+	/*
+	 * The unmap operation is allowed to drop the MMU lock and block, which
+	 * means that @mmu could be used for a different context than the one
+	 * currently being invalidated.
+	 *
+	 * This behavior is still safe, as:
+	 *
+	 *  1) The vCPU(s) that recycled the MMU are responsible for invalidating
+	 *     the entire MMU before reusing it, which still honors the intent
+	 *     of a TLBI.
+	 *
+	 *  2) Until the guest TLBI instruction is 'retired' (i.e. increment PC
+	 *     and ERET to the guest), other vCPUs are allowed to use stale
+	 *     translations.
+	 *
+	 *  3) Accidentally unmapping an unrelated MMU context is nonfatal, and
+	 *     at worst may cause more aborts for shadow stage-2 fills.
+	 *
+	 * Dropping the MMU lock also implies that shadow stage-2 fills could
+	 * happen behind the back of the TLBI. This is still safe, though, as
+	 * the L1 needs to put its stage-2 in a consistent state before doing
+	 * the TLBI.
+	 */
+	kvm_stage2_unmap_range(mmu, info->range.start, info->range.size, true);
 }
 
 static bool handle_vmalls12e1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
···
 	max_size = compute_tlb_inval_range(mmu, info->ipa.addr);
 	base_addr &= ~(max_size - 1);
 
-	kvm_stage2_unmap_range(mmu, base_addr, max_size);
+	/*
+	 * See comment in s2_mmu_unmap_range() for why this is allowed to
+	 * reschedule.
+	 */
+	kvm_stage2_unmap_range(mmu, base_addr, max_size, true);
 }
 
 static bool handle_ipas2e1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+35-6
arch/arm64/kvm/vgic/vgic-init.c
···
 	kfree(vgic_cpu->private_irqs);
 	vgic_cpu->private_irqs = NULL;
 
-	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3)
+	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
+		/*
+		 * If this vCPU is being destroyed because of a failed creation
+		 * then unregister the redistributor to avoid leaving behind a
+		 * dangling pointer to the vCPU struct.
+		 *
+		 * vCPUs that have been successfully created (i.e. added to
+		 * kvm->vcpu_array) get unregistered in kvm_vgic_destroy(), as
+		 * this function gets called while holding kvm->arch.config_lock
+		 * in the VM teardown path and would otherwise introduce a lock
+		 * inversion w.r.t. kvm->srcu.
+		 *
+		 * vCPUs that failed creation are torn down outside of the
+		 * kvm->arch.config_lock and do not get unregistered in
+		 * kvm_vgic_destroy(), meaning it is both safe and necessary to
+		 * do so here.
+		 */
+		if (kvm_get_vcpu_by_id(vcpu->kvm, vcpu->vcpu_id) != vcpu)
+			vgic_unregister_redist_iodev(vcpu);
+
 		vgic_cpu->rd_iodev.base_addr = VGIC_ADDR_UNDEF;
+	}
 }
 
 void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
···
 	if (ret)
 		goto out;
 
-	dist->ready = true;
 	dist_base = dist->vgic_dist_base;
 	mutex_unlock(&kvm->arch.config_lock);
 
 	ret = vgic_register_dist_iodev(kvm, dist_base, type);
-	if (ret)
+	if (ret) {
 		kvm_err("Unable to register VGIC dist MMIO regions\n");
+		goto out_slots;
+	}
 
+	/*
+	 * kvm_io_bus_register_dev() guarantees all readers see the new MMIO
+	 * registration before returning through synchronize_srcu(), which also
+	 * implies a full memory barrier. As such, marking the distributor as
+	 * 'ready' here is guaranteed to be ordered after all vCPUs having seen
+	 * a completely configured distributor.
+	 */
+	dist->ready = true;
 	goto out_slots;
 out:
 	mutex_unlock(&kvm->arch.config_lock);
 out_slots:
-	mutex_unlock(&kvm->slots_lock);
-
 	if (ret)
-		kvm_vgic_destroy(kvm);
+		kvm_vm_dead(kvm);
+
+	mutex_unlock(&kvm->slots_lock);
 
 	return ret;
 }
+6-1
arch/arm64/kvm/vgic/vgic-kvm-device.c
···
 
 	mutex_lock(&dev->kvm->arch.config_lock);
 
-	if (vgic_ready(dev->kvm) || dev->kvm->arch.vgic.nr_spis)
+	/*
+	 * Either userspace has already configured NR_IRQS or
+	 * the vgic has already been initialized and vgic_init()
+	 * supplied a default amount of SPIs.
+	 */
+	if (dev->kvm->arch.vgic.nr_spis)
 		ret = -EBUSY;
 	else
 		dev->kvm->arch.vgic.nr_spis =
+4
arch/loongarch/include/asm/bootinfo.h
···
 
 #define NR_WORDS DIV_ROUND_UP(NR_CPUS, BITS_PER_LONG)
 
+/*
+ * The "core" of cores_per_node and cores_per_package stands for a
+ * logical core, which means in a SMT system it stands for a thread.
+ */
 struct loongson_system_configuration {
 	int nr_cpus;
 	int nr_nodes;
+1-1
arch/loongarch/include/asm/kasan.h
···
 #define XRANGE_SHIFT (48)
 
 /* Valid address length */
-#define XRANGE_SHADOW_SHIFT	(PGDIR_SHIFT + PAGE_SHIFT - 3)
+#define XRANGE_SHADOW_SHIFT	min(cpu_vabits, VA_BITS)
 /* Used for taking out the valid address */
 #define XRANGE_SHADOW_MASK	GENMASK_ULL(XRANGE_SHADOW_SHIFT - 1, 0)
 /* One segment whole address space size */
···
 extern void pgd_init(void *addr);
 extern void pud_init(void *addr);
 extern void pmd_init(void *addr);
+extern void kernel_pte_init(void *addr);
 
 /*
  * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
···
 {
 	WRITE_ONCE(*ptep, pteval);
 
-	if (pte_val(pteval) & _PAGE_GLOBAL) {
-		pte_t *buddy = ptep_buddy(ptep);
-		/*
-		 * Make sure the buddy is global too (if it's !none,
-		 * it better already be global)
-		 */
-		if (pte_none(ptep_get(buddy))) {
 #ifdef CONFIG_SMP
-			/*
-			 * For SMP, multiple CPUs can race, so we need
-			 * to do this atomically.
-			 */
-			__asm__ __volatile__(
-			__AMOR "$zero, %[global], %[buddy] \n"
-			: [buddy] "+ZB" (buddy->pte)
-			: [global] "r" (_PAGE_GLOBAL)
-			: "memory");
-
-			DBAR(0b11000); /* o_wrw = 0b11000 */
-#else /* !CONFIG_SMP */
-			WRITE_ONCE(*buddy, __pte(pte_val(ptep_get(buddy)) | _PAGE_GLOBAL));
-#endif /* CONFIG_SMP */
-		}
-	}
+	if (pte_val(pteval) & _PAGE_GLOBAL)
+		DBAR(0b11000); /* o_wrw = 0b11000 */
+#endif
 }
 
 static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
-	/* Preserve global status for the pair */
-	if (pte_val(ptep_get(ptep_buddy(ptep))) & _PAGE_GLOBAL)
-		set_pte(ptep, __pte(_PAGE_GLOBAL));
-	else
-		set_pte(ptep, __pte(0));
+	pte_t pte = ptep_get(ptep);
+	pte_val(pte) &= _PAGE_GLOBAL;
+	set_pte(ptep, pte);
 }
 
 #define PGD_T_LOG2	(__builtin_ffs(sizeof(pgd_t)) - 1)
+8-6
arch/loongarch/kernel/process.c
···
 {
 	unsigned long top = TASK_SIZE & PAGE_MASK;
 
-	/* Space for the VDSO & data page */
-	top -= PAGE_ALIGN(current->thread.vdso->size);
-	top -= VVAR_SIZE;
+	if (current->thread.vdso) {
+		/* Space for the VDSO & data page */
+		top -= PAGE_ALIGN(current->thread.vdso->size);
+		top -= VVAR_SIZE;
 
-	/* Space to randomize the VDSO base */
-	if (current->flags & PF_RANDOMIZE)
-		top -= VDSO_RANDOMIZE_SIZE;
+		/* Space to randomize the VDSO base */
+		if (current->flags & PF_RANDOMIZE)
+			top -= VDSO_RANDOMIZE_SIZE;
+	}
 
 	return top;
 }
···
 	if (kvm_vcpu_is_blocking(vcpu)) {
 
 		/*
-		 * HRTIMER_MODE_PINNED is suggested since vcpu may run in
-		 * the same physical cpu in next time
+		 * HRTIMER_MODE_PINNED_HARD is suggested since vcpu may run in
+		 * the same physical cpu in next time, and the timer should run
+		 * in hardirq context even in the PREEMPT_RT case.
 		 */
-		hrtimer_start(&vcpu->arch.swtimer, expire, HRTIMER_MODE_ABS_PINNED);
+		hrtimer_start(&vcpu->arch.swtimer, expire, HRTIMER_MODE_ABS_PINNED_HARD);
 	}
 }
···
 CONFIG_HZ_100=y
 CONFIG_CERT_STORE=y
 CONFIG_EXPOLINE=y
-# CONFIG_EXPOLINE_EXTERN is not set
 CONFIG_EXPOLINE_AUTO=y
 CONFIG_CHSC_SCH=y
 CONFIG_VFIO_CCW=m
···
 CONFIG_ZSWAP=y
 CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
 CONFIG_ZSMALLOC_STAT=y
+CONFIG_SLAB_BUCKETS=y
 CONFIG_SLUB_STATS=y
 # CONFIG_COMPAT_BRK is not set
 CONFIG_MEMORY_HOTPLUG=y
···
 # CONFIG_FW_LOADER is not set
 CONFIG_CONNECTOR=y
 CONFIG_ZRAM=y
+CONFIG_ZRAM_BACKEND_LZ4=y
+CONFIG_ZRAM_BACKEND_LZ4HC=y
+CONFIG_ZRAM_BACKEND_ZSTD=y
+CONFIG_ZRAM_BACKEND_DEFLATE=y
+CONFIG_ZRAM_BACKEND_842=y
+CONFIG_ZRAM_BACKEND_LZO=y
+CONFIG_ZRAM_DEF_COMP_DEFLATE=y
 CONFIG_BLK_DEV_LOOP=m
 CONFIG_BLK_DEV_DRBD=m
 CONFIG_BLK_DEV_NBD=m
···
 CONFIG_DM_FLAKEY=m
 CONFIG_DM_VERITY=m
 CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y
+CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG_PLATFORM_KEYRING=y
 CONFIG_DM_SWITCH=m
 CONFIG_DM_INTEGRITY=m
 CONFIG_DM_VDO=m
···
 CONFIG_MLX4_EN=m
 CONFIG_MLX5_CORE=m
 CONFIG_MLX5_CORE_EN=y
+# CONFIG_NET_VENDOR_META is not set
 # CONFIG_NET_VENDOR_MICREL is not set
 # CONFIG_NET_VENDOR_MICROCHIP is not set
 # CONFIG_NET_VENDOR_MICROSEMI is not set
···
 CONFIG_NFSD_V3_ACL=y
 CONFIG_NFSD_V4=y
 CONFIG_NFSD_V4_SECURITY_LABEL=y
+# CONFIG_NFSD_LEGACY_CLIENT_TRACKING is not set
 CONFIG_CIFS=m
 CONFIG_CIFS_UPCALL=y
 CONFIG_CIFS_XATTR=y
···
 CONFIG_CRYPTO_ECDH=m
 CONFIG_CRYPTO_ECDSA=m
 CONFIG_CRYPTO_ECRDSA=m
-CONFIG_CRYPTO_SM2=m
 CONFIG_CRYPTO_CURVE25519=m
 CONFIG_CRYPTO_AES_TI=m
 CONFIG_CRYPTO_ANUBIS=m
+12-2
arch/s390/configs/defconfig
···
 CONFIG_HZ_100=y
 CONFIG_CERT_STORE=y
 CONFIG_EXPOLINE=y
-# CONFIG_EXPOLINE_EXTERN is not set
 CONFIG_EXPOLINE_AUTO=y
 CONFIG_CHSC_SCH=y
 CONFIG_VFIO_CCW=m
···
 CONFIG_ZSWAP=y
 CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
 CONFIG_ZSMALLOC_STAT=y
+CONFIG_SLAB_BUCKETS=y
 # CONFIG_COMPAT_BRK is not set
 CONFIG_MEMORY_HOTPLUG=y
 CONFIG_MEMORY_HOTREMOVE=y
···
 # CONFIG_FW_LOADER is not set
 CONFIG_CONNECTOR=y
 CONFIG_ZRAM=y
+CONFIG_ZRAM_BACKEND_LZ4=y
+CONFIG_ZRAM_BACKEND_LZ4HC=y
+CONFIG_ZRAM_BACKEND_ZSTD=y
+CONFIG_ZRAM_BACKEND_DEFLATE=y
+CONFIG_ZRAM_BACKEND_842=y
+CONFIG_ZRAM_BACKEND_LZO=y
+CONFIG_ZRAM_DEF_COMP_DEFLATE=y
 CONFIG_BLK_DEV_LOOP=m
 CONFIG_BLK_DEV_DRBD=m
 CONFIG_BLK_DEV_NBD=m
···
 CONFIG_DM_FLAKEY=m
 CONFIG_DM_VERITY=m
 CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y
+CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG_PLATFORM_KEYRING=y
 CONFIG_DM_SWITCH=m
 CONFIG_DM_INTEGRITY=m
 CONFIG_DM_VDO=m
···
 CONFIG_MLX4_EN=m
 CONFIG_MLX5_CORE=m
 CONFIG_MLX5_CORE_EN=y
+# CONFIG_NET_VENDOR_META is not set
 # CONFIG_NET_VENDOR_MICREL is not set
 # CONFIG_NET_VENDOR_MICROCHIP is not set
 # CONFIG_NET_VENDOR_MICROSEMI is not set
···
 CONFIG_NFSD_V3_ACL=y
 CONFIG_NFSD_V4=y
 CONFIG_NFSD_V4_SECURITY_LABEL=y
+# CONFIG_NFSD_LEGACY_CLIENT_TRACKING is not set
 CONFIG_CIFS=m
 CONFIG_CIFS_UPCALL=y
 CONFIG_CIFS_XATTR=y
···
 CONFIG_CRYPTO_ECDH=m
 CONFIG_CRYPTO_ECDSA=m
 CONFIG_CRYPTO_ECRDSA=m
-CONFIG_CRYPTO_SM2=m
 CONFIG_CRYPTO_CURVE25519=m
 CONFIG_CRYPTO_AES_TI=m
 CONFIG_CRYPTO_ANUBIS=m
···
 CONFIG_CRYPTO_LZ4HC=m
 CONFIG_CRYPTO_ZSTD=m
 CONFIG_CRYPTO_ANSI_CPRNG=m
+CONFIG_CRYPTO_JITTERENTROPY_OSR=1
 CONFIG_CRYPTO_USER_API_HASH=m
 CONFIG_CRYPTO_USER_API_SKCIPHER=m
 CONFIG_CRYPTO_USER_API_RNG=m
+1
arch/s390/configs/zfcpdump_defconfig
···
 # CONFIG_HVC_IUCV is not set
 # CONFIG_HW_RANDOM_S390 is not set
 # CONFIG_HMC_DRV is not set
+# CONFIG_S390_UV_UAPI is not set
 # CONFIG_S390_TAPE is not set
 # CONFIG_VMCP is not set
 # CONFIG_MONWRITER is not set
···
 	const gfn_t gfn = gpa_to_gfn(gpa);
 	int rc;
 
+	if (!gfn_to_memslot(kvm, gfn))
+		return PGM_ADDRESSING;
 	if (mode == GACC_STORE)
 		rc = kvm_write_guest_page(kvm, gfn, data, offset, len);
 	else
···
 		gra += fragment_len;
 		data += fragment_len;
 	}
+	if (rc > 0)
+		vcpu->arch.pgm.code = rc;
 	return rc;
 }
+8-6
arch/s390/kvm/gaccess.h
···
  * @len: number of bytes to copy
  *
  * Copy @len bytes from @data (kernel space) to @gra (guest real address).
- * It is up to the caller to ensure that the entire guest memory range is
- * valid memory before calling this function.
  * Guest low address and key protection are not checked.
  *
- * Returns zero on success or -EFAULT on error.
+ * Returns zero on success, -EFAULT when copying from @data failed, or
+ * PGM_ADDRESSING in case @gra is outside a memslot. In this case, pgm check info
+ * is also stored to allow injecting into the guest (if applicable) using
+ * kvm_s390_inject_prog_cond().
  *
  * If an error occurs data may have been copied partially to guest memory.
  */
···
  * @len: number of bytes to copy
  *
  * Copy @len bytes from @gra (guest real address) to @data (kernel space).
- * It is up to the caller to ensure that the entire guest memory range is
- * valid memory before calling this function.
  * Guest key protection is not checked.
  *
- * Returns zero on success or -EFAULT on error.
+ * Returns zero on success, -EFAULT when copying to @data failed, or
+ * PGM_ADDRESSING in case @gra is outside a memslot. In this case, pgm check info
+ * is also stored to allow injecting into the guest (if applicable) using
+ * kvm_s390_inject_prog_cond().
  *
  * If an error occurs data may have been copied partially to kernel space.
  */
+9-8
arch/s390/pci/pci_event.c
···
 		goto no_pdev;
 
 	switch (ccdf->pec) {
-	case 0x003a: /* Service Action or Error Recovery Successful */
+	case 0x002a: /* Error event concerns FMB */
+	case 0x002b:
+	case 0x002c:
+		break;
+	case 0x0040: /* Service Action or Error Recovery Failed */
+	case 0x003b:
+		zpci_event_io_failure(pdev, pci_channel_io_perm_failure);
+		break;
+	default: /* PCI function left in the error state attempt to recover */
 		ers_res = zpci_event_attempt_error_recovery(pdev);
 		if (ers_res != PCI_ERS_RESULT_RECOVERED)
 			zpci_event_io_failure(pdev, pci_channel_io_perm_failure);
-		break;
-	default:
-		/*
-		 * Mark as frozen not permanently failed because the device
-		 * could be subsequently recovered by the platform.
-		 */
-		zpci_event_io_failure(pdev, pci_channel_io_frozen);
 		break;
 	}
 	pci_dev_put(pdev);
···
 #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* Disable Speculative Store Bypass. */
 #define X86_FEATURE_LS_CFG_SSBD		( 7*32+24) /* AMD SSBD implementation via LS_CFG MSR */
 #define X86_FEATURE_IBRS		( 7*32+25) /* "ibrs" Indirect Branch Restricted Speculation */
-#define X86_FEATURE_IBPB		( 7*32+26) /* "ibpb" Indirect Branch Prediction Barrier */
+#define X86_FEATURE_IBPB		( 7*32+26) /* "ibpb" Indirect Branch Prediction Barrier without a guaranteed RSB flush */
 #define X86_FEATURE_STIBP		( 7*32+27) /* "stibp" Single Thread Indirect Branch Predictors */
 #define X86_FEATURE_ZEN			( 7*32+28) /* Generic flag for all Zen and newer */
 #define X86_FEATURE_L1TF_PTEINV		( 7*32+29) /* L1TF workaround PTE inversion */
···
 #define X86_FEATURE_CPPC		(13*32+27) /* "cppc" Collaborative Processor Performance Control */
 #define X86_FEATURE_AMD_PSFD		(13*32+28) /* Predictive Store Forwarding Disable */
 #define X86_FEATURE_BTC_NO		(13*32+29) /* Not vulnerable to Branch Type Confusion */
+#define X86_FEATURE_AMD_IBPB_RET	(13*32+30) /* IBPB clears return address predictor */
 #define X86_FEATURE_BRS			(13*32+31) /* "brs" Branch Sampling available */
 
 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
···
 #define X86_BUG_DIV0			X86_BUG(1*32 + 1) /* "div0" AMD DIV0 speculation bug */
 #define X86_BUG_RFDS			X86_BUG(1*32 + 2) /* "rfds" CPU is vulnerable to Register File Data Sampling */
 #define X86_BUG_BHI			X86_BUG(1*32 + 3) /* "bhi" CPU is affected by Branch History Injection */
+#define X86_BUG_IBPB_NO_RET		X86_BUG(1*32 + 4) /* "ibpb_no_ret" IBPB omits return target predictions */
 #endif /* _ASM_X86_CPUFEATURES_H */
+10-1
arch/x86/include/asm/nospec-branch.h
···
  * Note: Only the memory operand variant of VERW clears the CPU buffers.
  */
 .macro CLEAR_CPU_BUFFERS
-	ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
+#ifdef CONFIG_X86_64
+	ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
+#else
+	/*
+	 * In 32bit mode, the memory operand must be a %cs reference. The data
+	 * segments may not be usable (vm86 mode), and the stack segment may not
+	 * be flat (ESPFIX32).
+	 */
+	ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
+#endif
 .endm
 
 #ifdef CONFIG_X86_64
···
 	v = apic_read(APIC_LVTT);
 	v |= (APIC_LVT_MASKED | LOCAL_TIMER_VECTOR);
 	apic_write(APIC_LVTT, v);
-	apic_write(APIC_TMICT, 0);
+
+	/*
+	 * Setting APIC_LVT_MASKED (above) should be enough to tell
+	 * the hardware that this timer will never fire. But AMD
+	 * erratum 411 and some Intel CPU behavior circa 2024 say
+	 * otherwise. Time for belt and suspenders programming: mask
+	 * the timer _and_ zero the counter registers:
+	 */
+	if (v & APIC_LVT_TIMER_TSCDEADLINE)
+		wrmsrl(MSR_IA32_TSC_DEADLINE, 0);
+	else
+		apic_write(APIC_TMICT, 0);
+
 	return 0;
 }
+2-1
arch/x86/kernel/cpu/amd.c
···
 	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
 		return;
 
-	on_each_cpu(zenbleed_check_cpu, NULL, 1);
+	if (cpu_feature_enabled(X86_FEATURE_ZEN2))
+		on_each_cpu(zenbleed_check_cpu, NULL, 1);
 }
+32
arch/x86/kernel/cpu/bugs.c
···
 
 	case RETBLEED_MITIGATION_IBPB:
 		setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
+
+		/*
+		 * IBPB on entry already obviates the need for
+		 * software-based untraining so clear those in case some
+		 * other mitigation like SRSO has selected them.
+		 */
+		setup_clear_cpu_cap(X86_FEATURE_UNRET);
+		setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
+
 		setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
 		mitigate_smt = true;
+
+		/*
+		 * There is no need for RSB filling: entry_ibpb() ensures
+		 * all predictions, including the RSB, are invalidated,
+		 * regardless of IBPB implementation.
+		 */
+		setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+
 		break;
 
 	case RETBLEED_MITIGATION_STUFF:
···
 		if (has_microcode) {
 			setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
 			srso_mitigation = SRSO_MITIGATION_IBPB;
+
+			/*
+			 * IBPB on entry already obviates the need for
+			 * software-based untraining so clear those in case some
+			 * other mitigation like Retbleed has selected them.
+			 */
+			setup_clear_cpu_cap(X86_FEATURE_UNRET);
+			setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
 		}
 	} else {
 		pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
···
 		if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
 			setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
 			srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
+
+			/*
+			 * There is no need for RSB filling: entry_ibpb() ensures
+			 * all predictions, including the RSB, are invalidated,
+			 * regardless of IBPB implementation.
+			 */
+			setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
 		}
 	} else {
 		pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
+3
arch/x86/kernel/cpu/common.c
···
 	     boot_cpu_has(X86_FEATURE_HYPERVISOR)))
 		setup_force_cpu_bug(X86_BUG_BHI);
 
+	if (cpu_has(c, X86_FEATURE_AMD_IBPB) && !cpu_has(c, X86_FEATURE_AMD_IBPB_RET))
+		setup_force_cpu_bug(X86_BUG_IBPB_NO_RET);
+
 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
 		return;
···
  * hardware. The allocated bandwidth percentage is rounded to the next
  * control step available on the hardware.
  */
-static bool bw_validate(char *buf, unsigned long *data, struct rdt_resource *r)
+static bool bw_validate(char *buf, u32 *data, struct rdt_resource *r)
 {
-	unsigned long bw;
 	int ret;
+	u32 bw;
 
 	/*
 	 * Only linear delay values is supported for current Intel SKUs.
···
 		return false;
 	}
 
-	ret = kstrtoul(buf, 10, &bw);
+	ret = kstrtou32(buf, 10, &bw);
 	if (ret) {
-		rdt_last_cmd_printf("Non-decimal digit in MB value %s\n", buf);
+		rdt_last_cmd_printf("Invalid MB value %s\n", buf);
 		return false;
 	}
 
-	if ((bw < r->membw.min_bw || bw > r->default_ctrl) &&
-	    !is_mba_sc(r)) {
-		rdt_last_cmd_printf("MB value %ld out of range [%d,%d]\n", bw,
-				    r->membw.min_bw, r->default_ctrl);
+	/* Nothing else to do if software controller is enabled. */
+	if (is_mba_sc(r)) {
+		*data = bw;
+		return true;
+	}
+
+	if (bw < r->membw.min_bw || bw > r->default_ctrl) {
+		rdt_last_cmd_printf("MB value %u out of range [%d,%d]\n",
+				    bw, r->membw.min_bw, r->default_ctrl);
 		return false;
 	}
···
 	struct resctrl_staged_config *cfg;
 	u32 closid = data->rdtgrp->closid;
 	struct rdt_resource *r = s->res;
-	unsigned long bw_val;
+	u32 bw_val;
 
 	cfg = &d->staged_config[s->conf_type];
 	if (cfg->have_new_ctrl) {
+4
arch/x86/kernel/kvm.c
···3737#include <asm/apic.h>3838#include <asm/apicdef.h>3939#include <asm/hypervisor.h>4040+#include <asm/mtrr.h>4041#include <asm/tlb.h>4142#include <asm/cpuidle_haltpoll.h>4243#include <asm/ptrace.h>···981980 }982981 kvmclock_init();983982 x86_platform.apic_post_init = kvm_apic_init;983983+984984+ /* Set WB as the default cache mode for SEV-SNP and TDX */985985+ mtrr_overwrite_state(NULL, 0, MTRR_TYPE_WRBACK);984986}985987986988#if defined(CONFIG_AMD_MEM_ENCRYPT)
+17-10
arch/x86/kvm/mmu/mmu.c
···15561556{15571557 bool flush = false;1558155815591559+ /*15601560+ * To prevent races with vCPUs faulting in a gfn using stale data,15611561+ * zapping a gfn range must be protected by mmu_invalidate_in_progress15621562+ * (and mmu_invalidate_seq). The only exception is memslot deletion;15631563+ * in that case, SRCU synchronization ensures that SPTEs are zapped15641564+ * after all vCPUs have unlocked SRCU, guaranteeing that vCPUs see the15651565+ * invalid slot.15661566+ */15671567+ lockdep_assert_once(kvm->mmu_invalidate_in_progress ||15681568+ lockdep_is_held(&kvm->slots_lock));15691569+15591570 if (kvm_memslots_have_rmaps(kvm))15601571 flush = __kvm_rmap_zap_gfn_range(kvm, range->slot,15611572 range->start, range->end,···18951884 if (is_obsolete_sp((_kvm), (_sp))) { \18961885 } else1897188618981898-#define for_each_gfn_valid_sp(_kvm, _sp, _gfn) \18871887+#define for_each_gfn_valid_sp_with_gptes(_kvm, _sp, _gfn) \18991888 for_each_valid_sp(_kvm, _sp, \19001889 &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)]) \19011901- if ((_sp)->gfn != (_gfn)) {} else19021902-19031903-#define for_each_gfn_valid_sp_with_gptes(_kvm, _sp, _gfn) \19041904- for_each_gfn_valid_sp(_kvm, _sp, _gfn) \19051905- if (!sp_has_gptes(_sp)) {} else18901890+ if ((_sp)->gfn != (_gfn) || !sp_has_gptes(_sp)) {} else1906189119071892static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)19081893{···7070706370717064 /*70727065 * Since accounting information is stored in struct kvm_arch_memory_slot,70737073- * shadow pages deletion (e.g. unaccount_shadowed()) requires that all70747074- * gfns with a shadow page have a corresponding memslot. Do so before70757075- * the memslot goes away.70667066+ * all MMU pages that are shadowing guest PTEs must be zapped before the70677067+ * memslot is deleted, as freeing such pages after the memslot is freed70687068+ * will result in use-after-free, e.g. in unaccount_shadowed().70767069 */70777070 for (i = 0; i < slot->npages; i++) {70787071 struct kvm_mmu_page *sp;70797072 gfn_t gfn = slot->base_gfn + i;7080707370817081- for_each_gfn_valid_sp(kvm, sp, gfn)70747074+ for_each_gfn_valid_sp_with_gptes(kvm, sp, gfn)70827075 kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);7083707670847077 if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
+5-1
arch/x86/kvm/svm/nested.c
···6363 u64 pdpte;6464 int ret;65656666+ /*6767+ * Note, nCR3 is "assumed" to be 32-byte aligned, i.e. the CPU ignores6868+ * nCR3[4:0] when loading PDPTEs from memory.6969+ */6670 ret = kvm_vcpu_read_guest_page(vcpu, gpa_to_gfn(cr3), &pdpte,6767- offset_in_page(cr3) + index * 8, 8);7171+ (cr3 & GENMASK(11, 5)) + index * 8, 8);6872 if (ret)6973 return 0;7074 return pdpte;
···471471 wait_for_completion(&thi->stop);472472}473473474474-int conn_lowest_minor(struct drbd_connection *connection)475475-{476476- struct drbd_peer_device *peer_device;477477- int vnr = 0, minor = -1;478478-479479- rcu_read_lock();480480- peer_device = idr_get_next(&connection->peer_devices, &vnr);481481- if (peer_device)482482- minor = device_to_minor(peer_device->device);483483- rcu_read_unlock();484484-485485- return minor;486486-}487487-488474#ifdef CONFIG_SMP489475/*490476 * drbd_calc_cpu_mask() - Generate CPU masks, spread over all CPUs
+10-1
drivers/block/ublk_drv.c
···23802380 * TODO: provide forward progress for RECOVERY handler, so that23812381 * unprivileged device can benefit from it23822382 */23832383- if (info.flags & UBLK_F_UNPRIVILEGED_DEV)23832383+ if (info.flags & UBLK_F_UNPRIVILEGED_DEV) {23842384 info.flags &= ~(UBLK_F_USER_RECOVERY_REISSUE |23852385 UBLK_F_USER_RECOVERY);23862386+23872387+ /*23882388+ * For USER_COPY, we depend on userspace to fill request23892389+ * buffer by pwrite() to ublk char device, which can't be23902390+ * used for unprivileged device23912391+ */23922392+ if (info.flags & UBLK_F_USER_COPY)23932393+ return -EINVAL;23942394+ }2386239523872396 /* the created device is always owned by current user */23882397 ublk_store_owner_uid_gid(&info.owner_uid, &info.owner_gid);
+9-18
drivers/bluetooth/btusb.c
···13451345 if (!urb)13461346 return -ENOMEM;1347134713481348- /* Use maximum HCI Event size so the USB stack handles13491349- * ZPL/short-transfer automatically.13501350- */13511351- size = HCI_MAX_EVENT_SIZE;13481348+ if (le16_to_cpu(data->udev->descriptor.idVendor) == 0x0a12 &&13491349+ le16_to_cpu(data->udev->descriptor.idProduct) == 0x0001)13501350+ /* Fake CSR devices don't seem to support short-transfer */13511351+ size = le16_to_cpu(data->intr_ep->wMaxPacketSize);13521352+ else13531353+ /* Use maximum HCI Event size so the USB stack handles13541354+ * ZPL/short-transfer automatically.13551355+ */13561356+ size = HCI_MAX_EVENT_SIZE;1352135713531358 buf = kmalloc(size, mem_flags);13541359 if (!buf) {···40434038static int btusb_suspend(struct usb_interface *intf, pm_message_t message)40444039{40454040 struct btusb_data *data = usb_get_intfdata(intf);40464046- int err;4047404140484042 BT_DBG("intf %p", intf);40494043···40554051 if (data->suspend_count++)40564052 return 0;4057405340584058- /* Notify Host stack to suspend; this has to be done before stopping40594059- * the traffic since the hci_suspend_dev itself may generate some40604060- * traffic.40614061- */40624062- err = hci_suspend_dev(data->hdev);40634063- if (err) {40644064- data->suspend_count--;40654065- return err;40664066- }40674067-40684054 spin_lock_irq(&data->txlock);40694055 if (!(PMSG_IS_AUTO(message) && data->tx_in_flight)) {40704056 set_bit(BTUSB_SUSPENDING, &data->flags);···40624068 } else {40634069 spin_unlock_irq(&data->txlock);40644070 data->suspend_count--;40654065- hci_resume_dev(data->hdev);40664071 return -EBUSY;40674072 }40684073···41814188 clear_bit(BTUSB_SUSPENDING, &data->flags);41824189 spin_unlock_irq(&data->txlock);41834190 schedule_work(&data->work);41844184-41854185- hci_resume_dev(data->hdev);4186419141874192 return 0;41884193
+1-1
drivers/cdrom/cdrom.c
···23132313 return -EINVAL;2314231423152315 /* Prevent arg from speculatively bypassing the length check */23162316- barrier_nospec();23162316+ arg = array_index_nospec(arg, cdi->capacity);2317231723182318 info = kmalloc(sizeof(*info), GFP_KERNEL);23192319 if (!info)
···439439 if (list->id > max)440440 max = list->id;441441 if (list->child && list->child->id > max)442442- max = list->id;442442+ max = list->child->id;443443 }444444445445 return max;
···11# SPDX-License-Identifier: GPL-2.0-only22-scmi_transport_mailbox-objs := mailbox.o33-obj-$(CONFIG_ARM_SCMI_TRANSPORT_MAILBOX) += scmi_transport_mailbox.o22+# Keep before scmi_transport_mailbox.o to allow precedence33+# while matching the compatible.44scmi_transport_smc-objs := smc.o55obj-$(CONFIG_ARM_SCMI_TRANSPORT_SMC) += scmi_transport_smc.o66+scmi_transport_mailbox-objs := mailbox.o77+obj-$(CONFIG_ARM_SCMI_TRANSPORT_MAILBOX) += scmi_transport_mailbox.o68scmi_transport_optee-objs := optee.o79obj-$(CONFIG_ARM_SCMI_TRANSPORT_OPTEE) += scmi_transport_optee.o810scmi_transport_virtio-objs := virtio.o
+21-11
drivers/firmware/arm_scmi/transports/mailbox.c
···2525 * @chan_platform_receiver: Optional Platform Receiver mailbox unidirectional channel2626 * @cinfo: SCMI channel info2727 * @shmem: Transmit/Receive shared memory area2828+ * @chan_lock: Lock that prevents multiple xfers from being queued2829 */2930struct scmi_mailbox {3031 struct mbox_client cl;···3433 struct mbox_chan *chan_platform_receiver;3534 struct scmi_chan_info *cinfo;3635 struct scmi_shared_mem __iomem *shmem;3636+ struct mutex chan_lock;3737};38383939#define client_to_scmi_mailbox(c) container_of(c, struct scmi_mailbox, cl)···240238241239 cinfo->transport_info = smbox;242240 smbox->cinfo = cinfo;241241+ mutex_init(&smbox->chan_lock);243242244243 return 0;245244}···270267 struct scmi_mailbox *smbox = cinfo->transport_info;271268 int ret;272269270270+ /*271271+ * The mailbox layer has its own queue. However the mailbox queue272272+ * confuses the per message SCMI timeouts since the clock starts when273273+ * the message is submitted into the mailbox queue. So when multiple274274+ * messages are queued up the clock starts on all messages instead of275275+ * only the one inflight.276276+ */277277+ mutex_lock(&smbox->chan_lock);278278+273279 ret = mbox_send_message(smbox->chan, xfer);280280+ /* mbox_send_message returns non-negative value on success */281281+ if (ret < 0) {282282+ mutex_unlock(&smbox->chan_lock);283283+ return ret;284284+ }274285275275- /* mbox_send_message returns non-negative value on success, so reset */276276- if (ret > 0)277277- ret = 0;278278-279279- return ret;286286+ return 0;280287}281288282289static void mailbox_mark_txdone(struct scmi_chan_info *cinfo, int ret,···294281{295282 struct scmi_mailbox *smbox = cinfo->transport_info;296283297297- /*298298- * NOTE: we might prefer not to need the mailbox ticker to manage the299299- * transfer queueing since the protocol layer queues things by itself.300300- * Unfortunately, we have to kick the mailbox framework after we have301301- * received our message.302302- */303284 mbox_client_txdone(smbox->chan, ret);285285+286286+ /* Release channel */287287+ mutex_unlock(&smbox->chan_lock);304288}305289306290static void mailbox_fetch_response(struct scmi_chan_info *cinfo,
+1-1
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
···265265266266 /* Only a single BO list is allowed to simplify handling. */267267 if (p->bo_list)268268- ret = -EINVAL;268268+ goto free_partial_kdata;269269270270 ret = amdgpu_cs_p1_bo_handles(p, p->chunks[i].kdata);271271 if (ret)
+4-7
drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
···16351635{16361636 int r;1637163716381638- if (!amdgpu_sriov_vf(adev)) {16391639- r = device_create_file(adev->dev, &dev_attr_enforce_isolation);16401640- if (r)16411641- return r;16421642- }16381638+ r = device_create_file(adev->dev, &dev_attr_enforce_isolation);16391639+ if (r)16401640+ return r;1643164116441642 r = device_create_file(adev->dev, &dev_attr_run_cleaner_shader);16451643 if (r)···1648165016491651void amdgpu_gfx_sysfs_isolation_shader_fini(struct amdgpu_device *adev)16501652{16511651- if (!amdgpu_sriov_vf(adev))16521652- device_remove_file(adev->dev, &dev_attr_enforce_isolation);16531653+ device_remove_file(adev->dev, &dev_attr_enforce_isolation);16531654 device_remove_file(adev->dev, &dev_attr_run_cleaner_shader);16541655}16551656
···2929 if (ast_connector->physical_status == connector_status_connected) {3030 count = drm_connector_helper_get_modes(connector);3131 } else {3232+ drm_edid_connector_update(connector, NULL);3333+3234 /*3335 * There's no EDID data without a connected monitor. Set BMC-3436 * compatible modes in this case. The XGA default resolution
+2
drivers/gpu/drm/ast/ast_vga.c
···2929 if (ast_connector->physical_status == connector_status_connected) {3030 count = drm_connector_helper_get_modes(connector);3131 } else {3232+ drm_edid_connector_update(connector, NULL);3333+3234 /*3335 * There's no EDID data without a connected monitor. Set BMC-3436 * compatible modes in this case. The XGA default resolution
+30-10
drivers/gpu/drm/i915/display/intel_dp_mst.c
···89899090static int intel_dp_mst_bw_overhead(const struct intel_crtc_state *crtc_state,9191 const struct intel_connector *connector,9292- bool ssc, bool dsc, int bpp_x16)9292+ bool ssc, int dsc_slice_count, int bpp_x16)9393{9494 const struct drm_display_mode *adjusted_mode =9595 &crtc_state->hw.adjusted_mode;9696 unsigned long flags = DRM_DP_BW_OVERHEAD_MST;9797- int dsc_slice_count = 0;9897 int overhead;999810099 flags |= intel_dp_is_uhbr(crtc_state) ? DRM_DP_BW_OVERHEAD_UHBR : 0;101100 flags |= ssc ? DRM_DP_BW_OVERHEAD_SSC_REF_CLK : 0;102101 flags |= crtc_state->fec_enable ? DRM_DP_BW_OVERHEAD_FEC : 0;103102104104- if (dsc) {103103+ if (dsc_slice_count)105104 flags |= DRM_DP_BW_OVERHEAD_DSC;106106- dsc_slice_count = intel_dp_dsc_get_slice_count(connector,107107- adjusted_mode->clock,108108- adjusted_mode->hdisplay,109109- crtc_state->joiner_pipes);110110- }111105112106 overhead = drm_dp_bw_overhead(crtc_state->lane_count,113107 adjusted_mode->hdisplay,···147153 return DIV_ROUND_UP(effective_data_rate * 64, 54 * 1000);148154}149155156156+static int intel_dp_mst_dsc_get_slice_count(const struct intel_connector *connector,157157+ const struct intel_crtc_state *crtc_state)158158+{159159+ const struct drm_display_mode *adjusted_mode =160160+ &crtc_state->hw.adjusted_mode;161161+ int num_joined_pipes = crtc_state->joiner_pipes;162162+163163+ return intel_dp_dsc_get_slice_count(connector,164164+ adjusted_mode->clock,165165+ adjusted_mode->hdisplay,166166+ num_joined_pipes);167167+}168168+150169static int intel_dp_mst_find_vcpi_slots_for_bpp(struct intel_encoder *encoder,151170 struct intel_crtc_state *crtc_state,152171 int max_bpp,···179172 const struct drm_display_mode *adjusted_mode =180173 &crtc_state->hw.adjusted_mode;181174 int bpp, slots = -EINVAL;175175+ int dsc_slice_count = 0;182176 int max_dpt_bpp;183177 int ret = 0;184178···211203 drm_dbg_kms(&i915->drm, "Looking for slots in range min bpp %d max bpp %d\n",212204 min_bpp, max_bpp);213205206206+ if (dsc) {207207+ dsc_slice_count = intel_dp_mst_dsc_get_slice_count(connector, crtc_state);208208+ if (!dsc_slice_count) {209209+ drm_dbg_kms(&i915->drm, "Can't get valid DSC slice count\n");210210+211211+ return -ENOSPC;212212+ }213213+ }214214+214215 for (bpp = max_bpp; bpp >= min_bpp; bpp -= step) {215216 int local_bw_overhead;216217 int remote_bw_overhead;···233216 intel_dp_output_bpp(crtc_state->output_format, bpp));234217235218 local_bw_overhead = intel_dp_mst_bw_overhead(crtc_state, connector,236236- false, dsc, link_bpp_x16);219219+ false, dsc_slice_count, link_bpp_x16);237220 remote_bw_overhead = intel_dp_mst_bw_overhead(crtc_state, connector,238238- true, dsc, link_bpp_x16);221221+ true, dsc_slice_count, link_bpp_x16);239222240223 intel_dp_mst_compute_m_n(crtc_state, connector,241224 local_bw_overhead,···464447 return false;465448466449 if (mode_hblank_period_ns(adjusted_mode) > hblank_limit)450450+ return false;451451+452452+ if (!intel_dp_mst_dsc_get_slice_count(connector, crtc_state))467453 return false;468454469455 return true;
+13
drivers/gpu/drm/i915/display/intel_fb.c
···438438 INTEL_PLANE_CAP_NEED64K_PHYS);439439}440440441441+/**442442+ * intel_fb_is_tile4_modifier: Check if a modifier is a tile4 modifier type443443+ * @modifier: Modifier to check444444+ *445445+ * Returns:446446+ * Returns %true if @modifier is a tile4 modifier.447447+ */448448+bool intel_fb_is_tile4_modifier(u64 modifier)449449+{450450+ return plane_caps_contain_any(lookup_modifier(modifier)->plane_caps,451451+ INTEL_PLANE_CAP_TILING_4);452452+}453453+441454static bool check_modifier_display_ver_range(const struct intel_modifier_desc *md,442455 u8 display_ver_from, u8 display_ver_until)443456{
···635635 kunmap_atomic(d.src_addr);636636 if (d.dst_addr)637637 kunmap_atomic(d.dst_addr);638638- if (src_pages)639639- kvfree(src_pages);640640- if (dst_pages)641641- kvfree(dst_pages);638638+ kvfree(src_pages);639639+ kvfree(dst_pages);642640643641 return ret;644642}
···980980 return;981981 }982982983983+ xe_pm_runtime_get_noresume(xe);984984+983985 if (drmm_add_action_or_reset(&xe->drm, xe_device_wedged_fini, xe)) {984986 drm_err(&xe->drm, "Failed to register xe_device_wedged_fini clean-up. Although device is wedged.\n");985987 return;986988 }987987-988988- xe_pm_runtime_get_noresume(xe);989989990990 if (!atomic_xchg(&xe->wedged.flag, 1)) {991991 xe->needs_flr_on_fini = true;
+4-8
drivers/gpu/drm/xe/xe_exec.c
···4141 * user knows an exec writes to a BO and reads from the BO in the next exec, it4242 * is the user's responsibility to pass in / out fence between the two execs).4343 *4444- * Implicit dependencies for external BOs are handled by using the dma-buf4545- * implicit dependency uAPI (TODO: add link). To make this works each exec must4646- * install the job's fence into the DMA_RESV_USAGE_WRITE slot of every external4747- * BO mapped in the VM.4848- *4944 * We do not allow a user to trigger a bind at exec time rather we have a VM5045 * bind IOCTL which uses the same in / out fence interface as exec. In that5146 * sense, a VM bind is basically the same operation as an exec from the user···5459 * behind any pending kernel operations on any external BOs in VM or any BOs5560 * private to the VM. This is accomplished by the rebinds waiting on BOs5661 * DMA_RESV_USAGE_KERNEL slot (kernel ops) and kernel ops waiting on all BOs5757- * slots (inflight execs are in the DMA_RESV_USAGE_BOOKING for private BOs and5858- * in DMA_RESV_USAGE_WRITE for external BOs).6262+ * slots (inflight execs are in the DMA_RESV_USAGE_BOOKKEEP for private BOs and6363+ * for external BOs).5964 *6065 * Rebinds / dma-resv usage applies to non-compute mode VMs only as for compute6166 * mode VMs we use preempt fences and a rebind worker (TODO: add link).···299304 xe_sched_job_arm(job);300305 if (!xe_vm_in_lr_mode(vm))301306 drm_gpuvm_resv_add_fence(&vm->gpuvm, exec, &job->drm.s_fence->finished,302302- DMA_RESV_USAGE_BOOKKEEP, DMA_RESV_USAGE_WRITE);307307+ DMA_RESV_USAGE_BOOKKEEP,308308+ DMA_RESV_USAGE_BOOKKEEP);303309304310 for (i = 0; i < num_syncs; i++) {305311 xe_sync_entry_signal(&syncs[i], &job->drm.s_fence->finished);
···1030103010311031 /*10321032 * TDR has fired before free job worker. Common if exec queue10331033- * immediately closed after last fence signaled.10331033+ * immediately closed after last fence signaled. Add back to pending10341034+ * list so job can be freed and kick scheduler ensuring free job is not10351035+ * lost.10341036 */10351037 if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &job->fence->flags)) {10361036- guc_exec_queue_free_job(drm_job);10381038+ xe_sched_add_pending_job(sched, job);10391039+ xe_sched_submission_start(sched);1037104010381041 return DRM_GPU_SCHED_STAT_NOMINAL;10391042 }
+5-1
drivers/gpu/drm/xe/xe_query.c
···161161 cpu_clock);162162163163 xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL);164164- resp.width = 36;164164+165165+ if (GRAPHICS_VER(xe) >= 20)166166+ resp.width = 64;167167+ else168168+ resp.width = 36;165169166170 /* Only write to the output fields of user query */167171 if (put_user(resp.cpu_timestamp, &query_ptr->cpu_timestamp))
+1-1
drivers/gpu/drm/xe/xe_sync.c
···5858 if (!access_ok(ptr, sizeof(*ptr)))5959 return ERR_PTR(-EFAULT);60606161- ufence = kmalloc(sizeof(*ufence), GFP_KERNEL);6161+ ufence = kzalloc(sizeof(*ufence), GFP_KERNEL);6262 if (!ufence)6363 return ERR_PTR(-ENOMEM);6464
···12181218static int bma400_tap_event_en(struct bma400_data *data,12191219 enum iio_event_direction dir, int state)12201220{12211221- unsigned int mask, field_value;12211221+ unsigned int mask;12221222+ unsigned int field_value = 0;12221223 int ret;1223122412241225 /*
+11
drivers/iio/adc/Kconfig
···5252 tristate "Analog Device AD4695 ADC Driver"5353 depends on SPI5454 select REGMAP_SPI5555+ select IIO_BUFFER5656+ select IIO_TRIGGERED_BUFFER5557 help5658 Say yes here to build support for Analog Devices AD4695 and similar5759 analog to digital converters (ADC).···330328config AD7944331329 tristate "Analog Devices AD7944 and similar ADCs driver"332330 depends on SPI331331+ select IIO_BUFFER332332+ select IIO_TRIGGERED_BUFFER333333 help334334 Say yes here to build support for Analog Devices335335 AD7944, AD7985, AD7986 ADCs.···14851481config TI_ADS868814861482 tristate "Texas Instruments ADS8688"14871483 depends on SPI14841484+ select IIO_BUFFER14851485+ select IIO_TRIGGERED_BUFFER14881486 help14891487 If you say yes here you get support for Texas Instruments ADS8684 and14901488 and ADS8688 ADC chips···14971491config TI_ADS124S0814981492 tristate "Texas Instruments ADS124S08"14991493 depends on SPI14941494+ select IIO_BUFFER14951495+ select IIO_TRIGGERED_BUFFER15001496 help15011497 If you say yes here you get support for Texas Instruments ADS124S0815021498 and ADS124S06 ADC chips···15331525config TI_LMP9206415341526 tristate "Texas Instruments LMP92064 ADC driver"15351527 depends on SPI15281528+ select REGMAP_SPI15291529+ select IIO_BUFFER15301530+ select IIO_TRIGGERED_BUFFER15361531 help15371532 Say yes here to build support for the LMP92064 Precision Current and Voltage15381533 sensor.
+1
drivers/iio/amplifiers/Kconfig
···2727config ADA42502828 tristate "Analog Devices ADA4250 Instrumentation Amplifier"2929 depends on SPI3030+ select REGMAP_SPI3031 help3132 Say yes here to build support for Analog Devices ADA42503233 SPI Amplifier's support. The driver provides direct access via
+2
drivers/iio/chemical/Kconfig
···8080 tristate "ScioSense ENS160 sensor driver"8181 depends on (I2C || SPI)8282 select REGMAP8383+ select IIO_BUFFER8484+ select IIO_TRIGGERED_BUFFER8385 select ENS160_I2C if I2C8486 select ENS160_SPI if SPI8587 help
···99config AD3552R1010 tristate "Analog Devices AD3552R DAC driver"1111 depends on SPI_MASTER1212+ select IIO_BUFFER1313+ select IIO_TRIGGERED_BUFFER1214 help1315 Say yes here to build support for Analog Devices AD3552R1416 Digital to Analog Converter.···254252config AD5766255253 tristate "Analog Devices AD5766/AD5767 DAC driver"256254 depends on SPI_MASTER255255+ select IIO_BUFFER256256+ select IIO_TRIGGERED_BUFFER257257 help258258 Say yes here to build support for Analog Devices AD5766, AD5767259259 Digital to Analog Converter.···266262config AD5770R267263 tristate "Analog Devices AD5770R IDAC driver"268264 depends on SPI_MASTER265265+ select REGMAP_SPI269266 help270267 Say yes here to build support for Analog Devices AD5770R Digital to271268 Analog Converter.···358353config LTC1660359354 tristate "Linear Technology LTC1660/LTC1665 DAC SPI driver"360355 depends on SPI356356+ select REGMAP_SPI361357 help362358 Say yes here to build support for Linear Technology363359 LTC1660 and LTC1665 Digital to Analog Converters.···489483490484config STM32_DAC_CORE491485 tristate486486+ select REGMAP_MMIO492487493488config TI_DAC082S085494489 tristate "Texas Instruments 8/10/12-bit 2/4-channel DAC driver"
+9-8
drivers/iio/dac/ltc2664.c
···516516 const struct ltc2664_chip_info *chip_info = st->chip_info;517517 struct device *dev = &st->spi->dev;518518 u32 reg, tmp[2], mspan;519519- int ret, span = 0;519519+ int ret;520520521521 mspan = LTC2664_MSPAN_SOFTSPAN;522522 ret = device_property_read_u32(dev, "adi,manual-span-operation-config",···579579 ret = fwnode_property_read_u32_array(child, "output-range-microvolt",580580 tmp, ARRAY_SIZE(tmp));581581 if (!ret && mspan == LTC2664_MSPAN_SOFTSPAN) {582582- chan->span = ltc2664_set_span(st, tmp[0] / 1000,583583- tmp[1] / 1000, reg);584584- if (span < 0)585585- return dev_err_probe(dev, span,582582+ ret = ltc2664_set_span(st, tmp[0] / 1000, tmp[1] / 1000, reg);583583+ if (ret < 0)584584+ return dev_err_probe(dev, ret,586585 "Failed to set span\n");586586+ chan->span = ret;587587 }588588589589 ret = fwnode_property_read_u32_array(child, "output-range-microamp",590590 tmp, ARRAY_SIZE(tmp));591591 if (!ret) {592592- chan->span = ltc2664_set_span(st, 0, tmp[1] / 1000, reg);593593- if (span < 0)594594- return dev_err_probe(dev, span,592592+ ret = ltc2664_set_span(st, 0, tmp[1] / 1000, reg);593593+ if (ret < 0)594594+ return dev_err_probe(dev, ret,595595 "Failed to set span\n");596596+ chan->span = ret;596597 }597598 }598599
+17-15
drivers/iio/frequency/Kconfig
···5353config ADF43775454 tristate "Analog Devices ADF4377 Microwave Wideband Synthesizer"5555 depends on SPI && COMMON_CLK5656+ select REGMAP_SPI5657 help5758 Say yes here to build support for Analog Devices ADF4377 Microwave5859 Wideband Synthesizer.···9291 module will be called admv1014.93929493config ADMV44209595- tristate "Analog Devices ADMV4420 K Band Downconverter"9696- depends on SPI9797- help9898- Say yes here to build support for Analog Devices K Band9999- Downconverter with integrated Fractional-N PLL and VCO.9494+ tristate "Analog Devices ADMV4420 K Band Downconverter"9595+ depends on SPI9696+ select REGMAP_SPI9797+ help9898+ Say yes here to build support for Analog Devices K Band9999+ Downconverter with integrated Fractional-N PLL and VCO.100100101101- To compile this driver as a module, choose M here: the102102- module will be called admv4420.101101+ To compile this driver as a module, choose M here: the102102+ module will be called admv4420.103103104104config ADRF6780105105- tristate "Analog Devices ADRF6780 Microwave Upconverter"106106- depends on SPI107107- depends on COMMON_CLK108108- help109109- Say yes here to build support for Analog Devices ADRF6780110110- 5.9 GHz to 23.6 GHz, Wideband, Microwave Upconverter.105105+ tristate "Analog Devices ADRF6780 Microwave Upconverter"106106+ depends on SPI107107+ depends on COMMON_CLK108108+ help109109+ Say yes here to build support for Analog Devices ADRF6780110110+ 5.9 GHz to 23.6 GHz, Wideband, Microwave Upconverter.111111112112- To compile this driver as a module, choose M here: the113113- module will be called adrf6780.112112+ To compile this driver as a module, choose M here: the113113+ module will be called adrf6780.114114115115endmenu116116endmenu
+11-12
drivers/iio/imu/bmi323/bmi323_core.c
···21722172}21732173EXPORT_SYMBOL_NS_GPL(bmi323_core_probe, IIO_BMI323);2174217421752175-#if defined(CONFIG_PM)21762175static int bmi323_core_runtime_suspend(struct device *dev)21772176{21782177 struct iio_dev *indio_dev = dev_get_drvdata(dev);···21982199 }2199220022002201 for (unsigned int i = 0; i < ARRAY_SIZE(bmi323_ext_reg_savestate); i++) {22012201- ret = bmi323_read_ext_reg(data, bmi323_reg_savestate[i],22022202- &savestate->reg_settings[i]);22022202+ ret = bmi323_read_ext_reg(data, bmi323_ext_reg_savestate[i],22032203+ &savestate->ext_reg_settings[i]);22032204 if (ret) {22042205 dev_err(data->dev,22052206 "Error reading bmi323 external reg 0x%x: %d\n",22062206- bmi323_reg_savestate[i], ret);22072207+ bmi323_ext_reg_savestate[i], ret);22072208 return ret;22082209 }22092210 }···22312232 * after being reset in the lower power state by runtime-pm.22322233 */22332234 ret = bmi323_init(data);22342234- if (!ret)22352235+ if (ret) {22362236+ dev_err(data->dev, "Device power-on and init failed: %d", ret);22352237 return ret;22382238+ }2236223922372240 /* Register must be cleared before changing an active config */22382241 ret = regmap_write(data->regmap, BMI323_FEAT_IO0_REG, 0);···22442243 }2245224422462245 for (unsigned int i = 0; i < ARRAY_SIZE(bmi323_ext_reg_savestate); i++) {22472247- ret = bmi323_write_ext_reg(data, bmi323_ext_reg_savestate[i],22482248- savestate->reg_settings[i]);22462246+ ret = bmi323_write_ext_reg(data, bmi323_ext_reg_savestate[i],22472247+ savestate->ext_reg_settings[i]);22492248 if (ret) {22502249 dev_err(data->dev,22512250 "Error writing bmi323 external reg 0x%x: %d\n",22522252- bmi323_reg_savestate[i], ret);22512251+ bmi323_ext_reg_savestate[i], ret);22532252 return ret;22542253 }22552254 }···22942293 return iio_device_resume_triggering(indio_dev);22952294}2296229522972297-#endif22982298-22992296const struct dev_pm_ops bmi323_core_pm_ops = {23002300- SET_RUNTIME_PM_OPS(bmi323_core_runtime_suspend,23012301- bmi323_core_runtime_resume, NULL)22972297+ RUNTIME_PM_OPS(bmi323_core_runtime_suspend,22982298+ bmi323_core_runtime_resume, NULL)23022299};23032300EXPORT_SYMBOL_NS_GPL(bmi323_core_pm_ops, IIO_BMI323);23042301
+2
drivers/iio/light/Kconfig
···335335 depends on I2C336336 select REGMAP_I2C337337 select IIO_GTS_HELPER338338+ select IIO_BUFFER339339+ select IIO_TRIGGERED_BUFFER338340 help339341 Enable support for the ROHM BU27008 color sensor.340342 The ROHM BU27008 is a sensor with 5 photodiodes (red, green,
···1111 depends on I2C1212 depends on OF1313 select REGMAP_I2C1414+ select IIO_BUFFER1515+ select IIO_TRIGGERED_BUFFER1416 help1517 Say yes here to build support for Voltafield AF8133J I2C-based1618 3-axis magnetometer chip.
+4
drivers/iio/pressure/Kconfig
···1919config ROHM_BM13902020 tristate "ROHM BM1390GLV-Z pressure sensor driver"2121 depends on I2C2222+ select REGMAP_I2C2323+ select IIO_BUFFER2424+ select IIO_TRIGGERED_BUFFER2225 help2326 Support for the ROHM BM1390 pressure sensor. The BM1390GLV-Z2427 can measure pressures ranging from 300 hPa to 1300 hPa with···256253config SDP500257254 tristate "Sensirion SDP500 differential pressure sensor I2C driver"258255 depends on I2C256256+ select CRC8259257 help260258 Say Y here to build support for Sensirion SDP500 differential pressure261259 sensor I2C driver.
+2
drivers/iio/proximity/Kconfig
···8686config MB12328787 tristate "MaxSonar I2CXL family ultrasonic sensors"8888 depends on I2C8989+ select IIO_BUFFER9090+ select IIO_TRIGGERED_BUFFER8991 help9092 Say Y to build a driver for the ultrasonic sensors I2CXL of9193 MaxBotix which have an i2c interface. It can be used to measure
+3
drivers/iio/resolver/Kconfig
···3131 depends on SPI3232 depends on COMMON_CLK3333 depends on GPIOLIB || COMPILE_TEST3434+ select REGMAP3535+ select IIO_BUFFER3636+ select IIO_TRIGGERED_BUFFER3437 help3538 Say yes here to build support for Analog Devices spi resolver3639 to digital converters, ad2s1210, provides direct access via sysfs.
···130130131131 /*132132 * Disable MMU-500's not-particularly-beneficial next-page133133- * prefetcher for the sake of errata #841119 and #826419.133133+ * prefetcher for the sake of at least 5 known errata.134134 */135135 for (i = 0; i < smmu->num_context_banks; ++i) {136136 reg = arm_smmu_cb_read(smmu, i, ARM_SMMU_CB_ACTLR);···138138 arm_smmu_cb_write(smmu, i, ARM_SMMU_CB_ACTLR, reg);139139 reg = arm_smmu_cb_read(smmu, i, ARM_SMMU_CB_ACTLR);140140 if (reg & ARM_MMU500_ACTLR_CPRE)141141- dev_warn_once(smmu->dev, "Failed to disable prefetcher [errata #841119 and #826419], check ACR.CACHE_LOCK\n");141141+ dev_warn_once(smmu->dev, "Failed to disable prefetcher for errata workarounds, check SACR.CACHE_LOCK\n");142142 }143143144144 return 0;
+3-1
drivers/iommu/intel/iommu.c
···33403340 */33413341static void domain_context_clear(struct device_domain_info *info)33423342{33433343- if (!dev_is_pci(info->dev))33433343+ if (!dev_is_pci(info->dev)) {33443344 domain_context_clear_one(info, info->bus, info->devfn);33453345+ return;33463346+ }3345334733463348 pci_for_each_dma_alias(to_pci_dev(info->dev),33473349 &domain_context_clear_one_cb, info);
-7
drivers/irqchip/Kconfig
···4545 select IRQ_MSI_LIB4646 default ARM_GIC_V347474848-config ARM_GIC_V3_ITS_PCI4949- bool5050- depends on ARM_GIC_V3_ITS5151- depends on PCI5252- depends on PCI_MSI5353- default ARM_GIC_V3_ITS5454-5548config ARM_GIC_V3_ITS_FSL_MC5649 bool5750 depends on ARM_GIC_V3_ITS
+12-6
drivers/irqchip/irq-gic-v3-its.c
···797797 its_encode_valid(cmd, desc->its_vmapp_cmd.valid);798798799799 if (!desc->its_vmapp_cmd.valid) {800800+ alloc = !atomic_dec_return(&desc->its_vmapp_cmd.vpe->vmapp_count);800801 if (is_v4_1(its)) {801801- alloc = !atomic_dec_return(&desc->its_vmapp_cmd.vpe->vmapp_count);802802 its_encode_alloc(cmd, alloc);803803 /*804804 * Unmapping a VPE is self-synchronizing on GICv4.1,···817817 its_encode_vpt_addr(cmd, vpt_addr);818818 its_encode_vpt_size(cmd, LPI_NRBITS - 1);819819820820+ alloc = !atomic_fetch_inc(&desc->its_vmapp_cmd.vpe->vmapp_count);821821+820822 if (!is_v4_1(its))821823 goto out;822824823825 vconf_addr = virt_to_phys(page_address(desc->its_vmapp_cmd.vpe->its_vm->vprop_page));824824-825825- alloc = !atomic_fetch_inc(&desc->its_vmapp_cmd.vpe->vmapp_count);826826827827 its_encode_alloc(cmd, alloc);828828···38073807 unsigned long flags;3808380838093809 /*38103810+ * Check if we're racing against a VPE being destroyed, for38113811+ * which we don't want to allow a VMOVP.38123812+ */38133813+ if (!atomic_read(&vpe->vmapp_count))38143814+ return -EINVAL;38153815+38163816+ /*38103817 * Changing affinity is mega expensive, so let's be as lazy as38113818 * we can and only do it if we really have to. Also, if mapped38123819 * into the proxy device, we need to move the doorbell···44704463 raw_spin_lock_init(&vpe->vpe_lock);44714464 vpe->vpe_id = vpe_id;44724465 vpe->vpt_page = vpt_page;44734473- if (gic_rdists->has_rvpeid)44744474- atomic_set(&vpe->vmapp_count, 0);44754475- else44664466+ atomic_set(&vpe->vmapp_count, 0);44674467+ if (!gic_rdists->has_rvpeid)44764468 vpe->vpe_proxy_event = -1;4477446944784470 return 0;
+8-2
drivers/irqchip/irq-mscc-ocelot.c
···3737 .reg_off_ena_clr = 0x1c,3838 .reg_off_ena_set = 0x20,3939 .reg_off_ident = 0x38,4040- .reg_off_trigger = 0x5c,4040+ .reg_off_trigger = 0x4,4141 .n_irq = 24,4242};4343···7070 .reg_off_ena_clr = 0x1c,7171 .reg_off_ena_set = 0x20,7272 .reg_off_ident = 0x38,7373- .reg_off_trigger = 0x5c,7373+ .reg_off_trigger = 0x4,7474 .n_irq = 29,7575};7676···8484 u32 val;85858686 irq_gc_lock(gc);8787+ /*8888+ * Clear sticky bits for edge mode interrupts.8989+ * Serval has only one trigger register replication, but the adjacent9090+ * register is always read as zero, so there's no need to handle this9191+ * case separately.9292+ */8793 val = irq_reg_readl(gc, ICPU_CFG_INTR_INTR_TRIGGER(p, 0)) |8894 irq_reg_readl(gc, ICPU_CFG_INTR_INTR_TRIGGER(p, 1));8995 if (!(val & mask))
+14-2
drivers/irqchip/irq-renesas-rzg2l.c
···88 */991010#include <linux/bitfield.h>1111+#include <linux/cleanup.h>1112#include <linux/clk.h>1213#include <linux/err.h>1314#include <linux/io.h>···531530static int rzg2l_irqc_common_init(struct device_node *node, struct device_node *parent,532531 const struct irq_chip *irq_chip)533532{533533+ struct platform_device *pdev = of_find_device_by_node(node);534534+ struct device *dev __free(put_device) = pdev ? &pdev->dev : NULL;534535 struct irq_domain *irq_domain, *parent_domain;535535- struct platform_device *pdev;536536 struct reset_control *resetn;537537 int ret;538538539539- pdev = of_find_device_by_node(node);540539 if (!pdev)541540 return -ENODEV;542541···591590 }592591593592 register_syscore_ops(&rzg2l_irqc_syscore_ops);593593+594594+ /*595595+ * Prevent the cleanup function from invoking put_device by assigning596596+ * NULL to dev.597597+ *598598+ * make coccicheck will complain about missing put_device calls, but599599+ * those are false positives, as dev will be automatically "put" via600600+ * __free_put_device on the failing path.601601+ * On the successful path we don't actually want to "put" dev.602602+ */603603+ dev = NULL;594604595605 return 0;596606
+1-1
drivers/irqchip/irq-riscv-imsic-platform.c
···341341 imsic->fwnode, global->hart_index_bits, global->guest_index_bits);342342 pr_info("%pfwP: group-index-bits: %d, group-index-shift: %d\n",343343 imsic->fwnode, global->group_index_bits, global->group_index_shift);344344- pr_info("%pfwP: per-CPU IDs %d at base PPN %pa\n",344344+ pr_info("%pfwP: per-CPU IDs %d at base address %pa\n",345345 imsic->fwnode, global->nr_ids, &global->base_addr);346346 pr_info("%pfwP: total %d interrupts available\n",347347 imsic->fwnode, num_possible_cpus() * (global->nr_ids - 1));
+18-1
drivers/irqchip/irq-riscv-intc.c
···265265};266266267267static u32 nr_rintc;268268-static struct rintc_data *rintc_acpi_data[NR_CPUS];268268+static struct rintc_data **rintc_acpi_data;269269270270#define for_each_matching_plic(_plic_id) \271271 unsigned int _plic; \···329329 return 0;330330}331331332332+static int __init riscv_intc_acpi_match(union acpi_subtable_headers *header,333333+ const unsigned long end)334334+{335335+ return 0;336336+}337337+332338static int __init riscv_intc_acpi_init(union acpi_subtable_headers *header,333339 const unsigned long end)334340{335341 struct acpi_madt_rintc *rintc;336342 struct fwnode_handle *fn;343343+ int count;337344 int rc;345345+346346+ if (!rintc_acpi_data) {347347+ count = acpi_table_parse_madt(ACPI_MADT_TYPE_RINTC, riscv_intc_acpi_match, 0);348348+ if (count <= 0)349349+ return -EINVAL;350350+351351+ rintc_acpi_data = kcalloc(count, sizeof(*rintc_acpi_data), GFP_KERNEL);352352+ if (!rintc_acpi_data)353353+ return -ENOMEM;354354+ }338355339356 rintc = (struct acpi_madt_rintc *)header;340357 rintc_acpi_data[nr_rintc] = kzalloc(sizeof(*rintc_acpi_data[0]), GFP_KERNEL);
···27332733 return MICREL_KSZ8_P1_ERRATA;27342734 break;27352735 case KSZ8567_CHIP_ID:27362736+ /* KSZ8567R Errata DS80000752C Module 4 */27372737+ case KSZ8765_CHIP_ID:27382738+ case KSZ8794_CHIP_ID:27392739+ case KSZ8795_CHIP_ID:27402740+ /* KSZ879x/KSZ877x/KSZ876x Errata DS80000687C Module 2 */27362741 case KSZ9477_CHIP_ID:27422742+ /* KSZ9477S Errata DS80000754A Module 4 */27372743 case KSZ9567_CHIP_ID:27442744+ /* KSZ9567S Errata DS80000756A Module 4 */27382745 case KSZ9896_CHIP_ID:27462746+ /* KSZ9896C Errata DS80000757A Module 3 */27392747 case KSZ9897_CHIP_ID:27402740- /* KSZ9477 Errata DS80000754C27412741- *27422742- * Module 4: Energy Efficient Ethernet (EEE) feature select must27432743- * be manually disabled27482748+ /* KSZ9897R Errata DS80000758C Module 4 */27492749+ /* Energy Efficient Ethernet (EEE) feature select must be manually disabled27442750 * The EEE feature is enabled by default, but it is not fully27452751 * operational. It must be manually disabled through register27462752 * controls. If not disabled, the PHY ports can auto-negotiate27472753 * to enable EEE, and this feature can cause link drops when27482754 * linked to another device supporting EEE.27492755 *27502750- * The same item appears in the errata for the KSZ9567, KSZ9896,27512751- * and KSZ9897.27522752- *27532753- * A similar item appears in the errata for the KSZ8567, but27542754- * provides an alternative workaround. For now, use the simple27552755- * workaround of disabling the EEE feature for this device too.27562756+ * The same item appears in the errata for all switches above.27562757 */27572758 return MICREL_NO_EEE;27582759 }
···13811381 be_get_wrb_params_from_skb(adapter, skb, &wrb_params);1382138213831383 wrb_cnt = be_xmit_enqueue(adapter, txo, skb, &wrb_params);13841384- if (unlikely(!wrb_cnt)) {13851385- dev_kfree_skb_any(skb);13861386- goto drop;13871387- }13841384+ if (unlikely(!wrb_cnt))13851385+ goto drop_skb;1388138613891387 /* if os2bmc is enabled and if the pkt is destined to bmc,13901388 * enqueue the pkt a 2nd time with mgmt bit set.···13911393 BE_WRB_F_SET(wrb_params.features, OS2BMC, 1);13921394 wrb_cnt = be_xmit_enqueue(adapter, txo, skb, &wrb_params);13931395 if (unlikely(!wrb_cnt))13941394- goto drop;13961396+ goto drop_skb;13951397 else13961398 skb_get(skb);13971399 }···14051407 be_xmit_flush(adapter, txo);1406140814071409 return NETDEV_TX_OK;14101410+drop_skb:14111411+ dev_kfree_skb_any(skb);14081412drop:14091413 tx_stats(txo)->tx_drv_drops++;14101414 /* Flush the already enqueued tx requests */
+51-17
drivers/net/ethernet/freescale/fman/mac.c
···155155 err = -EINVAL;156156 goto _return_of_node_put;157157 }158158+ mac_dev->fman_dev = &of_dev->dev;158159159160 /* Get the FMan cell-index */160161 err = of_property_read_u32(dev_node, "cell-index", &val);161162 if (err) {162163 dev_err(dev, "failed to read cell-index for %pOF\n", dev_node);163164 err = -EINVAL;164164- goto _return_of_node_put;165165+ goto _return_dev_put;165166 }166167 /* cell-index 0 => FMan id 1 */167168 fman_id = (u8)(val + 1);168169169169- priv->fman = fman_bind(&of_dev->dev);170170+ priv->fman = fman_bind(mac_dev->fman_dev);170171 if (!priv->fman) {171172 dev_err(dev, "fman_bind(%pOF) failed\n", dev_node);172173 err = -ENODEV;173173- goto _return_of_node_put;174174+ goto _return_dev_put;174175 }175176177177+ /* Two references have been taken in of_find_device_by_node()178178+ * and fman_bind(). Release one of them here. The second one179179+ * will be released in mac_remove().180180+ */181181+ put_device(mac_dev->fman_dev);176182 of_node_put(dev_node);183183+ dev_node = NULL;177184178185 /* Get the address of the memory mapped registers */179186 mac_dev->res = platform_get_mem_or_io(_of_dev, 0);180187 if (!mac_dev->res) {181188 dev_err(dev, "could not get registers\n");182182- return -EINVAL;189189+ err = -EINVAL;190190+ goto _return_dev_put;183191 }184192185193 err = devm_request_resource(dev, fman_get_mem_region(priv->fman),186194 mac_dev->res);187195 if (err) {188196 dev_err_probe(dev, err, "could not request resource\n");189189- return err;197197+ goto _return_dev_put;190198 }191199192200 mac_dev->vaddr = devm_ioremap(dev, mac_dev->res->start,193201 resource_size(mac_dev->res));194202 if (!mac_dev->vaddr) {195203 dev_err(dev, "devm_ioremap() failed\n");196196- return -EIO;204204+ err = -EIO;205205+ goto _return_dev_put;197206 }198207199199- if (!of_device_is_available(mac_node))200200- return -ENODEV;208208+ if (!of_device_is_available(mac_node)) {209209+ err = -ENODEV;210210+ goto _return_dev_put;211211+ }201212202213 /* Get the 
cell-index */203214 err = of_property_read_u32(mac_node, "cell-index", &val);204215 if (err) {205216 dev_err(dev, "failed to read cell-index for %pOF\n", mac_node);206206- return -EINVAL;217217+ err = -EINVAL;218218+ goto _return_dev_put;207219 }208220 priv->cell_index = (u8)val;209221···229217 if (unlikely(nph < 0)) {230218 dev_err(dev, "of_count_phandle_with_args(%pOF, fsl,fman-ports) failed\n",231219 mac_node);232232- return nph;220220+ err = nph;221221+ goto _return_dev_put;233222 }234223235224 if (nph != ARRAY_SIZE(mac_dev->port)) {236225 dev_err(dev, "Not supported number of fman-ports handles of mac node %pOF from device tree\n",237226 mac_node);238238- return -EINVAL;227227+ err = -EINVAL;228228+ goto _return_dev_put;239229 }240230241241- for (i = 0; i < ARRAY_SIZE(mac_dev->port); i++) {231231+ /* PORT_NUM determines the size of the port array */232232+ for (i = 0; i < PORT_NUM; i++) {242233 /* Find the port node */243234 dev_node = of_parse_phandle(mac_node, "fsl,fman-ports", i);244235 if (!dev_node) {245236 dev_err(dev, "of_parse_phandle(%pOF, fsl,fman-ports) failed\n",246237 mac_node);247247- return -EINVAL;238238+ err = -EINVAL;239239+ goto _return_dev_arr_put;248240 }249241250242 of_dev = of_find_device_by_node(dev_node);···256240 dev_err(dev, "of_find_device_by_node(%pOF) failed\n",257241 dev_node);258242 err = -EINVAL;259259- goto _return_of_node_put;243243+ goto _return_dev_arr_put;260244 }245245+ mac_dev->fman_port_devs[i] = &of_dev->dev;261246262262- mac_dev->port[i] = fman_port_bind(&of_dev->dev);247247+ mac_dev->port[i] = fman_port_bind(mac_dev->fman_port_devs[i]);263248 if (!mac_dev->port[i]) {264249 dev_err(dev, "dev_get_drvdata(%pOF) failed\n",265250 dev_node);266251 err = -EINVAL;267267- goto _return_of_node_put;252252+ goto _return_dev_arr_put;268253 }254254+ /* Two references have been taken in of_find_device_by_node()255255+ * and fman_port_bind(). Release one of them here. 
The second256256+ * one will be released in mac_remove().257257+ */258258+ put_device(mac_dev->fman_port_devs[i]);269259 of_node_put(dev_node);260260+ dev_node = NULL;270261 }271262272263 /* Get the PHY connection type */···293270294271 err = init(mac_dev, mac_node, ¶ms);295272 if (err < 0)296296- return err;273273+ goto _return_dev_arr_put;297274298275 if (!is_zero_ether_addr(mac_dev->addr))299276 dev_info(dev, "FMan MAC address: %pM\n", mac_dev->addr);···308285309286 return err;310287288288+_return_dev_arr_put:289289+ /* mac_dev is kzalloc'ed */290290+ for (i = 0; i < PORT_NUM; i++)291291+ put_device(mac_dev->fman_port_devs[i]);292292+_return_dev_put:293293+ put_device(mac_dev->fman_dev);311294_return_of_node_put:312295 of_node_put(dev_node);313296 return err;···322293static void mac_remove(struct platform_device *pdev)323294{324295 struct mac_device *mac_dev = platform_get_drvdata(pdev);296296+ int i;297297+298298+ for (i = 0; i < PORT_NUM; i++)299299+ put_device(mac_dev->fman_port_devs[i]);300300+ put_device(mac_dev->fman_dev);325301326302 platform_device_unregister(mac_dev->priv->eth_dev);327303}
···10121012 if(skb->len > XMIT_BUFF_SIZE)10131013 {10141014 printk("%s: Sorry, max. framelength is %d bytes. The length of your frame is %d bytes.\n",dev->name,XMIT_BUFF_SIZE,skb->len);10151015+ dev_kfree_skb(skb);10151016 return NETDEV_TX_OK;10161017 }10171018
+59-23
drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
···337337}338338339339/**340340+ * octep_oq_next_pkt() - Move to the next packet in Rx queue.341341+ *342342+ * @oq: Octeon Rx queue data structure.343343+ * @buff_info: Current packet buffer info.344344+ * @read_idx: Current packet index in the ring.345345+ * @desc_used: Current packet descriptor number.346346+ *347347+ * Free the resources associated with a packet.348348+ * Increment packet index in the ring and packet descriptor number.349349+ */350350+static void octep_oq_next_pkt(struct octep_oq *oq,351351+ struct octep_rx_buffer *buff_info,352352+ u32 *read_idx, u32 *desc_used)353353+{354354+ dma_unmap_page(oq->dev, oq->desc_ring[*read_idx].buffer_ptr,355355+ PAGE_SIZE, DMA_FROM_DEVICE);356356+ buff_info->page = NULL;357357+ (*read_idx)++;358358+ (*desc_used)++;359359+ if (*read_idx == oq->max_count)360360+ *read_idx = 0;361361+}362362+363363+/**364364+ * octep_oq_drop_rx() - Free the resources associated with a packet.365365+ *366366+ * @oq: Octeon Rx queue data structure.367367+ * @buff_info: Current packet buffer info.368368+ * @read_idx: Current packet index in the ring.369369+ * @desc_used: Current packet descriptor number.370370+ *371371+ */372372+static void octep_oq_drop_rx(struct octep_oq *oq,373373+ struct octep_rx_buffer *buff_info,374374+ u32 *read_idx, u32 *desc_used)375375+{376376+ int data_len = buff_info->len - oq->max_single_buffer_size;377377+378378+ while (data_len > 0) {379379+ octep_oq_next_pkt(oq, buff_info, read_idx, desc_used);380380+ data_len -= oq->buffer_size;381381+ };382382+}383383+384384+/**340385 * __octep_oq_process_rx() - Process hardware Rx queue and push to stack.341386 *342387 * @oct: Octeon device private data structure.···412367 desc_used = 0;413368 for (pkt = 0; pkt < pkts_to_process; pkt++) {414369 buff_info = (struct octep_rx_buffer *)&oq->buff_info[read_idx];415415- dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr,416416- PAGE_SIZE, DMA_FROM_DEVICE);417370 resp_hw = page_address(buff_info->page);418418- 
buff_info->page = NULL;419371420372 /* Swap the length field that is in Big-Endian to CPU */421373 buff_info->len = be64_to_cpu(resp_hw->length);···436394 data_offset = OCTEP_OQ_RESP_HW_SIZE;437395 rx_ol_flags = 0;438396 }397397+398398+ octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used);399399+400400+ skb = build_skb((void *)resp_hw, PAGE_SIZE);401401+ if (!skb) {402402+ octep_oq_drop_rx(oq, buff_info,403403+ &read_idx, &desc_used);404404+ oq->stats.alloc_failures++;405405+ continue;406406+ }407407+ skb_reserve(skb, data_offset);408408+439409 rx_bytes += buff_info->len;440410441411 if (buff_info->len <= oq->max_single_buffer_size) {442442- skb = build_skb((void *)resp_hw, PAGE_SIZE);443443- skb_reserve(skb, data_offset);444412 skb_put(skb, buff_info->len);445445- read_idx++;446446- desc_used++;447447- if (read_idx == oq->max_count)448448- read_idx = 0;449413 } else {450414 struct skb_shared_info *shinfo;451415 u16 data_len;452416453453- skb = build_skb((void *)resp_hw, PAGE_SIZE);454454- skb_reserve(skb, data_offset);455417 /* Head fragment includes response header(s);456418 * subsequent fragments contains only data.457419 */458420 skb_put(skb, oq->max_single_buffer_size);459459- read_idx++;460460- desc_used++;461461- if (read_idx == oq->max_count)462462- read_idx = 0;463463-464421 shinfo = skb_shinfo(skb);465422 data_len = buff_info->len - oq->max_single_buffer_size;466423 while (data_len) {467467- dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr,468468- PAGE_SIZE, DMA_FROM_DEVICE);469424 buff_info = (struct octep_rx_buffer *)470425 &oq->buff_info[read_idx];471426 if (data_len < oq->buffer_size) {···477438 buff_info->page, 0,478439 buff_info->len,479440 buff_info->len);480480- buff_info->page = NULL;481481- read_idx++;482482- desc_used++;483483- if (read_idx == oq->max_count)484484- read_idx = 0;441441+442442+ octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used);485443 }486444 }487445
···47494749 if ((status & 0xffff) == 0xffff || !(status & tp->irq_mask))47504750 return IRQ_NONE;4751475147524752- if (unlikely(status & SYSErr)) {47524752+ /* At least RTL8168fp may unexpectedly set the SYSErr bit */47534753+ if (unlikely(status & SYSErr &&47544754+ tp->mac_version <= RTL_GIGA_MAC_VER_06)) {47534755 rtl8169_pcierr_interrupt(tp->dev);47544756 goto out;47554757 }
+30
drivers/net/hyperv/netvsc_drv.c
···27982798 },27992799};2800280028012801+/* Set VF's namespace same as the synthetic NIC */28022802+static void netvsc_event_set_vf_ns(struct net_device *ndev)28032803+{28042804+ struct net_device_context *ndev_ctx = netdev_priv(ndev);28052805+ struct net_device *vf_netdev;28062806+ int ret;28072807+28082808+ vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev);28092809+ if (!vf_netdev)28102810+ return;28112811+28122812+ if (!net_eq(dev_net(ndev), dev_net(vf_netdev))) {28132813+ ret = dev_change_net_namespace(vf_netdev, dev_net(ndev),28142814+ "eth%d");28152815+ if (ret)28162816+ netdev_err(vf_netdev,28172817+ "Cannot move to same namespace as %s: %d\n",28182818+ ndev->name, ret);28192819+ else28202820+ netdev_info(vf_netdev,28212821+ "Moved VF to namespace with: %s\n",28222822+ ndev->name);28232823+ }28242824+}28252825+28012826/*28022827 * On Hyper-V, every VF interface is matched with a corresponding28032828 * synthetic interface. The synthetic interface is presented first···28342809{28352810 struct net_device *event_dev = netdev_notifier_info_to_dev(ptr);28362811 int ret = 0;28122812+28132813+ if (event_dev->netdev_ops == &device_ops && event == NETDEV_REGISTER) {28142814+ netvsc_event_set_vf_ns(event_dev);28152815+ return NOTIFY_DONE;28162816+ }2837281728382818 ret = check_dev_is_matching_vf(event_dev);28392819 if (ret != 0)
···113113{114114 int i;115115116116- for (i = 0; i <= pcdev->nr_lines; i++) {116116+ for (i = 0; i < pcdev->nr_lines; i++) {117117 of_node_put(pcdev->pi[i].pairset[0].np);118118 of_node_put(pcdev->pi[i].pairset[1].np);119119 of_node_put(pcdev->pi[i].np);···647647{648648 int i;649649650650- for (i = 0; i <= pcdev->nr_lines; i++) {650650+ for (i = 0; i < pcdev->nr_lines; i++) {651651 if (pcdev->pi[i].np == np)652652 return i;653653 }
···12921292 queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);12931293}1294129412951295-static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,12961296- blk_status_t status)12951295+static void nvme_keep_alive_finish(struct request *rq,12961296+ blk_status_t status, struct nvme_ctrl *ctrl)12971297{12981298- struct nvme_ctrl *ctrl = rq->end_io_data;12991299- unsigned long flags;13001300- bool startka = false;13011298 unsigned long rtt = jiffies - (rq->deadline - rq->timeout);13021299 unsigned long delay = nvme_keep_alive_work_period(ctrl);13001300+ enum nvme_ctrl_state state = nvme_ctrl_state(ctrl);1303130113041302 /*13051303 * Subtract off the keepalive RTT so nvme_keep_alive_work runs···13111313 delay = 0;13121314 }1313131513141314- blk_mq_free_request(rq);13151315-13161316 if (status) {13171317 dev_err(ctrl->device,13181318 "failed nvme_keep_alive_end_io error=%d\n",13191319 status);13201320- return RQ_END_IO_NONE;13201320+ return;13211321 }1322132213231323 ctrl->ka_last_check_time = jiffies;13241324 ctrl->comp_seen = false;13251325- spin_lock_irqsave(&ctrl->lock, flags);13261326- if (ctrl->state == NVME_CTRL_LIVE ||13271327- ctrl->state == NVME_CTRL_CONNECTING)13281328- startka = true;13291329- spin_unlock_irqrestore(&ctrl->lock, flags);13301330- if (startka)13251325+ if (state == NVME_CTRL_LIVE || state == NVME_CTRL_CONNECTING)13311326 queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);13321332- return RQ_END_IO_NONE;13331327}1334132813351329static void nvme_keep_alive_work(struct work_struct *work)···13301340 struct nvme_ctrl, ka_work);13311341 bool comp_seen = ctrl->comp_seen;13321342 struct request *rq;13431343+ blk_status_t status;1333134413341345 ctrl->ka_last_check_time = jiffies;13351346···13531362 nvme_init_request(rq, &ctrl->ka_cmd);1354136313551364 rq->timeout = ctrl->kato * HZ;13561356- rq->end_io = nvme_keep_alive_end_io;13571357- rq->end_io_data = ctrl;13581358- blk_execute_rq_nowait(rq, false);13651365+ status = blk_execute_rq(rq, 
false);13661366+ nvme_keep_alive_finish(rq, status, ctrl);13671367+ blk_mq_free_request(rq);13591368}1360136913611370static void nvme_start_keep_alive(struct nvme_ctrl *ctrl)···24492458 else24502459 ctrl->ctrl_config = NVME_CC_CSS_NVM;2451246024522452- if (ctrl->cap & NVME_CAP_CRMS_CRWMS && ctrl->cap & NVME_CAP_CRMS_CRIMS)24532453- ctrl->ctrl_config |= NVME_CC_CRIME;24612461+ /*24622462+ * Setting CRIME results in CSTS.RDY before the media is ready. This24632463+ * makes it possible for media related commands to return the error24642464+ * NVME_SC_ADMIN_COMMAND_MEDIA_NOT_READY. Until the driver is24652465+ * restructured to handle retries, disable CC.CRIME.24662466+ */24672467+ ctrl->ctrl_config &= ~NVME_CC_CRIME;2454246824552469 ctrl->ctrl_config |= (NVME_CTRL_PAGE_SHIFT - 12) << NVME_CC_MPS_SHIFT;24562470 ctrl->ctrl_config |= NVME_CC_AMS_RR | NVME_CC_SHN_NONE;···24852489 * devices are known to get this wrong. Use the larger of the24862490 * two values.24872491 */24882488- if (ctrl->ctrl_config & NVME_CC_CRIME)24892489- ready_timeout = NVME_CRTO_CRIMT(crto);24902490- else24912491- ready_timeout = NVME_CRTO_CRWMT(crto);24922492+ ready_timeout = NVME_CRTO_CRWMT(crto);2492249324932494 if (ready_timeout < timeout)24942495 dev_warn_once(ctrl->device, "bad crto:%x cap:%llx\n",
+33-7
drivers/nvme/host/multipath.c
···431431 case NVME_CTRL_LIVE:432432 case NVME_CTRL_RESETTING:433433 case NVME_CTRL_CONNECTING:434434- /* fallthru */435434 return true;436435 default:437436 break;···579580 return ret;580581}581582583583+static void nvme_partition_scan_work(struct work_struct *work)584584+{585585+ struct nvme_ns_head *head =586586+ container_of(work, struct nvme_ns_head, partition_scan_work);587587+588588+ if (WARN_ON_ONCE(!test_and_clear_bit(GD_SUPPRESS_PART_SCAN,589589+ &head->disk->state)))590590+ return;591591+592592+ mutex_lock(&head->disk->open_mutex);593593+ bdev_disk_changed(head->disk, false);594594+ mutex_unlock(&head->disk->open_mutex);595595+}596596+582597static void nvme_requeue_work(struct work_struct *work)583598{584599 struct nvme_ns_head *head =···619606 bio_list_init(&head->requeue_list);620607 spin_lock_init(&head->requeue_lock);621608 INIT_WORK(&head->requeue_work, nvme_requeue_work);609609+ INIT_WORK(&head->partition_scan_work, nvme_partition_scan_work);622610623611 /*624612 * Add a multipath node if the subsystems supports multiple controllers.···643629 return PTR_ERR(head->disk);644630 head->disk->fops = &nvme_ns_head_ops;645631 head->disk->private_data = head;632632+633633+ /*634634+ * We need to suppress the partition scan from occurring within the635635+ * controller's scan_work context. 
If a path error occurs here, the IO636636+ * will wait until a path becomes available or all paths are torn down,637637+ * but that action also occurs within scan_work, so it would deadlock.638638+ * Defer the partition scan to a different context that does not block639639+ * scan_work.640640+ */641641+ set_bit(GD_SUPPRESS_PART_SCAN, &head->disk->state);646642 sprintf(head->disk->disk_name, "nvme%dn%d",647643 ctrl->subsys->instance, head->instance);648644 return 0;···679655 return;680656 }681657 nvme_add_ns_head_cdev(head);658658+ kblockd_schedule_work(&head->partition_scan_work);682659 }683660684661 mutex_lock(&head->lock);···999974 return;1000975 if (test_and_clear_bit(NVME_NSHEAD_DISK_LIVE, &head->flags)) {1001976 nvme_cdev_del(&head->cdev, &head->cdev_device);977977+ /*978978+ * requeue I/O after NVME_NSHEAD_DISK_LIVE has been cleared979979+ * to allow multipath to fail all I/O.980980+ */981981+ synchronize_srcu(&head->srcu);982982+ kblockd_schedule_work(&head->requeue_work);1002983 del_gendisk(head->disk);1003984 }10041004- /*10051005- * requeue I/O after NVME_NSHEAD_DISK_LIVE has been cleared10061006- * to allow multipath to fail all I/O.10071007- */10081008- synchronize_srcu(&head->srcu);10091009- kblockd_schedule_work(&head->requeue_work);1010985}1011986
1012987void nvme_mpath_remove_disk(struct nvme_ns_head *head)···1016991 /* make sure all pending bios are cleaned up */1017992 kblockd_schedule_work(&head->requeue_work);1018993 flush_work(&head->requeue_work);994994+ flush_work(&head->partition_scan_work);1019995 put_disk(head->disk);1020996}1021997
···25062506 return 1;25072507}2508250825092509-static void nvme_pci_update_nr_queues(struct nvme_dev *dev)25092509+static bool nvme_pci_update_nr_queues(struct nvme_dev *dev)25102510{25112511 if (!dev->ctrl.tagset) {25122512 nvme_alloc_io_tag_set(&dev->ctrl, &dev->tagset, &nvme_mq_ops,25132513 nvme_pci_nr_maps(dev), sizeof(struct nvme_iod));25142514- return;25142514+ return true;25152515+ }25162516+25172517+ /* Give up if we are racing with nvme_dev_disable() */25182518+ if (!mutex_trylock(&dev->shutdown_lock))25192519+ return false;25202520+25212521+ /* Check if nvme_dev_disable() has been executed already */25222522+ if (!dev->online_queues) {25232523+ mutex_unlock(&dev->shutdown_lock);25242524+ return false;25152525 }2516252625172527 blk_mq_update_nr_hw_queues(&dev->tagset, dev->online_queues - 1);25182528 /* free previously allocated queues that are no longer usable */25192529 nvme_free_queues(dev, dev->online_queues);25302530+ mutex_unlock(&dev->shutdown_lock);25312531+ return true;25202532}2521253325222534static int nvme_pci_enable(struct nvme_dev *dev)···28092797 nvme_dbbuf_set(dev);28102798 nvme_unquiesce_io_queues(&dev->ctrl);28112799 nvme_wait_freeze(&dev->ctrl);28122812- nvme_pci_update_nr_queues(dev);28002800+ if (!nvme_pci_update_nr_queues(dev))28012801+ goto out;28132802 nvme_unfreeze(&dev->ctrl);28142803 } else {28152804 dev_warn(dev->ctrl.device, "IO queues lost\n");
+4-3
drivers/nvme/host/tcp.c
···2644264426452645 len = nvmf_get_address(ctrl, buf, size);2646264626472647+ if (!test_bit(NVME_TCP_Q_LIVE, &queue->flags))26482648+ return len;26492649+26472650 mutex_lock(&queue->queue_lock);2648265126492649- if (!test_bit(NVME_TCP_Q_LIVE, &queue->flags))26502650- goto done;26512652 ret = kernel_getsockname(queue->sock, (struct sockaddr *)&src_addr);26522653 if (ret > 0) {26532654 if (len > 0)···26562655 len += scnprintf(buf + len, size - len, "%ssrc_addr=%pISc\n",26572656 (len) ? "," : "", &src_addr);26582657 }26592659-done:26582658+26602659 mutex_unlock(&queue->queue_lock);2661266026622661 return len;
+13
drivers/nvme/target/loop.c
···265265{266266 if (!test_and_clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags))267267 return;268268+ /*269269+ * It's possible that some requests might have been added270270+ * after admin queue is stopped/quiesced. So now start the271271+ * queue to flush these requests to completion.272272+ */273273+ nvme_unquiesce_admin_queue(&ctrl->ctrl);274274+268275 nvmet_sq_destroy(&ctrl->queues[0].nvme_sq);269276 nvme_remove_admin_tag_set(&ctrl->ctrl);270277}···304297 nvmet_sq_destroy(&ctrl->queues[i].nvme_sq);305298 }306299 ctrl->ctrl.queue_count = 1;300300+ /*301301+ * It's possible that some requests might have been added302302+ * after io queue is stopped/quiesced. So now start the303303+ * queue to flush these requests to completion.304304+ */305305+ nvme_unquiesce_io_queues(&ctrl->ctrl);307306}308307309308static int nvme_loop_init_io_queues(struct nvme_loop_ctrl *ctrl)
+2-4
drivers/nvme/target/passthru.c
···535535 break;536536 case nvme_admin_identify:537537 switch (req->cmd->identify.cns) {538538- case NVME_ID_CNS_CTRL:539539- req->execute = nvmet_passthru_execute_cmd;540540- req->p.use_workqueue = true;541541- return NVME_SC_SUCCESS;542538 case NVME_ID_CNS_CS_CTRL:543539 switch (req->cmd->identify.csi) {544540 case NVME_CSI_ZNS:···543547 return NVME_SC_SUCCESS;544548 }545549 return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;550550+ case NVME_ID_CNS_CTRL:546551 case NVME_ID_CNS_NS:552552+ case NVME_ID_CNS_NS_DESC_LIST:547553 req->execute = nvmet_passthru_execute_cmd;548554 req->p.use_workqueue = true;549555 return NVME_SC_SUCCESS;
+27-29
drivers/nvme/target/rdma.c
···39394040#define NVMET_RDMA_BACKLOG 12841414242+#define NVMET_RDMA_DISCRETE_RSP_TAG -14343+4244struct nvmet_rdma_srq;43454446struct nvmet_rdma_cmd {···7775 u32 invalidate_rkey;78767977 struct list_head wait_list;8080- struct list_head free_list;7878+ int tag;8179};82808381enum nvmet_rdma_queue_state {···10098 struct nvmet_sq nvme_sq;10199102100 struct nvmet_rdma_rsp *rsps;103103- struct list_head free_rsps;104104- spinlock_t rsps_lock;101101+ struct sbitmap rsp_tags;105102 struct nvmet_rdma_cmd *cmds;106103107104 struct work_struct release_work;···173172static void nvmet_rdma_free_rsp(struct nvmet_rdma_device *ndev,174173 struct nvmet_rdma_rsp *r);175174static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,176176- struct nvmet_rdma_rsp *r);175175+ struct nvmet_rdma_rsp *r,176176+ int tag);177177178178static const struct nvmet_fabrics_ops nvmet_rdma_ops;179179···212210static inline struct nvmet_rdma_rsp *213211nvmet_rdma_get_rsp(struct nvmet_rdma_queue *queue)214212{215215- struct nvmet_rdma_rsp *rsp;216216- unsigned long flags;213213+ struct nvmet_rdma_rsp *rsp = NULL;214214+ int tag;217215218218- spin_lock_irqsave(&queue->rsps_lock, flags);219219- rsp = list_first_entry_or_null(&queue->free_rsps,220220- struct nvmet_rdma_rsp, free_list);221221- if (likely(rsp))222222- list_del(&rsp->free_list);223223- spin_unlock_irqrestore(&queue->rsps_lock, flags);216216+ tag = sbitmap_get(&queue->rsp_tags);217217+ if (tag >= 0)218218+ rsp = &queue->rsps[tag];224219225220 if (unlikely(!rsp)) {226221 int ret;···225226 rsp = kzalloc(sizeof(*rsp), GFP_KERNEL);226227 if (unlikely(!rsp))227228 return NULL;228228- ret = nvmet_rdma_alloc_rsp(queue->dev, rsp);229229+ ret = nvmet_rdma_alloc_rsp(queue->dev, rsp,230230+ NVMET_RDMA_DISCRETE_RSP_TAG);229231 if (unlikely(ret)) {230232 kfree(rsp);231233 return NULL;232234 }233233-234234- rsp->allocated = true;235235 }236236237237 return rsp;···239241static inline void240242nvmet_rdma_put_rsp(struct nvmet_rdma_rsp *rsp)241243{242242- 
unsigned long flags;243243-244244- if (unlikely(rsp->allocated)) {244244+ if (unlikely(rsp->tag == NVMET_RDMA_DISCRETE_RSP_TAG)) {245245 nvmet_rdma_free_rsp(rsp->queue->dev, rsp);246246 kfree(rsp);247247 return;248248 }249249250250- spin_lock_irqsave(&rsp->queue->rsps_lock, flags);251251- list_add_tail(&rsp->free_list, &rsp->queue->free_rsps);252252- spin_unlock_irqrestore(&rsp->queue->rsps_lock, flags);250250+ sbitmap_clear_bit(&rsp->queue->rsp_tags, rsp->tag);253251}254252255253static void nvmet_rdma_free_inline_pages(struct nvmet_rdma_device *ndev,···398404}399405400406static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,401401- struct nvmet_rdma_rsp *r)407407+ struct nvmet_rdma_rsp *r, int tag)402408{403409 /* NVMe CQE / RDMA SEND */404410 r->req.cqe = kmalloc(sizeof(*r->req.cqe), GFP_KERNEL);···426432 r->read_cqe.done = nvmet_rdma_read_data_done;427433 /* Data Out / RDMA WRITE */428434 r->write_cqe.done = nvmet_rdma_write_data_done;435435+ r->tag = tag;429436430437 return 0;431438···449454{450455 struct nvmet_rdma_device *ndev = queue->dev;451456 int nr_rsps = queue->recv_queue_size * 2;452452- int ret = -EINVAL, i;457457+ int ret = -ENOMEM, i;458458+459459+ if (sbitmap_init_node(&queue->rsp_tags, nr_rsps, -1, GFP_KERNEL,460460+ NUMA_NO_NODE, false, true))461461+ goto out;453462454463 queue->rsps = kcalloc(nr_rsps, sizeof(struct nvmet_rdma_rsp),455464 GFP_KERNEL);456465 if (!queue->rsps)457457- goto out;466466+ goto out_free_sbitmap;458467459468 for (i = 0; i < nr_rsps; i++) {460469 struct nvmet_rdma_rsp *rsp = &queue->rsps[i];461470462462- ret = nvmet_rdma_alloc_rsp(ndev, rsp);471471+ ret = nvmet_rdma_alloc_rsp(ndev, rsp, i);463472 if (ret)464473 goto out_free;465465-466466- list_add_tail(&rsp->free_list, &queue->free_rsps);467474 }468475469476 return 0;···474477 while (--i >= 0)475478 nvmet_rdma_free_rsp(ndev, &queue->rsps[i]);476479 kfree(queue->rsps);480480+out_free_sbitmap:481481+ sbitmap_free(&queue->rsp_tags);477482out:478483 return 
ret;479484}···488489 for (i = 0; i < nr_rsps; i++)489490 nvmet_rdma_free_rsp(ndev, &queue->rsps[i]);490491 kfree(queue->rsps);492492+ sbitmap_free(&queue->rsp_tags);491493}492494493495static int nvmet_rdma_post_recv(struct nvmet_rdma_device *ndev,···14471447 INIT_LIST_HEAD(&queue->rsp_wait_list);14481448 INIT_LIST_HEAD(&queue->rsp_wr_wait_list);14491449 spin_lock_init(&queue->rsp_wr_wait_lock);14501450- INIT_LIST_HEAD(&queue->free_rsps);14511451- spin_lock_init(&queue->rsps_lock);14521450 INIT_LIST_HEAD(&queue->queue_list);1453145114541452 queue->idx = ida_alloc(&nvmet_rdma_queue_ida, GFP_KERNEL);
+11-11
drivers/parport/procfs.c
···51515252 for (dev = port->devices; dev ; dev = dev->next) {5353 if(dev == port->cad) {5454- len += snprintf(buffer, sizeof(buffer), "%s\n", dev->name);5454+ len += scnprintf(buffer, sizeof(buffer), "%s\n", dev->name);5555 }5656 }57575858 if(!len) {5959- len += snprintf(buffer, sizeof(buffer), "%s\n", "none");5959+ len += scnprintf(buffer, sizeof(buffer), "%s\n", "none");6060 }61616262 if (len > *lenp)···8787 }88888989 if ((str = info->class_name) != NULL)9090- len += snprintf (buffer + len, sizeof(buffer) - len, "CLASS:%s;\n", str);9090+ len += scnprintf (buffer + len, sizeof(buffer) - len, "CLASS:%s;\n", str);91919292 if ((str = info->model) != NULL)9393- len += snprintf (buffer + len, sizeof(buffer) - len, "MODEL:%s;\n", str);9393+ len += scnprintf (buffer + len, sizeof(buffer) - len, "MODEL:%s;\n", str);94949595 if ((str = info->mfr) != NULL)9696- len += snprintf (buffer + len, sizeof(buffer) - len, "MANUFACTURER:%s;\n", str);9696+ len += scnprintf (buffer + len, sizeof(buffer) - len, "MANUFACTURER:%s;\n", str);97979898 if ((str = info->description) != NULL)9999- len += snprintf (buffer + len, sizeof(buffer) - len, "DESCRIPTION:%s;\n", str);9999+ len += scnprintf (buffer + len, sizeof(buffer) - len, "DESCRIPTION:%s;\n", str);100100101101 if ((str = info->cmdset) != NULL)102102- len += snprintf (buffer + len, sizeof(buffer) - len, "COMMAND SET:%s;\n", str);102102+ len += scnprintf (buffer + len, sizeof(buffer) - len, "COMMAND SET:%s;\n", str);103103104104 if (len > *lenp)105105 len = *lenp;···128128 if (write) /* permissions prevent this anyway */129129 return -EACCES;130130131131- len += snprintf (buffer, sizeof(buffer), "%lu\t%lu\n", port->base, port->base_hi);131131+ len += scnprintf (buffer, sizeof(buffer), "%lu\t%lu\n", port->base, port->base_hi);132132133133 if (len > *lenp)134134 len = *lenp;···155155 if (write) /* permissions prevent this anyway */156156 return -EACCES;157157158158- len += snprintf (buffer, sizeof(buffer), "%d\n", port->irq);158158+ len += scnprintf (buffer, sizeof(buffer), "%d\n", port->irq);159159160160 if (len > *lenp)161161 len = *lenp;···182182 if (write) /* permissions prevent this anyway */183183 return -EACCES;184184185185- len += snprintf (buffer, sizeof(buffer), "%d\n", port->dma);185185+ len += scnprintf (buffer, sizeof(buffer), "%d\n", port->dma);186186187187 if (len > *lenp)188188 len = *lenp;···213213#define printmode(x) \214214do { \215215 if (port->modes & PARPORT_MODE_##x) \216216- len += snprintf(buffer + len, sizeof(buffer) - len, "%s%s", f++ ? "," : "", #x); \216216+ len += scnprintf(buffer + len, sizeof(buffer) - len, "%s%s", f++ ? "," : "", #x); \217217} while (0)218218 int f = 0;219219 printmode(PCSPP);
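The snprintf→scnprintf conversions above matter because snprintf returns the length the output *would* have had, so a `len += snprintf(...)` chain can push `len` past `sizeof(buffer)` on truncation, while scnprintf returns the bytes actually written. Userspace libc has no scnprintf; a sketch of the kernel helper's semantics on top of vsnprintf (the name `my_scnprintf` is ours, not the kernel's):

```c
#include <assert.h>
#include <stdarg.h>
#include <stddef.h>
#include <stdio.h>

/*
 * Like snprintf(), but returns the number of characters actually stored
 * in buf (excluding the trailing NUL), never more than size - 1.
 */
static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);

	if (i < 0 || size == 0)
		return 0;
	return (size_t)i < size ? i : (int)(size - 1);
}
```

With the original snprintf code, a truncated field would make `len` exceed the buffer size and later `buffer + len` arithmetic point past the end; the scnprintf variant keeps `len` bounded by construction.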
+1
drivers/pinctrl/intel/Kconfig
···4646 of Intel PCH pins and using them as GPIOs. Currently the following4747 Intel SoCs / platforms require this to be functional:4848 - Lunar Lake4949+ - Panther Lake49505051config PINCTRL_ALDERLAKE5152 tristate "Intel Alder Lake pinctrl and GPIO driver"
+2-3
drivers/pinctrl/intel/pinctrl-intel-platform.c
···9090 struct intel_community *community,9191 struct intel_platform_pins *pins)9292{9393- struct fwnode_handle *child;9493 struct intel_padgroup *gpps;9594 unsigned int group;9695 size_t ngpps;···130131 return -ENOMEM;131132132133 group = 0;133133- device_for_each_child_node(dev, child) {134134+ device_for_each_child_node_scoped(dev, child) {134135 struct intel_padgroup *gpp = &gpps[group];135136136137 gpp->reg_num = group;···158159 int ret;159160160161 /* Version 1.0 of the specification assumes only a single community per device node */161161- ncommunities = 1,162162+ ncommunities = 1;162163 communities = devm_kcalloc(dev, ncommunities, sizeof(*communities), GFP_KERNEL);163164 if (!communities)164165 return -ENOMEM;
···542542 * @port_list: List of ports belonging to a SAS node543543 * @num_phys: Number of phys associated with port544544 * @marked_responding: used while refresing the sas ports545545- * @lowest_phy: lowest phy ID of current sas port546546- * @phy_mask: phy_mask of current sas port545545+ * @lowest_phy: lowest phy ID of current sas port, valid for controller port546546+ * @phy_mask: phy_mask of current sas port, valid for controller port547547 * @hba_port: HBA port entry548548 * @remote_identify: Attached device identification549549 * @rphy: SAS transport layer rphy object
+27-15
drivers/scsi/mpi3mr/mpi3mr_transport.c
590590 * @mrioc: Adapter instance reference591591 * @mr_sas_port: Internal Port object592592 * @mr_sas_phy: Internal Phy object593593+ * @host_node: Flag to indicate this is a host_node593594 *594595 * Return: None.595596 */596597static void mpi3mr_delete_sas_phy(struct mpi3mr_ioc *mrioc,597598 struct mpi3mr_sas_port *mr_sas_port,598598- struct mpi3mr_sas_phy *mr_sas_phy)599599+ struct mpi3mr_sas_phy *mr_sas_phy, u8 host_node)599600{600601 u64 sas_address = mr_sas_port->remote_identify.sas_address;601602···606605607606 list_del(&mr_sas_phy->port_siblings);608607 mr_sas_port->num_phys--;609609- mr_sas_port->phy_mask &= ~(1 << mr_sas_phy->phy_id);610610- if (mr_sas_port->lowest_phy == mr_sas_phy->phy_id)611611- mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1;608608+609609+ if (host_node) {610610+ mr_sas_port->phy_mask &= ~(1 << mr_sas_phy->phy_id);611611+612612+ if (mr_sas_port->lowest_phy == mr_sas_phy->phy_id)613613+ mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1;614614+ }612615 sas_port_delete_phy(mr_sas_port->port, mr_sas_phy->phy);613616 mr_sas_phy->phy_belongs_to_port = 0;614617}···622617 * @mrioc: Adapter instance reference623618 * @mr_sas_port: Internal Port object624619 * @mr_sas_phy: Internal Phy object620620+ * @host_node: Flag to indicate this is a host_node625621 *626622 * Return: None.627623 */628624static void mpi3mr_add_sas_phy(struct mpi3mr_ioc *mrioc,629625 struct mpi3mr_sas_port *mr_sas_port,630630- struct mpi3mr_sas_phy *mr_sas_phy)626626+ struct mpi3mr_sas_phy *mr_sas_phy, u8 host_node)631627{632628 u64 sas_address = mr_sas_port->remote_identify.sas_address;633629···638632639633 list_add_tail(&mr_sas_phy->port_siblings, &mr_sas_port->phy_list);640634 mr_sas_port->num_phys++;641641- mr_sas_port->phy_mask |= (1 << mr_sas_phy->phy_id);642642- if (mr_sas_phy->phy_id < mr_sas_port->lowest_phy)643643- mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1;635635+ if (host_node) {636636+ mr_sas_port->phy_mask |= (1 << mr_sas_phy->phy_id);637637+638638+ if (mr_sas_phy->phy_id < mr_sas_port->lowest_phy)639639+ mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1;640640+ }644641 sas_port_add_phy(mr_sas_port->port, mr_sas_phy->phy);645642 mr_sas_phy->phy_belongs_to_port = 1;646643}···684675 if (srch_phy == mr_sas_phy)685676 return;686677 }687687- mpi3mr_add_sas_phy(mrioc, mr_sas_port, mr_sas_phy);678678+ mpi3mr_add_sas_phy(mrioc, mr_sas_port, mr_sas_phy, mr_sas_node->host_node);688679 return;689680 }690681}···745736 mpi3mr_delete_sas_port(mrioc, mr_sas_port);746737 else747738 mpi3mr_delete_sas_phy(mrioc, mr_sas_port,748748- mr_sas_phy);739739+ mr_sas_phy, mr_sas_node->host_node);749740 return;750741 }751742 }···10371028/**10381029 * mpi3mr_get_hba_port_by_id - find hba port by id10391030 * @mrioc: Adapter instance reference10401040- * @port_id - Port ID to search10311031+ * @port_id: Port ID to search10411032 *10421033 * Return: mpi3mr_hba_port reference for the matched port10431034 */···13761367 mpi3mr_sas_port_sanity_check(mrioc, mr_sas_node,13771368 mr_sas_port->remote_identify.sas_address, hba_port);1378136913791379- if (mr_sas_node->num_phys >= sizeof(mr_sas_port->phy_mask) * 8)13701370+ if (mr_sas_node->host_node && mr_sas_node->num_phys >=13711371+ sizeof(mr_sas_port->phy_mask) * 8)13801372 ioc_info(mrioc, "max port count %u could be too high\n",13811373 mr_sas_node->num_phys);13821374···13871377 (mr_sas_node->phy[i].hba_port != hba_port))13881378 continue;1389137913901390- if (i >= sizeof(mr_sas_port->phy_mask) * 8) {13801380+ if (mr_sas_node->host_node && (i >= sizeof(mr_sas_port->phy_mask) * 8)) {13911381 ioc_warn(mrioc, "skipping port %u, max allowed value is %zu\n",13921382 i, sizeof(mr_sas_port->phy_mask) * 8);13931383 goto out_fail;···13951385 list_add_tail(&mr_sas_node->phy[i].port_siblings,13961386 &mr_sas_port->phy_list);13971387 mr_sas_port->num_phys++;13981398- mr_sas_port->phy_mask |= (1 << i);13881388+ if (mr_sas_node->host_node)13891389+ mr_sas_port->phy_mask |= (1 << i);13991390 }1400139114011392 if (!mr_sas_port->num_phys) {···14051394 goto out_fail;14061395 }1407139614081408- mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1;13971397+ if (mr_sas_node->host_node)13981398+ mr_sas_port->lowest_phy = ffs(mr_sas_port->phy_mask) - 1;1409139914101400 if (mr_sas_port->remote_identify.device_type == SAS_END_DEVICE) {14111401 tgtdev = mpi3mr_get_tgtdev_by_addr(mrioc,
···31573157 mutex_unlock(&gsm->mutex);31583158 /* Now wipe the queues */31593159 tty_ldisc_flush(gsm->tty);31603160+31613161+ guard(spinlock_irqsave)(&gsm->tx_lock);31603162 list_for_each_entry_safe(txq, ntxq, &gsm->tx_ctrl_list, list)31613163 kfree(txq);31623164 INIT_LIST_HEAD(&gsm->tx_ctrl_list);
+15
drivers/tty/serial/imx.c
···762762763763 imx_uart_writel(sport, USR1_RTSD, USR1);764764 usr1 = imx_uart_readl(sport, USR1) & USR1_RTSS;765765+ /*766766+ * Update sport->old_status here, so any follow-up calls to767767+ * imx_uart_mctrl_check() will be able to recognize that RTS768768+ * state changed since last imx_uart_mctrl_check() call.769769+ *770770+ * In case RTS has been detected as asserted here and later on771771+ * deasserted by the time imx_uart_mctrl_check() was called,772772+ * imx_uart_mctrl_check() can detect the RTS state change and773773+ * trigger uart_handle_cts_change() to unblock the port for774774+ * further TX transfers.775775+ */776776+ if (usr1 & USR1_RTSS)777777+ sport->old_status |= TIOCM_CTS;778778+ else779779+ sport->old_status &= ~TIOCM_CTS;765780 uart_handle_cts_change(&sport->port, usr1);766781 wake_up_interruptible(&sport->port.state->port.delta_msr_wait);767782
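The imx.c comment above hinges on a delta-detection pattern: uart_handle_cts_change() reports the *current* CTS state, but imx_uart_mctrl_check() only reacts to bits that differ from a cached `old_status`, so the cache must be refreshed wherever the state is consumed. A small standalone sketch of that pattern (struct and bit names are illustrative, not the driver's):

```c
#include <assert.h>

#define STAT_CTS 0x1u
#define STAT_DSR 0x2u

struct status_tracker {
	unsigned int old_status;
};

/*
 * Return the bits that changed since the last call, and remember the
 * new state so the next call only reports fresh transitions. If a
 * caller updates old_status without going through here, subsequent
 * deltas are computed against that newer snapshot -- which is exactly
 * what the imx patch relies on.
 */
static unsigned int status_delta(struct status_tracker *t, unsigned int now)
{
	unsigned int changed = t->old_status ^ now;

	t->old_status = now;
	return changed;
}
```

The failure the patch fixes maps onto this sketch: if the IRQ handler observes CTS asserted but never folds it into `old_status`, a later deassertion XORs against a stale snapshot and the transition is missed.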
+48-55
drivers/tty/serial/qcom_geni_serial.c
···147147148148static void __qcom_geni_serial_cancel_tx_cmd(struct uart_port *uport);149149static void qcom_geni_serial_cancel_tx_cmd(struct uart_port *uport);150150+static int qcom_geni_serial_port_setup(struct uart_port *uport);150151151152static inline struct qcom_geni_serial_port *to_dev_port(struct uart_port *uport)152153{···396395 writel(c, uport->membase + SE_GENI_TX_FIFOn);397396 qcom_geni_serial_poll_tx_done(uport);398397}398398+399399+static int qcom_geni_serial_poll_init(struct uart_port *uport)400400+{401401+ struct qcom_geni_serial_port *port = to_dev_port(uport);402402+ int ret;403403+404404+ if (!port->setup) {405405+ ret = qcom_geni_serial_port_setup(uport);406406+ if (ret)407407+ return ret;408408+ }409409+410410+ if (!qcom_geni_serial_secondary_active(uport))411411+ geni_se_setup_s_cmd(&port->se, UART_START_READ, 0);412412+413413+ return 0;414414+}399415#endif400416401417#ifdef CONFIG_SERIAL_QCOM_GENI_CONSOLE···580562}581563#endif /* CONFIG_SERIAL_QCOM_GENI_CONSOLE */582564583583-static void handle_rx_uart(struct uart_port *uport, u32 bytes, bool drop)565565+static void handle_rx_uart(struct uart_port *uport, u32 bytes)584566{585567 struct qcom_geni_serial_port *port = to_dev_port(uport);586568 struct tty_port *tport = &uport->state->port;···588570589571 ret = tty_insert_flip_string(tport, port->rx_buf, bytes);590572 if (ret != bytes) {591591- dev_err(uport->dev, "%s:Unable to push data ret %d_bytes %d\n",592592- __func__, ret, bytes);593593- WARN_ON_ONCE(1);573573+ dev_err_ratelimited(uport->dev, "failed to push data (%d < %u)\n",574574+ ret, bytes);594575 }595576 uport->icount.rx += ret;596577 tty_flip_buffer_push(tport);···804787static void qcom_geni_serial_stop_rx_dma(struct uart_port *uport)805788{806789 struct qcom_geni_serial_port *port = to_dev_port(uport);790790+ bool done;807791808792 if (!qcom_geni_serial_secondary_active(uport))809793 return;810794811795 geni_se_cancel_s_cmd(&port->se);812812- qcom_geni_serial_poll_bit(uport, SE_GENI_S_IRQ_STATUS,813813- S_CMD_CANCEL_EN, true);814814-815815- if (qcom_geni_serial_secondary_active(uport))796796+ done = qcom_geni_serial_poll_bit(uport, SE_DMA_RX_IRQ_STAT,797797+ RX_EOT, true);798798+ if (done) {799799+ writel(RX_EOT | RX_DMA_DONE,800800+ uport->membase + SE_DMA_RX_IRQ_CLR);801801+ } else {816802 qcom_geni_serial_abort_rx(uport);803803+804804+ writel(1, uport->membase + SE_DMA_RX_FSM_RST);805805+ qcom_geni_serial_poll_bit(uport, SE_DMA_RX_IRQ_STAT,806806+ RX_RESET_DONE, true);807807+ writel(RX_RESET_DONE | RX_DMA_DONE,808808+ uport->membase + SE_DMA_RX_IRQ_CLR);809809+ }817810818811 if (port->rx_dma_addr) {819812 geni_se_rx_dma_unprep(&port->se, port->rx_dma_addr,···873846 }874847875848 if (!drop)876876- handle_rx_uart(uport, rx_in, drop);849849+ handle_rx_uart(uport, rx_in);877850878851 ret = geni_se_rx_dma_prep(&port->se, port->rx_buf,879852 DMA_RX_BUF_SIZE,···11231096{11241097 disable_irq(uport->irq);1125109810991099+ uart_port_lock_irq(uport);11261100 qcom_geni_serial_stop_tx(uport);11271101 qcom_geni_serial_stop_rx(uport);1128110211291103 qcom_geni_serial_cancel_tx_cmd(uport);11041104+ uart_port_unlock_irq(uport);11301105}1131110611321107static void qcom_geni_serial_flush_buffer(struct uart_port *uport)···11811152 false, true, true);11821153 geni_se_init(&port->se, UART_RX_WM, port->rx_fifo_depth - 2);11831154 geni_se_select_mode(&port->se, port->dev_data->mode);11841184- qcom_geni_serial_start_rx(uport);11851155 port->setup = true;1186115611871157 return 0;···11961168 if (ret)11971169 return ret;11981170 }11711171+11721172+ uart_port_lock_irq(uport);11731173+ qcom_geni_serial_start_rx(uport);11741174+ uart_port_unlock_irq(uport);11751175+11991176 enable_irq(uport->irq);1200117712011178 return 0;···12861253 unsigned int avg_bw_core;12871254 unsigned long timeout;1288125512891289- qcom_geni_serial_stop_rx(uport);12901256 /* baud rate */12911257 baud = uart_get_baud_rate(uport, termios, old, 300, 4000000);12921258···13011269 dev_err(port->se.dev,13021270 "Couldn't find suitable clock rate for %u\n",13031271 baud * sampling_rate);13041304- goto out_restart_rx;12721272+ return;13051273 }1306127413071275 dev_dbg(port->se.dev, "desired_rate = %u, clk_rate = %lu, clk_div = %u\n",···13921360 writel(stop_bit_len, uport->membase + SE_UART_TX_STOP_BIT_LEN);13931361 writel(ser_clk_cfg, uport->membase + GENI_SER_M_CLK_CFG);13941362 writel(ser_clk_cfg, uport->membase + GENI_SER_S_CLK_CFG);13951395-out_restart_rx:13961396- qcom_geni_serial_start_rx(uport);13971363}1398136413991365#ifdef CONFIG_SERIAL_QCOM_GENI_CONSOLE···16121582#ifdef CONFIG_CONSOLE_POLL16131583 .poll_get_char = qcom_geni_serial_get_char,16141584 .poll_put_char = qcom_geni_serial_poll_put_char,16151615- .poll_init = qcom_geni_serial_port_setup,15851585+ .poll_init = qcom_geni_serial_poll_init,16161586#endif16171587 .pm = qcom_geni_serial_pm,16181588};···17791749 uart_remove_one_port(drv, &port->uport);17801750}1781175117821782-static int qcom_geni_serial_sys_suspend(struct device *dev)17521752+static int qcom_geni_serial_suspend(struct device *dev)17831753{17841754 struct qcom_geni_serial_port *port = dev_get_drvdata(dev);17851755 struct uart_port *uport = &port->uport;···17961766 return uart_suspend_port(private_data->drv, uport);17971767}1798176817991799-static int qcom_geni_serial_sys_resume(struct device *dev)17691769+static int qcom_geni_serial_resume(struct device *dev)18001770{18011771 int ret;18021772 struct qcom_geni_serial_port *port = dev_get_drvdata(dev);···18071777 if (uart_console(uport)) {18081778 geni_icc_set_tag(&port->se, QCOM_ICC_TAG_ALWAYS);18091779 geni_icc_set_bw(&port->se);18101810- }18111811- return ret;18121812-}18131813-18141814-static int qcom_geni_serial_sys_hib_resume(struct device *dev)18151815-{18161816- int ret = 0;18171817- struct uart_port *uport;18181818- struct qcom_geni_private_data *private_data;18191819- struct qcom_geni_serial_port *port = dev_get_drvdata(dev);18201820-18211821- uport = &port->uport;18221822- private_data = uport->private_data;18231823-18241824- if (uart_console(uport)) {18251825- geni_icc_set_tag(&port->se, QCOM_ICC_TAG_ALWAYS);18261826- geni_icc_set_bw(&port->se);18271827- ret = uart_resume_port(private_data->drv, uport);18281828- /*18291829- * For hibernation usecase clients for18301830- * console UART won't call port setup during restore,18311831- * hence call port setup for console uart.18321832- */18331833- qcom_geni_serial_port_setup(uport);18341834- } else {18351835- /*18361836- * Peripheral register settings are lost during hibernation.18371837- * Update setup flag such that port setup happens again18381838- * during next session. Clients of HS-UART will close and18391839- * open the port during hibernation.18401840- */18411841- port->setup = false;18421780 }18431781 return ret;18441782}···18221824};1823182518241826static const struct dev_pm_ops qcom_geni_serial_pm_ops = {18251825- .suspend = pm_sleep_ptr(qcom_geni_serial_sys_suspend),18261826- .resume = pm_sleep_ptr(qcom_geni_serial_sys_resume),18271827- .freeze = pm_sleep_ptr(qcom_geni_serial_sys_suspend),18281828- .poweroff = pm_sleep_ptr(qcom_geni_serial_sys_suspend),18291829- .restore = pm_sleep_ptr(qcom_geni_serial_sys_hib_resume),18301830- .thaw = pm_sleep_ptr(qcom_geni_serial_sys_hib_resume),18271827+ SYSTEM_SLEEP_PM_OPS(qcom_geni_serial_suspend, qcom_geni_serial_resume)18311828};1832182918331830static const struct of_device_id qcom_geni_serial_match_table[] = {
+1-1
drivers/tty/vt/vt.c
···47264726 return -EINVAL;4727472747284728 if (op->data) {47294729- font.data = kvmalloc(max_font_size, GFP_KERNEL);47294729+ font.data = kvzalloc(max_font_size, GFP_KERNEL);47304730 if (!font.data)47314731 return -ENOMEM;47324732 } else
···54165416 }54175417 break;54185418 case OCS_ABORTED:54195419- result |= DID_ABORT << 16;54205420- break;54215419 case OCS_INVALID_COMMAND_STATUS:54225420 result |= DID_REQUEUE << 16;54215421+ dev_warn(hba->dev,54225422+ "OCS %s from controller for tag %d\n",54235423+ (ocs == OCS_ABORTED ? "aborted" : "invalid"),54245424+ lrbp->task_tag);54235425 break;54245426 case OCS_INVALID_CMD_TABLE_ATTR:54255427 case OCS_INVALID_PRDT_ATTR:···64676465 struct scsi_device *sdev = cmd->device;64686466 struct Scsi_Host *shost = sdev->host;64696467 struct ufs_hba *hba = shost_priv(shost);64706470- struct ufshcd_lrb *lrbp = &hba->lrb[tag];64716471- struct ufs_hw_queue *hwq;64726472- unsigned long flags;6473646864746469 *ret = ufshcd_try_to_abort_task(hba, tag);64756470 dev_err(hba->dev, "Aborting tag %d / CDB %#02x %s\n", tag,64766471 hba->lrb[tag].cmd ? hba->lrb[tag].cmd->cmnd[0] : -1,64776472 *ret ? "failed" : "succeeded");64786478-64796479- /* Release cmd in MCQ mode if abort succeeds */64806480- if (hba->mcq_enabled && (*ret == 0)) {64816481- hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(lrbp->cmd));64826482- if (!hwq)64836483- return 0;64846484- spin_lock_irqsave(&hwq->cq_lock, flags);64856485- if (ufshcd_cmd_inflight(lrbp->cmd))64866486- ufshcd_release_scsi_cmd(hba, lrbp);64876487- spin_unlock_irqrestore(&hwq->cq_lock, flags);64886488- }6489647364906474 return *ret == 0;64916475}···1019710209 shost_for_each_device(sdev, hba->host) {1019810210 if (sdev == hba->ufs_device_wlun)1019910211 continue;1020010200- scsi_device_quiesce(sdev);1021210212+ mutex_lock(&sdev->state_mutex);1021310213+ scsi_device_set_state(sdev, SDEV_OFFLINE);1021410214+ mutex_unlock(&sdev->state_mutex);1020110215 }1020210216 __ufshcd_wl_suspend(hba, UFS_SHUTDOWN_PM);1020310217
+19
drivers/usb/dwc3/core.c
···23422342 u32 reg;23432343 int i;2344234423452345+ dwc->susphy_state = (dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)) &23462346+ DWC3_GUSB2PHYCFG_SUSPHY) ||23472347+ (dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0)) &23482348+ DWC3_GUSB3PIPECTL_SUSPHY);23492349+23452350 switch (dwc->current_dr_role) {23462351 case DWC3_GCTL_PRTCAP_DEVICE:23472352 if (pm_runtime_suspended(dwc->dev))···23962391 default:23972392 /* do nothing */23982393 break;23942394+ }23952395+23962396+ if (!PMSG_IS_AUTO(msg)) {23972397+ /*23982398+ * TI AM62 platform requires SUSPHY to be23992399+ * enabled for system suspend to work.24002400+ */24012401+ if (!dwc->susphy_state)24022402+ dwc3_enable_susphy(dwc, true);23992403 }2400240424012405 return 0;···24722458 default:24732459 /* do nothing */24742460 break;24612461+ }24622462+24632463+ if (!PMSG_IS_AUTO(msg)) {24642464+ /* restore SUSPHY state to that before system suspend. */24652465+ dwc3_enable_susphy(dwc, dwc->susphy_state);24752466 }2476246724772468 return 0;
+3
drivers/usb/dwc3/core.h
···11501150 * @sys_wakeup: set if the device may do system wakeup.11511151 * @wakeup_configured: set if the device is configured for remote wakeup.11521152 * @suspended: set to track suspend event due to U3/L2.11531153+ * @susphy_state: state of DWC3_GUSB2PHYCFG_SUSPHY + DWC3_GUSB3PIPECTL_SUSPHY11541154+ * before PM suspend.11531155 * @imod_interval: set the interrupt moderation interval in 250ns11541156 * increments or 0 to disable.11551157 * @max_cfg_eps: current max number of IN eps used across all USB configs.···13841382 unsigned sys_wakeup:1;13851383 unsigned wakeup_configured:1;13861384 unsigned suspended:1;13851385+ unsigned susphy_state:1;1387138613881387 u16 imod_interval;13891388
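The dwc3 core.c/core.h changes above implement a save-force-restore pattern: snapshot the combined SUSPHY state of both PHY config registers at suspend, force SUSPHY on for non-autosuspend system sleep (the comment notes TI AM62 requires it), and restore the saved state on resume. A tiny sketch of that pattern with an illustrative single register (names and the bit value are ours, not dwc3's):

```c
#include <assert.h>

#define SUSPHY_BIT 0x40u	/* illustrative stand-in for the SUSPHY flag */

struct dev_state {
	unsigned int reg;	/* stand-in for the PHY config register */
	unsigned int saved:1;	/* SUSPHY state captured at suspend time */
};

/* Capture the current SUSPHY state, then force it on for system sleep. */
static void sketch_suspend(struct dev_state *d)
{
	d->saved = !!(d->reg & SUSPHY_BIT);
	d->reg |= SUSPHY_BIT;
}

/* Put SUSPHY back exactly as it was before suspend. */
static void sketch_resume(struct dev_state *d)
{
	if (d->saved)
		d->reg |= SUSPHY_BIT;
	else
		d->reg &= ~SUSPHY_BIT;
}
```

Storing the snapshot as a one-bit field mirrors the new `susphy_state:1` member in `struct dwc3`: the only fact worth remembering is whether either SUSPHY bit was set before the forced enable.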
+6-4
drivers/usb/dwc3/gadget.c
···438438 dwc3_gadget_ep_get_transfer_index(dep);439439 }440440441441+ if (DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_ENDTRANSFER &&442442+ !(cmd & DWC3_DEPCMD_CMDIOC))443443+ mdelay(1);444444+441445 if (saved_config) {442446 reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));443447 reg |= saved_config;···17191715 WARN_ON_ONCE(ret);17201716 dep->resource_index = 0;1721171717221722- if (!interrupt) {17231723- mdelay(1);17181718+ if (!interrupt)17241719 dep->flags &= ~DWC3_EP_TRANSFER_STARTED;17251725- } else if (!ret) {17201720+ else if (!ret)17261721 dep->flags |= DWC3_EP_END_TRANSFER_PENDING;17271727- }1728172217291723 dep->flags &= ~DWC3_EP_DELAY_STOP;17301724 return ret;
+3-3
drivers/usb/gadget/function/f_uac2.c
···20612061 const char *page, size_t len) \20622062{ \20632063 struct f_uac2_opts *opts = to_f_uac2_opts(item); \20642064- int ret = 0; \20642064+ int ret = len; \20652065 \20662066 mutex_lock(&opts->lock); \20672067 if (opts->refcnt) { \···20722072 if (len && page[len - 1] == '\n') \20732073 len--; \20742074 \20752075- ret = scnprintf(opts->name, min(sizeof(opts->name), len + 1), \20762076- "%s", page); \20752075+ scnprintf(opts->name, min(sizeof(opts->name), len + 1), \20762076+ "%s", page); \20772077 \20782078end: \20792079 mutex_unlock(&opts->lock); \
+15-5
drivers/usb/gadget/udc/dummy_hcd.c
···254254 u32 stream_en_ep;255255 u8 num_stream[30 / 2];256256257257+ unsigned timer_pending:1;257258 unsigned active:1;258259 unsigned old_active:1;259260 unsigned resuming:1;···13041303 urb->error_count = 1; /* mark as a new urb */1305130413061305 /* kick the scheduler, it'll do the rest */13071307- if (!hrtimer_active(&dum_hcd->timer))13061306+ if (!dum_hcd->timer_pending) {13071307+ dum_hcd->timer_pending = 1;13081308 hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS),13091309 HRTIMER_MODE_REL_SOFT);13101310+ }1310131113111312 done:13121313 spin_unlock_irqrestore(&dum_hcd->dum->lock, flags);···13271324 spin_lock_irqsave(&dum_hcd->dum->lock, flags);1328132513291326 rc = usb_hcd_check_unlink_urb(hcd, urb, status);13301330- if (!rc && dum_hcd->rh_state != DUMMY_RH_RUNNING &&13311331- !list_empty(&dum_hcd->urbp_list))13271327+ if (rc == 0 && !dum_hcd->timer_pending) {13281328+ dum_hcd->timer_pending = 1;13321329 hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL_SOFT);13301330+ }1333133113341332 spin_unlock_irqrestore(&dum_hcd->dum->lock, flags);13351333 return rc;···1817181318181814 /* look at each urb queued by the host side driver */18191815 spin_lock_irqsave(&dum->lock, flags);18161816+ dum_hcd->timer_pending = 0;1820181718211818 if (!dum_hcd->udev) {18221819 dev_err(dummy_dev(dum_hcd),···19991994 if (list_empty(&dum_hcd->urbp_list)) {20001995 usb_put_dev(dum_hcd->udev);20011996 dum_hcd->udev = NULL;20022002- } else if (dum_hcd->rh_state == DUMMY_RH_RUNNING) {19971997+ } else if (!dum_hcd->timer_pending &&19981998+ dum_hcd->rh_state == DUMMY_RH_RUNNING) {20031999 /* want a 1 msec delay here */20002000+ dum_hcd->timer_pending = 1;20042001 hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS),20052002 HRTIMER_MODE_REL_SOFT);20062003 }···23972390 } else {23982391 dum_hcd->rh_state = DUMMY_RH_RUNNING;23992392 set_link_state(dum_hcd);24002400- if (!list_empty(&dum_hcd->urbp_list))23932393+ if (!list_empty(&dum_hcd->urbp_list)) {23942394+ dum_hcd->timer_pending = 1;24012395 hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL_SOFT);23962396+ }24022397 hcd->state = HC_STATE_RUNNING;24032398 }24042399 spin_unlock_irq(&dum_hcd->dum->lock);···25312522 struct dummy_hcd *dum_hcd = hcd_to_dummy_hcd(hcd);2532252325332524 hrtimer_cancel(&dum_hcd->timer);25252525+ dum_hcd->timer_pending = 0;25342526 device_remove_file(dummy_dev(dum_hcd), &dev_attr_urbs);25352527 dev_info(dummy_dev(dum_hcd), "stopped\n");25362528}
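The dummy_hcd change above replaces `hrtimer_active()` checks with an explicit `timer_pending` flag toggled under the driver lock: the flag is set whenever a timer run is requested and cleared at the top of the handler, so a request arriving mid-handler re-arms instead of being lost. A single-threaded sketch of the flag discipline (the struct and `arm_count` stand-in are illustrative):

```c
#include <assert.h>

struct fake_hcd {
	int timer_pending;	/* a timer run has been requested */
	int arm_count;		/* counts hrtimer_start() stand-ins */
};

/* Request a timer run; only arm the timer if no run is already pending. */
static void kick_timer(struct fake_hcd *h)
{
	if (!h->timer_pending) {
		h->timer_pending = 1;
		h->arm_count++;	/* hrtimer_start(...) in the real driver */
	}
}

/* Timer handler: clear the flag first, so a kick during the run re-arms. */
static void timer_fn(struct fake_hcd *h)
{
	h->timer_pending = 0;
	/* ... walk the urb list here ... */
}
```

The point the patch makes is visible in the sketch: duplicate kicks while a run is pending arm the timer only once, while a kick after the handler has cleared the flag always arms it again, so no wakeup can fall into the gap that `hrtimer_active()` left open.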
···2424 return dbc->priv;2525}26262727+static unsigned int2828+dbc_kfifo_to_req(struct dbc_port *port, char *packet)2929+{3030+ unsigned int len;3131+3232+ len = kfifo_len(&port->port.xmit_fifo);3333+3434+ if (len == 0)3535+ return 0;3636+3737+ len = min(len, DBC_MAX_PACKET);3838+3939+ if (port->tx_boundary)4040+ len = min(port->tx_boundary, len);4141+4242+ len = kfifo_out(&port->port.xmit_fifo, packet, len);4343+4444+ if (port->tx_boundary)4545+ port->tx_boundary -= len;4646+4747+ return len;4848+}4949+2750static int dbc_start_tx(struct dbc_port *port)2851 __releases(&port->port_lock)2952 __acquires(&port->port_lock)···59366037 while (!list_empty(pool)) {6138 req = list_entry(pool->next, struct dbc_request, list_pool);6262- len = kfifo_out(&port->port.xmit_fifo, req->buf, DBC_MAX_PACKET);3939+ len = dbc_kfifo_to_req(port, req->buf);6340 if (len == 0)6441 break;6542 do_tty_wake = true;···223200{224201 struct dbc_port *port = tty->driver_data;225202 unsigned long flags;203203+ unsigned int written = 0;226204227205 spin_lock_irqsave(&port->port_lock, flags);228228- if (count)229229- count = kfifo_in(&port->port.xmit_fifo, buf, count);230230- dbc_start_tx(port);206206+207207+ /*208208+ * Treat tty write as one usb transfer. Make sure the writes are turned209209+ * into TRB request having the same size boundaries as the tty writes.210210+ * Don't add data to kfifo before previous write is turned into TRBs211211+ */212212+ if (port->tx_boundary) {213213+ spin_unlock_irqrestore(&port->port_lock, flags);214214+ return 0;215215+ }216216+217217+ if (count) {218218+ written = kfifo_in(&port->port.xmit_fifo, buf, count);219219+220220+ if (written == count)221221+ port->tx_boundary = kfifo_len(&port->port.xmit_fifo);222222+223223+ dbc_start_tx(port);224224+ }225225+231226 spin_unlock_irqrestore(&port->port_lock, flags);232227233233- return count;228228+ return written;234229}235230236231static int dbc_tty_put_char(struct tty_struct *tty, u8 ch)···282241283242 spin_lock_irqsave(&port->port_lock, flags);284243 room = kfifo_avail(&port->port.xmit_fifo);244244+245245+ if (port->tx_boundary)246246+ room = 0;247247+285248 spin_unlock_irqrestore(&port->port_lock, flags);286249287250 return room;
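The `tx_boundary` logic above makes each tty write map onto USB transfers with matching size boundaries: the fifo holds bytes from at most one write at a time, and each packet fill is clipped to both the packet size and the remaining boundary. A byte-array sketch of the clipping rule (plain buffer instead of a kfifo; `MAX_PACKET` stands in for `DBC_MAX_PACKET`):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_PACKET 4	/* illustrative stand-in for DBC_MAX_PACKET */

struct tx_state {
	char fifo[64];
	size_t fifo_len;	/* bytes currently queued */
	size_t tx_boundary;	/* bytes of the current write not yet packetized */
};

/* Pull at most one packet's worth, never crossing the write boundary. */
static size_t fifo_to_packet(struct tx_state *t, char *pkt)
{
	size_t len = t->fifo_len;

	if (len == 0)
		return 0;
	if (len > MAX_PACKET)
		len = MAX_PACKET;
	if (t->tx_boundary && len > t->tx_boundary)
		len = t->tx_boundary;

	memcpy(pkt, t->fifo, len);
	memmove(t->fifo, t->fifo + len, t->fifo_len - len);
	t->fifo_len -= len;
	if (t->tx_boundary)
		t->tx_boundary -= len;
	return len;
}
```

As in the patch, a 6-byte write packetizes as 4 + 2 rather than letting the trailing 2 bytes coalesce with the next write's data, so the TRB boundaries reproduce the tty write boundaries.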
+30-38
drivers/usb/host/xhci-ring.c
···10231023 td_to_noop(xhci, ring, cached_td, false);10241024 cached_td->cancel_status = TD_CLEARED;10251025 }10261026-10261026+ td_to_noop(xhci, ring, td, false);10271027 td->cancel_status = TD_CLEARING_CACHE;10281028 cached_td = td;10291029 break;···27752775 return 0;27762776 }2777277727782778+ /*27792779+ * xhci 4.10.2 states isoc endpoints should continue27802780+ * processing the next TD if there was an error mid TD.27812781+ * So hosts like NEC don't generate an event for the last27822782+ * isoc TRB even if the IOC flag is set.27832783+ * xhci 4.9.1 states that if there are errors in multi-TRB27842784+ * TDs xHC should generate an error for that TRB, and if xHC27852785+ * proceeds to the next TD it should generate an event for27862786+ * any TRB with IOC flag on the way. Other hosts follow this.27872787+ *27882788+ * We wait for the final IOC event, but if we get an event27892789+ * anywhere outside this TD, just give it back already.27902790+ */27912791+ td = list_first_entry_or_null(&ep_ring->td_list, struct xhci_td, td_list);27922792+27932793+ if (td && td->error_mid_td && !trb_in_td(xhci, td, ep_trb_dma, false)) {27942794+ xhci_dbg(xhci, "Missing TD completion event after mid TD error\n");27952795+ ep_ring->dequeue = td->last_trb;27962796+ ep_ring->deq_seg = td->last_trb_seg;27972797+ inc_deq(xhci, ep_ring);27982798+ xhci_td_cleanup(xhci, td, ep_ring, td->status);27992799+ }28002800+27782801 if (list_empty(&ep_ring->td_list)) {27792802 /*27802803 * Don't print wanings if ring is empty due to a stopped endpoint generating an···28592836 return 0;28602837 }2861283828622862- /*28632863- * xhci 4.10.2 states isoc endpoints should continue28642864- * processing the next TD if there was an error mid TD.28652865- * So host like NEC don't generate an event for the last28662866- * isoc TRB even if the IOC flag is set.28672867- * xhci 4.9.1 states that if there are errors in mult-TRB28682868- * TDs xHC should generate an error for that TRB, and if xHC28692869- * proceeds to the next TD it should genete an event for28702870- * any TRB with IOC flag on the way. Other host follow this.28712871- * So this event might be for the next TD.28722872- */28732873- if (td->error_mid_td &&28742874- !list_is_last(&td->td_list, &ep_ring->td_list)) {28752875- struct xhci_td *td_next = list_next_entry(td, td_list);28392839+ /* HC is busted, give up! */28402840+ xhci_err(xhci,28412841+ "ERROR Transfer event TRB DMA ptr not part of current TD ep_index %d comp_code %u\n",28422842+ ep_index, trb_comp_code);28432843+ trb_in_td(xhci, td, ep_trb_dma, true);2876284428772877- ep_seg = trb_in_td(xhci, td_next, ep_trb_dma, false);28782878- if (ep_seg) {28792879- /* give back previous TD, start handling new */28802880- xhci_dbg(xhci, "Missing TD completion event after mid TD error\n");28812881- ep_ring->dequeue = td->last_trb;28822882- ep_ring->deq_seg = td->last_trb_seg;28832883- inc_deq(xhci, ep_ring);28842884- xhci_td_cleanup(xhci, td, ep_ring, td->status);28852885- td = td_next;28862886- }28872887- }28882888-28892889- if (!ep_seg) {28902890- /* HC is busted, give up! */28912891- xhci_err(xhci,28922892- "ERROR Transfer event TRB DMA ptr not "28932893- "part of current TD ep_index %d "28942894- "comp_code %u\n", ep_index,28952895- trb_comp_code);28962896- trb_in_td(xhci, td, ep_trb_dma, true);28972897-28982898- return -ESHUTDOWN;28992899- }28452845+ return -ESHUTDOWN;29002846 }2901284729022848 if (ep->skip) {
+1-1
drivers/usb/host/xhci-tegra.c
···21832183 goto out;21842184 }2185218521862186- for (i = 0; i < tegra->num_usb_phys; i++) {21862186+ for (i = 0; i < xhci->usb2_rhub.num_ports; i++) {21872187 if (!xhci->usb2_rhub.ports[i])21882188 continue;21892189 portsc = readl(xhci->usb2_rhub.ports[i]->addr);
···225225226226 opt_set(thr->opts, stdio, (u64)(unsigned long)&thr->thr.stdio);227227 opt_set(thr->opts, read_only, 1);228228+ opt_set(thr->opts, ratelimit_errors, 0);228229229230 /* We need request_key() to be called before we punt to kthread: */230231 opt_set(thr->opts, nostart, true);
+14-1
fs/bcachefs/darray.c
···2233#include <linux/log2.h>44#include <linux/slab.h>55+#include <linux/vmalloc.h>56#include "darray.h"6778int __bch2_darray_resize_noprof(darray_char *d, size_t element_size, size_t new_size, gfp_t gfp)···109 if (new_size > d->size) {1110 new_size = roundup_pow_of_two(new_size);12111313- void *data = kvmalloc_array_noprof(new_size, element_size, gfp);1212+ /*1313+ * This is a workaround: kvmalloc() doesn't support > INT_MAX1414+ * allocations, but vmalloc() does.1515+ * The limit needs to be lifted from kvmalloc, and when it is1616+ * we'll go back to just using that.1717+ */1818+ size_t bytes;1919+ if (unlikely(check_mul_overflow(new_size, element_size, &bytes)))2020+ return -ENOMEM;2121+2222+ void *data = likely(bytes < INT_MAX)2323+ ? kvmalloc_noprof(bytes, gfp)2424+ : vmalloc_noprof(bytes);1425 if (!data)1526 return -ENOMEM;16
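The darray fix above splits allocation sizing into an explicit overflow check plus a kvmalloc/vmalloc choice, because kvmalloc rejects allocations above INT_MAX. The two decisions can be sketched with the compiler builtin that the kernel's check_mul_overflow() wraps (gcc/clang; helper names are ours):

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/*
 * Compute nmemb * size into *bytes; return nonzero on overflow,
 * mirroring check_mul_overflow()'s "true means it overflowed".
 */
static int mul_overflows(size_t nmemb, size_t size, size_t *bytes)
{
	return __builtin_mul_overflow(nmemb, size, bytes);
}

/* 1 = kvmalloc path is usable, 0 = fall back to plain vmalloc. */
static int use_kvmalloc(size_t bytes)
{
	return bytes < (size_t)INT_MAX;
}
```

Note the ordering the patch enforces: the multiplication is checked *before* the size is compared against INT_MAX, so a wrapped product can never masquerade as a small allocation.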
···377377 * check for missing subvolume before fpunch, as in resume we don't want378378 * it to be a fatal error379379 */380380- ret = __bch2_subvolume_get_snapshot(trans, inum.subvol, &snapshot, warn_errors);380380+ ret = lockrestart_do(trans, __bch2_subvolume_get_snapshot(trans, inum.subvol, &snapshot, warn_errors));381381 if (ret)382382 return ret;383383
+4-4
fs/bcachefs/io_read.c
···409409 bch2_trans_begin(trans);410410 rbio->bio.bi_status = 0;411411412412- k = bch2_btree_iter_peek_slot(&iter);413413- if (bkey_err(k))412412+ ret = lockrestart_do(trans, bkey_err(k = bch2_btree_iter_peek_slot(&iter)));413413+ if (ret)414414 goto err;415415416416 bch2_bkey_buf_reassemble(&sk, c, k);···557557558558static noinline void bch2_rbio_narrow_crcs(struct bch_read_bio *rbio)559559{560560- bch2_trans_do(rbio->c, NULL, NULL, BCH_TRANS_COMMIT_no_enospc,561561- __bch2_rbio_narrow_crcs(trans, rbio));560560+ bch2_trans_commit_do(rbio->c, NULL, NULL, BCH_TRANS_COMMIT_no_enospc,561561+ __bch2_rbio_narrow_crcs(trans, rbio));562562}563563564564/* Inner part that may run in process context */
+2-2
fs/bcachefs/io_write.c
···14371437 * freeing up space on specific disks, which means that14381438 * allocations for specific disks may hang arbitrarily long:14391439 */14401440- ret = bch2_trans_do(c, NULL, NULL, 0,14401440+ ret = bch2_trans_run(c, lockrestart_do(trans,14411441 bch2_alloc_sectors_start_trans(trans,14421442 op->target,14431443 op->opts.erasure_code && !(op->flags & BCH_WRITE_CACHED),···14471447 op->nr_replicas_required,14481448 op->watermark,14491449 op->flags,14501450- &op->cl, &wp));14501450+ &op->cl, &wp)));14511451 if (unlikely(ret)) {14521452 if (bch2_err_matches(ret, BCH_ERR_operation_blocked))14531453 break;
···347347 return di;348348 }349349 /* Adjust return code if the key was not found in the next leaf. */350350- if (ret > 0)351351- ret = 0;350350+ if (ret >= 0)351351+ ret = -ENOENT;352352353353 return ERR_PTR(ret);354354}
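The dir-item fix above matters because the function returns an ERR_PTR()-encoded pointer: returning 0 would become ERR_PTR(0) == NULL, which callers that only test IS_ERR() would happily dereference. A minimal userspace model of that pointer-encoded errno convention (simplified, not the kernel's err.h):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_ERRNO 4095

/* Encode a small negative errno in a pointer, as the kernel's ERR_PTR() does. */
static inline void *err_ptr(long error)
{
	return (void *)(intptr_t)error;
}

static inline long ptr_err(const void *ptr)
{
	return (long)(intptr_t)ptr;
}

static inline int is_err(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/*
 * Mirror of the fix above: a lookup that walked to the next leaf and
 * did not find the key must return ERR_PTR(-ENOENT), not ERR_PTR(0),
 * so IS_ERR()-only callers never see a bogus "success" pointer.
 */
static void *lookup_result(int found, int ret)
{
	if (found)
		return "item";		/* placeholder for a real item */
	if (ret >= 0)			/* key not found in the next leaf */
		ret = -ENOENT;
	return err_ptr(ret);
}
```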
···243243/*244244 * Handle the on-disk data extents merge for @prev and @next.245245 *246246+ * @prev: left extent to merge247247+ * @next: right extent to merge248248+ * @merged: the extent we will not discard after the merge; updated with new values249249+ *250250+ * After this, one of the two extents is the new merged extent and the other is251251+ * removed from the tree and likely freed. Note that @merged is one of @prev/@next252252+ * so there is const/non-const aliasing occurring here.253253+ *246254 * Only touches disk_bytenr/disk_num_bytes/offset/ram_bytes.247255 * For now only uncompressed regular extent can be merged.248248- *249249- * @prev and @next will be both updated to point to the new merged range.250250- * Thus one of them should be removed by the caller.251256 */252252-static void merge_ondisk_extents(struct extent_map *prev, struct extent_map *next)257257+static void merge_ondisk_extents(const struct extent_map *prev, const struct extent_map *next,258258+ struct extent_map *merged)253259{254260 u64 new_disk_bytenr;255261 u64 new_disk_num_bytes;···290284 new_disk_bytenr;291285 new_offset = prev->disk_bytenr + prev->offset - new_disk_bytenr;292286293293- prev->disk_bytenr = new_disk_bytenr;294294- prev->disk_num_bytes = new_disk_num_bytes;295295- prev->ram_bytes = new_disk_num_bytes;296296- prev->offset = new_offset;297297-298298- next->disk_bytenr = new_disk_bytenr;299299- next->disk_num_bytes = new_disk_num_bytes;300300- next->ram_bytes = new_disk_num_bytes;301301- next->offset = new_offset;287287+ merged->disk_bytenr = new_disk_bytenr;288288+ merged->disk_num_bytes = new_disk_num_bytes;289289+ merged->ram_bytes = new_disk_num_bytes;290290+ merged->offset = new_offset;302291}303292304293static void dump_extent_map(struct btrfs_fs_info *fs_info, const char *prefix,···362361 em->generation = max(em->generation, merge->generation);363362364363 if (em->disk_bytenr < EXTENT_MAP_LAST_BYTE)365365- merge_ondisk_extents(merge, em);364364+ 
merge_ondisk_extents(merge, em, em);366365 em->flags |= EXTENT_FLAG_MERGED;367366368367 validate_extent_map(fs_info, em);···379378 if (rb && can_merge_extent_map(merge) && mergeable_maps(em, merge)) {380379 em->len += merge->len;381380 if (em->disk_bytenr < EXTENT_MAP_LAST_BYTE)382382- merge_ondisk_extents(em, merge);381381+ merge_ondisk_extents(em, merge, em);383382 validate_extent_map(fs_info, em);384383 rb_erase(&merge->rb_node, &tree->root);385384 RB_CLEAR_NODE(&merge->rb_node);
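A hedged sketch of the rewritten merge contract: every field of the surviving extent is computed from const inputs before anything is written, so the function stays correct even when @merged aliases one of them. The field names follow the hunk, but the choice of the smaller disk_bytenr as the new base and the contiguity assumption for the merged length are simplifications for illustration:

```c
#include <assert.h>
#include <stdint.h>

struct em {			/* trimmed-down extent_map, illustrative only */
	uint64_t disk_bytenr;
	uint64_t disk_num_bytes;
	uint64_t offset;
	uint64_t ram_bytes;
};

/*
 * Merge @prev and @next into @merged. All reads happen before any write,
 * so @merged may alias @prev or @next, matching the const/non-const
 * aliasing noted in the kernel comment above.
 */
static void merge_ondisk(const struct em *prev, const struct em *next,
			 struct em *merged)
{
	uint64_t new_disk_bytenr = prev->disk_bytenr < next->disk_bytenr ?
				   prev->disk_bytenr : next->disk_bytenr;
	/* Assumption: the two extents are physically contiguous. */
	uint64_t new_disk_num_bytes = prev->disk_num_bytes + next->disk_num_bytes;
	/* Rebase the file-data offset against the new base, as in the hunk. */
	uint64_t new_offset = prev->disk_bytenr + prev->offset - new_disk_bytenr;

	merged->disk_bytenr = new_disk_bytenr;
	merged->disk_num_bytes = new_disk_num_bytes;
	merged->ram_bytes = new_disk_num_bytes;
	merged->offset = new_offset;
}

static uint64_t demo_merged_offset(void)
{
	struct em prev = { .disk_bytenr = 8192, .disk_num_bytes = 4096, .offset = 512 };
	struct em next = { .disk_bytenr = 4096, .disk_num_bytes = 4096, .offset = 0 };
	struct em merged;

	merge_ondisk(&prev, &next, &merged);
	return merged.offset;	/* 8192 + 512 - 4096 */
}
```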
+2-5
fs/btrfs/inode.c
···43684368 */43694369 if (btrfs_ino(inode) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID) {43704370 di = btrfs_search_dir_index_item(root, path, dir_ino, &fname.disk_name);43714371- if (IS_ERR_OR_NULL(di)) {43724372- if (!di)43734373- ret = -ENOENT;43744374- else43754375- ret = PTR_ERR(di);43714371+ if (IS_ERR(di)) {43724372+ ret = PTR_ERR(di);43764373 btrfs_abort_transaction(trans, ret);43774374 goto out;43784375 }
···121121#define BTRFS_QGROUP_RUNTIME_FLAG_CANCEL_RESCAN (1ULL << 63)122122#define BTRFS_QGROUP_RUNTIME_FLAG_NO_ACCOUNTING (1ULL << 62)123123124124+#define BTRFS_QGROUP_DROP_SUBTREE_THRES_DEFAULT (3)125125+124126/*125127 * Record a dirty extent, and info qgroup to update quota on it126128 */
+10-2
fs/btrfs/super.c
···340340 fallthrough;341341 case Opt_compress:342342 case Opt_compress_type:343343+ /*344344+ * Provide the same semantics as older kernels that don't use fs345345+ * context, specifying the "compress" option clears346346+ * "force-compress" without the need to pass347347+ * "compress-force=[no|none]" before specifying "compress".348348+ */349349+ if (opt != Opt_compress_force && opt != Opt_compress_force_type)350350+ btrfs_clear_opt(ctx->mount_opt, FORCE_COMPRESS);351351+343352 if (opt == Opt_compress || opt == Opt_compress_force) {344353 ctx->compress_type = BTRFS_COMPRESS_ZLIB;345354 ctx->compress_level = BTRFS_ZLIB_DEFAULT_LEVEL;···15071498 sync_filesystem(sb);15081499 set_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state);1509150015101510- if (!mount_reconfigure &&15111511- !btrfs_check_options(fs_info, &ctx->mount_opt, fc->sb_flags))15011501+ if (!btrfs_check_options(fs_info, &ctx->mount_opt, fc->sb_flags))15121502 return -EINVAL;1513150315141504 ret = btrfs_check_features(fs_info, !(fc->sb_flags & SB_RDONLY));
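The restored semantics can be modeled as plain bit flags: a later "compress" clears a previously set "compress-force", while "compress-force" sets both. The flag names and values here are illustrative, not btrfs's actual mount_opt bits:

```c
#include <assert.h>
#include <stdbool.h>

enum {
	OPT_COMPRESS       = 1 << 0,
	OPT_FORCE_COMPRESS = 1 << 1,
};

/*
 * Model of the option semantics above: plain "compress" drops a prior
 * "compress-force" without needing "compress-force=no" first.
 */
static unsigned int apply_compress_opt(unsigned int mount_opt, bool force)
{
	if (force)
		mount_opt |= OPT_FORCE_COMPRESS;
	else
		mount_opt &= ~OPT_FORCE_COMPRESS;	/* "compress" clears force */
	mount_opt |= OPT_COMPRESS;
	return mount_opt;
}
```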
···11451145}1146114611471147/*11481148+ * When a short write occurs, the filesystem might need to use ->iomap_end11491149+ * to remove space reservations created in ->iomap_begin.11501150+ *11511151+ * For filesystems that use delayed allocation, there can be dirty pages over11521152+ * the delalloc extent outside the range of a short write but still within the11531153+ * delalloc extent allocated for this iomap if the write raced with page11541154+ * faults.11551155+ *11481156 * Punch out all the delalloc blocks in the range given except for those that11491157 * have dirty data still pending in the page cache - those are going to be11501158 * written and so must still retain the delalloc backing for writeback.11591159+ *11601160+ * The punch() callback *must* only punch delalloc extents in the range passed11611161+ * to it. It must skip over all other types of extents in the range and leave11621162+ * them completely unchanged. It must do this punch atomically with respect to11631163+ * other extent modifications.11641164+ *11651165+ * The punch() callback may be called with a folio locked to prevent writeback11661166+ * extent allocation racing at the edge of the range we are currently punching.11671167+ * The locked folio may or may not cover the range being punched, so it is not11681168+ * safe for the punch() callback to lock folios itself.11691169+ *11701170+ * Lock order is:11711171+ *11721172+ * inode->i_rwsem (shared or exclusive)11731173+ * inode->i_mapping->invalidate_lock (exclusive)11741174+ * folio_lock()11751175+ * ->punch11761176+ * internal filesystem allocation lock11511177 *11521178 * As we are scanning the page cache for data, we don't need to reimplement the11531179 * wheel - mapping_seek_hole_data() does exactly what we need to identify the···12031177 * require sprinkling this code with magic "+ 1" and "- 1" arithmetic and expose12041178 * the code to subtle off-by-one bugs....12051179 */12061206-static void iomap_write_delalloc_release(struct 
inode *inode, loff_t start_byte,11801180+void iomap_write_delalloc_release(struct inode *inode, loff_t start_byte,12071181 loff_t end_byte, unsigned flags, struct iomap *iomap,12081182 iomap_punch_t punch)12091183{···12111185 loff_t scan_end_byte = min(i_size_read(inode), end_byte);1212118612131187 /*12141214- * Lock the mapping to avoid races with page faults re-instantiating12151215- * folios and dirtying them via ->page_mkwrite whilst we walk the12161216- * cache and perform delalloc extent removal. Failing to do this can12171217- * leave dirty pages with no space reservation in the cache.11881188+ * The caller must hold invalidate_lock to avoid races with page faults11891189+ * re-instantiating folios and dirtying them via ->page_mkwrite whilst11901190+ * we walk the cache and perform delalloc extent removal. Failing to do11911191+ * this can leave dirty pages with no space reservation in the cache.12181192 */12191219- filemap_invalidate_lock(inode->i_mapping);11931193+ lockdep_assert_held_write(&inode->i_mapping->invalidate_lock);11941194+12201195 while (start_byte < scan_end_byte) {12211196 loff_t data_end;12221197···12341207 if (start_byte == -ENXIO || start_byte == scan_end_byte)12351208 break;12361209 if (WARN_ON_ONCE(start_byte < 0))12371237- goto out_unlock;12101210+ return;12381211 WARN_ON_ONCE(start_byte < punch_start_byte);12391212 WARN_ON_ONCE(start_byte > scan_end_byte);12401213···12451218 data_end = mapping_seek_hole_data(inode->i_mapping, start_byte,12461219 scan_end_byte, SEEK_HOLE);12471220 if (WARN_ON_ONCE(data_end < 0))12481248- goto out_unlock;12211221+ return;1249122212501223 /*12511224 * If we race with post-direct I/O invalidation of the page cache,···12671240 if (punch_start_byte < end_byte)12681241 punch(inode, punch_start_byte, end_byte - punch_start_byte,12691242 iomap);12701270-out_unlock:12711271- filemap_invalidate_unlock(inode->i_mapping);12721243}12731273-12741274-/*12751275- * When a short write occurs, the filesystem may need to 
remove reserved space12761276- * that was allocated in ->iomap_begin from it's ->iomap_end method. For12771277- * filesystems that use delayed allocation, we need to punch out delalloc12781278- * extents from the range that are not dirty in the page cache. As the write can12791279- * race with page faults, there can be dirty pages over the delalloc extent12801280- * outside the range of a short write but still within the delalloc extent12811281- * allocated for this iomap.12821282- *12831283- * This function uses [start_byte, end_byte) intervals (i.e. open ended) to12841284- * simplify range iterations.12851285- *12861286- * The punch() callback *must* only punch delalloc extents in the range passed12871287- * to it. It must skip over all other types of extents in the range and leave12881288- * them completely unchanged. It must do this punch atomically with respect to12891289- * other extent modifications.12901290- *12911291- * The punch() callback may be called with a folio locked to prevent writeback12921292- * extent allocation racing at the edge of the range we are currently punching.12931293- * The locked folio may or may not cover the range being punched, so it is not12941294- * safe for the punch() callback to lock folios itself.12951295- *12961296- * Lock order is:12971297- *12981298- * inode->i_rwsem (shared or exclusive)12991299- * inode->i_mapping->invalidate_lock (exclusive)13001300- * folio_lock()13011301- * ->punch13021302- * internal filesystem allocation lock13031303- */13041304-void iomap_file_buffered_write_punch_delalloc(struct inode *inode,13051305- loff_t pos, loff_t length, ssize_t written, unsigned flags,13061306- struct iomap *iomap, iomap_punch_t punch)13071307-{13081308- loff_t start_byte;13091309- loff_t end_byte;13101310- unsigned int blocksize = i_blocksize(inode);13111311-13121312- if (iomap->type != IOMAP_DELALLOC)13131313- return;13141314-13151315- /* If we didn't reserve the blocks, we're not allowed to punch them. 
*/13161316- if (!(iomap->flags & IOMAP_F_NEW))13171317- return;13181318-13191319- /*13201320- * start_byte refers to the first unused block after a short write. If13211321- * nothing was written, round offset down to point at the first block in13221322- * the range.13231323- */13241324- if (unlikely(!written))13251325- start_byte = round_down(pos, blocksize);13261326- else13271327- start_byte = round_up(pos + written, blocksize);13281328- end_byte = round_up(pos + length, blocksize);13291329-13301330- /* Nothing to do if we've written the entire delalloc extent */13311331- if (start_byte >= end_byte)13321332- return;13331333-13341334- iomap_write_delalloc_release(inode, start_byte, end_byte, flags, iomap,13351335- punch);13361336-}13371337-EXPORT_SYMBOL_GPL(iomap_file_buffered_write_punch_delalloc);12441244+EXPORT_SYMBOL_GPL(iomap_write_delalloc_release);1338124513391246static loff_t iomap_unshare_iter(struct iomap_iter *iter)13401247{
···39443944 new = copy_tree(old, old->mnt.mnt_root, copy_flags);39453945 if (IS_ERR(new)) {39463946 namespace_unlock();39473947- free_mnt_ns(new_ns);39473947+ ns_free_inum(&new_ns->ns);39483948+ dec_mnt_namespaces(new_ns->ucounts);39493949+ mnt_ns_release(new_ns);39483950 return ERR_CAST(new);39493951 }39503952 if (user_ns != ns->user_ns) {
+14-33
fs/netfs/buffered_read.c
···6767 * Decant the list of folios to read into a rolling buffer.6868 */6969static size_t netfs_load_buffer_from_ra(struct netfs_io_request *rreq,7070- struct folio_queue *folioq)7070+ struct folio_queue *folioq,7171+ struct folio_batch *put_batch)7172{7273 unsigned int order, nr;7374 size_t size = 0;···8382 order = folio_order(folio);8483 folioq->orders[i] = order;8584 size += PAGE_SIZE << order;8585+8686+ if (!folio_batch_add(put_batch, folio))8787+ folio_batch_release(put_batch);8688 }87898890 for (int i = nr; i < folioq_nr_slots(folioq); i++)···124120 * that we will need to release later - but we don't want to do125121 * that until after we've started the I/O.126122 */123123+ struct folio_batch put_batch;124124+125125+ folio_batch_init(&put_batch);127126 while (rreq->submitted < subreq->start + rsize) {128127 struct folio_queue *tail = rreq->buffer_tail, *new;129128 size_t added;···139132 new->prev = tail;140133 tail->next = new;141134 rreq->buffer_tail = new;142142- added = netfs_load_buffer_from_ra(rreq, new);135135+ added = netfs_load_buffer_from_ra(rreq, new, &put_batch);143136 rreq->iter.count += added;144137 rreq->submitted += added;145138 }139139+ folio_batch_release(&put_batch);146140 }147141148142 subreq->len = rsize;···356348static int netfs_prime_buffer(struct netfs_io_request *rreq)357349{358350 struct folio_queue *folioq;351351+ struct folio_batch put_batch;359352 size_t added;360353361354 folioq = kmalloc(sizeof(*folioq), GFP_KERNEL);···369360 rreq->submitted = rreq->start;370361 iov_iter_folio_queue(&rreq->iter, ITER_DEST, folioq, 0, 0, 0);371362372372- added = netfs_load_buffer_from_ra(rreq, folioq);363363+ folio_batch_init(&put_batch);364364+ added = netfs_load_buffer_from_ra(rreq, folioq, &put_batch);365365+ folio_batch_release(&put_batch);373366 rreq->iter.count += added;374367 rreq->submitted += added;375368 return 0;376376-}377377-378378-/*379379- * Drop the ref on each folio that we inherited from the VM readahead code. 
We380380- * still have the folio locks to pin the page until we complete the I/O.381381- *382382- * Note that we can't just release the batch in each queue struct as we use the383383- * occupancy count in other places.384384- */385385-static void netfs_put_ra_refs(struct folio_queue *folioq)386386-{387387- struct folio_batch fbatch;388388-389389- folio_batch_init(&fbatch);390390- while (folioq) {391391- for (unsigned int slot = 0; slot < folioq_count(folioq); slot++) {392392- struct folio *folio = folioq_folio(folioq, slot);393393- if (!folio)394394- continue;395395- trace_netfs_folio(folio, netfs_folio_trace_read_put);396396- if (!folio_batch_add(&fbatch, folio))397397- folio_batch_release(&fbatch);398398- }399399- folioq = folioq->next;400400- }401401-402402- folio_batch_release(&fbatch);403369}404370405371/**···419435 if (netfs_prime_buffer(rreq) < 0)420436 goto cleanup_free;421437 netfs_read_to_pagecache(rreq);422422-423423- /* Release the folio refs whilst we're waiting for the I/O. */424424- netfs_put_ra_refs(rreq->buffer);425438426439 netfs_put_request(rreq, true, netfs_rreq_trace_put_return);427440 return;
···11291129 trace_ocfs2_setattr(inode, dentry,11301130 (unsigned long long)OCFS2_I(inode)->ip_blkno,11311131 dentry->d_name.len, dentry->d_name.name,11321132- attr->ia_valid, attr->ia_mode,11331133- from_kuid(&init_user_ns, attr->ia_uid),11341134- from_kgid(&init_user_ns, attr->ia_gid));11321132+ attr->ia_valid,11331133+ attr->ia_valid & ATTR_MODE ? attr->ia_mode : 0,11341134+ attr->ia_valid & ATTR_UID ?11351135+ from_kuid(&init_user_ns, attr->ia_uid) : 0,11361136+ attr->ia_valid & ATTR_GID ?11371137+ from_kgid(&init_user_ns, attr->ia_gid) : 0);1135113811361139 /* ensuring we don't even attempt to truncate a symlink */11371140 if (S_ISLNK(inode->i_mode))
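The trace fix only reads ia_mode/ia_uid/ia_gid when the corresponding ATTR_* validity bit is set, since setattr callers leave unrequested fields uninitialized. A minimal model of that guarded access (the struct layout and flag values are illustrative):

```c
#include <assert.h>

#define ATTR_MODE (1 << 0)
#define ATTR_UID  (1 << 1)

struct attr {
	unsigned int ia_valid;
	unsigned int ia_mode;
	unsigned int ia_uid;
};

/* Read ia_mode only when ATTR_MODE says it was initialized; else trace 0. */
static unsigned int attr_mode_or_zero(const struct attr *a)
{
	return (a->ia_valid & ATTR_MODE) ? a->ia_mode : 0;
}

static unsigned int demo_mode_unset(void)
{
	struct attr a = { .ia_valid = ATTR_UID, .ia_mode = 0755, .ia_uid = 1000 };

	return attr_mode_or_zero(&a);	/* ATTR_MODE not set */
}

static unsigned int demo_mode_set(void)
{
	struct attr a = { .ia_valid = ATTR_MODE, .ia_mode = 0755 };

	return attr_mode_or_zero(&a);
}
```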
+2
fs/open.c
···1457145714581458 if (unlikely(usize < OPEN_HOW_SIZE_VER0))14591459 return -EINVAL;14601460+ if (unlikely(usize > PAGE_SIZE))14611461+ return -E2BIG;1460146214611463 err = copy_struct_from_user(&tmp, sizeof(tmp), how, usize);14621464 if (err)
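The new bound complements copy_struct_from_user(): sizes below the first published struct version are rejected with -EINVAL, and absurdly large sizes now fail fast with -E2BIG instead of being probed for trailing non-zero bytes. A userspace sketch of just the size validation, with PAGE_SIZE taken as 4096 and the v0 struct size as 24 (sizeof three u64 fields) for illustration:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

#define OPEN_HOW_SIZE_VER0 24	/* flags + mode + resolve, each u64 */
#define PAGE_SIZE_LIMIT 4096	/* stand-in for PAGE_SIZE */

/*
 * Validate the user-supplied size of an extensible struct, as openat2()
 * does above before calling copy_struct_from_user().
 */
static int check_usize(size_t usize)
{
	if (usize < OPEN_HOW_SIZE_VER0)
		return -EINVAL;
	if (usize > PAGE_SIZE_LIMIT)
		return -E2BIG;
	return 0;
}
```

Sizes between the two bounds are still legal: copy_struct_from_user() copies the known prefix and only fails with -E2BIG if any byte past the kernel's struct is non-zero.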
+1-1
fs/proc/fd.c
···7777 return single_open(file, seq_show, inode);7878}79798080-/**8080+/*8181 * Shared /proc/pid/fdinfo and /proc/pid/fdinfo/fd permission helper to ensure8282 * that the current task has PTRACE_MODE_READ in addition to the normal8383 * POSIX-like checks.
+10-6
fs/proc/task_mmu.c
···909909{910910 /*911911 * Don't forget to update Documentation/ on changes.912912+ *913913+ * The length of the second argument of mnemonics[]914914+ * needs to be 3 instead of previously set 2915915+ * (i.e. from [BITS_PER_LONG][2] to [BITS_PER_LONG][3])916916+ * to avoid spurious917917+ * -Werror=unterminated-string-initialization warning918918+ * with GCC 15912919 */913913- static const char mnemonics[BITS_PER_LONG][2] = {920920+ static const char mnemonics[BITS_PER_LONG][3] = {914921 /*915922 * In case if we meet a flag we don't know about.916923 */···994987 for (i = 0; i < BITS_PER_LONG; i++) {995988 if (!mnemonics[i][0])996989 continue;997997- if (vma->vm_flags & (1UL << i)) {998998- seq_putc(m, mnemonics[i][0]);999999- seq_putc(m, mnemonics[i][1]);10001000- seq_putc(m, ' ');10011001- }990990+ if (vma->vm_flags & (1UL << i))991991+ seq_printf(m, "%s ", mnemonics[i]);1002992 }1003993 seq_putc(m, '\n');1004994}
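The widening from [2] to [3] is about NUL termination: a two-character initializer exactly fills a char[2], leaving no terminator, so the switch to printing with "%s" would read past the array (and GCC 15 warns with -Wunterminated-string-initialization). A small standalone illustration with made-up mnemonics:

```c
#include <assert.h>
#include <string.h>

/*
 * With [2], "rd" would occupy both bytes and leave no room for the
 * terminating NUL; [3] keeps every entry a proper C string that is
 * safe to pass to "%s"-style formatting.
 */
static const char mnemonics[4][3] = {
	"rd", "wr", "ex", "sh",		/* sample VMA flag mnemonics */
};

static size_t mnemonic_len(int i)
{
	return strlen(mnemonics[i]);	/* safe: entries are NUL-terminated */
}
```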
···115115 ses->chans[chan_index].in_reconnect = false;116116}117117118118-bool119119-cifs_chan_in_reconnect(struct cifs_ses *ses,120120- struct TCP_Server_Info *server)121121-{122122- unsigned int chan_index = cifs_ses_get_chan_index(ses, server);123123-124124- if (chan_index == CIFS_INVAL_CHAN_INDEX)125125- return true; /* err on the safer side */126126-127127- return CIFS_CHAN_IN_RECONNECT(ses, chan_index);128128-}129129-130118void131119cifs_chan_set_need_reconnect(struct cifs_ses *ses,132120 struct TCP_Server_Info *server)···473485474486 ses->chans[chan_index].iface = iface;475487 spin_unlock(&ses->chan_lock);476476-}477477-478478-/*479479- * If server is a channel of ses, return the corresponding enclosing480480- * cifs_chan otherwise return NULL.481481- */482482-struct cifs_chan *483483-cifs_ses_find_chan(struct cifs_ses *ses, struct TCP_Server_Info *server)484484-{485485- int i;486486-487487- spin_lock(&ses->chan_lock);488488- for (i = 0; i < ses->chan_count; i++) {489489- if (ses->chans[i].server == server) {490490- spin_unlock(&ses->chan_lock);491491- return &ses->chans[i];492492- }493493- }494494- spin_unlock(&ses->chan_lock);495495- return NULL;496488}497489498490static int
+2-1
fs/smb/client/smb2ops.c
···11581158 struct cifs_fid fid;11591159 unsigned int size[1];11601160 void *data[1];11611161- struct smb2_file_full_ea_info *ea = NULL;11611161+ struct smb2_file_full_ea_info *ea;11621162 struct smb2_query_info_rsp *rsp;11631163 int rc, used_len = 0;11641164 int retries = 0, cur_sleep = 1;···11791179 if (!utf16_path)11801180 return -ENOMEM;1181118111821182+ ea = NULL;11821183 resp_buftype[0] = resp_buftype[1] = resp_buftype[2] = CIFS_NO_BUFFER;11831184 vars = kzalloc(sizeof(*vars), GFP_KERNEL);11841185 if (!vars) {
+9
fs/smb/client/smb2pdu.c
···33133313 return rc;3314331433153315 if (indatalen) {33163316+ unsigned int len;33173317+33183318+ if (WARN_ON_ONCE(smb3_encryption_required(tcon) &&33193319+ (check_add_overflow(total_len - 1,33203320+ ALIGN(indatalen, 8), &len) ||33213321+ len > MAX_CIFS_SMALL_BUFFER_SIZE))) {33223322+ cifs_small_buf_release(req);33233323+ return -EIO;33243324+ }33163325 /*33173326 * indatalen is usually small at a couple of bytes max, so33183327 * just allocate through generic pool
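The added WARN_ON_ONCE() guards the padded ioctl length with check_add_overflow(). A userspace sketch using the equivalent compiler builtin; the 448-byte limit stands in for MAX_CIFS_SMALL_BUFFER_SIZE and the macro names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

#define ALIGN8(x) (((x) + 7u) & ~7u)	/* like the kernel's ALIGN(x, 8) */
#define SMALL_BUF_LIMIT 448u		/* stand-in for MAX_CIFS_SMALL_BUFFER_SIZE */

/*
 * Mirror the sanity check above: the 8-byte-aligned input length added
 * to the header must neither wrap nor exceed the small request buffer.
 */
static bool ioctl_len_ok(unsigned int total_len, unsigned int indatalen)
{
	unsigned int len;

	if (__builtin_add_overflow(total_len - 1, ALIGN8(indatalen), &len))
		return false;
	return len <= SMALL_BUF_LIMIT;
}
```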
+1-1
fs/xfs/scrub/bmap_repair.c
···801801{802802 struct xrep_bmap *rb;803803 char *descr;804804- unsigned int max_bmbt_recs;804804+ xfs_extnum_t max_bmbt_recs;805805 bool large_extcount;806806 int error = 0;807807
+2-2
fs/xfs/xfs_aops.c
···116116 if (unlikely(error)) {117117 if (ioend->io_flags & IOMAP_F_SHARED) {118118 xfs_reflink_cancel_cow_range(ip, offset, size, true);119119- xfs_bmap_punch_delalloc_range(ip, offset,119119+ xfs_bmap_punch_delalloc_range(ip, XFS_DATA_FORK, offset,120120 offset + size);121121 }122122 goto done;···456456 * byte of the next folio. Hence the end offset is only dependent on the457457 * folio itself and not the start offset that is passed in.458458 */459459- xfs_bmap_punch_delalloc_range(ip, pos,459459+ xfs_bmap_punch_delalloc_range(ip, XFS_DATA_FORK, pos,460460 folio_pos(folio) + folio_size(folio));461461}462462
···348348}349349350350/*351351+ * Take care of zeroing post-EOF blocks when they might exist.352352+ *353353+ * Returns 0 if successfully, a negative error for a failure, or 1 if this354354+ * function dropped the iolock and reacquired it exclusively and the caller355355+ * needs to restart the write sanity checks.356356+ */357357+static ssize_t358358+xfs_file_write_zero_eof(359359+ struct kiocb *iocb,360360+ struct iov_iter *from,361361+ unsigned int *iolock,362362+ size_t count,363363+ bool *drained_dio)364364+{365365+ struct xfs_inode *ip = XFS_I(iocb->ki_filp->f_mapping->host);366366+ loff_t isize;367367+ int error;368368+369369+ /*370370+ * We need to serialise against EOF updates that occur in IO completions371371+ * here. We want to make sure that nobody is changing the size while372372+ * we do this check until we have placed an IO barrier (i.e. hold373373+ * XFS_IOLOCK_EXCL) that prevents new IO from being dispatched. The374374+ * spinlock effectively forms a memory barrier once we have375375+ * XFS_IOLOCK_EXCL so we are guaranteed to see the latest EOF value and376376+ * hence be able to correctly determine if we need to run zeroing.377377+ */378378+ spin_lock(&ip->i_flags_lock);379379+ isize = i_size_read(VFS_I(ip));380380+ if (iocb->ki_pos <= isize) {381381+ spin_unlock(&ip->i_flags_lock);382382+ return 0;383383+ }384384+ spin_unlock(&ip->i_flags_lock);385385+386386+ if (iocb->ki_flags & IOCB_NOWAIT)387387+ return -EAGAIN;388388+389389+ if (!*drained_dio) {390390+ /*391391+ * If zeroing is needed and we are currently holding the iolock392392+ * shared, we need to update it to exclusive which implies393393+ * having to redo all checks before.394394+ */395395+ if (*iolock == XFS_IOLOCK_SHARED) {396396+ xfs_iunlock(ip, *iolock);397397+ *iolock = XFS_IOLOCK_EXCL;398398+ xfs_ilock(ip, *iolock);399399+ iov_iter_reexpand(from, count);400400+ }401401+402402+ /*403403+ * We now have an IO submission barrier in place, but AIO can do404404+ * EOF updates during IO 
completion and hence we now need to405405+ * wait for all of them to drain. Non-AIO DIO will have drained406406+ * before we are given the XFS_IOLOCK_EXCL, and so for most407407+ * cases this wait is a no-op.408408+ */409409+ inode_dio_wait(VFS_I(ip));410410+ *drained_dio = true;411411+ return 1;412412+ }413413+414414+ trace_xfs_zero_eof(ip, isize, iocb->ki_pos - isize);415415+416416+ xfs_ilock(ip, XFS_MMAPLOCK_EXCL);417417+ error = xfs_zero_range(ip, isize, iocb->ki_pos - isize, NULL);418418+ xfs_iunlock(ip, XFS_MMAPLOCK_EXCL);419419+420420+ return error;421421+}422422+423423+/*351424 * Common pre-write limit and setup checks.352425 *353353- * Called with the iolocked held either shared and exclusive according to426426+ * Called with the iolock held either shared and exclusive according to354427 * @iolock, and returns with it held. Might upgrade the iolock to exclusive355428 * if called for a direct write beyond i_size.356429 */···433360 struct iov_iter *from,434361 unsigned int *iolock)435362{436436- struct file *file = iocb->ki_filp;437437- struct inode *inode = file->f_mapping->host;438438- struct xfs_inode *ip = XFS_I(inode);439439- ssize_t error = 0;363363+ struct inode *inode = iocb->ki_filp->f_mapping->host;440364 size_t count = iov_iter_count(from);441365 bool drained_dio = false;442442- loff_t isize;366366+ ssize_t error;443367444368restart:445369 error = generic_write_checks(iocb, from);···459389 * exclusively.460390 */461391 if (*iolock == XFS_IOLOCK_SHARED && !IS_NOSEC(inode)) {462462- xfs_iunlock(ip, *iolock);392392+ xfs_iunlock(XFS_I(inode), *iolock);463393 *iolock = XFS_IOLOCK_EXCL;464394 error = xfs_ilock_iocb(iocb, *iolock);465395 if (error) {···470400 }471401472402 /*473473- * If the offset is beyond the size of the file, we need to zero any403403+ * If the offset is beyond the size of the file, we need to zero all474404 * blocks that fall between the existing EOF and the start of this475475- * write. 
If zeroing is needed and we are currently holding the iolock476476- * shared, we need to update it to exclusive which implies having to477477- * redo all checks before.405405+ * write.478406 *479479- * We need to serialise against EOF updates that occur in IO completions480480- * here. We want to make sure that nobody is changing the size while we481481- * do this check until we have placed an IO barrier (i.e. hold the482482- * XFS_IOLOCK_EXCL) that prevents new IO from being dispatched. The483483- * spinlock effectively forms a memory barrier once we have the484484- * XFS_IOLOCK_EXCL so we are guaranteed to see the latest EOF value and485485- * hence be able to correctly determine if we need to run zeroing.486486- *487487- * We can do an unlocked check here safely as IO completion can only488488- * extend EOF. Truncate is locked out at this point, so the EOF can489489- * not move backwards, only forwards. Hence we only need to take the490490- * slow path and spin locks when we are at or beyond the current EOF.407407+ * We can do an unlocked check for i_size here safely as I/O completion408408+ * can only extend EOF. Truncate is locked out at this point, so the409409+ * EOF can not move backwards, only forwards. 
Hence we only need to take410410+ * the slow path when we are at or beyond the current EOF.491411 */492492- if (iocb->ki_pos <= i_size_read(inode))493493- goto out;494494-495495- spin_lock(&ip->i_flags_lock);496496- isize = i_size_read(inode);497497- if (iocb->ki_pos > isize) {498498- spin_unlock(&ip->i_flags_lock);499499-500500- if (iocb->ki_flags & IOCB_NOWAIT)501501- return -EAGAIN;502502-503503- if (!drained_dio) {504504- if (*iolock == XFS_IOLOCK_SHARED) {505505- xfs_iunlock(ip, *iolock);506506- *iolock = XFS_IOLOCK_EXCL;507507- xfs_ilock(ip, *iolock);508508- iov_iter_reexpand(from, count);509509- }510510- /*511511- * We now have an IO submission barrier in place, but512512- * AIO can do EOF updates during IO completion and hence513513- * we now need to wait for all of them to drain. Non-AIO514514- * DIO will have drained before we are given the515515- * XFS_IOLOCK_EXCL, and so for most cases this wait is a516516- * no-op.517517- */518518- inode_dio_wait(inode);519519- drained_dio = true;412412+ if (iocb->ki_pos > i_size_read(inode)) {413413+ error = xfs_file_write_zero_eof(iocb, from, iolock, count,414414+ &drained_dio);415415+ if (error == 1)520416 goto restart;521521- }522522-523523- trace_xfs_zero_eof(ip, isize, iocb->ki_pos - isize);524524- error = xfs_zero_range(ip, isize, iocb->ki_pos - isize, NULL);525417 if (error)526418 return error;527527- } else528528- spin_unlock(&ip->i_flags_lock);419419+ }529420530530-out:531421 return kiocb_modified(iocb);532422}533423
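The factored-out xfs_file_write_zero_eof() introduces a three-way return: 0 for done, a negative errno for failure, and 1 for "the iolock was upgraded, restart the write checks". A toy model of that restart convention (all names and state here are invented for illustration, not XFS API):

```c
#include <assert.h>
#include <stdbool.h>

enum lock_mode { LOCK_SHARED, LOCK_EXCL };

struct wctx {
	enum lock_mode iolock;
	bool drained_dio;
	bool needs_zeroing;
};

/*
 * Returns 0 when done, 1 when the lock was upgraded and the caller's
 * earlier checks are stale, mirroring the helper's contract above.
 */
static int write_zero_eof(struct wctx *c)
{
	if (!c->needs_zeroing)
		return 0;
	if (!c->drained_dio) {
		if (c->iolock == LOCK_SHARED)
			c->iolock = LOCK_EXCL;	/* upgrade invalidates checks */
		c->drained_dio = true;
		return 1;			/* caller must restart */
	}
	return 0;				/* zeroing would happen here */
}

/* Caller side: loop on 1, as xfs_file_write_checks() does with goto restart. */
static int demo_restarts(void)
{
	struct wctx c = { .iolock = LOCK_SHARED, .needs_zeroing = true };
	int restarts = 0;
	int ret;

	while ((ret = write_zero_eof(&c)) == 1)
		restarts++;
	return ret ? ret : restarts;
}
```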
+46-21
fs/xfs/xfs_iomap.c
···
 	int			allocfork = XFS_DATA_FORK;
 	int			error = 0;
 	unsigned int		lockmode = XFS_ILOCK_EXCL;
+	unsigned int		iomap_flags = 0;
 	u64			seq;
 
 	if (xfs_is_shutdown(mp))
···
 		}
 	}
 
+	/*
+	 * Flag newly allocated delalloc blocks with IOMAP_F_NEW so we punch
+	 * them out if the write happens to fail.
+	 */
+	iomap_flags |= IOMAP_F_NEW;
 	if (allocfork == XFS_COW_FORK) {
 		error = xfs_bmapi_reserve_delalloc(ip, allocfork, offset_fsb,
 				end_fsb - offset_fsb, prealloc_blocks, &cmap,
···
 	if (error)
 		goto out_unlock;
 
-	/*
-	 * Flag newly allocated delalloc blocks with IOMAP_F_NEW so we punch
-	 * them out if the write happens to fail.
-	 */
-	seq = xfs_iomap_inode_sequence(ip, IOMAP_F_NEW);
-	xfs_iunlock(ip, lockmode);
 	trace_xfs_iomap_alloc(ip, offset, count, allocfork, &imap);
-	return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, IOMAP_F_NEW, seq);
-
 found_imap:
-	seq = xfs_iomap_inode_sequence(ip, 0);
+	seq = xfs_iomap_inode_sequence(ip, iomap_flags);
 	xfs_iunlock(ip, lockmode);
-	return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0, seq);
+	return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, iomap_flags, seq);
 
 convert_delay:
 	xfs_iunlock(ip, lockmode);
···
 	return 0;
 
 found_cow:
-	seq = xfs_iomap_inode_sequence(ip, 0);
 	if (imap.br_startoff <= offset_fsb) {
-		error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0, seq);
+		error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0,
+				xfs_iomap_inode_sequence(ip, 0));
 		if (error)
 			goto out_unlock;
-		seq = xfs_iomap_inode_sequence(ip, IOMAP_F_SHARED);
-		xfs_iunlock(ip, lockmode);
-		return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags,
-				IOMAP_F_SHARED, seq);
+	} else {
+		xfs_trim_extent(&cmap, offset_fsb,
+				imap.br_startoff - offset_fsb);
 	}
 
-	xfs_trim_extent(&cmap, offset_fsb, imap.br_startoff - offset_fsb);
+	iomap_flags |= IOMAP_F_SHARED;
+	seq = xfs_iomap_inode_sequence(ip, iomap_flags);
 	xfs_iunlock(ip, lockmode);
-	return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, 0, seq);
+	return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, iomap_flags, seq);
 
 out_unlock:
 	xfs_iunlock(ip, lockmode);
···
 	loff_t			length,
 	struct iomap		*iomap)
 {
-	xfs_bmap_punch_delalloc_range(XFS_I(inode), offset, offset + length);
+	xfs_bmap_punch_delalloc_range(XFS_I(inode),
+			(iomap->flags & IOMAP_F_SHARED) ?
+				XFS_COW_FORK : XFS_DATA_FORK,
+			offset, offset + length);
 }
 
 static int
···
 	unsigned		flags,
 	struct iomap		*iomap)
 {
-	iomap_file_buffered_write_punch_delalloc(inode, offset, length, written,
-			flags, iomap, &xfs_buffered_write_delalloc_punch);
+	loff_t			start_byte, end_byte;
+
+	/* If we didn't reserve the blocks, we're not allowed to punch them. */
+	if (iomap->type != IOMAP_DELALLOC || !(iomap->flags & IOMAP_F_NEW))
+		return 0;
+
+	/* Nothing to do if we've written the entire delalloc extent */
+	start_byte = iomap_last_written_block(inode, offset, written);
+	end_byte = round_up(offset + length, i_blocksize(inode));
+	if (start_byte >= end_byte)
+		return 0;
+
+	/* For zeroing operations the callers already hold invalidate_lock. */
+	if (flags & (IOMAP_UNSHARE | IOMAP_ZERO)) {
+		rwsem_assert_held_write(&inode->i_mapping->invalidate_lock);
+		iomap_write_delalloc_release(inode, start_byte, end_byte, flags,
+				iomap, xfs_buffered_write_delalloc_punch);
+	} else {
+		filemap_invalidate_lock(inode->i_mapping);
+		iomap_write_delalloc_release(inode, start_byte, end_byte, flags,
+				iomap, xfs_buffered_write_delalloc_punch);
+		filemap_invalidate_unlock(inode->i_mapping);
+	}
+
 	return 0;
 }
···
 	bool			*did_zero)
 {
 	struct inode		*inode = VFS_I(ip);
+
+	xfs_assert_ilocked(ip, XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL);
 
 	if (IS_DAX(inode))
 		return dax_zero_range(inode, pos, len, did_zero,
···
 	(transparent_hugepage_flags &				\
	 (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
 
+static inline bool vma_thp_disabled(struct vm_area_struct *vma,
+		unsigned long vm_flags)
+{
+	/*
+	 * Explicitly disabled through madvise or prctl, or some
+	 * architectures may disable THP for some mappings, for
+	 * example, s390 kvm.
+	 */
+	return (vm_flags & VM_NOHUGEPAGE) ||
+	       test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags);
+}
+
+static inline bool thp_disabled_by_hw(void)
+{
+	/* If the hardware/firmware marked hugepage support disabled. */
+	return transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED);
+}
+
 unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
 unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
+17-3
include/linux/iomap.h
···
 	return &i->iomap;
 }
 
+/*
+ * Return the file offset for the first unchanged block after a short write.
+ *
+ * If nothing was written, round @pos down to point at the first block in
+ * the range, else round up to include the partially written block.
+ */
+static inline loff_t iomap_last_written_block(struct inode *inode, loff_t pos,
+		ssize_t written)
+{
+	if (unlikely(!written))
+		return round_down(pos, i_blocksize(inode));
+	return round_up(pos + written, i_blocksize(inode));
+}
+
 ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
 		const struct iomap_ops *ops, void *private);
 int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops);
···
 
 typedef void (*iomap_punch_t)(struct inode *inode, loff_t offset, loff_t length,
 		struct iomap *iomap);
-void iomap_file_buffered_write_punch_delalloc(struct inode *inode, loff_t pos,
-		loff_t length, ssize_t written, unsigned flag,
-		struct iomap *iomap, iomap_punch_t punch);
+void iomap_write_delalloc_release(struct inode *inode, loff_t start_byte,
+		loff_t end_byte, unsigned flags, struct iomap *iomap,
+		iomap_punch_t punch);
 
 int iomap_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
 		u64 start, u64 len, const struct iomap_ops *ops);
+3-1
include/linux/irqchip/arm-gic-v4.h
···
 			bool	enabled;
 			bool	group;
 		} sgi_config[16];
-		atomic_t vmapp_count;
 	};
 	};
+
+	/* Track the VPE being mapped */
+	atomic_t vmapp_count;
 
 	/*
 	 * Ensures mutual exclusion between affinity setting of the
···
 struct page * __populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
 		struct dev_pagemap *pgmap);
-void pmd_init(void *addr);
 void pud_init(void *addr);
+void pmd_init(void *addr);
+void kernel_pte_init(void *addr);
 pgd_t *vmemmap_pgd_populate(unsigned long addr, int node);
 p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
 pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
+12
include/linux/netdevice.h
···
 
 static __always_inline void netif_tx_stop_queue(struct netdev_queue *dev_queue)
 {
+	/* Paired with READ_ONCE() from dev_watchdog() */
+	WRITE_ONCE(dev_queue->trans_start, jiffies);
+
+	/* This barrier is paired with smp_mb() from dev_watchdog() */
+	smp_mb__before_atomic();
+
 	/* Must be an atomic op see netif_txq_try_stop() */
 	set_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state);
 }
···
 
 	if (likely(dql_avail(&dev_queue->dql) >= 0))
 		return;
+
+	/* Paired with READ_ONCE() from dev_watchdog() */
+	WRITE_ONCE(dev_queue->trans_start, jiffies);
+
+	/* This barrier is paired with smp_mb() from dev_watchdog() */
+	smp_mb__before_atomic();
 
 	set_bit(__QUEUE_STATE_STACK_XOFF, &dev_queue->state);
 
···
 /**
  * struct sdw_intel_acpi_info - Soundwire Intel information found in ACPI tables
  * @handle: ACPI controller handle
- * @count: link count found with "sdw-master-count" property
+ * @count: link count found with "sdw-master-count" or "sdw-manager-list" property
  * @link_mask: bit-wise mask listing links enabled by BIOS menu
  *
  * this structure could be expanded to e.g. provide all the _ADR
···
 void bt_sock_unregister(int proto);
 void bt_sock_link(struct bt_sock_list *l, struct sock *s);
 void bt_sock_unlink(struct bt_sock_list *l, struct sock *s);
+bool bt_sock_linked(struct bt_sock_list *l, struct sock *s);
 struct sock *bt_sock_alloc(struct net *net, struct socket *sock,
 			   struct proto *prot, int proto, gfp_t prio, int kern);
 int  bt_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
-1
include/net/netns/xfrm.h
···
 	struct hlist_head	*policy_byidx;
 	unsigned int		policy_idx_hmask;
 	unsigned int		idx_generator;
-	struct hlist_head	policy_inexact[XFRM_POLICY_MAX];
 	struct xfrm_policy_hash	policy_bydst[XFRM_POLICY_MAX];
 	unsigned int		policy_count[XFRM_POLICY_MAX * 2];
 	struct work_struct	policy_hash_work;
+5
include/net/sock.h
···
 	return sk->sk_family == AF_UNIX && sk->sk_type == SOCK_STREAM;
 }
 
+static inline bool sk_is_vsock(const struct sock *sk)
+{
+	return sk->sk_family == AF_VSOCK;
+}
+
 /**
  * sk_eat_skb - Release a skb if it is no longer needed
  * @sk: socket to eat this skb from
+15-13
include/net/xfrm.h
···
 void xfrm_if_register_cb(const struct xfrm_if_cb *ifcb);
 void xfrm_if_unregister_cb(void);
 
+struct xfrm_dst_lookup_params {
+	struct net *net;
+	int tos;
+	int oif;
+	xfrm_address_t *saddr;
+	xfrm_address_t *daddr;
+	u32 mark;
+	__u8 ipproto;
+	union flowi_uli uli;
+};
+
 struct net_device;
 struct xfrm_type;
 struct xfrm_dst;
 struct xfrm_policy_afinfo {
 	struct dst_ops		*dst_ops;
-	struct dst_entry	*(*dst_lookup)(struct net *net,
-					       int tos, int oif,
-					       const xfrm_address_t *saddr,
-					       const xfrm_address_t *daddr,
-					       u32 mark);
-	int			(*get_saddr)(struct net *net, int oif,
-					     xfrm_address_t *saddr,
-					     xfrm_address_t *daddr,
-					     u32 mark);
+	struct dst_entry	*(*dst_lookup)(const struct xfrm_dst_lookup_params *params);
+	int			(*get_saddr)(xfrm_address_t *saddr,
+					     const struct xfrm_dst_lookup_params *params);
 	int			(*fill_dst)(struct xfrm_dst *xdst,
 					    struct net_device *dev,
 					    const struct flowi *fl);
···
 }
 #endif
 
-struct dst_entry *__xfrm_dst_lookup(struct net *net, int tos, int oif,
-				    const xfrm_address_t *saddr,
-				    const xfrm_address_t *daddr,
-				    int family, u32 mark);
+struct dst_entry *__xfrm_dst_lookup(int family, const struct xfrm_dst_lookup_params *params);
 
 struct xfrm_policy *xfrm_policy_alloc(struct net *net, gfp_t gfp);
···
 	BPF_F_MARK_ENFORCE		= (1ULL << 6),
 };
 
-/* BPF_FUNC_clone_redirect and BPF_FUNC_redirect flags. */
-enum {
-	BPF_F_INGRESS			= (1ULL << 0),
-};
-
 /* BPF_FUNC_skb_set_tunnel_key and BPF_FUNC_skb_get_tunnel_key flags. */
 enum {
 	BPF_F_TUNINFO_IPV6		= (1ULL << 0),
···
 	BPF_F_BPRM_SECUREEXEC	= (1ULL << 0),
 };
 
-/* Flags for bpf_redirect_map helper */
+/* Flags for bpf_redirect and bpf_redirect_map helpers */
 enum {
-	BPF_F_BROADCAST		= (1ULL << 3),
-	BPF_F_EXCLUDE_INGRESS	= (1ULL << 4),
+	BPF_F_INGRESS		= (1ULL << 0), /* used for skb path */
+	BPF_F_BROADCAST		= (1ULL << 3), /* used for XDP path */
+	BPF_F_EXCLUDE_INGRESS	= (1ULL << 4), /* used for XDP path */
+#define BPF_F_REDIRECT_FLAGS (BPF_F_INGRESS | BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS)
 };
 
 #define __bpf_md_ptr(type, name)	\
+7-1
include/uapi/linux/ublk_cmd.h
···
 /* use ioctl encoding for uring command */
 #define UBLK_F_CMD_IOCTL_ENCODE	(1UL << 6)
 
-/* Copy between request and user buffer by pread()/pwrite() */
+/*
+ * Copy between request and user buffer by pread()/pwrite()
+ *
+ * Not available for UBLK_F_UNPRIVILEGED_DEV, otherwise userspace may
+ * deceive us by not filling request buffer, then kernel uninitialized
+ * data may be leaked.
+ */
 #define UBLK_F_USER_COPY	(1UL << 7)
 
 /*
+9-5
include/xen/acpi.h
···
 
 #include <linux/types.h>
 
+typedef int (*get_gsi_from_sbdf_t)(u32 sbdf);
+
 #ifdef CONFIG_XEN_DOM0
 #include <asm/xen/hypervisor.h>
 #include <xen/xen.h>
···
 				int *gsi_out,
 				int *trigger_out,
 				int *polarity_out);
+void xen_acpi_register_get_gsi_func(get_gsi_from_sbdf_t func);
+int xen_acpi_get_gsi_from_sbdf(u32 sbdf);
 #else
 static inline void xen_acpi_sleep_register(void)
 {
···
 {
 	return -1;
 }
-#endif
 
-#ifdef CONFIG_XEN_PCI_STUB
-int pcistub_get_gsi_from_sbdf(unsigned int sbdf);
-#else
-static inline int pcistub_get_gsi_from_sbdf(unsigned int sbdf)
+static inline void xen_acpi_register_get_gsi_func(get_gsi_from_sbdf_t func)
+{
+}
+
+static inline int xen_acpi_get_gsi_from_sbdf(u32 sbdf)
 {
 	return -1;
 }
+6-2
init/Kconfig
···
 
 config RUSTC_VERSION
 	int
-	default $(shell,$(srctree)/scripts/rustc-version.sh $(RUSTC))
+	default $(rustc-version)
 	help
 	  It does not depend on `RUST` since that one may need to use the version
 	  in a `depends on`.
···
 
 	  In particular, the Makefile target 'rustavailable' is useful to check
 	  why the Rust toolchain is not being detected.
+
+config RUSTC_LLVM_VERSION
+	int
+	default $(rustc-llvm-version)
 
 config CC_CAN_LINK
 	bool
···
 	depends on !GCC_PLUGIN_RANDSTRUCT
 	depends on !RANDSTRUCT
 	depends on !DEBUG_INFO_BTF || PAHOLE_HAS_LANG_EXCLUDE
-	depends on !CFI_CLANG || RUSTC_VERSION >= 107900 && HAVE_CFI_ICALL_NORMALIZE_INTEGERS
+	depends on !CFI_CLANG || HAVE_CFI_ICALL_NORMALIZE_INTEGERS_RUSTC
 	select CFI_ICALL_NORMALIZE_INTEGERS if CFI_CLANG
 	depends on !CALL_PADDING || RUSTC_VERSION >= 108100
 	depends on !KASAN_SW_TAGS
+9-1
io_uring/io_uring.h
···
 {
 	struct io_rings *r = ctx->rings;
 
-	return READ_ONCE(r->sq.tail) - ctx->cached_sq_head == ctx->sq_entries;
+	/*
+	 * SQPOLL must use the actual sqring head, as using the cached_sq_head
+	 * is race prone if the SQPOLL thread has grabbed entries but not yet
+	 * committed them to the ring. For !SQPOLL, this doesn't matter, but
+	 * since this helper is just used for SQPOLL sqring waits (or POLLOUT),
+	 * just read the actual sqring head unconditionally.
+	 */
+	return READ_ONCE(r->sq.tail) - READ_ONCE(r->sq.head) == ctx->sq_entries;
 }
 
 static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx)
···
 	if (current->io_uring) {
 		unsigned int count = 0;
 
+		__set_current_state(TASK_RUNNING);
 		tctx_task_work_run(current->io_uring, UINT_MAX, &count);
 		if (count)
 			ret = true;
+2-1
io_uring/rsrc.c
···
 	for (i = 0; i < nbufs; i++) {
 		struct io_mapped_ubuf *src = src_ctx->user_bufs[i];
 
-		refcount_inc(&src->refs);
+		if (src != &dummy_ubuf)
+			refcount_inc(&src->refs);
 		user_bufs[i] = src;
 	}
 
+1-1
io_uring/rw.c
···
 	 * reliably. If not, or it IOCB_NOWAIT is set, don't retry.
 	 */
 	if (kiocb->ki_flags & IOCB_NOWAIT ||
-	    ((file->f_flags & O_NONBLOCK && (req->flags & REQ_F_SUPPORT_NOWAIT))))
+	    ((file->f_flags & O_NONBLOCK && !(req->flags & REQ_F_SUPPORT_NOWAIT))))
 		req->flags |= REQ_F_NOWAIT;
 
 	if (ctx->flags & IORING_SETUP_IOPOLL) {
···
  *   (i + 1) * elem_size
  * where i is the repeat index and elem_size is the size of an element.
  */
-static int btf_repeat_fields(struct btf_field_info *info,
+static int btf_repeat_fields(struct btf_field_info *info, int info_cnt,
 			     u32 field_cnt, u32 repeat_cnt, u32 elem_size)
 {
 	u32 i, j;
···
 			return -EINVAL;
 		}
 	}
+
+	/* The type of struct size or variable size is u32,
+	 * so the multiplication will not overflow.
+	 */
+	if (field_cnt * (repeat_cnt + 1) > info_cnt)
+		return -E2BIG;
 
 	cur = field_cnt;
 	for (i = 0; i < repeat_cnt; i++) {
···
 		info[i].off += off;
 
 	if (nelems > 1) {
-		err = btf_repeat_fields(info, ret, nelems - 1, t->size);
+		err = btf_repeat_fields(info, info_cnt, ret, nelems - 1, t->size);
 		if (err == 0)
 			ret *= nelems;
 		else
···
 
 	if (ret == BTF_FIELD_IGNORE)
 		return 0;
-	if (nelems > info_cnt)
+	if (!info_cnt)
 		return -E2BIG;
 	if (nelems > 1) {
-		ret = btf_repeat_fields(info, 1, nelems - 1, sz);
+		ret = btf_repeat_fields(info, info_cnt, 1, nelems - 1, sz);
 		if (ret < 0)
 			return ret;
 	}
···
 	if (!type) {
 		bpf_log(ctx->log, "relo #%u: bad type id %u\n",
 			relo_idx, relo->type_id);
+		kfree(specs);
 		return -EINVAL;
 	}
 
···
 	if (t == SCALAR_VALUE && reg->precise)
 		verbose(env, "P");
 	if (t == SCALAR_VALUE && tnum_is_const(reg->var_off)) {
-		/* reg->off should be 0 for SCALAR_VALUE */
-		verbose_snum(env, reg->var_off.value + reg->off);
+		verbose_snum(env, reg->var_off.value);
 		return;
 	}
 
+6-6
kernel/bpf/ringbuf.c
···
 	u64 mask;
 	struct page **pages;
 	int nr_pages;
-	spinlock_t spinlock ____cacheline_aligned_in_smp;
+	raw_spinlock_t spinlock ____cacheline_aligned_in_smp;
 	/* For user-space producer ring buffers, an atomic_t busy bit is used
 	 * to synchronize access to the ring buffers in the kernel, rather than
 	 * the spinlock that is used for kernel-producer ring buffers. This is
···
 	if (!rb)
 		return NULL;
 
-	spin_lock_init(&rb->spinlock);
+	raw_spin_lock_init(&rb->spinlock);
 	atomic_set(&rb->busy, 0);
 	init_waitqueue_head(&rb->waitq);
 	init_irq_work(&rb->work, bpf_ringbuf_notify);
···
 	cons_pos = smp_load_acquire(&rb->consumer_pos);
 
 	if (in_nmi()) {
-		if (!spin_trylock_irqsave(&rb->spinlock, flags))
+		if (!raw_spin_trylock_irqsave(&rb->spinlock, flags))
 			return NULL;
 	} else {
-		spin_lock_irqsave(&rb->spinlock, flags);
+		raw_spin_lock_irqsave(&rb->spinlock, flags);
 	}
 
 	pend_pos = rb->pending_pos;
···
 	 */
 	if (new_prod_pos - cons_pos > rb->mask ||
 	    new_prod_pos - pend_pos > rb->mask) {
-		spin_unlock_irqrestore(&rb->spinlock, flags);
+		raw_spin_unlock_irqrestore(&rb->spinlock, flags);
 		return NULL;
 	}
 
···
 	/* pairs with consumer's smp_load_acquire() */
 	smp_store_release(&rb->producer_pos, new_prod_pos);
 
-	spin_unlock_irqrestore(&rb->spinlock, flags);
+	raw_spin_unlock_irqrestore(&rb->spinlock, flags);
 
 	return (void *)hdr + BPF_RINGBUF_HDR_SZ;
 }
···
 		b->module = mod;
 		b->offset = offset;
 
+		/* sort() reorders entries by value, so b may no longer point
+		 * to the right entry after this
+		 */
 		sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
 		     kfunc_btf_cmp_by_off, NULL);
+	} else {
+		btf = b->btf;
 	}
-	return b->btf;
+
+	return btf;
 }
 
 void bpf_free_kfunc_btf_tab(struct bpf_kfunc_btf_tab *tab)
···
 
 	/* both of s64_max/s64_min positive or negative */
 	if ((s64_max >= 0) == (s64_min >= 0)) {
-		reg->smin_value = reg->s32_min_value = s64_min;
-		reg->smax_value = reg->s32_max_value = s64_max;
-		reg->umin_value = reg->u32_min_value = s64_min;
-		reg->umax_value = reg->u32_max_value = s64_max;
+		reg->s32_min_value = reg->smin_value = s64_min;
+		reg->s32_max_value = reg->smax_value = s64_max;
+		reg->u32_min_value = reg->umin_value = s64_min;
+		reg->u32_max_value = reg->umax_value = s64_max;
 		reg->var_off = tnum_range(s64_min, s64_max);
 		return;
 	}
···
 	 *   r1 += 0x1
 	 *   if r2 < 1000 goto ...
 	 *   use r1 in memory access
-	 * So remember constant delta between r2 and r1 and update r1 after
-	 * 'if' condition.
+	 * So for 64-bit alu remember constant delta between r2 and r1 and
+	 * update r1 after 'if' condition.
 	 */
-	if (env->bpf_capable && BPF_OP(insn->code) == BPF_ADD &&
-	    dst_reg->id && is_reg_const(src_reg, alu32)) {
-		u64 val = reg_const_value(src_reg, alu32);
+	if (env->bpf_capable &&
+	    BPF_OP(insn->code) == BPF_ADD && !alu32 &&
+	    dst_reg->id && is_reg_const(src_reg, false)) {
+		u64 val = reg_const_value(src_reg, false);
 
 		if ((dst_reg->id & BPF_ADD_CONST) ||
 		    /* prevent overflow in sync_linked_regs() later */
···
 			continue;
 		if ((!(reg->id & BPF_ADD_CONST) && !(known_reg->id & BPF_ADD_CONST)) ||
 		    reg->off == known_reg->off) {
+			s32 saved_subreg_def = reg->subreg_def;
+
 			copy_register_state(reg, known_reg);
+			reg->subreg_def = saved_subreg_def;
 		} else {
+			s32 saved_subreg_def = reg->subreg_def;
 			s32 saved_off = reg->off;
 
 			fake_reg.type = SCALAR_VALUE;
···
 			 * otherwise another sync_linked_regs() will be incorrect.
 			 */
 			reg->off = saved_off;
+			reg->subreg_def = saved_subreg_def;
 
 			scalar32_min_max_add(reg, &fake_reg);
 			scalar_min_max_add(reg, &fake_reg);
···
 	/* 'struct bpf_verifier_env' can be global, but since it's not small,
 	 * allocate/free it every time bpf_check() is called
 	 */
-	env = kzalloc(sizeof(struct bpf_verifier_env), GFP_KERNEL);
+	env = kvzalloc(sizeof(struct bpf_verifier_env), GFP_KERNEL);
 	if (!env)
 		return -ENOMEM;
 
···
 	mutex_unlock(&bpf_verifier_lock);
 	vfree(env->insn_aux_data);
 err_free_env:
-	kfree(env);
+	kvfree(env);
 	return ret;
 }
+1-1
kernel/events/core.c
···
 		},
 	};
 
-	if (!sched_in && task->on_rq) {
+	if (!sched_in && task_is_runnable(task)) {
 		switch_event.event_id.header.misc |=
 				PERF_RECORD_MISC_SWITCH_OUT_PREEMPT;
 	}
+6-1
kernel/freezer.c
···
 {
 	unsigned int state = READ_ONCE(p->__state);
 
-	if (p->on_rq)
+	/*
+	 * Allow freezing the sched_delayed tasks; they will not execute until
+	 * ttwu() fixes them up, so it is safe to swap their state now, instead
+	 * of waiting for them to get fully dequeued.
+	 */
+	if (task_is_runnable(p))
 		return 0;
 
 	if (p != current && task_curr(p))
+9
kernel/rcu/tasks.h
···
 		return false;
 
 	/*
+	 * t->on_rq && !t->se.sched_delayed *could* be considered sleeping but
+	 * since it is a spurious state (it will transition into the
+	 * traditional blocked state or get woken up without outside
+	 * dependencies), not considering it such should only affect timing.
+	 *
+	 * Be conservative for now and not include it.
+	 */
+
+	/*
 	 * Idle tasks (or idle injection) within the idle loop are RCU-tasks
 	 * quiescent states. But CPU boot code performed by the idle task
 	 * isn't a quiescent state.
+41-24
kernel/sched/core.c
···
  *   ON_RQ_MIGRATING state is used for migration without holding both
  *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
  *
+ *   Additionally it is possible to be ->on_rq but still be considered not
+ *   runnable when p->se.sched_delayed is true. These tasks are on the runqueue
+ *   but will be dequeued as soon as they get picked again. See the
+ *   task_is_runnable() helper.
+ *
  * p->on_cpu <- { 0, 1 }:
  *
  *   is set by prepare_task() and cleared by finish_task() such that it will be
···
 	if (!(flags & ENQUEUE_NOCLOCK))
 		update_rq_clock(rq);
 
-	if (!(flags & ENQUEUE_RESTORE)) {
-		sched_info_enqueue(rq, p);
-		psi_enqueue(p, (flags & ENQUEUE_WAKEUP) && !(flags & ENQUEUE_MIGRATED));
-	}
-
 	p->sched_class->enqueue_task(rq, p, flags);
 	/*
 	 * Must be after ->enqueue_task() because ENQUEUE_DELAYED can clear
 	 * ->sched_delayed.
 	 */
 	uclamp_rq_inc(rq, p);
+
+	if (!(flags & ENQUEUE_RESTORE)) {
+		sched_info_enqueue(rq, p);
+		psi_enqueue(p, flags & ENQUEUE_MIGRATED);
+	}
 
 	if (sched_core_enabled(rq))
 		sched_core_enqueue(rq, p);
···
 
 	if (!(flags & DEQUEUE_SAVE)) {
 		sched_info_dequeue(rq, p);
-		psi_dequeue(p, flags & DEQUEUE_SLEEP);
+		psi_dequeue(p, !(flags & DEQUEUE_SLEEP));
 	}
 
 	/*
···
 * @arg: Argument to function.
 *
 * Fix the task in it's current state by avoiding wakeups and or rq operations
- * and call @func(@arg) on it. This function can use ->on_rq and task_curr()
- * to work out what the state is, if required. Given that @func can be invoked
- * with a runqueue lock held, it had better be quite lightweight.
+ * and call @func(@arg) on it. This function can use task_is_runnable() and
+ * task_curr() to work out what the state is, if required. Given that @func
+ * can be invoked with a runqueue lock held, it had better be quite
+ * lightweight.
 *
 * Returns:
 *	Whatever @func returns
···
 	 * as a preemption by schedule_debug() and RCU.
 	 */
 	bool preempt = sched_mode > SM_NONE;
+	bool block = false;
 	unsigned long *switch_count;
 	unsigned long prev_state;
 	struct rq_flags rf;
···
 			 * After this, schedule() must not care about p->state any more.
 			 */
 			block_task(rq, prev, flags);
+			block = true;
 		}
 		switch_count = &prev->nvcsw;
 	}
···
 
 		migrate_disable_switch(rq, prev);
 		psi_account_irqtime(rq, prev, next);
-		psi_sched_switch(prev, next, !task_on_rq_queued(prev));
+		psi_sched_switch(prev, next, block);
 
 		trace_sched_switch(preempt, prev, next, prev_state);
 
···
 }
 EXPORT_SYMBOL(default_wake_function);
 
-void __setscheduler_prio(struct task_struct *p, int prio)
+const struct sched_class *__setscheduler_class(struct task_struct *p, int prio)
 {
 	if (dl_prio(prio))
-		p->sched_class = &dl_sched_class;
-	else if (rt_prio(prio))
-		p->sched_class = &rt_sched_class;
-#ifdef CONFIG_SCHED_CLASS_EXT
-	else if (task_should_scx(p))
-		p->sched_class = &ext_sched_class;
-#endif
-	else
-		p->sched_class = &fair_sched_class;
+		return &dl_sched_class;
 
-	p->prio = prio;
+	if (rt_prio(prio))
+		return &rt_sched_class;
+
+#ifdef CONFIG_SCHED_CLASS_EXT
+	if (task_should_scx(p))
+		return &ext_sched_class;
+#endif
+
+	return &fair_sched_class;
 }
 
 #ifdef CONFIG_RT_MUTEXES
···
 {
 	int prio, oldprio, queued, running, queue_flag =
 		DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
-	const struct sched_class *prev_class;
+	const struct sched_class *prev_class, *next_class;
 	struct rq_flags rf;
 	struct rq *rq;
 
···
 		queue_flag &= ~DEQUEUE_MOVE;
 
 	prev_class = p->sched_class;
+	next_class = __setscheduler_class(p, prio);
+
+	if (prev_class != next_class && p->se.sched_delayed)
+		dequeue_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_DELAYED | DEQUEUE_NOCLOCK);
+
 	queued = task_on_rq_queued(p);
 	running = task_current(rq, p);
 	if (queued)
···
 			p->rt.timeout = 0;
 	}
 
-	__setscheduler_prio(p, prio);
+	p->sched_class = next_class;
+	p->prio = prio;
+
 	check_class_changing(rq, p, prev_class);
 
 	if (queued)
···
 		return;
 	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
 		return;
-	task_work_add(curr, work, TWA_RESUME);
+
+	/* No page allocation under rq lock */
+	task_work_add(curr, work, TWA_RESUME | TWAF_NO_ALLOC);
 }
 
 void sched_mm_cid_exit_signals(struct task_struct *t)
···
 
 	account_cfs_rq_runtime(cfs_rq, delta_exec);
 
-	if (rq->nr_running == 1)
+	if (cfs_rq->nr_running == 1)
 		return;
 
 	if (resched || did_preempt_short(cfs_rq, curr)) {
···
 	for_each_sched_entity(se) {
 		struct cfs_rq *qcfs_rq = cfs_rq_of(se);
 
-		if (se->on_rq) {
-			SCHED_WARN_ON(se->sched_delayed);
+		/* Handle any unfinished DELAY_DEQUEUE business first. */
+		if (se->sched_delayed) {
+			int flags = DEQUEUE_SLEEP | DEQUEUE_DELAYED;
+
+			dequeue_entity(qcfs_rq, se, flags);
+		} else if (se->on_rq)
 			break;
-		}
 		enqueue_entity(qcfs_rq, se, ENQUEUE_WAKEUP);
 
 		if (cfs_rq_is_idle(group_cfs_rq(se)))
···
 static void switched_from_fair(struct rq *rq, struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
-	/*
-	 * Since this is called after changing class, this is a little weird
-	 * and we cannot use DEQUEUE_DELAYED.
-	 */
-	if (p->se.sched_delayed) {
-		/* First, dequeue it from its new class' structures */
-		dequeue_task(rq, p, DEQUEUE_NOCLOCK | DEQUEUE_SLEEP);
-		/*
-		 * Now, clean up the fair_sched_class side of things
-		 * related to sched_delayed being true and that wasn't done
-		 * due to the generic dequeue not using DEQUEUE_DELAYED.
-		 */
-		finish_delayed_dequeue_entity(&p->se);
-		p->se.rel_deadline = 0;
-		__block_task(rq, p);
-	}
 }
 
 static void switched_to_fair(struct rq *rq, struct task_struct *p)
+1-1
kernel/sched/sched.h
···
 
 extern int __sched_setscheduler(struct task_struct *p, const struct sched_attr *attr, bool user, bool pi);
 extern int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
-extern void __setscheduler_prio(struct task_struct *p, int prio);
+extern const struct sched_class *__setscheduler_class(struct task_struct *p, int prio);
 extern void set_load_weight(struct task_struct *p, bool update_load);
 extern void enqueue_task(struct rq *rq, struct task_struct *p, int flags);
 extern bool dequeue_task(struct rq *rq, struct task_struct *p, int flags);
+33-15
kernel/sched/stats.h
···
 /*
  * PSI tracks state that persists across sleeps, such as iowaits and
  * memory stalls. As a result, it has to distinguish between sleeps,
- * where a task's runnable state changes, and requeues, where a task
- * and its state are being moved between CPUs and runqueues.
+ * where a task's runnable state changes, and migrations, where a task
+ * and its runnable state are being moved between CPUs and runqueues.
+ *
+ * A notable case is a task whose dequeue is delayed. PSI considers
+ * those sleeping, but because they are still on the runqueue they can
+ * go through migration requeues. In this case, *sleeping* states need
+ * to be transferred.
  */
-static inline void psi_enqueue(struct task_struct *p, bool wakeup)
+static inline void psi_enqueue(struct task_struct *p, bool migrate)
 {
-	int clear = 0, set = TSK_RUNNING;
+	int clear = 0, set = 0;
 
 	if (static_branch_likely(&psi_disabled))
 		return;
 
-	if (p->in_memstall)
-		set |= TSK_MEMSTALL_RUNNING;
-
-	if (!wakeup) {
+	if (p->se.sched_delayed) {
+		/* CPU migration of "sleeping" task */
+		SCHED_WARN_ON(!migrate);
 		if (p->in_memstall)
 			set |= TSK_MEMSTALL;
+		if (p->in_iowait)
+			set |= TSK_IOWAIT;
+	} else if (migrate) {
+		/* CPU migration of runnable task */
+		set = TSK_RUNNING;
+		if (p->in_memstall)
+			set |= TSK_MEMSTALL | TSK_MEMSTALL_RUNNING;
 	} else {
+		/* Wakeup of new or sleeping task */
 		if (p->in_iowait)
 			clear |= TSK_IOWAIT;
+		set = TSK_RUNNING;
+		if (p->in_memstall)
+			set |= TSK_MEMSTALL_RUNNING;
 	}
 
 	psi_task_change(p, clear, set);
 }
 
-static inline void psi_dequeue(struct task_struct *p, bool sleep)
+static inline void psi_dequeue(struct task_struct *p, bool migrate)
 {
 	if (static_branch_likely(&psi_disabled))
 		return;
+
+	/*
+	 * When migrating a task to another CPU, clear all psi
+	 * state. The enqueue callback above will work it out.
+	 */
+	if (migrate)
+		psi_task_change(p, p->psi_flags, 0);
 
 	/*
 	 * A voluntary sleep is a dequeue followed by a task switch. To
···
 	 * TSK_RUNNING and TSK_IOWAIT for us when it moves TSK_ONCPU.
 	 * Do nothing here.
 	 */
-	if (sleep)
-		return;
-
-	psi_task_change(p, p->psi_flags, 0);
 }
 
 static inline void psi_ttwu_dequeue(struct task_struct *p)
···
 }
 
 #else /* CONFIG_PSI */
-static inline void psi_enqueue(struct task_struct *p, bool wakeup) {}
-static inline void psi_dequeue(struct task_struct *p, bool sleep) {}
+static inline void psi_enqueue(struct task_struct *p, bool migrate) {}
+static inline void psi_dequeue(struct task_struct *p, bool migrate) {}
 static inline void psi_ttwu_dequeue(struct task_struct *p) {}
 static inline void psi_sched_switch(struct task_struct *prev,
 				    struct task_struct *next,
···
 		  enum task_work_notify_mode notify)
 {
 	struct callback_head *head;
+	int flags = notify & TWA_FLAGS;
 
+	notify &= ~TWA_FLAGS;
 	if (notify == TWA_NMI_CURRENT) {
 		if (WARN_ON_ONCE(task != current))
 			return -EINVAL;
 		if (!IS_ENABLED(CONFIG_IRQ_WORK))
 			return -EINVAL;
 	} else {
-		/* record the work call stack in order to print it in KASAN reports */
-		kasan_record_aux_stack(work);
+		/*
+		 * Record the work call stack in order to print it in KASAN
+		 * reports.
+		 *
+		 * Note that stack allocation can fail if TWAF_NO_ALLOC flag
+		 * is set and new page is needed to expand the stack buffer.
+		 */
+		if (flags & TWAF_NO_ALLOC)
+			kasan_record_aux_stack_noalloc(work);
+		else
+			kasan_record_aux_stack(work);
 	}
 
 	head = READ_ONCE(task->task_works);
+3-3
kernel/time/posix-clock.c
···309309 struct posix_clock_desc cd;310310 int err;311311312312+ if (!timespec64_valid_strict(ts))313313+ return -EINVAL;314314+312315 err = get_clock_desc(id, &cd);313316 if (err)314317 return err;···320317 err = -EACCES;321318 goto out;322319 }323323-324324- if (!timespec64_valid_strict(ts))325325- return -EINVAL;326320327321 if (cd.clk->ops.clock_settime)328322 err = cd.clk->ops.clock_settime(cd.clk, ts);
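The reorder above means an invalid timespec now fails with -EINVAL before a clock descriptor is even looked up. As a rough user-space mirror of what `timespec64_valid_strict()` enforces (the real kernel bound is KTIME_MAX expressed in seconds; the constant and function name below are illustrative, not kernel API):

```c
#include <stdint.h>

/* Approximate mirror of timespec64_valid_strict(): non-negative seconds,
 * nanoseconds within [0, NSEC_PER_SEC), and a total that still fits in a
 * signed 64-bit nanosecond counter (ktime_t). */
static int ts_valid_strict(int64_t sec, long nsec)
{
	if (sec < 0)
		return 0;
	if (nsec < 0 || nsec >= 1000000000L)
		return 0;
	if (sec >= INT64_MAX / 1000000000L)	/* would overflow ktime_t */
		return 0;
	return 1;
}
```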
+6
kernel/time/tick-sched.c
···434434 * smp_mb__after_spin_lock()435435 * tick_nohz_task_switch()436436 * LOAD p->tick_dep_mask437437+ *438438+ * XXX given a task picks up the dependency on schedule(), should we439439+ * only care about tasks that are currently on the CPU instead of all440440+ * that are on the runqueue?441441+ *442442+ * That is, does this want to be: task_on_cpu() / task_curr()?437443 */438444 if (!sched_task_on_rq(tsk))439445 return;
+17-19
kernel/trace/bpf_trace.c
···31333133 struct bpf_uprobe_multi_link *umulti_link;31343134 u32 ucount = info->uprobe_multi.count;31353135 int err = 0, i;31363136- long left;31363136+ char *p, *buf;31373137+ long left = 0;3137313831383139 if (!upath ^ !upath_size)31393140 return -EINVAL;···31483147 info->uprobe_multi.pid = umulti_link->task ?31493148 task_pid_nr_ns(umulti_link->task, task_active_pid_ns(current)) : 0;3150314931513151- if (upath) {31523152- char *p, *buf;31533153-31543154- upath_size = min_t(u32, upath_size, PATH_MAX);31553155-31563156- buf = kmalloc(upath_size, GFP_KERNEL);31573157- if (!buf)31583158- return -ENOMEM;31593159- p = d_path(&umulti_link->path, buf, upath_size);31603160- if (IS_ERR(p)) {31613161- kfree(buf);31623162- return PTR_ERR(p);31633163- }31643164- upath_size = buf + upath_size - p;31653165- left = copy_to_user(upath, p, upath_size);31503150+ upath_size = upath_size ? min_t(u32, upath_size, PATH_MAX) : PATH_MAX;31513151+ buf = kmalloc(upath_size, GFP_KERNEL);31523152+ if (!buf)31533153+ return -ENOMEM;31543154+ p = d_path(&umulti_link->path, buf, upath_size);31553155+ if (IS_ERR(p)) {31663156 kfree(buf);31673167- if (left)31683168- return -EFAULT;31693169- info->uprobe_multi.path_size = upath_size;31573157+ return PTR_ERR(p);31703158 }31593159+ upath_size = buf + upath_size - p;31603160+31613161+ if (upath)31623162+ left = copy_to_user(upath, p, upath_size);31633163+ kfree(buf);31643164+ if (left)31653165+ return -EFAULT;31663166+ info->uprobe_multi.path_size = upath_size;3171316731723168 if (!uoffsets && !ucookies && !uref_ctr_offsets)31733169 return 0;
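The restructured path copy leans on the `d_path()` convention of writing the name at the *end* of the caller's buffer and returning a pointer into it, which is why `buf + upath_size - p` is the number of bytes handed to user space. A user-space sketch of that convention (`fill_path()` is a stand-in, not a kernel API):

```c
#include <string.h>

/* Like d_path(): write the string at the END of buf and return a pointer
 * to its first byte (the terminating NUL lands on the buffer's last used
 * byte). Returns NULL if the name does not fit. */
static char *fill_path(const char *name, char *buf, size_t size)
{
	size_t len = strlen(name) + 1;	/* include the NUL */
	char *p;

	if (len > size)
		return NULL;
	p = buf + size - len;
	memcpy(p, name, len);
	return p;
}
```

`buf + size - p` then yields exactly the string length plus its NUL, i.e. the span actually occupied.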
+23-8
kernel/trace/fgraph.c
···11601160static int start_graph_tracing(void)11611161{11621162 unsigned long **ret_stack_list;11631163- int ret, cpu;11631163+ int ret;1164116411651165- ret_stack_list = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL);11651165+ ret_stack_list = kcalloc(FTRACE_RETSTACK_ALLOC_SIZE,11661166+ sizeof(*ret_stack_list), GFP_KERNEL);1166116711671168 if (!ret_stack_list)11681169 return -ENOMEM;11691169-11701170- /* The cpu_boot init_task->ret_stack will never be freed */11711171- for_each_online_cpu(cpu) {11721172- if (!idle_task(cpu)->ret_stack)11731173- ftrace_graph_init_idle_task(idle_task(cpu), cpu);11741174- }1175117011761171 do {11771172 ret = alloc_retstack_tasklist(ret_stack_list);···12371242 fgraph_direct_gops = &fgraph_stub;12381243}1239124412451245+/* The cpu_boot init_task->ret_stack will never be freed */12461246+static int fgraph_cpu_init(unsigned int cpu)12471247+{12481248+ if (!idle_task(cpu)->ret_stack)12491249+ ftrace_graph_init_idle_task(idle_task(cpu), cpu);12501250+ return 0;12511251+}12521252+12401253int register_ftrace_graph(struct fgraph_ops *gops)12411254{12551255+ static bool fgraph_initialized;12421256 int command = 0;12431257 int ret = 0;12441258 int i = -1;1245125912461260 mutex_lock(&ftrace_lock);12611261+12621262+ if (!fgraph_initialized) {12631263+ ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "fgraph_idle_init",12641264+ fgraph_cpu_init, NULL);12651265+ if (ret < 0) {12661266+ pr_warn("fgraph: Error to init cpu hotplug support\n");12671267+ return ret;12681268+ }12691269+ fgraph_initialized = true;12701270+ ret = 0;12711271+ }1247127212481273 if (!fgraph_array[0]) {12491274 /* The array must always have real data on it */
+6-1
kernel/trace/trace_eprobe.c
···912912 }913913 }914914915915+ if (argc - 2 > MAX_TRACE_ARGS) {916916+ ret = -E2BIG;917917+ goto error;918918+ }919919+915920 mutex_lock(&event_mutex);916921 event_call = find_and_get_event(sys_name, sys_event);917922 ep = alloc_event_probe(group, event, event_call, argc - 2);···942937943938 argc -= 2; argv += 2;944939 /* parse arguments */945945- for (i = 0; i < argc && i < MAX_TRACE_ARGS; i++) {940940+ for (i = 0; i < argc; i++) {946941 trace_probe_log_set_index(i + 2);947942 ret = trace_eprobe_tp_update_arg(ep, argv, i);948943 if (ret)
+5-1
kernel/trace/trace_fprobe.c
···11871187 argc = new_argc;11881188 argv = new_argv;11891189 }11901190+ if (argc > MAX_TRACE_ARGS) {11911191+ ret = -E2BIG;11921192+ goto out;11931193+ }1190119411911195 ret = traceprobe_expand_dentry_args(argc, argv, &dbuf);11921196 if (ret)···12071203 }1208120412091205 /* parse arguments */12101210- for (i = 0; i < argc && i < MAX_TRACE_ARGS; i++) {12061206+ for (i = 0; i < argc; i++) {12111207 trace_probe_log_set_index(i + 2);12121208 ctx.offset = 0;12131209 ret = traceprobe_parse_probe_arg(&tf->tp, i, argv[i], &ctx);
+5-1
kernel/trace/trace_kprobe.c
···10131013 argc = new_argc;10141014 argv = new_argv;10151015 }10161016+ if (argc > MAX_TRACE_ARGS) {10171017+ ret = -E2BIG;10181018+ goto out;10191019+ }1016102010171021 ret = traceprobe_expand_dentry_args(argc, argv, &dbuf);10181022 if (ret)···10331029 }1034103010351031 /* parse arguments */10361036- for (i = 0; i < argc && i < MAX_TRACE_ARGS; i++) {10321032+ for (i = 0; i < argc; i++) {10371033 trace_probe_log_set_index(i + 2);10381034 ctx.offset = 0;10391035 ret = traceprobe_parse_probe_arg(&tk->tp, i, argv[i], &ctx);
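trace_eprobe, trace_fprobe and trace_kprobe all switch from silently capping the parse loop at MAX_TRACE_ARGS to rejecting oversized argument lists with -E2BIG up front, so the user learns the probe definition was too long instead of losing arguments. The shape of the pattern in user-space form (MAX_ARGS and `parse_args()` are illustrative):

```c
#include <errno.h>

#define MAX_ARGS 16

/* Reject-early: fail loudly rather than dropping arguments past the
 * limit inside the loop. */
static int parse_args(int argc)
{
	if (argc > MAX_ARGS)
		return -E2BIG;
	for (int i = 0; i < argc; i++)
		;	/* parse argv[i] ... */
	return 0;
}
```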
···14851485 /* reset the max latency */14861486 tr->max_latency = 0;1487148714881488- while (p->on_rq) {14881488+ while (task_is_runnable(p)) {14891489 /*14901490 * Sleep to make sure the -deadline thread is asleep too.14911491 * On virtual machines we can't rely on timings,
+9-4
kernel/trace/trace_uprobe.c
···565565566566 if (argc < 2)567567 return -ECANCELED;568568+ if (argc - 2 > MAX_TRACE_ARGS)569569+ return -E2BIG;568570569571 if (argv[0][1] == ':')570572 event = &argv[0][2];···692690 tu->filename = filename;693691694692 /* parse arguments */695695- for (i = 0; i < argc && i < MAX_TRACE_ARGS; i++) {693693+ for (i = 0; i < argc; i++) {696694 struct traceprobe_parse_context ctx = {697695 .flags = (is_return ? TPARG_FL_RETURN : 0) | TPARG_FL_USER,698696 };···877875};878876static struct uprobe_cpu_buffer __percpu *uprobe_cpu_buffer;879877static int uprobe_buffer_refcnt;878878+#define MAX_UCB_BUFFER_SIZE PAGE_SIZE880879881880static int uprobe_buffer_init(void)882881{···982979 ucb = uprobe_buffer_get();983980 ucb->dsize = tu->tp.size + dsize;984981982982+ if (WARN_ON_ONCE(ucb->dsize > MAX_UCB_BUFFER_SIZE)) {983983+ ucb->dsize = MAX_UCB_BUFFER_SIZE;984984+ dsize = MAX_UCB_BUFFER_SIZE - tu->tp.size;985985+ }986986+985987 store_trace_args(ucb->buf, &tu->tp, regs, NULL, esize, dsize);986988987989 *ucbp = ucb;···1005997 struct trace_event_call *call = trace_probe_event_call(&tu->tp);10069981007999 WARN_ON(call != trace_file->event_call);10081008-10091009- if (WARN_ON_ONCE(ucb->dsize > PAGE_SIZE))10101010- return;1011100010121001 if (trace_trigger_soft_disabled(trace_file))10131002 return;
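Rather than warning and dropping the event late in the output path, the uprobe buffer size is now clamped where it is computed, shrinking the dynamic portion so the record still fits. A minimal sketch of that clamp (MAX_BUF and the helper are invented):

```c
#define MAX_BUF 4096

/* Clamp the total record size to the buffer limit and recompute the
 * dynamic part so fixed + dynamic never exceeds MAX_BUF. */
static int clamp_sizes(int fixed, int *dynamic)
{
	int total = fixed + *dynamic;

	if (total > MAX_BUF) {
		total = MAX_BUF;
		*dynamic = MAX_BUF - fixed;
	}
	return total;
}
```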
+1-1
lib/Kconfig.debug
···30603060 bool "Allow unoptimized build-time assertions"30613061 depends on RUST30623062 help30633063- Controls how are `build_error!` and `build_assert!` handled during build.30633063+ Controls how `build_error!` and `build_assert!` are handled during the build.3064306430653065 If calls to them exist in the binary, it may indicate a violated invariant30663066 or that the optimizer failed to verify the invariant during compilation.
+5-2
lib/Kconfig.kasan
···2222config CC_HAS_KASAN_GENERIC2323 def_bool $(cc-option, -fsanitize=kernel-address)24242525+# GCC appears to ignore no_sanitize_address when -fsanitize=kernel-hwaddress2626+# is passed. See https://bugzilla.kernel.org/show_bug.cgi?id=218854 (and2727+# the linked LKML thread) for more details.2528config CC_HAS_KASAN_SW_TAGS2626- def_bool $(cc-option, -fsanitize=kernel-hwaddress)2929+ def_bool !CC_IS_GCC && $(cc-option, -fsanitize=kernel-hwaddress)27302831# This option is only required for software KASAN modes.2932# Old GCC versions do not have proper support for no_sanitize_address.···10198 help10299 Enables Software Tag-Based KASAN.103100104104- Requires GCC 11+ or Clang.101101+ Requires Clang.105102106103 Supported only on arm64 CPUs and relies on Top Byte Ignore.107104
+5
lib/buildid.c
···55#include <linux/elf.h>66#include <linux/kernel.h>77#include <linux/pagemap.h>88+#include <linux/secretmem.h>89910#define BUILD_ID 31011···6463 return 0;65646665 freader_put_folio(r);6666+6767+ /* reject secretmem folios created with memfd_secret() */6868+ if (secretmem_mapping(r->file->f_mapping))6969+ return -EFAULT;67706871 r->folio = filemap_get_folio(r->file->f_mapping, file_off >> PAGE_SHIFT);6972
+3
lib/codetag.c
···228228 if (!mod)229229 return true;230230231231+ /* await any module's kfree_rcu() operations to complete */232232+ kvfree_rcu_barrier();233233+231234 mutex_lock(&codetag_lock);232235 list_for_each_entry(cttype, &codetag_types, link) {233236 struct codetag_module *found = NULL;
+1-1
lib/crypto/mpi/mpi-mul.c
···2121 int usign, vsign, sign_product;2222 int assign_wp = 0;2323 mpi_ptr_t tmp_limb = NULL;2424- int err;2424+ int err = 0;25252626 if (u->nlimbs < v->nlimbs) {2727 /* Swap U and V. */
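The one-line mpi-mul fix is the classic uninitialized-return bug: a path that jumps to the function exit without assigning `err` would return indeterminate stack contents. Compressed user-space illustration (names invented):

```c
/* With 'err' left uninitialized, the early-exit path would return an
 * indeterminate value; initializing it to 0 makes that path well defined. */
static int do_mul(int skip_work)
{
	int err = 0;	/* the fix: start from "success" */

	if (skip_work)
		goto out;	/* previously returned uninitialized err */

	err = 42;	/* normal path sets a real status */
out:
	return err;
}
```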
+7-7
lib/maple_tree.c
···2196219621972197/*21982198 * mas_wr_node_walk() - Find the correct offset for the index in the @mas.21992199+ * If @mas->index cannot be found within the containing22002200+ * node, we traverse to the last entry in the node.21992201 * @wr_mas: The maple write state22002202 *22012203 * Uses mas_slot_locked() and does not need to worry about dead nodes.···35343532 return true;35353533}3536353435373537-static bool mas_wr_walk_index(struct ma_wr_state *wr_mas)35353535+static void mas_wr_walk_index(struct ma_wr_state *wr_mas)35383536{35393537 struct ma_state *mas = wr_mas->mas;35403538···35433541 wr_mas->content = mas_slot_locked(mas, wr_mas->slots,35443542 mas->offset);35453543 if (ma_is_leaf(wr_mas->type))35463546- return true;35443544+ return;35473545 mas_wr_walk_traverse(wr_mas);35483548-35493546 }35503550- return true;35513547}35523548/*35533549 * mas_extend_spanning_null() - Extend a store of a %NULL to include surrounding %NULLs.···37653765 memset(&b_node, 0, sizeof(struct maple_big_node));37663766 /* Copy l_mas and store the value in b_node. */37673767 mas_store_b_node(&l_wr_mas, &b_node, l_mas.end);37683768- /* Copy r_mas into b_node. */37693769- if (r_mas.offset <= r_mas.end)37683768+ /* Copy r_mas into b_node if there is anything to copy. */37693769+ if (r_mas.max > r_mas.last)37703770 mas_mab_cp(&r_mas, r_mas.offset, r_mas.end,37713771 &b_node, b_node.b_end + 1);37723772 else···4218421842194219 /* Potential spanning rebalance collapsing a node */42204220 if (new_end < mt_min_slots[wr_mas->type]) {42214221- if (!mte_is_root(mas->node)) {42214221+ if (!mte_is_root(mas->node) && !(mas->mas_flags & MA_STATE_BULK)) {42224222 mas->store_type = wr_rebalance;42234223 return;42244224 }
+1-1
lib/objpool.c
···7676	 * minimal size of vmalloc is one page since vmalloc would7777	 * always align the requested size to page size7878	 */7979-	if (pool->gfp & GFP_ATOMIC)7979+	if ((pool->gfp & GFP_ATOMIC) == GFP_ATOMIC)8080		slot = kmalloc_node(size, pool->gfp, cpu_to_node(i));8181	else8282		slot = __vmalloc_node(size, sizeof(void *), pool->gfp,
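The objpool change is subtler than it looks: GFP_ATOMIC is a multi-bit mask that shares __GFP_KSWAPD_RECLAIM with GFP_KERNEL, so the old boolean test also matched GFP_KERNEL allocations. With illustrative flag values (the real __GFP_* constants differ):

```c
/* Illustrative flag bits standing in for __GFP_HIGH etc. */
#define F_HIGH		0x1u
#define F_KSWAPD	0x2u
#define F_DIRECT	0x4u
#define MY_GFP_ATOMIC	(F_HIGH | F_KSWAPD)
#define MY_GFP_KERNEL	(F_KSWAPD | F_DIRECT)

/* Wrong: any single shared bit makes this true. */
static int is_atomic_buggy(unsigned int gfp)
{
	return (gfp & MY_GFP_ATOMIC) != 0;
}

/* Right: require the full ATOMIC bit pattern to be present. */
static int is_atomic_fixed(unsigned int gfp)
{
	return (gfp & MY_GFP_ATOMIC) == MY_GFP_ATOMIC;
}
```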
···109109 if (!vma->vm_mm) /* vdso */110110 return 0;111111112112- /*113113- * Explicitly disabled through madvise or prctl, or some114114- * architectures may disable THP for some mappings, for115115- * example, s390 kvm.116116- * */117117- if ((vm_flags & VM_NOHUGEPAGE) ||118118- test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))119119- return 0;120120- /*121121- * If the hardware/firmware marked hugepage support disabled.122122- */123123- if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED))112112+ if (thp_disabled_by_hw() || vma_thp_disabled(vma, vm_flags))124113 return 0;125114126115 /* khugepaged doesn't collapse DAX vma, but page fault is fine. */
+7-1
mm/kasan/init.c
···106106 }107107}108108109109+void __weak __meminit kernel_pte_init(void *addr)110110+{111111+}112112+109113static int __ref zero_pmd_populate(pud_t *pud, unsigned long addr,110114 unsigned long end)111115{···130126131127 if (slab_is_available())132128 p = pte_alloc_one_kernel(&init_mm);133133- else129129+ else {134130 p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);131131+ kernel_pte_init(p);132132+ }135133 if (!p)136134 return -ENOMEM;137135
+3-3
mm/khugepaged.c
···22272227 folio_put(new_folio);22282228out:22292229 VM_BUG_ON(!list_empty(&pagelist));22302230- trace_mm_khugepaged_collapse_file(mm, new_folio, index, is_shmem, addr, file, HPAGE_PMD_NR, result);22302230+ trace_mm_khugepaged_collapse_file(mm, new_folio, index, addr, is_shmem, file, HPAGE_PMD_NR, result);22312231 return result;22322232}22332233···22522252 continue;2253225322542254 if (xa_is_value(folio)) {22552255- ++swap;22552255+ swap += 1 << xas_get_order(&xas);22562256 if (cc->is_khugepaged &&22572257 swap > khugepaged_max_ptes_swap) {22582258 result = SCAN_EXCEED_SWAP_PTE;···22992299 * is just too costly...23002300 */2301230123022302- present++;23022302+ present += folio_nr_pages(folio);2303230323042304 if (need_resched()) {23052305 xas_pause(&xas);
+11-6
mm/memory.c
···41814181 return __alloc_swap_folio(vmf);41824182}41834183#else /* !CONFIG_TRANSPARENT_HUGEPAGE */41844184-static inline bool can_swapin_thp(struct vm_fault *vmf, pte_t *ptep, int nr_pages)41854185-{41864186- return false;41874187-}41884188-41894184static struct folio *alloc_swap_folio(struct vm_fault *vmf)41904185{41914186 return __alloc_swap_folio(vmf);···49194924 unsigned long haddr = vmf->address & HPAGE_PMD_MASK;49204925 pmd_t entry;49214926 vm_fault_t ret = VM_FAULT_FALLBACK;49274927+49284928+ /*49294929+ * It is too late to allocate a small folio, we already have a large49304930+ * folio in the pagecache: especially s390 KVM cannot tolerate any49314931+ * PMD mappings, but PTE-mapped THP are fine. So let's simply refuse any49324932+ * PMD mappings if THPs are disabled.49334933+ */49344934+ if (thp_disabled_by_hw() || vma_thp_disabled(vma, vma->vm_flags))49354935+ return ret;4922493649234937 if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))49244938 return ret;···63506346static inline void pfnmap_lockdep_assert(struct vm_area_struct *vma)63516347{63526348#ifdef CONFIG_LOCKDEP63536353- struct address_space *mapping = vma->vm_file->f_mapping;63496349+ struct file *file = vma->vm_file;63506350+ struct address_space *mapping = file ? file->f_mapping : NULL;6354635163556352 if (mapping)63566353 lockdep_assert(lockdep_is_held(&vma->vm_file->f_mapping->i_mmap_rwsem) ||
+21-11
mm/mmap.c
···13711371 struct maple_tree mt_detach;13721372 unsigned long end = addr + len;13731373 bool writable_file_mapping = false;13741374- int error = -ENOMEM;13741374+ int error;13751375 VMA_ITERATOR(vmi, mm, addr);13761376 VMG_STATE(vmg, mm, &vmi, addr, end, vm_flags, pgoff);13771377···13961396 }1397139713981398 /* Check against address space limit. */13991399- if (!may_expand_vm(mm, vm_flags, pglen - vms.nr_pages))13991399+ if (!may_expand_vm(mm, vm_flags, pglen - vms.nr_pages)) {14001400+ error = -ENOMEM;14001401 goto abort_munmap;14021402+ }1401140314021404 /*14031405 * Private writable mapping: check memory availability···14071405 if (accountable_mapping(file, vm_flags)) {14081406 charged = pglen;14091407 charged -= vms.nr_accounted;14101410- if (charged && security_vm_enough_memory_mm(mm, charged))14111411- goto abort_munmap;14081408+ if (charged) {14091409+ error = security_vm_enough_memory_mm(mm, charged);14101410+ if (error)14111411+ goto abort_munmap;14121412+ }1412141314131414 vms.nr_accounted = 0;14141415 vm_flags |= VM_ACCOUNT;···14271422 * not unmapped, but the maps are removed from the list.14281423 */14291424 vma = vm_area_alloc(mm);14301430- if (!vma)14251425+ if (!vma) {14261426+ error = -ENOMEM;14311427 goto unacct_error;14281428+ }1432142914331430 vma_iter_config(&vmi, addr, end);14341431 vma_set_range(vma, addr, end, pgoff);···14601453 * Expansion is handled above, merging is handled below.14611454 * Drivers should not alter the address of the VMA.14621455 */14631463- error = -EINVAL;14641464- if (WARN_ON((addr != vma->vm_start)))14561456+ if (WARN_ON((addr != vma->vm_start))) {14571457+ error = -EINVAL;14651458 goto close_and_free_vma;14591459+ }1466146014671461 vma_iter_config(&vmi, addr, end);14681462 /*···15081500 }1509150115101502 /* Allow architectures to sanity-check the vm_flags */15111511- error = -EINVAL;15121512- if (!arch_validate_flags(vma->vm_flags))15031503+ if (!arch_validate_flags(vma->vm_flags)) {15041504+ error = -EINVAL;15131505 
goto close_and_free_vma;15061506+ }1514150715151515- error = -ENOMEM;15161516- if (vma_iter_prealloc(&vmi, vma))15081508+ if (vma_iter_prealloc(&vmi, vma)) {15091509+ error = -ENOMEM;15171510 goto close_and_free_vma;15111511+ }1518151215191513 /* Lock the VMA since it is modified after insertion into VMA tree */15201514 vma_start_write(vma);
+9-2
mm/mremap.c
···238238{239239 spinlock_t *old_ptl, *new_ptl;240240 struct mm_struct *mm = vma->vm_mm;241241+ bool res = false;241242 pmd_t pmd;242243243244 if (!arch_supports_page_table_move())···278277 if (new_ptl != old_ptl)279278 spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);280279281281- /* Clear the pmd */282280 pmd = *old_pmd;281281+282282+ /* Racing with collapse? */283283+ if (unlikely(!pmd_present(pmd) || pmd_leaf(pmd)))284284+ goto out_unlock;285285+ /* Clear the pmd */283286 pmd_clear(old_pmd);287287+ res = true;284288285289 VM_BUG_ON(!pmd_none(*new_pmd));286290287291 pmd_populate(mm, new_pmd, pmd_pgtable(pmd));288292 flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);293293+out_unlock:289294 if (new_ptl != old_ptl)290295 spin_unlock(new_ptl);291296 spin_unlock(old_ptl);292297293293- return true;298298+ return res;294299}295300#else296301static inline bool move_normal_pmd(struct vm_area_struct *vma,
+1-6
mm/shmem.c
···16641664 loff_t i_size;16651665 int order;1666166616671667- if (vma && ((vm_flags & VM_NOHUGEPAGE) ||16681668- test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)))16691669- return 0;16701670-16711671- /* If the hardware/firmware marked hugepage support disabled. */16721672- if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED))16671667+ if (thp_disabled_by_hw() || (vma && vma_thp_disabled(vma, vm_flags)))16731668 return 0;1674166916751670 global_huge = shmem_huge_global_enabled(inode, index, write_end,
···194194 if (IS_ERR(folio))195195 return 0;196196197197- /* offset could point to the middle of a large folio */198198- entry = folio->swap;199199- offset = swp_offset(entry);200197 nr_pages = folio_nr_pages(folio);201198 ret = -nr_pages;202199···206209 */207210 if (!folio_trylock(folio))208211 goto out;212212+213213+ /* offset could point to the middle of a large folio */214214+ entry = folio->swap;215215+ offset = swp_offset(entry);209216210217 need_reclaim = ((flags & TTRS_ANYWAY) ||211218 ((flags & TTRS_UNMAPPED) && !folio_mapped(folio)) ||···2313231223142313 mmap_read_lock(mm);23152314 for_each_vma(vmi, vma) {23162316- if (vma->anon_vma) {23152315+ if (vma->anon_vma && !is_vm_hugetlb_page(vma)) {23172316 ret = unuse_vma(vma, type);23182317 if (ret)23192318 break;
+2-2
mm/vmscan.c
···4963496349644964 blk_finish_plug(&plug);49654965done:49664966- /* kswapd should never fail */49674967- pgdat->kswapd_failures = 0;49664966+ if (sc->nr_reclaimed > reclaimed)49674967+ pgdat->kswapd_failures = 0;49684968}4969496949704970/******************************************************************************
+11-1
net/9p/client.c
···977977struct p9_client *p9_client_create(const char *dev_name, char *options)978978{979979 int err;980980+ static atomic_t seqno = ATOMIC_INIT(0);980981 struct p9_client *clnt;981982 char *client_id;983983+ char *cache_name;982984983985 clnt = kmalloc(sizeof(*clnt), GFP_KERNEL);984986 if (!clnt)···10371035 if (err)10381036 goto close_trans;1039103710381038+ cache_name = kasprintf(GFP_KERNEL,10391039+ "9p-fcall-cache-%u", atomic_inc_return(&seqno));10401040+ if (!cache_name) {10411041+ err = -ENOMEM;10421042+ goto close_trans;10431043+ }10441044+10401045 /* P9_HDRSZ + 4 is the smallest packet header we can have that is10411046 * followed by data accessed from userspace by read10421047 */10431048 clnt->fcall_cache =10441044- kmem_cache_create_usercopy("9p-fcall-cache", clnt->msize,10491049+ kmem_cache_create_usercopy(cache_name, clnt->msize,10451050 0, 0, P9_HDRSZ + 4,10461051 clnt->msize - (P9_HDRSZ + 4),10471052 NULL);1048105310541054+ kfree(cache_name);10491055 return clnt;1050105610511057close_trans:
···14971497 bool skip_sw = tc_skip_sw(fl_flags);14981498 bool skip_hw = tc_skip_hw(fl_flags);1499149915001500- if (tc_act_bind(act->tcfa_flags))15001500+ if (tc_act_bind(act->tcfa_flags)) {15011501+ /* Action is created by classifier and is not15021502+ * standalone. Check that the user did not set15031503+ * any action flags different than the15041504+ * classifier flags, and inherit the flags from15051505+ * the classifier for the compatibility case15061506+ * where no flags were specified at all.15071507+ */15081508+ if ((tc_act_skip_sw(act->tcfa_flags) && !skip_sw) ||15091509+ (tc_act_skip_hw(act->tcfa_flags) && !skip_hw)) {15101510+ NL_SET_ERR_MSG(extack,15111511+ "Mismatch between action and filter offload flags");15121512+ err = -EINVAL;15131513+ goto err;15141514+ }15151515+ if (skip_sw)15161516+ act->tcfa_flags |= TCA_ACT_FLAGS_SKIP_SW;15171517+ if (skip_hw)15181518+ act->tcfa_flags |= TCA_ACT_FLAGS_SKIP_HW;15011519 continue;15201520+ }15211521+15221522+ /* Action is standalone */15021523 if (skip_sw != tc_act_skip_sw(act->tcfa_flags) ||15031524 skip_hw != tc_act_skip_hw(act->tcfa_flags)) {15041525 NL_SET_ERR_MSG(extack,
···201201{202202 int err;203203 u8 sa_dir = attrs[XFRMA_SA_DIR] ? nla_get_u8(attrs[XFRMA_SA_DIR]) : 0;204204+ u16 family = p->sel.family;204205205206 err = -EINVAL;206207 switch (p->family) {···222221 goto out;223222 }224223225225- switch (p->sel.family) {224224+ if (!family && !(p->flags & XFRM_STATE_AF_UNSPEC))225225+ family = p->family;226226+227227+ switch (family) {226228 case AF_UNSPEC:227229 break;228230···11021098 if (!nla)11031099 return -EMSGSIZE;11041100 ap = nla_data(nla);11051105- memcpy(ap, auth, sizeof(struct xfrm_algo_auth));11011101+ strscpy_pad(ap->alg_name, auth->alg_name, sizeof(ap->alg_name));11021102+ ap->alg_key_len = auth->alg_key_len;11031103+ ap->alg_trunc_len = auth->alg_trunc_len;11061104 if (redact_secret && auth->alg_key_len)11071105 memset(ap->alg_key, 0, (auth->alg_key_len + 7) / 8);11081106 else
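Replacing the struct-wide memcpy with per-field copies matters because the bytes after the NUL in `alg_name` were whatever the source buffer happened to hold, and they went out to user space verbatim. `strscpy_pad()` zero-fills that tail; in user space, `strncpy()` has the same pad-with-zeros behaviour for a fixed array (the struct layout below is illustrative):

```c
#include <string.h>

struct algo {
	char name[16];
	unsigned int key_len;
};

/* Field-wise copy: the unused tail of dst->name is zero-filled instead
 * of inheriting stale bytes from src's buffer. */
static void copy_algo(struct algo *dst, const struct algo *src)
{
	/* strncpy zero-pads the remainder of dst->name, like strscpy_pad */
	strncpy(dst->name, src->name, sizeof(dst->name));
	dst->name[sizeof(dst->name) - 1] = '\0';
	dst->key_len = src->key_len;
}
```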
+3
scripts/Kconfig.include
···6565m32-flag := $(cc-option-bit,-m32)6666m64-flag := $(cc-option-bit,-m64)67676868+rustc-version := $(shell,$(srctree)/scripts/rustc-version.sh $(RUSTC))6969+rustc-llvm-version := $(shell,$(srctree)/scripts/rustc-llvm-version.sh $(RUSTC))7070+6871# $(rustc-option,<flag>)6972# Return y if the Rust compiler supports <flag>, n otherwise7073# Calls to this should be guarded so that they are not evaluated if
···11# SPDX-License-Identifier: GPL-2.0-only22menuconfig SOUND33 tristate "Sound card support"44- depends on HAS_IOMEM || UML44+ depends on HAS_IOMEM || INDIRECT_IOMEM55 help66 If you have a sound card in your computer, i.e. if it can say more77 than an occasional beep, say Y.
+22-11
sound/hda/intel-sdw-acpi.c
···5656sdw_intel_scan_controller(struct sdw_intel_acpi_info *info)5757{5858 struct acpi_device *adev = acpi_fetch_acpi_dev(info->handle);5959- u8 count, i;5959+ struct fwnode_handle *fwnode;6060+ unsigned long list;6161+ unsigned int i;6262+ u32 count;6363+ u32 tmp;6064 int ret;61656266 if (!adev)6367 return -EINVAL;64686565- /* Found controller, find links supported */6666- count = 0;6767- ret = fwnode_property_read_u8_array(acpi_fwnode_handle(adev),6868- "mipi-sdw-master-count", &count, 1);6969+ fwnode = acpi_fwnode_handle(adev);69707071 /*7272+ * Found controller, find links supported7373+ *7174 * In theory we could check the number of links supported in7275 * hardware, but in that step we cannot assume SoundWire IP is7376 * powered.···8178 *8279 * We will check the hardware capabilities in the startup() step8380 */8484-8181+ ret = fwnode_property_read_u32(fwnode, "mipi-sdw-manager-list", &tmp);8582 if (ret) {8686- dev_err(&adev->dev,8787- "Failed to read mipi-sdw-master-count: %d\n", ret);8888- return -EINVAL;8383+ ret = fwnode_property_read_u32(fwnode, "mipi-sdw-master-count", &count);8484+ if (ret) {8585+ dev_err(&adev->dev,8686+ "Failed to read mipi-sdw-master-count: %d\n",8787+ ret);8888+ return ret;8989+ }9090+ list = GENMASK(count - 1, 0);9191+ } else {9292+ list = tmp;9393+ count = hweight32(list);8994 }90959196 /* Check count is within bounds */···112101 info->count = count;113102 info->link_mask = 0;114103115115- for (i = 0; i < count; i++) {104104+ for_each_set_bit(i, &list, SDW_INTEL_MAX_LINKS) {116105 if (ctrl_link_mask && !(ctrl_link_mask & BIT(i))) {117106 dev_dbg(&adev->dev,118107 "Link %d masked, will not be enabled\n", i);119108 continue;120109 }121110122122- if (!is_link_enabled(acpi_fwnode_handle(adev), i)) {111111+ if (!is_link_enabled(fwnode, i)) {123112 dev_dbg(&adev->dev,124113 "Link %d not selected in firmware\n", i);125114 continue;
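The ACPI scan now prefers a `mipi-sdw-manager-list` bitmap and falls back to synthesizing one from the legacy count, so the two representations must round-trip: GENMASK(count - 1, 0) builds the mask, hweight32() recovers the count. In user-space form (GENMASK expanded by hand; assumes 0 < count < BITS_PER_LONG):

```c
/* GENMASK(count - 1, 0) sets bits 0..count-1, i.e. (1UL << count) - 1
 * for 0 < count < BITS_PER_LONG. */
static unsigned long count_to_mask(unsigned int count)
{
	return (1UL << count) - 1;
}

/* hweight: the population count of the mask gives the link count back. */
static unsigned int mask_to_count(unsigned long mask)
{
	return (unsigned int)__builtin_popcountl(mask);
}
```

A sparse mask from the firmware (e.g. links 0 and 2 only) simply yields a smaller count, which is exactly what the `for_each_set_bit()` loop in the diff iterates over.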
+19
sound/pci/hda/patch_conexant.c
···303303 CXT_FIXUP_HP_SPECTRE,304304 CXT_FIXUP_HP_GATE_MIC,305305 CXT_FIXUP_MUTE_LED_GPIO,306306+ CXT_FIXUP_HP_ELITEONE_OUT_DIS,306307 CXT_FIXUP_HP_ZBOOK_MUTE_LED,307308 CXT_FIXUP_HEADSET_MIC,308309 CXT_FIXUP_HP_MIC_NO_PRESENCE,···319318{320319 struct conexant_spec *spec = codec->spec;321320 spec->gen.inv_dmic_split = 1;321321+}322322+323323+/* fix widget control pin settings */324324+static void cxt_fixup_update_pinctl(struct hda_codec *codec,325325+ const struct hda_fixup *fix, int action)326326+{327327+ if (action == HDA_FIXUP_ACT_PROBE) {328328+ /* Unset OUT_EN for this Node pin, leaving only HP_EN.329329+ * This is the value stored in the codec register after330330+ * the correct initialization of the previous windows boot.331331+ */332332+ snd_hda_set_pin_ctl_cache(codec, 0x1d, AC_PINCTL_HP_EN);333333+ }322334}323335324336static void cxt5066_increase_mic_boost(struct hda_codec *codec,···985971 .type = HDA_FIXUP_FUNC,986972 .v.func = cxt_fixup_mute_led_gpio,987973 },974974+ [CXT_FIXUP_HP_ELITEONE_OUT_DIS] = {975975+ .type = HDA_FIXUP_FUNC,976976+ .v.func = cxt_fixup_update_pinctl,977977+ },988978 [CXT_FIXUP_HP_ZBOOK_MUTE_LED] = {989979 .type = HDA_FIXUP_FUNC,990980 .v.func = cxt_fixup_hp_zbook_mute_led,···10791061 SND_PCI_QUIRK(0x103c, 0x83b2, "HP EliteBook 840 G5", CXT_FIXUP_HP_DOCK),10801062 SND_PCI_QUIRK(0x103c, 0x83b3, "HP EliteBook 830 G5", CXT_FIXUP_HP_DOCK),10811063 SND_PCI_QUIRK(0x103c, 0x83d3, "HP ProBook 640 G4", CXT_FIXUP_HP_DOCK),10641064+ SND_PCI_QUIRK(0x103c, 0x83e5, "HP EliteOne 1000 G2", CXT_FIXUP_HP_ELITEONE_OUT_DIS),10821065 SND_PCI_QUIRK(0x103c, 0x8402, "HP ProBook 645 G4", CXT_FIXUP_MUTE_LED_GPIO),10831066 SND_PCI_QUIRK(0x103c, 0x8427, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED),10841067 SND_PCI_QUIRK(0x103c, 0x844f, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED),
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 */7788#include <linux/slab.h>
+1-1
sound/usb/line6/capture.h
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 */7788#ifndef CAPTURE_H
+2-2
sound/usb/line6/driver.c
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 */7788#include <linux/kernel.h>···2020#include "midi.h"2121#include "playback.h"22222323-#define DRIVER_AUTHOR "Markus Grabner <grabner@icg.tugraz.at>"2323+#define DRIVER_AUTHOR "Markus Grabner <line6@grabner-graz.at>"2424#define DRIVER_DESC "Line 6 USB Driver"25252626/*
+1-1
sound/usb/line6/driver.h
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 */7788#ifndef DRIVER_H
+1-1
sound/usb/line6/midi.c
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 */7788#include <linux/slab.h>
+1-1
sound/usb/line6/midi.h
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 */7788#ifndef MIDI_H
+1-1
sound/usb/line6/midibuf.c
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 */7788#include <linux/slab.h>
+1-1
sound/usb/line6/midibuf.h
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 */7788#ifndef MIDIBUF_H
+1-1
sound/usb/line6/pcm.c
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 */7788#include <linux/slab.h>
+1-1
sound/usb/line6/pcm.h
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 */7788/*
+1-1
sound/usb/line6/playback.c
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 */7788#include <linux/slab.h>
+1-1
sound/usb/line6/playback.h
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 */7788#ifndef PLAYBACK_H
+1-1
sound/usb/line6/pod.c
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 */7788#include <linux/slab.h>
+1-1
sound/usb/line6/toneport.c
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 * Emil Myhrman (emil.myhrman@gmail.com)77 */88
+1-1
sound/usb/line6/variax.c
···22/*33 * Line 6 Linux USB driver44 *55- * Copyright (C) 2004-2010 Markus Grabner (grabner@icg.tugraz.at)55+ * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)66 */7788#include <linux/slab.h>
+2
sound/usb/mixer_scarlett2.c
···56135613 info->peq_flt_total_count *56145614 SCARLETT2_BIQUAD_COEFFS,56155615 peq_flt_values);56165616+ if (err < 0)56175617+ return err;5616561856175619 for (i = 0, dst_idx = 0; i < info->dsp_input_count; i++) {56185620 src_idx = i *
···55195519 * **-EOPNOTSUPP** if the hash calculation failed or **-EINVAL** if55205520 * invalid arguments are passed.55215521 *55225522- * void *bpf_kptr_xchg(void *map_value, void *ptr)55225522+ * void *bpf_kptr_xchg(void *dst, void *ptr)55235523 * Description55245524- * Exchange kptr at pointer *map_value* with *ptr*, and return the55255525- * old value. *ptr* can be NULL, otherwise it must be a referenced55265526- * pointer which will be released when this helper is called.55245524+ * Exchange kptr at pointer *dst* with *ptr*, and return the old value.55255525+ * *dst* can be map value or local kptr. *ptr* can be NULL, otherwise55265526+ * it must be a referenced pointer which will be released when this helper55275527+ * is called.55275528 * Return55285529 * The old value of kptr (which can be NULL). The returned pointer55295530 * if not NULL, is a reference which must be released using its···60476046 BPF_F_MARK_ENFORCE = (1ULL << 6),60486047};6049604860506050-/* BPF_FUNC_clone_redirect and BPF_FUNC_redirect flags. */60516051-enum {60526052- BPF_F_INGRESS = (1ULL << 0),60536053-};60546054-60556049/* BPF_FUNC_skb_set_tunnel_key and BPF_FUNC_skb_get_tunnel_key flags. */60566050enum {60576051 BPF_F_TUNINFO_IPV6 = (1ULL << 0),···61936197 BPF_F_BPRM_SECUREEXEC = (1ULL << 0),61946198};6195619961966196-/* Flags for bpf_redirect_map helper */62006200+/* Flags for bpf_redirect and bpf_redirect_map helpers */61976201enum {61986198- BPF_F_BROADCAST = (1ULL << 3),61996199- BPF_F_EXCLUDE_INGRESS = (1ULL << 4),62026202+ BPF_F_INGRESS = (1ULL << 0), /* used for skb path */62036203+ BPF_F_BROADCAST = (1ULL << 3), /* used for XDP path */62046204+ BPF_F_EXCLUDE_INGRESS = (1ULL << 4), /* used for XDP path */62056205+#define BPF_F_REDIRECT_FLAGS (BPF_F_INGRESS | BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS)62006206};6201620762026208#define __bpf_md_ptr(type, name) \
+110
tools/testing/radix-tree/maple.c
···3631736317 return 0;3631836318}36319363193632036320+/*3632136321+ * test to check that bulk stores do not use wr_rebalance as the store3632236322+ * type.3632336323+ */3632436324+static inline void check_bulk_rebalance(struct maple_tree *mt)3632536325+{3632636326+ MA_STATE(mas, mt, ULONG_MAX, ULONG_MAX);3632736327+ int max = 10;3632836328+3632936329+ build_full_tree(mt, 0, 2);3633036330+3633136331+ /* erase every entry in the tree */3633236332+ do {3633336333+ /* set up bulk store mode */3633436334+ mas_expected_entries(&mas, max);3633536335+ mas_erase(&mas);3633636336+ MT_BUG_ON(mt, mas.store_type == wr_rebalance);3633736337+ } while (mas_prev(&mas, 0) != NULL);3633836338+3633936339+ mas_destroy(&mas);3634036340+}3634136341+3632036342void farmer_tests(void)3632136343{3632236344 struct maple_node *node;···36348363263634936327 mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN | MT_FLAGS_USE_RCU);3635036328 check_vma_modification(&tree);3632936329+ mtree_destroy(&tree);3633036330+3633136331+ mt_init(&tree);3633236332+ check_bulk_rebalance(&tree);3635136333 mtree_destroy(&tree);36352363343635336335 tree.ma_root = xa_mk_value(0);···3643236406 check_nomem(&tree);3643336407}36434364083640936409+static unsigned long get_last_index(struct ma_state *mas)3641036410+{3641136411+ struct maple_node *node = mas_mn(mas);3641236412+ enum maple_type mt = mte_node_type(mas->node);3641336413+ unsigned long *pivots = ma_pivots(node, mt);3641436414+ unsigned long last_index = mas_data_end(mas);3641536415+3641636416+ BUG_ON(last_index == 0);3641736417+3641836418+ return pivots[last_index - 1] + 1;3641936419+}3642036420+3642136421+/*3642236422+ * Assert that we handle spanning stores that consume the entirety of the right3642336423+ * leaf node correctly.3642436424+ */3642536425+static void test_spanning_store_regression(void)3642636426+{3642736427+ unsigned long from = 0, to = 0;3642836428+ DEFINE_MTREE(tree);3642936429+ MA_STATE(mas, &tree, 0, 
0);3643036430+3643136431+ /*3643236432+ * Build a 3-level tree. We require a parent node below the root node3643336433+ * and 2 leaf nodes under it, so we can span the entirety of the right3643436434+ * hand node.3643536435+ */3643636436+ build_full_tree(&tree, 0, 3);3643736437+3643836438+ /* Descend into position at depth 2. */3643936439+ mas_reset(&mas);3644036440+ mas_start(&mas);3644136441+ mas_descend(&mas);3644236442+ mas_descend(&mas);3644336443+3644436444+ /*3644536445+ * We need to establish a tree like the below.3644636446+ *3644736447+ * Then we can try a store in [from, to] which results in a spanned3644836448+ * store across nodes B and C, with the maple state at the time of the3644936449+ * write being such that only the subtree at A and below is considered.3645036450+ *3645136451+ * Height3645236452+ * 0 Root Node3645336453+ * / \3645436454+ * pivot = to / \ pivot = ULONG_MAX3645536455+ * / \3645636456+ * 1 A [-----] ...3645736457+ * / \3645836458+ * pivot = from / \ pivot = to3645936459+ * / \3646036460+ * 2 (LEAVES) B [-----] [-----] C3646136461+ * ^--- Last pivot to.3646236462+ */3646336463+ while (true) {3646436464+ unsigned long tmp = get_last_index(&mas);3646536465+3646636466+ if (mas_next_sibling(&mas)) {3646736467+ from = tmp;3646836468+ to = mas.max;3646936469+ } else {3647036470+ break;3647136471+ }3647236472+ }3647336473+3647436474+ BUG_ON(from == 0 && to == 0);3647536475+3647636476+ /* Perform the store. */3647736477+ mas_set_range(&mas, from, to);3647836478+ mas_store_gfp(&mas, xa_mk_value(0xdead), GFP_KERNEL);3647936479+3648036480+ /* If the regression occurs, the validation will fail. */3648136481+ mt_validate(&tree);3648236482+3648336483+ /* Cleanup. 
*/3648436484+ __mt_destroy(&tree);3648536485+}3648636486+3648736487+static void regression_tests(void)3648836488+{3648936489+ test_spanning_store_regression();3649036490+}3649136491+3643536492void maple_tree_tests(void)3643636493{3643736494#if !defined(BENCH)3649536495+ regression_tests();3643836496 farmer_tests();3643936497#endif3644036498 maple_tree_seed();
···77#include "errno.h"88#include <stdbool.h>991010+/* Should use BTF_FIELDS_MAX, but it is not always available in vmlinux.h,1111+ * so use the hard-coded number as a workaround.1212+ */1313+#define CPUMASK_KPTR_FIELDS_MAX 111414+1015int err;11161217#define private(name) SEC(".bss." #name) __attribute__((aligned(8)))
···760760 : __clobber_all);761761}762762763763+SEC("socket")764764+/* Note the flag, see verifier.c:opt_subreg_zext_lo32_rnd_hi32() */765765+__flag(BPF_F_TEST_RND_HI32)766766+__success767767+/* This test was added because of a bug in verifier.c:sync_linked_regs(),768768+ * upon range propagation it destroyed subreg_def marks for registers.769769+ * The subreg_def mark is used to decide whether zero extension instructions770770+ * are needed when register is read. When BPF_F_TEST_RND_HI32 is set it771771+ * also causes generation of statements to randomize upper halves of772772+ * read registers.773773+ *774774+ * The test is written in a way to return an upper half of a register775775+ * that is affected by range propagation and must have it's subreg_def776776+ * preserved. This gives a return value of 0 and leads to undefined777777+ * return value if subreg_def mark is not preserved.778778+ */779779+__retval(0)780780+/* Check that verifier believes r1/r0 are zero at exit */781781+__log_level(2)782782+__msg("4: (77) r1 >>= 32 ; R1_w=0")783783+__msg("5: (bf) r0 = r1 ; R0_w=0 R1_w=0")784784+__msg("6: (95) exit")785785+__msg("from 3 to 4")786786+__msg("4: (77) r1 >>= 32 ; R1_w=0")787787+__msg("5: (bf) r0 = r1 ; R0_w=0 R1_w=0")788788+__msg("6: (95) exit")789789+/* Verify that statements to randomize upper half of r1 had not been790790+ * generated.791791+ */792792+__xlated("call unknown")793793+__xlated("r0 &= 2147483647")794794+__xlated("w1 = w0")795795+/* This is how disasm.c prints BPF_ZEXT_REG at the moment, x86 and arm796796+ * are the only CI archs that do not need zero extension for subregs.797797+ */798798+#if !defined(__TARGET_ARCH_x86) && !defined(__TARGET_ARCH_arm64)799799+__xlated("w1 = w1")800800+#endif801801+__xlated("if w0 < 0xa goto pc+0")802802+__xlated("r1 >>= 32")803803+__xlated("r0 = r1")804804+__xlated("exit")805805+__naked void linked_regs_and_subreg_def(void)806806+{807807+ asm volatile (808808+ "call %[bpf_ktime_get_ns];"809809+ /* make sure r0 
is in 32-bit range, otherwise w1 = w0 won't810810+ * assign same IDs to registers.811811+ */812812+ "r0 &= 0x7fffffff;"813813+ /* link w1 and w0 via ID */814814+ "w1 = w0;"815815+ /* 'if' statement propagates range info from w0 to w1,816816+ * but should not affect w1->subreg_def property.817817+ */818818+ "if w0 < 10 goto +0;"819819+ /* r1 is read here, on archs that require subreg zero820820+ * extension this would cause zext patch generation.821821+ */822822+ "r1 >>= 32;"823823+ "r0 = r1;"824824+ "exit;"825825+ :826826+ : __imm(bpf_ktime_get_ns)827827+ : __clobber_all);828828+}829829+763830char _license[] SEC("license") = "GPL";
+22-12
tools/testing/selftests/bpf/testing_helpers.c
···367367 return syscall(__NR_delete_module, name, flags);368368}369369370370-int unload_bpf_testmod(bool verbose)370370+int unload_module(const char *name, bool verbose)371371{372372 int ret, cnt = 0;373373···375375 fprintf(stdout, "Failed to trigger kernel-side RCU sync!\n");376376377377 for (;;) {378378- ret = delete_module("bpf_testmod", 0);378378+ ret = delete_module(name, 0);379379 if (!ret || errno != EAGAIN)380380 break;381381 if (++cnt > 10000) {382382- fprintf(stdout, "Unload of bpf_testmod timed out\n");382382+ fprintf(stdout, "Unload of %s timed out\n", name);383383 break;384384 }385385 usleep(100);···388388 if (ret) {389389 if (errno == ENOENT) {390390 if (verbose)391391- fprintf(stdout, "bpf_testmod.ko is already unloaded.\n");391391+ fprintf(stdout, "%s.ko is already unloaded.\n", name);392392 return -1;393393 }394394- fprintf(stdout, "Failed to unload bpf_testmod.ko from kernel: %d\n", -errno);394394+ fprintf(stdout, "Failed to unload %s.ko from kernel: %d\n", name, -errno);395395 return -1;396396 }397397 if (verbose)398398- fprintf(stdout, "Successfully unloaded bpf_testmod.ko.\n");398398+ fprintf(stdout, "Successfully unloaded %s.ko.\n", name);399399 return 0;400400}401401402402-int load_bpf_testmod(bool verbose)402402+int load_module(const char *path, bool verbose)403403{404404 int fd;405405406406 if (verbose)407407- fprintf(stdout, "Loading bpf_testmod.ko...\n");407407+ fprintf(stdout, "Loading %s...\n", path);408408409409- fd = open("bpf_testmod.ko", O_RDONLY);409409+ fd = open(path, O_RDONLY);410410 if (fd < 0) {411411- fprintf(stdout, "Can't find bpf_testmod.ko kernel module: %d\n", -errno);411411+ fprintf(stdout, "Can't find %s kernel module: %d\n", path, -errno);412412 return -ENOENT;413413 }414414 if (finit_module(fd, "", 0)) {415415- fprintf(stdout, "Failed to load bpf_testmod.ko into the kernel: %d\n", -errno);415415+ fprintf(stdout, "Failed to load %s into the kernel: %d\n", path, -errno);416416 close(fd);417417 return -EINVAL;418418 
}419419 close(fd);420420421421 if (verbose)422422- fprintf(stdout, "Successfully loaded bpf_testmod.ko.\n");422422+ fprintf(stdout, "Successfully loaded %s.\n", path);423423 return 0;424424+}425425+426426+int unload_bpf_testmod(bool verbose)427427+{428428+ return unload_module("bpf_testmod", verbose);429429+}430430+431431+int load_bpf_testmod(bool verbose)432432+{433433+ return load_module("bpf_testmod.ko", verbose);424434}425435426436/*
···6060{6161 int i;62626363- for (i = 0; i < sizeof(mangled_cpuids); i++) {6363+ for (i = 0; i < ARRAY_SIZE(mangled_cpuids); i++) {6464 if (mangled_cpuids[i].function == entrie->function &&6565 mangled_cpuids[i].index == entrie->index)6666 return true;
+1-1
tools/testing/selftests/mm/khugepaged.c
···10911091 fprintf(stderr, "\n\t\"file,all\" mem_type requires kernel built with\n");10921092 fprintf(stderr, "\tCONFIG_READ_ONLY_THP_FOR_FS=y\n");10931093 fprintf(stderr, "\n\tif [dir] is a (sub)directory of a tmpfs mount, tmpfs must be\n");10941094- fprintf(stderr, "\tmounted with huge=madvise option for khugepaged tests to work\n");10941094+ fprintf(stderr, "\tmounted with huge=advise option for khugepaged tests to work\n");10951095 fprintf(stderr, "\n\tSupported Options:\n");10961096 fprintf(stderr, "\t\t-h: This help message.\n");10971097 fprintf(stderr, "\t\t-s: mTHP size, expressed as page order.\n");
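The usage text above is corrected to ``huge=advise``: tmpfs accepts ``never``, ``always``, ``within_size`` and ``advise`` for its ``huge=`` mount option, and ``madvise`` is not among them. A config sketch with a hypothetical mount point (requires root):

```shell
# Hypothetical mount point for the khugepaged "file,all" tests.
mkdir -p /mnt/khugepaged-test
mount -t tmpfs -o huge=advise tmpfs /mnt/khugepaged-test
```

With ``huge=advise``, huge pages are only used for mappings that request them via ``madvise(MADV_HUGEPAGE)``, which is what the khugepaged selftest relies on.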
+3-2
tools/testing/selftests/mm/uffd-common.c
···1818unsigned long long *count_verify;1919uffd_test_ops_t *uffd_test_ops;2020uffd_test_case_ops_t *uffd_test_case_ops;2121-atomic_bool ready_for_fork;2121+pthread_barrier_t ready_for_fork;22222323static int uffd_mem_fd_create(off_t mem_size, bool hugetlb)2424{···519519 pollfd[1].fd = pipefd[cpu*2];520520 pollfd[1].events = POLLIN;521521522522- ready_for_fork = true;522522+ /* Ready for parent thread to fork */523523+ pthread_barrier_wait(&ready_for_fork);523524524525 for (;;) {525526 ret = poll(pollfd, 2, -1);