···
 illegal Windows/NTFS/SMB characters to a remap range (this mount parameter
 is the default for SMB3). This remap (``mapposix``) range is also
 compatible with Mac (and "Services for Mac" on some older Windows).
+When POSIX Extensions for SMB 3.1.1 are negotiated, remapping is automatically
+disabled.

 CIFS VFS Mount Options
 ======================
+77
Documentation/block/ublk.rst
···
 parameter of `struct ublk_param_segment` with backend for avoiding
 unnecessary IO split, which usually hurts io_uring performance.

+Auto Buffer Registration
+------------------------
+
+The ``UBLK_F_AUTO_BUF_REG`` feature automatically handles buffer registration
+and unregistration for I/O requests, which simplifies buffer management and
+reduces overhead in the ublk server implementation.
+
+This is another feature flag for using zero copy, and it is compatible with
+``UBLK_F_SUPPORT_ZERO_COPY``.
+
+Feature Overview
+~~~~~~~~~~~~~~~~
+
+This feature automatically registers request buffers to the io_uring context
+before delivering I/O commands to the ublk server and unregisters them when
+completing I/O commands. This eliminates the need for manual buffer
+registration/unregistration via the ``UBLK_IO_REGISTER_IO_BUF`` and
+``UBLK_IO_UNREGISTER_IO_BUF`` commands, so I/O handling in the ublk server
+no longer depends on those two uring_cmd operations.
+
+I/Os can't be issued concurrently to io_uring if there is any dependency
+among them. Removing the dependency on the buffer registration and
+unregistration commands therefore not only simplifies the ublk server
+implementation, but also makes concurrent I/O handling possible.
+
+Usage Requirements
+~~~~~~~~~~~~~~~~~~
+
+1. The ublk server must create a sparse buffer table on the same ``io_ring_ctx``
+   used for ``UBLK_IO_FETCH_REQ`` and ``UBLK_IO_COMMIT_AND_FETCH_REQ``. If the
+   uring_cmd is issued on a different ``io_ring_ctx``, manual buffer
+   unregistration is required.
+
+2. Buffer registration data must be passed via the uring_cmd's ``sqe->addr``
+   with the following structure::
+
+	struct ublk_auto_buf_reg {
+		__u16 index;      /* Buffer index for registration */
+		__u8  flags;      /* Registration flags */
+		__u8  reserved0;  /* Reserved for future use */
+		__u32 reserved1;  /* Reserved for future use */
+	};
+
+   ``ublk_auto_buf_reg_to_sqe_addr()`` converts the above structure into
+   ``sqe->addr``.
+
+3. All reserved fields in ``ublk_auto_buf_reg`` must be zeroed.
+
+4. Optional flags can be passed via ``ublk_auto_buf_reg.flags``.
+
+Fallback Behavior
+~~~~~~~~~~~~~~~~~
+
+If auto buffer registration fails:
+
+1. When ``UBLK_AUTO_BUF_REG_FALLBACK`` is enabled:
+
+   - The uring_cmd is completed
+   - ``UBLK_IO_F_NEED_REG_BUF`` is set in ``ublksrv_io_desc.op_flags``
+   - The ublk server must handle the failure manually, for example by
+     registering the buffer itself or by using the user copy feature to
+     retrieve the data for the ublk I/O
+
+2. If fallback is not enabled:
+
+   - The ublk I/O request fails silently
+   - The uring_cmd won't be completed
+
+Limitations
+~~~~~~~~~~~
+
+- Requires the same ``io_ring_ctx`` for all operations
+- May require manual buffer management in fallback cases
+- The io_ring_ctx buffer table has a maximum size of 16K, which may not be
+  enough when too many ublk devices are handled by a single io_ring_ctx and
+  each one has a very large queue depth

 References
 ==========
···
-Device-tree bindings for persistent memory regions
----------------------------------------------------
-
-Persistent memory refers to a class of memory devices that are:
-
- a) Usable as main system memory (i.e. cacheable), and
- b) Retain their contents across power failure.
-
-Given b) it is best to think of persistent memory as a kind of memory mapped
-storage device. To ensure data integrity the operating system needs to manage
-persistent regions separately to the normal memory pool. To aid with that this
-binding provides a standardised interface for discovering where persistent
-memory regions exist inside the physical address space.
-
-Bindings for the region nodes:
------------------------------
-
-Required properties:
- - compatible = "pmem-region"
-
- - reg = <base, size>;
-	The reg property should specify an address range that is
-	translatable to a system physical address range. This address
-	range should be mappable as normal system memory would be
-	(i.e cacheable).
-
-	If the reg property contains multiple address ranges
-	each address range will be treated as though it was specified
-	in a separate device node. Having multiple address ranges in a
-	node implies no special relationship between the two ranges.
-
-Optional properties:
- - Any relevant NUMA associativity properties for the target platform.
-
- - volatile; This property indicates that this region is actually
-	backed by non-persistent memory. This lets the OS know that it
-	may skip the cache flushes required to ensure data is made
-	persistent after a write.
-
-	If this property is absent then the OS must assume that the region
-	is backed by non-volatile memory.
-
-Examples:
----------
-
-	/*
-	 * This node specifies one 4KB region spanning from
-	 * 0x5000 to 0x5fff that is backed by non-volatile memory.
-	 */
-	pmem@5000 {
-		compatible = "pmem-region";
-		reg = <0x00005000 0x00001000>;
-	};
-
-	/*
-	 * This node specifies two 4KB regions that are backed by
-	 * volatile (normal) memory.
-	 */
-	pmem@6000 {
-		compatible = "pmem-region";
-		reg = < 0x00006000 0x00001000
-			0x00008000 0x00001000 >;
-		volatile;
-	};
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pmem-region.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+maintainers:
+  - Oliver O'Halloran <oohall@gmail.com>
+
+title: Persistent Memory Regions
+
+description: |
+  Persistent memory refers to a class of memory devices that are:
+
+  a) Usable as main system memory (i.e. cacheable), and
+  b) Retain their contents across power failure.
+
+  Given b) it is best to think of persistent memory as a kind of memory mapped
+  storage device. To ensure data integrity the operating system needs to manage
+  persistent regions separately to the normal memory pool. To aid with that this
+  binding provides a standardised interface for discovering where persistent
+  memory regions exist inside the physical address space.
+
+properties:
+  compatible:
+    const: pmem-region
+
+  reg:
+    maxItems: 1
+
+  volatile:
+    description:
+      Indicates the region is volatile (non-persistent) and the OS can skip
+      cache flushes for writes
+    type: boolean
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    pmem@5000 {
+      compatible = "pmem-region";
+      reg = <0x00005000 0x00001000>;
+    };
+3-1
Documentation/filesystems/proc.rst
···
    ms   may share
    gd   stack segment growns down
    pf   pure PFN range
-   dw   disabled write to the mapped file
    lo   pages are locked in memory
    io   memory mapped I/O area
    sr   sequential read advise provided
···
    mt   arm64 MTE allocation tags are enabled
    um   userfaultfd missing tracking
    uw   userfaultfd wr-protect tracking
+   ui   userfaultfd minor fault
    ss   shadow/guarded control stack page
    sl   sealed
+   lf   lock on fault pages
+   dp   always lazily freeable mapping
    ==   =======================================

 Note that there is no guarantee that every flag and associated mnemonic will
+3
Documentation/netlink/specs/ethtool.yaml
···
 doc: Partial family for Ethtool Netlink.
 uapi-header: linux/ethtool_netlink_generated.h

+c-family-name: ethtool-genl-name
+c-version-name: ethtool-genl-version
+
 definitions:
   -
     name: udp-tunnel-type
···
  AMD		Tom Lendacky <thomas.lendacky@amd.com>
  Ampere	Darren Hart <darren@os.amperecomputing.com>
  ARM		Catalin Marinas <catalin.marinas@arm.com>
+ IBM Power	Madhavan Srinivasan <maddy@linux.ibm.com>
  IBM Z		Christian Borntraeger <borntraeger@de.ibm.com>
  Intel		Tony Luck <tony.luck@intel.com>
  Qualcomm	Trilok Soni <quic_tsoni@quicinc.com>
+41-42
MAINTAINERS
···
 X:	include/uapi/

 ABIT UGURU 1,2 HARDWARE MONITOR DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-hwmon@vger.kernel.org
 S:	Maintained
 F:	drivers/hwmon/abituguru.c
···
 F:	drivers/platform/x86/quickstart.c

 ACPI SERIAL MULTI INSTANTIATE DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 F:	drivers/platform/x86/serial-multi-instantiate.c
···
 F:	arch/x86/kernel/amd_node.c

 AMD PDS CORE DRIVER
-M:	Shannon Nelson <shannon.nelson@amd.com>
 M:	Brett Creeley <brett.creeley@amd.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
···
 F:	scripts/make_fit.py

 ARM64 PLATFORM DRIVERS
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 M:	Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
 R:	Bryan O'Donoghue <bryan.odonoghue@linaro.org>
 L:	platform-driver-x86@vger.kernel.org
···
 F:	drivers/platform/x86/eeepc*.c

 ASUS TF103C DOCK DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86.git
···
 F:	drivers/usb/chipidea/

 CHIPONE ICN8318 I2C TOUCHSCREEN DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/input/touchscreen/chipone,icn8318.yaml
 F:	drivers/input/touchscreen/chipone_icn8318.c

 CHIPONE ICN8505 I2C TOUCHSCREEN DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 S:	Maintained
 F:	drivers/input/touchscreen/chipone_icn8505.c
···
 F:	include/linux/smpboot.h
 F:	kernel/cpu.c
 F:	kernel/smpboot.*
+F:	rust/helper/cpu.c
 F:	rust/kernel/cpu.rs
···
 F:	include/linux/devfreq-event.h

 DEVICE RESOURCE MANAGEMENT HELPERS
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 R:	Matti Vaittinen <mazziesaccount@gmail.com>
 S:	Maintained
 F:	include/linux/devm-helpers.h
···
 F:	include/drm/gud.h

 DRM DRIVER FOR GRAIN MEDIA GM12U320 PROJECTORS
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 S:	Maintained
 T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
 F:	drivers/gpu/drm/tiny/gm12u320.c
···
 F:	drivers/gpu/drm/vkms/

 DRM DRIVER FOR VIRTUALBOX VIRTUAL GPU
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
 T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
···
 F:	include/drm/drm_panel.h

 DRM PRIVACY-SCREEN CLASS
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
 T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
···

 FWCTL PDS DRIVER
 M:	Brett Creeley <brett.creeley@amd.com>
-R:	Shannon Nelson <shannon.nelson@amd.com>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	drivers/fwctl/pds/
···
 F:	Documentation/devicetree/bindings/connector/gocontroll,moduline-module-slot.yaml

 GOODIX TOUCHSCREEN
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 S:	Maintained
 F:	drivers/input/touchscreen/goodix*
···
 K:	[gG]oogle.?[tT]ensor

 GPD POCKET FAN DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 F:	drivers/platform/x86/gpd-pocket-fan.c
···
 F:	drivers/i2c/busses/i2c-viapro.c

 I2C/SMBUS INTEL CHT WHISKEY COVE PMIC DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-i2c@vger.kernel.org
 S:	Maintained
 F:	drivers/i2c/busses/i2c-cht-wc.c
···
 F:	sound/soc/intel/

 INTEL ATOMISP2 DUMMY / POWER-MANAGEMENT DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 F:	drivers/platform/x86/intel/atomisp2/pm.c

 INTEL ATOMISP2 LED DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 F:	drivers/platform/x86/intel/atomisp2/led.c
···
 F:	drivers/platform/x86/lenovo-wmi-hotkey-utilities.c

 LETSKETCH HID TABLET DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git
···
 F:	drivers/ata/sata_gemini.h

 LIBATA SATA AHCI PLATFORM devices support
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-ide@vger.kernel.org
 S:	Maintained
 F:	drivers/ata/ahci_platform.c
···
 L:	nvdimm@lists.linux.dev
 S:	Supported
 Q:	https://patchwork.kernel.org/project/linux-nvdimm/list/
-F:	Documentation/devicetree/bindings/pmem/pmem-region.txt
+F:	Documentation/devicetree/bindings/pmem/pmem-region.yaml
 F:	drivers/nvdimm/of_pmem.c

 LIBNVDIMM: NON-VOLATILE MEMORY DEVICE SUBSYSTEM
···
 F:	block/partitions/ldm.*

 LOGITECH HID GAMING KEYBOARDS
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git
···
 F:	drivers/power/supply/max17040_battery.c

 MAXIM MAX17042 FAMILY FUEL GAUGE DRIVERS
-R:	Hans de Goede <hdegoede@redhat.com>
+R:	Hans de Goede <hansg@kernel.org>
 R:	Krzysztof Kozlowski <krzk@kernel.org>
 R:	Marek Szyprowski <m.szyprowski@samsung.com>
 R:	Sebastian Krzyszkowiak <sebastian.krzyszkowiak@puri.sm>
···
 F:	drivers/net/ethernet/mellanox/mlxfw/

 MELLANOX HARDWARE PLATFORM SUPPORT
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 M:	Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
 M:	Vadim Pasternak <vadimp@nvidia.com>
 L:	platform-driver-x86@vger.kernel.org
···
 R:	Nico Pache <npache@redhat.com>
 R:	Ryan Roberts <ryan.roberts@arm.com>
 R:	Dev Jain <dev.jain@arm.com>
+R:	Barry Song <baohua@kernel.org>
 L:	linux-mm@kvack.org
 S:	Maintained
 W:	http://www.linux-mm.org
···
 F:	drivers/platform/surface/surface_gpe.c

 MICROSOFT SURFACE HARDWARE PLATFORM SUPPORT
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 M:	Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
 M:	Maximilian Luz <luzmaximilian@gmail.com>
 L:	platform-driver-x86@vger.kernel.org
···
 F:	tools/testing/selftests/nolibc/

 NOVATEK NVT-TS I2C TOUCHSCREEN DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/input/touchscreen/novatek,nvt-ts.yaml
···
 F:	include/crypto/pcrypt.h

 PDS DSC VIRTIO DATA PATH ACCELERATOR
-R:	Shannon Nelson <shannon.nelson@amd.com>
+R:	Brett Creeley <brett.creeley@amd.com>
 F:	drivers/vdpa/pds/

 PECI HARDWARE MONITORING DRIVERS
···
 F:	include/linux/peci.h

 PENSANDO ETHERNET DRIVERS
-M:	Shannon Nelson <shannon.nelson@amd.com>
 M:	Brett Creeley <brett.creeley@amd.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
···
 R:	David Vernet <void@manifault.com>
 R:	Andrea Righi <arighi@nvidia.com>
 R:	Changwoo Min <changwoo@igalia.com>
-L:	linux-kernel@vger.kernel.org
+L:	sched-ext@lists.linux.dev
 S:	Maintained
 W:	https://github.com/sched-ext/scx
 T:	git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext.git
···
 K:	[^@]sifive

 SILEAD TOUCHSCREEN DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
···
 F:	drivers/i3c/master/svc-i3c-master.c

 SIMPLEFB FB DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-fbdev@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/display/simple-framebuffer.yaml
···
 F:	drivers/hwmon/emc2103.c

 SMSC SCH5627 HARDWARE MONITOR DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-hwmon@vger.kernel.org
 S:	Supported
 F:	Documentation/hwmon/sch5627.rst
···
 F:	Documentation/process/stable-kernel-rules.rst

 STAGING - ATOMISP DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 R:	Sakari Ailus <sakari.ailus@linux.intel.com>
 L:	linux-media@vger.kernel.org
···
 F:	drivers/net/ethernet/i825xx/sun3*

 SUN4I LOW RES ADC ATTACHED TABLET KEYS DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/input/allwinner,sun4i-a10-lradc-keys.yaml
···
 F:	drivers/hid/usbhid/

 USB INTEL XHCI ROLE MUX DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-usb@vger.kernel.org
 S:	Maintained
 F:	drivers/usb/roles/intel-xhci-usb-role-switch.c
···
 F:	drivers/usb/typec/mux/intel_pmc_mux.c

 USB TYPEC PI3USB30532 MUX DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-usb@vger.kernel.org
 S:	Maintained
 F:	drivers/usb/typec/mux/pi3usb30532.c
···

 USB VIDEO CLASS
 M:	Laurent Pinchart <laurent.pinchart@ideasonboard.com>
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-media@vger.kernel.org
 S:	Maintained
 W:	http://www.ideasonboard.org/uvc/
···
 F:	sound/virtio/*

 VIRTUAL BOX GUEST DEVICE DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 M:	Arnd Bergmann <arnd@arndb.de>
 M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 S:	Maintained
···
 F:	include/uapi/linux/vbox*.h

 VIRTUAL BOX SHARED FOLDER VFS DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-fsdevel@vger.kernel.org
 S:	Maintained
 F:	fs/vboxsf/*
···

 WACOM PROTOCOL 4 SERIAL TABLETS
 M:	Julian Squires <julian@cipht.net>
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 S:	Maintained
 F:	drivers/input/tablet/wacom_serial4.c
···
 F:	include/uapi/linux/wwan.h

 X-POWERS AXP288 PMIC DRIVERS
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 S:	Maintained
 F:	drivers/acpi/pmic/intel_pmic_xpower.c
 N:	axp288
···
 F:	arch/x86/mm/

 X86 PLATFORM ANDROID TABLETS DSDT FIXUP DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86.git
 F:	drivers/platform/x86/x86-android-tablets/

 X86 PLATFORM DRIVERS
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 M:	Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
···
 static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu)
 {
-	__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
+	__vcpu_assign_sys_reg(vcpu, ZCR_EL1, read_sysreg_el1(SYS_ZCR));
 	/*
 	 * On saving/restoring guest sve state, always use the maximum VL for
 	 * the guest. The layout of the data when saving the sve state depends
···
 	has_fpmr = kvm_has_fpmr(kern_hyp_va(vcpu->kvm));
 	if (has_fpmr)
-		__vcpu_sys_reg(vcpu, FPMR) = read_sysreg_s(SYS_FPMR);
+		__vcpu_assign_sys_reg(vcpu, FPMR, read_sysreg_s(SYS_FPMR));

 	if (system_supports_sve())
 		__hyp_sve_restore_host();
+2-2
arch/arm64/kvm/hyp/vhe/switch.c
···
 	 */
 	val = read_sysreg_el0(SYS_CNTP_CVAL);
 	if (map.direct_ptimer == vcpu_ptimer(vcpu))
-		__vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = val;
+		__vcpu_assign_sys_reg(vcpu, CNTP_CVAL_EL0, val);
 	if (map.direct_ptimer == vcpu_hptimer(vcpu))
-		__vcpu_sys_reg(vcpu, CNTHP_CVAL_EL2) = val;
+		__vcpu_assign_sys_reg(vcpu, CNTHP_CVAL_EL2, val);

 	offset = read_sysreg_s(SYS_CNTPOFF_EL2);
···
 /*
  * Used to name C functions called from asm
  */
-#ifdef CONFIG_PPC_KERNEL_PCREL
+#if defined(__powerpc64__) && defined(CONFIG_PPC_KERNEL_PCREL)
 #define CFUNC(name) name@notoc
 #else
 #define CFUNC(name) name
···
 ldflags-y += $(filter-out $(CC_AUTO_VAR_INIT_ZERO_ENABLER) $(CC_FLAGS_FTRACE) -Wa$(comma)%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))

 CC32FLAGS := -m32
-CC32FLAGSREMOVE := -mcmodel=medium -mabi=elfv1 -mabi=elfv2 -mcall-aixdesc
+CC32FLAGSREMOVE := -mcmodel=medium -mabi=elfv1 -mabi=elfv2 -mcall-aixdesc -mpcrel
 ifdef CONFIG_CC_IS_CLANG
 # This flag is supported by clang for 64-bit but not 32-bit so it will cause
 # an unused command line flag warning for this file.
+1-1
arch/x86/Kconfig
···
 	select ARCH_HAS_DMA_OPS			if GART_IOMMU || XEN
 	select ARCH_HAS_EARLY_DEBUG		if KGDB
 	select ARCH_HAS_ELF_RANDOMIZE
-	select ARCH_HAS_EXECMEM_ROX		if X86_64
+	select ARCH_HAS_EXECMEM_ROX		if X86_64 && STRICT_MODULE_RWX
 	select ARCH_HAS_FAST_MULTIPLIER
 	select ARCH_HAS_FORTIFY_SOURCE
 	select ARCH_HAS_GCOV_PROFILE_ALL
+8
arch/x86/include/asm/module.h
···
 #include <asm-generic/module.h>
 #include <asm/orc_types.h>

+struct its_array {
+#ifdef CONFIG_MITIGATION_ITS
+	void **pages;
+	int num;
+#endif
+};
+
 struct mod_arch_specific {
 #ifdef CONFIG_UNWINDER_ORC
 	unsigned int num_orcs;
 	int *orc_unwind_ip;
 	struct orc_entry *orc_unwind;
 #endif
+	struct its_array its_pages;
 };

 #endif /* _ASM_X86_MODULE_H */
+22
arch/x86/include/asm/sighandling.h
···
 int x64_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs);
 int x32_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs);

+/*
+ * To prevent immediate repeat of single step trap on return from SIGTRAP
+ * handler if the trap flag (TF) is set without an external debugger attached,
+ * clear the software event flag in the augmented SS, ensuring no single-step
+ * trap is pending upon ERETU completion.
+ *
+ * Note, this function should be called in sigreturn() before the original
+ * state is restored to make sure the TF is read from the entry frame.
+ */
+static __always_inline void prevent_single_step_upon_eretu(struct pt_regs *regs)
+{
+	/*
+	 * If the trap flag (TF) is set, i.e., the sigreturn() SYSCALL instruction
+	 * is being single-stepped, do not clear the software event flag in the
+	 * augmented SS, thus a debugger won't skip over the following instruction.
+	 */
+#ifdef CONFIG_X86_FRED
+	if (!(regs->flags & X86_EFLAGS_TF))
+		regs->fred_ss.swevent = 0;
+#endif
+}
+
 #endif /* _ASM_X86_SIGHANDLING_H */
···
 	.send_call_func_single_ipi = native_send_call_func_single_ipi,
 };
 EXPORT_SYMBOL_GPL(smp_ops);
+
+int arch_cpu_rescan_dead_smt_siblings(void)
+{
+	enum cpuhp_smt_control old = cpu_smt_control;
+	int ret;
+
+	/*
+	 * If SMT has been disabled and SMT siblings are in HLT, bring them back
+	 * online and offline them again so that they end up in MWAIT proper.
+	 *
+	 * Called with hotplug enabled.
+	 */
+	if (old != CPU_SMT_DISABLED && old != CPU_SMT_FORCE_DISABLED)
+		return 0;
+
+	ret = cpuhp_smt_enable();
+	if (ret)
+		return ret;
+
+	ret = cpuhp_smt_disable(old);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(arch_cpu_rescan_dead_smt_siblings);
+7-47
arch/x86/kernel/smpboot.c
···
 	local_irq_disable();
 }

+/*
+ * We need to flush the caches before going to sleep, lest we have
+ * dirty data in our caches when we come back up.
+ */
 void __noreturn mwait_play_dead(unsigned int eax_hint)
 {
 	struct mwait_cpu_dead *md = this_cpu_ptr(&mwait_cpu_dead);
···
 			native_halt();
 		}
 	}
-}
-
-/*
- * We need to flush the caches before going to sleep, lest we have
- * dirty data in our caches when we come back up.
- */
-static inline void mwait_play_dead_cpuid_hint(void)
-{
-	unsigned int eax, ebx, ecx, edx;
-	unsigned int highest_cstate = 0;
-	unsigned int highest_subcstate = 0;
-	int i;
-
-	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
-	    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
-		return;
-	if (!this_cpu_has(X86_FEATURE_MWAIT))
-		return;
-	if (!this_cpu_has(X86_FEATURE_CLFLUSH))
-		return;
-
-	eax = CPUID_LEAF_MWAIT;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-
-	/*
-	 * eax will be 0 if EDX enumeration is not valid.
-	 * Initialized below to cstate, sub_cstate value when EDX is valid.
-	 */
-	if (!(ecx & CPUID5_ECX_EXTENSIONS_SUPPORTED)) {
-		eax = 0;
-	} else {
-		edx >>= MWAIT_SUBSTATE_SIZE;
-		for (i = 0; i < 7 && edx; i++, edx >>= MWAIT_SUBSTATE_SIZE) {
-			if (edx & MWAIT_SUBSTATE_MASK) {
-				highest_cstate = i;
-				highest_subcstate = edx & MWAIT_SUBSTATE_MASK;
-			}
-		}
-		eax = (highest_cstate << MWAIT_SUBSTATE_SIZE) |
-		      (highest_subcstate - 1);
-	}
-
-	mwait_play_dead(eax);
 }

 /*
···
 	play_dead_common();
 	tboot_shutdown(TB_SHUTDOWN_WFS);

-	mwait_play_dead_cpuid_hint();
-	if (cpuidle_play_dead())
-		hlt_play_dead();
+	/* Below returns only on error. */
+	cpuidle_play_dead();
+	hlt_play_dead();
 }

 #else /* ... !CONFIG_HOTPLUG_CPU */
+8-1
arch/x86/kvm/mmu/mmu.c
···
 {
 	u64 error_code = PFERR_GUEST_FINAL_MASK;
 	u8 level = PG_LEVEL_4K;
+	u64 direct_bits;
 	u64 end;
 	int r;

 	if (!vcpu->kvm->arch.pre_fault_allowed)
 		return -EOPNOTSUPP;
+
+	if (kvm_is_gfn_alias(vcpu->kvm, gpa_to_gfn(range->gpa)))
+		return -EINVAL;

 	/*
 	 * reload is efficient when called repeatedly, so we can do it on
···
 	if (r)
 		return r;

+	direct_bits = 0;
 	if (kvm_arch_has_private_mem(vcpu->kvm) &&
 	    kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(range->gpa)))
 		error_code |= PFERR_PRIVATE_ACCESS;
+	else
+		direct_bits = gfn_to_gpa(kvm_gfn_direct_bits(vcpu->kvm));

 	/*
 	 * Shadow paging uses GVA for kvm page fault, so restrict to
 	 * two-dimensional paging.
 	 */
-	r = kvm_tdp_map_page(vcpu, range->gpa, error_code, &level);
+	r = kvm_tdp_map_page(vcpu, range->gpa | direct_bits, error_code, &level);
 	if (r < 0)
 		return r;
+35-9
arch/x86/kvm/svm/sev.c
···
 	}
 }

+static bool is_sev_snp_initialized(void)
+{
+	struct sev_user_data_snp_status *status;
+	struct sev_data_snp_addr buf;
+	bool initialized = false;
+	int ret, error = 0;
+
+	status = snp_alloc_firmware_page(GFP_KERNEL | __GFP_ZERO);
+	if (!status)
+		return false;
+
+	buf.address = __psp_pa(status);
+	ret = sev_do_cmd(SEV_CMD_SNP_PLATFORM_STATUS, &buf, &error);
+	if (ret) {
+		pr_err("SEV: SNP_PLATFORM_STATUS failed ret=%d, fw_error=%d (%#x)\n",
+		       ret, error, error);
+		goto out;
+	}
+
+	initialized = !!status->state;
+
+out:
+	snp_free_firmware_page(status);
+
+	return initialized;
+}
+
 void __init sev_hardware_setup(void)
 {
 	unsigned int eax, ebx, ecx, edx, sev_asid_count, sev_es_asid_count;
···
 	sev_snp_supported = sev_snp_enabled && cc_platform_has(CC_ATTR_HOST_SEV_SNP);

 out:
+	if (sev_enabled) {
+		init_args.probe = true;
+		if (sev_platform_init(&init_args))
+			sev_supported = sev_es_supported = sev_snp_supported = false;
+		else if (sev_snp_supported)
+			sev_snp_supported = is_sev_snp_initialized();
+	}
+
 	if (boot_cpu_has(X86_FEATURE_SEV))
 		pr_info("SEV %s (ASIDs %u - %u)\n",
 			sev_supported ? min_sev_asid <= max_sev_asid ? "enabled" :
···
 	sev_supported_vmsa_features = 0;
 	if (sev_es_debug_swap_enabled)
 		sev_supported_vmsa_features |= SVM_SEV_FEAT_DEBUG_SWAP;
-
-	if (!sev_enabled)
-		return;
-
-	/*
-	 * Do both SNP and SEV initialization at KVM module load.
-	 */
-	init_args.probe = true;
-	sev_platform_init(&init_args);
 }

 void sev_hardware_unsetup(void)
···

 int arch_resume_nosmt(void)
 {
-	int ret = 0;
+	int ret;
+
 	/*
 	 * We reached this while coming out of hibernation. This means
 	 * that SMT siblings are sleeping in hlt, as mwait is not safe
···
 	 * Called with hotplug disabled.
 	 */
 	cpu_hotplug_enable();
-	if (cpu_smt_control == CPU_SMT_DISABLED ||
-	    cpu_smt_control == CPU_SMT_FORCE_DISABLED) {
-		enum cpuhp_smt_control old = cpu_smt_control;

-		ret = cpuhp_smt_enable();
-		if (ret)
-			goto out;
-		ret = cpuhp_smt_disable(old);
-		if (ret)
-			goto out;
-	}
-out:
+	ret = arch_cpu_rescan_dead_smt_siblings();
+
 	cpu_hotplug_disable();
+
 	return ret;
 }
···
 	if (!plug || rq_list_empty(&plug->mq_list))
 		return false;

-	rq_list_for_each(&plug->mq_list, rq) {
-		if (rq->q == q) {
-			if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
-			    BIO_MERGE_OK)
-				return true;
-			break;
-		}
+	rq = plug->mq_list.tail;
+	if (rq->q == q)
+		return blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
+		       BIO_MERGE_OK;
+	else if (!plug->multiple_queues)
+		return false;

-		/*
-		 * Only keep iterating plug list for merges if we have multiple
-		 * queues
-		 */
-		if (!plug->multiple_queues)
-			break;
+	rq_list_for_each(&plug->mq_list, rq) {
+		if (rq->q != q)
+			continue;
+		if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
+		    BIO_MERGE_OK)
+			return true;
+		break;
 	}
 	return false;
 }
+6-2
block/blk-zoned.c
···
 	if (bio_flagged(bio, BIO_EMULATES_ZONE_APPEND)) {
 		bio->bi_opf &= ~REQ_OP_MASK;
 		bio->bi_opf |= REQ_OP_ZONE_APPEND;
+		bio_clear_flag(bio, BIO_EMULATES_ZONE_APPEND);
 	}

 	/*
···
 	spin_unlock_irqrestore(&zwplug->lock, flags);

 	bdev = bio->bi_bdev;
-	submit_bio_noacct_nocheck(bio);

 	/*
 	 * blk-mq devices will reuse the extra reference on the request queue
···
 	 * path for BIO-based devices will not do that. So drop this extra
 	 * reference here.
 	 */
-	if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO))
+	if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO)) {
+		bdev->bd_disk->fops->submit_bio(bio);
 		blk_queue_exit(bdev->bd_disk->queue);
+	} else {
+		blk_mq_submit_bio(bio);
+	}

put_zwplug:
 	/* Drop the reference we took in disk_zone_wplug_schedule_bio_work(). */
···
 static DEFINE_MUTEX(isolated_cpus_lock);
 static DEFINE_MUTEX(round_robin_lock);

-static unsigned long power_saving_mwait_eax;
+static unsigned int power_saving_mwait_eax;

 static unsigned char tsc_detected_unstable;
 static unsigned char tsc_marked_unstable;
+3-6
drivers/acpi/apei/einj-core.c
···
 	}

 	einj_dev = faux_device_create("acpi-einj", NULL, &einj_device_ops);
-	if (!einj_dev)
-		return -ENODEV;

-	einj_initialized = true;
+	if (einj_dev)
+		einj_initialized = true;

 	return 0;
 }

 static void __exit einj_exit(void)
 {
-	if (einj_initialized)
-		faux_device_destroy(einj_dev);
-
+	faux_device_destroy(einj_dev);
 }

 module_init(einj_init);
+1-1
drivers/acpi/cppc_acpi.c
···
 	struct cpc_desc *cpc_ptr;
 	int cpu;

-	for_each_possible_cpu(cpu) {
+	for_each_present_cpu(cpu) {
 		cpc_ptr = per_cpu(cpc_desc_ptr, cpu);
 		desired_reg = &cpc_ptr->cpc_regs[DESIRED_PERF];
 		if (!CPC_IN_SYSTEM_MEMORY(desired_reg) &&
+17
drivers/acpi/ec.c
···
 #include <linux/delay.h>
 #include <linux/interrupt.h>
 #include <linux/list.h>
+#include <linux/printk.h>
 #include <linux/spinlock.h>
 #include <linux/slab.h>
+#include <linux/string.h>
 #include <linux/suspend.h>
 #include <linux/acpi.h>
 #include <linux/dmi.h>
···
 		 * Asus X50GL:
 		 * https://bugzilla.kernel.org/show_bug.cgi?id=11880
 		 */
+		goto out;
+	}
+
+	if (!strstarts(ecdt_ptr->id, "\\")) {
+		/*
+		 * The ECDT table on some MSI notebooks contains invalid data, together
+		 * with an empty ID string ("").
+		 *
+		 * Section 5.2.15 of the ACPI specification requires the ID string to be
+		 * a "fully qualified reference to the (...) embedded controller device",
+		 * so this string always has to start with a backslash.
+		 *
+		 * By verifying this we can avoid such faulty ECDT tables in a safe way.
+		 */
+		pr_err(FW_BUG "Ignoring ECDT due to invalid ID string \"%s\"\n", ecdt_ptr->id);
 		goto out;
 	}
···
 	 * after acpi_cppc_processor_probe() has been called for all online CPUs
 	 */
 	acpi_processor_init_invariance_cppc();
+
+	acpi_idle_rescan_dead_smt_siblings();
+
 	return 0;
err:
 	driver_unregister(&acpi_processor_driver);
+8
drivers/acpi/processor_idle.c
···
 #include <acpi/processor.h>
 #include <linux/context_tracking.h>

+#include "internal.h"
+
 /*
  * Include the apic definitions for x86 to have the APIC timer related defines
  * available also for UP (on SMP it gets magically included via linux/smp.h).
···
 };

 #ifdef CONFIG_ACPI_PROCESSOR_CSTATE
+void acpi_idle_rescan_dead_smt_siblings(void)
+{
+	if (cpuidle_get_driver() == &acpi_idle_driver)
+		arch_cpu_rescan_dead_smt_siblings();
+}
+
 static
 DEFINE_PER_CPU(struct acpi_processor_cx * [CPUIDLE_STATE_MAX], acpi_cstate);
+7
drivers/acpi/resource.c
···
 		},
 	},
 	{
+		/* MACHENIKE L16P/L16P */
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "MACHENIKE"),
+			DMI_MATCH(DMI_BOARD_NAME, "L16P"),
+		},
+	},
+	{
 		/*
 		 * TongFang GM5HG0A in case of the SKIKK Vanaheim relabel the
 		 * board-name is changed, so check OEM strings instead. Note
+33-6
drivers/ata/ahci.c
···

 static bool ahci_broken_lpm(struct pci_dev *pdev)
 {
+	/*
+	 * Platforms with LPM problems.
+	 * If driver_data is NULL, there is no existing BIOS version with
+	 * functioning LPM.
+	 * If driver_data is non-NULL, then driver_data contains the DMI BIOS
+	 * build date of the first BIOS version with functioning LPM (i.e. older
+	 * BIOS versions have broken LPM).
+	 */
 	static const struct dmi_system_id sysids[] = {
-		/* Various Lenovo 50 series have LPM issues with older BIOSen */
 		{
 			.matches = {
 				DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
···
 				DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
 				DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W541"),
 			},
+			.driver_data = "20180409", /* 2.35 */
+		},
+		{
+			.matches = {
+				DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+				DMI_MATCH(DMI_PRODUCT_VERSION, "ASUSPRO D840MB_M840SA"),
+			},
+			/* 320 is broken, there is no known good version. */
+		},
+		{
 			/*
-			 * Note date based on release notes, 2.35 has been
-			 * reported to be good, but I've been unable to get
-			 * a hold of the reporter to get the DMI BIOS date.
-			 * TODO: fix this.
+			 * AMD 500 Series Chipset SATA Controller [1022:43eb]
+			 * on this motherboard timeouts on ports 5 and 6 when
+			 * LPM is enabled, at least with WDC WD20EFAX-68FB5N0
+			 * hard drives. LPM with the same drive works fine on
+			 * all other ports on the same controller.
 			 */
-			.driver_data = "20180310", /* 2.35 */
+			.matches = {
+				DMI_MATCH(DMI_BOARD_VENDOR,
+					  "ASUSTeK COMPUTER INC."),
+				DMI_MATCH(DMI_BOARD_NAME,
+					  "ROG STRIX B550-F GAMING (WI-FI)"),
+			},
+			/* 3621 is broken, there is no known good version. */
 		},
 		{ }	/* terminate list */
 	};
···

 	if (!dmi)
 		return false;
+
+	if (!dmi->driver_data)
+		return true;

 	dmi_get_date(DMI_BIOS_DATE, &year, &month, &date);
 	snprintf(buf, sizeof(buf), "%04d%02d%02d", year, month, date);
+16-8
drivers/ata/libata-acpi.c
···
 EXPORT_SYMBOL_GPL(ata_acpi_gtm_xfermask);

 /**
- * ata_acpi_cbl_80wire - Check for 80 wire cable
+ * ata_acpi_cbl_pata_type - Return PATA cable type
  * @ap: Port to check
- * @gtm: GTM data to use
  *
- * Return 1 if the @gtm indicates the BIOS selected an 80wire mode.
+ * Return ATA_CBL_PATA* according to the transfer mode selected by BIOS
  */
-int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm)
+int ata_acpi_cbl_pata_type(struct ata_port *ap)
 {
 	struct ata_device *dev;
+	int ret = ATA_CBL_PATA_UNK;
+	const struct ata_acpi_gtm *gtm = ata_acpi_init_gtm(ap);
+
+	if (!gtm)
+		return ATA_CBL_PATA40;

 	ata_for_each_dev(dev, &ap->link, ENABLED) {
 		unsigned int xfer_mask, udma_mask;
···
 		xfer_mask = ata_acpi_gtm_xfermask(dev, gtm);
 		ata_unpack_xfermask(xfer_mask, NULL, NULL, &udma_mask);

-		if (udma_mask & ~ATA_UDMA_MASK_40C)
-			return 1;
+		ret = ATA_CBL_PATA40;
+
+		if (udma_mask & ~ATA_UDMA_MASK_40C) {
+			ret = ATA_CBL_PATA80;
+			break;
+		}
 	}

-	return 0;
+	return ret;
 }
-EXPORT_SYMBOL_GPL(ata_acpi_cbl_80wire);
+EXPORT_SYMBOL_GPL(ata_acpi_cbl_pata_type);

 static void ata_acpi_gtf_to_tf(struct ata_device *dev,
 			       const struct ata_acpi_gtf *gtf,
···
 	priv->dev = &pdev->dev;

 	/* Get MMIO regions */
-	if (pci_request_regions(pdev, "pata-macio")) {
+	if (pcim_request_all_regions(pdev, "pata-macio")) {
 		dev_err(&pdev->dev,
 			"Cannot obtain PCI resources\n");
 		return -EBUSY;
+4-5
drivers/ata/pata_via.c
···
 	   two drives */
 	if (ata66 & (0x10100000 >> (16 * ap->port_no)))
 		return ATA_CBL_PATA80;
+
 	/* Check with ACPI so we can spot BIOS reported SATA bridges */
-	if (ata_acpi_init_gtm(ap) &&
-	    ata_acpi_cbl_80wire(ap, ata_acpi_init_gtm(ap)))
-		return ATA_CBL_PATA80;
-	return ATA_CBL_PATA40;
+	return ata_acpi_cbl_pata_type(ap);
 }

 static int via_pre_reset(struct ata_link *link, unsigned long deadline)
···
 	}

 	if (dev->class == ATA_DEV_ATAPI &&
-	    dmi_check_system(no_atapi_dma_dmi_table)) {
+	    (dmi_check_system(no_atapi_dma_dmi_table) ||
+	     config->id == PCI_DEVICE_ID_VIA_6415)) {
 		ata_dev_warn(dev, "controller locks up on ATAPI DMA, forcing PIO\n");
 		mask &= ATA_MASK_PIO;
 	}
+3-1
drivers/atm/atmtcp.c
···
 	struct sk_buff *new_skb;
 	int result = 0;

-	if (!skb->len) return 0;
+	if (skb->len < sizeof(struct atmtcp_hdr))
+		goto done;
+
 	dev = vcc->dev_data;
 	hdr = (struct atmtcp_hdr *) skb->data;
 	if (hdr->length == ATMTCP_HDR_MAGIC) {
+2-1
drivers/base/faux.c
···
 	.name = "faux_driver",
 	.bus = &faux_bus_type,
 	.probe_type = PROBE_FORCE_SYNCHRONOUS,
+	.suppress_bind_attrs = true,
 };

 static void faux_device_release(struct device *dev)
···
 	 * successful is almost impossible to determine by the caller.
 	 */
 	if (!dev->driver) {
-		dev_err(dev, "probe did not succeed, tearing down the device\n");
+		dev_dbg(dev, "probe did not succeed, tearing down the device\n");
 		faux_device_destroy(faux_dev);
 		faux_dev = NULL;
 	}
+5-6
drivers/block/loop.c
···
 	lo->lo_flags &= ~LOOP_SET_STATUS_CLEARABLE_FLAGS;
 	lo->lo_flags |= (info->lo_flags & LOOP_SET_STATUS_SETTABLE_FLAGS);

-	if (size_changed) {
-		loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit,
-					   lo->lo_backing_file);
-		loop_set_size(lo, new_size);
-	}
-
 	/* update the direct I/O flag if lo_offset changed */
 	loop_update_dio(lo);

···
 	blk_mq_unfreeze_queue(lo->lo_queue, memflags);
 	if (partscan)
 		clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state);
+	if (!err && size_changed) {
+		loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit,
+					   lo->lo_backing_file);
+		loop_set_size(lo, new_size);
+	}
out_unlock:
 	mutex_unlock(&lo->lo_mutex);
 	if (partscan)
+2-2
drivers/cpufreq/rcpufreq_dt.rs
···
 }

 /// Finds supply name for the CPU from DT.
-fn find_supply_names(dev: &Device, cpu: u32) -> Option<KVec<CString>> {
+fn find_supply_names(dev: &Device, cpu: cpu::CpuId) -> Option<KVec<CString>> {
     // Try "cpu0" for older DTs, fallback to "cpu".
-    let name = (cpu == 0)
+    let name = (cpu.as_u32() == 0)
         .then(|| find_supply_name_exact(dev, "cpu0"))
         .flatten()
         .or_else(|| find_supply_name_exact(dev, "cpu"))?;
+1-1
drivers/dma-buf/dma-buf.c
···
 		 * Catch exporters making buffers inaccessible even when
 		 * attachments preventing that exist.
 		 */
-		WARN_ON_ONCE(ret == EBUSY);
+		WARN_ON_ONCE(ret == -EBUSY);
 		if (ret)
 			return ERR_PTR(ret);
 	}
···
 #define HDMI_PLL_LOCK		BIT(31)
 #define HDMI_PLL_LOCK_G12A	(3 << 30)

-#define PIXEL_FREQ_1000_1001(_freq) \
-	DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL)
-#define PHY_FREQ_1000_1001(_freq) \
-	(PIXEL_FREQ_1000_1001(DIV_ROUND_DOWN_ULL(_freq, 10ULL)) * 10)
+#define FREQ_1000_1001(_freq)	DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL)

 /* VID PLL Dividers */
 enum {
···
 		  pll_freq);
 }

+static bool meson_vclk_freqs_are_matching_param(unsigned int idx,
+						unsigned long long phy_freq,
+						unsigned long long vclk_freq)
+{
+	DRM_DEBUG_DRIVER("i = %d vclk_freq = %lluHz alt = %lluHz\n",
+			 idx, params[idx].vclk_freq,
+			 FREQ_1000_1001(params[idx].vclk_freq));
+	DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n",
+			 idx, params[idx].phy_freq,
+			 FREQ_1000_1001(params[idx].phy_freq));
+
+	/* Match strict frequency */
+	if (phy_freq == params[idx].phy_freq &&
+	    vclk_freq == params[idx].vclk_freq)
+		return true;
+
+	/* Match 1000/1001 variant: vclk deviation has to be less than 1kHz
+	 * (drm EDID is defined in 1kHz steps, so everything smaller must be
+	 * rounding error) and the PHY freq deviation has to be less than
+	 * 10kHz (as the TMDS clock is 10 times the pixel clock, so anything
+	 * smaller must be rounding error as well).
+	 */
+	if (abs(vclk_freq - FREQ_1000_1001(params[idx].vclk_freq)) < 1000 &&
+	    abs(phy_freq - FREQ_1000_1001(params[idx].phy_freq)) < 10000)
+		return true;

+	/* no match */
+	return false;
+}
+
 enum drm_mode_status
 meson_vclk_vic_supported_freq(struct meson_drm *priv,
			      unsigned long long phy_freq,
···
 	}

 	for (i = 0 ; params[i].pixel_freq ; ++i) {
-		DRM_DEBUG_DRIVER("i = %d pixel_freq = %lluHz alt = %lluHz\n",
-				 i, params[i].pixel_freq,
-				 PIXEL_FREQ_1000_1001(params[i].pixel_freq));
-		DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n",
-				 i, params[i].phy_freq,
-				 PHY_FREQ_1000_1001(params[i].phy_freq));
-		/* Match strict frequency */
-		if (phy_freq == params[i].phy_freq &&
-		    vclk_freq == params[i].vclk_freq)
-			return MODE_OK;
-		/* Match 1000/1001 variant */
-		if (phy_freq == PHY_FREQ_1000_1001(params[i].phy_freq) &&
-		    vclk_freq == PIXEL_FREQ_1000_1001(params[i].vclk_freq))
+		if (meson_vclk_freqs_are_matching_param(i, phy_freq, vclk_freq))
 			return MODE_OK;
 	}

···
 	}

 	for (freq = 0 ; params[freq].pixel_freq ; ++freq) {
-		if ((phy_freq == params[freq].phy_freq ||
-		     phy_freq == PHY_FREQ_1000_1001(params[freq].phy_freq)) &&
-		    (vclk_freq == params[freq].vclk_freq ||
-		     vclk_freq == PIXEL_FREQ_1000_1001(params[freq].vclk_freq))) {
+		if (meson_vclk_freqs_are_matching_param(freq, phy_freq,
+							vclk_freq)) {
 			if (vclk_freq != params[freq].vclk_freq)
 				vic_alternate_clock = true;
 			else
+1
drivers/gpu/drm/sitronix/Kconfig
···
 	select DRM_GEM_SHMEM_HELPER
 	select DRM_KMS_HELPER
 	select REGMAP_I2C
+	select VIDEOMODE_HELPERS
 	help
 	  DRM driver for Sitronix ST7571 panels controlled over I2C.
+6-6
drivers/gpu/drm/vc4/vc4_hdmi.c
···
 	if (ret)
 		return ret;

-	ret = drm_connector_hdmi_audio_init(connector, dev->dev,
-					    &vc4_hdmi_audio_funcs,
-					    8, false, -1);
-	if (ret)
-		return ret;
-
 	drm_connector_helper_add(connector, &vc4_hdmi_connector_helper_funcs);

 	/*
···
 		dev_err(dev, "Could not register CPU DAI: %d\n", ret);
 		return ret;
 	}
+
+	ret = drm_connector_hdmi_audio_init(&vc4_hdmi->connector, dev,
+					    &vc4_hdmi_audio_funcs, 8, false,
+					    -1);
+	if (ret)
+		return ret;

 	dai_link->cpus = &vc4_hdmi->audio.cpu;
 	dai_link->codecs = &vc4_hdmi->audio.codec;
···
 		return false;
 	}

-	if (range_size <= SZ_64K && !supports_4K_migration(vm->xe)) {
+	if (range_size < SZ_64K && !supports_4K_migration(vm->xe)) {
 		drm_dbg(&vm->xe->drm, "Platform doesn't support SZ_4K range migration\n");
 		return false;
 	}
+6-3
drivers/hwmon/ftsteutates.c
···
 		break;
 	case hwmon_pwm:
 		switch (attr) {
-		case hwmon_pwm_auto_channels_temp:
-			if (data->fan_source[channel] == FTS_FAN_SOURCE_INVALID)
+		case hwmon_pwm_auto_channels_temp: {
+			u8 fan_source = data->fan_source[channel];
+
+			if (fan_source == FTS_FAN_SOURCE_INVALID || fan_source >= BITS_PER_LONG)
 				*val = 0;
 			else
-				*val = BIT(data->fan_source[channel]);
+				*val = BIT(fan_source);

 			return 0;
+		}
 		default:
 			break;
 		}
-7
drivers/hwmon/ltc4282.c
···
 	}

 	if (device_property_read_bool(dev, "adi,fault-log-enable")) {
-		ret = regmap_set_bits(st->map, LTC4282_ADC_CTRL,
-				      LTC4282_FAULT_LOG_EN_MASK);
-		if (ret)
-			return ret;
-	}
-
-	if (device_property_read_bool(dev, "adi,fault-log-enable")) {
 		ret = regmap_set_bits(st->map, LTC4282_ADC_CTRL, LTC4282_FAULT_LOG_EN_MASK);
 		if (ret)
 			return ret;
···
 			     int index, bool irqoff)
 {
 	struct cpuidle_state *state = &drv->states[index];
-	unsigned long eax = flg2MWAIT(state->flags);
-	unsigned long ecx = 1*irqoff; /* break on interrupt flag */
+	unsigned int eax = flg2MWAIT(state->flags);
+	unsigned int ecx = 1*irqoff; /* break on interrupt flag */

 	mwait_idle_with_hints(eax, ecx);
···
 static __cpuidle int intel_idle_s2idle(struct cpuidle_device *dev,
				       struct cpuidle_driver *drv, int index)
 {
-	unsigned long ecx = 1; /* break on interrupt flag */
 	struct cpuidle_state *state = &drv->states[index];
-	unsigned long eax = flg2MWAIT(state->flags);
+	unsigned int eax = flg2MWAIT(state->flags);
+	unsigned int ecx = 1; /* break on interrupt flag */

 	if (state->flags & CPUIDLE_FLAG_INIT_XSTATE)
 		fpu_idle_fpregs();
···
 	pr_debug("Local APIC timer is reliable in %s\n",
		 boot_cpu_has(X86_FEATURE_ARAT) ? "all C-states" : "C1");

+	arch_cpu_rescan_dead_smt_siblings();
+
 	return 0;

hp_setup_fail:
···
 	return retval;

 }
-device_initcall(intel_idle_init);
+subsys_initcall_sync(intel_idle_init);

 /*
  * We are not really modular, but we used to support that. Meaning we also
+2-2
drivers/iommu/tegra-smmu.c
···
 {
 	unsigned int pd_index = iova_pd_index(iova);
 	struct tegra_smmu *smmu = as->smmu;
-	struct tegra_pd *pd = as->pd;
+	u32 *pd = &as->pd->val[pd_index];
 	unsigned long offset = pd_index * sizeof(*pd);

 	/* Set the page directory entry first */
-	pd->val[pd_index] = value;
+	*pd = value;

 	/* The flush the page directory entry from caches */
 	dma_sync_single_range_for_device(smmu->dev, as->pd_dma, offset,
+5-4
drivers/net/can/m_can/tcan4x5x-core.c
···
 	priv = cdev_to_priv(mcan_class);

 	priv->power = devm_regulator_get_optional(&spi->dev, "vsup");
-	if (PTR_ERR(priv->power) == -EPROBE_DEFER) {
-		ret = -EPROBE_DEFER;
-		goto out_m_can_class_free_dev;
-	} else {
+	if (IS_ERR(priv->power)) {
+		if (PTR_ERR(priv->power) == -EPROBE_DEFER) {
+			ret = -EPROBE_DEFER;
+			goto out_m_can_class_free_dev;
+		}
 		priv->power = NULL;
 	}
+16-11
drivers/net/ethernet/airoha/airoha_eth.c
···

 static int airoha_qdma_init_hfwd_queues(struct airoha_qdma *qdma)
 {
+	int size, index, num_desc = HW_DSCP_NUM;
 	struct airoha_eth *eth = qdma->eth;
 	int id = qdma - &eth->qdma[0];
+	u32 status, buf_size;
 	dma_addr_t dma_addr;
 	const char *name;
-	int size, index;
-	u32 status;
-
-	size = HW_DSCP_NUM * sizeof(struct airoha_qdma_fwd_desc);
-	if (!dmam_alloc_coherent(eth->dev, size, &dma_addr, GFP_KERNEL))
-		return -ENOMEM;
-
-	airoha_qdma_wr(qdma, REG_FWD_DSCP_BASE, dma_addr);

 	name = devm_kasprintf(eth->dev, GFP_KERNEL, "qdma%d-buf", id);
 	if (!name)
 		return -ENOMEM;

+	buf_size = id ? AIROHA_MAX_PACKET_SIZE / 2 : AIROHA_MAX_PACKET_SIZE;
 	index = of_property_match_string(eth->dev->of_node,
					 "memory-region-names", name);
 	if (index >= 0) {
···
 		rmem = of_reserved_mem_lookup(np);
 		of_node_put(np);
 		dma_addr = rmem->base;
+		/* Compute the number of hw descriptors according to the
+		 * reserved memory size and the payload buffer size
+		 */
+		num_desc = div_u64(rmem->size, buf_size);
 	} else {
-		size = AIROHA_MAX_PACKET_SIZE * HW_DSCP_NUM;
+		size = buf_size * num_desc;
 		if (!dmam_alloc_coherent(eth->dev, size, &dma_addr,
					 GFP_KERNEL))
 			return -ENOMEM;
···

 	airoha_qdma_wr(qdma, REG_FWD_BUF_BASE, dma_addr);

+	size = num_desc * sizeof(struct airoha_qdma_fwd_desc);
+	if (!dmam_alloc_coherent(eth->dev, size, &dma_addr, GFP_KERNEL))
+		return -ENOMEM;
+
+	airoha_qdma_wr(qdma, REG_FWD_DSCP_BASE, dma_addr);
+
+	/* QDMA0: 2KB. QDMA1: 1KB */
 	airoha_qdma_rmw(qdma, REG_HW_FWD_DSCP_CFG,
			HW_FWD_DSCP_PAYLOAD_SIZE_MASK,
-			FIELD_PREP(HW_FWD_DSCP_PAYLOAD_SIZE_MASK, 0));
+			FIELD_PREP(HW_FWD_DSCP_PAYLOAD_SIZE_MASK, !!id));
 	airoha_qdma_rmw(qdma, REG_FWD_DSCP_LOW_THR, FWD_DSCP_LOW_THR_MASK,
			FIELD_PREP(FWD_DSCP_LOW_THR_MASK, 128));
 	airoha_qdma_rmw(qdma, REG_LMGR_INIT_CFG,
			LMGR_INIT_START | LMGR_SRAM_MODE_MASK |
			HW_FWD_DESC_NUM_MASK,
-			FIELD_PREP(HW_FWD_DESC_NUM_MASK, HW_DSCP_NUM) |
+			FIELD_PREP(HW_FWD_DESC_NUM_MASK, num_desc) |
			LMGR_INIT_START | LMGR_SRAM_MODE_MASK);

 	return read_poll_timeout(airoha_qdma_rr, status,
+3-1
drivers/net/ethernet/airoha/airoha_ppe.c
···
 		int idle;

 		hwe = airoha_ppe_foe_get_entry(ppe, iter->hash);
-		ib1 = READ_ONCE(hwe->ib1);
+		if (!hwe)
+			continue;

+		ib1 = READ_ONCE(hwe->ib1);
 		state = FIELD_GET(AIROHA_FOE_IB1_BIND_STATE, ib1);
 		if (state != AIROHA_FOE_STATE_BIND) {
 			iter->hash = 0xffff;
+74-13
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
 	bp->num_rss_ctx--;
 }

+static bool bnxt_vnic_has_rx_ring(struct bnxt *bp, struct bnxt_vnic_info *vnic,
+				  int rxr_id)
+{
+	u16 tbl_size = bnxt_get_rxfh_indir_size(bp->dev);
+	int i, vnic_rx;
+
+	/* Ntuple VNIC always has all the rx rings. Any change of ring id
+	 * must be updated because a future filter may use it.
+	 */
+	if (vnic->flags & BNXT_VNIC_NTUPLE_FLAG)
+		return true;
+
+	for (i = 0; i < tbl_size; i++) {
+		if (vnic->flags & BNXT_VNIC_RSSCTX_FLAG)
+			vnic_rx = ethtool_rxfh_context_indir(vnic->rss_ctx)[i];
+		else
+			vnic_rx = bp->rss_indir_tbl[i];
+
+		if (rxr_id == vnic_rx)
+			return true;
+	}
+
+	return false;
+}
+
+static int bnxt_set_vnic_mru_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic,
+				u16 mru, int rxr_id)
+{
+	int rc;
+
+	if (!bnxt_vnic_has_rx_ring(bp, vnic, rxr_id))
+		return 0;
+
+	if (mru) {
+		rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
+		if (rc) {
+			netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n",
+				   vnic->vnic_id, rc);
+			return rc;
+		}
+	}
+	vnic->mru = mru;
+	bnxt_hwrm_vnic_update(bp, vnic,
+			      VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
+
+	return 0;
+}
+
+static int bnxt_set_rss_ctx_vnic_mru(struct bnxt *bp, u16 mru, int rxr_id)
+{
+	struct ethtool_rxfh_context *ctx;
+	unsigned long context;
+	int rc;
+
+	xa_for_each(&bp->dev->ethtool->rss_ctx, context, ctx) {
+		struct bnxt_rss_ctx *rss_ctx = ethtool_rxfh_context_priv(ctx);
+		struct bnxt_vnic_info *vnic = &rss_ctx->vnic;
+
+		rc = bnxt_set_vnic_mru_p5(bp, vnic, mru, rxr_id);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+
 static void bnxt_hwrm_realloc_rss_ctx_vnic(struct bnxt *bp)
 {
 	bool set_tpa = !!(bp->flags & BNXT_FLAG_TPA);
···
 	struct bnxt_vnic_info *vnic;
 	struct bnxt_napi *bnapi;
 	int i, rc;
+	u16 mru;

 	rxr = &bp->rx_ring[idx];
 	clone = qmem;
···
 	napi_enable_locked(&bnapi->napi);
 	bnxt_db_nq_arm(bp, &cpr->cp_db, cpr->cp_raw_cons);

+	mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
 	for (i = 0; i < bp->nr_vnics; i++) {
 		vnic = &bp->vnic_info[i];

-		rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
-		if (rc) {
-			netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n",
-				   vnic->vnic_id, rc);
+		rc = bnxt_set_vnic_mru_p5(bp, vnic, mru, idx);
+		if (rc)
 			return rc;
-		}
-		vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
-		bnxt_hwrm_vnic_update(bp, vnic,
-				      VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
 	}
-
-	return 0;
+	return bnxt_set_rss_ctx_vnic_mru(bp, mru, idx);

err_reset:
 	netdev_err(bp->dev, "Unexpected HWRM error during queue start rc: %d\n",
···

 	for (i = 0; i < bp->nr_vnics; i++) {
 		vnic = &bp->vnic_info[i];
-		vnic->mru = 0;
-		bnxt_hwrm_vnic_update(bp, vnic,
-				      VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
+
+		bnxt_set_vnic_mru_p5(bp, vnic, 0, idx);
 	}
+	bnxt_set_rss_ctx_vnic_mru(bp, 0, idx);
 	/* Make sure NAPI sees that the VNIC is disabled */
 	synchronize_net();
 	rxr = &bp->rx_ring[idx];
+10-14
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
···
 		return;

 	mutex_lock(&edev->en_dev_lock);
-	if (!bnxt_ulp_registered(edev)) {
-		mutex_unlock(&edev->en_dev_lock);
-		return;
-	}
+	if (!bnxt_ulp_registered(edev) ||
+	    (edev->flags & BNXT_EN_FLAG_ULP_STOPPED))
+		goto ulp_stop_exit;

 	edev->flags |= BNXT_EN_FLAG_ULP_STOPPED;
 	if (aux_priv) {
···
 			adrv->suspend(adev, pm);
 		}
 	}
+ulp_stop_exit:
 	mutex_unlock(&edev->en_dev_lock);
 }

···
 	struct bnxt_aux_priv *aux_priv = bp->aux_priv;
 	struct bnxt_en_dev *edev = bp->edev;

-	if (!edev)
-		return;
-
-	edev->flags &= ~BNXT_EN_FLAG_ULP_STOPPED;
-
-	if (err)
+	if (!edev || err)
 		return;

 	mutex_lock(&edev->en_dev_lock);
-	if (!bnxt_ulp_registered(edev)) {
-		mutex_unlock(&edev->en_dev_lock);
-		return;
-	}
+	if (!bnxt_ulp_registered(edev) ||
+	    !(edev->flags & BNXT_EN_FLAG_ULP_STOPPED))
+		goto ulp_start_exit;

 	if (edev->ulp_tbl->msix_requested)
 		bnxt_fill_msix_vecs(bp, edev->msix_entries);
···
 			adrv->resume(adev);
 		}
 	}
+ulp_start_exit:
+	edev->flags &= ~BNXT_EN_FLAG_ULP_STOPPED;
 	mutex_unlock(&edev->en_dev_lock);
 }
+1
drivers/net/ethernet/faraday/Kconfig
···
 	depends on ARM || COMPILE_TEST
 	depends on !64BIT || BROKEN
 	select PHYLIB
+	select FIXED_PHY
 	select MDIO_ASPEED if MACH_ASPEED_G6
 	select CRC32
 	help
+11-3
drivers/net/ethernet/intel/e1000e/netdev.c
···
 	case e1000_pch_cnp:
 	case e1000_pch_tgp:
 	case e1000_pch_adp:
-	case e1000_pch_mtp:
-	case e1000_pch_lnp:
-	case e1000_pch_ptp:
 	case e1000_pch_nvp:
 		if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) {
 			/* Stable 24MHz frequency */
···
 			shift = INCVALUE_SHIFT_38400KHZ;
 			adapter->cc.shift = shift;
 		}
+		break;
+	case e1000_pch_mtp:
+	case e1000_pch_lnp:
+	case e1000_pch_ptp:
+		/* System firmware can misreport this value, so set it to a
+		 * stable 38400KHz frequency.
+		 */
+		incperiod = INCPERIOD_38400KHZ;
+		incvalue = INCVALUE_38400KHZ;
+		shift = INCVALUE_SHIFT_38400KHZ;
+		adapter->cc.shift = shift;
 		break;
 	case e1000_82574:
 	case e1000_82583:
+5-3
drivers/net/ethernet/intel/e1000e/ptp.c
···
 	case e1000_pch_cnp:
 	case e1000_pch_tgp:
 	case e1000_pch_adp:
-	case e1000_pch_mtp:
-	case e1000_pch_lnp:
-	case e1000_pch_ptp:
 	case e1000_pch_nvp:
 		if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI)
 			adapter->ptp_clock_info.max_adj = MAX_PPB_24MHZ;
 		else
 			adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ;
+		break;
+	case e1000_pch_mtp:
+	case e1000_pch_lnp:
+	case e1000_pch_ptp:
+		adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ;
 		break;
 	case e1000_82574:
 	case e1000_82583:
+48
drivers/net/ethernet/intel/ice/ice_arfs.c
···
 }

 /**
+ * ice_arfs_cmp - Check if aRFS filter matches this flow.
+ * @fltr_info: filter info of the saved ARFS entry.
+ * @fk: flow dissector keys.
+ * @n_proto: One of htons(ETH_P_IP) or htons(ETH_P_IPV6).
+ * @ip_proto: One of IPPROTO_TCP or IPPROTO_UDP.
+ *
+ * Since this function assumes limited values for n_proto and ip_proto, it
+ * is meant to be called only from ice_rx_flow_steer().
+ *
+ * Return:
+ * * true - fltr_info refers to the same flow as fk.
+ * * false - fltr_info and fk refer to different flows.
+ */
+static bool
+ice_arfs_cmp(const struct ice_fdir_fltr *fltr_info, const struct flow_keys *fk,
+	     __be16 n_proto, u8 ip_proto)
+{
+	/* Determine if the filter is for IPv4 or IPv6 based on flow_type,
+	 * which is one of ICE_FLTR_PTYPE_NONF_IPV{4,6}_{TCP,UDP}.
+	 */
+	bool is_v4 = fltr_info->flow_type == ICE_FLTR_PTYPE_NONF_IPV4_TCP ||
+		     fltr_info->flow_type == ICE_FLTR_PTYPE_NONF_IPV4_UDP;
+
+	/* Following checks are arranged in the quickest and most discriminative
+	 * fields first for early failure.
+	 */
+	if (is_v4)
+		return n_proto == htons(ETH_P_IP) &&
+			fltr_info->ip.v4.src_port == fk->ports.src &&
+			fltr_info->ip.v4.dst_port == fk->ports.dst &&
+			fltr_info->ip.v4.src_ip == fk->addrs.v4addrs.src &&
+			fltr_info->ip.v4.dst_ip == fk->addrs.v4addrs.dst &&
+			fltr_info->ip.v4.proto == ip_proto;
+
+	return fltr_info->ip.v6.src_port == fk->ports.src &&
+		fltr_info->ip.v6.dst_port == fk->ports.dst &&
+		fltr_info->ip.v6.proto == ip_proto &&
+		!memcmp(&fltr_info->ip.v6.src_ip, &fk->addrs.v6addrs.src,
+			sizeof(struct in6_addr)) &&
+		!memcmp(&fltr_info->ip.v6.dst_ip, &fk->addrs.v6addrs.dst,
+			sizeof(struct in6_addr));
+}
+
+/**
  * ice_rx_flow_steer - steer the Rx flow to where application is being run
  * @netdev: ptr to the netdev being adjusted
  * @skb: buffer with required header information
···
 			continue;

 		fltr_info = &arfs_entry->fltr_info;
+
+		if (!ice_arfs_cmp(fltr_info, &fk, n_proto, ip_proto))
+			continue;
+
 		ret = fltr_info->fltr_id;

 		if (fltr_info->q_index == rxq_idx ||
···
         unsigned long start_time;
         unsigned long max_wait;
         unsigned long duration;
-        int done = 0;
         bool fw_up;
         int opcode;
+        bool done;
         int err;
 
         /* Wait for dev cmd to complete, retrying if we get EAGAIN,
···
          */
         max_wait = jiffies + (max_seconds * HZ);
try_again:
+        done = false;
         opcode = idev->opcode;
         start_time = jiffies;
         for (fw_up = ionic_is_fw_running(idev);
···
          * We need to do some backwards compatibility to make this work.
          */
         if (le32_to_cpu(targ_info->byte_count) != sizeof(*targ_info)) {
-                WARN_ON(1);
+                ath6kl_err("mismatched byte count %d vs. expected %zd\n",
+                           le32_to_cpu(targ_info->byte_count),
+                           sizeof(*targ_info));
                 return -EINVAL;
         }
 
+13-6
drivers/net/wireless/ath/carl9170/usb.c
···
 
         if (atomic_read(&ar->rx_anch_urbs) == 0) {
                 /*
-                 * The system is too slow to cope with
-                 * the enormous workload. We have simply
-                 * run out of active rx urbs and this
-                 * unfortunately leads to an unpredictable
-                 * device.
+                 * At this point, either the system is too slow to
+                 * cope with the enormous workload (so we have simply
+                 * run out of active rx urbs and this unfortunately
+                 * leads to an unpredictable device), or the device
+                 * is not fully functional after an unsuccessful
+                 * firmware loading attempts (so it doesn't pass
+                 * ieee80211_register_hw() and there is no internal
+                 * workqueue at all).
                  */
 
-                ieee80211_queue_work(ar->hw, &ar->ping_work);
+                if (ar->registered)
+                        ieee80211_queue_work(ar->hw, &ar->ping_work);
+                else
+                        pr_warn_once("device %s is not registered\n",
+                                     dev_name(&ar->udev->dev));
         }
 } else {
         /*
···
         pdu->result = le64_to_cpu(nvme_req(req)->result.u64);
 
         /*
-         * For iopoll, complete it directly. Note that using the uring_cmd
-         * helper for this is safe only because we check blk_rq_is_poll().
-         * As that returns false if we're NOT on a polled queue, then it's
-         * safe to use the polled completion helper.
-         *
-         * Otherwise, move the completion to task work.
+         * IOPOLL could potentially complete this request directly, but
+         * if multiple rings are polling on the same queue, then it's possible
+         * for one ring to find completions for another ring. Punting the
+         * completion via task_work will always direct it to the right
+         * location, rather than potentially complete requests for ringA
+         * under iopoll invocations from ringB.
          */
-        if (blk_rq_is_poll(req)) {
-                if (pdu->bio)
-                        blk_rq_unmap_user(pdu->bio);
-                io_uring_cmd_iopoll_done(ioucmd, pdu->result, pdu->status);
-        } else {
-                io_uring_cmd_do_in_task_lazy(ioucmd, nvme_uring_task_cb);
-        }
-
+        io_uring_cmd_do_in_task_lazy(ioucmd, nvme_uring_task_cb);
         return RQ_END_IO_FREE;
 }
+6-8
drivers/platform/x86/amd/hsmp/hsmp.c
···
         short_sleep = jiffies + msecs_to_jiffies(HSMP_SHORT_SLEEP);
         timeout = jiffies + msecs_to_jiffies(HSMP_MSG_TIMEOUT);
 
-        while (time_before(jiffies, timeout)) {
+        while (true) {
                 ret = sock->amd_hsmp_rdwr(sock, mbinfo->msg_resp_off, &mbox_status, HSMP_RD);
                 if (ret) {
                         dev_err(sock->dev, "Error %d reading mailbox status\n", ret);
···
 
                 if (mbox_status != HSMP_STATUS_NOT_READY)
                         break;
+
+                if (!time_before(jiffies, timeout))
+                        break;
+
                 if (time_before(jiffies, short_sleep))
                         usleep_range(50, 100);
                 else
···
                 return -ENODEV;
         sock = &hsmp_pdev.sock[msg->sock_ind];
 
-        /*
-         * The time taken by smu operation to complete is between
-         * 10us to 1ms. Sometime it may take more time.
-         * In SMP system timeout of 100 millisecs should
-         * be enough for the previous thread to finish the operation
-         */
-        ret = down_timeout(&sock->hsmp_sem, msecs_to_jiffies(HSMP_MSG_TIMEOUT));
+        ret = down_interruptible(&sock->hsmp_sem);
         if (ret < 0)
                 return ret;
+9
drivers/platform/x86/amd/pmc/pmc-quirks.c
···
                         DMI_MATCH(DMI_BOARD_NAME, "WUJIE14-GX4HRXL"),
                 }
         },
+        /* https://bugzilla.kernel.org/show_bug.cgi?id=220116 */
+        {
+                .ident = "PCSpecialist Lafite Pro V 14M",
+                .driver_data = &quirk_spurious_8042,
+                .matches = {
+                        DMI_MATCH(DMI_SYS_VENDOR, "PCSpecialist"),
+                        DMI_MATCH(DMI_PRODUCT_NAME, "Lafite Pro V 14M"),
+                }
+        },
         {}
 };
 
···
 MODULE_AUTHOR("Abhay Salunke <abhay_salunke@dell.com>");
 MODULE_DESCRIPTION("Driver for updating BIOS image on DELL systems");
 MODULE_LICENSE("GPL");
-MODULE_VERSION("3.2");
+MODULE_VERSION("3.3");
 
 #define BIOS_SCAN_LIMIT 0xffffffff
 #define MAX_IMAGE_LENGTH 16
···
         rbu_data.imagesize = 0;
 }
 
-static int create_packet(void *data, size_t length)
+static int create_packet(void *data, size_t length) __must_hold(&rbu_data.lock)
 {
         struct packet_data *newpacket;
         int ordernum = 0;
···
         remaining_bytes = *pread_length;
         bytes_read = rbu_data.packet_read_count;
 
-        list_for_each_entry(newpacket, (&packet_data_head.list)->next, list) {
+        list_for_each_entry(newpacket, &packet_data_head.list, list) {
                 bytes_copied = do_packet_read(pdest, newpacket,
                                               remaining_bytes, bytes_read, &temp_count);
                 remaining_bytes -= bytes_copied;
···
 {
         struct packet_data *newpacket, *tmp;
 
-        list_for_each_entry_safe(newpacket, tmp, (&packet_data_head.list)->next, list) {
+        list_for_each_entry_safe(newpacket, tmp, &packet_data_head.list, list) {
                 list_del(&newpacket->list);
 
                 /*
                  * zero out the RBU packet memory before freeing
                  * to make sure there are no stale RBU packets left in memory
                  */
-                memset(newpacket->data, 0, rbu_data.packetsize);
+                memset(newpacket->data, 0, newpacket->length);
                 set_memory_wb((unsigned long)newpacket->data,
                               1 << newpacket->ordernum);
                 free_pages((unsigned long) newpacket->data,
+17-2
drivers/platform/x86/ideapad-laptop.c
···
 #include <linux/bug.h>
 #include <linux/cleanup.h>
 #include <linux/debugfs.h>
+#include <linux/delay.h>
 #include <linux/device.h>
 #include <linux/dmi.h>
 #include <linux/i8042.h>
···
  */
 #define IDEAPAD_EC_TIMEOUT 200 /* in ms */
 
+/*
+ * Some models (e.g., ThinkBook since 2024) have a low tolerance for being
+ * polled too frequently. Doing so may break the state machine in the EC,
+ * resulting in a hard shutdown.
+ *
+ * It is also observed that frequent polls may disturb the ongoing operation
+ * and notably delay the availability of EC response.
+ *
+ * These values are used as the delay before the first poll and the interval
+ * between subsequent polls to solve the above issues.
+ */
+#define IDEAPAD_EC_POLL_MIN_US 150
+#define IDEAPAD_EC_POLL_MAX_US 300
+
 static int eval_int(acpi_handle handle, const char *name, unsigned long *res)
 {
         unsigned long long result;
···
         end_jiffies = jiffies + msecs_to_jiffies(IDEAPAD_EC_TIMEOUT) + 1;
 
         while (time_before(jiffies, end_jiffies)) {
-                schedule();
+                usleep_range(IDEAPAD_EC_POLL_MIN_US, IDEAPAD_EC_POLL_MAX_US);
 
                 err = eval_vpcr(handle, 1, &val);
                 if (err)
···
         end_jiffies = jiffies + msecs_to_jiffies(IDEAPAD_EC_TIMEOUT) + 1;
 
         while (time_before(jiffies, end_jiffies)) {
-                schedule();
+                usleep_range(IDEAPAD_EC_POLL_MIN_US, IDEAPAD_EC_POLL_MAX_US);
 
                 err = eval_vpcr(handle, 1, &val);
                 if (err)
···
 
         /* Get the package ID from the TPMI core */
         plat_info = tpmi_get_platform_data(auxdev);
-        if (plat_info)
-                pkg = plat_info->package_id;
-        else
+        if (unlikely(!plat_info)) {
                 dev_info(&auxdev->dev, "Platform information is NULL\n");
+                ret = -ENODEV;
+                goto err_rem_common;
+        }
+
+        pkg = plat_info->package_id;
 
         for (i = 0; i < num_resources; ++i) {
                 struct tpmi_uncore_power_domain_info *pd_info;
···
         struct ptp_clock_info *ops;
         int err = -EOPNOTSUPP;
 
-        if (ptp_clock_freerun(ptp)) {
+        if (tx->modes & (ADJ_SETOFFSET | ADJ_FREQUENCY | ADJ_OFFSET) &&
+            ptp_clock_freerun(ptp)) {
                 pr_err("ptp: physical clock is free running\n");
                 return -EBUSY;
         }
+21-1
drivers/ptp/ptp_private.h
···
 /* Check if ptp virtual clock is in use */
 static inline bool ptp_vclock_in_use(struct ptp_clock *ptp)
 {
-        return !ptp->is_virtual_clock;
+        bool in_use = false;
+
+        /* Virtual clocks can't be stacked on top of virtual clocks.
+         * Avoid acquiring the n_vclocks_mux on virtual clocks, to allow this
+         * function to be called from code paths where the n_vclocks_mux of the
+         * parent physical clock is already held. Functionally that's not an
+         * issue, but lockdep would complain, because they have the same lock
+         * class.
+         */
+        if (ptp->is_virtual_clock)
+                return false;
+
+        if (mutex_lock_interruptible(&ptp->n_vclocks_mux))
+                return true;
+
+        if (ptp->n_vclocks)
+                in_use = true;
+
+        mutex_unlock(&ptp->n_vclocks_mux);
+
+        return in_use;
 }
 
 /* Check if ptp clock shall be free running */
+3
drivers/rapidio/rio_cm.c
···
         if (buf == NULL || ch_id == 0 || len == 0 || len > RIO_MAX_MSG_SIZE)
                 return -EINVAL;
 
+        if (len < sizeof(struct rio_ch_chan_hdr))
+                return -EINVAL; /* insufficient data from user */
+
         ch = riocm_get_channel(ch_id);
         if (!ch) {
                 riocm_error("%s(%d) ch_%d not found", current->comm,
+3-3
drivers/regulator/max20086-regulator.c
···
 // Copyright (C) 2022 Laurent Pinchart <laurent.pinchart@idesonboard.com>
 // Copyright (C) 2018 Avnet, Inc.
 
+#include <linux/cleanup.h>
 #include <linux/err.h>
 #include <linux/gpio/consumer.h>
 #include <linux/i2c.h>
···
 static int max20086_parse_regulators_dt(struct max20086 *chip, bool *boot_on)
 {
         struct of_regulator_match *matches;
-        struct device_node *node;
         unsigned int i;
         int ret;
 
-        node = of_get_child_by_name(chip->dev->of_node, "regulators");
+        struct device_node *node __free(device_node) =
+                of_get_child_by_name(chip->dev->of_node, "regulators");
         if (!node) {
                 dev_err(chip->dev, "regulators node not found\n");
                 return -ENODEV;
···
 
         ret = of_regulator_match(chip->dev, node, matches,
                                  chip->info->num_outputs);
-        of_node_put(node);
         if (ret < 0) {
                 dev_err(chip->dev, "Failed to match regulators\n");
                 return -EINVAL;
+2
drivers/s390/scsi/zfcp_sysfs.c
···
         if (kstrtoull(buf, 0, (unsigned long long *) &fcp_lun))
                 return -EINVAL;
 
+        flush_work(&port->rport_work);
+
         retval = zfcp_unit_add(port, fcp_lun);
         if (retval)
                 return retval;
···
          * if the device is in the process of becoming ready, we
          * should retry.
          */
-        if ((sshdr.asc == 0x04) && (sshdr.ascq == 0x01))
+        if ((sshdr.asc == 0x04) &&
+            (sshdr.ascq == 0x01 || sshdr.ascq == 0x0a))
                 return NEEDS_RETRY;
         /*
          * if the device is not started, we need to wake
+5-6
drivers/scsi/scsi_transport_iscsi.c
···
                 pr_err("%s could not find host no %u\n",
                        __func__, ev->u.new_flashnode.host_no);
                 err = -ENODEV;
-                goto put_host;
+                goto exit_new_fnode;
         }
 
         index = transport->new_flashnode(shost, data, len);
···
         else
                 err = -EIO;
 
-put_host:
         scsi_host_put(shost);
 
 exit_new_fnode:
···
                 pr_err("%s could not find host no %u\n",
                        __func__, ev->u.del_flashnode.host_no);
                 err = -ENODEV;
-                goto put_host;
+                goto exit_del_fnode;
         }
 
         idx = ev->u.del_flashnode.flashnode_idx;
···
                 pr_err("%s could not find host no %u\n",
                        __func__, ev->u.login_flashnode.host_no);
                 err = -ENODEV;
-                goto put_host;
+                goto exit_login_fnode;
         }
 
         idx = ev->u.login_flashnode.flashnode_idx;
···
                 pr_err("%s could not find host no %u\n",
                        __func__, ev->u.logout_flashnode.host_no);
                 err = -ENODEV;
-                goto put_host;
+                goto exit_logout_fnode;
         }
 
         idx = ev->u.logout_flashnode.flashnode_idx;
···
                 pr_err("%s could not find host no %u\n",
                        __func__, ev->u.logout_flashnode.host_no);
                 err = -ENODEV;
-                goto put_host;
+                goto exit_logout_sid;
         }
 
         session = iscsi_session_lookup(ev->u.logout_flashnode_sid.sid);
+6-4
drivers/scsi/storvsc_drv.c
···
 /*
  * Timeout in seconds for all devices managed by this driver.
  */
-static int storvsc_timeout = 180;
+static const int storvsc_timeout = 180;
 
 #if IS_ENABLED(CONFIG_SCSI_FC_ATTRS)
 static struct scsi_transport_template *fc_transport_template;
···
                 return;
         }
 
-        t = wait_for_completion_timeout(&request->wait_event, 10*HZ);
+        t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ);
         if (t == 0) {
                 dev_err(dev, "Failed to create sub-channel: timed out\n");
                 return;
···
         if (ret != 0)
                 return ret;
 
-        t = wait_for_completion_timeout(&request->wait_event, 5*HZ);
+        t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ);
         if (t == 0)
                 return -ETIMEDOUT;
···
                 return ret;
 
         ret = storvsc_channel_init(device, is_fc);
+        if (ret)
+                vmbus_close(device->channel);
 
         return ret;
 }
···
         if (ret != 0)
                 return FAILED;
 
-        t = wait_for_completion_timeout(&request->wait_event, 5*HZ);
+        t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ);
         if (t == 0)
                 return TIMEOUT_ERROR;
···
         if (trigger->ops->enable) {
                 ret = trigger->ops->enable(trigger, config);
                 if (ret) {
-                        if (offload->ops->trigger_disable)
+                        if (offload->ops && offload->ops->trigger_disable)
                                 offload->ops->trigger_disable(offload);
                         return ret;
                 }
+18-12
drivers/spi/spi-omap2-mcspi.c
···
         size_t max_xfer_len;
         u32 ref_clk_hz;
         bool use_multi_mode;
+        bool last_msg_kept_cs;
 };
 
 struct omap2_mcspi_cs {
···
          * multi-mode is applicable.
          */
         mcspi->use_multi_mode = true;
+
+        if (mcspi->last_msg_kept_cs)
+                mcspi->use_multi_mode = false;
+
         list_for_each_entry(tr, &msg->transfers, transfer_list) {
                 if (!tr->bits_per_word)
                         bits_per_word = msg->spi->bits_per_word;
···
                         mcspi->use_multi_mode = false;
                 }
 
-                /* Check if transfer asks to change the CS status after the transfer */
-                if (!tr->cs_change)
-                        mcspi->use_multi_mode = false;
-
-                /*
-                 * If at least one message is not compatible, switch back to single mode
-                 *
-                 * The bits_per_word of certain transfer can be different, but it will have no
-                 * impact on the signal itself.
-                 */
-                if (!mcspi->use_multi_mode)
-                        break;
+                if (list_is_last(&tr->transfer_list, &msg->transfers)) {
+                        /* Check if transfer asks to keep the CS status after the whole message */
+                        if (tr->cs_change) {
+                                mcspi->use_multi_mode = false;
+                                mcspi->last_msg_kept_cs = true;
+                        } else {
+                                mcspi->last_msg_kept_cs = false;
+                        }
+                } else {
+                        /* Check if transfer asks to change the CS status after the transfer */
+                        if (!tr->cs_change)
+                                mcspi->use_multi_mode = false;
+                }
         }
 
         omap2_mcspi_set_mode(ctlr);
···
         return ret;
 }
 
-static long bch2_ioctl_fs_usage(struct bch_fs *c,
+static noinline_for_stack long bch2_ioctl_fs_usage(struct bch_fs *c,
                                 struct bch_ioctl_fs_usage __user *user_arg)
 {
         struct bch_ioctl_fs_usage arg = {};
···
 }
 
 /* obsolete, didn't allow for new data types: */
-static long bch2_ioctl_dev_usage(struct bch_fs *c,
+static noinline_for_stack long bch2_ioctl_dev_usage(struct bch_fs *c,
                                  struct bch_ioctl_dev_usage __user *user_arg)
 {
         struct bch_ioctl_dev_usage arg;
+3-1
fs/bcachefs/disk_accounting.c
···
         for (unsigned j = 0; j < nr; j++)
                 src_v[j] -= dst_v[j];
 
-        if (fsck_err(trans, accounting_mismatch, "%s", buf.buf)) {
+        bch2_trans_unlock_long(trans);
+
+        if (fsck_err(c, accounting_mismatch, "%s", buf.buf)) {
                 percpu_up_write(&c->mark_lock);
                 ret = commit_do(trans, NULL, NULL, 0,
                                 bch2_disk_accounting_mod(trans, &acc_k, src_v, nr, false));
+4-1
fs/bcachefs/error.c
···
         if (trans)
                 bch2_trans_updates_to_text(&buf, trans);
         bool ret = __bch2_inconsistent_error(c, &buf);
-        bch2_print_str_nonblocking(c, KERN_ERR, buf.buf);
+        bch2_print_str(c, KERN_ERR, buf.buf);
 
         printbuf_exit(&buf);
         return ret;
···
 
         if (s)
                 s->ret = ret;
+
+        if (trans)
+                ret = bch2_trans_log_str(trans, bch2_sb_error_strs[err]) ?: ret;
 err_unlock:
         mutex_unlock(&c->fsck_error_msgs_lock);
 err:
+8
fs/bcachefs/fs.c
···
         if (ret)
                 goto err_stop_fs;
 
+        /*
+         * We might be doing a RO mount because other options required it, or we
+         * have no alloc info and it's a small image with no room to regenerate
+         * it
+         */
+        if (c->opts.read_only)
+                fc->sb_flags |= SB_RDONLY;
+
         sb = sget(fc->fs_type, NULL, bch2_set_super, fc->sb_flags|SB_NOSEC, c);
         ret = PTR_ERR_OR_ZERO(sb);
         if (ret)
···
                 new_inode->bi_dir_offset = dir_offset;
         }
 
+        if (S_ISDIR(mode)) {
+                ret = bch2_maybe_propagate_has_case_insensitive(trans,
+                                (subvol_inum) {
+                                        new_inode->bi_subvol ?: dir.subvol,
+                                        new_inode->bi_inum },
+                                new_inode);
+                if (ret)
+                        goto err;
+        }
+
         if (S_ISDIR(mode) &&
             !new_inode->bi_subvol)
                 new_inode->bi_depth = dir_u->bi_depth + 1;
+11-11
fs/bcachefs/rcu_pending.c
···
         while (nr--)
                 kfree(*p);
 }
-
-#define local_irq_save(flags)                \
-do {                                        \
-        flags = 0;                        \
-} while (0)
 #endif
 
 static noinline void __process_finished_items(struct rcu_pending *pending,
···
 
         BUG_ON((ptr != NULL) != (pending->process == RCU_PENDING_KVFREE_FN));
 
-        local_irq_save(flags);
-        p = this_cpu_ptr(pending->p);
-        spin_lock(&p->lock);
+        /* We could technically be scheduled before taking the lock and end up
+         * using a different cpu's rcu_pending_pcpu: that's ok, it needs a lock
+         * anyways
+         *
+         * And we have to do it this way to avoid breaking PREEMPT_RT, which
+         * redefines how spinlocks work:
+         */
+        p = raw_cpu_ptr(pending->p);
+        spin_lock_irqsave(&p->lock, flags);
         rcu_gp_poll_state_t seq = __get_state_synchronize_rcu(pending->srcu);
 restart:
         if (may_sleep &&
···
                 goto free_node;
         }
 
-        local_irq_save(flags);
-        p = this_cpu_ptr(pending->p);
-        spin_lock(&p->lock);
+        p = raw_cpu_ptr(pending->p);
+        spin_lock_irqsave(&p->lock, flags);
         goto restart;
 }
+21-6
fs/bcachefs/recovery.c
···
                 goto out;
         case BTREE_ID_snapshots:
                 ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_reconstruct_snapshots, 0) ?: ret;
+                ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_check_topology, 0) ?: ret;
                 ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_scan_for_btree_nodes, 0) ?: ret;
                 goto out;
         default:
+                ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_check_topology, 0) ?: ret;
                 ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_scan_for_btree_nodes, 0) ?: ret;
                 goto out;
         }
···
                 goto out;
 
         struct btree_path *path = btree_iter_path(trans, &iter);
-        if (unlikely(!btree_path_node(path, k->level))) {
+        if (unlikely(!btree_path_node(path, k->level) &&
+                     !k->allocated)) {
+                struct bch_fs *c = trans->c;
+
+                if (!(c->recovery.passes_complete & (BIT_ULL(BCH_RECOVERY_PASS_scan_for_btree_nodes)|
+                                                     BIT_ULL(BCH_RECOVERY_PASS_check_topology)))) {
+                        bch_err(c, "have key in journal replay for btree depth that does not exist, confused");
+                        ret = -EINVAL;
+                }
+#if 0
                 bch2_trans_iter_exit(trans, &iter);
                 bch2_trans_node_iter_init(trans, &iter, k->btree_id, k->k->k.p,
                                           BTREE_MAX_DEPTH, 0, iter_flags);
                 ret = bch2_btree_iter_traverse(trans, &iter) ?:
                         bch2_btree_increase_depth(trans, iter.path, 0) ?:
                         -BCH_ERR_transaction_restart_nested;
+#endif
+                k->overwritten = true;
                 goto out;
         }
···
                         ? min(c->opts.recovery_pass_last, BCH_RECOVERY_PASS_snapshots_read)
                         : BCH_RECOVERY_PASS_snapshots_read;
                 c->opts.nochanges = true;
-                c->opts.read_only = true;
         }
+
+        if (c->opts.nochanges)
+                c->opts.read_only = true;
 
         mutex_lock(&c->sb_lock);
         struct bch_sb_field_ext *ext = bch2_sb_field_get(c->disk_sb.sb, ext);
···
 out:
         bch2_flush_fsck_errs(c);
 
-        if (!IS_ERR(clean))
-                kfree(clean);
-
         if (!ret &&
             test_bit(BCH_FS_need_delete_dead_snapshots, &c->flags) &&
             !c->opts.nochanges) {
···
         }
 
         bch_err_fn(c, ret);
+final_out:
+        if (!IS_ERR(clean))
+                kfree(clean);
         return ret;
 err:
 fsck_err:
···
                 bch2_print_str(c, KERN_ERR, buf.buf);
                 printbuf_exit(&buf);
         }
-        return ret;
+        goto final_out;
 }
 
 int bch2_fs_initialize(struct bch_fs *c)
+11-3
fs/bcachefs/recovery_passes.c
···
                                    enum bch_run_recovery_pass_flags *flags)
 {
         struct bch_fs_recovery *r = &c->recovery;
-        bool in_recovery = test_bit(BCH_FS_in_recovery, &c->flags);
-        bool persistent = !in_recovery || !(*flags & RUN_RECOVERY_PASS_nopersistent);
+
+        /*
+         * Never run scan_for_btree_nodes persistently: check_topology will run
+         * it if required
+         */
+        if (pass == BCH_RECOVERY_PASS_scan_for_btree_nodes)
+                *flags |= RUN_RECOVERY_PASS_nopersistent;
 
         if ((*flags & RUN_RECOVERY_PASS_ratelimit) &&
             !bch2_recovery_pass_want_ratelimit(c, pass))
···
          * Otherwise, we run run_explicit_recovery_pass when we find damage, so
          * it should run again even if it's already run:
          */
+        bool in_recovery = test_bit(BCH_FS_in_recovery, &c->flags);
+        bool persistent = !in_recovery || !(*flags & RUN_RECOVERY_PASS_nopersistent);
 
         if (persistent
             ? !(c->sb.recovery_passes_required & BIT_ULL(pass))
···
 {
         struct bch_fs_recovery *r = &c->recovery;
         int ret = 0;
+
 
         lockdep_assert_held(&c->sb_lock);
···
 
 int bch2_run_print_explicit_recovery_pass(struct bch_fs *c, enum bch_recovery_pass pass)
 {
-        enum bch_run_recovery_pass_flags flags = RUN_RECOVERY_PASS_nopersistent;
+        enum bch_run_recovery_pass_flags flags = 0;
 
         if (!recovery_pass_needs_set(c, pass, &flags))
                 return 0;
+4-1
fs/bcachefs/sb-downgrade.c
···
 
 static int downgrade_table_extra(struct bch_fs *c, darray_char *table)
 {
+        unsigned dst_offset = table->nr;
         struct bch_sb_field_downgrade_entry *dst = (void *) &darray_top(*table);
         unsigned bytes = sizeof(*dst) + sizeof(dst->errors[0]) * le16_to_cpu(dst->nr_errors);
         int ret = 0;
···
                 if (ret)
                         return ret;
 
+                dst = (void *) &table->data[dst_offset];
+                dst->nr_errors = cpu_to_le16(nr_errors + 1);
+
                 /* open coded __set_bit_le64, as dst is packed and
                  * dst->recovery_passes is misaligned */
                 unsigned b = BCH_RECOVERY_PASS_STABLE_check_allocations;
···
                 break;
         }
 
-        dst->nr_errors = cpu_to_le16(nr_errors);
         return ret;
 }
···
         if (!(file->f_mode & FMODE_ATOMIC_POS) && !file->f_op->iterate_shared)
                 return false;
 
-        VFS_WARN_ON_ONCE((file_count(file) > 1) &&
-                         !mutex_is_locked(&file->f_pos_lock));
+        /*
+         * Note that we are not guaranteed to be called after fdget_pos() on
+         * this file obj, in which case the caller is expected to provide the
+         * appropriate locking.
+         */
+
         return true;
 }
+13-4
fs/namei.c
···
  * @base: base directory to lookup from
  *
  * Look up a dentry by name in the dcache, returning NULL if it does not
- * currently exist. The function does not try to create a dentry.
+ * currently exist. The function does not try to create a dentry and if one
+ * is found it doesn't try to revalidate it.
  *
  * Note that this routine is purely a helper for filesystem usage and should
  * not be called by generic code. It does no permission checking.
···
         if (err)
                 return ERR_PTR(err);
 
-        return lookup_dcache(name, base, 0);
+        return d_lookup(base, name);
 }
 EXPORT_SYMBOL(try_lookup_noperm);
···
  * Note that this routine is purely a helper for filesystem usage and should
  * not be called by generic code. It does no permission checking.
  *
- * Unlike lookup_noperm, it should be called without the parent
+ * Unlike lookup_noperm(), it should be called without the parent
  * i_rwsem held, and will take the i_rwsem itself if necessary.
+ *
+ * Unlike try_lookup_noperm() it *does* revalidate the dentry if it already
+ * existed.
  */
 struct dentry *lookup_noperm_unlocked(struct qstr *name, struct dentry *base)
 {
         struct dentry *ret;
+        int err;
 
-        ret = try_lookup_noperm(name, base);
+        err = lookup_noperm_common(name, base);
+        if (err)
+                return ERR_PTR(err);
+
+        ret = lookup_dcache(name, base, 0);
         if (!ret)
                 ret = lookup_slow(name, base, 0);
         return ret;
+8-2
fs/overlayfs/namei.c
···
 bool ovl_lower_positive(struct dentry *dentry)
 {
         struct ovl_entry *poe = OVL_E(dentry->d_parent);
-        struct qstr *name = &dentry->d_name;
+        const struct qstr *name = &dentry->d_name;
         const struct cred *old_cred;
         unsigned int i;
         bool positive = false;
···
                 struct dentry *this;
                 struct ovl_path *parentpath = &ovl_lowerstack(poe)[i];
 
+                /*
+                 * We need to make a non-const copy of dentry->d_name,
+                 * because lookup_one_positive_unlocked() will hash name
+                 * with parentpath base, which is on another (lower fs).
+                 */
                 this = lookup_one_positive_unlocked(
                                 mnt_idmap(parentpath->layer->mnt),
-                                name, parentpath->dentry);
+                                &QSTR_LEN(name->name, name->len),
+                                parentpath->dentry);
                 if (IS_ERR(this)) {
                         switch (PTR_ERR(this)) {
                         case -ENOENT:
···
 struct cached_dirents {
         bool is_valid:1;
         bool is_failed:1;
-        struct dir_context *ctx; /*
-                                  * Only used to make sure we only take entries
-                                  * from a single context. Never dereferenced.
-                                  */
+        struct file *file; /*
+                            * Used to associate the cache with a single
+                            * open file instance.
+                            */
         struct mutex de_mutex;
         int pos; /* Expected ctx->pos */
         struct list_head entries;
+8-2
fs/smb/client/connect.c
···
                 goto out;
         }
 
-        /* if new SMB3.11 POSIX extensions are supported do not remap / and \ */
-        if (tcon->posix_extensions)
+        /*
+         * if new SMB3.11 POSIX extensions are supported, do not change anything in the
+         * path (i.e., do not remap / and \ and do not map any special characters)
+         */
+        if (tcon->posix_extensions) {
                 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_POSIX_PATHS;
+                cifs_sb->mnt_cifs_flags &= ~(CIFS_MOUNT_MAP_SFM_CHR |
+                                             CIFS_MOUNT_MAP_SPECIAL_CHR);
+        }
 
 #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
         /* tell server which Unix caps we support */
+6-3
fs/smb/client/file.c
···
                 rc = cifs_get_readable_path(tcon, full_path, &cfile);
         }
         if (rc == 0) {
-                if (file->f_flags == cfile->f_flags) {
+                unsigned int oflags = file->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC);
+                unsigned int cflags = cfile->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC);
+
+                if (cifs_convert_flags(oflags, 0) == cifs_convert_flags(cflags, 0) &&
+                    (oflags & (O_SYNC|O_DIRECT)) == (cflags & (O_SYNC|O_DIRECT))) {
                         file->private_data = cfile;
                         spin_lock(&CIFS_I(inode)->deferred_lock);
                         cifs_del_deferred_close(cfile);
                         spin_unlock(&CIFS_I(inode)->deferred_lock);
                         goto use_cache;
-                } else {
-                        _cifsFileInfo_put(cfile, true, false);
                 }
+                _cifsFileInfo_put(cfile, true, false);
         } else {
                 /* hard link on the defeered close file */
                 rc = cifs_get_hardlink_path(tcon, inode, file);
···
  * @sg:         The current sg entry
  *
  * Description:
- *   Usually the next entry will be @sg@ + 1, but if this sg element is part
+ *   Usually the next entry will be @sg + 1, but if this sg element is part
  *   of a chained scatterlist, it could jump to the start of a new
  *   scatterlist array.
  *
···
  * @sgl:        Second scatterlist
  *
  * Description:
- *   Links @prv@ and @sgl@ together, to form a longer scatterlist.
+ *   Links @prv and @sgl together, to form a longer scatterlist.
  *
  **/
 static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
···
                 /* truncate end piece, if needed, for non partial buffers */
                 if (len > arg->max_len) {
                         len = arg->max_len;
-                        if (!(bl->flags & IOBL_INC))
+                        if (!(bl->flags & IOBL_INC)) {
+                                if (iov != arg->iovs)
+                                        break;
                                 buf->len = len;
+                        }
                 }
 
                 iov->iov_base = u64_to_user_ptr(buf->addr);
+5-2
io_uring/register.c
···
         if (ctx->flags & IORING_SETUP_SQPOLL) {
                 sqd = ctx->sq_data;
                 if (sqd) {
+                        struct task_struct *tsk;
+
                         /*
                          * Observe the correct sqd->lock -> ctx->uring_lock
                          * ordering. Fine to drop uring_lock here, we hold
···
                         mutex_unlock(&ctx->uring_lock);
                         mutex_lock(&sqd->lock);
                         mutex_lock(&ctx->uring_lock);
-                        if (sqd->thread)
-                                tctx = sqd->thread->io_uring;
+                        tsk = sqpoll_task_locked(sqd);
+                        if (tsk)
+                                tctx = tsk->io_uring;
                 }
         } else {
                 tctx = current->io_uring;
+28-15
io_uring/sqpoll.c
···
 void io_sq_thread_unpark(struct io_sq_data *sqd)
         __releases(&sqd->lock)
 {
-        WARN_ON_ONCE(sqd->thread == current);
+        WARN_ON_ONCE(sqpoll_task_locked(sqd) == current);
 
         /*
          * Do the dance but not conditional clear_bit() because it'd race with
···
 void io_sq_thread_park(struct io_sq_data *sqd)
         __acquires(&sqd->lock)
 {
-        WARN_ON_ONCE(data_race(sqd->thread) == current);
+        struct task_struct *tsk;
 
         atomic_inc(&sqd->park_pending);
         set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
         mutex_lock(&sqd->lock);
-        if (sqd->thread)
-                wake_up_process(sqd->thread);
+
+        tsk = sqpoll_task_locked(sqd);
+        if (tsk) {
+                WARN_ON_ONCE(tsk == current);
+                wake_up_process(tsk);
+        }
 }
 
 void io_sq_thread_stop(struct io_sq_data *sqd)
 {
-        WARN_ON_ONCE(sqd->thread == current);
+        struct task_struct *tsk;
+
         WARN_ON_ONCE(test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state));
 
         set_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
         mutex_lock(&sqd->lock);
-        if (sqd->thread)
-                wake_up_process(sqd->thread);
+        tsk = sqpoll_task_locked(sqd);
+        if (tsk) {
+                WARN_ON_ONCE(tsk == current);
+                wake_up_process(tsk);
+        }
         mutex_unlock(&sqd->lock);
         wait_for_completion(&sqd->exited);
 }
···
         /* offload context creation failed, just exit */
         if (!current->io_uring) {
                 mutex_lock(&sqd->lock);
-                sqd->thread = NULL;
+                rcu_assign_pointer(sqd->thread, NULL);
+                put_task_struct(current);
                 mutex_unlock(&sqd->lock);
                 goto err_out;
         }
···
         io_sq_tw(&retry_list, UINT_MAX);
 
         io_uring_cancel_generic(true, sqd);
-        sqd->thread = NULL;
+        rcu_assign_pointer(sqd->thread, NULL);
+        put_task_struct(current);
         list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
                 atomic_or(IORING_SQ_NEED_WAKEUP, &ctx->rings->sq_flags);
         io_run_task_work();
···
                 goto err_sqpoll;
         }
 
-        sqd->thread = tsk;
+        mutex_lock(&sqd->lock);
+        rcu_assign_pointer(sqd->thread, tsk);
+        mutex_unlock(&sqd->lock);
+
         task_to_put = get_task_struct(tsk);
         ret = io_uring_alloc_task_context(tsk, ctx);
         wake_up_new_task(tsk);
···
                 ret = -EINVAL;
                 goto err;
         }
-
-        if (task_to_put)
-                put_task_struct(task_to_put);
         return 0;
 err_sqpoll:
         complete(&ctx->sq_data->exited);
···
         int ret = -EINVAL;
 
         if (sqd) {
+                struct task_struct *tsk;
+
                 io_sq_thread_park(sqd);
                 /* Don't set affinity for a dying thread */
-                if (sqd->thread)
-                        ret = io_wq_cpu_affinity(sqd->thread->io_uring, mask);
+                tsk = sqpoll_task_locked(sqd);
+                if (tsk)
+                        ret = io_wq_cpu_affinity(tsk->io_uring, mask);
                 io_sq_thread_unpark(sqd);
         }
 
···
 	lockdep_assert_irqs_disabled();
 
 	/*
+	 * Ensure that release_task(tsk) can't happen while
+	 * handle_posix_cpu_timers() is running. Otherwise, a concurrent
+	 * posix_cpu_timer_del() may fail to lock_task_sighand(tsk) and
+	 * miss timer->it.cpu.firing != 0.
+	 */
+	if (tsk->exit_state)
+		return;
+
+	/*
 	 * If the actual expiry is deferred to task work context and the
 	 * work is already scheduled there is no point to do anything here.
 	 */
···
 	return 0;
 }
 
+static struct tracer graph_trace;
+
 static int ftrace_graph_trace_args(struct trace_array *tr, int set)
 {
 	trace_func_graph_ent_t entry;
+
+	/* Do nothing if the current tracer is not this tracer */
+	if (tr->current_trace != &graph_trace)
+		return 0;
 
 	if (set)
 		entry = trace_graph_entry_args;
···
  * Should only be used casually, it (currently) scans the entire list
  * to get the last entry.
  *
- * Note that the @sgl@ pointer passed in need not be the first one,
- * the important bit is that @nents@ denotes the number of entries that
- * exist from @sgl@.
+ * Note that the @sgl pointer passed in need not be the first one,
+ * the important bit is that @nents denotes the number of entries that
+ * exist from @sgl.
  *
  **/
 struct scatterlist *sg_last(struct scatterlist *sgl, unsigned int nents)
···
  * @gfp_mask:	GFP allocation mask
  *
  * Description:
- *   Allocate and initialize an sg table. If @nents@ is larger than
+ *   Allocate and initialize an sg table. If @nents is larger than
  *   SG_MAX_SINGLE_ALLOC a chained sg table will be setup.
  *
 **/
···
 		pte_offset_map_lock(mm, pmd, addr, &ptl);
 		if (!start_pte)
 			break;
+		flush_tlb_batched_pending(mm);
 		arch_enter_lazy_mmu_mode();
 		if (!err)
 			nr = 0;
···
 		start_pte = pte;
 		if (!start_pte)
 			break;
+		flush_tlb_batched_pending(mm);
 		arch_enter_lazy_mmu_mode();
 		if (!err)
 			nr = 0;
+40
mm/util.c
···
 }
 EXPORT_SYMBOL(flush_dcache_folio);
 #endif
+
+/**
+ * compat_vma_mmap_prepare() - Apply the file's .mmap_prepare() hook to an
+ * existing VMA
+ * @file: The file which possesses an f_op->mmap_prepare() hook
+ * @vma: The VMA to apply the .mmap_prepare() hook to.
+ *
+ * Ordinarily, .mmap_prepare() is invoked directly upon mmap(). However, certain
+ * 'wrapper' file systems invoke a nested mmap hook of an underlying file.
+ *
+ * Until all filesystems are converted to use .mmap_prepare(), we must be
+ * conservative and continue to invoke these 'wrapper' filesystems using the
+ * deprecated .mmap() hook.
+ *
+ * However we have a problem if the underlying file system possesses an
+ * .mmap_prepare() hook, as we are in a different context when we invoke the
+ * .mmap() hook, already having a VMA to deal with.
+ *
+ * compat_vma_mmap_prepare() is a compatibility function that takes VMA state,
+ * establishes a struct vm_area_desc descriptor, passes to the underlying
+ * .mmap_prepare() hook and applies any changes performed by it.
+ *
+ * Once the conversion of filesystems is complete this function will no longer
+ * be required and will be removed.
+ *
+ * Returns: 0 on success or error.
+ */
+int compat_vma_mmap_prepare(struct file *file, struct vm_area_struct *vma)
+{
+	struct vm_area_desc desc;
+	int err;
+
+	err = file->f_op->mmap_prepare(vma_to_desc(vma, &desc));
+	if (err)
+		return err;
+	set_vma_from_desc(vma, &desc);
+
+	return 0;
+}
+EXPORT_SYMBOL(compat_vma_mmap_prepare);
+4-19
mm/vma.c
···
 		err = dup_anon_vma(next, middle, &anon_dup);
 	}
 
-	if (err)
+	if (err || commit_merge(vmg))
 		goto abort;
-
-	err = commit_merge(vmg);
-	if (err) {
-		VM_WARN_ON(err != -ENOMEM);
-
-		if (anon_dup)
-			unlink_anon_vmas(anon_dup);
-
-		/*
-		 * We've cleaned up any cloned anon_vma's, no VMAs have been
-		 * modified, no harm no foul if the user requests that we not
-		 * report this and just give up, leaving the VMAs unmerged.
-		 */
-		if (!vmg->give_up_on_oom)
-			vmg->state = VMA_MERGE_ERROR_NOMEM;
-		return NULL;
-	}
 
 	khugepaged_enter_vma(vmg->target, vmg->flags);
 	vmg->state = VMA_MERGE_SUCCESS;
···
 abort:
 	vma_iter_set(vmg->vmi, start);
 	vma_iter_load(vmg->vmi);
+
+	if (anon_dup)
+		unlink_anon_vmas(anon_dup);
 
 	/*
 	 * This means we have failed to clone anon_vma's correctly, but no
···
 	userfaultfd_unmap_complete(mm, &uf);
 	return ret;
 }
-
 
 /* Insert vm structure into process list sorted by address
  * and into the inode's i_mmap tree. If vm_file is non-NULL
+47
mm/vma.h
···
 	return 0;
 }
 
+
+/*
+ * Temporary helper functions for file systems which wrap an invocation of
+ * f_op->mmap() but which might have an underlying file system which implements
+ * f_op->mmap_prepare().
+ */
+
+static inline struct vm_area_desc *vma_to_desc(struct vm_area_struct *vma,
+		struct vm_area_desc *desc)
+{
+	desc->mm = vma->vm_mm;
+	desc->start = vma->vm_start;
+	desc->end = vma->vm_end;
+
+	desc->pgoff = vma->vm_pgoff;
+	desc->file = vma->vm_file;
+	desc->vm_flags = vma->vm_flags;
+	desc->page_prot = vma->vm_page_prot;
+
+	desc->vm_ops = NULL;
+	desc->private_data = NULL;
+
+	return desc;
+}
+
+static inline void set_vma_from_desc(struct vm_area_struct *vma,
+		struct vm_area_desc *desc)
+{
+	/*
+	 * Since we're invoking .mmap_prepare() despite having a partially
+	 * established VMA, we must take care to handle setting fields
+	 * correctly.
+	 */
+
+	/* Mutable fields. Populated with initial state. */
+	vma->vm_pgoff = desc->pgoff;
+	if (vma->vm_file != desc->file)
+		vma_set_file(vma, desc->file);
+	if (vma->vm_flags != desc->vm_flags)
+		vm_flags_set(vma, desc->vm_flags);
+	vma->vm_page_prot = desc->page_prot;
+
+	/* User-defined fields. */
+	vma->vm_ops = desc->vm_ops;
+	vma->vm_private_data = desc->private_data;
+}
+
 int
 do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		struct mm_struct *mm, unsigned long start,
+1
net/atm/common.c
···
 
 	skb->dev = NULL;	/* for paths shared with net_device interfaces */
 	if (!copy_from_iter_full(skb_put(skb, size), size, &m->msg_iter)) {
+		atm_return_tx(vcc, skb);
 		kfree_skb(skb);
 		error = -EFAULT;
 		goto out;
+10-2
net/atm/lec.c
···
 
 /* Device structures */
 static struct net_device *dev_lec[MAX_LEC_ITF];
+static DEFINE_MUTEX(lec_mutex);
 
 #if IS_ENABLED(CONFIG_BRIDGE)
 static void lec_handle_bridge(struct sk_buff *skb, struct net_device *dev)
···
 	int bytes_left;
 	struct atmlec_ioc ioc_data;
 
+	lockdep_assert_held(&lec_mutex);
 	/* Lecd must be up in this case */
 	bytes_left = copy_from_user(&ioc_data, arg, sizeof(struct atmlec_ioc));
 	if (bytes_left != 0)
···
 
 static int lec_mcast_attach(struct atm_vcc *vcc, int arg)
 {
+	lockdep_assert_held(&lec_mutex);
 	if (arg < 0 || arg >= MAX_LEC_ITF)
 		return -EINVAL;
 	arg = array_index_nospec(arg, MAX_LEC_ITF);
···
 	int i;
 	struct lec_priv *priv;
 
+	lockdep_assert_held(&lec_mutex);
 	if (arg < 0)
 		arg = 0;
 	if (arg >= MAX_LEC_ITF)
···
 	snprintf(dev_lec[i]->name, IFNAMSIZ, "lec%d", i);
 	if (register_netdev(dev_lec[i])) {
 		free_netdev(dev_lec[i]);
+		dev_lec[i] = NULL;
 		return -EINVAL;
 	}
···
 	v = (dev && netdev_priv(dev)) ?
 		lec_priv_walk(state, l, netdev_priv(dev)) : NULL;
 	if (!v && dev) {
-		dev_put(dev);
 		/* Partial state reset for the next time we get called */
 		dev = NULL;
 	}
···
 {
 	struct lec_state *state = seq->private;
 
+	mutex_lock(&lec_mutex);
 	state->itf = 0;
 	state->dev = NULL;
 	state->locked = NULL;
···
 	if (state->dev) {
 		spin_unlock_irqrestore(&state->locked->lec_arp_lock,
 				       state->flags);
-		dev_put(state->dev);
+		state->dev = NULL;
 	}
+	mutex_unlock(&lec_mutex);
 }
 
 static void *lec_seq_next(struct seq_file *seq, void *v, loff_t *pos)
···
 		return -ENOIOCTLCMD;
 	}
 
+	mutex_lock(&lec_mutex);
 	switch (cmd) {
 	case ATMLEC_CTRL:
 		err = lecd_attach(vcc, (int)arg);
···
 		break;
 	}
 
+	mutex_unlock(&lec_mutex);
 	return err;
 }
···
 	if (!pskb_may_pull(skb, write_len))
 		return -ENOMEM;
 
-	if (!skb_frags_readable(skb))
-		return -EFAULT;
-
 	if (!skb_cloned(skb) || skb_clone_writable(skb, write_len))
 		return 0;
+3
net/ipv4/tcp_fastopen.c
···
 #include <linux/tcp.h>
 #include <linux/rcupdate.h>
 #include <net/tcp.h>
+#include <net/busy_poll.h>
 
 void tcp_fastopen_init_key_once(struct net *net)
 {
···
 				  req->timeout, false);
 
 	refcount_set(&req->rsk_refcnt, 2);
+
+	sk_mark_napi_id_set(child, skb);
 
 	/* Now finish processing the fastopen child socket. */
 	tcp_init_transfer(child, BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB, skb);
+24-11
net/ipv4/tcp_input.c
···
 {
 	const struct sock *sk = (const struct sock *)tp;
 
-	if (tp->retrans_stamp &&
-	    tcp_tsopt_ecr_before(tp, tp->retrans_stamp))
-		return true;  /* got echoed TS before first retransmission */
+	/* Received an echoed timestamp before the first retransmission? */
+	if (tp->retrans_stamp)
+		return tcp_tsopt_ecr_before(tp, tp->retrans_stamp);
 
-	/* Check if nothing was retransmitted (retrans_stamp==0), which may
-	 * happen in fast recovery due to TSQ. But we ignore zero retrans_stamp
-	 * in TCP_SYN_SENT, since when we set FLAG_SYN_ACKED we also clear
-	 * retrans_stamp even if we had retransmitted the SYN.
+	/* We set tp->retrans_stamp upon the first retransmission of a loss
+	 * recovery episode, so normally if tp->retrans_stamp is 0 then no
+	 * retransmission has happened yet (likely due to TSQ, which can cause
+	 * fast retransmits to be delayed). So if snd_una advanced while
+	 * tp->retrans_stamp is 0 then apparently a packet was merely delayed,
+	 * not lost. But there are exceptions where we retransmit but then
+	 * clear tp->retrans_stamp, so we check for those exceptions.
 	 */
-	if (!tp->retrans_stamp &&	   /* no record of a retransmit/SYN? */
-	    sk->sk_state != TCP_SYN_SENT)  /* not the FLAG_SYN_ACKED case? */
-		return true;  /* nothing was retransmitted */
 
-	return false;
+	/* (1) For non-SACK connections, tcp_is_non_sack_preventing_reopen()
+	 * clears tp->retrans_stamp when snd_una == high_seq.
+	 */
+	if (!tcp_is_sack(tp) && !before(tp->snd_una, tp->high_seq))
+		return false;
+
+	/* (2) In TCP_SYN_SENT, tcp_clean_rtx_queue() clears tp->retrans_stamp
+	 * when FLAG_SYN_ACKED is set, even if the SYN was retransmitted.
+	 */
+	if (sk->sk_state == TCP_SYN_SENT)
+		return false;
+
+	return true;  /* tp->retrans_stamp is zero; no retransmit yet */
 }
 
 /* Undo procedures. */
+8
net/ipv6/calipso.c
···
 	struct ipv6_opt_hdr *old, *new;
 	struct sock *sk = sk_to_full_sk(req_to_sk(req));
 
+	/* sk is NULL for SYN+ACK w/ SYN Cookie */
+	if (!sk)
+		return -ENOMEM;
+
 	if (req_inet->ipv6_opt && req_inet->ipv6_opt->hopopt)
 		old = req_inet->ipv6_opt->hopopt;
 	else
···
 	struct ipv6_opt_hdr *new;
 	struct ipv6_txoptions *txopts;
 	struct sock *sk = sk_to_full_sk(req_to_sk(req));
+
+	/* sk is NULL for SYN+ACK w/ SYN Cookie */
+	if (!sk)
+		return;
 
 	if (!req_inet->ipv6_opt || !req_inet->ipv6_opt->hopopt)
 		return;
···
 	if (!multicast &&
 	    !ether_addr_equal(sdata->dev->dev_addr, hdr->addr1))
 		return false;
+	/* reject invalid/our STA address */
+	if (!is_valid_ether_addr(hdr->addr2) ||
+	    ether_addr_equal(sdata->dev->dev_addr, hdr->addr2))
+		return false;
 	if (!rx->sta) {
 		int rate_idx;
 		if (status->encoding != RX_ENC_LEGACY)
+21-8
net/mac80211/tx.c
···
  * Copyright 2006-2007	Jiri Benc <jbenc@suse.cz>
  * Copyright 2007	Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
- * Copyright (C) 2018-2024 Intel Corporation
+ * Copyright (C) 2018-2025 Intel Corporation
  *
  * Transmit and frame generation functions.
  */
···
 	}
 }
 
-static u8 __ieee80211_beacon_update_cntdwn(struct beacon_data *beacon)
+static u8 __ieee80211_beacon_update_cntdwn(struct ieee80211_link_data *link,
+					   struct beacon_data *beacon)
 {
-	beacon->cntdwn_current_counter--;
+	if (beacon->cntdwn_current_counter == 1) {
+		/*
+		 * Channel switch handling is done by a worker thread while
+		 * beacons get pulled from hardware timers. It's therefore
+		 * possible that software threads are slow enough to not be
+		 * able to complete CSA handling in a single beacon interval,
+		 * in which case we get here. There isn't much to do about
+		 * it, other than letting the user know that the AP isn't
+		 * behaving correctly.
+		 */
+		link_err_once(link,
+			      "beacon TX faster than countdown (channel/color switch) completion\n");
+		return 0;
+	}
 
-	/* the counter should never reach 0 */
-	WARN_ON_ONCE(!beacon->cntdwn_current_counter);
+	beacon->cntdwn_current_counter--;
 
 	return beacon->cntdwn_current_counter;
 }
···
 	if (!beacon)
 		goto unlock;
 
-	count = __ieee80211_beacon_update_cntdwn(beacon);
+	count = __ieee80211_beacon_update_cntdwn(link, beacon);
 
 unlock:
 	rcu_read_unlock();
···
 
 	if (beacon->cntdwn_counter_offsets[0]) {
 		if (!is_template)
-			__ieee80211_beacon_update_cntdwn(beacon);
+			__ieee80211_beacon_update_cntdwn(link, beacon);
 
 		ieee80211_set_beacon_cntdwn(sdata, beacon, link);
 	}
···
 		 * for now we leave it consistent with overall
 		 * mac80211's behavior.
 		 */
-		__ieee80211_beacon_update_cntdwn(beacon);
+		__ieee80211_beacon_update_cntdwn(link, beacon);
 
 		ieee80211_set_beacon_cntdwn(sdata, beacon, link);
 	}
···
 	struct task_struct *owner;
 	local_lock_t bh_lock;
 };
-DECLARE_PER_CPU(struct ovs_pcpu_storage, ovs_pcpu_storage);
+
+extern struct ovs_pcpu_storage __percpu *ovs_pcpu_storage;
 
 /**
  * enum ovs_pkt_hash_types - hash info to include with a packet
···
 
 use crate::{bindings, device::Device, error::Result, prelude::ENODEV};
 
+/// Returns the maximum number of possible CPUs in the current system configuration.
+#[inline]
+pub fn nr_cpu_ids() -> u32 {
+    #[cfg(any(NR_CPUS_1, CONFIG_FORCE_NR_CPUS))]
+    {
+        bindings::NR_CPUS
+    }
+
+    #[cfg(not(any(NR_CPUS_1, CONFIG_FORCE_NR_CPUS)))]
+    // SAFETY: `nr_cpu_ids` is a valid global provided by the kernel.
+    unsafe {
+        bindings::nr_cpu_ids
+    }
+}
+
+/// The CPU ID.
+///
+/// Represents a CPU identifier as a wrapper around an [`u32`].
+///
+/// # Invariants
+///
+/// The CPU ID lies within the range `[0, nr_cpu_ids())`.
+///
+/// # Examples
+///
+/// ```
+/// use kernel::cpu::CpuId;
+///
+/// let cpu = 0;
+///
+/// // SAFETY: 0 is always a valid CPU number.
+/// let id = unsafe { CpuId::from_u32_unchecked(cpu) };
+///
+/// assert_eq!(id.as_u32(), cpu);
+/// assert!(CpuId::from_i32(0).is_some());
+/// assert!(CpuId::from_i32(-1).is_none());
+/// ```
+#[derive(Copy, Clone, PartialEq, Eq, Debug)]
+pub struct CpuId(u32);
+
+impl CpuId {
+    /// Creates a new [`CpuId`] from the given `id` without checking bounds.
+    ///
+    /// # Safety
+    ///
+    /// The caller must ensure that `id` is a valid CPU ID (i.e., `0 <= id < nr_cpu_ids()`).
+    #[inline]
+    pub unsafe fn from_i32_unchecked(id: i32) -> Self {
+        debug_assert!(id >= 0);
+        debug_assert!((id as u32) < nr_cpu_ids());
+
+        // INVARIANT: The function safety guarantees `id` is a valid CPU id.
+        Self(id as u32)
+    }
+
+    /// Creates a new [`CpuId`] from the given `id`, checking that it is valid.
+    pub fn from_i32(id: i32) -> Option<Self> {
+        if id < 0 || id as u32 >= nr_cpu_ids() {
+            None
+        } else {
+            // INVARIANT: `id` has just been checked as a valid CPU ID.
+            Some(Self(id as u32))
+        }
+    }
+
+    /// Creates a new [`CpuId`] from the given `id` without checking bounds.
+    ///
+    /// # Safety
+    ///
+    /// The caller must ensure that `id` is a valid CPU ID (i.e., `0 <= id < nr_cpu_ids()`).
+    #[inline]
+    pub unsafe fn from_u32_unchecked(id: u32) -> Self {
+        debug_assert!(id < nr_cpu_ids());
+
+        // Ensure the `id` fits in an [`i32`] as it's also representable that way.
+        debug_assert!(id <= i32::MAX as u32);
+
+        // INVARIANT: The function safety guarantees `id` is a valid CPU id.
+        Self(id)
+    }
+
+    /// Creates a new [`CpuId`] from the given `id`, checking that it is valid.
+    pub fn from_u32(id: u32) -> Option<Self> {
+        if id >= nr_cpu_ids() {
+            None
+        } else {
+            // INVARIANT: `id` has just been checked as a valid CPU ID.
+            Some(Self(id))
+        }
+    }
+
+    /// Returns CPU number.
+    #[inline]
+    pub fn as_u32(&self) -> u32 {
+        self.0
+    }
+
+    /// Returns the ID of the CPU the code is currently running on.
+    ///
+    /// The returned value is considered unstable because it may change
+    /// unexpectedly due to preemption or CPU migration. It should only be
+    /// used when the context ensures that the task remains on the same CPU
+    /// or the users could use a stale (yet valid) CPU ID.
+    pub fn current() -> Self {
+        // SAFETY: raw_smp_processor_id() always returns a valid CPU ID.
+        unsafe { Self::from_u32_unchecked(bindings::raw_smp_processor_id()) }
+    }
+}
+
+impl From<CpuId> for u32 {
+    fn from(id: CpuId) -> Self {
+        id.as_u32()
+    }
+}
+
+impl From<CpuId> for i32 {
+    fn from(id: CpuId) -> Self {
+        id.as_u32() as i32
+    }
+}
+
 /// Creates a new instance of CPU's device.
 ///
 /// # Safety
···
 /// Callers must ensure that the CPU device is not used after it has been unregistered.
 /// This can be achieved, for example, by registering a CPU hotplug notifier and removing
 /// any references to the CPU device within the notifier's callback.
-pub unsafe fn from_cpu(cpu: u32) -> Result<&'static Device> {
+pub unsafe fn from_cpu(cpu: CpuId) -> Result<&'static Device> {
     // SAFETY: It is safe to call `get_cpu_device()` for any CPU.
-    let ptr = unsafe { bindings::get_cpu_device(cpu) };
+    let ptr = unsafe { bindings::get_cpu_device(u32::from(cpu)) };
     if ptr.is_null() {
         return Err(ENODEV);
     }
+128-45
rust/kernel/cpufreq.rs
···
 
 use crate::{
     clk::Hertz,
+    cpu::CpuId,
     cpumask,
     device::{Bound, Device},
     devres::Devres,
···
 
     /// Returns the primary CPU for the [`Policy`].
     #[inline]
-    pub fn cpu(&self) -> u32 {
-        self.as_ref().cpu
+    pub fn cpu(&self) -> CpuId {
+        // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number.
+        unsafe { CpuId::from_u32_unchecked(self.as_ref().cpu) }
     }
 
     /// Returns the minimum frequency for the [`Policy`].
···
     #[inline]
     pub fn generic_get(&self) -> Result<u32> {
         // SAFETY: By the type invariant, the pointer stored in `self` is valid.
-        Ok(unsafe { bindings::cpufreq_generic_get(self.cpu()) })
+        Ok(unsafe { bindings::cpufreq_generic_get(u32::from(self.cpu())) })
     }
 
     /// Provides a wrapper to the register with energy model using the OPP core.
···
 struct PolicyCpu<'a>(&'a mut Policy);
 
 impl<'a> PolicyCpu<'a> {
-    fn from_cpu(cpu: u32) -> Result<Self> {
+    fn from_cpu(cpu: CpuId) -> Result<Self> {
         // SAFETY: It is safe to call `cpufreq_cpu_get` for any valid CPU.
-        let ptr = from_err_ptr(unsafe { bindings::cpufreq_cpu_get(cpu) })?;
+        let ptr = from_err_ptr(unsafe { bindings::cpufreq_cpu_get(u32::from(cpu)) })?;
 
         Ok(Self(
             // SAFETY: The `ptr` is guaranteed to be valid and remains valid for the lifetime of
···
 impl<T: Driver> Registration<T> {
     /// Driver's `init` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn init_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn init_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
         from_result(|| {
             // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
             // lifetime of `policy`.
···
 
     /// Driver's `exit` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn exit_callback(ptr: *mut bindings::cpufreq_policy) {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn exit_callback(ptr: *mut bindings::cpufreq_policy) {
         // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
         // lifetime of `policy`.
         let policy = unsafe { Policy::from_raw_mut(ptr) };
···
 
     /// Driver's `online` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn online_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn online_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
         from_result(|| {
             // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
             // lifetime of `policy`.
···
 
     /// Driver's `offline` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn offline_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn offline_callback(
+        ptr: *mut bindings::cpufreq_policy,
+    ) -> kernel::ffi::c_int {
         from_result(|| {
             // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
             // lifetime of `policy`.
···
 
     /// Driver's `suspend` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn suspend_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn suspend_callback(
+        ptr: *mut bindings::cpufreq_policy,
+    ) -> kernel::ffi::c_int {
         from_result(|| {
             // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
             // lifetime of `policy`.
···
 
     /// Driver's `resume` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn resume_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn resume_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
         from_result(|| {
             // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
             // lifetime of `policy`.
···
 
     /// Driver's `ready` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn ready_callback(ptr: *mut bindings::cpufreq_policy) {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn ready_callback(ptr: *mut bindings::cpufreq_policy) {
         // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
         // lifetime of `policy`.
         let policy = unsafe { Policy::from_raw_mut(ptr) };
···
 
     /// Driver's `verify` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn verify_callback(ptr: *mut bindings::cpufreq_policy_data) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn verify_callback(
+        ptr: *mut bindings::cpufreq_policy_data,
+    ) -> kernel::ffi::c_int {
         from_result(|| {
             // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
             // lifetime of `policy`.
···
 
     /// Driver's `setpolicy` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn setpolicy_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn setpolicy_callback(
+        ptr: *mut bindings::cpufreq_policy,
+    ) -> kernel::ffi::c_int {
         from_result(|| {
             // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
             // lifetime of `policy`.
···
 
     /// Driver's `target` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn target_callback(
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn target_callback(
         ptr: *mut bindings::cpufreq_policy,
         target_freq: u32,
         relation: u32,
···
 
     /// Driver's `target_index` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn target_index_callback(
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn target_index_callback(
         ptr: *mut bindings::cpufreq_policy,
         index: u32,
     ) -> kernel::ffi::c_int {
···
 
     /// Driver's `fast_switch` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn fast_switch_callback(
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn fast_switch_callback(
         ptr: *mut bindings::cpufreq_policy,
         target_freq: u32,
     ) -> kernel::ffi::c_uint {
···
     }
 
     /// Driver's `adjust_perf` callback.
-    extern "C" fn adjust_perf_callback(
+    ///
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    unsafe extern "C" fn adjust_perf_callback(
         cpu: u32,
         min_perf: usize,
         target_perf: usize,
         capacity: usize,
     ) {
-        if let Ok(mut policy) = PolicyCpu::from_cpu(cpu) {
+        // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number.
+        let cpu_id = unsafe { CpuId::from_u32_unchecked(cpu) };
+
+        if let Ok(mut policy) = PolicyCpu::from_cpu(cpu_id) {
             T::adjust_perf(&mut policy, min_perf, target_perf, capacity);
         }
     }
 
     /// Driver's `get_intermediate` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn get_intermediate_callback(
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn get_intermediate_callback(
         ptr: *mut bindings::cpufreq_policy,
         index: u32,
     ) -> kernel::ffi::c_uint {
···
 
     /// Driver's `target_intermediate` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn target_intermediate_callback(
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn target_intermediate_callback(
         ptr: *mut bindings::cpufreq_policy,
         index: u32,
     ) -> kernel::ffi::c_int {
···
     }
 
     /// Driver's `get` callback.
-    extern "C" fn get_callback(cpu: u32) -> kernel::ffi::c_uint {
-        PolicyCpu::from_cpu(cpu).map_or(0, |mut policy| T::get(&mut policy).map_or(0, |f| f))
+    ///
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    unsafe extern "C" fn get_callback(cpu: u32) -> kernel::ffi::c_uint {
+        // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number.
+        let cpu_id = unsafe { CpuId::from_u32_unchecked(cpu) };
+
+        PolicyCpu::from_cpu(cpu_id).map_or(0, |mut policy| T::get(&mut policy).map_or(0, |f| f))
     }
 
     /// Driver's `update_limit` callback.
-    extern "C" fn update_limits_callback(ptr: *mut bindings::cpufreq_policy) {
+    ///
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn update_limits_callback(ptr: *mut bindings::cpufreq_policy) {
         // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
         // lifetime of `policy`.
         let policy = unsafe { Policy::from_raw_mut(ptr) };
···
 
     /// Driver's `bios_limit` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn bios_limit_callback(cpu: i32, limit: *mut u32) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn bios_limit_callback(cpu: i32, limit: *mut u32) -> kernel::ffi::c_int {
+        // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number.
+        let cpu_id = unsafe { CpuId::from_i32_unchecked(cpu) };
+
         from_result(|| {
-            let mut policy = PolicyCpu::from_cpu(cpu as u32)?;
+            let mut policy = PolicyCpu::from_cpu(cpu_id)?;
 
             // SAFETY: `limit` is guaranteed by the C code to be valid.
             T::bios_limit(&mut policy, &mut (unsafe { *limit })).map(|()| 0)
···
 
     /// Driver's `set_boost` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn set_boost_callback(
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn set_boost_callback(
         ptr: *mut bindings::cpufreq_policy,
         state: i32,
     ) -> kernel::ffi::c_int {
···
 
     /// Driver's `register_em` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn register_em_callback(ptr: *mut bindings::cpufreq_policy) {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn register_em_callback(ptr: *mut bindings::cpufreq_policy) {
         // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
         // lifetime of `policy`.
         let policy = unsafe { Policy::from_raw_mut(ptr) };
+36-15
rust/kernel/cpumask.rs
···
 
 use crate::{
     alloc::{AllocError, Flags},
+    cpu::CpuId,
     prelude::*,
     types::Opaque,
 };
···
 ///
 /// ```
 /// use kernel::bindings;
+/// use kernel::cpu::CpuId;
 /// use kernel::cpumask::Cpumask;
 ///
-/// fn set_clear_cpu(ptr: *mut bindings::cpumask, set_cpu: u32, clear_cpu: i32) {
+/// fn set_clear_cpu(ptr: *mut bindings::cpumask, set_cpu: CpuId, clear_cpu: CpuId) {
 ///     // SAFETY: The `ptr` is valid for writing and remains valid for the lifetime of the
 ///     // returned reference.
 ///     let mask = unsafe { Cpumask::as_mut_ref(ptr) };
···
     /// This mismatches kernel naming convention and corresponds to the C
     /// function `__cpumask_set_cpu()`.
     #[inline]
-    pub fn set(&mut self, cpu: u32) {
+    pub fn set(&mut self, cpu: CpuId) {
         // SAFETY: By the type invariant, `self.as_raw` is a valid argument to `__cpumask_set_cpu`.
-        unsafe { bindings::__cpumask_set_cpu(cpu, self.as_raw()) };
+        unsafe { bindings::__cpumask_set_cpu(u32::from(cpu), self.as_raw()) };
     }
 
     /// Clear `cpu` in the cpumask.
···
     /// This mismatches kernel naming convention and corresponds to the C
     /// function `__cpumask_clear_cpu()`.
     #[inline]
-    pub fn clear(&mut self, cpu: i32) {
+    pub fn clear(&mut self, cpu: CpuId) {
         // SAFETY: By the type invariant, `self.as_raw` is a valid argument to
         // `__cpumask_clear_cpu`.
-        unsafe { bindings::__cpumask_clear_cpu(cpu, self.as_raw()) };
+        unsafe { bindings::__cpumask_clear_cpu(i32::from(cpu), self.as_raw()) };
     }
 
     /// Test `cpu` in the cpumask.
     ///
     /// Equivalent to the kernel's `cpumask_test_cpu` API.
     #[inline]
-    pub fn test(&self, cpu: i32) -> bool {
+    pub fn test(&self, cpu: CpuId) -> bool {
         // SAFETY: By the type invariant, `self.as_raw` is a valid argument to `cpumask_test_cpu`.
-        unsafe { bindings::cpumask_test_cpu(cpu, self.as_raw()) }
+        unsafe { bindings::cpumask_test_cpu(i32::from(cpu), self.as_raw()) }
     }
 
     /// Set all CPUs in the cpumask.
···
 /// The following example demonstrates how to create and update a [`CpumaskVar`].
 ///
 /// ```
+/// use kernel::cpu::CpuId;
 /// use kernel::cpumask::CpumaskVar;
 ///
 /// let mut mask = CpumaskVar::new_zero(GFP_KERNEL).unwrap();
 ///
 /// assert!(mask.empty());
-/// mask.set(2);
-/// assert!(mask.test(2));
-/// mask.set(3);
-/// assert!(mask.test(3));
-/// assert_eq!(mask.weight(), 2);
+/// let mut count = 0;
+///
+/// let cpu2 = CpuId::from_u32(2);
+/// if let Some(cpu) = cpu2 {
+///     mask.set(cpu);
+///     assert!(mask.test(cpu));
+///     count += 1;
+/// }
+///
+/// let cpu3 = CpuId::from_u32(3);
+/// if let Some(cpu) = cpu3 {
+///     mask.set(cpu);
+///     assert!(mask.test(cpu));
+///     count += 1;
+/// }
+///
+/// assert_eq!(mask.weight(), count);
 ///
 /// let mask2 = CpumaskVar::try_clone(&mask).unwrap();
-/// assert!(mask2.test(2));
-/// assert!(mask2.test(3));
-/// assert_eq!(mask2.weight(), 2);
+///
+/// if let Some(cpu) = cpu2 {
+///     assert!(mask2.test(cpu));
+/// }
+///
+/// if let Some(cpu) = cpu3 {
+///     assert!(mask2.test(cpu));
+/// }
+/// assert_eq!(mask2.weight(), count);
 /// ```
 pub struct CpumaskVar {
     #[cfg(CONFIG_CPUMASK_OFFSTACK)]
+43-17
rust/kernel/devres.rs
···
     error::{Error, Result},
     ffi::c_void,
     prelude::*,
-    revocable::Revocable,
-    sync::Arc,
+    revocable::{Revocable, RevocableGuard},
+    sync::{rcu, Arc, Completion},
     types::ARef,
 };
-
-use core::ops::Deref;
 
 #[pin_data]
 struct DevresInner<T> {
···
     callback: unsafe extern "C" fn(*mut c_void),
     #[pin]
     data: Revocable<T>,
+    #[pin]
+    revoke: Completion,
 }
 
 /// This abstraction is meant to be used by subsystems to containerize [`Device`] bound resources to
 /// manage their lifetime.
 ///
 /// [`Device`] bound resources should be freed when either the resource goes out of scope or the
-/// [`Device`] is unbound respectively, depending on what happens first.
+/// [`Device`] is unbound respectively, depending on what happens first. In any case, it is always
+/// guaranteed that revoking the device resource is completed before the corresponding [`Device`]
+/// is unbound.
 ///
 /// To achieve that [`Devres`] registers a devres callback on creation, which is called once the
 /// [`Device`] is unbound, revoking access to the encapsulated resource (see also [`Revocable`]).
···
                 dev: dev.into(),
                 callback: Self::devres_callback,
                 data <- Revocable::new(data),
+                revoke <- Completion::new(),
             }),
             flags,
         )?;
···
         self as _
     }
 
-    fn remove_action(this: &Arc<Self>) {
+    fn remove_action(this: &Arc<Self>) -> bool {
         // SAFETY:
         // - `self.inner.dev` is a valid `Device`,
         // - the `action` and `data` pointers are the exact same ones as given to devm_add_action()
         //   previously,
         // - `self` is always valid, even if the action has been released already.
-        let ret = unsafe {
+        let success = unsafe {
             bindings::devm_remove_action_nowarn(
                 this.dev.as_raw(),
                 Some(this.callback),
                 this.as_ptr() as _,
             )
-        };
+        } == 0;
 
-        if ret == 0 {
+        if success {
             // SAFETY: We leaked an `Arc` reference to devm_add_action() in `DevresInner::new`; if
             // devm_remove_action_nowarn() was successful we can (and have to) claim back ownership
             // of this reference.
             let _ = unsafe { Arc::from_raw(this.as_ptr()) };
         }
+
+        success
     }
 
     #[allow(clippy::missing_safety_doc)]
···
         // `DevresInner::new`.
         let inner = unsafe { Arc::from_raw(ptr) };
 
-        inner.data.revoke();
+        if !inner.data.revoke() {
+            // If `revoke()` returns false, it means that `Devres::drop` already started revoking
+            // `inner.data` for us. Hence we have to wait until `Devres::drop()` signals that it
+            // completed revoking `inner.data`.
+            inner.revoke.wait_for_completion();
+        }
     }
 }
 
···
         // SAFETY: `dev` being the same device as the device this `Devres` has been created for
         // proves that `self.0.data` hasn't been revoked and is guaranteed to not be revoked as
         // long as `dev` lives; `dev` lives at least as long as `self`.
-        Ok(unsafe { self.deref().access() })
+        Ok(unsafe { self.0.data.access() })
     }
-}
 
-impl<T> Deref for Devres<T> {
-    type Target = Revocable<T>;
+    /// [`Devres`] accessor for [`Revocable::try_access`].
+    pub fn try_access(&self) -> Option<RevocableGuard<'_, T>> {
+        self.0.data.try_access()
+    }
 
-    fn deref(&self) -> &Self::Target {
-        &self.0.data
+    /// [`Devres`] accessor for [`Revocable::try_access_with`].
+    pub fn try_access_with<R, F: FnOnce(&T) -> R>(&self, f: F) -> Option<R> {
+        self.0.data.try_access_with(f)
+    }
+
+    /// [`Devres`] accessor for [`Revocable::try_access_with_guard`].
+    pub fn try_access_with_guard<'a>(&'a self, guard: &'a rcu::Guard) -> Option<&'a T> {
+        self.0.data.try_access_with_guard(guard)
     }
 }
 
 impl<T> Drop for Devres<T> {
     fn drop(&mut self) {
-        DevresInner::remove_action(&self.0);
+        // SAFETY: When `drop` runs, it is guaranteed that nobody is accessing the revocable data
+        // anymore, hence it is safe not to wait for the grace period to finish.
+        if unsafe { self.0.data.revoke_nosync() } {
+            // We revoked `self.0.data` before the devres action did, hence try to remove it.
+            if !DevresInner::remove_action(&self.0) {
+                // We could not remove the devres action, which means that it now runs concurrently,
+                // hence signal that `self.0.data` has been revoked successfully.
+                self.0.revoke.complete_all();
+            }
+        }
     }
 }
+14-4
rust/kernel/revocable.rs
···
     /// # Safety
     ///
     /// Callers must ensure that there are no more concurrent users of the revocable object.
-    unsafe fn revoke_internal<const SYNC: bool>(&self) {
-        if self.is_available.swap(false, Ordering::Relaxed) {
+    unsafe fn revoke_internal<const SYNC: bool>(&self) -> bool {
+        let revoke = self.is_available.swap(false, Ordering::Relaxed);
+
+        if revoke {
             if SYNC {
                 // SAFETY: Just an FFI call, there are no further requirements.
                 unsafe { bindings::synchronize_rcu() };
···
             // `compare_exchange` above that takes `is_available` from `true` to `false`.
             unsafe { drop_in_place(self.data.get()) };
         }
+
+        revoke
     }
 
     /// Revokes access to and drops the wrapped object.
···
     /// Access to the object is revoked immediately to new callers of [`Revocable::try_access`],
     /// expecting that there are no concurrent users of the object.
     ///
+    /// Returns `true` if `&self` has been revoked with this call, `false` if it was revoked
+    /// already.
+    ///
     /// # Safety
     ///
     /// Callers must ensure that there are no more concurrent users of the revocable object.
-    pub unsafe fn revoke_nosync(&self) {
+    pub unsafe fn revoke_nosync(&self) -> bool {
         // SAFETY: By the safety requirement of this function, the caller ensures that nobody is
         // accessing the data anymore and hence we don't have to wait for the grace period to
         // finish.
···
     /// If there are concurrent users of the object (i.e., ones that called
     /// [`Revocable::try_access`] beforehand and still haven't dropped the returned guard), this
     /// function waits for the concurrent access to complete before dropping the wrapped object.
-    pub fn revoke(&self) {
+    ///
+    /// Returns `true` if `&self` has been revoked with this call, `false` if it was revoked
+    /// already.
+    pub fn revoke(&self) -> bool {
         // SAFETY: By passing `true` we ask `revoke_internal` to wait for the grace period to
         // finish.
         unsafe { self.revoke_internal::<true>() }
+2
rust/kernel/sync.rs
···
 use pin_init;
 
 mod arc;
+pub mod completion;
 mod condvar;
 pub mod lock;
 mod locked_by;
···
 pub mod rcu;
 
 pub use arc::{Arc, ArcBorrow, UniqueArc};
+pub use completion::Completion;
 pub use condvar::{new_condvar, CondVar, CondVarTimeoutResult};
 pub use lock::global::{global_lock, GlobalGuard, GlobalLock, GlobalLockBackend, GlobalLockedBy};
 pub use lock::mutex::{new_mutex, Mutex, MutexGuard};
+112
rust/kernel/sync/completion.rs
+// SPDX-License-Identifier: GPL-2.0
+
+//! Completion support.
+//!
+//! Reference: <https://docs.kernel.org/scheduler/completion.html>
+//!
+//! C header: [`include/linux/completion.h`](srctree/include/linux/completion.h)
+
+use crate::{bindings, prelude::*, types::Opaque};
+
+/// Synchronization primitive to signal when a certain task has been completed.
+///
+/// The [`Completion`] synchronization primitive signals when a certain task has been completed by
+/// waking up other tasks that have been queued up to wait for the [`Completion`] to be completed.
+///
+/// # Examples
+///
+/// ```
+/// use kernel::sync::{Arc, Completion};
+/// use kernel::workqueue::{self, impl_has_work, new_work, Work, WorkItem};
+///
+/// #[pin_data]
+/// struct MyTask {
+///     #[pin]
+///     work: Work<MyTask>,
+///     #[pin]
+///     done: Completion,
+/// }
+///
+/// impl_has_work! {
+///     impl HasWork<Self> for MyTask { self.work }
+/// }
+///
+/// impl MyTask {
+///     fn new() -> Result<Arc<Self>> {
+///         let this = Arc::pin_init(pin_init!(MyTask {
+///             work <- new_work!("MyTask::work"),
+///             done <- Completion::new(),
+///         }), GFP_KERNEL)?;
+///
+///         let _ = workqueue::system().enqueue(this.clone());
+///
+///         Ok(this)
+///     }
+///
+///     fn wait_for_completion(&self) {
+///         self.done.wait_for_completion();
+///
+///         pr_info!("Completion: task complete\n");
+///     }
+/// }
+///
+/// impl WorkItem for MyTask {
+///     type Pointer = Arc<MyTask>;
+///
+///     fn run(this: Arc<MyTask>) {
+///         // process this task
+///         this.done.complete_all();
+///     }
+/// }
+///
+/// let task = MyTask::new()?;
+/// task.wait_for_completion();
+/// # Ok::<(), Error>(())
+/// ```
+#[pin_data]
+pub struct Completion {
+    #[pin]
+    inner: Opaque<bindings::completion>,
+}
+
+// SAFETY: `Completion` is safe to be sent to any task.
+unsafe impl Send for Completion {}
+
+// SAFETY: `Completion` is safe to be accessed concurrently.
+unsafe impl Sync for Completion {}
+
+impl Completion {
+    /// Create an initializer for a new [`Completion`].
+    pub fn new() -> impl PinInit<Self> {
+        pin_init!(Self {
+            inner <- Opaque::ffi_init(|slot: *mut bindings::completion| {
+                // SAFETY: `slot` is a valid pointer to an uninitialized `struct completion`.
+                unsafe { bindings::init_completion(slot) };
+            }),
+        })
+    }
+
+    fn as_raw(&self) -> *mut bindings::completion {
+        self.inner.get()
+    }
+
+    /// Signal all tasks waiting on this completion.
+    ///
+    /// This method wakes up all tasks waiting on this completion; after this operation the
+    /// completion is permanently done, i.e. signals all current and future waiters.
+    pub fn complete_all(&self) {
+        // SAFETY: `self.as_raw()` is a pointer to a valid `struct completion`.
+        unsafe { bindings::complete_all(self.as_raw()) };
+    }
+
+    /// Wait for completion of a task.
+    ///
+    /// This method waits for the completion of a task; it is not interruptible and there is no
+    /// timeout.
+    ///
+    /// See also [`Completion::complete_all`].
+    pub fn wait_for_completion(&self) {
+        // SAFETY: `self.as_raw()` is a pointer to a valid `struct completion`.
+        unsafe { bindings::wait_for_completion(self.as_raw()) };
+    }
+}
+1-1
rust/kernel/time/hrtimer.rs
···
         ) -> *mut Self {
             // SAFETY: As per the safety requirement of this function, `ptr`
             // is pointing inside a `$timer_type`.
-            unsafe { ::kernel::container_of!(ptr, $timer_type, $field).cast_mut() }
+            unsafe { ::kernel::container_of!(ptr, $timer_type, $field) }
         }
     }
 }
···
             self.extack['unknown'].append(extack)
 
         if attr_space:
-            # We don't have the ability to parse nests yet, so only do global
-            if 'miss-type' in self.extack and 'miss-nest' not in self.extack:
-                miss_type = self.extack['miss-type']
-                if miss_type in attr_space.attrs_by_val:
-                    spec = attr_space.attrs_by_val[miss_type]
-                    self.extack['miss-type'] = spec['name']
-                    if 'doc' in spec:
-                        self.extack['miss-type-doc'] = spec['doc']
+            self.annotate_extack(attr_space)
 
     def _decode_policy(self, raw):
         policy = {}
···
             policy['mask'] = attr.as_scalar('u64')
         return policy
 
+    def annotate_extack(self, attr_space):
+        """ Make extack more human friendly with attribute information """
+
+        # We don't have the ability to parse nests yet, so only do global
+        if 'miss-type' in self.extack and 'miss-nest' not in self.extack:
+            miss_type = self.extack['miss-type']
+            if miss_type in attr_space.attrs_by_val:
+                spec = attr_space.attrs_by_val[miss_type]
+                self.extack['miss-type'] = spec['name']
+                if 'doc' in spec:
+                    self.extack['miss-type-doc'] = spec['doc']
+
     def cmd(self):
         return self.nl_type
···
 
 
 class NlMsgs:
-    def __init__(self, data, attr_space=None):
+    def __init__(self, data):
         self.msgs = []
 
         offset = 0
         while offset < len(data):
-            msg = NlMsg(data, offset, attr_space=attr_space)
+            msg = NlMsg(data, offset)
             offset += msg.nl_len
             self.msgs.append(msg)
···
         op_rsp = []
         while not done:
             reply = self.sock.recv(self._recv_size)
-            nms = NlMsgs(reply, attr_space=op.attr_set)
+            nms = NlMsgs(reply)
             self._recv_dbg_print(reply, nms)
             for nl_msg in nms:
                 if nl_msg.nl_seq in reqs_by_seq:
                     (op, vals, req_msg, req_flags) = reqs_by_seq[nl_msg.nl_seq]
                     if nl_msg.extack:
+                        nl_msg.annotate_extack(op.attr_set)
                         self._decode_extack(req_msg, op, nl_msg.extack, vals)
                 else:
                     op = None
···
 #include "gic.h"
 #include "vgic.h"
 
-static const uint64_t CVAL_MAX = ~0ULL;
+/* Depends on counter width. */
+static uint64_t CVAL_MAX;
 /* tval is a signed 32-bit int. */
 static const int32_t TVAL_MAX = INT32_MAX;
 static const int32_t TVAL_MIN = INT32_MIN;
···
 /* After how much time we say there is no IRQ. */
 static const uint32_t TIMEOUT_NO_IRQ_US = 50000;
 
-/* A nice counter value to use as the starting one for most tests. */
-static const uint64_t DEF_CNT = (CVAL_MAX / 2);
+/* Counter value to use as the starting one for most tests. Set to CVAL_MAX/2 */
+static uint64_t DEF_CNT;
 
 /* Number of runs. */
 static const uint32_t NR_TEST_ITERS_DEF = 5;
···
 {
 	atomic_set(&shared_data.handled, 0);
 	atomic_set(&shared_data.spurious, 0);
-	timer_set_ctl(timer, ctl);
 	timer_set_tval(timer, tval_cycles);
+	timer_set_ctl(timer, ctl);
 }
 
 static void set_xval_irq(enum arch_timer timer, uint64_t xval, uint32_t ctl,
···
 		test_set_cnt_after_tval(timer, 0, tval, (uint64_t) tval + 1,
 					wm);
 	}
-
-	for (i = 0; i < ARRAY_SIZE(sleep_method); i++) {
-		sleep_method_t sm = sleep_method[i];
-
-		test_set_cnt_after_cval_no_irq(timer, 0, DEF_CNT, CVAL_MAX, sm);
-	}
 }
 
 /*
···
 	GUEST_DONE();
 }
 
+static cpu_set_t default_cpuset;
+
 static uint32_t next_pcpu(void)
 {
 	uint32_t max = get_nprocs();
 	uint32_t cur = sched_getcpu();
 	uint32_t next = cur;
-	cpu_set_t cpuset;
+	cpu_set_t cpuset = default_cpuset;
 
 	TEST_ASSERT(max > 1, "Need at least two physical cpus");
-
-	sched_getaffinity(0, sizeof(cpuset), &cpuset);
 
 	do {
 		next = (next + 1) % CPU_SETSIZE;
···
 	test_init_timer_irq(*vm, *vcpu);
 	vgic_v3_setup(*vm, 1, 64);
 	sync_global_to_guest(*vm, test_args);
+	sync_global_to_guest(*vm, CVAL_MAX);
+	sync_global_to_guest(*vm, DEF_CNT);
 }
 
 static void test_print_help(char *name)
···
 	pr_info("\t-b: Test both physical and virtual timers (default: true)\n");
 	pr_info("\t-l: Delta (in ms) used for long wait time test (default: %u)\n",
 		LONG_WAIT_TEST_MS);
-	pr_info("\t-l: Delta (in ms) used for wait times (default: %u)\n",
+	pr_info("\t-w: Delta (in ms) used for wait times (default: %u)\n",
 		WAIT_TEST_MS);
 	pr_info("\t-p: Test physical timer (default: true)\n");
 	pr_info("\t-v: Test virtual timer (default: true)\n");
···
 	return false;
 }
 
+static void set_counter_defaults(void)
+{
+	const uint64_t MIN_ROLLOVER_SECS = 40ULL * 365 * 24 * 3600;
+	uint64_t freq = read_sysreg(CNTFRQ_EL0);
+	uint64_t width = ilog2(MIN_ROLLOVER_SECS * freq);
+
+	width = clamp(width, 56, 64);
+	CVAL_MAX = GENMASK_ULL(width - 1, 0);
+	DEF_CNT = CVAL_MAX / 2;
+}
+
 int main(int argc, char *argv[])
 {
 	struct kvm_vcpu *vcpu;
···
 
 	if (!parse_args(argc, argv))
 		exit(KSFT_SKIP);
+
+	sched_getaffinity(0, sizeof(default_cpuset), &default_cpuset);
+	set_counter_defaults();
 
 	if (test_args.test_virtual) {
 		test_vm_create(&vm, &vcpu, VIRTUAL);
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2025 Intel Corporation
+ */
+#define _GNU_SOURCE
+
+#include <err.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ucontext.h>
+
+#ifdef __x86_64__
+# define REG_IP REG_RIP
+#else
+# define REG_IP REG_EIP
+#endif
+
+static void sethandler(int sig, void (*handler)(int, siginfo_t *, void *), int flags)
+{
+	struct sigaction sa;
+
+	memset(&sa, 0, sizeof(sa));
+	sa.sa_sigaction = handler;
+	sa.sa_flags = SA_SIGINFO | flags;
+	sigemptyset(&sa.sa_mask);
+
+	if (sigaction(sig, &sa, 0))
+		err(1, "sigaction");
+
+	return;
+}
+
+static void sigtrap(int sig, siginfo_t *info, void *ctx_void)
+{
+	ucontext_t *ctx = (ucontext_t *)ctx_void;
+	static unsigned int loop_count_on_same_ip;
+	static unsigned long last_trap_ip;
+
+	if (last_trap_ip == ctx->uc_mcontext.gregs[REG_IP]) {
+		printf("\tTrapped at %016lx\n", last_trap_ip);
+
+		/*
+		 * If the same IP is hit more than 10 times in a row, it is
+		 * _considered_ an infinite loop.
+		 */
+		if (++loop_count_on_same_ip > 10) {
+			printf("[FAIL]\tDetected SIGTRAP infinite loop\n");
+			exit(1);
+		}
+
+		return;
+	}
+
+	loop_count_on_same_ip = 0;
+	last_trap_ip = ctx->uc_mcontext.gregs[REG_IP];
+	printf("\tTrapped at %016lx\n", last_trap_ip);
+}
+
+int main(int argc, char *argv[])
+{
+	sethandler(SIGTRAP, sigtrap, 0);
+
+	/*
+	 * Set the Trap Flag (TF) to single-step the test code, therefore to
+	 * trigger a SIGTRAP signal after each instruction until the TF is
+	 * cleared.
+	 *
+	 * Because the arithmetic flags are not significant here, the TF is
+	 * set by pushing 0x302 onto the stack and then popping it into the
+	 * flags register.
+	 *
+	 * Four instructions in the following asm code are executed with the
+	 * TF set, thus the SIGTRAP handler is expected to run four times.
+	 */
+	printf("[RUN]\tSIGTRAP infinite loop detection\n");
+	asm volatile(
+#ifdef __x86_64__
+		/*
+		 * Avoid clobbering the redzone
+		 *
+		 * Equivalent to "sub $128, %rsp", however -128 can be encoded
+		 * in a single byte immediate while 128 uses 4 bytes.
+		 */
+		"add $-128, %rsp\n\t"
+#endif
+		"push $0x302\n\t"
+		"popf\n\t"
+		"nop\n\t"
+		"nop\n\t"
+		"push $0x202\n\t"
+		"popf\n\t"
+#ifdef __x86_64__
+		"sub $-128, %rsp\n\t"
+#endif
+		);
+
+	printf("[OK]\tNo SIGTRAP infinite loop detected\n");
+	return 0;
+}
+16
tools/testing/vma/vma_internal.h
···
 
 #define ASSERT_EXCLUSIVE_WRITER(x)
 
+/**
+ * swap - swap values of @a and @b
+ * @a: first value
+ * @b: second value
+ */
+#define swap(a, b) \
+	do { typeof(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0)
+
 struct kref {
 	refcount_t refcount;
 };
···
 static inline void fixup_hugetlb_reservations(struct vm_area_struct *vma)
 {
 	(void)vma;
 }
+
+static inline void vma_set_file(struct vm_area_struct *vma, struct file *file)
+{
+	/* Changing an anonymous vma with this is illegal */
+	get_file(file);
+	swap(vma->vm_file, file);
+	fput(file);
+}
 
 #endif /* __MM_VMA_INTERNAL_H */