.mailmap

···
 Daniel Borkmann <daniel@iogearbox.net> <dborkmann@redhat.com>
 Daniel Borkmann <daniel@iogearbox.net> <dborkman@redhat.com>
 Daniel Borkmann <daniel@iogearbox.net> <dxchgb@gmail.com>
+Danilo Krummrich <dakr@kernel.org> <dakr@redhat.com>
 David Brownell <david-b@pacbell.net>
 David Collins <quic_collinsd@quicinc.com> <collinsd@codeaurora.org>
 David Heidelberg <david@ixit.cz> <d.okias@gmail.com>
···
 Dmitry Safonov <0x7f454c46@gmail.com> <dsafonov@virtuozzo.com>
 Domen Puncer <domen@coderock.org>
 Douglas Gilbert <dougg@torque.net>
+Drew Fustini <fustini@kernel.org> <drew@pdp7.com>
+<duje@dujemihanovic.xyz> <duje.mihanovic@skole.hr>
 Ed L. Cashin <ecashin@coraid.com>
 Elliot Berman <quic_eberman@quicinc.com> <eberman@codeaurora.org>
 Enric Balletbo i Serra <eballetbo@kernel.org> <enric.balletbo@collabora.com>
···
 Gustavo Padovan <padovan@profusion.mobi>
 Hamza Mahfooz <hamzamahfooz@linux.microsoft.com> <hamza.mahfooz@amd.com>
 Hanjun Guo <guohanjun@huawei.com> <hanjun.guo@linaro.org>
+Hans de Goede <hansg@kernel.org> <hdegoede@redhat.com>
 Hans Verkuil <hverkuil@xs4all.nl> <hansverk@cisco.com>
 Hans Verkuil <hverkuil@xs4all.nl> <hverkuil-cisco@xs4all.nl>
 Harry Yoo <harry.yoo@oracle.com> <42.hyeyoo@gmail.com>
···
 Serge Hallyn <sergeh@kernel.org> <serue@us.ibm.com>
 Seth Forshee <sforshee@kernel.org> <seth.forshee@canonical.com>
 Shakeel Butt <shakeel.butt@linux.dev> <shakeelb@google.com>
-Shannon Nelson <shannon.nelson@amd.com> <snelson@pensando.io>
-Shannon Nelson <shannon.nelson@amd.com> <shannon.nelson@intel.com>
-Shannon Nelson <shannon.nelson@amd.com> <shannon.nelson@oracle.com>
+Shannon Nelson <sln@onemain.com> <shannon.nelson@amd.com>
+Shannon Nelson <sln@onemain.com> <snelson@pensando.io>
+Shannon Nelson <sln@onemain.com> <shannon.nelson@intel.com>
+Shannon Nelson <sln@onemain.com> <shannon.nelson@oracle.com>
 Sharath Chandra Vurukala <quic_sharathv@quicinc.com> <sharathv@codeaurora.org>
 Shiraz Hashim <shiraz.linux.kernel@gmail.com> <shiraz.hashim@st.com>
 Shuah Khan <shuah@kernel.org> <shuahkhan@gmail.com>
···
 Yusuke Goda <goda.yusuke@renesas.com>
 Zack Rusin <zack.rusin@broadcom.com> <zackr@vmware.com>
 Zhu Yanjun <zyjzyj2000@gmail.com> <yanjunz@nvidia.com>
+Zijun Hu <zijun.hu@oss.qualcomm.com> <quic_zijuhu@quicinc.com>
+Zijun Hu <zijun.hu@oss.qualcomm.com> <zijuhu@codeaurora.org>
+Zijun Hu <zijun_hu@htc.com>
+5
CREDITS
···
 S: Potsdam, New York 13676
 S: USA
 
+N: Shannon Nelson
+E: sln@onemain.com
+D: Worked on several network drivers including
+D: ixgbe, i40e, ionic, pds_core, pds_vdpa, pds_fwctl
+
 N: Dave Neuer
 E: dave.neuer@pobox.com
 D: Helped implement support for Compaq's H31xx series iPAQs
+16
Documentation/ABI/testing/sysfs-edac-scrub
···
 		(RO) Supported minimum scrub cycle duration in seconds
 		by the memory scrubber.
 
+		Device-based scrub: returns the minimum scrub cycle
+		supported by the memory device.
+
+		Region-based scrub: returns the max of minimum scrub cycles
+		supported by individual memory devices that back the region.
+
 What:		/sys/bus/edac/devices/<dev-name>/scrubX/max_cycle_duration
 Date:		March 2025
 KernelVersion:	6.15
···
 Description:
 		(RO) Supported maximum scrub cycle duration in seconds
 		by the memory scrubber.
+
+		Device-based scrub: returns the maximum scrub cycle supported
+		by the memory device.
+
+		Region-based scrub: returns the min of maximum scrub cycles
+		supported by individual memory devices that back the region.
+
+		If the memory device does not provide maximum scrub cycle
+		information, return the maximum supported value of the scrub
+		cycle field.
 
 What:		/sys/bus/edac/devices/<dev-name>/scrubX/current_cycle_duration
 Date:		March 2025
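The region-based aggregation rule in the sysfs-edac-scrub entries above (the region's minimum is the max of the device minimums, its maximum the min of the device maximums) can be sketched as follows. This is an illustrative sketch only; `region_scrub_bounds` is a hypothetical helper, not a kernel function:

```python
def region_scrub_bounds(device_bounds):
    """Combine per-device (min, max) supported scrub cycle durations,
    in seconds, into the range a region-based scrubber would report."""
    # Take the max of device minimums and the min of device maximums so
    # that any value in the reported range is valid for every backing
    # memory device in the region.
    region_min = max(lo for lo, _ in device_bounds)
    region_max = min(hi for _, hi in device_bounds)
    return region_min, region_max
```

For example, two devices supporting (3600, 86400) and (7200, 43200) seconds yield a region range of (7200, 43200).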
+1-1
Documentation/arch/arm64/booting.rst
···
 
   - If the kernel is entered at EL1:
 
-  - ICC.SRE_EL2.Enable (bit 3) must be initialised to 0b1
+  - ICC_SRE_EL2.Enable (bit 3) must be initialised to 0b1
   - ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b1.
 
 - The DT or ACPI tables must describe a GICv3 interrupt controller.
+7-1
Documentation/bpf/map_hash.rst
···
 other CPUs involved in the following operation attempts:
 
 - Attempt to use CPU-local state to batch operations
-- Attempt to fetch free nodes from global lists
+- Attempt to fetch ``target_free`` free nodes from global lists
 - Attempt to pull any node from a global list and remove it from the hashmap
 - Attempt to pull any node from any CPU's list and remove it from the hashmap
+
+The number of nodes to borrow from the global list in a batch, ``target_free``,
+depends on the size of the map. A larger batch size reduces lock contention, but
+may also exhaust the global structure. The value is computed at map init to
+avoid exhaustion, by limiting the aggregate reservation by all CPUs to half the
+map size, with a minimum of a single element and a maximum budget of 128 at a
+time.
 
 This algorithm is described visually in the following diagram. See the
 description in commit 3a08c2fd7634 ("bpf: LRU List") for a full explanation of
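The ``target_free`` sizing rule added above (half the map divided across CPUs, clamped to between 1 and 128) can be sketched as a few lines of arithmetic. This is a minimal illustrative sketch; the function name is hypothetical and the kernel computes this in C at map init:

```python
def target_free(map_size: int, nr_cpus: int) -> int:
    # Cap the aggregate reservation by all CPUs at half the map size,
    # then clamp each CPU's batch to between 1 and 128 elements.
    per_cpu = (map_size // 2) // nr_cpus
    return min(max(per_cpu, 1), 128)
```

So a 1024-entry map on 4 CPUs batches 128 nodes at a time, while a tiny map on many CPUs still borrows at least one node per attempt.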
+3-3
Documentation/bpf/map_lru_hash_update.dot
···
   fn_bpf_lru_list_pop_free_to_local [shape=rectangle,fillcolor=2,
     label="Flush local pending,
     Rotate Global list, move
-    LOCAL_FREE_TARGET
+    target_free
     from global -> local"]
   // Also corresponds to:
   // fn__local_list_flush()
   // fn_bpf_lru_list_rotate()
   fn___bpf_lru_node_move_to_free[shape=diamond,fillcolor=2,
-    label="Able to free\nLOCAL_FREE_TARGET\nnodes?"]
+    label="Able to free\ntarget_free\nnodes?"]
 
   fn___bpf_lru_list_shrink_inactive [shape=rectangle,fillcolor=3,
     label="Shrink inactive list
     up to remaining
-    LOCAL_FREE_TARGET
+    target_free
     (global LRU -> local)"]
   fn___bpf_lru_list_shrink [shape=diamond,fillcolor=2,
     label="> 0 entries in\nlocal free list?"]
Documentation/devicetree/bindings/pmem/pmem-region.txt

-Device-tree bindings for persistent memory regions
------------------------------------------------------
-
-Persistent memory refers to a class of memory devices that are:
-
- a) Usable as main system memory (i.e. cacheable), and
- b) Retain their contents across power failure.
-
-Given b) it is best to think of persistent memory as a kind of memory mapped
-storage device. To ensure data integrity the operating system needs to manage
-persistent regions separately to the normal memory pool. To aid with that this
-binding provides a standardised interface for discovering where persistent
-memory regions exist inside the physical address space.
-
-Bindings for the region nodes:
------------------------------
-
-Required properties:
- - compatible = "pmem-region"
-
- - reg = <base, size>;
-	The reg property should specify an address range that is
-	translatable to a system physical address range. This address
-	range should be mappable as normal system memory would be
-	(i.e cacheable).
-
-	If the reg property contains multiple address ranges
-	each address range will be treated as though it was specified
-	in a separate device node. Having multiple address ranges in a
-	node implies no special relationship between the two ranges.
-
-Optional properties:
- - Any relevant NUMA associativity properties for the target platform.
-
- - volatile; This property indicates that this region is actually
-	backed by non-persistent memory. This lets the OS know that it
-	may skip the cache flushes required to ensure data is made
-	persistent after a write.
-
-	If this property is absent then the OS must assume that the region
-	is backed by non-volatile memory.
-
-Examples:
---------------------
-
-	/*
-	 * This node specifies one 4KB region spanning from
-	 * 0x5000 to 0x5fff that is backed by non-volatile memory.
-	 */
-	pmem@5000 {
-		compatible = "pmem-region";
-		reg = <0x00005000 0x00001000>;
-	};
-
-	/*
-	 * This node specifies two 4KB regions that are backed by
-	 * volatile (normal) memory.
-	 */
-	pmem@6000 {
-		compatible = "pmem-region";
-		reg = < 0x00006000 0x00001000
-			0x00008000 0x00001000 >;
-		volatile;
-	};
Documentation/devicetree/bindings/pmem/pmem-region.yaml

+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pmem-region.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+maintainers:
+  - Oliver O'Halloran <oohall@gmail.com>
+
+title: Persistent Memory Regions
+
+description: |
+  Persistent memory refers to a class of memory devices that are:
+
+    a) Usable as main system memory (i.e. cacheable), and
+    b) Retain their contents across power failure.
+
+  Given b) it is best to think of persistent memory as a kind of memory mapped
+  storage device. To ensure data integrity the operating system needs to manage
+  persistent regions separately to the normal memory pool. To aid with that this
+  binding provides a standardised interface for discovering where persistent
+  memory regions exist inside the physical address space.
+
+properties:
+  compatible:
+    const: pmem-region
+
+  reg:
+    maxItems: 1
+
+  volatile:
+    description:
+      Indicates the region is volatile (non-persistent) and the OS can skip
+      cache flushes for writes
+    type: boolean
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    pmem@5000 {
+        compatible = "pmem-region";
+        reg = <0x00005000 0x00001000>;
+    };
Documentation/devicetree/bindings/serial/altera_uart.txt

-Altera UART
-
-Required properties:
-- compatible : should be "ALTR,uart-1.0" <DEPRECATED>
-- compatible : should be "altr,uart-1.0"
-
-Optional properties:
-- clock-frequency : frequency of the clock input to the UART
Documentation/devicetree/bindings/serial/altr,uart-1.0.yaml

+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/serial/altr,uart-1.0.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Altera UART
+
+maintainers:
+  - Dinh Nguyen <dinguyen@kernel.org>
+
+allOf:
+  - $ref: /schemas/serial/serial.yaml#
+
+properties:
+  compatible:
+    const: altr,uart-1.0
+
+  clock-frequency:
+    description: Frequency of the clock input to the UART.
+
+required:
+  - compatible
+
+unevaluatedProperties: false
Documentation/filesystems/porting.rst

···
 
 Calling conventions for ->d_automount() have changed; we should *not* grab
 an extra reference to new mount - it should be returned with refcount 1.
+
+---
+
+collect_mounts()/drop_collected_mounts()/iterate_mounts() are gone now.
+Replacement is collect_paths()/drop_collected_path(), with no special
+iterator needed. Instead of a cloned mount tree, the new interface returns
+an array of struct path, one for each mount collect_mounts() would've
+created. These struct path point to locations in the caller's namespace
+that would be roots of the cloned mounts.
···
 
 # Common defines
 $defs:
+  name:
+    type: string
+    pattern: ^[0-9a-z-]+$
   uint:
     type: integer
     minimum: 0
···
       additionalProperties: False
       properties:
         name:
-          type: string
+          $ref: '#/$defs/name'
         header:
           description: For C-compatible languages, header which already defines this value.
           type: string
···
       additionalProperties: False
       properties:
         name:
-          type: string
+          $ref: '#/$defs/name'
         value:
           type: integer
         doc:
···
       additionalProperties: False
       properties:
         name:
-          type: string
+          $ref: '#/$defs/name'
         type:
           description: The netlink attribute type
           enum: [ u8, u16, u32, u64, s8, s16, s32, s64, string, binary ]
···
       name:
         description: |
           Name used when referring to this space in other definitions, not used outside of the spec.
-        type: string
+        $ref: '#/$defs/name'
       name-prefix:
         description: |
           Prefix for the C enum name of the attributes. Default family[name]-set[name]-a-
···
       additionalProperties: False
       properties:
         name:
-          type: string
+          $ref: '#/$defs/name'
         type: &attr-type
           description: The netlink attribute type
           enum: [ unused, pad, flag, binary, bitfield32,
···
       properties:
         name:
           description: Name of the operation, also defining its C enum value in uAPI.
-          type: string
+          $ref: '#/$defs/name'
         doc:
           description: Documentation for the command.
           type: string
+10-7
Documentation/netlink/genetlink.yaml
···
 
 # Common defines
 $defs:
+  name:
+    type: string
+    pattern: ^[0-9a-z-]+$
   uint:
     type: integer
     minimum: 0
···
 properties:
   name:
     description: Name of the genetlink family.
-    type: string
+    $ref: '#/$defs/name'
   doc:
     type: string
   protocol:
···
       additionalProperties: False
       properties:
         name:
-          type: string
+          $ref: '#/$defs/name'
         header:
           description: For C-compatible languages, header which already defines this value.
           type: string
···
       additionalProperties: False
       properties:
         name:
-          type: string
+          $ref: '#/$defs/name'
         value:
           type: integer
         doc:
···
       name:
         description: |
           Name used when referring to this space in other definitions, not used outside of the spec.
-        type: string
+        $ref: '#/$defs/name'
       name-prefix:
         description: |
           Prefix for the C enum name of the attributes. Default family[name]-set[name]-a-
···
       additionalProperties: False
       properties:
         name:
-          type: string
+          $ref: '#/$defs/name'
         type: &attr-type
           enum: [ unused, pad, flag, binary,
                   uint, sint, u8, u16, u32, u64, s8, s16, s32, s64,
···
       properties:
         name:
           description: Name of the operation, also defining its C enum value in uAPI.
-          type: string
+          $ref: '#/$defs/name'
         doc:
           description: Documentation for the command.
           type: string
···
       name:
         description: |
           The name for the group, used to form the define and the value of the define.
-        type: string
+        $ref: '#/$defs/name'
       flags: *cmd_flags
 
 kernel-family:
+12-6
Documentation/netlink/netlink-raw.yaml
···
 
 # Common defines
 $defs:
+  name:
+    type: string
+    pattern: ^[0-9a-z-]+$
+  name-cap:
+    type: string
+    pattern: ^[0-9a-zA-Z-]+$
   uint:
     type: integer
     minimum: 0
···
       additionalProperties: False
       properties:
         name:
-          type: string
+          $ref: '#/$defs/name'
         header:
           description: For C-compatible languages, header which already defines this value.
           type: string
···
       additionalProperties: False
       properties:
         name:
-          type: string
+          $ref: '#/$defs/name'
         value:
           type: integer
         doc:
···
       additionalProperties: False
       properties:
         name:
-          type: string
+          $ref: '#/$defs/name-cap'
         type:
           description: |
             The netlink attribute type. Members of type 'binary' or 'pad'
···
       name:
         description: |
           Name used when referring to this space in other definitions, not used outside of the spec.
-        type: string
+        $ref: '#/$defs/name'
       name-prefix:
         description: |
           Prefix for the C enum name of the attributes. Default family[name]-set[name]-a-
···
       additionalProperties: False
       properties:
         name:
-          type: string
+          $ref: '#/$defs/name'
         type: &attr-type
           description: The netlink attribute type
           enum: [ unused, pad, flag, binary, bitfield32,
···
       properties:
         name:
           description: Name of the operation, also defining its C enum value in uAPI.
-          type: string
+          $ref: '#/$defs/name'
         doc:
           description: Documentation for the command.
           type: string
···
 As mentioned above RVU PF0 is called the admin function (AF), this driver
 supports resource provisioning and configuration of functional blocks.
 Doesn't handle any I/O. It sets up few basic stuff but most of the
-funcionality is achieved via configuration requests from PFs and VFs.
+functionality is achieved via configuration requests from PFs and VFs.
 
 PF/VFs communicates with AF via a shared memory region (mailbox). Upon
 receiving requests AF does resource provisioning and other HW configuration.
Documentation/process/embargoed-hardware-issues.rst

···
   AMD		Tom Lendacky <thomas.lendacky@amd.com>
   Ampere	Darren Hart <darren@os.amperecomputing.com>
   ARM		Catalin Marinas <catalin.marinas@arm.com>
+  IBM Power	Madhavan Srinivasan <maddy@linux.ibm.com>
   IBM Z		Christian Borntraeger <borntraeger@de.ibm.com>
   Intel		Tony Luck <tony.luck@intel.com>
   Qualcomm	Trilok Soni <quic_tsoni@quicinc.com>
+19-5
Documentation/sound/codecs/cs35l56.rst
···
 .. SPDX-License-Identifier: GPL-2.0-only
 
-=====================================================================
-Audio drivers for Cirrus Logic CS35L54/56/57 Boosted Smart Amplifiers
-=====================================================================
+========================================================================
+Audio drivers for Cirrus Logic CS35L54/56/57/63 Boosted Smart Amplifiers
+========================================================================
 :Copyright: 2025 Cirrus Logic, Inc. and
             Cirrus Logic International Semiconductor Ltd.
 
···
 
 The high-level summary of this document is:
 
-**If you have a laptop that uses CS35L54/56/57 amplifiers but audio is not
+**If you have a laptop that uses CS35L54/56/57/63 amplifiers but audio is not
 working, DO NOT ATTEMPT TO USE FIRMWARE AND SETTINGS FROM ANOTHER LAPTOP,
 EVEN IF THAT LAPTOP SEEMS SIMILAR.**
 
-The CS35L54/56/57 amplifiers must be correctly configured for the power
+The CS35L54/56/57/63 amplifiers must be correctly configured for the power
 supply voltage, speaker impedance, maximum speaker voltage/current, and
 other external hardware connections.
 
···
 * CS35L54
 * CS35L56
 * CS35L57
+* CS35L63
 
 There are two drivers in the kernel
 
···
 
 The format of the firmware file names is:
 
+SoundWire (except CS35L56 Rev B0):
+	cs35lxx-b0-dsp1-misc-SSID[-spkidX]-l?u?
+
+SoundWire CS35L56 Rev B0:
+	cs35lxx-b0-dsp1-misc-SSID[-spkidX]-ampN
+
+Non-SoundWire (HDA and I2S):
 	cs35lxx-b0-dsp1-misc-SSID[-spkidX]-ampN
 
 Where:
···
   * cs35lxx-b0 is the amplifier model and silicon revision. This information
     is logged by the driver during initialization.
   * SSID is the 8-digit hexadecimal SSID value.
+  * l?u? is the physical address on the SoundWire bus of the amp this
+    file applies to.
   * ampN is the amplifier number (for example amp1). This is the same as
     the prefix on the ALSA control names except that it is always lower-case
     in the file name.
   * spkidX is an optional part, used for laptops that have firmware
     configurations for different makes and models of internal speakers.
+
+The CS35L56 Rev B0 continues to use the old filename scheme because a
+large number of firmware files have already been published with these
+names.
 
 Sound Open Firmware and ALSA topology files
 -------------------------------------------
+58-1
Documentation/virt/kvm/api.rst
···
 .. note::
 
       For KVM_EXIT_IO, KVM_EXIT_MMIO, KVM_EXIT_OSI, KVM_EXIT_PAPR, KVM_EXIT_XEN,
-      KVM_EXIT_EPR, KVM_EXIT_X86_RDMSR and KVM_EXIT_X86_WRMSR the corresponding
+      KVM_EXIT_EPR, KVM_EXIT_HYPERCALL, KVM_EXIT_TDX,
+      KVM_EXIT_X86_RDMSR and KVM_EXIT_X86_WRMSR the corresponding
       operations are complete (and guest state is consistent) only after userspace
       has re-entered the kernel with KVM_RUN. The kernel side will first finish
       incomplete operations and then check for pending signals.
···
   - KVM_NOTIFY_CONTEXT_INVALID -- the VM context is corrupted and not valid
     in VMCS. It would run into unknown result if resume the target VM.
 
+::
+
+		/* KVM_EXIT_TDX */
+		struct {
+			__u64 flags;
+			__u64 nr;
+			union {
+				struct {
+					u64 ret;
+					u64 data[5];
+				} unknown;
+				struct {
+					u64 ret;
+					u64 gpa;
+					u64 size;
+				} get_quote;
+				struct {
+					u64 ret;
+					u64 leaf;
+					u64 r11, r12, r13, r14;
+				} get_tdvmcall_info;
+			};
+		} tdx;
+
+Process a TDVMCALL from the guest. KVM forwards select TDVMCALL based
+on the Guest-Hypervisor Communication Interface (GHCI) specification;
+KVM bridges these requests to the userspace VMM with minimal changes,
+placing the inputs in the union and copying them back to the guest
+on re-entry.
+
+Flags are currently always zero, whereas ``nr`` contains the TDVMCALL
+number from register R11. The remaining fields of the union provide the
+inputs and outputs of the TDVMCALL. Currently the following values of
+``nr`` are defined:
+
+* ``TDVMCALL_GET_QUOTE``: the guest has requested to generate a TD-Quote
+  signed by a service hosting TD-Quoting Enclave operating on the host.
+  Parameters and return value are in the ``get_quote`` field of the union.
+  The ``gpa`` field and ``size`` specify the guest physical address
+  (without the shared bit set) and the size of a shared-memory buffer, in
+  which the TDX guest passes a TD Report. The ``ret`` field represents
+  the return value of the GetQuote request. When the request has been
+  queued successfully, the TDX guest can poll the status field in the
+  shared-memory area to check whether the Quote generation is completed or
+  not. When completed, the generated Quote is returned via the same buffer.
+
+* ``TDVMCALL_GET_TD_VM_CALL_INFO``: the guest has requested the support
+  status of TDVMCALLs. The output values for the given leaf should be
+  placed in fields from ``r11`` to ``r14`` of the ``get_tdvmcall_info``
+  field of the union.
+
+KVM may add support for more values in the future that may cause a userspace
+exit, even without calls to ``KVM_ENABLE_CAP`` or similar. In this case,
+it will enter with output fields already valid; in the common case, the
+``unknown.ret`` field of the union will be ``TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED``.
+Userspace need not do anything if it does not wish to support a TDVMCALL.
+
 ::
 
 	/* Fix the size of the union. */
+84-47
MAINTAINERS
···
 X:	include/uapi/
 
 ABIT UGURU 1,2 HARDWARE MONITOR DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-hwmon@vger.kernel.org
 S:	Maintained
 F:	drivers/hwmon/abituguru.c
···
 F:	drivers/platform/x86/quickstart.c
 
 ACPI SERIAL MULTI INSTANTIATE DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 F:	drivers/platform/x86/serial-multi-instantiate.c
···
 F:	arch/x86/kernel/amd_node.c
 
 AMD PDS CORE DRIVER
-M:	Shannon Nelson <shannon.nelson@amd.com>
 M:	Brett Creeley <brett.creeley@amd.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
···
 F:	scripts/make_fit.py
 
 ARM64 PLATFORM DRIVERS
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 M:	Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
 R:	Bryan O'Donoghue <bryan.odonoghue@linaro.org>
 L:	platform-driver-x86@vger.kernel.org
···
 F:	drivers/platform/x86/eeepc*.c
 
 ASUS TF103C DOCK DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86.git
···
 F:	drivers/usb/chipidea/
 
 CHIPONE ICN8318 I2C TOUCHSCREEN DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/input/touchscreen/chipone,icn8318.yaml
 F:	drivers/input/touchscreen/chipone_icn8318.c
 
 CHIPONE ICN8505 I2C TOUCHSCREEN DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 S:	Maintained
 F:	drivers/input/touchscreen/chipone_icn8505.c
···
 F:	include/linux/devfreq-event.h
 
 DEVICE RESOURCE MANAGEMENT HELPERS
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 R:	Matti Vaittinen <mazziesaccount@gmail.com>
 S:	Maintained
 F:	include/linux/devm-helpers.h
···
 F:	include/drm/gud.h
 
 DRM DRIVER FOR GRAIN MEDIA GM12U320 PROJECTORS
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 S:	Maintained
 T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
 F:	drivers/gpu/drm/tiny/gm12u320.c
···
 F:	drivers/gpu/drm/vkms/
 
 DRM DRIVER FOR VIRTUALBOX VIRTUAL GPU
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
 T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
···
 F:	include/drm/drm_panel.h
 
 DRM PRIVACY-SCREEN CLASS
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
 T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
···
 
 FWCTL PDS DRIVER
 M:	Brett Creeley <brett.creeley@amd.com>
-R:	Shannon Nelson <shannon.nelson@amd.com>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	drivers/fwctl/pds/
···
 F:	Documentation/devicetree/bindings/connector/gocontroll,moduline-module-slot.yaml
 
 GOODIX TOUCHSCREEN
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 S:	Maintained
 F:	drivers/input/touchscreen/goodix*
···
 K:	[gG]oogle.?[tT]ensor
 
 GPD POCKET FAN DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 F:	drivers/platform/x86/gpd-pocket-fan.c
···
 F:	drivers/dma/hisi_dma.c
 
 HISILICON GPIO DRIVER
-M:	Jay Fang <f.fangjian@huawei.com>
+M:	Yang Shen <shenyang39@huawei.com>
 L:	linux-gpio@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/gpio/hisilicon,ascend910-gpio.yaml
···
 
 HUGETLB SUBSYSTEM
 M:	Muchun Song <muchun.song@linux.dev>
-R:	Oscar Salvador <osalvador@suse.de>
+M:	Oscar Salvador <osalvador@suse.de>
+R:	David Hildenbrand <david@redhat.com>
 L:	linux-mm@kvack.org
 S:	Maintained
 F:	Documentation/ABI/testing/sysfs-kernel-mm-hugepages
···
 F:	include/linux/hugetlb.h
 F:	include/trace/events/hugetlbfs.h
 F:	mm/hugetlb.c
+F:	mm/hugetlb_cgroup.c
 F:	mm/hugetlb_cma.c
 F:	mm/hugetlb_cma.h
 F:	mm/hugetlb_vmemmap.c
···
 F:	drivers/i2c/busses/i2c-viapro.c
 
 I2C/SMBUS INTEL CHT WHISKEY COVE PMIC DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-i2c@vger.kernel.org
 S:	Maintained
 F:	drivers/i2c/busses/i2c-cht-wc.c
···
 F:	sound/soc/intel/
 
 INTEL ATOMISP2 DUMMY / POWER-MANAGEMENT DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 F:	drivers/platform/x86/intel/atomisp2/pm.c
 
 INTEL ATOMISP2 LED DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 F:	drivers/platform/x86/intel/atomisp2/led.c
···
 M:	Mike Rapoport <rppt@kernel.org>
 M:	Changyuan Lyu <changyuanl@google.com>
 L:	kexec@lists.infradead.org
+L:	linux-mm@kvack.org
 S:	Maintained
 F:	Documentation/admin-guide/mm/kho.rst
 F:	Documentation/core-api/kho/*
···
 F:	drivers/platform/x86/lenovo-wmi-hotkey-utilities.c
 
 LETSKETCH HID TABLET DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git
···
 F:	drivers/ata/sata_gemini.h
 
 LIBATA SATA AHCI PLATFORM devices support
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-ide@vger.kernel.org
 S:	Maintained
 F:	drivers/ata/ahci_platform.c
···
 L:	nvdimm@lists.linux.dev
 S:	Supported
 Q:	https://patchwork.kernel.org/project/linux-nvdimm/list/
-F:	Documentation/devicetree/bindings/pmem/pmem-region.txt
+F:	Documentation/devicetree/bindings/pmem/pmem-region.yaml
 F:	drivers/nvdimm/of_pmem.c
 
 LIBNVDIMM: NON-VOLATILE MEMORY DEVICE SUBSYSTEM
···
 F:	block/partitions/ldm.*
 
 LOGITECH HID GAMING KEYBOARDS
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git
···
 F:	drivers/power/supply/max17040_battery.c
 
 MAXIM MAX17042 FAMILY FUEL GAUGE DRIVERS
-R:	Hans de Goede <hdegoede@redhat.com>
+R:	Hans de Goede <hansg@kernel.org>
 R:	Krzysztof Kozlowski <krzk@kernel.org>
 R:	Marek Szyprowski <m.szyprowski@samsung.com>
 R:	Sebastian Krzyszkowiak <sebastian.krzyszkowiak@puri.sm>
···
 F:	drivers/net/ethernet/mellanox/mlxfw/
 
 MELLANOX HARDWARE PLATFORM SUPPORT
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 M:	Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
 M:	Vadim Pasternak <vadimp@nvidia.com>
 L:	platform-driver-x86@vger.kernel.org
···
 M:	Mike Rapoport <rppt@kernel.org>
 L:	linux-mm@kvack.org
 S:	Maintained
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git for-next
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git fixes
 F:	Documentation/core-api/boot-time-mm.rst
 F:	Documentation/core-api/kho/bindings/memblock/*
 F:	include/linux/memblock.h
+F:	mm/bootmem_info.c
 F:	mm/memblock.c
+F:	mm/memtest.c
 F:	mm/mm_init.c
+F:	mm/rodata_test.c
 F:	tools/testing/memblock/
 
 MEMORY ALLOCATION PROFILING
···
 F:	Documentation/mm/
 F:	include/linux/gfp.h
 F:	include/linux/gfp_types.h
-F:	include/linux/memfd.h
 F:	include/linux/memory_hotplug.h
 F:	include/linux/memory-tiers.h
 F:	include/linux/mempolicy.h
···
 W:	http://www.linux-mm.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
 F:	mm/gup.c
+F:	mm/gup_test.c
+F:	mm/gup_test.h
+F:	tools/testing/selftests/mm/gup_longterm.c
+F:	tools/testing/selftests/mm/gup_test.c
 
 MEMORY MANAGEMENT - KSM (Kernel Samepage Merging)
 M:	Andrew Morton <akpm@linux-foundation.org>
···
 F:	mm/numa_emulation.c
 F:	mm/numa_memblks.c
 
+MEMORY MANAGEMENT - OOM KILLER
+M:	Michal Hocko <mhocko@suse.com>
+R:	David Rientjes <rientjes@google.com>
+R:	Shakeel Butt <shakeel.butt@linux.dev>
+L:	linux-mm@kvack.org
+S:	Maintained
+F:	include/linux/oom.h
+F:	include/trace/events/oom.h
+F:	include/uapi/linux/oom.h
+F:	mm/oom_kill.c
+
 MEMORY MANAGEMENT - PAGE ALLOCATOR
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	Vlastimil Babka <vbabka@suse.cz>
···
 F:	include/linux/gfp.h
 F:	include/linux/page-isolation.h
 F:	mm/compaction.c
+F:	mm/debug_page_alloc.c
+F:	mm/fail_page_alloc.c
 F:	mm/page_alloc.c
+F:	mm/page_ext.c
+F:	mm/page_frag_cache.c
 F:	mm/page_isolation.c
+F:	mm/page_owner.c
+F:	mm/page_poison.c
+F:	mm/page_reporting.c
+F:	mm/show_mem.c
+F:	mm/shuffle.c
 
 MEMORY MANAGEMENT - RECLAIM
 M:	Andrew Morton <akpm@linux-foundation.org>
···
 S:	Maintained
 F:	mm/pt_reclaim.c
 F:	mm/vmscan.c
+F:	mm/workingset.c
 
 MEMORY MANAGEMENT - RMAP (REVERSE MAPPING)
 M:	Andrew Morton <akpm@linux-foundation.org>
···
 L:	linux-mm@kvack.org
 S:	Maintained
 F:	include/linux/rmap.h
+F:	mm/page_vma_mapped.c
 F:	mm/rmap.c
 
 MEMORY MANAGEMENT - SECRETMEM
···
 MEMORY MANAGEMENT - THP (TRANSPARENT HUGE PAGE)
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	David Hildenbrand <david@redhat.com>
+M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
 R:	Zi Yan <ziy@nvidia.com>
 R:	Baolin Wang <baolin.wang@linux.alibaba.com>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
 R:	Nico Pache <npache@redhat.com>
 R:	Ryan Roberts <ryan.roberts@arm.com>
···
 W:	http://www.linux-mm.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
 F:	include/trace/events/mmap.h
+F:	mm/mincore.c
 F:	mm/mlock.c
 F:	mm/mmap.c
 F:	mm/mprotect.c
 F:	mm/mremap.c
 F:	mm/mseal.c
+F:	mm/msync.c
+F:	mm/nommu.c
 F:	mm/vma.c
 F:	mm/vma.h
 F:	mm/vma_exec.c
···
 F:	drivers/platform/surface/surface_gpe.c
 
 MICROSOFT SURFACE HARDWARE PLATFORM SUPPORT
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 M:	Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
 M:	Maximilian Luz <luzmaximilian@gmail.com>
 L:	platform-driver-x86@vger.kernel.org
···
 F:	tools/testing/selftests/nolibc/
 
 NOVATEK NVT-TS I2C TOUCHSCREEN DRIVER
-M:	Hans de Goede <hdegoede@redhat.com>
+M:	Hans de Goede <hansg@kernel.org>
 L:	linux-input@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/input/touchscreen/novatek,nvt-ts.yaml
···
 F:	include/crypto/pcrypt.h
 
 PDS DSC VIRTIO DATA PATH ACCELERATOR
-R:	Shannon Nelson <shannon.nelson@amd.com>
+R:	Brett Creeley <brett.creeley@amd.com>
 F:	drivers/vdpa/pds/
 
 PECI HARDWARE MONITORING DRIVERS
···
 F:	include/linux/peci.h
 
 PENSANDO ETHERNET DRIVERS
-M:	Shannon Nelson <shannon.nelson@amd.com>
 M:	Brett Creeley <brett.creeley@amd.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
···
 K:	spacemit
 
 RISC-V THEAD SoC SUPPORT
-M:	Drew Fustini <drew@pdp7.com>
+M:	Drew Fustini <fustini@kernel.org>
 M:	Guo Ren <guoren@kernel.org>
 M:	Fu Wei
<wefu@redhat.com>2141621383L: linux-riscv@lists.infradead.org···2220722174R: David Vernet <void@manifault.com>2220822175R: Andrea Righi <arighi@nvidia.com>2220922176R: Changwoo Min <changwoo@igalia.com>2221022210-L: linux-kernel@vger.kernel.org2217722177+L: sched-ext@lists.linux.dev2221122178S: Maintained2221222179W: https://github.com/sched-ext/scx2221322180T: git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext.git···2274422711K: [^@]sifive22745227122274622713SILEAD TOUCHSCREEN DRIVER2274722747-M: Hans de Goede <hdegoede@redhat.com>2271422714+M: Hans de Goede <hansg@kernel.org>2274822715L: linux-input@vger.kernel.org2274922716L: platform-driver-x86@vger.kernel.org2275022717S: Maintained···2277722744F: drivers/i3c/master/svc-i3c-master.c22778227452277922746SIMPLEFB FB DRIVER2278022780-M: Hans de Goede <hdegoede@redhat.com>2274722747+M: Hans de Goede <hansg@kernel.org>2278122748L: linux-fbdev@vger.kernel.org2278222749S: Maintained2278322750F: Documentation/devicetree/bindings/display/simple-framebuffer.yaml···2290622873F: drivers/hwmon/emc2103.c22907228742290822875SMSC SCH5627 HARDWARE MONITOR DRIVER2290922909-M: Hans de Goede <hdegoede@redhat.com>2287622876+M: Hans de Goede <hansg@kernel.org>2291022877L: linux-hwmon@vger.kernel.org2291122878S: Supported2291222879F: Documentation/hwmon/sch5627.rst···2356123528F: Documentation/process/stable-kernel-rules.rst23562235292356323530STAGING - ATOMISP DRIVER2356423564-M: Hans de Goede <hdegoede@redhat.com>2353123531+M: Hans de Goede <hansg@kernel.org>2356523532M: Mauro Carvalho Chehab <mchehab@kernel.org>2356623533R: Sakari Ailus <sakari.ailus@linux.intel.com>2356723534L: linux-media@vger.kernel.org···2385723824F: drivers/net/ethernet/i825xx/sun3*23858238252385923826SUN4I LOW RES ADC ATTACHED TABLET KEYS DRIVER2386023860-M: Hans de Goede <hdegoede@redhat.com>2382723827+M: Hans de Goede <hansg@kernel.org>2386123828L: linux-input@vger.kernel.org2386223829S: Maintained2386323830F: 
Documentation/devicetree/bindings/input/allwinner,sun4i-a10-lradc-keys.yaml···2409924066L: linux-i2c@vger.kernel.org2410024067S: Maintained2410124068F: drivers/i2c/busses/i2c-designware-amdisp.c2406924069+F: include/linux/soc/amd/isp4_misc.h24102240702410324071SYNOPSYS DESIGNWARE MMC/SD/SDIO DRIVER2410424072M: Jaehoon Chung <jh80.chung@samsung.com>···2506425030R: Baolin Wang <baolin.wang@linux.alibaba.com>2506525031L: linux-mm@kvack.org2506625032S: Maintained2503325033+F: include/linux/memfd.h2506725034F: include/linux/shmem_fs.h2503525035+F: mm/memfd.c2506825036F: mm/shmem.c2503725037+F: mm/shmem_quota.c25069250382507025039TOMOYO SECURITY MODULE2507125040M: Kentaro Takeda <takedakn@nttdata.co.jp>···2562925592F: drivers/hid/usbhid/25630255932563125594USB INTEL XHCI ROLE MUX DRIVER2563225632-M: Hans de Goede <hdegoede@redhat.com>2559525595+M: Hans de Goede <hansg@kernel.org>2563325596L: linux-usb@vger.kernel.org2563425597S: Maintained2563525598F: drivers/usb/roles/intel-xhci-usb-role-switch.c···2582025783F: drivers/usb/typec/mux/intel_pmc_mux.c25821257842582225785USB TYPEC PI3USB30532 MUX DRIVER2582325823-M: Hans de Goede <hdegoede@redhat.com>2578625786+M: Hans de Goede <hansg@kernel.org>2582425787L: linux-usb@vger.kernel.org2582525788S: Maintained2582625789F: drivers/usb/typec/mux/pi3usb30532.c···25849258122585025813USB VIDEO CLASS2585125814M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>2585225852-M: Hans de Goede <hdegoede@redhat.com>2581525815+M: Hans de Goede <hansg@kernel.org>2585325816L: linux-media@vger.kernel.org2585425817S: Maintained2585525818W: http://www.ideasonboard.org/uvc/···2638026343F: sound/virtio/*26381263442638226345VIRTUAL BOX GUEST DEVICE DRIVER2638326383-M: Hans de Goede <hdegoede@redhat.com>2634626346+M: Hans de Goede <hansg@kernel.org>2638426347M: Arnd Bergmann <arnd@arndb.de>2638526348M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>2638626349S: Maintained···2638926352F: include/uapi/linux/vbox*.h26390263532639126354VIRTUAL BOX 
SHARED FOLDER VFS DRIVER2639226392-M: Hans de Goede <hdegoede@redhat.com>2635526355+M: Hans de Goede <hansg@kernel.org>2639326356L: linux-fsdevel@vger.kernel.org2639426357S: Maintained2639526358F: fs/vboxsf/*···26643266062664426607WACOM PROTOCOL 4 SERIAL TABLETS2664526608M: Julian Squires <julian@cipht.net>2664626646-M: Hans de Goede <hdegoede@redhat.com>2660926609+M: Hans de Goede <hansg@kernel.org>2664726610L: linux-input@vger.kernel.org2664826611S: Maintained2664926612F: drivers/input/tablet/wacom_serial4.c···2681026773F: include/uapi/linux/wwan.h26811267742681226775X-POWERS AXP288 PMIC DRIVERS2681326813-M: Hans de Goede <hdegoede@redhat.com>2677626776+M: Hans de Goede <hansg@kernel.org>2681426777S: Maintained2681526778F: drivers/acpi/pmic/intel_pmic_xpower.c2681626779N: axp288···2690226865F: arch/x86/mm/26903268662690426867X86 PLATFORM ANDROID TABLETS DSDT FIXUP DRIVER2690526905-M: Hans de Goede <hdegoede@redhat.com>2686826868+M: Hans de Goede <hansg@kernel.org>2690626869L: platform-driver-x86@vger.kernel.org2690726870S: Maintained2690826871T: git git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86.git2690926872F: drivers/platform/x86/x86-android-tablets/26910268732691126874X86 PLATFORM DRIVERS2691226912-M: Hans de Goede <hdegoede@redhat.com>2687526875+M: Hans de Goede <hansg@kernel.org>2691326876M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>2691426877L: platform-driver-x86@vger.kernel.org2691526878S: Maintained
@@
 		vcpu_set_flag((v), e);					\
 	} while (0)
 
-#define __build_check_all_or_none(r, bits) \
-	BUILD_BUG_ON(((r) & (bits)) && ((r) & (bits)) != (bits))
-
-#define __cpacr_to_cptr_clr(clr, set)					\
-	({								\
-		u64 cptr = 0;						\
-									\
-		if ((set) & CPACR_EL1_FPEN)				\
-			cptr |= CPTR_EL2_TFP;				\
-		if ((set) & CPACR_EL1_ZEN)				\
-			cptr |= CPTR_EL2_TZ;				\
-		if ((set) & CPACR_EL1_SMEN)				\
-			cptr |= CPTR_EL2_TSM;				\
-		if ((clr) & CPACR_EL1_TTA)				\
-			cptr |= CPTR_EL2_TTA;				\
-		if ((clr) & CPTR_EL2_TAM)				\
-			cptr |= CPTR_EL2_TAM;				\
-		if ((clr) & CPTR_EL2_TCPAC)				\
-			cptr |= CPTR_EL2_TCPAC;				\
-									\
-		cptr;							\
-	})
-
-#define __cpacr_to_cptr_set(clr, set)					\
-	({								\
-		u64 cptr = 0;						\
-									\
-		if ((clr) & CPACR_EL1_FPEN)				\
-			cptr |= CPTR_EL2_TFP;				\
-		if ((clr) & CPACR_EL1_ZEN)				\
-			cptr |= CPTR_EL2_TZ;				\
-		if ((clr) & CPACR_EL1_SMEN)				\
-			cptr |= CPTR_EL2_TSM;				\
-		if ((set) & CPACR_EL1_TTA)				\
-			cptr |= CPTR_EL2_TTA;				\
-		if ((set) & CPTR_EL2_TAM)				\
-			cptr |= CPTR_EL2_TAM;				\
-		if ((set) & CPTR_EL2_TCPAC)				\
-			cptr |= CPTR_EL2_TCPAC;				\
-									\
-		cptr;							\
-	})
-
-#define cpacr_clear_set(clr, set)					\
-	do {								\
-		BUILD_BUG_ON((set) & CPTR_VHE_EL2_RES0);		\
-		BUILD_BUG_ON((clr) & CPACR_EL1_E0POE);			\
-		__build_check_all_or_none((clr), CPACR_EL1_FPEN);	\
-		__build_check_all_or_none((set), CPACR_EL1_FPEN);	\
-		__build_check_all_or_none((clr), CPACR_EL1_ZEN);	\
-		__build_check_all_or_none((set), CPACR_EL1_ZEN);	\
-		__build_check_all_or_none((clr), CPACR_EL1_SMEN);	\
-		__build_check_all_or_none((set), CPACR_EL1_SMEN);	\
-									\
-		if (has_vhe() || has_hvhe())				\
-			sysreg_clear_set(cpacr_el1, clr, set);		\
-		else							\
-			sysreg_clear_set(cptr_el2,			\
-					 __cpacr_to_cptr_clr(clr, set),	\
-					 __cpacr_to_cptr_set(clr, set));\
-	} while (0)
-
 /*
  * Returns a 'sanitised' view of CPTR_EL2, translating from nVHE to the VHE
  * format if E2H isn't set.
arch/arm64/include/asm/kvm_host.h (+2 -4)
@@
 	})
 
 /*
- * The couple of isb() below are there to guarantee the same behaviour
- * on VHE as on !VHE, where the eret to EL1 acts as a context
- * synchronization event.
+ * The isb() below is there to guarantee the same behaviour on VHE as on !VHE,
+ * where the eret to EL1 acts as a context synchronization event.
  */
 #define kvm_call_hyp(f, ...)						\
 	do {								\
@@
 									\
 		if (has_vhe()) {					\
 			ret = f(__VA_ARGS__);				\
-			isb();						\
 		} else {						\
 			ret = kvm_call_hyp_nvhe(f, ##__VA_ARGS__);	\
 		}							\
@@
 	}
 }
 
+static inline void __activate_cptr_traps_nvhe(struct kvm_vcpu *vcpu)
+{
+	u64 val = CPTR_NVHE_EL2_RES1 | CPTR_EL2_TAM | CPTR_EL2_TTA;
+
+	/*
+	 * Always trap SME since it's not supported in KVM.
+	 * TSM is RES1 if SME isn't implemented.
+	 */
+	val |= CPTR_EL2_TSM;
+
+	if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs())
+		val |= CPTR_EL2_TZ;
+
+	if (!guest_owns_fp_regs())
+		val |= CPTR_EL2_TFP;
+
+	write_sysreg(val, cptr_el2);
+}
+
+static inline void __activate_cptr_traps_vhe(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
+	 * CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2,
+	 * except for some missing controls, such as TAM.
+	 * In this case, CPTR_EL2.TAM has the same position with or without
+	 * VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM
+	 * shift value for trapping the AMU accesses.
+	 */
+	u64 val = CPTR_EL2_TAM | CPACR_EL1_TTA;
+	u64 cptr;
+
+	if (guest_owns_fp_regs()) {
+		val |= CPACR_EL1_FPEN;
+		if (vcpu_has_sve(vcpu))
+			val |= CPACR_EL1_ZEN;
+	}
+
+	if (!vcpu_has_nv(vcpu))
+		goto write;
+
+	/*
+	 * The architecture is a bit crap (what a surprise): an EL2 guest
+	 * writing to CPTR_EL2 via CPACR_EL1 can't set any of TCPAC or TTA,
+	 * as they are RES0 in the guest's view. To work around it, trap the
+	 * sucker using the very same bit it can't set...
+	 */
+	if (vcpu_el2_e2h_is_set(vcpu) && is_hyp_ctxt(vcpu))
+		val |= CPTR_EL2_TCPAC;
+
+	/*
+	 * Layer the guest hypervisor's trap configuration on top of our own if
+	 * we're in a nested context.
+	 */
+	if (is_hyp_ctxt(vcpu))
+		goto write;
+
+	cptr = vcpu_sanitised_cptr_el2(vcpu);
+
+	/*
+	 * Pay attention, there's some interesting detail here.
+	 *
+	 * The CPTR_EL2.xEN fields are 2 bits wide, although there are only two
+	 * meaningful trap states when HCR_EL2.TGE = 0 (running a nested guest):
+	 *
+	 * - CPTR_EL2.xEN = x0, traps are enabled
+	 * - CPTR_EL2.xEN = x1, traps are disabled
+	 *
+	 * In other words, bit[0] determines if guest accesses trap or not. In
+	 * the interest of simplicity, clear the entire field if the guest
+	 * hypervisor has traps enabled to dispel any illusion of something more
+	 * complicated taking place.
+	 */
+	if (!(SYS_FIELD_GET(CPACR_EL1, FPEN, cptr) & BIT(0)))
+		val &= ~CPACR_EL1_FPEN;
+	if (!(SYS_FIELD_GET(CPACR_EL1, ZEN, cptr) & BIT(0)))
+		val &= ~CPACR_EL1_ZEN;
+
+	if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP))
+		val |= cptr & CPACR_EL1_E0POE;
+
+	val |= cptr & CPTR_EL2_TCPAC;
+
+write:
+	write_sysreg(val, cpacr_el1);
+}
+
+static inline void __activate_cptr_traps(struct kvm_vcpu *vcpu)
+{
+	if (!guest_owns_fp_regs())
+		__activate_traps_fpsimd32(vcpu);
+
+	if (has_vhe() || has_hvhe())
+		__activate_cptr_traps_vhe(vcpu);
+	else
+		__activate_cptr_traps_nvhe(vcpu);
+}
+
+static inline void __deactivate_cptr_traps_nvhe(struct kvm_vcpu *vcpu)
+{
+	u64 val = CPTR_NVHE_EL2_RES1;
+
+	if (!cpus_have_final_cap(ARM64_SVE))
+		val |= CPTR_EL2_TZ;
+	if (!cpus_have_final_cap(ARM64_SME))
+		val |= CPTR_EL2_TSM;
+
+	write_sysreg(val, cptr_el2);
+}
+
+static inline void __deactivate_cptr_traps_vhe(struct kvm_vcpu *vcpu)
+{
+	u64 val = CPACR_EL1_FPEN;
+
+	if (cpus_have_final_cap(ARM64_SVE))
+		val |= CPACR_EL1_ZEN;
+	if (cpus_have_final_cap(ARM64_SME))
+		val |= CPACR_EL1_SMEN;
+
+	write_sysreg(val, cpacr_el1);
+}
+
+static inline void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
+{
+	if (has_vhe() || has_hvhe())
+		__deactivate_cptr_traps_vhe(vcpu);
+	else
+		__deactivate_cptr_traps_nvhe(vcpu);
+}
+
 #define reg_to_fgt_masks(reg)						\
 	({								\
 		struct fgt_masks *m;					\
@@
 	 */
 	if (system_supports_sve()) {
 		__hyp_sve_save_host();
-
-		/* Re-enable SVE traps if not supported for the guest vcpu. */
-		if (!vcpu_has_sve(vcpu))
-			cpacr_clear_set(CPACR_EL1_ZEN, 0);
-
 	} else {
 		__fpsimd_save_state(host_data_ptr(host_ctxt.fp_regs));
 	}
@@
 	/* Valid trap. Switch the context: */
 
 	/* First disable enough traps to allow us to update the registers */
-	if (sve_guest || (is_protected_kvm_enabled() && system_supports_sve()))
-		cpacr_clear_set(0, CPACR_EL1_FPEN | CPACR_EL1_ZEN);
-	else
-		cpacr_clear_set(0, CPACR_EL1_FPEN);
+	__deactivate_cptr_traps(vcpu);
 	isb();
 
 	/* Write out the host state if it's in the registers */
@@
 		write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
 
 	*host_data_ptr(fp_owner) = FP_STATE_GUEST_OWNED;
+
+	/*
+	 * Re-enable traps necessary for the current state of the guest, e.g.
+	 * those enabled by a guest hypervisor. The ERET to the guest will
+	 * provide the necessary context synchronization.
+	 */
+	__activate_cptr_traps(vcpu);
 
 	return true;
 }
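The nVHE activate path above composes its trap word from a RES1 baseline plus one trap bit per facility the guest may not use directly. The same shape in a freestanding sketch; all constants below are made-up illustration values, not the architectural CPTR_EL2 encodings:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative trap bits (NOT the architectural CPTR_EL2 layout). */
#define RES1_BASE  (1u << 31)
#define TRAP_SME   (1u << 12)
#define TRAP_FP    (1u << 10)
#define TRAP_SVE   (1u << 8)

/* Baseline RES1 bits, always trap SME, and trap SVE/FP unless the
 * guest currently owns the FP register state. */
static uint32_t build_cptr(int guest_has_sve, int guest_owns_fp)
{
	uint32_t val = RES1_BASE | TRAP_SME;

	if (!guest_has_sve || !guest_owns_fp)
		val |= TRAP_SVE;
	if (!guest_owns_fp)
		val |= TRAP_FP;
	return val;
}
```

The design point is that the value is rebuilt from scratch on every entry, so there is no stale trap state to clear first.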
arch/arm64/kvm/hyp/nvhe/hyp-main.c (+4 -1)
@@
 	if (!guest_owns_fp_regs())
 		return;
 
-	cpacr_clear_set(0, CPACR_EL1_FPEN | CPACR_EL1_ZEN);
+	/*
+	 * Traps have been disabled by __deactivate_cptr_traps(), but there
+	 * hasn't necessarily been a context synchronization event yet.
+	 */
 	isb();
 
 	if (vcpu_has_sve(vcpu))
arch/arm64/kvm/hyp/nvhe/switch.c (-59)
@@
 
 extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc);
 
-static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
-{
-	u64 val = CPTR_EL2_TAM;	/* Same bit irrespective of E2H */
-
-	if (!guest_owns_fp_regs())
-		__activate_traps_fpsimd32(vcpu);
-
-	if (has_hvhe()) {
-		val |= CPACR_EL1_TTA;
-
-		if (guest_owns_fp_regs()) {
-			val |= CPACR_EL1_FPEN;
-			if (vcpu_has_sve(vcpu))
-				val |= CPACR_EL1_ZEN;
-		}
-
-		write_sysreg(val, cpacr_el1);
-	} else {
-		val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1;
-
-		/*
-		 * Always trap SME since it's not supported in KVM.
-		 * TSM is RES1 if SME isn't implemented.
-		 */
-		val |= CPTR_EL2_TSM;
-
-		if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs())
-			val |= CPTR_EL2_TZ;
-
-		if (!guest_owns_fp_regs())
-			val |= CPTR_EL2_TFP;
-
-		write_sysreg(val, cptr_el2);
-	}
-}
-
-static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
-{
-	if (has_hvhe()) {
-		u64 val = CPACR_EL1_FPEN;
-
-		if (cpus_have_final_cap(ARM64_SVE))
-			val |= CPACR_EL1_ZEN;
-		if (cpus_have_final_cap(ARM64_SME))
-			val |= CPACR_EL1_SMEN;
-
-		write_sysreg(val, cpacr_el1);
-	} else {
-		u64 val = CPTR_NVHE_EL2_RES1;
-
-		if (!cpus_have_final_cap(ARM64_SVE))
-			val |= CPTR_EL2_TZ;
-		if (!cpus_have_final_cap(ARM64_SME))
-			val |= CPTR_EL2_TSM;
-
-		write_sysreg(val, cptr_el2);
-	}
-}
-
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	___activate_traps(vcpu, vcpu->arch.hcr_el2);
arch/arm64/kvm/hyp/vhe/switch.c (+14 -93)
@@
 	return hcr | (guest_hcr & ~NV_HCR_GUEST_EXCLUDE);
 }
 
-static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
-{
-	u64 cptr;
-
-	/*
-	 * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
-	 * CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2,
-	 * except for some missing controls, such as TAM.
-	 * In this case, CPTR_EL2.TAM has the same position with or without
-	 * VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM
-	 * shift value for trapping the AMU accesses.
-	 */
-	u64 val = CPACR_EL1_TTA | CPTR_EL2_TAM;
-
-	if (guest_owns_fp_regs()) {
-		val |= CPACR_EL1_FPEN;
-		if (vcpu_has_sve(vcpu))
-			val |= CPACR_EL1_ZEN;
-	} else {
-		__activate_traps_fpsimd32(vcpu);
-	}
-
-	if (!vcpu_has_nv(vcpu))
-		goto write;
-
-	/*
-	 * The architecture is a bit crap (what a surprise): an EL2 guest
-	 * writing to CPTR_EL2 via CPACR_EL1 can't set any of TCPAC or TTA,
-	 * as they are RES0 in the guest's view. To work around it, trap the
-	 * sucker using the very same bit it can't set...
-	 */
-	if (vcpu_el2_e2h_is_set(vcpu) && is_hyp_ctxt(vcpu))
-		val |= CPTR_EL2_TCPAC;
-
-	/*
-	 * Layer the guest hypervisor's trap configuration on top of our own if
-	 * we're in a nested context.
-	 */
-	if (is_hyp_ctxt(vcpu))
-		goto write;
-
-	cptr = vcpu_sanitised_cptr_el2(vcpu);
-
-	/*
-	 * Pay attention, there's some interesting detail here.
-	 *
-	 * The CPTR_EL2.xEN fields are 2 bits wide, although there are only two
-	 * meaningful trap states when HCR_EL2.TGE = 0 (running a nested guest):
-	 *
-	 * - CPTR_EL2.xEN = x0, traps are enabled
-	 * - CPTR_EL2.xEN = x1, traps are disabled
-	 *
-	 * In other words, bit[0] determines if guest accesses trap or not. In
-	 * the interest of simplicity, clear the entire field if the guest
-	 * hypervisor has traps enabled to dispel any illusion of something more
-	 * complicated taking place.
-	 */
-	if (!(SYS_FIELD_GET(CPACR_EL1, FPEN, cptr) & BIT(0)))
-		val &= ~CPACR_EL1_FPEN;
-	if (!(SYS_FIELD_GET(CPACR_EL1, ZEN, cptr) & BIT(0)))
-		val &= ~CPACR_EL1_ZEN;
-
-	if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP))
-		val |= cptr & CPACR_EL1_E0POE;
-
-	val |= cptr & CPTR_EL2_TCPAC;
-
-write:
-	write_sysreg(val, cpacr_el1);
-}
-
-static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
-{
-	u64 val = CPACR_EL1_FPEN | CPACR_EL1_ZEN_EL1EN;
-
-	if (cpus_have_final_cap(ARM64_SME))
-		val |= CPACR_EL1_SMEN_EL1EN;
-
-	write_sysreg(val, cpacr_el1);
-}
-
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
@@
 	host_ctxt = host_data_ptr(host_ctxt);
 	guest_ctxt = &vcpu->arch.ctxt;
 
-	sysreg_save_host_state_vhe(host_ctxt);
-
 	fpsimd_lazy_switch_to_guest(vcpu);
+
+	sysreg_save_host_state_vhe(host_ctxt);
 
 	/*
 	 * Note that ARM erratum 1165522 requires us to configure both stage 1
@@
 
 	__deactivate_traps(vcpu);
 
-	fpsimd_lazy_switch_to_host(vcpu);
-
 	sysreg_restore_host_state_vhe(host_ctxt);
+
+	__debug_switch_to_host(vcpu);
+
+	/*
+	 * Ensure that all system register writes above have taken effect
+	 * before returning to the host. In VHE mode, CPTR traps for
+	 * FPSIMD/SVE/SME also apply to EL2, so FPSIMD/SVE/SME state must be
+	 * manipulated after the ISB.
+	 */
+	isb();
+
+	fpsimd_lazy_switch_to_host(vcpu);
 
 	if (guest_owns_fp_regs())
 		__fpsimd_save_fpexc32(vcpu);
-
-	__debug_switch_to_host(vcpu);
 
 	return exit_code;
 }
@@
 	 */
 	local_daif_restore(DAIF_PROCCTX_NOIRQ);
 
-	/*
-	 * When we exit from the guest we change a number of CPU configuration
-	 * parameters, such as traps. We rely on the isb() in kvm_call_hyp*()
-	 * to make sure these changes take effect before running the host or
-	 * additional guests.
-	 */
 	return ret;
 }
arch/arm64/kvm/vgic/vgic-v3-nested.c (+42 -39)
@@
 
 static DEFINE_PER_CPU(struct shadow_if, shadow_if);
 
+static int lr_map_idx_to_shadow_idx(struct shadow_if *shadow_if, int idx)
+{
+	return hweight16(shadow_if->lr_map & (BIT(idx) - 1));
+}
+
 /*
  * Nesting GICv3 support
  *
@@
 	return reg;
 }
 
+static u64 translate_lr_pintid(struct kvm_vcpu *vcpu, u64 lr)
+{
+	struct vgic_irq *irq;
+
+	if (!(lr & ICH_LR_HW))
+		return lr;
+
+	/* We have the HW bit set, check for validity of pINTID */
+	irq = vgic_get_vcpu_irq(vcpu, FIELD_GET(ICH_LR_PHYS_ID_MASK, lr));
+	/* If there was no real mapping, nuke the HW bit */
+	if (!irq || !irq->hw || irq->intid > VGIC_MAX_SPI)
+		lr &= ~ICH_LR_HW;
+
+	/* Translate the virtual mapping to the real one, even if invalid */
+	if (irq) {
+		lr &= ~ICH_LR_PHYS_ID_MASK;
+		lr |= FIELD_PREP(ICH_LR_PHYS_ID_MASK, (u64)irq->hwintid);
+		vgic_put_irq(vcpu->kvm, irq);
+	}
+
+	return lr;
+}
+
 /*
  * For LRs which have HW bit set such as timer interrupts, we modify them to
  * have the host hardware interrupt number instead of the virtual one programmed
@@
 static void vgic_v3_create_shadow_lr(struct kvm_vcpu *vcpu,
 				     struct vgic_v3_cpu_if *s_cpu_if)
 {
-	unsigned long lr_map = 0;
-	int index = 0;
+	struct shadow_if *shadow_if;
+
+	shadow_if = container_of(s_cpu_if, struct shadow_if, cpuif);
+	shadow_if->lr_map = 0;
 
 	for (int i = 0; i < kvm_vgic_global_state.nr_lr; i++) {
 		u64 lr = __vcpu_sys_reg(vcpu, ICH_LRN(i));
-		struct vgic_irq *irq;
 
 		if (!(lr & ICH_LR_STATE))
-			lr = 0;
+			continue;
 
-		if (!(lr & ICH_LR_HW))
-			goto next;
+		lr = translate_lr_pintid(vcpu, lr);
 
-		/* We have the HW bit set, check for validity of pINTID */
-		irq = vgic_get_vcpu_irq(vcpu, FIELD_GET(ICH_LR_PHYS_ID_MASK, lr));
-		if (!irq || !irq->hw || irq->intid > VGIC_MAX_SPI) {
-			/* There was no real mapping, so nuke the HW bit */
-			lr &= ~ICH_LR_HW;
-			if (irq)
-				vgic_put_irq(vcpu->kvm, irq);
-			goto next;
-		}
-
-		/* Translate the virtual mapping to the real one */
-		lr &= ~ICH_LR_PHYS_ID_MASK;
-		lr |= FIELD_PREP(ICH_LR_PHYS_ID_MASK, (u64)irq->hwintid);
-
-		vgic_put_irq(vcpu->kvm, irq);
-
-next:
-		s_cpu_if->vgic_lr[index] = lr;
-		if (lr) {
-			lr_map |= BIT(i);
-			index++;
-		}
+		s_cpu_if->vgic_lr[hweight16(shadow_if->lr_map)] = lr;
+		shadow_if->lr_map |= BIT(i);
 	}
 
-	container_of(s_cpu_if, struct shadow_if, cpuif)->lr_map = lr_map;
-	s_cpu_if->used_lrs = index;
+	s_cpu_if->used_lrs = hweight16(shadow_if->lr_map);
 }
 
 void vgic_v3_sync_nested(struct kvm_vcpu *vcpu)
 {
 	struct shadow_if *shadow_if = get_shadow_if();
-	int i, index = 0;
+	int i;
 
 	for_each_set_bit(i, &shadow_if->lr_map, kvm_vgic_global_state.nr_lr) {
 		u64 lr = __vcpu_sys_reg(vcpu, ICH_LRN(i));
 		struct vgic_irq *irq;
 
 		if (!(lr & ICH_LR_HW) || !(lr & ICH_LR_STATE))
-			goto next;
+			continue;
 
 		/*
 		 * If we had a HW lr programmed by the guest hypervisor, we
@@
 		 */
 		irq = vgic_get_vcpu_irq(vcpu, FIELD_GET(ICH_LR_PHYS_ID_MASK, lr));
 		if (WARN_ON(!irq)) /* Shouldn't happen as we check on load */
-			goto next;
+			continue;
 
-		lr = __gic_v3_get_lr(index);
+		lr = __gic_v3_get_lr(lr_map_idx_to_shadow_idx(shadow_if, i));
 		if (!(lr & ICH_LR_STATE))
 			irq->active = false;
 
 		vgic_put_irq(vcpu->kvm, irq);
-	next:
-		index++;
 	}
 }
@@
 		val = __vcpu_sys_reg(vcpu, ICH_LRN(i));
 
 		val &= ~ICH_LR_STATE;
-		val |= s_cpu_if->vgic_lr[i] & ICH_LR_STATE;
+		val |= s_cpu_if->vgic_lr[lr_map_idx_to_shadow_idx(shadow_if, i)] & ICH_LR_STATE;
 
 		__vcpu_assign_sys_reg(vcpu, ICH_LRN(i), val);
-		s_cpu_if->vgic_lr[i] = 0;
 	}
 
-	shadow_if->lr_map = 0;
 	vcpu->arch.vgic_cpu.vgic_v3.used_lrs = 0;
 }
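The shadow-LR compaction above relies on one invariant: the shadow index of map bit `i` is the number of set bits in `lr_map` strictly below `i`. A standalone sketch of that mapping, with hypothetical values and GCC's `__builtin_popcount` standing in for the kernel's `hweight16`:

```c
#include <assert.h>
#include <stdint.h>

/* Popcount of the bits below idx == position of idx's entry in the
 * compacted (shadow) array, provided bit idx itself is set in map. */
static int map_idx_to_shadow_idx(uint16_t map, int idx)
{
	return __builtin_popcount(map & ((1u << idx) - 1));
}
```

For example, with `map = 0x2C` (bits 2, 3 and 5 set), the entries for bits 2, 3 and 5 land at shadow indices 0, 1 and 2 respectively, so the shadow array stays dense regardless of which LR slots are populated.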
arch/arm64/lib/crypto/poly1305-glue.c (+2 -2)
@@
 			unsigned int todo = min_t(unsigned int, len, SZ_4K);
 
 			kernel_neon_begin();
-			poly1305_blocks_neon(state, src, todo, 1);
+			poly1305_blocks_neon(state, src, todo, padbit);
 			kernel_neon_end();
 
 			len -= todo;
 			src += todo;
 		} while (len);
 	} else
-		poly1305_blocks(state, src, len, 1);
+		poly1305_blocks(state, src, len, padbit);
 }
 EXPORT_SYMBOL_GPL(poly1305_blocks_arch);
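The fix above matters because the NEON path splits long inputs into 4 KiB chunks, and each chunk call must forward the caller's `padbit` rather than hardcode `1`. The chunking pattern in isolation, with a hypothetical stub standing in for the real block function:

```c
#include <assert.h>
#include <stddef.h>

#define CHUNK 4096	/* like SZ_4K: bounds time inside kernel_neon_begin/end */

/* Stand-in for poly1305_blocks_neon(): records what it was passed. */
static size_t processed;
static int last_padbit = -1;

static void blocks_stub(const unsigned char *src, size_t len, int padbit)
{
	(void)src;
	processed += len;
	last_padbit = padbit;
}

/* Feed len bytes to blocks_stub() in <= CHUNK pieces, forwarding
 * padbit unchanged on every call (the bug was passing a literal 1). */
static void blocks_chunked(const unsigned char *src, size_t len, int padbit)
{
	do {
		size_t todo = len < CHUNK ? len : CHUNK;

		blocks_stub(src, todo, padbit);
		len -= todo;
		src += todo;
	} while (len);
}

static unsigned char buf[10000];
```

With a hardcoded `1`, a final partial block (which requires `padbit == 0`) would be padded as if it were full, corrupting the MAC for such inputs.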
arch/arm64/mm/mmu.c (+2 -1)
@@
 	next = addr;
 	end = addr + PUD_SIZE;
 	do {
-		pmd_free_pte_page(pmdp, next);
+		if (pmd_present(pmdp_get(pmdp)))
+			pmd_free_pte_page(pmdp, next);
 	} while (pmdp++, next += PMD_SIZE, next != end);
 
 	pud_clear(pudp);
@@
 #define ORC_TYPE_REGS			3
 #define ORC_TYPE_REGS_PARTIAL		4
 
-#ifndef __ASSEMBLY__
+#ifndef __ASSEMBLER__
 /*
  * This struct is more or less a vastly simplified version of the DWARF Call
  * Frame Information standard. It contains only the necessary parts of DWARF
@@
 	unsigned int type:3;
 	unsigned int signal:1;
 };
-#endif /* __ASSEMBLY__ */
+#endif /* __ASSEMBLER__ */
 
 #endif /* _ORC_TYPES_H */
@@
 #ifndef __ASM_VDSO_VSYSCALL_H
 #define __ASM_VDSO_VSYSCALL_H
 
-#ifndef __ASSEMBLY__
+#ifndef __ASSEMBLER__
 
 #include <vdso/datapage.h>
 
 /* The asm-generic header needs to be included after the definitions above */
 #include <asm-generic/vdso/vsyscall.h>
 
-#endif /* !__ASSEMBLY__ */
+#endif /* !__ASSEMBLER__ */
 
 #endif /* __ASM_VDSO_VSYSCALL_H */
@@
 		if (efi_memmap_init_early(&data) < 0)
 			panic("Unable to map EFI memory map.\n");
 
+		/*
+		 * Reserve the physical memory region occupied by the EFI
+		 * memory map table (header + descriptors). This is crucial
+		 * for kdump, as the kdump kernel relies on this original
+		 * memmap passed by the bootloader. Without reservation,
+		 * this region could be overwritten by the primary kernel.
+		 * Also, set the EFI_PRESERVE_BS_REGIONS flag to indicate that
+		 * critical boot services code/data regions like this are preserved.
+		 */
+		memblock_reserve((phys_addr_t)boot_memmap, sizeof(*tbl) + data.size);
+		set_bit(EFI_PRESERVE_BS_REGIONS, &efi.flags);
+
 		early_memunmap(tbl, sizeof(*tbl));
 	}
@@
 /*
  * Used to name C functions called from asm
  */
-#ifdef CONFIG_PPC_KERNEL_PCREL
+#if defined(__powerpc64__) && defined(CONFIG_PPC_KERNEL_PCREL)
 #define CFUNC(name) name@notoc
 #else
 #define CFUNC(name) name
@@
 ldflags-y += $(filter-out $(CC_AUTO_VAR_INIT_ZERO_ENABLER) $(CC_FLAGS_FTRACE) -Wa$(comma)%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
 
 CC32FLAGS := -m32
-CC32FLAGSREMOVE := -mcmodel=medium -mabi=elfv1 -mabi=elfv2 -mcall-aixdesc
+CC32FLAGSREMOVE := -mcmodel=medium -mabi=elfv1 -mabi=elfv2 -mcall-aixdesc -mpcrel
 ifdef CONFIG_CC_IS_CLANG
 # This flag is supported by clang for 64-bit but not 32-bit so it will cause
 # an unused command line flag warning for this file.
@@
 #endif
 ;
 unsigned long boot_cpu_hartid;
+EXPORT_SYMBOL_GPL(boot_cpu_hartid);
 
 /*
  * Place kernel memory regions on the resource tree so that
arch/riscv/kernel/traps_misaligned.c (+2 -2)
@@
 
 	val.data_u64 = 0;
 	if (user_mode(regs)) {
-		if (copy_from_user_nofault(&val, (u8 __user *)addr, len))
+		if (copy_from_user(&val, (u8 __user *)addr, len))
 			return -1;
 	} else {
 		memcpy(&val, (u8 *)addr, len);
@@
 		return -EOPNOTSUPP;
 
 	if (user_mode(regs)) {
-		if (copy_to_user_nofault((u8 __user *)addr, &val, len))
+		if (copy_to_user((u8 __user *)addr, &val, len))
 			return -1;
 	} else {
 		memcpy((u8 *)addr, &val, len);
@@
 #include <linux/types.h>
 
 /* All SiFive vendor extensions supported in Linux */
-const struct riscv_isa_ext_data riscv_isa_vendor_ext_sifive[] = {
+static const struct riscv_isa_ext_data riscv_isa_vendor_ext_sifive[] = {
 	__RISCV_ISA_EXT_DATA(xsfvfnrclipxfqf, RISCV_ISA_VENDOR_EXT_XSFVFNRCLIPXFQF),
 	__RISCV_ISA_EXT_DATA(xsfvfwmaccqqq, RISCV_ISA_VENDOR_EXT_XSFVFWMACCQQQ),
 	__RISCV_ISA_EXT_DATA(xsfvqmaccdod, RISCV_ISA_VENDOR_EXT_XSFVQMACCDOD),
+4-4
arch/riscv/kvm/vcpu_sbi_replace.c
···103103 kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT);104104 break;105105 case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:106106- if (cp->a2 == 0 && cp->a3 == 0)106106+ if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)107107 kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask);108108 else109109 kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask,···111111 kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT);112112 break;113113 case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:114114- if (cp->a2 == 0 && cp->a3 == 0)114114+ if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)115115 kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,116116 hbase, hmask, cp->a4);117117 else···127127 case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID:128128 /*129129 * Until nested virtualization is implemented, the130130- * SBI HFENCE calls should be treated as NOPs130130+ * SBI HFENCE calls should return not supported131131+ * hence fallthrough.131132 */132132- break;133133 default:134134 retdata->err_val = SBI_ERR_NOT_SUPPORTED;135135 }
+1-1
arch/s390/include/asm/ptrace.h
···265265 addr = kernel_stack_pointer(regs) + n * sizeof(long);266266 if (!regs_within_kernel_stack(regs, addr))267267 return 0;268268- return READ_ONCE_NOCHECK(addr);268268+ return READ_ONCE_NOCHECK(*(unsigned long *)addr);269269}270270271271/**
+1-1
arch/um/drivers/ubd_user.c
···4141 *fd_out = fds[1];42424343 err = os_set_fd_block(*fd_out, 0);4444- err = os_set_fd_block(kernel_fd, 0);4444+ err |= os_set_fd_block(kernel_fd, 0);4545 if (err) {4646 printk("start_io_thread - failed to set nonblocking I/O.\n");4747 goto out_close;
+13-29
arch/um/drivers/vector_kern.c
···1625162516261626 device->dev = dev;1627162716281628- *vp = ((struct vector_private)16291629- {16301630- .list = LIST_HEAD_INIT(vp->list),16311631- .dev = dev,16321632- .unit = n,16331633- .options = get_transport_options(def),16341634- .rx_irq = 0,16351635- .tx_irq = 0,16361636- .parsed = def,16371637- .max_packet = get_mtu(def) + ETH_HEADER_OTHER,16381638- /* TODO - we need to calculate headroom so that ip header16391639- * is 16 byte aligned all the time16401640- */16411641- .headroom = get_headroom(def),16421642- .form_header = NULL,16431643- .verify_header = NULL,16441644- .header_rxbuffer = NULL,16451645- .header_txbuffer = NULL,16461646- .header_size = 0,16471647- .rx_header_size = 0,16481648- .rexmit_scheduled = false,16491649- .opened = false,16501650- .transport_data = NULL,16511651- .in_write_poll = false,16521652- .coalesce = 2,16531653- .req_size = get_req_size(def),16541654- .in_error = false,16551655- .bpf = NULL16561656- });16281628+ INIT_LIST_HEAD(&vp->list);16291629+ vp->dev = dev;16301630+ vp->unit = n;16311631+ vp->options = get_transport_options(def);16321632+ vp->parsed = def;16331633+ vp->max_packet = get_mtu(def) + ETH_HEADER_OTHER;16341634+ /*16351635+ * TODO - we need to calculate headroom so that ip header16361636+ * is 16 byte aligned all the time16371637+ */16381638+ vp->headroom = get_headroom(def);16391639+ vp->coalesce = 2;16401640+ vp->req_size = get_req_size(def);1657164116581642 dev->features = dev->hw_features = (NETIF_F_SG | NETIF_F_FRAGLIST);16591643 INIT_WORK(&vp->reset_tx, vector_reset_tx);
+14
arch/um/drivers/vfio_kern.c
···570570	kfree(dev);571571}572572573573+static struct uml_vfio_device *uml_vfio_find_device(const char *device)574574+{575575+	struct uml_vfio_device *dev;576576+577577+	list_for_each_entry(dev, &uml_vfio_devices, list) {578578+		if (!strcmp(dev->name, device))579579+			return dev;580580+	}581581+	return NULL;582582+}583583+573584static int uml_vfio_cmdline_set(const char *device, const struct kernel_param *kp)574585{575586	struct uml_vfio_device *dev;···592581		return fd;593582		uml_vfio_container.fd = fd;594583	}584584+585585+	if (uml_vfio_find_device(device))586586+		return -EEXIST;595587596588	dev = kzalloc(sizeof(*dev), GFP_KERNEL);597589	if (!dev)
+1-1
arch/x86/Kconfig
···8989 select ARCH_HAS_DMA_OPS if GART_IOMMU || XEN9090 select ARCH_HAS_EARLY_DEBUG if KGDB9191 select ARCH_HAS_ELF_RANDOMIZE9292- select ARCH_HAS_EXECMEM_ROX if X86_649292+ select ARCH_HAS_EXECMEM_ROX if X86_64 && STRICT_MODULE_RWX9393 select ARCH_HAS_FAST_MULTIPLIER9494 select ARCH_HAS_FORTIFY_SOURCE9595 select ARCH_HAS_GCOV_PROFILE_ALL
+1-1
arch/x86/events/intel/core.c
···28262826 * If the PEBS counters snapshotting is enabled,28272827 * the topdown event is available in PEBS records.28282828 */28292829- if (is_topdown_event(event) && !is_pebs_counter_event_group(event))28292829+ if (is_topdown_count(event) && !is_pebs_counter_event_group(event))28302830 static_call(intel_pmu_update_topdown_event)(event, NULL);28312831 else28322832 intel_pmu_drain_pebs_buffer();
+15-4
arch/x86/include/asm/debugreg.h
···99#include <asm/cpufeature.h>1010#include <asm/msr.h>11111212+/*1313+ * Define bits that are always set to 1 in DR7, only bit 10 is1414+ * architecturally reserved to '1'.1515+ *1616+ * This is also the init/reset value for DR7.1717+ */1818+#define DR7_FIXED_1 0x000004001919+1220DECLARE_PER_CPU(unsigned long, cpu_dr7);13211422#ifndef CONFIG_PARAVIRT_XXL···108100109101static inline void hw_breakpoint_disable(void)110102{111111- /* Zero the control register for HW Breakpoint */112112- set_debugreg(0UL, 7);103103+ /* Reset the control register for HW Breakpoint */104104+ set_debugreg(DR7_FIXED_1, 7);113105114106 /* Zero-out the individual HW breakpoint address registers */115107 set_debugreg(0UL, 0);···133125 return 0;134126135127 get_debugreg(dr7, 7);136136- dr7 &= ~0x400; /* architecturally set bit */128128+129129+ /* Architecturally set bit */130130+ dr7 &= ~DR7_FIXED_1;137131 if (dr7)138138- set_debugreg(0, 7);132132+ set_debugreg(DR7_FIXED_1, 7);133133+139134 /*140135 * Ensure the compiler doesn't lower the above statements into141136 * the critical section; disabling breakpoints late would not
···2424int x64_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs);2525int x32_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs);26262727+/*2828+ * To prevent immediate repeat of single step trap on return from SIGTRAP2929+ * handler if the trap flag (TF) is set without an external debugger attached,3030+ * clear the software event flag in the augmented SS, ensuring no single-step3131+ * trap is pending upon ERETU completion.3232+ *3333+ * Note, this function should be called in sigreturn() before the original3434+ * state is restored to make sure the TF is read from the entry frame.3535+ */3636+static __always_inline void prevent_single_step_upon_eretu(struct pt_regs *regs)3737+{3838+ /*3939+ * If the trap flag (TF) is set, i.e., the sigreturn() SYSCALL instruction4040+ * is being single-stepped, do not clear the software event flag in the4141+ * augmented SS, thus a debugger won't skip over the following instruction.4242+ */4343+#ifdef CONFIG_X86_FRED4444+ if (!(regs->flags & X86_EFLAGS_TF))4545+ regs->fred_ss.swevent = 0;4646+#endif4747+}4848+2749#endif /* _ASM_X86_SIGHANDLING_H */
···1515 which debugging register was responsible for the trap. The other bits1616 are either reserved or not of interest to us. */17171818-/* Define reserved bits in DR6 which are always set to 1 */1818+/*1919+ * Define bits in DR6 which are set to 1 by default.2020+ *2121+ * This is also the DR6 architectural value following Power-up, Reset or INIT.2222+ *2323+ * Note, with the introduction of Bus Lock Detection (BLD) and Restricted2424+ * Transactional Memory (RTM), the DR6 register has been modified:2525+ *2626+ * 1) BLD flag (bit 11) is no longer reserved to 1 if the CPU supports2727+ * Bus Lock Detection. The assertion of a bus lock could clear it.2828+ *2929+ * 2) RTM flag (bit 16) is no longer reserved to 1 if the CPU supports3030+ * restricted transactional memory. #DB occurred inside an RTM region3131+ * could clear it.3232+ *3333+ * Apparently, DR6.BLD and DR6.RTM are active low bits.3434+ *3535+ * As a result, DR6_RESERVED is an incorrect name now, but it is kept for3636+ * compatibility.3737+ */1938#define DR6_RESERVED (0xFFFF0FF0)20392140#define DR_TRAP0 (0x1) /* db0 */
+56-25
arch/x86/kernel/alternative.c
···116116#endif117117static void *its_page;118118static unsigned int its_offset;119119+struct its_array its_pages;120120+121121+static void *__its_alloc(struct its_array *pages)122122+{123123+	void *page __free(execmem) = execmem_alloc(EXECMEM_MODULE_TEXT, PAGE_SIZE);124124+	if (!page)125125+		return NULL;126126+127127+	void *tmp = krealloc(pages->pages, (pages->num+1) * sizeof(void *),128128+			     GFP_KERNEL);129129+	if (!tmp)130130+		return NULL;131131+132132+	pages->pages = tmp;133133+	pages->pages[pages->num++] = page;134134+135135+	return no_free_ptr(page);136136+}119137120138/* Initialize a thunk with the "jmp *reg; int3" instructions. */121139static void *its_init_thunk(void *thunk, int reg)···169151	return thunk + offset;170152}171153154154+static void its_pages_protect(struct its_array *pages)155155+{156156+	for (int i = 0; i < pages->num; i++) {157157+		void *page = pages->pages[i];158158+		execmem_restore_rox(page, PAGE_SIZE);159159+	}160160+}161161+162162+static void its_fini_core(void)163163+{164164+	if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))165165+		its_pages_protect(&its_pages);166166+	kfree(its_pages.pages);167167+}168168+172169#ifdef CONFIG_MODULES173170void its_init_mod(struct module *mod)174171{···206173	its_page = NULL;207174	mutex_unlock(&text_mutex);208175209209-	for (int i = 0; i < mod->its_num_pages; i++) {210210-		void *page = mod->its_page_array[i];211211-		execmem_restore_rox(page, PAGE_SIZE);212212-	}176176+	if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))177177+		its_pages_protect(&mod->arch.its_pages);213178}214179215180void its_free_mod(struct module *mod)···215184	if (!cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS))216185		return;217186218218-	for (int i = 0; i < mod->its_num_pages; i++) {219219-		void *page = mod->its_page_array[i];187187+	for (int i = 0; i < mod->arch.its_pages.num; i++) {188188+		void *page = mod->arch.its_pages.pages[i];220189		execmem_free(page);221190	}222222-	kfree(mod->its_page_array);191191+	kfree(mod->arch.its_pages.pages);223192}224193#endif /* CONFIG_MODULES */225194226195static void *its_alloc(void)227196{228228-	void *page __free(execmem) = execmem_alloc(EXECMEM_MODULE_TEXT, PAGE_SIZE);197197+	struct its_array *pages = &its_pages;198198+	void *page;229199200200+#ifdef CONFIG_MODULES201201+	if (its_mod)202202+		pages = &its_mod->arch.its_pages;203203+#endif204204+205205+	page = __its_alloc(pages);230206	if (!page)231207		return NULL;232208233233-#ifdef CONFIG_MODULES234234-	if (its_mod) {235235-		void *tmp = krealloc(its_mod->its_page_array,236236-				     (its_mod->its_num_pages+1) * sizeof(void *),237237-				     GFP_KERNEL);238238-		if (!tmp)239239-			return NULL;209209+	execmem_make_temp_rw(page, PAGE_SIZE);210210+	if (pages == &its_pages)211211+		set_memory_x((unsigned long)page, 1);240212241241-		its_mod->its_page_array = tmp;242242-		its_mod->its_page_array[its_mod->its_num_pages++] = page;243243-244244-		execmem_make_temp_rw(page, PAGE_SIZE);245245-	}246246-#endif /* CONFIG_MODULES */247247-248248-	return no_free_ptr(page);213213+	return page;249214}250215251216static void *its_allocate_thunk(int reg)···295268	return thunk;296269}297270298298-#endif271271+#else272272+static inline void its_fini_core(void) {}273273+#endif /* CONFIG_MITIGATION_ITS */299274300275/*301276 * Nomenclature for variable names to simplify and clarify this code and ease···23672338	apply_retpolines(__retpoline_sites, __retpoline_sites_end);23682339	apply_returns(__return_sites, __return_sites_end);2369234023412341+	its_fini_core();23422342+23702343	/*23712344	 * Adjust all CALL instructions to point to func()-10, including23722345	 * those in .altinstr_replacement.···31383107 */31393108void __ref smp_text_poke_single(void *addr, const void *opcode, size_t len, const void *emulate)31403109{31413141-	__smp_text_poke_batch_add(addr, opcode, len, emulate);31103110+	smp_text_poke_batch_add(addr, opcode, len, emulate);31423111	smp_text_poke_batch_finish();31433112}
···22432243#endif22442244#endif2245224522462246-/*22472247- * Clear all 6 debug registers:22482248- */22492249-static void clear_all_debug_regs(void)22462246+static void initialize_debug_regs(void)22502247{22512251- int i;22522252-22532253- for (i = 0; i < 8; i++) {22542254- /* Ignore db4, db5 */22552255- if ((i == 4) || (i == 5))22562256- continue;22572257-22582258- set_debugreg(0, i);22592259- }22482248+ /* Control register first -- to make sure everything is disabled. */22492249+ set_debugreg(DR7_FIXED_1, 7);22502250+ set_debugreg(DR6_RESERVED, 6);22512251+ /* dr5 and dr4 don't exist */22522252+ set_debugreg(0, 3);22532253+ set_debugreg(0, 2);22542254+ set_debugreg(0, 1);22552255+ set_debugreg(0, 0);22602256}2261225722622258#ifdef CONFIG_KGDB···2413241724142418 load_mm_ldt(&init_mm);2415241924162416- clear_all_debug_regs();24202420+ initialize_debug_regs();24172421 dbg_restore_debug_regs();2418242224192423 doublefault_init_cpu_tss();
+4-2
arch/x86/kernel/cpu/resctrl/core.c
···498498 struct rdt_hw_mon_domain *hw_dom;499499 struct rdt_domain_hdr *hdr;500500 struct rdt_mon_domain *d;501501+ struct cacheinfo *ci;501502 int err;502503503504 lockdep_assert_held(&domain_list_lock);···526525 d = &hw_dom->d_resctrl;527526 d->hdr.id = id;528527 d->hdr.type = RESCTRL_MON_DOMAIN;529529- d->ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);530530- if (!d->ci) {528528+ ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);529529+ if (!ci) {531530 pr_warn_once("Can't find L3 cache for CPU:%d resource %s\n", cpu, r->name);532531 mon_domain_free(hw_dom);533532 return;534533 }534534+ d->ci_id = ci->id;535535 cpumask_set_cpu(cpu, &d->hdr.cpu_mask);536536537537 arch_mon_domain_online(r, d);
+1-1
arch/x86/kernel/kgdb.c
···385385 struct perf_event *bp;386386387387 /* Disable hardware debugging while we are in kgdb: */388388- set_debugreg(0UL, 7);388388+ set_debugreg(DR7_FIXED_1, 7);389389 for (i = 0; i < HBP_NUM; i++) {390390 if (!breakinfo[i].enabled)391391 continue;
+1-1
arch/x86/kernel/process_32.c
···93939494 /* Only print out debug registers if they are in their non-default state. */9595 if ((d0 == 0) && (d1 == 0) && (d2 == 0) && (d3 == 0) &&9696- (d6 == DR6_RESERVED) && (d7 == 0x400))9696+ (d6 == DR6_RESERVED) && (d7 == DR7_FIXED_1))9797 return;98989999 printk("%sDR0: %08lx DR1: %08lx DR2: %08lx DR3: %08lx\n",
+1-1
arch/x86/kernel/process_64.c
···133133134134 /* Only print out debug registers if they are in their non-default state. */135135 if (!((d0 == 0) && (d1 == 0) && (d2 == 0) && (d3 == 0) &&136136- (d6 == DR6_RESERVED) && (d7 == 0x400))) {136136+ (d6 == DR6_RESERVED) && (d7 == DR7_FIXED_1))) {137137 printk("%sDR0: %016lx DR1: %016lx DR2: %016lx\n",138138 log_lvl, d0, d1, d2);139139 printk("%sDR3: %016lx DR6: %016lx DR7: %016lx\n",
···10221022#endif10231023}1024102410251025-static __always_inline unsigned long debug_read_clear_dr6(void)10251025+static __always_inline unsigned long debug_read_reset_dr6(void)10261026{10271027	unsigned long dr6;10281028+10291029+	get_debugreg(dr6, 6);10301030+	dr6 ^= DR6_RESERVED; /* Flip to positive polarity */1028103110291032	/*10301033	 * The Intel SDM says:10311034	 *10321032-	 * Certain debug exceptions may clear bits 0-3. The remaining10331033-	 * contents of the DR6 register are never cleared by the10341034-	 * processor. To avoid confusion in identifying debug10351035-	 * exceptions, debug handlers should clear the register before10361036-	 * returning to the interrupted task.10351035+	 * Certain debug exceptions may clear bits 0-3 of DR6.10371036	 *10381038-	 * Keep it simple: clear DR6 immediately.10371037+	 * BLD induced #DB clears DR6.BLD and any other debug10381038+	 * exception doesn't modify DR6.BLD.10391039+	 *10401040+	 * RTM induced #DB clears DR6.RTM and any other debug10411041+	 * exception sets DR6.RTM.10421042+	 *10431043+	 * To avoid confusion in identifying debug exceptions,10441044+	 * debug handlers should set DR6.BLD and DR6.RTM, and10451045+	 * clear other DR6 bits before returning.10461046+	 *10471047+	 * Keep it simple: write DR6 with its architectural reset10481048+	 * value 0xFFFF0FF0, defined as DR6_RESERVED, immediately.10391049	 */10401040-	get_debugreg(dr6, 6);10411050	set_debugreg(DR6_RESERVED, 6);10421042-	dr6 ^= DR6_RESERVED; /* Flip to positive polarity */1043105110441052	return dr6;10451053}···12471239/* IST stack entry */12481240DEFINE_IDTENTRY_DEBUG(exc_debug)12491241{12501250-	exc_debug_kernel(regs, debug_read_clear_dr6());12421242+	exc_debug_kernel(regs, debug_read_reset_dr6());12511243}1252124412531245/* User entry, runs on regular task stack */12541246DEFINE_IDTENTRY_DEBUG_USER(exc_debug)12551247{12561256-	exc_debug_user(regs, debug_read_clear_dr6());12481248+	exc_debug_user(regs, debug_read_reset_dr6());12571249}1258125012591251#ifdef CONFIG_X86_FRED···12721264{12731265	/*12741266	 * FRED #DB stores DR6 on the stack in the format which12751275-	 * debug_read_clear_dr6() returns for the IDT entry points.12671267+	 * debug_read_reset_dr6() returns for the IDT entry points.12761268	 */12771269	unsigned long dr6 = fred_event_data(regs);12781270···12871279/* 32 bit does not have separate entry points. */12881280DEFINE_IDTENTRY_RAW(exc_debug)12891281{12901290-	unsigned long dr6 = debug_read_clear_dr6();12821282+	unsigned long dr6 = debug_read_reset_dr6();1291128312921284	if (user_mode(regs))12931285		exc_debug_user(regs, dr6);
+76-7
arch/x86/kvm/vmx/tdx.c
···12121212	/*12131213	 * Converting TDVMCALL_MAP_GPA to KVM_HC_MAP_GPA_RANGE requires12141214	 * userspace to enable KVM_CAP_EXIT_HYPERCALL with KVM_HC_MAP_GPA_RANGE12151215-	 * bit set. If not, the error code is not defined in GHCI for TDX, use12161216-	 * TDVMCALL_STATUS_INVALID_OPERAND for this case.12151215+	 * bit set. This is a base call so it should always be supported, but12161216+	 * KVM has no way to ensure that userspace implements the GHCI correctly.12171217+	 * So if KVM_HC_MAP_GPA_RANGE does not cause a VMEXIT, return an error12181218+	 * to the guest.12171219	 */12181220	if (!user_exit_on_hypercall(vcpu->kvm, KVM_HC_MAP_GPA_RANGE)) {12191219-		ret = TDVMCALL_STATUS_INVALID_OPERAND;12211221+		ret = TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED;12201222		goto error;12211223	}12221224···14511449	return 1;14521450}1453145114521452+static int tdx_complete_get_td_vm_call_info(struct kvm_vcpu *vcpu)14531453+{14541454+	struct vcpu_tdx *tdx = to_tdx(vcpu);14551455+14561456+	tdvmcall_set_return_code(vcpu, vcpu->run->tdx.get_tdvmcall_info.ret);14571457+14581458+	/*14591459+	 * For now, there is no TDVMCALL beyond GHCI base API supported by KVM14601460+	 * directly without the support from userspace, just set the value14611461+	 * returned from userspace.14621462+	 */14631463+	tdx->vp_enter_args.r11 = vcpu->run->tdx.get_tdvmcall_info.r11;14641464+	tdx->vp_enter_args.r12 = vcpu->run->tdx.get_tdvmcall_info.r12;14651465+	tdx->vp_enter_args.r13 = vcpu->run->tdx.get_tdvmcall_info.r13;14661466+	tdx->vp_enter_args.r14 = vcpu->run->tdx.get_tdvmcall_info.r14;14671467+14681468+	return 1;14691469+}14701470+14541471static int tdx_get_td_vm_call_info(struct kvm_vcpu *vcpu)14551472{14561473	struct vcpu_tdx *tdx = to_tdx(vcpu);1457147414581458-	if (tdx->vp_enter_args.r12)14591459-		tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND);14601460-	else {14751475+	switch (tdx->vp_enter_args.r12) {14761476+	case 0:14611477		tdx->vp_enter_args.r11 = 0;14781478+		tdx->vp_enter_args.r12 = 0;14621479		tdx->vp_enter_args.r13 = 0;14631480		tdx->vp_enter_args.r14 = 0;14811481+		tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_SUCCESS);14821482+		return 1;14831483+	case 1:14841484+		vcpu->run->tdx.get_tdvmcall_info.leaf = tdx->vp_enter_args.r12;14851485+		vcpu->run->exit_reason = KVM_EXIT_TDX;14861486+		vcpu->run->tdx.flags = 0;14871487+		vcpu->run->tdx.nr = TDVMCALL_GET_TD_VM_CALL_INFO;14881488+		vcpu->run->tdx.get_tdvmcall_info.ret = TDVMCALL_STATUS_SUCCESS;14891489+		vcpu->run->tdx.get_tdvmcall_info.r11 = 0;14901490+		vcpu->run->tdx.get_tdvmcall_info.r12 = 0;14911491+		vcpu->run->tdx.get_tdvmcall_info.r13 = 0;14921492+		vcpu->run->tdx.get_tdvmcall_info.r14 = 0;14931493+		vcpu->arch.complete_userspace_io = tdx_complete_get_td_vm_call_info;14941494+		return 0;14951495+	default:14961496+		tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND);14971497+		return 1;14641498	}14991499+}15001500+15011501+static int tdx_complete_simple(struct kvm_vcpu *vcpu)15021502+{15031503+	tdvmcall_set_return_code(vcpu, vcpu->run->tdx.unknown.ret);14651504	return 1;15051505+}15061506+15071507+static int tdx_get_quote(struct kvm_vcpu *vcpu)15081508+{15091509+	struct vcpu_tdx *tdx = to_tdx(vcpu);15101510+	u64 gpa = tdx->vp_enter_args.r12;15111511+	u64 size = tdx->vp_enter_args.r13;15121512+15131513+	/* The gpa of buffer must have shared bit set. */15141514+	if (vt_is_tdx_private_gpa(vcpu->kvm, gpa)) {15151515+		tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND);15161516+		return 1;15171517+	}15181518+15191519+	vcpu->run->exit_reason = KVM_EXIT_TDX;15201520+	vcpu->run->tdx.flags = 0;15211521+	vcpu->run->tdx.nr = TDVMCALL_GET_QUOTE;15221522+	vcpu->run->tdx.get_quote.ret = TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED;15231523+	vcpu->run->tdx.get_quote.gpa = gpa & ~gfn_to_gpa(kvm_gfn_direct_bits(tdx->vcpu.kvm));15241524+	vcpu->run->tdx.get_quote.size = size;15251525+15261526+	vcpu->arch.complete_userspace_io = tdx_complete_simple;15271527+15281528+	return 0;14661529}1467153014681531static int handle_tdvmcall(struct kvm_vcpu *vcpu)···15391472		return tdx_report_fatal_error(vcpu);15401473	case TDVMCALL_GET_TD_VM_CALL_INFO:15411474		return tdx_get_td_vm_call_info(vcpu);14751475+	case TDVMCALL_GET_QUOTE:14761476+		return tdx_get_quote(vcpu);15421477	default:15431478		break;15441479	}1545148015461546-	tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND);14811481+	tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED);15471482	return 1;15481483}15491484
···128128static void bdev_count_inflight_rw(struct block_device *part,129129 unsigned int inflight[2], bool mq_driver)130130{131131+ int write = 0;132132+ int read = 0;131133 int cpu;132134133135 if (mq_driver) {134136 blk_mq_in_driver_rw(part, inflight);135135- } else {136136- for_each_possible_cpu(cpu) {137137- inflight[READ] += part_stat_local_read_cpu(138138- part, in_flight[READ], cpu);139139- inflight[WRITE] += part_stat_local_read_cpu(140140- part, in_flight[WRITE], cpu);141141- }137137+ return;142138 }143139144144- if (WARN_ON_ONCE((int)inflight[READ] < 0))145145- inflight[READ] = 0;146146- if (WARN_ON_ONCE((int)inflight[WRITE] < 0))147147- inflight[WRITE] = 0;140140+ for_each_possible_cpu(cpu) {141141+ read += part_stat_local_read_cpu(part, in_flight[READ], cpu);142142+ write += part_stat_local_read_cpu(part, in_flight[WRITE], cpu);143143+ }144144+145145+ /*146146+ * While iterating all CPUs, some IOs may be issued from a CPU already147147+ * traversed and complete on a CPU that has not yet been traversed,148148+ * causing the inflight number to be negative.149149+ */150150+ inflight[READ] = read > 0 ? read : 0;151151+ inflight[WRITE] = write > 0 ? write : 0;148152}149153150154/**
+21-4
crypto/Kconfig
···176176177177config CRYPTO_SELFTESTS178178 bool "Enable cryptographic self-tests"179179- depends on DEBUG_KERNEL179179+ depends on EXPERT180180 help181181 Enable the cryptographic self-tests.182182183183 The cryptographic self-tests run at boot time, or at algorithm184184 registration time if algorithms are dynamically loaded later.185185186186- This is primarily intended for developer use. It should not be187187- enabled in production kernels, unless you are trying to use these188188- tests to fulfill a FIPS testing requirement.186186+ There are two main use cases for these tests:187187+188188+ - Development and pre-release testing. In this case, also enable189189+ CRYPTO_SELFTESTS_FULL to get the full set of tests. All crypto code190190+ in the kernel is expected to pass the full set of tests.191191+192192+ - Production kernels, to help prevent buggy drivers from being used193193+ and/or meet FIPS 140-3 pre-operational testing requirements. In194194+ this case, enable CRYPTO_SELFTESTS but not CRYPTO_SELFTESTS_FULL.195195+196196+config CRYPTO_SELFTESTS_FULL197197+ bool "Enable the full set of cryptographic self-tests"198198+ depends on CRYPTO_SELFTESTS199199+ help200200+ Enable the full set of cryptographic self-tests for each algorithm.201201+202202+ The full set of tests should be enabled for development and203203+ pre-release testing, but not in production kernels.204204+205205+ All crypto code in the kernel is expected to pass the full tests.189206190207config CRYPTO_NULL191208 tristate "Null algorithms"
+3-1
crypto/ahash.c
···600600601601static int ahash_def_finup_finish1(struct ahash_request *req, int err)602602{603603+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);604604+603605 if (err)604606 goto out;605607606608 req->base.complete = ahash_def_finup_done2;607609608608- err = crypto_ahash_final(req);610610+ err = crypto_ahash_alg(tfm)->final(req);609611 if (err == -EINPROGRESS || err == -EBUSY)610612 return err;611613
+12-3
crypto/testmgr.c
···4545module_param(notests, bool, 0644);4646MODULE_PARM_DESC(notests, "disable all crypto self-tests");47474848+#ifdef CONFIG_CRYPTO_SELFTESTS_FULL4849static bool noslowtests;4950module_param(noslowtests, bool, 0644);5051MODULE_PARM_DESC(noslowtests, "disable slow crypto self-tests");···5352static unsigned int fuzz_iterations = 100;5453module_param(fuzz_iterations, uint, 0644);5554MODULE_PARM_DESC(fuzz_iterations, "number of fuzz test iterations");5555+#else5656+#define noslowtests 15757+#define fuzz_iterations 05858+#endif56595760#ifndef CONFIG_CRYPTO_SELFTESTS5861···324319325320/*326321 * The following are the lists of testvec_configs to test for each algorithm327327- * type when the fast crypto self-tests are enabled. They aim to provide good328328- * test coverage, while keeping the test time much shorter than the full tests329329- * so that the fast tests can be used to fulfill FIPS 140 testing requirements.322322+ * type when the "fast" crypto self-tests are enabled. They aim to provide good323323+ * test coverage, while keeping the test time much shorter than the "full" tests324324+ * so that the "fast" tests can be enabled in a wider range of circumstances.330325 */331326332327/* Configs for skciphers and aeads */···1188118311891184static void crypto_disable_simd_for_test(void)11901185{11861186+#ifdef CONFIG_CRYPTO_SELFTESTS_FULL11911187 migrate_disable();11921188 __this_cpu_write(crypto_simd_disabled_for_test, true);11891189+#endif11931190}1194119111951192static void crypto_reenable_simd_for_test(void)11961193{11941194+#ifdef CONFIG_CRYPTO_SELFTESTS_FULL11971195 __this_cpu_write(crypto_simd_disabled_for_test, false);11981196 migrate_enable();11971197+#endif11991198}1200119912011200/*
···483483 return_ACPI_STATUS(AE_NULL_OBJECT);484484 }485485486486+ if (this_walk_state->num_operands < obj_desc->method.param_count) {487487+ ACPI_ERROR((AE_INFO, "Missing argument for method [%4.4s]",488488+ acpi_ut_get_node_name(method_node)));489489+490490+ return_ACPI_STATUS(AE_AML_UNINITIALIZED_ARG);491491+ }492492+486493 /* Init for new method, possibly wait on method mutex */487494488495 status =
+33-6
drivers/ata/ahci.c
···1410141014111411static bool ahci_broken_lpm(struct pci_dev *pdev)14121412{14131413+	/*14141414+	 * Platforms with LPM problems.14151415+	 * If driver_data is NULL, there is no existing BIOS version with14161416+	 * functioning LPM.14171417+	 * If driver_data is non-NULL, then driver_data contains the DMI BIOS14181418+	 * build date of the first BIOS version with functioning LPM (i.e. older14191419+	 * BIOS versions have broken LPM).14201420+	 */14131421	static const struct dmi_system_id sysids[] = {14141414-		/* Various Lenovo 50 series have LPM issues with older BIOSen */14151422		{14161423			.matches = {14171424				DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),···14451438				DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),14461439				DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W541"),14471440			},14411441+			.driver_data = "20180409", /* 2.35 */14421442+		},14431443+		{14441444+			.matches = {14451445+				DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),14461446+				DMI_MATCH(DMI_PRODUCT_NAME, "ASUSPRO D840MB_M840SA"),14471447+			},14481448+			/* 320 is broken, there is no known good version. */14491449+		},14501450+		{14481451			/*14491449-			 * Note date based on release notes, 2.35 has been14501450-			 * reported to be good, but I've been unable to get14511451-			 * a hold of the reporter to get the DMI BIOS date.14521452-			 * TODO: fix this.14521452+			 * AMD 500 Series Chipset SATA Controller [1022:43eb]14531453+			 * on this motherboard timeouts on ports 5 and 6 when14541454+			 * LPM is enabled, at least with WDC WD20EFAX-68FB5N014551455+			 * hard drives. LPM with the same drive works fine on14561456+			 * all other ports on the same controller.14531457			 */14541454-			.driver_data = "20180310", /* 2.35 */14581458+			.matches = {14591459+				DMI_MATCH(DMI_BOARD_VENDOR,14601460+					  "ASUSTeK COMPUTER INC."),14611461+				DMI_MATCH(DMI_BOARD_NAME,14621462+					  "ROG STRIX B550-F GAMING (WI-FI)"),14631463+			},14641464+			/* 3621 is broken, there is no known good version. */14551465		},14561466		{ }	/* terminate list */14571467	};···1478145414791455	if (!dmi)14801456		return false;14571457+14581458+	if (!dmi->driver_data)14591459+		return true;1481146014821461	dmi_get_date(DMI_BIOS_DATE, &year, &month, &date);14831462	snprintf(buf, sizeof(buf), "%04d%02d%02d", year, month, date);
+16-8
drivers/ata/libata-acpi.c
···514514EXPORT_SYMBOL_GPL(ata_acpi_gtm_xfermask);515515516516/**517517- * ata_acpi_cbl_80wire - Check for 80 wire cable517517+ * ata_acpi_cbl_pata_type - Return PATA cable type518518 * @ap: Port to check519519- * @gtm: GTM data to use520519 *521521- * Return 1 if the @gtm indicates the BIOS selected an 80wire mode.520520+ * Return ATA_CBL_PATA* according to the transfer mode selected by BIOS522521 */523523-int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm)522522+int ata_acpi_cbl_pata_type(struct ata_port *ap)524523{525524 struct ata_device *dev;525525+ int ret = ATA_CBL_PATA_UNK;526526+ const struct ata_acpi_gtm *gtm = ata_acpi_init_gtm(ap);527527+528528+ if (!gtm)529529+ return ATA_CBL_PATA40;526530527531 ata_for_each_dev(dev, &ap->link, ENABLED) {528532 unsigned int xfer_mask, udma_mask;···534530 xfer_mask = ata_acpi_gtm_xfermask(dev, gtm);535531 ata_unpack_xfermask(xfer_mask, NULL, NULL, &udma_mask);536532537537- if (udma_mask & ~ATA_UDMA_MASK_40C)538538- return 1;533533+ ret = ATA_CBL_PATA40;534534+535535+ if (udma_mask & ~ATA_UDMA_MASK_40C) {536536+ ret = ATA_CBL_PATA80;537537+ break;538538+ }539539 }540540541541- return 0;541541+ return ret;542542}543543-EXPORT_SYMBOL_GPL(ata_acpi_cbl_80wire);543543+EXPORT_SYMBOL_GPL(ata_acpi_cbl_pata_type);544544545545static void ata_acpi_gtf_to_tf(struct ata_device *dev,546546 const struct ata_acpi_gtf *gtf,
···8080 DEVFL_NEWSIZE = (1<<6), /* need to update dev size in block layer */8181 DEVFL_FREEING = (1<<7), /* set when device is being cleaned up */8282 DEVFL_FREED = (1<<8), /* device has been cleaned up */8383+ DEVFL_DEAD = (1<<9), /* device has timed out of aoe_deadsecs */8384};84858586enum {
···198198{199199 struct aoetgt *t, **tt, **te;200200 struct list_head *head, *pos, *nx;201201+ struct request *rq, *rqnext;201202 int i;203203+ unsigned long flags;202204203203- d->flags &= ~DEVFL_UP;205205+ spin_lock_irqsave(&d->lock, flags);206206+ d->flags &= ~(DEVFL_UP | DEVFL_DEAD);207207+ spin_unlock_irqrestore(&d->lock, flags);204208205209 /* clean out active and to-be-retransmitted buffers */206210 for (i = 0; i < NFACTIVE; i++) {···226222227223 /* clean out the in-process request (if any) */228224 aoe_failip(d);225225+226226+ /* clean out any queued block requests */227227+ list_for_each_entry_safe(rq, rqnext, &d->rq_list, queuelist) {228228+ list_del_init(&rq->queuelist);229229+ blk_mq_start_request(rq);230230+ blk_mq_end_request(rq, BLK_STS_IOERR);231231+ }229232230233 /* fast fail all pending I/O */231234 if (d->blkq) {
drivers/block/ublk_drv.c | +39 -11
@@ ... @@
 	blk_mq_end_request(req, res);
 }
 
-static void ublk_complete_io_cmd(struct ublk_io *io, struct request *req,
-				 int res, unsigned issue_flags)
+static struct io_uring_cmd *__ublk_prep_compl_io_cmd(struct ublk_io *io,
+						     struct request *req)
 {
 	/* read cmd first because req will overwrite it */
 	struct io_uring_cmd *cmd = io->cmd;
@@ ... @@
 	io->flags &= ~UBLK_IO_FLAG_ACTIVE;
 
 	io->req = req;
+	return cmd;
+}
+
+static void ublk_complete_io_cmd(struct ublk_io *io, struct request *req,
+				 int res, unsigned issue_flags)
+{
+	struct io_uring_cmd *cmd = __ublk_prep_compl_io_cmd(io, req);
 
 	/* tell ublksrv one io request is coming */
 	io_uring_cmd_done(cmd, res, 0, issue_flags);
@@ ... @@
 	return BLK_STS_OK;
 }
 
+static inline bool ublk_belong_to_same_batch(const struct ublk_io *io,
+					     const struct ublk_io *io2)
+{
+	return (io_uring_cmd_ctx_handle(io->cmd) ==
+		io_uring_cmd_ctx_handle(io2->cmd)) &&
+	       (io->task == io2->task);
+}
+
 static void ublk_queue_rqs(struct rq_list *rqlist)
 {
 	struct rq_list requeue_list = { };
@@ ... @@
 		struct ublk_queue *this_q = req->mq_hctx->driver_data;
 		struct ublk_io *this_io = &this_q->ios[req->tag];
 
-		if (io && io->task != this_io->task && !rq_list_empty(&submit_list))
+		if (io && !ublk_belong_to_same_batch(io, this_io) &&
+		    !rq_list_empty(&submit_list))
 			ublk_queue_cmd_list(io, &submit_list);
 		io = this_io;
 
@@ ... @@
 	return 0;
 }
 
-static bool ublk_get_data(const struct ublk_queue *ubq, struct ublk_io *io)
+static bool ublk_get_data(const struct ublk_queue *ubq, struct ublk_io *io,
+			  struct request *req)
 {
-	struct request *req = io->req;
-
 	/*
 	 * We have handled UBLK_IO_NEED_GET_DATA command,
 	 * so clear UBLK_IO_FLAG_NEED_GET_DATA now and just
@@ ... @@
 	u32 cmd_op = cmd->cmd_op;
 	unsigned tag = ub_cmd->tag;
 	int ret = -EINVAL;
+	struct request *req;
 
 	pr_devel("%s: received: cmd op %d queue %d tag %d result %d\n",
 			__func__, cmd->cmd_op, ub_cmd->q_id, tag,
@@ ... @@
 			goto out;
 		break;
 	case UBLK_IO_NEED_GET_DATA:
-		io->addr = ub_cmd->addr;
-		if (!ublk_get_data(ubq, io))
-			return -EIOCBQUEUED;
-
-		return UBLK_IO_RES_OK;
+		/*
+		 * ublk_get_data() may fail and fallback to requeue, so keep
+		 * uring_cmd active first and prepare for handling new requeued
+		 * request
+		 */
+		req = io->req;
+		ublk_fill_io_cmd(io, cmd, ub_cmd->addr);
+		io->flags &= ~UBLK_IO_FLAG_OWNED_BY_SRV;
+		if (likely(ublk_get_data(ubq, io, req))) {
+			__ublk_prep_compl_io_cmd(io, req);
+			return UBLK_IO_RES_OK;
+		}
+		break;
 	default:
 		goto out;
 	}
@@ ... @@
 
 	if (copy_from_user(&info, argp, sizeof(info)))
 		return -EFAULT;
+
+	if (info.queue_depth > UBLK_MAX_QUEUE_DEPTH || !info.queue_depth ||
+	    info.nr_hw_queues > UBLK_MAX_NR_QUEUES || !info.nr_hw_queues)
+		return -EINVAL;
 
 	if (capable(CAP_SYS_ADMIN))
 		info.flags &= ~UBLK_F_UNPRIVILEGED_DEV;
drivers/bluetooth/btintel_pcie.c | +31 -2
@@ ... @@
 	data->hdev = NULL;
 }
 
+static void btintel_pcie_disable_interrupts(struct btintel_pcie_data *data)
+{
+	spin_lock(&data->irq_lock);
+	btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_FH_INT_MASK, data->fh_init_mask);
+	btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_HW_INT_MASK, data->hw_init_mask);
+	spin_unlock(&data->irq_lock);
+}
+
+static void btintel_pcie_enable_interrupts(struct btintel_pcie_data *data)
+{
+	spin_lock(&data->irq_lock);
+	btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_FH_INT_MASK, ~data->fh_init_mask);
+	btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_HW_INT_MASK, ~data->hw_init_mask);
+	spin_unlock(&data->irq_lock);
+}
+
+static void btintel_pcie_synchronize_irqs(struct btintel_pcie_data *data)
+{
+	for (int i = 0; i < data->alloc_vecs; i++)
+		synchronize_irq(data->msix_entries[i].vector);
+}
+
 static int btintel_pcie_setup_internal(struct hci_dev *hdev)
 {
 	struct btintel_pcie_data *data = hci_get_drvdata(hdev);
@@ ... @@
 		bt_dev_err(hdev, "Firmware download retry count: %d",
 			   fw_dl_retry);
 		btintel_pcie_dump_debug_registers(hdev);
+		btintel_pcie_disable_interrupts(data);
+		btintel_pcie_synchronize_irqs(data);
 		err = btintel_pcie_reset_bt(data);
 		if (err) {
 			bt_dev_err(hdev, "Failed to do shr reset: %d", err);
@@ ... @@
 		}
 		usleep_range(10000, 12000);
 		btintel_pcie_reset_ia(data);
+		btintel_pcie_enable_interrupts(data);
 		btintel_pcie_config_msix(data);
 		err = btintel_pcie_enable_bt(data);
 		if (err) {
@@ ... @@
 
 	data = pci_get_drvdata(pdev);
 
+	btintel_pcie_disable_interrupts(data);
+
+	btintel_pcie_synchronize_irqs(data);
+
+	flush_work(&data->rx_work);
+
 	btintel_pcie_reset_bt(data);
 	for (int i = 0; i < data->alloc_vecs; i++) {
 		struct msix_entry *msix_entry;
@@ ... @@
 	pci_free_irq_vectors(pdev);
 
 	btintel_pcie_release_hdev(data);
-
-	flush_work(&data->rx_work);
 
 	destroy_workqueue(data->workqueue);
drivers/bluetooth/hci_qca.c | +10 -3
@@ ... @@
 		 */
 		qcadev->bt_power->pwrseq = devm_pwrseq_get(&serdev->dev,
 							   "bluetooth");
-		if (IS_ERR(qcadev->bt_power->pwrseq))
-			return PTR_ERR(qcadev->bt_power->pwrseq);
 
-		break;
+		/*
+		 * Some modules have BT_EN enabled via a hardware pull-up,
+		 * meaning it is not defined in the DTS and is not controlled
+		 * through the power sequence. In such cases, fall through
+		 * to follow the legacy flow.
+		 */
+		if (IS_ERR(qcadev->bt_power->pwrseq))
+			qcadev->bt_power->pwrseq = NULL;
+		else
+			break;
 	}
 	fallthrough;
 	case QCA_WCN3950:
drivers/cxl/core/edac.c | +13 -5
@@ ... @@
 			      u8 *cap, u16 *cycle, u8 *flags, u8 *min_cycle)
 {
 	struct cxl_mailbox *cxl_mbox;
-	u8 min_scrub_cycle = U8_MAX;
 	struct cxl_region_params *p;
 	struct cxl_memdev *cxlmd;
 	struct cxl_region *cxlr;
+	u8 min_scrub_cycle = 0;
 	int i, ret;
 
 	if (!cxl_ps_ctx->cxlr) {
@@ ... @@
 		if (ret)
 			return ret;
 
+		/*
+		 * The min_scrub_cycle of a region is the max of minimum scrub
+		 * cycles supported by memdevs that back the region.
+		 */
 		if (min_cycle)
-			min_scrub_cycle = min(*min_cycle, min_scrub_cycle);
+			min_scrub_cycle = max(*min_cycle, min_scrub_cycle);
 	}
 
 	if (min_cycle)
@@ ... @@
 	old_rec = xa_store(&array_rec->rec_gen_media,
 			   le64_to_cpu(rec->media_hdr.phys_addr), rec,
 			   GFP_KERNEL);
-	if (xa_is_err(old_rec))
+	if (xa_is_err(old_rec)) {
+		kfree(rec);
 		return xa_err(old_rec);
+	}
 
 	kfree(old_rec);
 
@@ ... @@
 	old_rec = xa_store(&array_rec->rec_dram,
 			   le64_to_cpu(rec->media_hdr.phys_addr), rec,
 			   GFP_KERNEL);
-	if (xa_is_err(old_rec))
+	if (xa_is_err(old_rec)) {
+		kfree(rec);
 		return xa_err(old_rec);
+	}
 
 	kfree(old_rec);
 
@@ ... @@
 		attrbs.bank = ctx->bank;
 		break;
 	case EDAC_REPAIR_RANK_SPARING:
-		attrbs.repair_type = CXL_BANK_SPARING;
+		attrbs.repair_type = CXL_RANK_SPARING;
 		break;
 	default:
 		return NULL;
drivers/cxl/core/features.c | +1 -1
@@ ... @@
 	u32 flags;
 
 	if (rpc_in->op_size < sizeof(uuid_t))
-		return ERR_PTR(-EINVAL);
+		return false;
 
 	feat = cxl_feature_info(cxlfs, &rpc_in->set_feat_in.uuid);
 	if (IS_ERR(feat))
@@ ... @@
 			continue;
 		}
 		job = to_amdgpu_job(s_job);
-		if (preempted && (&job->hw_fence) == fence)
+		if (preempted && (&job->hw_fence.base) == fence)
 			/* mark the job as preempted */
 			job->preemption_status |= AMDGPU_IB_PREEMPTED;
 	}
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | +56 -26
@@ ... @@
 	return ret;
 }
 
-static int amdgpu_device_halt_activities(struct amdgpu_device *adev,
-					 struct amdgpu_job *job,
-					 struct amdgpu_reset_context *reset_context,
-					 struct list_head *device_list,
-					 struct amdgpu_hive_info *hive,
-					 bool need_emergency_restart)
+static int amdgpu_device_recovery_prepare(struct amdgpu_device *adev,
+					  struct list_head *device_list,
+					  struct amdgpu_hive_info *hive)
 {
-	struct list_head *device_list_handle = NULL;
 	struct amdgpu_device *tmp_adev = NULL;
-	int i, r = 0;
+	int r;
 
 	/*
 	 * Build list of devices to reset.
@@ ... @@
 		}
 		if (!list_is_first(&adev->reset_list, device_list))
 			list_rotate_to_front(&adev->reset_list, device_list);
-		device_list_handle = device_list;
 	} else {
 		list_add_tail(&adev->reset_list, device_list);
-		device_list_handle = device_list;
 	}
 
 	if (!amdgpu_sriov_vf(adev) && (!adev->pcie_reset_ctx.occurs_dpc)) {
-		r = amdgpu_device_health_check(device_list_handle);
+		r = amdgpu_device_health_check(device_list);
 		if (r)
 			return r;
 	}
 
-	/* We need to lock reset domain only once both for XGMI and single device */
-	tmp_adev = list_first_entry(device_list_handle, struct amdgpu_device,
-				    reset_list);
+	return 0;
+}
+
+static void amdgpu_device_recovery_get_reset_lock(struct amdgpu_device *adev,
+						  struct list_head *device_list)
+{
+	struct amdgpu_device *tmp_adev = NULL;
+
+	if (list_empty(device_list))
+		return;
+	tmp_adev =
+		list_first_entry(device_list, struct amdgpu_device, reset_list);
 	amdgpu_device_lock_reset_domain(tmp_adev->reset_domain);
+}
+
+static void amdgpu_device_recovery_put_reset_lock(struct amdgpu_device *adev,
+						  struct list_head *device_list)
+{
+	struct amdgpu_device *tmp_adev = NULL;
+
+	if (list_empty(device_list))
+		return;
+	tmp_adev =
+		list_first_entry(device_list, struct amdgpu_device, reset_list);
+	amdgpu_device_unlock_reset_domain(tmp_adev->reset_domain);
+}
+
+static int amdgpu_device_halt_activities(
+	struct amdgpu_device *adev, struct amdgpu_job *job,
+	struct amdgpu_reset_context *reset_context,
+	struct list_head *device_list, struct amdgpu_hive_info *hive,
+	bool need_emergency_restart)
+{
+	struct amdgpu_device *tmp_adev = NULL;
+	int i, r = 0;
 
 	/* block all schedulers and reset given job's ring */
-	list_for_each_entry(tmp_adev, device_list_handle, reset_list) {
-
+	list_for_each_entry(tmp_adev, device_list, reset_list) {
 		amdgpu_device_set_mp1_state(tmp_adev);
 
 		/*
@@ ... @@
 			amdgpu_ras_set_error_query_ready(tmp_adev, true);
 
 	}
-
-	tmp_adev = list_first_entry(device_list, struct amdgpu_device,
-				    reset_list);
-	amdgpu_device_unlock_reset_domain(tmp_adev->reset_domain);
-
 }
 
 
@@ ... @@
 	reset_context->hive = hive;
 	INIT_LIST_HEAD(&device_list);
 
+	if (amdgpu_device_recovery_prepare(adev, &device_list, hive))
+		goto end_reset;
+
+	/* We need to lock reset domain only once both for XGMI and single device */
+	amdgpu_device_recovery_get_reset_lock(adev, &device_list);
+
 	r = amdgpu_device_halt_activities(adev, job, reset_context, &device_list,
 					  hive, need_emergency_restart);
 	if (r)
-		goto end_reset;
+		goto reset_unlock;
 
 	if (need_emergency_restart)
 		goto skip_sched_resume;
@@ ... @@
 	 *
 	 * job->base holds a reference to parent fence
 	 */
-	if (job && dma_fence_is_signaled(&job->hw_fence)) {
+	if (job && dma_fence_is_signaled(&job->hw_fence.base)) {
 		job_signaled = true;
 		dev_info(adev->dev, "Guilty job already signaled, skipping HW reset");
 		goto skip_hw_reset;
@@ ... @@
 
 	r = amdgpu_device_asic_reset(adev, &device_list, reset_context);
 	if (r)
-		goto end_reset;
+		goto reset_unlock;
 skip_hw_reset:
 	r = amdgpu_device_sched_resume(&device_list, reset_context, job_signaled);
 	if (r)
-		goto end_reset;
+		goto reset_unlock;
 skip_sched_resume:
 	amdgpu_device_gpu_resume(adev, &device_list, need_emergency_restart);
+reset_unlock:
+	amdgpu_device_recovery_put_reset_lock(adev, &device_list);
 end_reset:
 	if (hive) {
 		mutex_unlock(&hive->hive_lock);
@@ ... @@
 	memset(&reset_context, 0, sizeof(reset_context));
 	INIT_LIST_HEAD(&device_list);
 
+	amdgpu_device_recovery_prepare(adev, &device_list, hive);
+	amdgpu_device_recovery_get_reset_lock(adev, &device_list);
 	r = amdgpu_device_halt_activities(adev, NULL, &reset_context, &device_list,
 					  hive, false);
 	if (hive) {
@@ ... @@
 		if (hive) {
 			list_for_each_entry(tmp_adev, &device_list, reset_list)
 				amdgpu_device_unset_mp1_state(tmp_adev);
-			amdgpu_device_unlock_reset_domain(adev->reset_domain);
 		}
+		amdgpu_device_recovery_put_reset_lock(adev, &device_list);
 	}
 
 	if (hive) {
@@ ... @@
 
 	amdgpu_device_sched_resume(&device_list, NULL, NULL);
 	amdgpu_device_gpu_resume(adev, &device_list, false);
+	amdgpu_device_recovery_put_reset_lock(adev, &device_list);
 	adev->pcie_reset_ctx.occurs_dpc = false;
 
 	if (hive) {
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | +13 -15
@@ ... @@
 	const struct firmware *fw;
 	int r;
 
-	r = request_firmware(&fw, fw_name, adev->dev);
+	r = firmware_request_nowarn(&fw, fw_name, adev->dev);
 	if (r) {
-		dev_err(adev->dev, "can't load firmware \"%s\"\n",
-			fw_name);
+		if (amdgpu_discovery == 2)
+			dev_err(adev->dev, "can't load firmware \"%s\"\n", fw_name);
+		else
+			drm_info(&adev->ddev, "Optional firmware \"%s\" was not found\n", fw_name);
 		return r;
 	}
 
@@ ... @@
 	/* Read from file if it is the preferred option */
 	fw_name = amdgpu_discovery_get_fw_name(adev);
 	if (fw_name != NULL) {
-		dev_info(adev->dev, "use ip discovery information from file");
+		drm_dbg(&adev->ddev, "use ip discovery information from file");
 		r = amdgpu_discovery_read_binary_from_file(adev, adev->mman.discovery_bin, fw_name);
-
-		if (r) {
-			dev_err(adev->dev, "failed to read ip discovery binary from file\n");
-			r = -EINVAL;
+		if (r)
 			goto out;
-		}
-
 	} else {
+		drm_dbg(&adev->ddev, "use ip discovery information from memory");
 		r = amdgpu_discovery_read_binary_from_mem(
 			adev, adev->mman.discovery_bin);
 		if (r)
@@ ... @@
 	int r;
 
 	r = amdgpu_discovery_init(adev);
-	if (r) {
-		DRM_ERROR("amdgpu_discovery_init failed\n");
+	if (r)
 		return r;
-	}
 
 	wafl_ver = 0;
 	adev->gfx.xcc_mask = 0;
@@ ... @@
 		break;
 	default:
 		r = amdgpu_discovery_reg_base_init(adev);
-		if (r)
-			return -EINVAL;
+		if (r) {
+			drm_err(&adev->ddev, "discovery failed: %d\n", r);
+			return r;
+		}
 
 		amdgpu_discovery_harvest_ip(adev);
 		amdgpu_discovery_get_gfx_info(adev);
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | +7 -23
@@ ... @@
 #include "amdgpu_trace.h"
 #include "amdgpu_reset.h"
 
-/*
- * Fences mark an event in the GPUs pipeline and are used
- * for GPU/CPU synchronization. When the fence is written,
- * it is expected that all buffers associated with that fence
- * are no longer in use by the associated ring on the GPU and
- * that the relevant GPU caches have been flushed.
- */
-
-struct amdgpu_fence {
-	struct dma_fence base;
-
-	/* RB, DMA, etc. */
-	struct amdgpu_ring		*ring;
-	ktime_t				start_timestamp;
-};
-
 static struct kmem_cache *amdgpu_fence_slab;
 
 int amdgpu_fence_slab_init(void)
@@ ... @@
 		am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
 		if (am_fence == NULL)
 			return -ENOMEM;
-		fence = &am_fence->base;
-		am_fence->ring = ring;
 	} else {
 		/* take use of job-embedded fence */
-		fence = &job->hw_fence;
+		am_fence = &job->hw_fence;
 	}
+	fence = &am_fence->base;
+	am_fence->ring = ring;
 
 	seq = ++ring->fence_drv.sync_seq;
 	if (job && job->job_run_counter) {
@@ ... @@
 		 * it right here or we won't be able to track them in fence_drv
 		 * and they will remain unsignaled during sa_bo free.
		 */
-		job = container_of(old, struct amdgpu_job, hw_fence);
+		job = container_of(old, struct amdgpu_job, hw_fence.base);
 		if (!job->base.s_fence && !dma_fence_is_signaled(old))
 			dma_fence_signal(old);
 		RCU_INIT_POINTER(*ptr, NULL);
@@ ... @@
 
 static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f)
 {
-	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
+	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
 
 	return (const char *)to_amdgpu_ring(job->base.sched)->name;
 }
@@ ... @@
  */
 static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f)
 {
-	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
+	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
 
 	if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer))
 		amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched));
@@ ... @@
 	struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
 
 	/* free job if fence has a parent job */
-	kfree(container_of(f, struct amdgpu_job, hw_fence));
+	kfree(container_of(f, struct amdgpu_job, hw_fence.base));
 }
 
 /**
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | +6 -6
@@ ... @@
 	/* Check if any fences where initialized */
 	if (job->base.s_fence && job->base.s_fence->finished.ops)
 		f = &job->base.s_fence->finished;
-	else if (job->hw_fence.ops)
-		f = &job->hw_fence;
+	else if (job->hw_fence.base.ops)
+		f = &job->hw_fence.base;
 	else
 		f = NULL;
 
@@ ... @@
 	amdgpu_sync_free(&job->explicit_sync);
 
 	/* only put the hw fence if has embedded fence */
-	if (!job->hw_fence.ops)
+	if (!job->hw_fence.base.ops)
 		kfree(job);
 	else
-		dma_fence_put(&job->hw_fence);
+		dma_fence_put(&job->hw_fence.base);
 }
 
 void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
@@ ... @@
 	if (job->gang_submit != &job->base.s_fence->scheduled)
 		dma_fence_put(job->gang_submit);
 
-	if (!job->hw_fence.ops)
+	if (!job->hw_fence.base.ops)
 		kfree(job);
 	else
-		dma_fence_put(&job->hw_fence);
+		dma_fence_put(&job->hw_fence.base);
 }
 
 struct dma_fence *amdgpu_job_submit(struct amdgpu_job *job)
@@ ... @@
 	struct dma_fence		**fences;
 };
 
+/*
+ * Fences mark an event in the GPUs pipeline and are used
+ * for GPU/CPU synchronization. When the fence is written,
+ * it is expected that all buffers associated with that fence
+ * are no longer in use by the associated ring on the GPU and
+ * that the relevant GPU caches have been flushed.
+ */
+
+struct amdgpu_fence {
+	struct dma_fence base;
+
+	/* RB, DMA, etc. */
+	struct amdgpu_ring		*ring;
+	ktime_t				start_timestamp;
+};
+
 extern const struct drm_sched_backend_ops amdgpu_sched_ops;
 
 void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c | +6 -4
@@ ... @@
 	case IP_VERSION(4, 4, 2):
 	case IP_VERSION(4, 4, 4):
 	case IP_VERSION(4, 4, 5):
-		/* For SDMA 4.x, use the existing DPM interface for backward compatibility */
-		r = amdgpu_dpm_reset_sdma(adev, 1 << instance_id);
+		/* For SDMA 4.x, use the existing DPM interface for backward compatibility,
+		 * we need to convert the logical instance ID to physical instance ID before reset.
+		 */
+		r = amdgpu_dpm_reset_sdma(adev, 1 << GET_INST(SDMA0, instance_id));
 		break;
 	case IP_VERSION(5, 0, 0):
 	case IP_VERSION(5, 0, 1):
@@ ... @@
 /**
  * amdgpu_sdma_reset_engine - Reset a specific SDMA engine
  * @adev: Pointer to the AMDGPU device
- * @instance_id: ID of the SDMA engine instance to reset
+ * @instance_id: Logical ID of the SDMA engine instance to reset
  *
  * Returns: 0 on success, or a negative error code on failure.
  */
@@ ... @@
 	/* Perform the SDMA reset for the specified instance */
 	ret = amdgpu_sdma_soft_reset(adev, instance_id);
 	if (ret) {
-		dev_err(adev->dev, "Failed to reset SDMA instance %u\n", instance_id);
+		dev_err(adev->dev, "Failed to reset SDMA logical instance %u\n", instance_id);
 		goto exit;
 	}
 
drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | +17
@@ ... @@
 
 #define AMDGPU_UCODE_NAME_MAX		(128)
 
+static const struct kicker_device kicker_device_list[] = {
+	{0x744B, 0x00},
+};
+
 static void amdgpu_ucode_print_common_hdr(const struct common_firmware_header *hdr)
 {
 	DRM_DEBUG("size_bytes: %u\n", le32_to_cpu(hdr->size_bytes));
@@ ... @@
 		}
 	}
 	return NULL;
+}
+
+bool amdgpu_is_kicker_fw(struct amdgpu_device *adev)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(kicker_device_list); i++) {
+		if (adev->pdev->device == kicker_device_list[i].device &&
+		    adev->pdev->revision == kicker_device_list[i].revision)
+			return true;
+	}
+
+	return false;
 }
 
 void amdgpu_ucode_ip_version_decode(struct amdgpu_device *adev, int block_type, char *ucode_prefix, int len)
@@ ... @@
 	DC_LOG_DC("BIOS object table - end");
 
 	/* Create a link for each usb4 dpia port */
+	dc->lowest_dpia_link_index = MAX_LINKS;
 	for (i = 0; i < dc->res_pool->usb4_dpia_count; i++) {
 		struct link_init_data link_init_params = {0};
 		struct dc_link *link;
@@ ... @@
 
 		link = dc->link_srv->create_link(&link_init_params);
 		if (link) {
+			if (dc->lowest_dpia_link_index > dc->link_count)
+				dc->lowest_dpia_link_index = dc->link_count;
+
 			dc->links[dc->link_count] = link;
 			link->dc = dc;
 			++dc->link_count;
@@ ... @@
 		return dc->res_pool->funcs->get_det_buffer_size(context);
 	else
 		return 0;
+}
+/**
+ ***********************************************************************************************
+ * dc_get_host_router_index: Get index of host router from a dpia link
+ *
+ * This function return a host router index of the target link. If the target link is dpia link.
+ *
+ * @param [in] link: target link
+ * @param [out] host_router_index: host router index of the target link
+ *
+ * @return: true if the host router index is found and valid.
+ *
+ ***********************************************************************************************
+ */
+bool dc_get_host_router_index(const struct dc_link *link, unsigned int *host_router_index)
+{
+	struct dc *dc = link->ctx->dc;
+
+	if (link->ep_type != DISPLAY_ENDPOINT_USB4_DPIA)
+		return false;
+
+	if (link->link_index < dc->lowest_dpia_link_index)
+		return false;
+
+	*host_router_index = (link->link_index - dc->lowest_dpia_link_index) / dc->caps.num_of_dpias_per_host_router;
+	if (*host_router_index < dc->caps.num_of_host_routers)
+		return true;
+	else
+		return false;
 }
 
 bool dc_is_cursor_limit_pending(struct dc *dc)
@@ ... @@
 		plane->pixel_format = dml2_420_10;
 		break;
 	case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616:
+	case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616:
 	case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616F:
 	case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616F:
 		plane->pixel_format = dml2_444_64;
@@ ... @@
 		out->SourcePixelFormat[location] = dml_420_10;
 		break;
 	case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616:
+	case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616:
 	case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616F:
 	case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616F:
 		out->SourcePixelFormat[location] = dml_444_64;
@@ ... @@
 	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
 	dc->caps.color.mpc.ocsc = 1;
 
+	dc->caps.num_of_host_routers = 2;
+	dc->caps.num_of_dpias_per_host_router = 2;
+
 	/* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order
 	 * to provide some margin.
 	 * It's expected for furture ASIC to have equal or higher value, in order to
@@ ... @@
 	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
 	dc->caps.color.mpc.ocsc = 1;
 
+	dc->caps.num_of_host_routers = 2;
+	dc->caps.num_of_dpias_per_host_router = 2;
+
 	/* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order
 	 * to provide some margin.
 	 * It's expected for furture ASIC to have equal or higher value, in order to
@@ ... @@
 	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
 	dc->caps.color.mpc.ocsc = 1;
 
+	dc->caps.num_of_host_routers = 2;
+	dc->caps.num_of_dpias_per_host_router = 2;
+
 	/* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order
 	 * to provide some margin.
 	 * It's expected for furture ASIC to have equal or higher value, in order to
@@ ... @@
 	 * 200 ms. We'll assume that the panel driver will have the hardcoded
 	 * delay in its prepare and always disable HPD.
 	 *
-	 * If HPD somehow makes sense on some future panel we'll have to
-	 * change this to be conditional on someone specifying that HPD should
-	 * be used.
+	 * For DisplayPort bridge type, we need HPD. So we use the bridge type
+	 * to conditionally disable HPD.
+	 * NOTE: The bridge type is set in ti_sn_bridge_probe() but enable_comms()
+	 * can be called before. So for DisplayPort, HPD will be enabled once
+	 * bridge type is set. We are using bridge type instead of "no-hpd"
+	 * property because it is not used properly in devicetree description
+	 * and hence is unreliable.
 	 */
-	regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG, HPD_DISABLE,
-			   HPD_DISABLE);
+
+	if (pdata->bridge.type != DRM_MODE_CONNECTOR_DisplayPort)
+		regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG, HPD_DISABLE,
+				   HPD_DISABLE);
 
 	pdata->comms_enabled = true;
 
@@ ... @@
 	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
 	int val = 0;
 
-	pm_runtime_get_sync(pdata->dev);
+	/*
+	 * Runtime reference is grabbed in ti_sn_bridge_hpd_enable()
+	 * as the chip won't report HPD just after being powered on.
+	 * HPD_DEBOUNCED_STATE reflects correct state only after the
+	 * debounce time (~100-400 ms).
+	 */
+
 	regmap_read(pdata->regmap, SN_HPD_DISABLE_REG, &val);
-	pm_runtime_put_autosuspend(pdata->dev);
 
 	return val & HPD_DEBOUNCED_STATE ? connector_status_connected
 					 : connector_status_disconnected;
@@ ... @@
 	debugfs_create_file("status", 0600, debugfs, pdata, &status_fops);
 }
 
+static void ti_sn_bridge_hpd_enable(struct drm_bridge *bridge)
+{
+	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
+
+	/*
+	 * Device needs to be powered on before reading the HPD state
+	 * for reliable hpd detection in ti_sn_bridge_detect() due to
+	 * the high debounce time.
+	 */
+
+	pm_runtime_get_sync(pdata->dev);
+}
+
+static void ti_sn_bridge_hpd_disable(struct drm_bridge *bridge)
+{
+	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
+
+	pm_runtime_put_autosuspend(pdata->dev);
+}
+
 static const struct drm_bridge_funcs ti_sn_bridge_funcs = {
 	.attach = ti_sn_bridge_attach,
 	.detach = ti_sn_bridge_detach,
@@ ... @@
 	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
 	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
 	.debugfs_init = ti_sn65dsi86_debugfs_init,
+	.hpd_enable = ti_sn_bridge_hpd_enable,
+	.hpd_disable = ti_sn_bridge_hpd_disable,
 };
 
 static void ti_sn_bridge_parse_lanes(struct ti_sn65dsi86 *pdata,
@@ ... @@
 	pdata->bridge.type = pdata->next_bridge->type == DRM_MODE_CONNECTOR_DisplayPort
 			   ? DRM_MODE_CONNECTOR_DisplayPort : DRM_MODE_CONNECTOR_eDP;
 
-	if (pdata->bridge.type == DRM_MODE_CONNECTOR_DisplayPort)
-		pdata->bridge.ops = DRM_BRIDGE_OP_EDID | DRM_BRIDGE_OP_DETECT;
+	if (pdata->bridge.type == DRM_MODE_CONNECTOR_DisplayPort) {
+		pdata->bridge.ops = DRM_BRIDGE_OP_EDID | DRM_BRIDGE_OP_DETECT |
+				    DRM_BRIDGE_OP_HPD;
+		/*
+		 * If comms were already enabled they would have been enabled
+		 * with the wrong value of HPD_DISABLE. Update it now. Comms
+		 * could be enabled if anyone is holding a pm_runtime reference
+		 * (like if a GPIO is in use). Note that in most cases nobody
+		 * is doing AUX channel xfers before the bridge is added so
+		 * HPD doesn't _really_ matter then. The only exception is in
+		 * the eDP case where the panel wants to read the EDID before
+		 * the bridge is added. We always consistently have HPD disabled
+		 * for eDP.
+		 */
+		mutex_lock(&pdata->comms_mutex);
+		if (pdata->comms_enabled)
+			regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG,
+					   HPD_DISABLE, 0);
+		mutex_unlock(&pdata->comms_mutex);
+	};
 
 	drm_bridge_add(&pdata->bridge);
 
drivers/gpu/drm/display/drm_bridge_connector.c | +5 -2
@@ ... @@
 	if (bridge_connector->bridge_hdmi_audio ||
 	    bridge_connector->bridge_dp_audio) {
 		struct device *dev;
+		struct drm_bridge *bridge;
 
 		if (bridge_connector->bridge_hdmi_audio)
-			dev = bridge_connector->bridge_hdmi_audio->hdmi_audio_dev;
+			bridge = bridge_connector->bridge_hdmi_audio;
 		else
-			dev = bridge_connector->bridge_dp_audio->hdmi_audio_dev;
+			bridge = bridge_connector->bridge_dp_audio;
+
+		dev = bridge->hdmi_audio_dev;
 
 		ret = drm_connector_hdmi_audio_init(connector, dev,
 						    &drm_bridge_connector_hdmi_audio_funcs,
drivers/gpu/drm/display/drm_dp_helper.c | +1 -1
@@ ... @@
 	 * monitor doesn't power down exactly after the throw away read.
 	 */
 	if (!aux->is_remote) {
-		ret = drm_dp_dpcd_probe(aux, DP_DPCD_REV);
+		ret = drm_dp_dpcd_probe(aux, DP_LANE0_1_STATUS);
 		if (ret < 0)
 			return ret;
 	}
+4-3
drivers/gpu/drm/drm_writeback.c
···
 /**
  * drm_writeback_connector_cleanup - Cleanup the writeback connector
  * @dev: DRM device
- * @wb_connector: Pointer to the writeback connector to clean up
+ * @data: Pointer to the writeback connector to clean up
  *
  * This will decrement the reference counter of blobs and destroy properties. It
  * will also clean the remaining jobs in this writeback connector. Caution: This helper will not
  * clean up the attached encoder and the drm_connector.
  */
 static void drm_writeback_connector_cleanup(struct drm_device *dev,
-					    struct drm_writeback_connector *wb_connector)
+					    void *data)
 {
 	unsigned long flags;
 	struct drm_writeback_job *pos, *n;
+	struct drm_writeback_connector *wb_connector = data;
 
 	delete_writeback_properties(dev);
 	drm_property_blob_put(wb_connector->pixel_formats_blob_ptr);
···
 	if (ret)
 		return ret;
 
-	ret = drmm_add_action_or_reset(dev, (void *)drm_writeback_connector_cleanup,
+	ret = drmm_add_action_or_reset(dev, drm_writeback_connector_cleanup,
 				       wb_connector);
 	if (ret)
 		return ret;
···
 	struct msm_ringbuffer *ring = submit->ring;
 	unsigned int i, ibs = 0;
 
+	adreno_check_and_reenable_stall(adreno_gpu);
+
 	if (IS_ENABLED(CONFIG_DRM_MSM_GPU_SUDO) && submit->in_rb) {
 		ring->cur_ctx_seqno = 0;
 		a5xx_submit_in_rb(gpu, submit);
+18
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
···
 		OUT_RING(ring, lower_32_bits(rbmemptr(ring, fence)));
 		OUT_RING(ring, upper_32_bits(rbmemptr(ring, fence)));
 		OUT_RING(ring, submit->seqno - 1);
+
+		OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
+		OUT_RING(ring, CP_SET_THREAD_BOTH);
+
+		/* Reset state used to synchronize BR and BV */
+		OUT_PKT7(ring, CP_RESET_CONTEXT_STATE, 1);
+		OUT_RING(ring,
+			 CP_RESET_CONTEXT_STATE_0_CLEAR_ON_CHIP_TS |
+			 CP_RESET_CONTEXT_STATE_0_CLEAR_RESOURCE_TABLE |
+			 CP_RESET_CONTEXT_STATE_0_CLEAR_BV_BR_COUNTER |
+			 CP_RESET_CONTEXT_STATE_0_RESET_GLOBAL_LOCAL_TS);
+
+		OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
+		OUT_RING(ring, CP_SET_THREAD_BR);
 	}
 
 	if (!sysprof) {
···
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
 	struct msm_ringbuffer *ring = submit->ring;
 	unsigned int i, ibs = 0;
+
+	adreno_check_and_reenable_stall(adreno_gpu);
 
 	a6xx_set_pagetable(a6xx_gpu, ring, submit);
···
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
 	struct msm_ringbuffer *ring = submit->ring;
 	unsigned int i, ibs = 0;
+
+	adreno_check_and_reenable_stall(adreno_gpu);
 
 	/*
 	 * Toggle concurrent binning for pagetable switch and set the thread to
+29-10
drivers/gpu/drm/msm/adreno/adreno_device.c
···
 	return NULL;
 }
 
-static int find_chipid(struct device *dev, uint32_t *chipid)
+static int find_chipid(struct device_node *node, uint32_t *chipid)
 {
-	struct device_node *node = dev->of_node;
 	const char *compat;
 	int ret;
···
 	/* and if that fails, fall back to legacy "qcom,chipid" property: */
 	ret = of_property_read_u32(node, "qcom,chipid", chipid);
 	if (ret) {
-		DRM_DEV_ERROR(dev, "could not parse qcom,chipid: %d\n", ret);
+		DRM_ERROR("%pOF: could not parse qcom,chipid: %d\n",
+			  node, ret);
 		return ret;
 	}
 
-	dev_warn(dev, "Using legacy qcom,chipid binding!\n");
+	pr_warn("%pOF: Using legacy qcom,chipid binding!\n", node);
 
 	return 0;
+}
+
+bool adreno_has_gpu(struct device_node *node)
+{
+	const struct adreno_info *info;
+	uint32_t chip_id;
+	int ret;
+
+	ret = find_chipid(node, &chip_id);
+	if (ret)
+		return false;
+
+	info = adreno_info(chip_id);
+	if (!info) {
+		pr_warn("%pOF: Unknown GPU revision: %"ADRENO_CHIPID_FMT"\n",
+			node, ADRENO_CHIPID_ARGS(chip_id));
+		return false;
+	}
+
+	return true;
 }
 
 static int adreno_bind(struct device *dev, struct device *master, void *data)
···
 	struct msm_gpu *gpu;
 	int ret;
 
-	ret = find_chipid(dev, &config.chip_id);
-	if (ret)
+	ret = find_chipid(dev->of_node, &config.chip_id);
+	/* We shouldn't have gotten this far if we can't parse the chip_id */
+	if (WARN_ON(ret))
 		return ret;
 
 	dev->platform_data = &config;
 	priv->gpu_pdev = to_platform_device(dev);
 
 	info = adreno_info(config.chip_id);
-	if (!info) {
-		dev_warn(drm->dev, "Unknown GPU revision: %"ADRENO_CHIPID_FMT"\n",
-			 ADRENO_CHIPID_ARGS(config.chip_id));
+	/* We shouldn't have gotten this far if we don't recognize the GPU: */
+	if (WARN_ON(!info))
 		return -ENXIO;
-	}
 
 	config.info = info;
+43-11
drivers/gpu/drm/msm/adreno/adreno_gpu.c
···
 	return BIT(ttbr1_cfg->ias) - ADRENO_VM_START;
 }
 
+void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu)
+{
+	struct msm_gpu *gpu = &adreno_gpu->base;
+	struct msm_drm_private *priv = gpu->dev->dev_private;
+	unsigned long flags;
+
+	/*
+	 * Wait until the cooldown period has passed and we would actually
+	 * collect a crashdump to re-enable stall-on-fault.
+	 */
+	spin_lock_irqsave(&priv->fault_stall_lock, flags);
+	if (!priv->stall_enabled &&
+	    ktime_after(ktime_get(), priv->stall_reenable_time) &&
+	    !READ_ONCE(gpu->crashstate)) {
+		priv->stall_enabled = true;
+
+		gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, true);
+	}
+	spin_unlock_irqrestore(&priv->fault_stall_lock, flags);
+}
+
 #define ARM_SMMU_FSR_TF			BIT(1)
 #define ARM_SMMU_FSR_PF			BIT(3)
 #define ARM_SMMU_FSR_EF			BIT(4)
+#define ARM_SMMU_FSR_SS			BIT(30)
 
 int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 			 struct adreno_smmu_fault_info *info, const char *block,
 			 u32 scratch[4])
 {
+	struct msm_drm_private *priv = gpu->dev->dev_private;
 	const char *type = "UNKNOWN";
-	bool do_devcoredump = info && !READ_ONCE(gpu->crashstate);
+	bool do_devcoredump = info && (info->fsr & ARM_SMMU_FSR_SS) &&
+			      !READ_ONCE(gpu->crashstate);
+	unsigned long irq_flags;
 
 	/*
-	 * If we aren't going to be resuming later from fault_worker, then do
-	 * it now.
+	 * In case there is a subsequent storm of pagefaults, disable
+	 * stall-on-fault for at least half a second.
 	 */
-	if (!do_devcoredump) {
-		gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu);
+	spin_lock_irqsave(&priv->fault_stall_lock, irq_flags);
+	if (priv->stall_enabled) {
+		priv->stall_enabled = false;
+
+		gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, false);
 	}
+	priv->stall_reenable_time = ktime_add_ms(ktime_get(), 500);
+	spin_unlock_irqrestore(&priv->fault_stall_lock, irq_flags);
 
 	/*
 	 * Print a default message if we couldn't get the data from the
···
 		scratch[0], scratch[1], scratch[2], scratch[3]);
 
 	if (do_devcoredump) {
+		struct msm_gpu_fault_info fault_info = {};
+
 		/* Turn off the hangcheck timer to keep it from bothering us */
 		timer_delete(&gpu->hangcheck_timer);
 
-		gpu->fault_info.ttbr0 = info->ttbr0;
-		gpu->fault_info.iova  = iova;
-		gpu->fault_info.flags = flags;
-		gpu->fault_info.type  = type;
-		gpu->fault_info.block = block;
+		fault_info.ttbr0 = info->ttbr0;
+		fault_info.iova = iova;
+		fault_info.flags = flags;
+		fault_info.type = type;
+		fault_info.block = block;
 
-		kthread_queue_work(gpu->worker, &gpu->fault_work);
+		msm_gpu_fault_crashstate_capture(gpu, &fault_info);
 	}
 
 	return 0;
···
 		timing->vsync_polarity = 0;
 	}
 
-	/* for DP/EDP, Shift timings to align it to bottom right */
-	if (phys_enc->hw_intf->cap->type == INTF_DP) {
+	timing->wide_bus_en = dpu_encoder_is_widebus_enabled(phys_enc->parent);
+	timing->compression_en = dpu_encoder_is_dsc_enabled(phys_enc->parent);
+
+	/*
+	 * For DP/EDP, Shift timings to align it to bottom right.
+	 * wide_bus_en is set for everything excluding SDM845 &
+	 * porch changes cause DisplayPort failure and HDMI tearing.
+	 */
+	if (phys_enc->hw_intf->cap->type == INTF_DP && timing->wide_bus_en) {
 		timing->h_back_porch += timing->h_front_porch;
 		timing->h_front_porch = 0;
 		timing->v_back_porch += timing->v_front_porch;
 		timing->v_front_porch = 0;
 	}
-
-	timing->wide_bus_en = dpu_encoder_is_widebus_enabled(phys_enc->parent);
-	timing->compression_en = dpu_encoder_is_dsc_enabled(phys_enc->parent);
 
 	/*
 	 * for DP, divide the horizonal parameters by 2 when
···
 	/* TODO: Remove this when we have proper display handover support */
 	msm_dsi_phy_pll_save_state(phy);
 
+	/*
+	 * Store also proper vco_current_rate, because its value will be used in
+	 * dsi_10nm_pll_restore_state().
+	 */
+	if (!dsi_pll_10nm_vco_recalc_rate(&pll_10nm->clk_hw, VCO_REF_CLK_RATE))
+		pll_10nm->vco_current_rate = pll_10nm->phy->cfg->min_pll_rate;
+
 	return 0;
 }
+32
drivers/gpu/drm/msm/msm_debugfs.c
···
 			 shrink_get, shrink_set,
 			 "0x%08llx\n");
 
+/*
+ * Return the number of microseconds to wait until stall-on-fault is
+ * re-enabled. If 0 then it is already enabled or will be re-enabled on the
+ * next submit (unless there's a leftover devcoredump). This is useful for
+ * kernel tests that intentionally produce a fault and check the devcoredump to
+ * wait until the cooldown period is over.
+ */
+
+static int
+stall_reenable_time_get(void *data, u64 *val)
+{
+	struct msm_drm_private *priv = data;
+	unsigned long irq_flags;
+
+	spin_lock_irqsave(&priv->fault_stall_lock, irq_flags);
+
+	if (priv->stall_enabled)
+		*val = 0;
+	else
+		*val = max(ktime_us_delta(priv->stall_reenable_time, ktime_get()), 0);
+
+	spin_unlock_irqrestore(&priv->fault_stall_lock, irq_flags);
+
+	return 0;
+}
+
+DEFINE_DEBUGFS_ATTRIBUTE(stall_reenable_time_fops,
+			 stall_reenable_time_get, NULL,
+			 "%lld\n");
 
 static int msm_gem_show(struct seq_file *m, void *arg)
 {
···
 
 	debugfs_create_bool("disable_err_irq", 0600, minor->debugfs_root,
 			    &priv->disable_err_irq);
+
+	debugfs_create_file("stall_reenable_time_us", 0400, minor->debugfs_root,
+			    priv, &stall_reenable_time_fops);
 
 	gpu_devfreq = debugfs_create_dir("devfreq", minor->debugfs_root);
+7-3
drivers/gpu/drm/msm/msm_drv.c
···
 	drm_gem_lru_init(&priv->lru.willneed, &priv->lru.lock);
 	drm_gem_lru_init(&priv->lru.dontneed, &priv->lru.lock);
 
+	/* Initialize stall-on-fault */
+	spin_lock_init(&priv->fault_stall_lock);
+	priv->stall_enabled = true;
+
 	/* Teach lockdep about lock ordering wrt. shrinker: */
 	fs_reclaim_acquire(GFP_KERNEL);
 	might_lock(&priv->lru.lock);
···
  * is no external component that we need to add since LVDS is within MDP4
  * itself.
  */
-static int add_components_mdp(struct device *master_dev,
+static int add_mdp_components(struct device *master_dev,
 			      struct component_match **matchptr)
 {
 	struct device_node *np = master_dev->of_node;
···
 	if (!np)
 		return 0;
 
-	if (of_device_is_available(np))
+	if (of_device_is_available(np) && adreno_has_gpu(np))
 		drm_of_component_match_add(dev, matchptr, component_compare_of, np);
 
 	of_node_put(np);
···
 
 	/* Add mdp components if we have KMS. */
 	if (kms_init) {
-		ret = add_components_mdp(master_dev, &match);
+		ret = add_mdp_components(master_dev, &match);
 		if (ret)
 			return ret;
 	}
+23
drivers/gpu/drm/msm/msm_drv.h
···
 	 * the sw hangcheck mechanism.
 	 */
 	bool disable_err_irq;
+
+	/**
+	 * @fault_stall_lock:
+	 *
+	 * Serialize changes to stall-on-fault state.
+	 */
+	spinlock_t fault_stall_lock;
+
+	/**
+	 * @fault_stall_reenable_time:
+	 *
+	 * If stall_enabled is false, when to reenable stall-on-fault.
+	 * Protected by @fault_stall_lock.
+	 */
+	ktime_t stall_reenable_time;
+
+	/**
+	 * @stall_enabled:
+	 *
+	 * Whether stall-on-fault is currently enabled. Protected by
+	 * @fault_stall_lock.
+	 */
+	bool stall_enabled;
 };
 
 const struct msm_format *mdp_get_format(struct msm_kms *kms, uint32_t format, uint64_t modifier);
+15-2
drivers/gpu/drm/msm/msm_gem_submit.c
···
 		container_of(kref, struct msm_gem_submit, ref);
 	unsigned i;
 
+	/*
+	 * In error paths, we could unref the submit without calling
+	 * drm_sched_entity_push_job(), so msm_job_free() will never
+	 * get called. Since drm_sched_job_cleanup() will NULL out
+	 * s_fence, we can use that to detect this case.
+	 */
+	if (submit->base.s_fence)
+		drm_sched_job_cleanup(&submit->base);
+
 	if (submit->fence_id) {
 		spin_lock(&submit->queue->idr_lock);
 		idr_remove(&submit->queue->fence_idr, submit->fence_id);
···
 	struct msm_ringbuffer *ring;
 	struct msm_submit_post_dep *post_deps = NULL;
 	struct drm_syncobj **syncobjs_to_reset = NULL;
+	struct sync_file *sync_file = NULL;
 	int out_fence_fd = -1;
 	unsigned i;
 	int ret;
···
 	}
 
 	if (ret == 0 && args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
-		struct sync_file *sync_file = sync_file_create(submit->user_fence);
+		sync_file = sync_file_create(submit->user_fence);
 		if (!sync_file) {
 			ret = -ENOMEM;
 		} else {
···
 out_unlock:
 	mutex_unlock(&queue->lock);
 out_post_unlock:
-	if (ret && (out_fence_fd >= 0))
+	if (ret && (out_fence_fd >= 0)) {
 		put_unused_fd(out_fence_fd);
+		if (sync_file)
+			fput(sync_file->file);
+	}
 
 	if (!IS_ERR_OR_NULL(submit)) {
 		msm_gem_submit_put(submit);
···
 import argparse
 import time
 import datetime
+import re
 
 class Error(Exception):
 	def __init__(self, message):
···
 """)
 	maxlen = 0
 	for filepath in p.xml_files:
-		maxlen = max(maxlen, len(filepath))
+		new_filepath = re.sub("^.+drivers","drivers",filepath)
+		maxlen = max(maxlen, len(new_filepath))
 	for filepath in p.xml_files:
-		pad = " " * (maxlen - len(filepath))
+		pad = " " * (maxlen - len(new_filepath))
 		filesize = str(os.path.getsize(filepath))
 		filesize = " " * (7 - len(filesize)) + filesize
 		filetime = time.ctime(os.path.getmtime(filepath))
-		print("- " + filepath + pad + " (" + filesize + " bytes, from " + filetime + ")")
+		print("- " + new_filepath + pad + " (" + filesize + " bytes, from <stripped>)")
 	if p.copyright_year:
 		current_year = str(datetime.date.today().year)
 		print()
+1-1
drivers/gpu/drm/nouveau/nouveau_backlight.c
···
 #include "nouveau_acpi.h"
 
 static struct ida bl_ida;
-#define BL_NAME_SIZE 15 // 12 for name + 2 for digits + 1 for '\0'
+#define BL_NAME_SIZE 24 // 12 for name + 11 for digits + 1 for '\0'
 
 static bool
 nouveau_get_backlight_name(char backlight_name[BL_NAME_SIZE],
···
 	int pending_seqno;
 
 	/*
+	 * we can get here before the CTs are even initialized if we're wedging
+	 * very early, in which case there are not going to be any pending
+	 * fences so we can bail immediately.
+	 */
+	if (!xe_guc_ct_initialized(&gt->uc.guc.ct))
+		return;
+
+	/*
 	 * CT channel is already disabled at this point. No new TLB requests can
 	 * appear.
 	 */
···
 {
 	int ret;
 
+	if (!guc->submission_state.initialized)
+		return 0;
+
 	/*
 	 * Using an atomic here rather than submission_state.lock as this
 	 * function can be called while holding the CT lock (engine reset
···
 	case USB_DEVICE_ID_LENOVO_X12_TAB:
 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
 	case USB_DEVICE_ID_LENOVO_X1_TAB:
+	case USB_DEVICE_ID_LENOVO_X1_TAB2:
 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
 		return lenovo_input_mapping_x1_tab_kbd(hdev, hi, field, usage, bit, max);
 	default:
···
 
 	/*
 	 * Tell the keyboard a driver understands it, and turn F7, F9, F11 into
-	 * regular keys
+	 * regular keys (Compact only)
 	 */
-	ret = lenovo_send_cmd_cptkbd(hdev, 0x01, 0x03);
-	if (ret)
-		hid_warn(hdev, "Failed to switch F7/9/11 mode: %d\n", ret);
+	if (hdev->product == USB_DEVICE_ID_LENOVO_CUSBKBD ||
+	    hdev->product == USB_DEVICE_ID_LENOVO_CBTKBD) {
+		ret = lenovo_send_cmd_cptkbd(hdev, 0x01, 0x03);
+		if (ret)
+			hid_warn(hdev, "Failed to switch F7/9/11 mode: %d\n", ret);
+	}
 
 	/* Switch middle button to native mode */
 	ret = lenovo_send_cmd_cptkbd(hdev, 0x09, 0x01);
···
 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
 	case USB_DEVICE_ID_LENOVO_X1_TAB:
+	case USB_DEVICE_ID_LENOVO_X1_TAB2:
 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
 		ret = lenovo_led_set_tp10ubkbd(hdev, TP10UBKBD_FN_LOCK_LED, value);
 		if (ret)
···
 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
 	case USB_DEVICE_ID_LENOVO_X1_TAB:
+	case USB_DEVICE_ID_LENOVO_X1_TAB2:
 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
 		return lenovo_event_tp10ubkbd(hdev, field, usage, value);
 	default:
···
 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
 	case USB_DEVICE_ID_LENOVO_X1_TAB:
+	case USB_DEVICE_ID_LENOVO_X1_TAB2:
 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
 		ret = lenovo_led_set_tp10ubkbd(hdev, tp10ubkbd_led[led_nr], value);
 		break;
···
 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
 	case USB_DEVICE_ID_LENOVO_X1_TAB:
+	case USB_DEVICE_ID_LENOVO_X1_TAB2:
 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
 		ret = lenovo_probe_tp10ubkbd(hdev);
 		break;
···
 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
 	case USB_DEVICE_ID_LENOVO_X1_TAB:
+	case USB_DEVICE_ID_LENOVO_X1_TAB2:
 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
 		lenovo_remove_tp10ubkbd(hdev);
 		break;
···
 	 */
 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
 			USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB) },
+	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+			USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB2) },
 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
 			USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB3) },
 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
···
 	JOYCON_CTLR_STATE_INIT,
 	JOYCON_CTLR_STATE_READ,
 	JOYCON_CTLR_STATE_REMOVED,
+	JOYCON_CTLR_STATE_SUSPENDED,
 };
 
 /* Controller type received as part of device info */
···
 
 static int nintendo_hid_resume(struct hid_device *hdev)
 {
-	int ret = joycon_init(hdev);
+	struct joycon_ctlr *ctlr = hid_get_drvdata(hdev);
+	int ret;
 
+	hid_dbg(hdev, "resume\n");
+	if (!joycon_using_usb(ctlr)) {
+		hid_dbg(hdev, "no-op resume for bt ctlr\n");
+		ctlr->ctlr_state = JOYCON_CTLR_STATE_READ;
+		return 0;
+	}
+
+	ret = joycon_init(hdev);
 	if (ret)
-		hid_err(hdev, "Failed to restore controller after resume");
+		hid_err(hdev,
+			"Failed to restore controller after resume: %d\n",
+			ret);
+	else
+		ctlr->ctlr_state = JOYCON_CTLR_STATE_READ;
 
 	return ret;
+}
+
+static int nintendo_hid_suspend(struct hid_device *hdev, pm_message_t message)
+{
+	struct joycon_ctlr *ctlr = hid_get_drvdata(hdev);
+
+	hid_dbg(hdev, "suspend: %d\n", message.event);
+	/*
+	 * Avoid any blocking loops in suspend/resume transitions.
+	 *
+	 * joycon_enforce_subcmd_rate() can result in repeated retries if for
+	 * whatever reason the controller stops providing input reports.
+	 *
+	 * This has been observed with bluetooth controllers which lose
+	 * connectivity prior to suspend (but not long enough to result in
+	 * complete disconnection).
+	 */
+	ctlr->ctlr_state = JOYCON_CTLR_STATE_SUSPENDED;
+	return 0;
 }
 
 #endif
···
 
 #ifdef CONFIG_PM
 	.resume		= nintendo_hid_resume,
+	.suspend	= nintendo_hid_suspend,
 #endif
 };
 static int __init nintendo_init(void)
···
 #include <linux/bitfield.h>
 #include <linux/hid.h>
 #include <linux/hid-over-i2c.h>
+#include <linux/unaligned.h>
 
 #include "intel-thc-dev.h"
 #include "intel-thc-dma.h"
···
 
 int quicki2c_reset(struct quicki2c_device *qcdev)
 {
+	u16 input_reg = le16_to_cpu(qcdev->dev_desc.input_reg);
+	size_t read_len = HIDI2C_LENGTH_LEN;
+	u32 prd_len = read_len;
 	int ret;
 
 	qcdev->reset_ack = false;
···
 
 	ret = wait_event_interruptible_timeout(qcdev->reset_ack_wq, qcdev->reset_ack,
 					       HIDI2C_RESET_TIMEOUT * HZ);
-	if (ret <= 0 || !qcdev->reset_ack) {
+	if (qcdev->reset_ack)
+		return 0;
+
+	/*
+	 * Manually read reset response if it wasn't received, in case reset interrupt
+	 * was missed by touch device or THC hardware.
+	 */
+	ret = thc_tic_pio_read(qcdev->thc_hw, input_reg, read_len, &prd_len,
+			       (u32 *)qcdev->input_buf);
+	if (ret) {
+		dev_err_once(qcdev->dev, "Read Reset Response failed, ret %d\n", ret);
+		return ret;
+	}
+
+	/*
+	 * Check response packet length, it's first 16 bits of packet.
+	 * If response packet length is zero, it's reset response, otherwise not.
+	 */
+	if (get_unaligned_le16(qcdev->input_buf)) {
 		dev_err_once(qcdev->dev,
 			     "Wait reset response timed out ret:%d timeout:%ds\n",
 			     ret, HIDI2C_RESET_TIMEOUT);
 		return -ETIMEDOUT;
 	}
+
+	qcdev->reset_ack = true;
 
 	return 0;
 }
+6-1
drivers/hid/wacom_sys.c
···
 
 	remote->remote_dir = kobject_create_and_add("wacom_remote",
 						    &wacom->hdev->dev.kobj);
-	if (!remote->remote_dir)
+	if (!remote->remote_dir) {
+		kfifo_free(&remote->remote_fifo);
 		return -ENOMEM;
+	}
 
 	error = sysfs_create_files(remote->remote_dir, remote_unpair_attrs);
 
 	if (error) {
 		hid_err(wacom->hdev,
 			"cannot create sysfs group err: %d\n", error);
+		kfifo_free(&remote->remote_fifo);
+		kobject_put(remote->remote_dir);
 		return error;
 	}
···
 	hid_hw_stop(hdev);
 
 	cancel_delayed_work_sync(&wacom->init_work);
+	cancel_delayed_work_sync(&wacom->aes_battery_work);
 	cancel_work_sync(&wacom->wireless_work);
 	cancel_work_sync(&wacom->battery_work);
 	cancel_work_sync(&wacom->remote_work);
+6-3
drivers/hwmon/ftsteutates.c
···
 		break;
 	case hwmon_pwm:
 		switch (attr) {
-		case hwmon_pwm_auto_channels_temp:
-			if (data->fan_source[channel] == FTS_FAN_SOURCE_INVALID)
+		case hwmon_pwm_auto_channels_temp: {
+			u8 fan_source = data->fan_source[channel];
+
+			if (fan_source == FTS_FAN_SOURCE_INVALID || fan_source >= BITS_PER_LONG)
 				*val = 0;
 			else
-				*val = BIT(data->fan_source[channel]);
+				*val = BIT(fan_source);
 
 			return 0;
+		}
 		default:
 			break;
 		}
-7
drivers/hwmon/ltc4282.c
···
 	}
 
 	if (device_property_read_bool(dev, "adi,fault-log-enable")) {
-		ret = regmap_set_bits(st->map, LTC4282_ADC_CTRL,
-				      LTC4282_FAULT_LOG_EN_MASK);
-		if (ret)
-			return ret;
-	}
-
-	if (device_property_read_bool(dev, "adi,fault-log-enable")) {
 		ret = regmap_set_bits(st->map, LTC4282_ADC_CTRL, LTC4282_FAULT_LOG_EN_MASK);
 		if (ret)
 			return ret;
···
 
 config SCx200_ACB
 	tristate "Geode ACCESS.bus support"
-	depends on X86_32 && PCI
+	depends on X86_32 && PCI && HAS_IOPORT
 	help
 	  Enable the use of the ACCESS.bus controllers on the Geode SCx200 and
 	  SC1100 processors and the CS5535 and CS5536 Geode companion devices.
···
 
 	ret = spacemit_i2c_wait_bus_idle(i2c);
 	if (!ret)
-		spacemit_i2c_xfer_msg(i2c);
+		ret = spacemit_i2c_xfer_msg(i2c);
 	else if (ret < 0)
 		dev_dbg(i2c->dev, "i2c transfer error: %d\n", ret);
 	else
···
 	atr->flags = flags;
 
 	if (parent->algo->master_xfer)
-		atr->algo.master_xfer = i2c_atr_master_xfer;
+		atr->algo.xfer = i2c_atr_master_xfer;
 	if (parent->algo->smbus_xfer)
 		atr->algo.smbus_xfer = i2c_atr_smbus_xfer;
 	atr->algo.functionality = i2c_atr_functionality;
+3-3
drivers/i2c/i2c-mux.c
···
 	 */
 	if (parent->algo->master_xfer) {
 		if (muxc->mux_locked)
-			priv->algo.master_xfer = i2c_mux_master_xfer;
+			priv->algo.xfer = i2c_mux_master_xfer;
 		else
-			priv->algo.master_xfer = __i2c_mux_master_xfer;
+			priv->algo.xfer = __i2c_mux_master_xfer;
 	}
 	if (parent->algo->master_xfer_atomic)
-		priv->algo.master_xfer_atomic = priv->algo.master_xfer;
+		priv->algo.xfer_atomic = priv->algo.master_xfer;
 
 	if (parent->algo->smbus_xfer) {
 		if (muxc->mux_locked)
+2-2
drivers/i2c/muxes/i2c-demux-pinctrl.c
···
 	priv->cur_chan = new_chan;
 
 	/* Now fill out current adapter structure. cur_chan must be up to date */
-	priv->algo.master_xfer = i2c_demux_master_xfer;
+	priv->algo.xfer = i2c_demux_master_xfer;
 	if (adap->algo->master_xfer_atomic)
-		priv->algo.master_xfer_atomic = i2c_demux_master_xfer;
+		priv->algo.xfer_atomic = i2c_demux_master_xfer;
 	priv->algo.functionality = i2c_demux_functionality;
 
 	snprintf(priv->cur_adap.name, sizeof(priv->cur_adap.name),
+2-18
drivers/irqchip/irq-ath79-misc.c
···
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
 
+#include <asm/time.h>
+
 #define AR71XX_RESET_REG_MISC_INT_STATUS	0
 #define AR71XX_RESET_REG_MISC_INT_ENABLE	4
 
···
 
 IRQCHIP_DECLARE(ar7240_misc_intc, "qca,ar7240-misc-intc",
 		ar7240_misc_intc_of_init);
-
-void __init ath79_misc_irq_init(void __iomem *regs, int irq,
-				int irq_base, bool is_ar71xx)
-{
-	struct irq_domain *domain;
-
-	if (is_ar71xx)
-		ath79_misc_irq_chip.irq_mask_ack = ar71xx_misc_irq_mask;
-	else
-		ath79_misc_irq_chip.irq_ack = ar724x_misc_irq_ack;
-
-	domain = irq_domain_create_legacy(NULL, ATH79_MISC_IRQ_COUNT,
-					  irq_base, 0, &misc_irq_domain_ops, regs);
-	if (!domain)
-		panic("Failed to create MISC irqdomain");
-
-	ath79_misc_intc_domain_init(domain, irq);
-}
-1
drivers/md/bcache/Kconfig
···
 	select BLOCK_HOLDER_DEPRECATED if SYSFS
 	select CRC64
 	select CLOSURES
-	select MIN_HEAP
 	help
 	  Allows a block device to be used as cache for other devices; uses
 	  a btree for indexing and the layout is optimized for SSDs.
···
 	 * parent conditional on that option. Note, this is a way to
 	 * distinguish between the parent and its partitions in sysfs.
 	 */
-	child->dev.parent = &parent->dev;
+	child->dev.parent = IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) || mtd_is_partition(parent) ?
+			    &parent->dev : parent->dev.parent;
 	child->dev.of_node = part->of_node;
 	child->parent = parent;
 	child->part.offset = part->offset;
···
 }
 
 int mtd_add_partition(struct mtd_info *parent, const char *name,
-		      long long offset, long long length, struct mtd_info **out)
+		      long long offset, long long length)
 {
 	struct mtd_info *master = mtd_get_master(parent);
 	u64 parent_size = mtd_is_partition(parent) ?
···
 	list_add_tail(&child->part.node, &parent->partitions);
 	mutex_unlock(&master->master.partitions_lock);
 
-	ret = add_mtd_device(child, true);
+	ret = add_mtd_device(child);
 	if (ret)
 		goto err_remove_part;
 
 	mtd_add_partition_attrs(child);
-
-	if (out)
-		*out = child;
 
 	return 0;
···
 	list_add_tail(&child->part.node, &parent->partitions);
 	mutex_unlock(&master->master.partitions_lock);
 
-	ret = add_mtd_device(child, true);
+	ret = add_mtd_device(child);
 	if (ret) {
 		mutex_lock(&master->master.partitions_lock);
 		list_del(&child->part.node);
···
 	int ret, err = 0;
 
 	dev = &master->dev;
+	/* Use parent device (controller) if the top level MTD is not registered */
+	if (!IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) && !mtd_is_partition(master))
+		dev = master->dev.parent;
 
 	np = mtd_get_of_node(master);
 	if (mtd_is_partition(master))
···
 		if (ret < 0 && !err)
 			err = ret;
 	}
-
 	return err;
 }
···
 	priv = cdev_to_priv(mcan_class);
 
 	priv->power = devm_regulator_get_optional(&spi->dev, "vsup");
-	if (PTR_ERR(priv->power) == -EPROBE_DEFER) {
-		ret = -EPROBE_DEFER;
-		goto out_m_can_class_free_dev;
-	} else {
+	if (IS_ERR(priv->power)) {
+		if (PTR_ERR(priv->power) == -EPROBE_DEFER) {
+			ret = -EPROBE_DEFER;
+			goto out_m_can_class_free_dev;
+		}
 		priv->power = NULL;
 	}
+16-11
drivers/net/ethernet/airoha/airoha_eth.c
···
 
 static int airoha_qdma_init_hfwd_queues(struct airoha_qdma *qdma)
 {
+	int size, index, num_desc = HW_DSCP_NUM;
 	struct airoha_eth *eth = qdma->eth;
 	int id = qdma - &eth->qdma[0];
+	u32 status, buf_size;
 	dma_addr_t dma_addr;
 	const char *name;
-	int size, index;
-	u32 status;
-
-	size = HW_DSCP_NUM * sizeof(struct airoha_qdma_fwd_desc);
-	if (!dmam_alloc_coherent(eth->dev, size, &dma_addr, GFP_KERNEL))
-		return -ENOMEM;
-
-	airoha_qdma_wr(qdma, REG_FWD_DSCP_BASE, dma_addr);
 
 	name = devm_kasprintf(eth->dev, GFP_KERNEL, "qdma%d-buf", id);
 	if (!name)
 		return -ENOMEM;
 
+	buf_size = id ? AIROHA_MAX_PACKET_SIZE / 2 : AIROHA_MAX_PACKET_SIZE;
 	index = of_property_match_string(eth->dev->of_node,
 					 "memory-region-names", name);
 	if (index >= 0) {
···
 		rmem = of_reserved_mem_lookup(np);
 		of_node_put(np);
 		dma_addr = rmem->base;
+		/* Compute the number of hw descriptors according to the
+		 * reserved memory size and the payload buffer size
+		 */
+		num_desc = div_u64(rmem->size, buf_size);
 	} else {
-		size = AIROHA_MAX_PACKET_SIZE * HW_DSCP_NUM;
+		size = buf_size * num_desc;
 		if (!dmam_alloc_coherent(eth->dev, size, &dma_addr,
 					 GFP_KERNEL))
 			return -ENOMEM;
···
 
 	airoha_qdma_wr(qdma, REG_FWD_BUF_BASE, dma_addr);
 
+	size = num_desc * sizeof(struct airoha_qdma_fwd_desc);
+	if (!dmam_alloc_coherent(eth->dev, size, &dma_addr, GFP_KERNEL))
+		return -ENOMEM;
+
+	airoha_qdma_wr(qdma, REG_FWD_DSCP_BASE, dma_addr);
+	/* QDMA0: 2KB. QDMA1: 1KB */
 	airoha_qdma_rmw(qdma, REG_HW_FWD_DSCP_CFG,
 			HW_FWD_DSCP_PAYLOAD_SIZE_MASK,
-			FIELD_PREP(HW_FWD_DSCP_PAYLOAD_SIZE_MASK, 0));
+			FIELD_PREP(HW_FWD_DSCP_PAYLOAD_SIZE_MASK, !!id));
 	airoha_qdma_rmw(qdma, REG_FWD_DSCP_LOW_THR, FWD_DSCP_LOW_THR_MASK,
 			FIELD_PREP(FWD_DSCP_LOW_THR_MASK, 128));
 	airoha_qdma_rmw(qdma, REG_LMGR_INIT_CFG,
 			LMGR_INIT_START | LMGR_SRAM_MODE_MASK |
 			HW_FWD_DESC_NUM_MASK,
-			FIELD_PREP(HW_FWD_DESC_NUM_MASK, HW_DSCP_NUM) |
+			FIELD_PREP(HW_FWD_DESC_NUM_MASK, num_desc) |
 			LMGR_INIT_START | LMGR_SRAM_MODE_MASK);
 
 	return read_poll_timeout(airoha_qdma_rr, status,
+3-1
drivers/net/ethernet/airoha/airoha_ppe.c
···
 		int idle;
 
 		hwe = airoha_ppe_foe_get_entry(ppe, iter->hash);
-		ib1 = READ_ONCE(hwe->ib1);
+		if (!hwe)
+			continue;
 
+		ib1 = READ_ONCE(hwe->ib1);
 		state = FIELD_GET(AIROHA_FOE_IB1_BIND_STATE, ib1);
 		if (state != AIROHA_FOE_STATE_BIND) {
 			iter->hash = 0xffff;
+78-14
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
 {
 	struct bnxt_napi *bnapi = cpr->bnapi;
 	u32 raw_cons = cpr->cp_raw_cons;
+	bool flush_xdp = false;
 	u32 cons;
 	int rx_pkts = 0;
 	u8 event = 0;
···
 		else
 			rc = bnxt_force_rx_discard(bp, cpr, &raw_cons,
 						   &event);
+		if (event & BNXT_REDIRECT_EVENT)
+			flush_xdp = true;
 		if (likely(rc >= 0))
 			rx_pkts += rc;
 		/* Increment rx_pkts when rc is -ENOMEM to count towards
···
 		}
 	}
 
-	if (event & BNXT_REDIRECT_EVENT) {
+	if (flush_xdp) {
 		xdp_do_flush();
 		event &= ~BNXT_REDIRECT_EVENT;
 	}
···
 	bp->num_rss_ctx--;
 }
 
+static bool bnxt_vnic_has_rx_ring(struct bnxt *bp, struct bnxt_vnic_info *vnic,
+				  int rxr_id)
+{
+	u16 tbl_size = bnxt_get_rxfh_indir_size(bp->dev);
+	int i, vnic_rx;
+
+	/* Ntuple VNIC always has all the rx rings. Any change of ring id
+	 * must be updated because a future filter may use it.
+	 */
+	if (vnic->flags & BNXT_VNIC_NTUPLE_FLAG)
+		return true;
+
+	for (i = 0; i < tbl_size; i++) {
+		if (vnic->flags & BNXT_VNIC_RSSCTX_FLAG)
+			vnic_rx = ethtool_rxfh_context_indir(vnic->rss_ctx)[i];
+		else
+			vnic_rx = bp->rss_indir_tbl[i];
+
+		if (rxr_id == vnic_rx)
+			return true;
+	}
+
+	return false;
+}
+
+static int bnxt_set_vnic_mru_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic,
+				u16 mru, int rxr_id)
+{
+	int rc;
+
+	if (!bnxt_vnic_has_rx_ring(bp, vnic, rxr_id))
+		return 0;
+
+	if (mru) {
+		rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
+		if (rc) {
+			netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n",
+				   vnic->vnic_id, rc);
+			return rc;
+		}
+	}
+	vnic->mru = mru;
+	bnxt_hwrm_vnic_update(bp, vnic,
+			      VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
+
+	return 0;
+}
+
+static int bnxt_set_rss_ctx_vnic_mru(struct bnxt *bp, u16 mru, int rxr_id)
+{
+	struct ethtool_rxfh_context *ctx;
+	unsigned long context;
+	int rc;
+
+	xa_for_each(&bp->dev->ethtool->rss_ctx, context, ctx) {
+		struct bnxt_rss_ctx *rss_ctx = ethtool_rxfh_context_priv(ctx);
+		struct bnxt_vnic_info *vnic = &rss_ctx->vnic;
+
+		rc = bnxt_set_vnic_mru_p5(bp, vnic, mru, rxr_id);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+
 static void bnxt_hwrm_realloc_rss_ctx_vnic(struct bnxt *bp)
 {
 	bool set_tpa = !!(bp->flags & BNXT_FLAG_TPA);
···
 	struct bnxt_vnic_info *vnic;
 	struct bnxt_napi *bnapi;
 	int i, rc;
+	u16 mru;
 
 	rxr = &bp->rx_ring[idx];
 	clone = qmem;
···
 	napi_enable_locked(&bnapi->napi);
 	bnxt_db_nq_arm(bp, &cpr->cp_db, cpr->cp_raw_cons);
 
+	mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
 	for (i = 0; i < bp->nr_vnics; i++) {
 		vnic = &bp->vnic_info[i];
 
-		rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
-		if (rc) {
-			netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n",
-				   vnic->vnic_id, rc);
+		rc = bnxt_set_vnic_mru_p5(bp, vnic, mru, idx);
+		if (rc)
 			return rc;
-		}
-		vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
-		bnxt_hwrm_vnic_update(bp, vnic,
-				      VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
 	}
-
-	return 0;
+	return bnxt_set_rss_ctx_vnic_mru(bp, mru, idx);
 
 err_reset:
 	netdev_err(bp->dev, "Unexpected HWRM error during queue start rc: %d\n",
···
 
 	for (i = 0; i < bp->nr_vnics; i++) {
 		vnic = &bp->vnic_info[i];
-		vnic->mru = 0;
-		bnxt_hwrm_vnic_update(bp, vnic,
-				      VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
+
+		bnxt_set_vnic_mru_p5(bp, vnic, 0, idx);
 	}
+	bnxt_set_rss_ctx_vnic_mru(bp, 0, idx);
 	/* Make sure NAPI sees that the VNIC is disabled */
 	synchronize_net();
 	rxr = &bp->rx_ring[idx];
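The core of the new `bnxt_vnic_has_rx_ring()` helper is a membership scan: a VNIC only needs its RSS/MRU state refreshed when the restarted ring id actually appears in that VNIC's RSS indirection table. A self-contained sketch of the scan under assumed types (a plain `int` table rather than the driver's structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for bnxt_vnic_has_rx_ring(): return true when
 * the given rx ring id occurs anywhere in the indirection table. */
bool vnic_uses_ring(const int *indir_tbl, size_t tbl_size, int rxr_id)
{
	for (size_t i = 0; i < tbl_size; i++)
		if (indir_tbl[i] == rxr_id)
			return true;
	return false;
}
```

Skipping VNICs whose tables never reference the ring avoids issuing needless HWRM RSS/MRU updates on every queue restart.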
+10-14
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
···
 		return;
 
 	mutex_lock(&edev->en_dev_lock);
-	if (!bnxt_ulp_registered(edev)) {
-		mutex_unlock(&edev->en_dev_lock);
-		return;
-	}
+	if (!bnxt_ulp_registered(edev) ||
+	    (edev->flags & BNXT_EN_FLAG_ULP_STOPPED))
+		goto ulp_stop_exit;
 
 	edev->flags |= BNXT_EN_FLAG_ULP_STOPPED;
 	if (aux_priv) {
···
 			adrv->suspend(adev, pm);
 		}
 	}
+ulp_stop_exit:
 	mutex_unlock(&edev->en_dev_lock);
 }
 
···
 	struct bnxt_aux_priv *aux_priv = bp->aux_priv;
 	struct bnxt_en_dev *edev = bp->edev;
 
-	if (!edev)
-		return;
-
-	edev->flags &= ~BNXT_EN_FLAG_ULP_STOPPED;
-
-	if (err)
+	if (!edev || err)
 		return;
 
 	mutex_lock(&edev->en_dev_lock);
-	if (!bnxt_ulp_registered(edev)) {
-		mutex_unlock(&edev->en_dev_lock);
-		return;
-	}
+	if (!bnxt_ulp_registered(edev) ||
+	    !(edev->flags & BNXT_EN_FLAG_ULP_STOPPED))
+		goto ulp_start_exit;
 
 	if (edev->ulp_tbl->msix_requested)
 		bnxt_fill_msix_vecs(bp, edev->msix_entries);
···
 			adrv->resume(adev);
 		}
 	}
+ulp_start_exit:
+	edev->flags &= ~BNXT_EN_FLAG_ULP_STOPPED;
 	mutex_unlock(&edev->en_dev_lock);
 }
+1
drivers/net/ethernet/faraday/Kconfig
···
 	depends on ARM || COMPILE_TEST
 	depends on !64BIT || BROKEN
 	select PHYLIB
+	select FIXED_PHY
 	select MDIO_ASPEED if MACH_ASPEED_G6
 	select CRC32
 	help
···
 	case e1000_pch_cnp:
 	case e1000_pch_tgp:
 	case e1000_pch_adp:
-	case e1000_pch_mtp:
-	case e1000_pch_lnp:
-	case e1000_pch_ptp:
 	case e1000_pch_nvp:
 		if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) {
 			/* Stable 24MHz frequency */
···
 			shift = INCVALUE_SHIFT_38400KHZ;
 			adapter->cc.shift = shift;
 		}
+		break;
+	case e1000_pch_mtp:
+	case e1000_pch_lnp:
+	case e1000_pch_ptp:
+		/* System firmware can misreport this value, so set it to a
+		 * stable 38400KHz frequency.
+		 */
+		incperiod = INCPERIOD_38400KHZ;
+		incvalue = INCVALUE_38400KHZ;
+		shift = INCVALUE_SHIFT_38400KHZ;
+		adapter->cc.shift = shift;
 		break;
 	case e1000_82574:
 	case e1000_82583:
+5-3
drivers/net/ethernet/intel/e1000e/ptp.c
···
 	case e1000_pch_cnp:
 	case e1000_pch_tgp:
 	case e1000_pch_adp:
-	case e1000_pch_mtp:
-	case e1000_pch_lnp:
-	case e1000_pch_ptp:
 	case e1000_pch_nvp:
 		if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI)
 			adapter->ptp_clock_info.max_adj = MAX_PPB_24MHZ;
 		else
 			adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ;
+		break;
+	case e1000_pch_mtp:
+	case e1000_pch_lnp:
+	case e1000_pch_ptp:
+		adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ;
 		break;
 	case e1000_82574:
 	case e1000_82583:
+48
drivers/net/ethernet/intel/ice/ice_arfs.c
···
 }
 
 /**
+ * ice_arfs_cmp - Check if aRFS filter matches this flow.
+ * @fltr_info: filter info of the saved ARFS entry.
+ * @fk: flow dissector keys.
+ * @n_proto: One of htons(ETH_P_IP) or htons(ETH_P_IPV6).
+ * @ip_proto: One of IPPROTO_TCP or IPPROTO_UDP.
+ *
+ * Since this function assumes limited values for n_proto and ip_proto, it
+ * is meant to be called only from ice_rx_flow_steer().
+ *
+ * Return:
+ * * true - fltr_info refers to the same flow as fk.
+ * * false - fltr_info and fk refer to different flows.
+ */
+static bool
+ice_arfs_cmp(const struct ice_fdir_fltr *fltr_info, const struct flow_keys *fk,
+	     __be16 n_proto, u8 ip_proto)
+{
+	/* Determine if the filter is for IPv4 or IPv6 based on flow_type,
+	 * which is one of ICE_FLTR_PTYPE_NONF_IPV{4,6}_{TCP,UDP}.
+	 */
+	bool is_v4 = fltr_info->flow_type == ICE_FLTR_PTYPE_NONF_IPV4_TCP ||
+		     fltr_info->flow_type == ICE_FLTR_PTYPE_NONF_IPV4_UDP;
+
+	/* Following checks are arranged in the quickest and most discriminative
+	 * fields first for early failure.
+	 */
+	if (is_v4)
+		return n_proto == htons(ETH_P_IP) &&
+			fltr_info->ip.v4.src_port == fk->ports.src &&
+			fltr_info->ip.v4.dst_port == fk->ports.dst &&
+			fltr_info->ip.v4.src_ip == fk->addrs.v4addrs.src &&
+			fltr_info->ip.v4.dst_ip == fk->addrs.v4addrs.dst &&
+			fltr_info->ip.v4.proto == ip_proto;
+
+	return fltr_info->ip.v6.src_port == fk->ports.src &&
+		fltr_info->ip.v6.dst_port == fk->ports.dst &&
+		fltr_info->ip.v6.proto == ip_proto &&
+		!memcmp(&fltr_info->ip.v6.src_ip, &fk->addrs.v6addrs.src,
+			sizeof(struct in6_addr)) &&
+		!memcmp(&fltr_info->ip.v6.dst_ip, &fk->addrs.v6addrs.dst,
+			sizeof(struct in6_addr));
+}
+
+/**
  * ice_rx_flow_steer - steer the Rx flow to where application is being run
  * @netdev: ptr to the netdev being adjusted
  * @skb: buffer with required header information
···
 			continue;
 
 		fltr_info = &arfs_entry->fltr_info;
+
+		if (!ice_arfs_cmp(fltr_info, &fk, n_proto, ip_proto))
+			continue;
+
 		ret = fltr_info->fltr_id;
 
 		if (fltr_info->q_index == rxq_idx ||
···
 	unsigned long start_time;
 	unsigned long max_wait;
 	unsigned long duration;
-	int done = 0;
 	bool fw_up;
 	int opcode;
+	bool done;
 	int err;
 
 	/* Wait for dev cmd to complete, retrying if we get EAGAIN,
···
 	 */
 	max_wait = jiffies + (max_seconds * HZ);
 try_again:
+	done = false;
 	opcode = idev->opcode;
 	start_time = jiffies;
 	for (fw_up = ionic_is_fw_running(idev);
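The point of the hunk above is that loop state driving a `goto try_again` retry must be re-initialized on every attempt; a `done` value left over from the previous pass would leak into the retry. A standalone sketch of the pattern (not the driver's code; the readiness condition is invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Count how many attempts a retrying operation takes. The important
 * detail mirrors the patch: 'done' is declared without an initializer
 * and reset right after the retry label, so each attempt starts clean. */
int attempts_until_ready(int ready_after)
{
	int tries = 0;
	bool done;

try_again:
	done = false;		/* reset per attempt, as in the fix */
	tries++;
	if (tries >= ready_after)
		done = true;
	if (!done)
		goto try_again;
	return tries;
}
```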
···
 	 * We need to do some backwards compatibility to make this work.
 	 */
 	if (le32_to_cpu(targ_info->byte_count) != sizeof(*targ_info)) {
-		WARN_ON(1);
+		ath6kl_err("mismatched byte count %d vs. expected %zd\n",
+			   le32_to_cpu(targ_info->byte_count),
+			   sizeof(*targ_info));
 		return -EINVAL;
 	}
+13-6
drivers/net/wireless/ath/carl9170/usb.c
···
 
 	if (atomic_read(&ar->rx_anch_urbs) == 0) {
 		/*
-		 * The system is too slow to cope with
-		 * the enormous workload. We have simply
-		 * run out of active rx urbs and this
-		 * unfortunately leads to an unpredictable
-		 * device.
+		 * At this point, either the system is too slow to
+		 * cope with the enormous workload (so we have simply
+		 * run out of active rx urbs and this unfortunately
+		 * leads to an unpredictable device), or the device
+		 * is not fully functional after an unsuccessful
+		 * firmware loading attempts (so it doesn't pass
+		 * ieee80211_register_hw() and there is no internal
+		 * workqueue at all).
 		 */
 
-		ieee80211_queue_work(ar->hw, &ar->ping_work);
+		if (ar->registered)
+			ieee80211_queue_work(ar->hw, &ar->ping_work);
+		else
+			pr_warn_once("device %s is not registered\n",
+				     dev_name(&ar->udev->dev));
 	}
 } else {
 	/*
+2-1
drivers/net/wireless/intel/iwlegacy/4965-rs.c
···
 	return (u8) (rate_n_flags & 0xFF);
 }
 
-static void
+/* noinline works around https://github.com/llvm/llvm-project/issues/143908 */
+static noinline_for_stack void
 il4965_rs_rate_scale_clear_win(struct il_rate_scale_data *win)
 {
 	win->data = 0;
···
 	short_sleep = jiffies + msecs_to_jiffies(HSMP_SHORT_SLEEP);
 	timeout = jiffies + msecs_to_jiffies(HSMP_MSG_TIMEOUT);
 
-	while (time_before(jiffies, timeout)) {
+	while (true) {
 		ret = sock->amd_hsmp_rdwr(sock, mbinfo->msg_resp_off, &mbox_status, HSMP_RD);
 		if (ret) {
 			dev_err(sock->dev, "Error %d reading mailbox status\n", ret);
···
 
 		if (mbox_status != HSMP_STATUS_NOT_READY)
 			break;
+
+		if (!time_before(jiffies, timeout))
+			break;
+
 		if (time_before(jiffies, short_sleep))
 			usleep_range(50, 100);
 		else
···
 		return -ENODEV;
 	sock = &hsmp_pdev.sock[msg->sock_ind];
 
-	/*
-	 * The time taken by smu operation to complete is between
-	 * 10us to 1ms. Sometime it may take more time.
-	 * In SMP system timeout of 100 millisecs should
-	 * be enough for the previous thread to finish the operation
-	 */
-	ret = down_timeout(&sock->hsmp_sem, msecs_to_jiffies(HSMP_MSG_TIMEOUT));
+	ret = down_interruptible(&sock->hsmp_sem);
 	if (ret < 0)
 		return ret;
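The loop restructure above changes when the deadline is tested: the mailbox status is always read once more after the timeout expires, so completion landing right at the deadline is not misreported as a failure. A simplified, self-contained model of that ordering (integer "ticks" stand in for jiffies; not the driver's code):

```c
#include <assert.h>
#include <stdbool.h>

/* Poll a condition with the fixed ordering from the patch:
 * 1) read status, 2) break if ready, 3) only then give up on timeout.
 * ready_at/deadline are tick counts for illustration. */
bool poll_until(int deadline, int ready_at)
{
	int now = 0;

	while (true) {
		if (now >= ready_at)	/* status read says "ready" */
			return true;
		if (now >= deadline)	/* final read already happened */
			return false;
		now++;			/* sleep/backoff elided */
	}
}
```

With the old `while (time_before(...))` shape, readiness arriving exactly at the deadline would have been reported as a timeout; here `poll_until(5, 5)` still succeeds.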
+9
drivers/platform/x86/amd/pmc/pmc-quirks.c
···
 		DMI_MATCH(DMI_BOARD_NAME, "WUJIE14-GX4HRXL"),
 		}
 	},
+	/* https://bugzilla.kernel.org/show_bug.cgi?id=220116 */
+	{
+		.ident = "PCSpecialist Lafite Pro V 14M",
+		.driver_data = &quirk_spurious_8042,
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "PCSpecialist"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "Lafite Pro V 14M"),
+		}
+	},
 	{}
 };
···
 MODULE_AUTHOR("Abhay Salunke <abhay_salunke@dell.com>");
 MODULE_DESCRIPTION("Driver for updating BIOS image on DELL systems");
 MODULE_LICENSE("GPL");
-MODULE_VERSION("3.2");
+MODULE_VERSION("3.3");
 
 #define BIOS_SCAN_LIMIT 0xffffffff
 #define MAX_IMAGE_LENGTH 16
···
 	rbu_data.imagesize = 0;
 }
 
-static int create_packet(void *data, size_t length)
+static int create_packet(void *data, size_t length) __must_hold(&rbu_data.lock)
 {
 	struct packet_data *newpacket;
 	int ordernum = 0;
···
 	remaining_bytes = *pread_length;
 	bytes_read = rbu_data.packet_read_count;
 
-	list_for_each_entry(newpacket, (&packet_data_head.list)->next, list) {
+	list_for_each_entry(newpacket, &packet_data_head.list, list) {
 		bytes_copied = do_packet_read(pdest, newpacket,
 					      remaining_bytes, bytes_read, &temp_count);
 		remaining_bytes -= bytes_copied;
···
 {
 	struct packet_data *newpacket, *tmp;
 
-	list_for_each_entry_safe(newpacket, tmp, (&packet_data_head.list)->next, list) {
+	list_for_each_entry_safe(newpacket, tmp, &packet_data_head.list, list) {
 		list_del(&newpacket->list);
 
 		/*
 		 * zero out the RBU packet memory before freeing
 		 * to make sure there are no stale RBU packets left in memory
 		 */
-		memset(newpacket->data, 0, rbu_data.packetsize);
+		memset(newpacket->data, 0, newpacket->length);
 		set_memory_wb((unsigned long)newpacket->data,
 			      1 << newpacket->ordernum);
 		free_pages((unsigned long) newpacket->data,
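The `list_for_each_entry()` fix above matters because the macro expects the list *head* (the sentinel), not `head->next`: the termination test compares each node against the argument, so starting from the first entry makes the sentinel itself get visited as if it were an entry. A minimal circular-list sketch of the correct traversal (hand-rolled `next` pointers instead of the kernel's `struct list_head`):

```c
#include <assert.h>

/* Toy circular singly-linked list with a sentinel head, mirroring how
 * list_for_each_entry() walks from head->next until it reaches head. */
struct node { struct node *next; };

/* Count entries: start at start->next, stop when we are back at head. */
int count_from(struct node *start, struct node *head)
{
	int n = 0;

	for (struct node *p = start->next; p != head; p = p->next)
		n++;
	return n;
}

/* Build head -> a -> b -> head and count correctly from the sentinel. */
int demo_count(void)
{
	struct node head, a, b;

	head.next = &a;
	a.next = &b;
	b.next = &head;
	return count_from(&head, &head);
}
```

Calling `count_from(head.next, head.next)` instead would terminate on the wrong node and include the sentinel in the walk, which is exactly the class of bug the patch removes.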
+17-2
drivers/platform/x86/ideapad-laptop.c
···
 #include <linux/bug.h>
 #include <linux/cleanup.h>
 #include <linux/debugfs.h>
+#include <linux/delay.h>
 #include <linux/device.h>
 #include <linux/dmi.h>
 #include <linux/i8042.h>
···
  */
 #define IDEAPAD_EC_TIMEOUT 200 /* in ms */
 
+/*
+ * Some models (e.g., ThinkBook since 2024) have a low tolerance for being
+ * polled too frequently. Doing so may break the state machine in the EC,
+ * resulting in a hard shutdown.
+ *
+ * It is also observed that frequent polls may disturb the ongoing operation
+ * and notably delay the availability of EC response.
+ *
+ * These values are used as the delay before the first poll and the interval
+ * between subsequent polls to solve the above issues.
+ */
+#define IDEAPAD_EC_POLL_MIN_US 150
+#define IDEAPAD_EC_POLL_MAX_US 300
+
 static int eval_int(acpi_handle handle, const char *name, unsigned long *res)
 {
 	unsigned long long result;
···
 	end_jiffies = jiffies + msecs_to_jiffies(IDEAPAD_EC_TIMEOUT) + 1;
 
 	while (time_before(jiffies, end_jiffies)) {
-		schedule();
+		usleep_range(IDEAPAD_EC_POLL_MIN_US, IDEAPAD_EC_POLL_MAX_US);
 
 		err = eval_vpcr(handle, 1, &val);
 		if (err)
···
 	end_jiffies = jiffies + msecs_to_jiffies(IDEAPAD_EC_TIMEOUT) + 1;
 
 	while (time_before(jiffies, end_jiffies)) {
-		schedule();
+		usleep_range(IDEAPAD_EC_POLL_MIN_US, IDEAPAD_EC_POLL_MAX_US);
 
 		err = eval_vpcr(handle, 1, &val);
 		if (err)
···
 
 	/* Get the package ID from the TPMI core */
 	plat_info = tpmi_get_platform_data(auxdev);
-	if (plat_info)
-		pkg = plat_info->package_id;
-	else
+	if (unlikely(!plat_info)) {
 		dev_info(&auxdev->dev, "Platform information is NULL\n");
+		ret = -ENODEV;
+		goto err_rem_common;
+	}
+
+	pkg = plat_info->package_id;
 
 	for (i = 0; i < num_resources; ++i) {
 		struct tpmi_uncore_power_domain_info *pd_info;
···
 	struct ptp_clock_info *ops;
 	int err = -EOPNOTSUPP;
 
-	if (ptp_clock_freerun(ptp)) {
+	if (tx->modes & (ADJ_SETOFFSET | ADJ_FREQUENCY | ADJ_OFFSET) &&
+	    ptp_clock_freerun(ptp)) {
 		pr_err("ptp: physical clock is free running\n");
 		return -EBUSY;
 	}
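The gating above narrows the free-running check to adjtimex() calls that would actually modify the clock: only requests carrying `ADJ_SETOFFSET`, `ADJ_FREQUENCY`, or `ADJ_OFFSET` are rejected with `-EBUSY`, while read-only queries (`modes == 0`) pass through. A standalone sketch of the predicate, using the real mode-bit values from `<linux/timex.h>`:

```c
#include <assert.h>

#define ADJ_OFFSET	0x0001	/* time offset */
#define ADJ_FREQUENCY	0x0002	/* frequency offset */
#define ADJ_SETOFFSET	0x0100	/* add 'time' to current time */

/* Return -1 (standing in for -EBUSY) only when a state-changing
 * adjustment is requested on a free-running clock. */
int check_adj(unsigned int modes, int freerun)
{
	if ((modes & (ADJ_SETOFFSET | ADJ_FREQUENCY | ADJ_OFFSET)) && freerun)
		return -1;
	return 0;
}
```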
+21-1
drivers/ptp/ptp_private.h
···
 /* Check if ptp virtual clock is in use */
 static inline bool ptp_vclock_in_use(struct ptp_clock *ptp)
 {
-	return !ptp->is_virtual_clock;
+	bool in_use = false;
+
+	/* Virtual clocks can't be stacked on top of virtual clocks.
+	 * Avoid acquiring the n_vclocks_mux on virtual clocks, to allow this
+	 * function to be called from code paths where the n_vclocks_mux of the
+	 * parent physical clock is already held. Functionally that's not an
+	 * issue, but lockdep would complain, because they have the same lock
+	 * class.
+	 */
+	if (ptp->is_virtual_clock)
+		return false;
+
+	if (mutex_lock_interruptible(&ptp->n_vclocks_mux))
+		return true;
+
+	if (ptp->n_vclocks)
+		in_use = true;
+
+	mutex_unlock(&ptp->n_vclocks_mux);
+
+	return in_use;
 }
 
 /* Check if ptp clock shall be free running */
···
 	const struct cpumask *mask;
 
 	if (instance->perf_mode == MR_BALANCED_PERF_MODE) {
-		mask = cpumask_of_node(dev_to_node(&instance->pdev->dev));
+		int nid = dev_to_node(&instance->pdev->dev);
+
+		if (nid == NUMA_NO_NODE)
+			nid = 0;
+		mask = cpumask_of_node(nid);
 
 		for (i = 0; i < instance->low_latency_index_start; i++) {
 			irq = pci_irq_vector(instance->pdev, i);
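The guard above exists because `dev_to_node()` may return `NUMA_NO_NODE` (-1) on systems without a NUMA affinity for the device, and -1 must never be used as a `cpumask_of_node()` index. The sanitization reduces to a one-liner, sketched here outside the kernel:

```c
#include <assert.h>

#define NUMA_NO_NODE (-1)	/* same value as the kernel's definition */

/* Clamp a possibly-invalid node id to node 0, as the fix does. */
int sanitize_node(int nid)
{
	return nid == NUMA_NO_NODE ? 0 : nid;
}
```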
+7-5
drivers/spi/spi-cadence-quadspi.c
···
 		goto probe_setup_failed;
 	}
 
-	ret = devm_pm_runtime_enable(dev);
-	if (ret) {
-		if (cqspi->rx_chan)
-			dma_release_channel(cqspi->rx_chan);
+	pm_runtime_enable(dev);
+
+	if (cqspi->rx_chan) {
+		dma_release_channel(cqspi->rx_chan);
 		goto probe_setup_failed;
 	}
···
 	return 0;
 probe_setup_failed:
 	cqspi_controller_enable(cqspi, 0);
+	pm_runtime_disable(dev);
 probe_reset_failed:
 	if (cqspi->is_jh7110)
 		cqspi_jh7110_disable_clk(pdev, cqspi);
···
 	if (cqspi->rx_chan)
 		dma_release_channel(cqspi->rx_chan);
 
-	clk_disable_unprepare(cqspi->clk);
+	if (pm_runtime_get_sync(&pdev->dev) >= 0)
+		clk_disable(cqspi->clk);
 
 	if (cqspi->is_jh7110)
 		cqspi_jh7110_disable_clk(pdev, cqspi);
-14
drivers/spi/spi-tegra210-quad.c
···
 static void
 tegra_qspi_copy_client_txbuf_to_qspi_txbuf(struct tegra_qspi *tqspi, struct spi_transfer *t)
 {
-	dma_sync_single_for_cpu(tqspi->dev, tqspi->tx_dma_phys,
-				tqspi->dma_buf_size, DMA_TO_DEVICE);
-
 	/*
 	 * In packed mode, each word in FIFO may contain multiple packets
 	 * based on bits per word. So all bytes in each FIFO word are valid.
···
 
 		tqspi->cur_tx_pos += write_bytes;
 	}
-
-	dma_sync_single_for_device(tqspi->dev, tqspi->tx_dma_phys,
-				   tqspi->dma_buf_size, DMA_TO_DEVICE);
 }
 
 static void
 tegra_qspi_copy_qspi_rxbuf_to_client_rxbuf(struct tegra_qspi *tqspi, struct spi_transfer *t)
 {
-	dma_sync_single_for_cpu(tqspi->dev, tqspi->rx_dma_phys,
-				tqspi->dma_buf_size, DMA_FROM_DEVICE);
-
 	if (tqspi->is_packed) {
 		tqspi->cur_rx_pos += tqspi->curr_dma_words * tqspi->bytes_per_word;
 	} else {
···
 
 		tqspi->cur_rx_pos += read_bytes;
 	}
-
-	dma_sync_single_for_device(tqspi->dev, tqspi->rx_dma_phys,
-				   tqspi->dma_buf_size, DMA_FROM_DEVICE);
 }
 
 static void tegra_qspi_dma_complete(void *args)
···
 		return ret;
 	}
 
-	dma_sync_single_for_device(tqspi->dev, tqspi->rx_dma_phys,
-				   tqspi->dma_buf_size, DMA_FROM_DEVICE);
 	ret = tegra_qspi_start_rx_dma(tqspi, t, len);
 	if (ret < 0) {
 		dev_err(tqspi->dev, "failed to start RX DMA: %d\n", ret);
···
 	dev->parent = parent_dev;
 	dev->bus = &serial_base_bus_type;
 	dev->release = release;
+	device_set_of_node_from_dev(dev, parent_dev);
 
 	if (!serial_base_initialized) {
 		dev_dbg(port->dev, "uart_add_one_port() called before arch_initcall()?\n");
+1-1
drivers/tty/vt/ucs.c
···
 
 /**
  * ucs_get_fallback() - Get a substitution for the provided Unicode character
- * @base: Base Unicode code point (UCS-4)
+ * @cp: Unicode code point (UCS-4)
  *
  * Get a simpler fallback character for the provided Unicode character.
  * This is used for terminal display when corresponding glyph is unavailable.
···
 	hba->silence_err_logs = false;
 
 	/* scale up clocks to max frequency before full reinitialization */
-	ufshcd_scale_clks(hba, ULONG_MAX, true);
+	if (ufshcd_is_clkscaling_supported(hba))
+		ufshcd_scale_clks(hba, ULONG_MAX, true);
 
 	err = ufshcd_hba_enable(hba);
+9-4
fs/bcachefs/alloc_background.c
···
 		: BCH_DATA_free;
 	struct printbuf buf = PRINTBUF;
 
+	unsigned fsck_flags = (async_repair ? FSCK_ERR_NO_LOG : 0)|
+		FSCK_CAN_FIX|FSCK_CAN_IGNORE;
+
 	struct bpos bucket = iter->pos;
 	bucket.offset &= ~(~0ULL << 56);
 	u64 genbits = iter->pos.offset & (~0ULL << 56);
···
 		return ret;
 
 	if (!bch2_dev_bucket_exists(c, bucket)) {
-		if (fsck_err(trans, need_discard_freespace_key_to_invalid_dev_bucket,
-			     "entry in %s btree for nonexistant dev:bucket %llu:%llu",
-			     bch2_btree_id_str(iter->btree_id), bucket.inode, bucket.offset))
+		if (__fsck_err(trans, fsck_flags,
+			       need_discard_freespace_key_to_invalid_dev_bucket,
+			       "entry in %s btree for nonexistant dev:bucket %llu:%llu",
+			       bch2_btree_id_str(iter->btree_id), bucket.inode, bucket.offset))
 			goto delete;
 		ret = 1;
 		goto out;
···
 	if (a->data_type != state ||
 	    (state == BCH_DATA_free &&
 	     genbits != alloc_freespace_genbits(*a))) {
-		if (fsck_err(trans, need_discard_freespace_key_bad,
+		if (__fsck_err(trans, fsck_flags,
+			       need_discard_freespace_key_bad,
 			     "%s\nincorrectly set at %s:%llu:%llu:0 (free %u, genbits %llu should be %llu)",
 			     (bch2_bkey_val_to_text(&buf, c, alloc_k), buf.buf),
 			     bch2_btree_id_str(iter->btree_id),
+1-1
fs/bcachefs/backpointers.c
···
 		return ret ? bkey_s_c_err(ret) : bkey_s_c_null;
 	} else {
 		struct btree *b = __bch2_backpointer_get_node(trans, bp, iter, last_flushed, commit);
-		if (b == ERR_PTR(bch_err_throw(c, backpointer_to_overwritten_btree_node)))
+		if (b == ERR_PTR(-BCH_ERR_backpointer_to_overwritten_btree_node))
 			return bkey_s_c_null;
 		if (IS_ERR_OR_NULL(b))
 			return ((struct bkey_s_c) { .k = ERR_CAST(b) });
···
 	if (s)
 		s->ret = ret;
 
-	if (trans)
+	if (trans &&
+	    !(flags & FSCK_ERR_NO_LOG) &&
+	    ret == -BCH_ERR_fsck_fix)
 		ret = bch2_trans_log_str(trans, bch2_sb_error_strs[err]) ?: ret;
 err_unlock:
 	mutex_unlock(&c->fsck_error_msgs_lock);
···
 
 bool __bch2_snapshot_is_ancestor(struct bch_fs *c, u32 id, u32 ancestor)
 {
-	bool ret;
+#ifdef CONFIG_BCACHEFS_DEBUG
+	u32 orig_id = id;
+#endif
 
 	guard(rcu)();
 	struct snapshot_table *t = rcu_dereference(c->snapshots);
···
 	while (id && id < ancestor - IS_ANCESTOR_BITMAP)
 		id = get_ancestor_below(t, id, ancestor);
 
-	ret = id && id < ancestor
+	bool ret = id && id < ancestor
 		? test_ancestor_bitmap(t, id, ancestor)
 		: id == ancestor;
 
-	EBUG_ON(ret != __bch2_snapshot_is_ancestor_early(t, id, ancestor));
+	EBUG_ON(ret != __bch2_snapshot_is_ancestor_early(t, orig_id, ancestor));
 	return ret;
 }
···
 
 	for_each_btree_key_norestart(trans, iter, BTREE_ID_snapshot_trees, POS_MIN,
 				     0, k, ret) {
-		if (le32_to_cpu(bkey_s_c_to_snapshot_tree(k).v->root_snapshot) == id) {
+		if (k.k->type == KEY_TYPE_snapshot_tree &&
+		    le32_to_cpu(bkey_s_c_to_snapshot_tree(k).v->root_snapshot) == id) {
 			tree_id = k.k->p.offset;
 			break;
 		}
···
 
 	for_each_btree_key_norestart(trans, iter, BTREE_ID_subvolumes, POS_MIN,
 				     0, k, ret) {
-		if (le32_to_cpu(bkey_s_c_to_subvolume(k).v->snapshot) == id) {
+		if (k.k->type == KEY_TYPE_subvolume &&
+		    le32_to_cpu(bkey_s_c_to_subvolume(k).v->snapshot) == id) {
 			snapshot->v.subvol = cpu_to_le32(k.k->p.offset);
 			SET_BCH_SNAPSHOT_SUBVOL(&snapshot->v, true);
 			break;
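The `k.k->type == KEY_TYPE_snapshot_tree` checks added above follow a general rule: a btree can hold keys of several types, so the discriminant must be tested before the value is cast and a type-specific field is read. A standalone tagged-struct sketch of the pattern (types and field names are invented for illustration, not bcachefs's):

```c
#include <assert.h>
#include <stdint.h>

enum key_type { KEY_TYPE_deleted, KEY_TYPE_snapshot_tree, KEY_TYPE_other };

struct bkey_demo {
	enum key_type type;
	uint32_t root_snapshot;	/* only meaningful for snapshot_tree keys */
};

/* Find the first snapshot_tree key whose root matches id; keys of other
 * types may hold the same bit pattern and must be skipped. */
int find_tree(const struct bkey_demo *keys, int n, uint32_t id)
{
	for (int i = 0; i < n; i++)
		if (keys[i].type == KEY_TYPE_snapshot_tree &&
		    keys[i].root_snapshot == id)
			return i;
	return -1;
}
```

Without the type test, a stray key of another type whose payload happens to match `id` would be misinterpreted, which is the bug class the patch closes.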
+11-2
fs/bcachefs/super.c
···
 static int bch2_dev_sysfs_online(struct bch_fs *, struct bch_dev *);
 static void bch2_dev_io_ref_stop(struct bch_dev *, int);
 static void __bch2_dev_read_only(struct bch_fs *, struct bch_dev *);
-static int bch2_fs_init_rw(struct bch_fs *);
 
 struct bch_fs *bch2_dev_to_fs(dev_t dev)
 {
···
 	return ret;
 }
 
-static int bch2_fs_init_rw(struct bch_fs *c)
+int bch2_fs_init_rw(struct bch_fs *c)
 {
 	if (test_bit(BCH_FS_rw_init_done, &c->flags))
 		return 0;
···
 	bch2_fs_vfs_init(c);
 	if (ret)
 		goto err;
+
+	if (go_rw_in_recovery(c)) {
+		/*
+		 * start workqueues/kworkers early - kthread creation checks for
+		 * pending signals, which is _very_ annoying
+		 */
+		ret = bch2_fs_init_rw(c);
+		if (ret)
+			goto err;
+	}
 
 #ifdef CONFIG_UNICODE
 	/* Default encoding until we can potentially have more as an option. */
···
 
 void btrfs_assert_delayed_root_empty(struct btrfs_fs_info *fs_info)
 {
-	WARN_ON(btrfs_first_delayed_node(fs_info->delayed_root));
+	struct btrfs_delayed_node *node = btrfs_first_delayed_node(fs_info->delayed_root);
+
+	if (WARN_ON(node))
+		refcount_dec(&node->refs);
 }
 
 static bool could_end_wait(struct btrfs_delayed_root *delayed_root, int seq)
+21-6
fs/btrfs/disk-io.c
···
 	if (refcount_dec_and_test(&root->refs)) {
 		if (WARN_ON(!xa_empty(&root->inodes)))
 			xa_destroy(&root->inodes);
+		if (WARN_ON(!xa_empty(&root->delayed_nodes)))
+			xa_destroy(&root->delayed_nodes);
 		WARN_ON(test_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state));
 		if (root->anon_dev)
 			free_anon_bdev(root->anon_dev);
···
 		found = true;
 		root = read_tree_root_path(tree_root, path, &key);
 		if (IS_ERR(root)) {
-			if (!btrfs_test_opt(fs_info, IGNOREBADROOTS))
-				ret = PTR_ERR(root);
+			ret = PTR_ERR(root);
 			break;
 		}
 		set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
···
 	 *
 	 * So wait for all ongoing ordered extents to complete and then run
 	 * delayed iputs. This works because once we reach this point no one
-	 * can either create new ordered extents nor create delayed iputs
-	 * through some other means.
+	 * can create new ordered extents, but delayed iputs can still be added
+	 * by a reclaim worker (see comments further below).
 	 *
 	 * Also note that btrfs_wait_ordered_roots() is not safe here, because
 	 * it waits for BTRFS_ORDERED_COMPLETE to be set on an ordered extent,
···
 	btrfs_flush_workqueue(fs_info->endio_write_workers);
 	/* Ordered extents for free space inodes. */
 	btrfs_flush_workqueue(fs_info->endio_freespace_worker);
+	/*
+	 * Run delayed iputs in case an async reclaim worker is waiting for them
+	 * to be run as mentioned above.
+	 */
 	btrfs_run_delayed_iputs(fs_info);
-	/* There should be no more workload to generate new delayed iputs. */
-	set_bit(BTRFS_FS_STATE_NO_DELAYED_IPUT, &fs_info->fs_state);
 
 	cancel_work_sync(&fs_info->async_reclaim_work);
 	cancel_work_sync(&fs_info->async_data_reclaim_work);
 	cancel_work_sync(&fs_info->preempt_reclaim_work);
 	cancel_work_sync(&fs_info->em_shrinker_work);
+
+	/*
+	 * Run delayed iputs again because an async reclaim worker may have
+	 * added new ones if it was flushing delalloc:
+	 *
+	 * shrink_delalloc() -> btrfs_start_delalloc_roots() ->
+	 *    start_delalloc_inodes() -> btrfs_add_delayed_iput()
+	 */
+	btrfs_run_delayed_iputs(fs_info);
+
+	/* There should be no more workload to generate new delayed iputs. */
+	set_bit(BTRFS_FS_STATE_NO_DELAYED_IPUT, &fs_info->fs_state);
 
 	/* Cancel or finish ongoing discard work */
 	btrfs_discard_cleanup(fs_info);
+1-1
fs/btrfs/extent_io.c
···
 			spin_unlock(&eb->refs_lock);
 			continue;
 		}
-		xa_unlock_irq(&fs_info->buffer_tree);
 
 		/*
 		 * If tree ref isn't set then we know the ref on this eb is a
···
 		 * check the folio private at the end. And
 		 * release_extent_buffer() will release the refs_lock.
 		 */
+		xa_unlock_irq(&fs_info->buffer_tree);
 		release_extent_buffer(eb);
 		xa_lock_irq(&fs_info->buffer_tree);
 	}
+12-4
fs/btrfs/free-space-tree.c
···
 	ret = btrfs_search_slot_for_read(extent_root, &key, path, 1, 0);
 	if (ret < 0)
 		goto out_locked;
-	ASSERT(ret == 0);
+	/*
+	 * If ret is 1 (no key found), it means this is an empty block group,
+	 * without any extents allocated from it and there's no block group
+	 * item (key BTRFS_BLOCK_GROUP_ITEM_KEY) located in the extent tree
+	 * because we are using the block group tree feature, so block group
+	 * items are stored in the block group tree. It also means there are no
+	 * extents allocated for block groups with a start offset beyond this
+	 * block group's end offset (this is the last, highest, block group).
+	 */
+	if (!btrfs_fs_compat_ro(trans->fs_info, BLOCK_GROUP_TREE))
+		ASSERT(ret == 0);
 
 	start = block_group->start;
 	end = block_group->start + block_group->length;
-	while (1) {
+	while (ret == 0) {
 		btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0]);
 
 		if (key.type == BTRFS_EXTENT_ITEM_KEY ||
···
 		ret = btrfs_next_item(extent_root, path);
 		if (ret < 0)
 			goto out_locked;
-		if (ret)
-			break;
 	}
 	if (start < end) {
 		ret = __add_to_free_space_tree(trans, block_group, path2,
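The loop restructure above folds the "no more items" break into the loop condition (`while (ret == 0)`), which also lets the empty-block-group case (`ret == 1` from the initial search) skip the body entirely instead of tripping the old unconditional `ASSERT`. The control-flow shape in isolation, with an invented item array standing in for the btree walk:

```c
#include <assert.h>

/* Sum items while the "search/next" result stays 0; start_ret models the
 * initial btrfs_search_slot_for_read() result (0: found, 1: nothing). */
int sum_items(const int *items, int n, int start_ret)
{
	int ret = start_ret;
	int sum = 0, i = 0;

	while (ret == 0) {
		sum += items[i++];
		ret = (i < n) ? 0 : 1;	/* "next item" result */
	}
	return sum;
}
```

With `start_ret == 1` the loop body never runs, mirroring how an empty block group now falls straight through to the tail logic.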
+68 -21
fs/btrfs/inode.c
···
 
 	ret = btrfs_del_inode_ref(trans, root, name, ino, dir_ino, &index);
 	if (ret) {
-		btrfs_info(fs_info,
-			"failed to delete reference to %.*s, inode %llu parent %llu",
-			name->len, name->name, ino, dir_ino);
+		btrfs_crit(fs_info,
+			"failed to delete reference to %.*s, root %llu inode %llu parent %llu",
+			name->len, name->name, btrfs_root_id(root), ino, dir_ino);
 		btrfs_abort_transaction(trans, ret);
 		goto err;
 	}
···
 	int ret;
 	int ret2;
 	bool need_abort = false;
+	bool logs_pinned = false;
 	struct fscrypt_name old_fname, new_fname;
 	struct fscrypt_str *old_name, *new_name;
···
 	inode_inc_iversion(new_inode);
 	simple_rename_timestamp(old_dir, old_dentry, new_dir, new_dentry);
 
+	if (old_ino != BTRFS_FIRST_FREE_OBJECTID &&
+	    new_ino != BTRFS_FIRST_FREE_OBJECTID) {
+		/*
+		 * If we are renaming in the same directory (and it's not for
+		 * root entries) pin the log early to prevent any concurrent
+		 * task from logging the directory after we removed the old
+		 * entries and before we add the new entries, otherwise that
+		 * task can sync a log without any entry for the inodes we are
+		 * renaming and therefore replaying that log, if a power failure
+		 * happens after syncing the log, would result in deleting the
+		 * inodes.
+		 *
+		 * If the rename affects two different directories, we want to
+		 * make sure that there's no log commit that contains updates
+		 * for only one of the directories but not for the other.
+		 *
+		 * If we are renaming an entry for a root, we don't care about
+		 * log updates since we called btrfs_set_log_full_commit().
+		 */
+		btrfs_pin_log_trans(root);
+		btrfs_pin_log_trans(dest);
+		logs_pinned = true;
+	}
+
 	if (old_dentry->d_parent != new_dentry->d_parent) {
 		btrfs_record_unlink_dir(trans, BTRFS_I(old_dir),
 					BTRFS_I(old_inode), true);
···
 	BTRFS_I(new_inode)->dir_index = new_idx;
 
 	/*
-	 * Now pin the logs of the roots. We do it to ensure that no other task
-	 * can sync the logs while we are in progress with the rename, because
-	 * that could result in an inconsistency in case any of the inodes that
-	 * are part of this rename operation were logged before.
+	 * Do the log updates for all inodes.
+	 *
+	 * If either entry is for a root we don't need to update the logs since
+	 * we've called btrfs_set_log_full_commit() before.
 	 */
-	if (old_ino != BTRFS_FIRST_FREE_OBJECTID)
-		btrfs_pin_log_trans(root);
-	if (new_ino != BTRFS_FIRST_FREE_OBJECTID)
-		btrfs_pin_log_trans(dest);
-
-	/* Do the log updates for all inodes. */
-	if (old_ino != BTRFS_FIRST_FREE_OBJECTID)
+	if (logs_pinned) {
 		btrfs_log_new_name(trans, old_dentry, BTRFS_I(old_dir),
 				   old_rename_ctx.index, new_dentry->d_parent);
-	if (new_ino != BTRFS_FIRST_FREE_OBJECTID)
 		btrfs_log_new_name(trans, new_dentry, BTRFS_I(new_dir),
 				   new_rename_ctx.index, old_dentry->d_parent);
+	}
 
-	/* Now unpin the logs. */
-	if (old_ino != BTRFS_FIRST_FREE_OBJECTID)
-		btrfs_end_log_trans(root);
-	if (new_ino != BTRFS_FIRST_FREE_OBJECTID)
-		btrfs_end_log_trans(dest);
 out_fail:
+	if (logs_pinned) {
+		btrfs_end_log_trans(root);
+		btrfs_end_log_trans(dest);
+	}
 	ret2 = btrfs_end_transaction(trans);
 	ret = ret ? ret : ret2;
 out_notrans:
···
 	int ret2;
 	u64 old_ino = btrfs_ino(BTRFS_I(old_inode));
 	struct fscrypt_name old_fname, new_fname;
+	bool logs_pinned = false;
 
 	if (btrfs_ino(BTRFS_I(new_dir)) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID)
 		return -EPERM;
···
 	inode_inc_iversion(old_inode);
 	simple_rename_timestamp(old_dir, old_dentry, new_dir, new_dentry);
 
+	if (old_ino != BTRFS_FIRST_FREE_OBJECTID) {
+		/*
+		 * If we are renaming in the same directory (and it's not a
+		 * root entry) pin the log to prevent any concurrent task from
+		 * logging the directory after we removed the old entry and
+		 * before we add the new entry, otherwise that task can sync
+		 * a log without any entry for the inode we are renaming and
+		 * therefore replaying that log, if a power failure happens
+		 * after syncing the log, would result in deleting the inode.
+		 *
+		 * If the rename affects two different directories, we want to
+		 * make sure that there's no log commit that contains updates
+		 * for only one of the directories but not for the other.
+		 *
+		 * If we are renaming an entry for a root, we don't care about
+		 * log updates since we called btrfs_set_log_full_commit().
+		 */
+		btrfs_pin_log_trans(root);
+		btrfs_pin_log_trans(dest);
+		logs_pinned = true;
+	}
+
 	if (old_dentry->d_parent != new_dentry->d_parent)
 		btrfs_record_unlink_dir(trans, BTRFS_I(old_dir),
 					BTRFS_I(old_inode), true);
···
 	if (old_inode->i_nlink == 1)
 		BTRFS_I(old_inode)->dir_index = index;
 
-	if (old_ino != BTRFS_FIRST_FREE_OBJECTID)
+	if (logs_pinned)
 		btrfs_log_new_name(trans, old_dentry, BTRFS_I(old_dir),
 				   rename_ctx.index, new_dentry->d_parent);
···
 		}
 	}
 out_fail:
+	if (logs_pinned) {
+		btrfs_end_log_trans(root);
+		btrfs_end_log_trans(dest);
+	}
 	ret2 = btrfs_end_transaction(trans);
 	ret = ret ? ret : ret2;
 out_notrans:
+1 -1
fs/btrfs/ioctl.c
···
 		return -EPERM;
 
 	if (btrfs_fs_incompat(fs_info, EXTENT_TREE_V2)) {
-		btrfs_err(fs_info, "scrub is not supported on extent tree v2 yet");
+		btrfs_err(fs_info, "scrub: extent tree v2 not yet supported");
 		return -EINVAL;
 	}
 
+26 -27
fs/btrfs/scrub.c
···
 	 */
 	for (i = 0; i < ipath->fspath->elem_cnt; ++i)
 		btrfs_warn_in_rcu(fs_info,
-"%s at logical %llu on dev %s, physical %llu, root %llu, inode %llu, offset %llu, length %u, links %u (path: %s)",
+"scrub: %s at logical %llu on dev %s, physical %llu root %llu inode %llu offset %llu length %u links %u (path: %s)",
 				  swarn->errstr, swarn->logical,
 				  btrfs_dev_name(swarn->dev),
 				  swarn->physical,
···
 
 err:
 	btrfs_warn_in_rcu(fs_info,
-		      "%s at logical %llu on dev %s, physical %llu, root %llu, inode %llu, offset %llu: path resolving failed with ret=%d",
+		      "scrub: %s at logical %llu on dev %s, physical %llu root %llu inode %llu offset %llu: path resolving failed with ret=%d",
 			  swarn->errstr, swarn->logical,
 			  btrfs_dev_name(swarn->dev),
 			  swarn->physical,
···
 
 	/* Super block error, no need to search extent tree. */
 	if (is_super) {
-		btrfs_warn_in_rcu(fs_info, "%s on device %s, physical %llu",
+		btrfs_warn_in_rcu(fs_info, "scrub: %s on device %s, physical %llu",
 				  errstr, btrfs_dev_name(dev), physical);
 		return;
 	}
···
 					&ref_level);
 		if (ret < 0) {
 			btrfs_warn(fs_info,
-				   "failed to resolve tree backref for logical %llu: %d",
-				   swarn.logical, ret);
+				   "scrub: failed to resolve tree backref for logical %llu: %d",
+				   swarn.logical, ret);
 			break;
 		}
 		if (ret > 0)
 			break;
 		btrfs_warn_in_rcu(fs_info,
-"%s at logical %llu on dev %s, physical %llu: metadata %s (level %d) in tree %llu",
+"scrub: %s at logical %llu on dev %s, physical %llu: metadata %s (level %d) in tree %llu",
 			errstr, swarn.logical, btrfs_dev_name(dev),
 			swarn.physical, (ref_level ? "node" : "leaf"),
 			ref_level, ref_root);
···
 		scrub_bitmap_set_meta_error(stripe, sector_nr, sectors_per_tree);
 		scrub_bitmap_set_error(stripe, sector_nr, sectors_per_tree);
 		btrfs_warn_rl(fs_info,
-			      "tree block %llu mirror %u has bad bytenr, has %llu want %llu",
+			      "scrub: tree block %llu mirror %u has bad bytenr, has %llu want %llu",
 			      logical, stripe->mirror_num,
 			      btrfs_stack_header_bytenr(header), logical);
 		return;
···
 		scrub_bitmap_set_meta_error(stripe, sector_nr, sectors_per_tree);
 		scrub_bitmap_set_error(stripe, sector_nr, sectors_per_tree);
 		btrfs_warn_rl(fs_info,
-			      "tree block %llu mirror %u has bad fsid, has %pU want %pU",
+			      "scrub: tree block %llu mirror %u has bad fsid, has %pU want %pU",
 			      logical, stripe->mirror_num,
 			      header->fsid, fs_info->fs_devices->fsid);
 		return;
···
 		scrub_bitmap_set_meta_error(stripe, sector_nr, sectors_per_tree);
 		scrub_bitmap_set_error(stripe, sector_nr, sectors_per_tree);
 		btrfs_warn_rl(fs_info,
-			      "tree block %llu mirror %u has bad chunk tree uuid, has %pU want %pU",
+			      "scrub: tree block %llu mirror %u has bad chunk tree uuid, has %pU want %pU",
 			      logical, stripe->mirror_num,
 			      header->chunk_tree_uuid, fs_info->chunk_tree_uuid);
 		return;
···
 		scrub_bitmap_set_meta_error(stripe, sector_nr, sectors_per_tree);
 		scrub_bitmap_set_error(stripe, sector_nr, sectors_per_tree);
 		btrfs_warn_rl(fs_info,
-			      "tree block %llu mirror %u has bad csum, has " CSUM_FMT " want " CSUM_FMT,
+"scrub: tree block %llu mirror %u has bad csum, has " CSUM_FMT " want " CSUM_FMT,
 			      logical, stripe->mirror_num,
 			      CSUM_FMT_VALUE(fs_info->csum_size, on_disk_csum),
 			      CSUM_FMT_VALUE(fs_info->csum_size, calculated_csum));
···
 		scrub_bitmap_set_meta_gen_error(stripe, sector_nr, sectors_per_tree);
 		scrub_bitmap_set_error(stripe, sector_nr, sectors_per_tree);
 		btrfs_warn_rl(fs_info,
-			      "tree block %llu mirror %u has bad generation, has %llu want %llu",
+			      "scrub: tree block %llu mirror %u has bad generation, has %llu want %llu",
 			      logical, stripe->mirror_num,
 			      btrfs_stack_header_generation(header),
 			      stripe->sectors[sector_nr].generation);
···
 	 */
 	if (unlikely(sector_nr + sectors_per_tree > stripe->nr_sectors)) {
 		btrfs_warn_rl(fs_info,
-			      "tree block at %llu crosses stripe boundary %llu",
+			      "scrub: tree block at %llu crosses stripe boundary %llu",
 			      stripe->logical +
 			      (sector_nr << fs_info->sectorsize_bits),
 			      stripe->logical);
···
 		if (repaired) {
 			if (dev) {
 				btrfs_err_rl_in_rcu(fs_info,
-					"fixed up error at logical %llu on dev %s physical %llu",
+					"scrub: fixed up error at logical %llu on dev %s physical %llu",
 					stripe->logical, btrfs_dev_name(dev),
 					physical);
 			} else {
 				btrfs_err_rl_in_rcu(fs_info,
-					"fixed up error at logical %llu on mirror %u",
+					"scrub: fixed up error at logical %llu on mirror %u",
 					stripe->logical, stripe->mirror_num);
 			}
 			continue;
···
 		/* The remaining are all for unrepaired. */
 		if (dev) {
 			btrfs_err_rl_in_rcu(fs_info,
-				"unable to fixup (regular) error at logical %llu on dev %s physical %llu",
+"scrub: unable to fixup (regular) error at logical %llu on dev %s physical %llu",
 				stripe->logical, btrfs_dev_name(dev),
 				physical);
 		} else {
 			btrfs_err_rl_in_rcu(fs_info,
-				"unable to fixup (regular) error at logical %llu on mirror %u",
+				"scrub: unable to fixup (regular) error at logical %llu on mirror %u",
 				stripe->logical, stripe->mirror_num);
 		}
 
···
 						  physical,
 						  sctx->write_pointer);
 		if (ret)
-			btrfs_err(fs_info,
-				  "zoned: failed to recover write pointer");
+			btrfs_err(fs_info, "scrub: zoned: failed to recover write pointer");
 	}
 	mutex_unlock(&sctx->wr_lock);
 	btrfs_dev_clear_zone_empty(sctx->wr_tgtdev, physical);
···
 	int ret;
 
 	if (unlikely(!extent_root || !csum_root)) {
-		btrfs_err(fs_info, "no valid extent or csum root for scrub");
+		btrfs_err(fs_info, "scrub: no valid extent or csum root found");
 		return -EUCLEAN;
 	}
 	memset(stripe->sectors, 0, sizeof(struct scrub_sector_verification) *
···
 			struct btrfs_fs_info *fs_info = stripe->bg->fs_info;
 
 			btrfs_err(fs_info,
-			      "stripe %llu has unrepaired metadata sector at %llu",
+			      "scrub: stripe %llu has unrepaired metadata sector at logical %llu",
 			      stripe->logical,
 			      stripe->logical + (i << fs_info->sectorsize_bits));
 			return true;
···
 	bitmap_and(&error, &error, &has_extent, stripe->nr_sectors);
 	if (!bitmap_empty(&error, stripe->nr_sectors)) {
 		btrfs_err(fs_info,
-"unrepaired sectors detected, full stripe %llu data stripe %u errors %*pbl",
+"scrub: unrepaired sectors detected, full stripe %llu data stripe %u errors %*pbl",
 			  full_stripe_start, i, stripe->nr_sectors,
 			  &error);
 		ret = -EIO;
···
 		ro_set = 0;
 	} else if (ret == -ETXTBSY) {
 		btrfs_warn(fs_info,
-			   "skipping scrub of block group %llu due to active swapfile",
+			   "scrub: skipping scrub of block group %llu due to active swapfile",
 			   cache->start);
 		scrub_pause_off(fs_info);
 		ret = 0;
 		goto skip_unfreeze;
 	} else {
-		btrfs_warn(fs_info,
-			   "failed setting block group ro: %d", ret);
+		btrfs_warn(fs_info, "scrub: failed setting block group ro: %d",
+			   ret);
 		btrfs_unfreeze_block_group(cache);
 		btrfs_put_block_group(cache);
 		scrub_pause_off(fs_info);
···
 	ret = btrfs_check_super_csum(fs_info, sb);
 	if (ret != 0) {
 		btrfs_err_rl(fs_info,
-			     "super block at physical %llu devid %llu has bad csum",
+			     "scrub: super block at physical %llu devid %llu has bad csum",
 			     physical, dev->devid);
 		return -EIO;
 	}
 	if (btrfs_super_generation(sb) != generation) {
 		btrfs_err_rl(fs_info,
-"super block at physical %llu devid %llu has bad generation %llu expect %llu",
+"scrub: super block at physical %llu devid %llu has bad generation %llu expect %llu",
 			     physical, dev->devid,
 			     btrfs_super_generation(sb), generation);
 		return -EUCLEAN;
···
 	    !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state)) {
 		mutex_unlock(&fs_info->fs_devices->device_list_mutex);
 		btrfs_err_in_rcu(fs_info,
-				 "scrub on devid %llu: filesystem on %s is not writable",
+				 "scrub: devid %llu: filesystem on %s is not writable",
 				 devid, btrfs_dev_name(dev));
 		ret = -EROFS;
 		goto out;
+9 -8
fs/btrfs/tree-log.c
···
 			extent_end = ALIGN(start + size,
 					   fs_info->sectorsize);
 		} else {
-			ret = 0;
-			goto out;
+			btrfs_err(fs_info,
+		  "unexpected extent type=%d root=%llu inode=%llu offset=%llu",
+				  found_type, btrfs_root_id(root), key->objectid, key->offset);
+			return -EUCLEAN;
 		}
 
 		inode = read_one_inode(root, key->objectid);
-		if (!inode) {
-			ret = -EIO;
-			goto out;
-		}
+		if (!inode)
+			return -EIO;
 
 	/*
 	 * first check to see if we already have this extent in the
···
 	ret = unlink_inode_for_log_replay(trans, dir, inode, &name);
 out:
 	kfree(name.name);
-	iput(&inode->vfs_inode);
+	if (inode)
+		iput(&inode->vfs_inode);
 	return ret;
 }
···
 				ret = unlink_inode_for_log_replay(trans,
 						victim_parent,
 						inode, &victim_name);
+				iput(&victim_parent->vfs_inode);
 			}
-			iput(&victim_parent->vfs_inode);
 			kfree(victim_name.name);
 			if (ret)
 				return ret;
···
 
 		if (la > map->m_la) {
 			r = mid;
+			if (la > lend) {
+				DBG_BUGON(1);
+				return -EFSCORRUPTED;
+			}
 			lend = la;
 		} else {
 			l = mid + 1;
···
 		}
 	}
 	map->m_llen = lend - map->m_la;
-	if (!last && map->m_llen < sb->s_blocksize) {
-		erofs_err(sb, "extent too small %llu @ offset %llu of nid %llu",
-			  map->m_llen, map->m_la, vi->nid);
-		DBG_BUGON(1);
-		return -EFSCORRUPTED;
-	}
 	return 0;
 }
 
+38
fs/f2fs/file.c
···
 #include <trace/events/f2fs.h>
 #include <uapi/linux/f2fs.h>
 
+static void f2fs_zero_post_eof_page(struct inode *inode, loff_t new_size)
+{
+	loff_t old_size = i_size_read(inode);
+
+	if (old_size >= new_size)
+		return;
+
+	/* zero or drop pages only in range of [old_size, new_size] */
+	truncate_pagecache(inode, old_size);
+}
+
 static vm_fault_t f2fs_filemap_fault(struct vm_fault *vmf)
 {
 	struct inode *inode = file_inode(vmf->vma->vm_file);
···
 
 	f2fs_bug_on(sbi, f2fs_has_inline_data(inode));
 
+	filemap_invalidate_lock(inode->i_mapping);
+	f2fs_zero_post_eof_page(inode, (folio->index + 1) << PAGE_SHIFT);
+	filemap_invalidate_unlock(inode->i_mapping);
+
 	file_update_time(vmf->vma->vm_file);
 	filemap_invalidate_lock_shared(inode->i_mapping);
+
 	folio_lock(folio);
 	if (unlikely(folio->mapping != inode->i_mapping ||
 		     folio_pos(folio) > i_size_read(inode) ||
···
 	f2fs_down_write(&fi->i_gc_rwsem[WRITE]);
 	filemap_invalidate_lock(inode->i_mapping);
 
+	if (attr->ia_size > old_size)
+		f2fs_zero_post_eof_page(inode, attr->ia_size);
 	truncate_setsize(inode, attr->ia_size);
 
 	if (attr->ia_size <= old_size)
···
 	ret = f2fs_convert_inline_inode(inode);
 	if (ret)
 		return ret;
+
+	filemap_invalidate_lock(inode->i_mapping);
+	f2fs_zero_post_eof_page(inode, offset + len);
+	filemap_invalidate_unlock(inode->i_mapping);
 
 	pg_start = ((unsigned long long) offset) >> PAGE_SHIFT;
 	pg_end = ((unsigned long long) offset + len) >> PAGE_SHIFT;
···
 	f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 	filemap_invalidate_lock(inode->i_mapping);
 
+	f2fs_zero_post_eof_page(inode, offset + len);
+
 	f2fs_lock_op(sbi);
 	f2fs_drop_extent_tree(inode);
 	truncate_pagecache(inode, offset);
···
 	ret = filemap_write_and_wait_range(mapping, offset, offset + len - 1);
 	if (ret)
 		return ret;
+
+	filemap_invalidate_lock(mapping);
+	f2fs_zero_post_eof_page(inode, offset + len);
+	filemap_invalidate_unlock(mapping);
 
 	pg_start = ((unsigned long long) offset) >> PAGE_SHIFT;
 	pg_end = ((unsigned long long) offset + len) >> PAGE_SHIFT;
···
 	/* avoid gc operation during block exchange */
 	f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 	filemap_invalidate_lock(mapping);
+
+	f2fs_zero_post_eof_page(inode, offset + len);
 	truncate_pagecache(inode, offset);
 
 	while (!ret && idx > pg_start) {
···
 	err = f2fs_convert_inline_inode(inode);
 	if (err)
 		return err;
+
+	filemap_invalidate_lock(inode->i_mapping);
+	f2fs_zero_post_eof_page(inode, offset + len);
+	filemap_invalidate_unlock(inode->i_mapping);
 
 	f2fs_balance_fs(sbi, true);
 
···
 	err = file_modified(file);
 	if (err)
 		return err;
+
+	filemap_invalidate_lock(inode->i_mapping);
+	f2fs_zero_post_eof_page(inode, iocb->ki_pos + iov_iter_count(from));
+	filemap_invalidate_unlock(inode->i_mapping);
 	return count;
 }
 
-1
fs/f2fs/node.c
···
 
 		if (!__write_node_folio(folio, false, &submitted,
 					wbc, do_balance, io_type, NULL)) {
-			folio_unlock(folio);
 			folio_batch_release(&fbatch);
 			ret = -EIO;
 			goto out;
+6 -2
fs/file.c
···
 	if (!(file->f_mode & FMODE_ATOMIC_POS) && !file->f_op->iterate_shared)
 		return false;
 
-	VFS_WARN_ON_ONCE((file_count(file) > 1) &&
-			 !mutex_is_locked(&file->f_pos_lock));
+	/*
+	 * Note that we are not guaranteed to be called after fdget_pos() on
+	 * this file obj, in which case the caller is expected to provide the
+	 * appropriate locking.
+	 */
+
 	return true;
 }
 
+4
fs/fuse/inode.c
···
 #include "fuse_i.h"
 #include "dev_uring_i.h"
 
+#include <linux/dax.h>
 #include <linux/pagemap.h>
 #include <linux/slab.h>
 #include <linux/file.h>
···
 
 	/* Will write inode on close/munmap and in all other dirtiers */
 	WARN_ON(inode->i_state & I_DIRTY_INODE);
+
+	if (FUSE_IS_DAX(inode))
+		dax_break_layout_final(inode);
 
 	truncate_inode_pages_final(&inode->i_data);
 	clear_inode(inode);
+13 -4
fs/namei.c
···
  * @base: base directory to lookup from
  *
  * Look up a dentry by name in the dcache, returning NULL if it does not
- * currently exist.  The function does not try to create a dentry.
+ * currently exist.  The function does not try to create a dentry and if one
+ * is found it doesn't try to revalidate it.
  *
  * Note that this routine is purely a helper for filesystem usage and should
  * not be called by generic code.  It does no permission checking.
···
 	if (err)
 		return ERR_PTR(err);
 
-	return lookup_dcache(name, base, 0);
+	return d_lookup(base, name);
 }
 EXPORT_SYMBOL(try_lookup_noperm);
 
···
  * Note that this routine is purely a helper for filesystem usage and should
  * not be called by generic code.  It does no permission checking.
  *
- * Unlike lookup_noperm, it should be called without the parent
+ * Unlike lookup_noperm(), it should be called without the parent
  * i_rwsem held, and will take the i_rwsem itself if necessary.
+ *
+ * Unlike try_lookup_noperm() it *does* revalidate the dentry if it already
+ * existed.
  */
 struct dentry *lookup_noperm_unlocked(struct qstr *name, struct dentry *base)
 {
 	struct dentry *ret;
+	int err;
 
-	ret = try_lookup_noperm(name, base);
+	err = lookup_noperm_common(name, base);
+	if (err)
+		return ERR_PTR(err);
+
+	ret = lookup_dcache(name, base, 0);
 	if (!ret)
 		ret = lookup_slow(name, base, 0);
 	return ret;
+67 -50
fs/namespace.c
···
 	return dst_mnt;
 }
 
-/* Caller should check returned pointer for errors */
-
-struct vfsmount *collect_mounts(const struct path *path)
+static inline bool extend_array(struct path **res, struct path **to_free,
+				unsigned n, unsigned *count, unsigned new_count)
 {
-	struct mount *tree;
-	namespace_lock();
-	if (!check_mnt(real_mount(path->mnt)))
-		tree = ERR_PTR(-EINVAL);
-	else
-		tree = copy_tree(real_mount(path->mnt), path->dentry,
-				 CL_COPY_ALL | CL_PRIVATE);
-	namespace_unlock();
-	if (IS_ERR(tree))
-		return ERR_CAST(tree);
-	return &tree->mnt;
+	struct path *p;
+
+	if (likely(n < *count))
+		return true;
+	p = kmalloc_array(new_count, sizeof(struct path), GFP_KERNEL);
+	if (p && *count)
+		memcpy(p, *res, *count * sizeof(struct path));
+	*count = new_count;
+	kfree(*to_free);
+	*to_free = *res = p;
+	return p;
+}
+
+struct path *collect_paths(const struct path *path,
+			   struct path *prealloc, unsigned count)
+{
+	struct mount *root = real_mount(path->mnt);
+	struct mount *child;
+	struct path *res = prealloc, *to_free = NULL;
+	unsigned n = 0;
+
+	guard(rwsem_read)(&namespace_sem);
+
+	if (!check_mnt(root))
+		return ERR_PTR(-EINVAL);
+	if (!extend_array(&res, &to_free, 0, &count, 32))
+		return ERR_PTR(-ENOMEM);
+	res[n++] = *path;
+	list_for_each_entry(child, &root->mnt_mounts, mnt_child) {
+		if (!is_subdir(child->mnt_mountpoint, path->dentry))
+			continue;
+		for (struct mount *m = child; m; m = next_mnt(m, child)) {
+			if (!extend_array(&res, &to_free, n, &count, 2 * count))
+				return ERR_PTR(-ENOMEM);
+			res[n].mnt = &m->mnt;
+			res[n].dentry = m->mnt.mnt_root;
+			n++;
+		}
+	}
+	if (!extend_array(&res, &to_free, n, &count, count + 1))
+		return ERR_PTR(-ENOMEM);
+	memset(res + n, 0, (count - n) * sizeof(struct path));
+	for (struct path *p = res; p->mnt; p++)
+		path_get(p);
+	return res;
+}
+
+void drop_collected_paths(struct path *paths, struct path *prealloc)
+{
+	for (struct path *p = paths; p->mnt; p++)
+		path_put(p);
+	if (paths != prealloc)
+		kfree(paths);
 }
 
 static void free_mnt_ns(struct mnt_namespace *);
···
 	/* Make sure we notice when we leak mounts. */
 	VFS_WARN_ON_ONCE(!mnt_ns_empty(ns));
 	free_mnt_ns(ns);
-}
-
-void drop_collected_mounts(struct vfsmount *mnt)
-{
-	namespace_lock();
-	lock_mount_hash();
-	umount_tree(real_mount(mnt), 0);
-	unlock_mount_hash();
-	namespace_unlock();
 }
 
 static bool __has_locked_children(struct mount *mnt, struct dentry *dentry)
···
 	return &new_mnt->mnt;
 }
 EXPORT_SYMBOL_GPL(clone_private_mount);
-
-int iterate_mounts(int (*f)(struct vfsmount *, void *), void *arg,
-		   struct vfsmount *root)
-{
-	struct mount *mnt;
-	int res = f(root, arg);
-	if (res)
-		return res;
-	list_for_each_entry(mnt, &real_mount(root)->mnt_list, mnt_list) {
-		res = f(&mnt->mnt, arg);
-		if (res)
-			return res;
-	}
-	return 0;
-}
 
 static void lock_mnt_tree(struct mount *mnt)
 {
···
 	hlist_for_each_entry_safe(child, n, &tree_list, mnt_hash) {
 		struct mount *q;
 		hlist_del_init(&child->mnt_hash);
-		q = __lookup_mnt(&child->mnt_parent->mnt,
-				 child->mnt_mountpoint);
-		if (q)
-			mnt_change_mountpoint(child, smp, q);
 		/* Notice when we are propagating across user namespaces */
 		if (child->mnt_parent->mnt_ns->user_ns != user_ns)
 			lock_mnt_tree(child);
 		child->mnt.mnt_flags &= ~MNT_LOCKED;
+		q = __lookup_mnt(&child->mnt_parent->mnt,
+				 child->mnt_mountpoint);
+		if (q)
+			mnt_change_mountpoint(child, smp, q);
 		commit_tree(child);
 	}
 	put_mountpoint(smp);
···
 		kattr.kflags |= MOUNT_KATTR_RECURSE;
 
 		ret = wants_mount_setattr(uattr, usize, &kattr);
-		if (ret < 0)
-			return ret;
-
-		if (ret) {
+		if (ret > 0) {
 			ret = do_mount_setattr(&file->f_path, &kattr);
-			if (ret)
-				return ret;
-
 			finish_mount_kattr(&kattr);
 		}
+		if (ret)
+			return ret;
 	}
 
 	fd = get_unused_fd_flags(flags & O_CLOEXEC);
···
 {
 	if (!refcount_dec_and_test(&ns->ns.count))
 		return;
-	drop_collected_mounts(&ns->root->mnt);
+	namespace_lock();
+	lock_mount_hash();
+	umount_tree(ns->root, 0);
+	unlock_mount_hash();
+	namespace_unlock();
 	free_mnt_ns(ns);
 }
 
···
  */
 int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info)
 {
-	int *nthreads, count = 0, nrpools, i, ret = -EOPNOTSUPP, rem;
+	int *nthreads, nrpools = 0, i, ret = -EOPNOTSUPP, rem;
 	struct net *net = genl_info_net(info);
 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
 	const struct nlattr *attr;
···
 	/* count number of SERVER_THREADS values */
 	nlmsg_for_each_attr(attr, info->nlhdr, GENL_HDRLEN, rem) {
 		if (nla_type(attr) == NFSD_A_SERVER_THREADS)
-			count++;
+			nrpools++;
 	}
 
 	mutex_lock(&nfsd_mutex);
 
-	nrpools = max(count, nfsd_nrpools(net));
 	nthreads = kcalloc(nrpools, sizeof(int), GFP_KERNEL);
 	if (!nthreads) {
 		ret = -ENOMEM;
+8 -2
fs/overlayfs/namei.c
···
 bool ovl_lower_positive(struct dentry *dentry)
 {
 	struct ovl_entry *poe = OVL_E(dentry->d_parent);
-	struct qstr *name = &dentry->d_name;
+	const struct qstr *name = &dentry->d_name;
 	const struct cred *old_cred;
 	unsigned int i;
 	bool positive = false;
···
 		struct dentry *this;
 		struct ovl_path *parentpath = &ovl_lowerstack(poe)[i];
 
+		/*
+		 * We need to make a non-const copy of dentry->d_name,
+		 * because lookup_one_positive_unlocked() will hash name
+		 * with parentpath base, which is on another (lower fs).
+		 */
 		this = lookup_one_positive_unlocked(
 				mnt_idmap(parentpath->layer->mnt),
-				name, parentpath->dentry);
+				&QSTR_LEN(name->name, name->len),
+				parentpath->dentry);
 		if (IS_ERR(this)) {
 			switch (PTR_ERR(this)) {
 			case -ENOENT:
···
 		categories |= PAGE_IS_FILE;
 	}
 
-	if (is_zero_pfn(pmd_pfn(pmd)))
+	if (is_huge_zero_pmd(pmd))
 		categories |= PAGE_IS_PFNZERO;
 	if (pmd_soft_dirty(pmd))
 		categories |= PAGE_IS_SOFT_DIRTY;
+9 -4
fs/resctrl/ctrlmondata.c
···
 	struct rmid_read rr = {0};
 	struct rdt_mon_domain *d;
 	struct rdtgroup *rdtgrp;
+	int domid, cpu, ret = 0;
 	struct rdt_resource *r;
+	struct cacheinfo *ci;
 	struct mon_data *md;
-	int domid, ret = 0;
 
 	rdtgrp = rdtgroup_kn_lock_live(of->kn);
 	if (!rdtgrp) {
···
 	 * one that matches this cache id.
 	 */
 	list_for_each_entry(d, &r->mon_domains, hdr.list) {
-		if (d->ci->id == domid) {
-			rr.ci = d->ci;
+		if (d->ci_id == domid) {
+			rr.ci_id = d->ci_id;
+			cpu = cpumask_any(&d->hdr.cpu_mask);
+			ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);
+			if (!ci)
+				continue;
 			mon_event_read(&rr, r, NULL, rdtgrp,
-				       &d->ci->shared_cpu_map, evtid, false);
+				       &ci->shared_cpu_map, evtid, false);
 			goto checkresult;
 		}
 	}
+2 -2
fs/resctrl/internal.h
···
  *	       domains in @r sharing L3 @ci.id
  * @evtid:     Which monitor event to read.
  * @first:     Initialize MBM counter when true.
- * @ci:        Cacheinfo for L3. Only set when @d is NULL. Used when summing domains.
+ * @ci_id:     Cacheinfo id for L3. Only set when @d is NULL. Used when summing domains.
  * @err:       Error encountered when reading counter.
  * @val:       Returned value of event counter. If @rgrp is a parent resource group,
  *	       @val includes the sum of event counts from its child resource groups.
···
 	struct rdt_mon_domain	*d;
 	enum resctrl_event_id	evtid;
 	bool			first;
-	struct cacheinfo	*ci;
+	unsigned int		ci_id;
 	int			err;
 	u64			val;
 	void			*arch_mon_ctx;
+4 -2
fs/resctrl/monitor.c
···
 {
 	int cpu = smp_processor_id();
 	struct rdt_mon_domain *d;
+	struct cacheinfo *ci;
 	struct mbm_state *m;
 	int err, ret;
 	u64 tval = 0;
···
 	}
 
 	/* Summing domains that share a cache, must be on a CPU for that cache. */
-	if (!cpumask_test_cpu(cpu, &rr->ci->shared_cpu_map))
+	ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);
+	if (!ci || ci->id != rr->ci_id)
 		return -EINVAL;
 
 	/*
···
 	 */
 	ret = -EINVAL;
 	list_for_each_entry(d, &rr->r->mon_domains, hdr.list) {
-		if (d->ci->id != rr->ci->id)
+		if (d->ci_id != rr->ci_id)
 			continue;
 		err = resctrl_arch_rmid_read(rr->r, d, closid, rmid,
 					     rr->evtid, &tval, rr->arch_mon_ctx);
···
 	spin_lock(&cfids->cfid_list_lock);
 	list_for_each_entry(cfid, &cfids->entries, entry) {
 		tmp_list = kmalloc(sizeof(*tmp_list), GFP_ATOMIC);
-		if (tmp_list == NULL)
-			break;
+		if (tmp_list == NULL) {
+			/*
+			 * If the malloc() fails, we won't drop all
+			 * dentries, and unmounting is likely to trigger
+			 * a 'Dentry still in use' error.
+			 */
+			cifs_tcon_dbg(VFS, "Out of memory while dropping dentries\n");
+			spin_unlock(&cfids->cfid_list_lock);
+			spin_unlock(&cifs_sb->tlink_tree_lock);
+			goto done;
+		}
 		spin_lock(&cfid->fid_lock);
 		tmp_list->dentry = cfid->dentry;
 		cfid->dentry = NULL;
···
 	}
 	spin_unlock(&cifs_sb->tlink_tree_lock);
 
+done:
 	list_for_each_entry_safe(tmp_list, q, &entry, entry) {
 		list_del(&tmp_list->entry);
 		dput(tmp_list->dentry);
···
 struct TCP_Server_Info {
 	struct list_head tcp_ses_list;
 	struct list_head smb_ses_list;
+	struct list_head rlist; /* reconnect list */
 	spinlock_t srv_lock;	/* protect anything here that is not protected */
 	__u64 conn_id;	/* connection identifier (useful for debugging) */
 	int srv_count; /* reference counter */
fs/smb/client/connect.c | +37 -22
···
 				 (SMB_INTERFACE_POLL_INTERVAL * HZ));
 }
 
+#define set_need_reco(server) \
+do { \
+	spin_lock(&server->srv_lock); \
+	if (server->tcpStatus != CifsExiting) \
+		server->tcpStatus = CifsNeedReconnect; \
+	spin_unlock(&server->srv_lock); \
+} while (0)
+
 /*
  * Update the tcpStatus for the server.
  * This is used to signal the cifsd thread to call cifs_reconnect
···
 cifs_signal_cifsd_for_reconnect(struct TCP_Server_Info *server,
 				bool all_channels)
 {
-	struct TCP_Server_Info *pserver;
+	struct TCP_Server_Info *nserver;
 	struct cifs_ses *ses;
+	LIST_HEAD(reco);
 	int i;
-
-	/* If server is a channel, select the primary channel */
-	pserver = SERVER_IS_CHAN(server) ? server->primary_server : server;
 
 	/* if we need to signal just this channel */
 	if (!all_channels) {
-		spin_lock(&server->srv_lock);
-		if (server->tcpStatus != CifsExiting)
-			server->tcpStatus = CifsNeedReconnect;
-		spin_unlock(&server->srv_lock);
+		set_need_reco(server);
 		return;
 	}
 
-	spin_lock(&cifs_tcp_ses_lock);
-	list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {
-		if (cifs_ses_exiting(ses))
-			continue;
-		spin_lock(&ses->chan_lock);
-		for (i = 0; i < ses->chan_count; i++) {
-			if (!ses->chans[i].server)
+	if (SERVER_IS_CHAN(server))
+		server = server->primary_server;
+	scoped_guard(spinlock, &cifs_tcp_ses_lock) {
+		set_need_reco(server);
+		list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
+			spin_lock(&ses->ses_lock);
+			if (ses->ses_status == SES_EXITING) {
+				spin_unlock(&ses->ses_lock);
 				continue;
-
-			spin_lock(&ses->chans[i].server->srv_lock);
-			if (ses->chans[i].server->tcpStatus != CifsExiting)
-				ses->chans[i].server->tcpStatus = CifsNeedReconnect;
-			spin_unlock(&ses->chans[i].server->srv_lock);
+			}
+			spin_lock(&ses->chan_lock);
+			for (i = 1; i < ses->chan_count; i++) {
+				nserver = ses->chans[i].server;
+				if (!nserver)
+					continue;
+				nserver->srv_count++;
+				list_add(&nserver->rlist, &reco);
+			}
+			spin_unlock(&ses->chan_lock);
+			spin_unlock(&ses->ses_lock);
 		}
-		spin_unlock(&ses->chan_lock);
 	}
-	spin_unlock(&cifs_tcp_ses_lock);
+
+	list_for_each_entry_safe(server, nserver, &reco, rlist) {
+		list_del_init(&server->rlist);
+		set_need_reco(server);
+		cifs_put_tcp_session(server, 0);
+	}
 }
 
 /*
···
 		return 0;
 	}
 
+	server->lstrp = jiffies;
 	server->tcpStatus = CifsInNegotiate;
 	spin_unlock(&server->srv_lock);
 
fs/smb/client/file.c | +6 -2
···
 	struct netfs_io_stream *stream = &req->rreq.io_streams[subreq->stream_nr];
 	struct TCP_Server_Info *server;
 	struct cifsFileInfo *open_file = req->cfile;
+	struct cifs_sb_info *cifs_sb = CIFS_SB(wdata->rreq->inode->i_sb);
 	size_t wsize = req->rreq.wsize;
 	int rc;
 
···
 
 	server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses);
 	wdata->server = server;
+
+	if (cifs_sb->ctx->wsize == 0)
+		cifs_negotiate_wsize(server, cifs_sb->ctx,
+				     tlink_tcon(req->cfile->tlink));
 
 retry:
 	if (open_file->invalidHandle) {
···
 	server = cifs_pick_channel(tlink_tcon(req->cfile->tlink)->ses);
 	rdata->server = server;
 
-	if (cifs_sb->ctx->rsize == 0) {
+	if (cifs_sb->ctx->rsize == 0)
 		cifs_negotiate_rsize(server, cifs_sb->ctx,
 				     tlink_tcon(req->cfile->tlink));
-	}
 
 	rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize,
 					   &size, &rdata->credits);
···
 		abs_path += sizeof("\\DosDevices\\")-1;
 	else if (strstarts(abs_path, "\\GLOBAL??\\"))
 		abs_path += sizeof("\\GLOBAL??\\")-1;
-	else {
-		/* Unhandled absolute symlink, points outside of DOS/Win32 */
-		cifs_dbg(VFS,
-			 "absolute symlink '%s' cannot be converted from NT format "
-			 "because points to unknown target\n",
-			 smb_target);
-		rc = -EIO;
-		goto out;
-	}
+	else
+		goto out_unhandled_target;
 
 	/* Sometimes path separator after \?? is double backslash */
 	if (abs_path[0] == '\\')
···
 			abs_path++;
 		abs_path[0] = drive_letter;
 	} else {
-		/* Unhandled absolute symlink. Report an error. */
-		cifs_dbg(VFS,
-			 "absolute symlink '%s' cannot be converted from NT format "
-			 "because points to unknown target\n",
-			 smb_target);
-		rc = -EIO;
-		goto out;
+		goto out_unhandled_target;
 	}
 
 	abs_path_len = strlen(abs_path)+1;
···
 		 * These paths have same format as Linux symlinks, so no
 		 * conversion is needed.
 		 */
+out_unhandled_target:
 		linux_target = smb_target;
 		smb_target = NULL;
 	}
···
 	if (!have_xattr_dev && (tag == IO_REPARSE_TAG_LX_CHR || tag == IO_REPARSE_TAG_LX_BLK))
 		return false;
 
-	fattr->cf_dtype = S_DT(fattr->cf_mode);
 	return true;
 }
fs/smb/client/sess.c | +1 -2
···
 	ctx->domainauto = ses->domainAuto;
 	ctx->domainname = ses->domainName;
 
-	/* no hostname for extra channels */
-	ctx->server_hostname = "";
+	ctx->server_hostname = ses->server->hostname;
 
 	ctx->username = ses->user_name;
 	ctx->password = ses->password;
fs/smb/client/smbdirect.c | +60 -106
···
 		.local_dma_lkey	= sc->ib.pd->local_dma_lkey,
 		.direction	= DMA_TO_DEVICE,
 	};
+	size_t payload_len = umin(*_remaining_data_length,
+				  sp->max_send_size - sizeof(*packet));
 
-	rc = smb_extract_iter_to_rdma(iter, *_remaining_data_length,
+	rc = smb_extract_iter_to_rdma(iter, payload_len,
 				      &extract);
 	if (rc < 0)
 		goto err_dma;
···
 
 	info->count_send_empty++;
 	return smbd_post_send_iter(info, NULL, &remaining_data_length);
+}
+
+static int smbd_post_send_full_iter(struct smbd_connection *info,
+				    struct iov_iter *iter,
+				    int *_remaining_data_length)
+{
+	int rc = 0;
+
+	/*
+	 * smbd_post_send_iter() respects the
+	 * negotiated max_send_size, so we need to
+	 * loop until the full iter is posted
+	 */
+
+	while (iov_iter_count(iter) > 0) {
+		rc = smbd_post_send_iter(info, iter, _remaining_data_length);
+		if (rc < 0)
+			break;
+	}
+
+	return rc;
 }
 
 /*
···
 	char name[MAX_NAME_LEN];
 	int rc;
 
+	if (WARN_ON_ONCE(sp->max_recv_size < sizeof(struct smbdirect_data_transfer)))
+		return -ENOMEM;
+
 	scnprintf(name, MAX_NAME_LEN, "smbd_request_%p", info);
 	info->request_cache =
 		kmem_cache_create(
···
 		goto out1;
 
 	scnprintf(name, MAX_NAME_LEN, "smbd_response_%p", info);
+
+	struct kmem_cache_args response_args = {
+		.align		= __alignof__(struct smbd_response),
+		.useroffset	= (offsetof(struct smbd_response, packet) +
+				   sizeof(struct smbdirect_data_transfer)),
+		.usersize	= sp->max_recv_size - sizeof(struct smbdirect_data_transfer),
+	};
 	info->response_cache =
-		kmem_cache_create(
-			name,
-			sizeof(struct smbd_response) +
-				sp->max_recv_size,
-			0, SLAB_HWCACHE_ALIGN, NULL);
+		kmem_cache_create(name,
+				  sizeof(struct smbd_response) + sp->max_recv_size,
+				  &response_args, SLAB_HWCACHE_ALIGN);
 	if (!info->response_cache)
 		goto out2;
···
 }
 
 /*
- * Receive data from receive reassembly queue
+ * Receive data from the transport's receive reassembly queue
  * All the incoming data packets are placed in reassembly queue
- * buf: the buffer to read data into
+ * iter: the buffer to read data into
  * size: the length of data to read
  * return value: actual data read
- * Note: this implementation copies the data from reassebmly queue to receive
+ *
+ * Note: this implementation copies the data from reassembly queue to receive
  * buffers used by upper layer. This is not the optimal code path. A better way
  * to do it is to not have upper layer allocate its receive buffers but rather
  * borrow the buffer from reassembly queue, and return it after data is
  * consumed. But this will require more changes to upper layer code, and also
  * need to consider packet boundaries while they still being reassembled.
  */
-static int smbd_recv_buf(struct smbd_connection *info, char *buf,
-		unsigned int size)
+int smbd_recv(struct smbd_connection *info, struct msghdr *msg)
 {
 	struct smbdirect_socket *sc = &info->socket;
 	struct smbd_response *response;
 	struct smbdirect_data_transfer *data_transfer;
+	size_t size = iov_iter_count(&msg->msg_iter);
 	int to_copy, to_read, data_read, offset;
 	u32 data_length, remaining_data_length, data_offset;
 	int rc;
+
+	if (WARN_ON_ONCE(iov_iter_rw(&msg->msg_iter) == WRITE))
+		return -EINVAL; /* It's a bug in upper layer to get there */
 
 again:
 	/*
···
 	 * the only one reading from the front of the queue. The transport
 	 * may add more entries to the back of the queue at the same time
 	 */
-	log_read(INFO, "size=%d info->reassembly_data_length=%d\n", size,
+	log_read(INFO, "size=%zd info->reassembly_data_length=%d\n", size,
 		info->reassembly_data_length);
 	if (info->reassembly_data_length >= size) {
 		int queue_length;
···
 		if (response->first_segment && size == 4) {
 			unsigned int rfc1002_len =
 				data_length + remaining_data_length;
-			*((__be32 *)buf) = cpu_to_be32(rfc1002_len);
+			__be32 rfc1002_hdr = cpu_to_be32(rfc1002_len);
+			if (copy_to_iter(&rfc1002_hdr, sizeof(rfc1002_hdr),
+					 &msg->msg_iter) != sizeof(rfc1002_hdr))
+				return -EFAULT;
 			data_read = 4;
 			response->first_segment = false;
 			log_read(INFO, "returning rfc1002 length %d\n",
···
 		}
 
 		to_copy = min_t(int, data_length - offset, to_read);
-		memcpy(
-			buf + data_read,
-			(char *)data_transfer + data_offset + offset,
-			to_copy);
+		if (copy_to_iter((char *)data_transfer + data_offset + offset,
+				 to_copy, &msg->msg_iter) != to_copy)
+			return -EFAULT;
 
 		/* move on to the next buffer? */
 		if (to_copy == data_length - offset) {
···
 }
 
 /*
- * Receive a page from receive reassembly queue
- * page: the page to read data into
- * to_read: the length of data to read
- * return value: actual data read
- */
-static int smbd_recv_page(struct smbd_connection *info,
-		struct page *page, unsigned int page_offset,
-		unsigned int to_read)
-{
-	struct smbdirect_socket *sc = &info->socket;
-	int ret;
-	char *to_address;
-	void *page_address;
-
-	/* make sure we have the page ready for read */
-	ret = wait_event_interruptible(
-		info->wait_reassembly_queue,
-		info->reassembly_data_length >= to_read ||
-			sc->status != SMBDIRECT_SOCKET_CONNECTED);
-	if (ret)
-		return ret;
-
-	/* now we can read from reassembly queue and not sleep */
-	page_address = kmap_atomic(page);
-	to_address = (char *) page_address + page_offset;
-
-	log_read(INFO, "reading from page=%p address=%p to_read=%d\n",
-		page, to_address, to_read);
-
-	ret = smbd_recv_buf(info, to_address, to_read);
-	kunmap_atomic(page_address);
-
-	return ret;
-}
-
-/*
- * Receive data from transport
- * msg: a msghdr point to the buffer, can be ITER_KVEC or ITER_BVEC
- * return: total bytes read, or 0. SMB Direct will not do partial read.
- */
-int smbd_recv(struct smbd_connection *info, struct msghdr *msg)
-{
-	char *buf;
-	struct page *page;
-	unsigned int to_read, page_offset;
-	int rc;
-
-	if (iov_iter_rw(&msg->msg_iter) == WRITE) {
-		/* It's a bug in upper layer to get there */
-		cifs_dbg(VFS, "Invalid msg iter dir %u\n",
-			 iov_iter_rw(&msg->msg_iter));
-		rc = -EINVAL;
-		goto out;
-	}
-
-	switch (iov_iter_type(&msg->msg_iter)) {
-	case ITER_KVEC:
-		buf = msg->msg_iter.kvec->iov_base;
-		to_read = msg->msg_iter.kvec->iov_len;
-		rc = smbd_recv_buf(info, buf, to_read);
-		break;
-
-	case ITER_BVEC:
-		page = msg->msg_iter.bvec->bv_page;
-		page_offset = msg->msg_iter.bvec->bv_offset;
-		to_read = msg->msg_iter.bvec->bv_len;
-		rc = smbd_recv_page(info, page, page_offset, to_read);
-		break;
-
-	default:
-		/* It's a bug in upper layer to get there */
-		cifs_dbg(VFS, "Invalid msg type %d\n",
-			 iov_iter_type(&msg->msg_iter));
-		rc = -EINVAL;
-	}
-
-out:
-	/* SMBDirect will read it all or nothing */
-	if (rc > 0)
-		msg->msg_iter.count = 0;
-	return rc;
-}
-
-/*
  * Send data to transport
  * Each rqst is transported as a SMBDirect payload
  * rqst: the data to write
···
 		klen += rqst->rq_iov[i].iov_len;
 	iov_iter_kvec(&iter, ITER_SOURCE, rqst->rq_iov, rqst->rq_nvec, klen);
 
-	rc = smbd_post_send_iter(info, &iter, &remaining_data_length);
+	rc = smbd_post_send_full_iter(info, &iter, &remaining_data_length);
 	if (rc < 0)
 		break;
 
 	if (iov_iter_count(&rqst->rq_iter) > 0) {
 		/* And then the data pages if there are any */
-		rc = smbd_post_send_iter(info, &rqst->rq_iter,
-					 &remaining_data_length);
+		rc = smbd_post_send_full_iter(info, &rqst->rq_iter,
+					      &remaining_data_length);
 		if (rc < 0)
 			break;
 	}
···
 		size_t fsize = folioq_folio_size(folioq, slot);
 
 		if (offset < fsize) {
-			size_t part = umin(maxsize - ret, fsize - offset);
+			size_t part = umin(maxsize, fsize - offset);
 
 			if (!smb_set_sge(rdma, folio_page(folio, 0), offset, part))
 				return -EIO;
 
 			offset += part;
 			ret += part;
+			maxsize -= part;
 		}
 
 		if (offset >= fsize) {
···
 				slot = 0;
 			}
 		}
-	} while (rdma->nr_sge < rdma->max_sge || maxsize > 0);
+	} while (rdma->nr_sge < rdma->max_sge && maxsize > 0);
 
 	iter->folioq = folioq;
 	iter->folioq_slot = slot;
···
  *
  * This delegates to may_use_simd(), except that this also returns false if SIMD
  * in crypto code has been temporarily disabled on this CPU by the crypto
- * self-tests, in order to test the no-SIMD fallback code.
+ * self-tests, in order to test the no-SIMD fallback code. This override is
+ * currently limited to configurations where the "full" self-tests are enabled,
+ * because it might be a bit too invasive to be part of the "fast" self-tests.
  */
-#ifdef CONFIG_CRYPTO_SELFTESTS
+#ifdef CONFIG_CRYPTO_SELFTESTS_FULL
 DECLARE_PER_CPU(bool, crypto_simd_disabled_for_test);
 #define crypto_simd_usable() \
 	(may_use_simd() && !this_cpu_read(crypto_simd_disabled_for_test))
···
 			      deregister_mtd_parser)
 
 int mtd_add_partition(struct mtd_info *master, const char *name,
-		      long long offset, long long length, struct mtd_info **part);
+		      long long offset, long long length);
 int mtd_del_partition(struct mtd_info *master, int partno);
 uint64_t mtd_get_device_size(const struct mtd_info *mtd);
···
 	unsigned long	size;
 };
 
-/**
- * enum perf_event_state - the states of an event:
+/*
+ * The normal states are:
+ *
+ *            ACTIVE    --.
+ *               ^        |
+ *               |        |
+ *      sched_{in,out}()  |
+ *               |        |
+ *               v        |
+ *     ,---> INACTIVE  --+ <-.
+ *     |                  |   |
+ *     |        {dis,en}able()
+ *  sched_in()  |         |
+ *     |        OFF <--'  --+
+ *     |                      |
+ *     `---> ERROR ------'
+ *
+ * That is:
+ *
+ * sched_in:  INACTIVE          -> {ACTIVE,ERROR}
+ * sched_out: ACTIVE            -> INACTIVE
+ * disable:   {ACTIVE,INACTIVE} -> OFF
+ * enable:    {OFF,ERROR}       -> INACTIVE
+ *
+ * Where {OFF,ERROR} are disabled states.
+ *
+ * Then we have the {EXIT,REVOKED,DEAD} states which are various shades of
+ * defunct events:
+ *
+ * - EXIT means the task that the event was assigned to died, but child events
+ *   still live, and further children can still be created. But the event
+ *   itself will never be active again. It can only transition to
+ *   {REVOKED,DEAD};
+ *
+ * - REVOKED means the PMU the event was associated with is gone; all
+ *   functionality is stopped but the event is still alive. Can only
+ *   transition to DEAD;
+ *
+ * - DEAD event really is DYING tearing down state and freeing bits.
+ *
  */
 enum perf_event_state {
 	PERF_EVENT_STATE_DEAD = -5,
include/linux/resctrl.h | +2 -2
···
 /**
  * struct rdt_mon_domain - group of CPUs sharing a resctrl monitor resource
  * @hdr:		common header for different domain types
- * @ci:			cache info for this domain
+ * @ci_id:		cache info id for this domain
  * @rmid_busy_llc:	bitmap of which limbo RMIDs are above threshold
  * @mbm_total:		saved state for MBM total bandwidth
  * @mbm_local:		saved state for MBM local bandwidth
···
  */
 struct rdt_mon_domain {
 	struct rdt_domain_hdr		hdr;
-	struct cacheinfo		*ci;
+	unsigned int			ci_id;
 	unsigned long			*rmid_busy_llc;
 	struct mbm_state		*mbm_total;
 	struct mbm_state		*mbm_local;
···
  *   token, rem_id.
  * @MPTCP_EVENT_SUB_ESTABLISHED: A new subflow has been established. 'error'
  *   should not be set. Attributes: token, family, loc_id, rem_id, saddr4 |
- *   saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error].
+ *   saddr6, daddr4 | daddr6, sport, dport, backup, if-idx [, error].
  * @MPTCP_EVENT_SUB_CLOSED: A subflow has been closed. An error (copy of
  *   sk_err) could be set if an error has been detected for this subflow.
  *   Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 |
- *   daddr6, sport, dport, backup, if_idx [, error].
+ *   daddr6, sport, dport, backup, if-idx [, error].
  * @MPTCP_EVENT_SUB_PRIORITY: The priority of a subflow has changed. 'error'
  *   should not be set. Attributes: token, family, loc_id, rem_id, saddr4 |
- *   saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error].
+ *   saddr6, daddr4 | daddr6, sport, dport, backup, if-idx [, error].
  * @MPTCP_EVENT_LISTENER_CREATED: A new PM listener is created. Attributes:
  *   family, sport, saddr4 | saddr6.
  * @MPTCP_EVENT_LISTENER_CLOSED: A PM listener is closed. Attributes: family,
include/uapi/linux/ublk_cmd.h | +26 -6
···
 #define UBLKSRV_IO_BUF_TOTAL_SIZE	(1ULL << UBLKSRV_IO_BUF_TOTAL_BITS)
 
 /*
- * zero copy requires 4k block size, and can remap ublk driver's io
- * request into ublksrv's vm space
+ * ublk server can register data buffers for incoming I/O requests with a sparse
+ * io_uring buffer table. The request buffer can then be used as the data buffer
+ * for io_uring operations via the fixed buffer index.
+ * Note that the ublk server can never directly access the request data memory.
+ *
+ * To use this feature, the ublk server must first register a sparse buffer
+ * table on an io_uring instance.
+ * When an incoming ublk request is received, the ublk server submits a
+ * UBLK_U_IO_REGISTER_IO_BUF command to that io_uring instance. The
+ * ublksrv_io_cmd's q_id and tag specify the request whose buffer to register
+ * and addr is the index in the io_uring's buffer table to install the buffer.
+ * SQEs can now be submitted to the io_uring to read/write the request's buffer
+ * by enabling fixed buffers (e.g. using IORING_OP_{READ,WRITE}_FIXED or
+ * IORING_URING_CMD_FIXED) and passing the registered buffer index in buf_index.
+ * Once the last io_uring operation using the request's buffer has completed,
+ * the ublk server submits a UBLK_U_IO_UNREGISTER_IO_BUF command with q_id, tag,
+ * and addr again specifying the request buffer to unregister.
+ * The ublk request is completed when its buffer is unregistered from all
+ * io_uring instances and the ublk server issues UBLK_U_IO_COMMIT_AND_FETCH_REQ.
+ *
+ * Not available for UBLK_F_UNPRIVILEGED_DEV, as a ublk server can leak
+ * uninitialized kernel memory by not reading into the full request buffer.
  */
 #define UBLK_F_SUPPORT_ZERO_COPY	(1ULL << 0)
 
···
 						       __u64 sqe_addr)
 {
 	struct ublk_auto_buf_reg reg = {
-		.index = sqe_addr & 0xffff,
-		.flags = (sqe_addr >> 16) & 0xff,
-		.reserved0 = (sqe_addr >> 24) & 0xff,
-		.reserved1 = sqe_addr >> 32,
+		.index = (__u16)sqe_addr,
+		.flags = (__u8)(sqe_addr >> 16),
+		.reserved0 = (__u8)(sqe_addr >> 24),
+		.reserved1 = (__u32)(sqe_addr >> 32),
 	};
 
 	return reg;
include/uapi/linux/vm_sockets.h | +4
···
 #ifndef _UAPI_VM_SOCKETS_H
 #define _UAPI_VM_SOCKETS_H
 
+#ifndef __KERNEL__
+#include <sys/socket.h>		/* for struct sockaddr and sa_family_t */
+#endif
+
 #include <linux/socket.h>
 #include <linux/types.h>
io_uring/io-wq.c | +3 -1
···
 	atomic_set(&wq->worker_refs, 1);
 	init_completion(&wq->worker_done);
 	ret = cpuhp_state_add_instance_nocalls(io_wq_online, &wq->cpuhp_node);
-	if (ret)
+	if (ret) {
+		put_task_struct(wq->task);
 		goto err;
+	}
 
 	return wq;
 err:
io_uring/io_uring.h | -2
···
 struct llist_node *tctx_task_work_run(struct io_uring_task *tctx, unsigned int max_entries, unsigned int *count);
 void tctx_task_work(struct callback_head *cb);
 __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd);
-int io_uring_alloc_task_context(struct task_struct *task,
-				struct io_ring_ctx *ctx);
 
 int io_ring_add_registered_file(struct io_uring_task *tctx, struct file *file,
 				int start, int end);
io_uring/kbuf.c | +1
···
 		if (len > arg->max_len) {
 			len = arg->max_len;
 			if (!(bl->flags & IOBL_INC)) {
+				arg->partial_map = 1;
 				if (iov != arg->iovs)
 					break;
 				buf->len = len;
io_uring/kbuf.h | +2 -1
···
 	size_t max_len;
 	unsigned short nr_iovs;
 	unsigned short mode;
-	unsigned buf_group;
+	unsigned short buf_group;
+	unsigned short partial_map;
 };
 
 void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
io_uring/net.c | +22 -14
···
 	u16				flags;
 	/* initialised and used only by !msg send variants */
 	u16				buf_group;
-	bool				retry;
+	unsigned short			retry_flags;
 	void __user			*msg_control;
 	/* used only for send zerocopy */
 	struct io_kiocb			*notif;
+};
+
+enum sr_retry_flags {
+	IO_SR_MSG_RETRY		= 1,
+	IO_SR_MSG_PARTIAL_MAP	= 2,
 };
 
 /*
···
 
 	req->flags &= ~REQ_F_BL_EMPTY;
 	sr->done_io = 0;
-	sr->retry = false;
+	sr->retry_flags = 0;
 	sr->len = 0; /* get from the provided buffer */
 }
 
···
 	struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
 
 	sr->done_io = 0;
-	sr->retry = false;
+	sr->retry_flags = 0;
 	sr->len = READ_ONCE(sqe->len);
 	sr->flags = READ_ONCE(sqe->ioprio);
 	if (sr->flags & ~SENDMSG_FLAGS)
···
 	struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
 
 	sr->done_io = 0;
-	sr->retry = false;
+	sr->retry_flags = 0;
 
 	if (unlikely(sqe->file_index || sqe->addr2))
 		return -EINVAL;
···
 	if (sr->flags & IORING_RECVSEND_BUNDLE) {
 		size_t this_ret = *ret - sr->done_io;
 
-		cflags |= io_put_kbufs(req, *ret, io_bundle_nbufs(kmsg, this_ret),
+		cflags |= io_put_kbufs(req, this_ret, io_bundle_nbufs(kmsg, this_ret),
 				      issue_flags);
-		if (sr->retry)
+		if (sr->retry_flags & IO_SR_MSG_RETRY)
 			cflags = req->cqe.flags | (cflags & CQE_F_MASK);
 		/* bundle with no more immediate buffers, we're done */
 		if (req->flags & REQ_F_BL_EMPTY)
···
 		 * If more is available AND it was a full transfer, retry and
 		 * append to this one
 		 */
-		if (!sr->retry && kmsg->msg.msg_inq > 1 && this_ret > 0 &&
+		if (!sr->retry_flags && kmsg->msg.msg_inq > 1 && this_ret > 0 &&
 		    !iov_iter_count(&kmsg->msg.msg_iter)) {
 			req->cqe.flags = cflags & ~CQE_F_MASK;
 			sr->len = kmsg->msg.msg_inq;
 			sr->done_io += this_ret;
-			sr->retry = true;
+			sr->retry_flags |= IO_SR_MSG_RETRY;
 			return false;
 		}
 	} else {
···
 		if (unlikely(ret < 0))
 			return ret;
 
+		if (arg.iovs != &kmsg->fast_iov && arg.iovs != kmsg->vec.iovec) {
+			kmsg->vec.nr = ret;
+			kmsg->vec.iovec = arg.iovs;
+			req->flags |= REQ_F_NEED_CLEANUP;
+		}
+		if (arg.partial_map)
+			sr->retry_flags |= IO_SR_MSG_PARTIAL_MAP;
+
 		/* special case 1 vec, can be a fast path */
 		if (ret == 1) {
 			sr->buf = arg.iovs[0].iov_base;
···
 		}
 		iov_iter_init(&kmsg->msg.msg_iter, ITER_DEST, arg.iovs, ret,
 				arg.out_len);
-		if (arg.iovs != &kmsg->fast_iov && arg.iovs != kmsg->vec.iovec) {
-			kmsg->vec.nr = ret;
-			kmsg->vec.iovec = arg.iovs;
-			req->flags |= REQ_F_NEED_CLEANUP;
-		}
 	} else {
 		void __user *buf;
 
···
 	int ret;
 
 	zc->done_io = 0;
-	zc->retry = false;
+	zc->retry_flags = 0;
 
 	if (unlikely(READ_ONCE(sqe->__pad2[0]) || READ_ONCE(sqe->addr3)))
 		return -EINVAL;
···
 	struct io_mapped_ubuf *imu = priv;
 	unsigned int i;
 
-	for (i = 0; i < imu->nr_bvecs; i++)
-		unpin_user_page(imu->bvec[i].bv_page);
+	for (i = 0; i < imu->nr_bvecs; i++) {
+		struct folio *folio = page_folio(imu->bvec[i].bv_page);
+
+		unpin_user_folio(folio, 1);
+	}
 }
 
 static struct io_mapped_ubuf *io_alloc_imu(struct io_ring_ctx *ctx,
···
 
 	data->nr_pages_mid = folio_nr_pages(folio);
 	data->folio_shift = folio_shift(folio);
+	data->first_folio_page_idx = folio_page_idx(folio, page_array[0]);
 
 	/*
 	 * Check if pages are contiguous inside a folio, and all folios have
···
 
 	imu->nr_bvecs = nr_pages;
 	ret = io_buffer_account_pin(ctx, pages, nr_pages, imu, last_hpage);
-	if (ret) {
-		unpin_user_pages(pages, nr_pages);
+	if (ret)
 		goto done;
-	}
 
 	size = iov->iov_len;
 	/* store original address for later verification */
···
 	if (coalesced)
 		imu->folio_shift = data.folio_shift;
 	refcount_set(&imu->refs, 1);
-	off = (unsigned long) iov->iov_base & ((1UL << imu->folio_shift) - 1);
+
+	off = (unsigned long)iov->iov_base & ~PAGE_MASK;
+	if (coalesced)
+		off += data.first_folio_page_idx << PAGE_SHIFT;
+
 	node->buf = imu;
 	ret = 0;
···
 	if (ret) {
 		if (imu)
 			io_free_imu(ctx, imu);
+		if (pages) {
+			for (i = 0; i < nr_pages; i++)
+				unpin_user_folio(page_folio(pages[i]), 1);
+		}
 		io_cache_free(&ctx->node_cache, node);
 		node = ERR_PTR(ret);
 	}
···
 		return -EINVAL;
 	if (check_add_overflow(arg->nr, arg->dst_off, &nbufs))
 		return -EOVERFLOW;
+	if (nbufs > IORING_MAX_REG_BUFFERS)
+		return -EINVAL;
 
 	ret = io_rsrc_data_alloc(&data, max(nbufs, ctx->buf_table.nr));
 	if (ret)
···
 {
 	unsigned long folio_size = 1 << imu->folio_shift;
 	unsigned long folio_mask = folio_size - 1;
-	u64 folio_addr = imu->ubuf & ~folio_mask;
 	struct bio_vec *res_bvec = vec->bvec;
 	size_t total_len = 0;
 	unsigned bvec_idx = 0;
···
 	if (unlikely(check_add_overflow(total_len, iov_len, &total_len)))
 		return -EOVERFLOW;
 
-	/* by using folio address it also accounts for bvec offset */
-	offset = buf_addr - folio_addr;
+	offset = buf_addr - imu->ubuf;
+	/*
+	 * Only the first bvec can have non zero bv_offset, account it
+	 * here and work with full folios below.
+	 */
+	offset += imu->bvec[0].bv_offset;
+
 	src_bvec = imu->bvec + (offset >> imu->folio_shift);
 	offset &= folio_mask;
io_uring/rsrc.h | +1
···
 	unsigned int nr_pages_mid;
 	unsigned int folio_shift;
 	unsigned int nr_folios;
+	unsigned long first_folio_page_idx;
 };
 
 bool io_rsrc_cache_init(struct io_ring_ctx *ctx);
io_uring/sqpoll.c | +2 -4
···
 #include <uapi/linux/io_uring.h>
 
 #include "io_uring.h"
+#include "tctx.h"
 #include "napi.h"
 #include "sqpoll.h"
 
···
 __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
 				struct io_uring_params *p)
 {
-	struct task_struct *task_to_put = NULL;
 	int ret;
 
 	/* Retain compatibility with failing for an invalid attach attempt */
···
 		rcu_assign_pointer(sqd->thread, tsk);
 		mutex_unlock(&sqd->lock);
 
-		task_to_put = get_task_struct(tsk);
+		get_task_struct(tsk);
 		ret = io_uring_alloc_task_context(tsk, ctx);
 		wake_up_new_task(tsk);
 		if (ret)
···
 	complete(&ctx->sq_data->exited);
 err:
 	io_sq_thread_finish(ctx);
-	if (task_to_put)
-		put_task_struct(task_to_put);
 	return ret;
 }
 
io_uring/zcrx.c | +4 -2
···
 	for_each_sgtable_dma_sg(mem->sgt, sg, i)
 		total_size += sg_dma_len(sg);
 
-	if (total_size < off + len)
-		return -EINVAL;
+	if (total_size < off + len) {
+		ret = -EINVAL;
+		goto err;
+	}
 
 	mem->dmabuf_offset = off;
 	mem->size = len;
kernel/Kconfig.kexec | +1
···
 	depends on KEXEC_FILE
 	depends on CRASH_DUMP
 	depends on DM_CRYPT
+	depends on KEYS
 	help
 	  With this option enabled, user space can intereact with
 	  /sys/kernel/config/crash_dm_crypt_keys to make the dm crypt keys
···
	__perf_ctx_unlock(&cpuctx->ctx);
}

+typedef struct {
+	struct perf_cpu_context *cpuctx;
+	struct perf_event_context *ctx;
+} class_perf_ctx_lock_t;
+
+static inline void class_perf_ctx_lock_destructor(class_perf_ctx_lock_t *_T)
+{ perf_ctx_unlock(_T->cpuctx, _T->ctx); }
+
+static inline class_perf_ctx_lock_t
+class_perf_ctx_lock_constructor(struct perf_cpu_context *cpuctx,
+				struct perf_event_context *ctx)
+{ perf_ctx_lock(cpuctx, ctx); return (class_perf_ctx_lock_t){ cpuctx, ctx }; }
+
#define TASK_TOMBSTONE ((void *)-1L)

static bool is_kernel_event(struct perf_event *event)
···
	if (READ_ONCE(cpuctx->cgrp) == cgrp)
		return;

-	perf_ctx_lock(cpuctx, cpuctx->task_ctx);
+	guard(perf_ctx_lock)(cpuctx, cpuctx->task_ctx);
+	/*
+	 * Re-check, could've raced vs perf_remove_from_context().
+	 */
+	if (READ_ONCE(cpuctx->cgrp) == NULL)
+		return;
+
	perf_ctx_disable(&cpuctx->ctx, true);

	ctx_sched_out(&cpuctx->ctx, NULL, EVENT_ALL|EVENT_CGROUP);
···
	ctx_sched_in(&cpuctx->ctx, NULL, EVENT_ALL|EVENT_CGROUP);

	perf_ctx_enable(&cpuctx->ctx, true);
-	perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
}

static int perf_cgroup_ensure_storage(struct perf_event *event,
···
	if (event->group_leader == event)
		del_event_from_groups(event, ctx);

-	/*
-	 * If event was in error state, then keep it
-	 * that way, otherwise bogus counts will be
-	 * returned on read(). The only way to get out
-	 * of error state is by explicit re-enabling
-	 * of the event
-	 */
-	if (event->state > PERF_EVENT_STATE_OFF) {
-		perf_cgroup_event_disable(event, ctx);
-		perf_event_set_state(event, PERF_EVENT_STATE_OFF);
-	}
-
	ctx->generation++;
	event->pmu_ctx->nr_events--;
}
···
}

static void put_event(struct perf_event *event);
-static void event_sched_out(struct perf_event *event,
-			    struct perf_event_context *ctx);
+static void __event_disable(struct perf_event *event,
+			    struct perf_event_context *ctx,
+			    enum perf_event_state state);

static void perf_put_aux_event(struct perf_event *event)
{
···
		 * state so that we don't try to schedule it again. Note
		 * that perf_event_enable() will clear the ERROR status.
		 */
-		event_sched_out(iter, ctx);
-		perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
+		__event_disable(iter, ctx, PERF_EVENT_STATE_ERROR);
	}
}
···
		&event->pmu_ctx->flexible_active;
}

-/*
- * Events that have PERF_EV_CAP_SIBLING require being part of a group and
- * cannot exist on their own, schedule them out and move them into the ERROR
- * state. Also see _perf_event_enable(), it will not be able to recover
- * this ERROR state.
- */
-static inline void perf_remove_sibling_event(struct perf_event *event)
-{
-	event_sched_out(event, event->ctx);
-	perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
-}
-
static void perf_group_detach(struct perf_event *event)
{
	struct perf_event *leader = event->group_leader;
···
	 */
	list_for_each_entry_safe(sibling, tmp, &event->sibling_list, sibling_list) {

+		/*
+		 * Events that have PERF_EV_CAP_SIBLING require being part of
+		 * a group and cannot exist on their own, schedule them out
+		 * and move them into the ERROR state. Also see
+		 * _perf_event_enable(), it will not be able to recover this
+		 * ERROR state.
+		 */
		if (sibling->event_caps & PERF_EV_CAP_SIBLING)
-			perf_remove_sibling_event(sibling);
+			__event_disable(sibling, ctx, PERF_EVENT_STATE_ERROR);

		sibling->group_leader = sibling;
		list_del_init(&sibling->sibling_list);
···
		state = PERF_EVENT_STATE_EXIT;
	if (flags & DETACH_REVOKE)
		state = PERF_EVENT_STATE_REVOKED;
-	if (flags & DETACH_DEAD) {
-		event->pending_disable = 1;
+	if (flags & DETACH_DEAD)
		state = PERF_EVENT_STATE_DEAD;
-	}
+
	event_sched_out(event, ctx);
+
+	if (event->state > PERF_EVENT_STATE_OFF)
+		perf_cgroup_event_disable(event, ctx);
+
	perf_event_set_state(event, min(event->state, state));

	if (flags & DETACH_GROUP)
···
	event_function_call(event, __perf_remove_from_context, (void *)flags);
}

+static void __event_disable(struct perf_event *event,
+			    struct perf_event_context *ctx,
+			    enum perf_event_state state)
+{
+	event_sched_out(event, ctx);
+	perf_cgroup_event_disable(event, ctx);
+	perf_event_set_state(event, state);
+}
+
/*
 * Cross CPU call to disable a performance event
 */
···
	perf_pmu_disable(event->pmu_ctx->pmu);
	ctx_time_update_event(ctx, event);

+	/*
+	 * When disabling a group leader, the whole group becomes ineligible
+	 * to run, so schedule out the full group.
+	 */
	if (event == event->group_leader)
		group_sched_out(event, ctx);
-	else
-		event_sched_out(event, ctx);

-	perf_event_set_state(event, PERF_EVENT_STATE_OFF);
-	perf_cgroup_event_disable(event, ctx);
+	/*
+	 * But only mark the leader OFF; the siblings will remain
+	 * INACTIVE.
+	 */
+	__event_disable(event, ctx, PERF_EVENT_STATE_OFF);

	perf_pmu_enable(event->pmu_ctx->pmu);
}
···

static void perf_event_throttle(struct perf_event *event)
{
-	event->pmu->stop(event, 0);
	event->hw.interrupts = MAX_INTERRUPTS;
+	event->pmu->stop(event, 0);
	if (event == event->group_leader)
		perf_log_throttle(event, 0);
}
···
 *                CPU-A                     CPU-B
 *
 * perf_event_disable_inatomic()
- *   @pending_disable = CPU-A;
+ *   @pending_disable = 1;
 *   irq_work_queue();
 *
 *                               sched-out
- *                                 @pending_disable = -1;
+ *                                 @pending_disable = 0;
 *
 *                               sched-in
 *                               perf_event_disable_inatomic()
- *                                 @pending_disable = CPU-B;
+ *                                 @pending_disable = 1;
 *                                 irq_work_queue(); // FAILS
 *
 * irq_work_run()
···

	/* No regs, no stack pointer, no dump. */
	if (!regs)
+		return 0;
+
+	/* No mm, no stack, no dump. */
+	if (!current->mm)
		return 0;

	/*
···
	bool crosstask = event->ctx->task && event->ctx->task != current;
	const u32 max_stack = event->attr.sample_max_stack;
	struct perf_callchain_entry *callchain;
+
+	if (!current->mm)
+		user = false;

	if (!kernel && !user)
		return &__empty_callchain;
···
{
	struct hw_perf_event *hwc = &event->hw;

-	if (is_sampling_event(event)) {
+	/*
+	 * The throttle can be triggered in the hrtimer handler.
+	 * The HRTIMER_NORESTART should be used to stop the timer,
+	 * rather than hrtimer_cancel(). See perf_swevent_hrtimer()
+	 */
+	if (is_sampling_event(event) && (hwc->interrupts != MAX_INTERRUPTS)) {
		ktime_t remaining = hrtimer_get_remaining(&hwc->hrtimer);
		local64_set(&hwc->period_left, ktime_to_ns(remaining));

···
static void cpu_clock_event_stop(struct perf_event *event, int flags)
{
	perf_swevent_cancel_hrtimer(event);
-	cpu_clock_event_update(event);
+	if (flags & PERF_EF_UPDATE)
+		cpu_clock_event_update(event);
}

static int cpu_clock_event_add(struct perf_event *event, int flags)
···
static void task_clock_event_stop(struct perf_event *event, int flags)
{
	perf_swevent_cancel_hrtimer(event);
-	task_clock_event_update(event, event->ctx->time);
+	if (flags & PERF_EF_UPDATE)
+		task_clock_event_update(event, event->ctx->time);
}

static int task_clock_event_add(struct perf_event *event, int flags)
+2-2
kernel/events/ring_buffer.c
···
	 * store that will be enabled on successful return
	 */
	if (!handle->size) { /* A, matches D */
-		event->pending_disable = smp_processor_id();
+		perf_event_disable_inatomic(handle->event);
		perf_output_wakeup(handle);
		WRITE_ONCE(rb->aux_nest, 0);
		goto err_put;
···

	if (wakeup) {
		if (handle->aux_flags & PERF_AUX_FLAG_TRUNCATED)
-			handle->event->pending_disable = smp_processor_id();
+			perf_event_disable_inatomic(handle->event);
		perf_output_wakeup(handle);
	}
+9-8
kernel/exit.c
···
	taskstats_exit(tsk, group_dead);
	trace_sched_process_exit(tsk, group_dead);

+	/*
+	 * Since sampling can touch ->mm, make sure to stop everything before we
+	 * tear it down.
+	 *
+	 * Also flushes inherited counters to the parent - before the parent
+	 * gets woken up by child-exit notifications.
+	 */
+	perf_event_exit_task(tsk);
+
	exit_mm();

	if (group_dead)
···
	exit_task_namespaces(tsk);
	exit_task_work(tsk);
	exit_thread(tsk);
-
-	/*
-	 * Flush inherited counters to the parent - before the parent
-	 * gets woken up by child-exit notifications.
-	 *
-	 * because of cgroup mode, must be called before cgroup_exit()
-	 */
-	perf_event_exit_task(tsk);

	sched_autogroup_exit_task(tsk);
	cgroup_exit(tsk);
+12-2
kernel/futex/core.c
···
		if (futex_get_value(&node, naddr))
			return -EFAULT;

-		if (node != FUTEX_NO_NODE &&
-		    (node >= MAX_NUMNODES || !node_possible(node)))
+		if ((node != FUTEX_NO_NODE) &&
+		    ((unsigned int)node >= MAX_NUMNODES || !node_possible(node)))
			return -EINVAL;
	}
···
	mm->futex_phash_new = NULL;

	if (fph) {
+		if (cur && (!cur->hash_mask || cur->immutable)) {
+			/*
+			 * If two threads simultaneously request the global
+			 * hash then the first one performs the switch,
+			 * the second one returns here.
+			 */
+			free = fph;
+			mm->futex_phash_new = new;
+			return -EBUSY;
+		}
		if (cur && !new) {
			/*
			 * If we have an existing hash, but do not yet have
+8
kernel/irq/chip.c
···

void irq_startup_managed(struct irq_desc *desc)
{
+	struct irq_data *d = irq_desc_get_irq_data(desc);
+
+	/*
+	 * Clear managed-shutdown flag, so we don't repeat managed-startup for
+	 * multiple hotplugs, and cause imbalanced disable depth.
+	 */
+	irqd_clr_managed_shutdown(d);
+
	/*
	 * Only start it up when the disable depth is 1, so that a disable,
	 * hotunplug, hotplug sequence does not end up enabling it during
-7
kernel/irq/cpuhotplug.c
···
	    !irq_data_get_irq_chip(data) || !cpumask_test_cpu(cpu, affinity))
		return;

-	/*
-	 * Don't restore suspended interrupts here when a system comes back
-	 * from S3. They are reenabled via resume_device_irqs().
-	 */
-	if (desc->istate & IRQS_SUSPENDED)
-		return;
-
	if (irqd_is_managed_and_shutdown(data))
		irq_startup_managed(desc);
···
}

/* almost as free_reserved_page(), just don't free the page */
-static void kho_restore_page(struct page *page)
+static void kho_restore_page(struct page *page, unsigned int order)
{
-	ClearPageReserved(page);
-	init_page_count(page);
-	adjust_managed_page_count(page, 1);
+	unsigned int nr_pages = (1 << order);
+
+	/* Head page gets refcount of 1. */
+	set_page_count(page, 1);
+
+	/* For higher order folios, tail pages get a page count of zero. */
+	for (unsigned int i = 1; i < nr_pages; i++)
+		set_page_count(page + i, 0);
+
+	if (order > 0)
+		prep_compound_page(page, order);
+
+	adjust_managed_page_count(page, nr_pages);
}

/**
···
		return NULL;

	order = page->private;
-	if (order) {
-		if (order > MAX_PAGE_ORDER)
-			return NULL;
+	if (order > MAX_PAGE_ORDER)
+		return NULL;

-		prep_compound_page(page, order);
-	} else {
-		kho_restore_page(page);
-	}
-
+	kho_restore_page(page, order);
	return page_folio(page);
}
EXPORT_SYMBOL_GPL(kho_restore_folio);
+4
kernel/rcu/tree.c
···
	/* Misaligned rcu_head! */
	WARN_ON_ONCE((unsigned long)head & (sizeof(void *) - 1));

+	/* Avoid NULL dereference if callback is NULL. */
+	if (WARN_ON_ONCE(!func))
+		return;
+
	if (debug_rcu_head_queue(head)) {
		/*
		 * Probable double call_rcu(), so leak the callback.
···
	return 0;
}

+static struct tracer graph_trace;
+
static int ftrace_graph_trace_args(struct trace_array *tr, int set)
{
	trace_func_graph_ent_t entry;
+
+	/* Do nothing if the current tracer is not this tracer */
+	if (tr->current_trace != &graph_trace)
+		return 0;

	if (set)
		entry = trace_graph_entry_args;
···
#include <linux/seq_buf.h>
#include <linux/seq_file.h>
#include <linux/vmalloc.h>
+#include <linux/kmemleak.h>

#define ALLOCINFO_FILE_NAME "allocinfo"
#define MODULE_ALLOC_TAG_VMAP_SIZE (100000UL * sizeof(struct alloc_tag))
···
				mod->name);
			return -ENOMEM;
		}
-	}

+		/*
+		 * Avoid a kmemleak false positive. The pointer to the counters is stored
+		 * in the alloc_tag section of the module and cannot be directly accessed.
+		 */
+		kmemleak_ignore_percpu(tag->counters);
+	}
	return 0;
}
+5-1
lib/crypto/Makefile
···
libcurve25519-generic-y := curve25519-fiat32.o
libcurve25519-generic-$(CONFIG_ARCH_SUPPORTS_INT128) := curve25519-hacl64.o
libcurve25519-generic-y += curve25519-generic.o
+# clang versions prior to 18 may blow out the stack with KASAN
+ifeq ($(call clang-min-version, 180000),)
+KASAN_SANITIZE_curve25519-hacl64.o := n
+endif

obj-$(CONFIG_CRYPTO_LIB_CURVE25519) += libcurve25519.o
libcurve25519-y += curve25519.o
···
obj-$(CONFIG_MPILIB) += mpi/

-obj-$(CONFIG_CRYPTO_SELFTESTS) += simd.o
+obj-$(CONFIG_CRYPTO_SELFTESTS_FULL) += simd.o

obj-$(CONFIG_CRYPTO_LIB_SM3) += libsm3.o
libsm3-y := sm3.o
···
/*
 * Returns the number of collected folios. Return value is always >= 0.
 */
-static void collect_longterm_unpinnable_folios(
+static unsigned long collect_longterm_unpinnable_folios(
		struct list_head *movable_folio_list,
		struct pages_or_folios *pofs)
{
+	unsigned long i, collected = 0;
	struct folio *prev_folio = NULL;
	bool drain_allow = true;
-	unsigned long i;

	for (i = 0; i < pofs->nr_entries; i++) {
		struct folio *folio = pofs_get_folio(pofs, i);
···

		if (folio_is_longterm_pinnable(folio))
			continue;
+
+		collected++;

		if (folio_is_device_coherent(folio))
			continue;
···
				    NR_ISOLATED_ANON + folio_is_file_lru(folio),
				    folio_nr_pages(folio));
	}
+
+	return collected;
}

/*
···
check_and_migrate_movable_pages_or_folios(struct pages_or_folios *pofs)
{
	LIST_HEAD(movable_folio_list);
+	unsigned long collected;

-	collect_longterm_unpinnable_folios(&movable_folio_list, pofs);
-	if (list_empty(&movable_folio_list))
+	collected = collect_longterm_unpinnable_folios(&movable_folio_list,
+						       pofs);
+	if (!collected)
		return 0;

	return migrate_longterm_unpinnable_folios(&movable_folio_list, pofs);
+17-37
mm/hugetlb.c
···
/*
 * alloc_and_dissolve_hugetlb_folio - Allocate a new folio and dissolve
 * the old one
- * @h: struct hstate old page belongs to
 * @old_folio: Old folio to dissolve
 * @list: List to isolate the page in case we need to
 * Returns 0 on success, otherwise negated error.
 */
-static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
-		struct folio *old_folio, struct list_head *list)
+static int alloc_and_dissolve_hugetlb_folio(struct folio *old_folio,
+		struct list_head *list)
{
-	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
+	gfp_t gfp_mask;
+	struct hstate *h;
	int nid = folio_nid(old_folio);
	struct folio *new_folio = NULL;
	int ret = 0;

retry:
+	/*
+	 * The old_folio might have been dissolved from under our feet, so make sure
+	 * to carefully check the state under the lock.
+	 */
	spin_lock_irq(&hugetlb_lock);
	if (!folio_test_hugetlb(old_folio)) {
		/*
···
		cond_resched();
		goto retry;
	} else {
+		h = folio_hstate(old_folio);
		if (!new_folio) {
			spin_unlock_irq(&hugetlb_lock);
+			gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
			new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid,
							      NULL, NULL);
			if (!new_folio)
···

int isolate_or_dissolve_huge_folio(struct folio *folio, struct list_head *list)
{
-	struct hstate *h;
	int ret = -EBUSY;

-	/*
-	 * The page might have been dissolved from under our feet, so make sure
-	 * to carefully check the state under the lock.
-	 * Return success when racing as if we dissolved the page ourselves.
-	 */
-	spin_lock_irq(&hugetlb_lock);
-	if (folio_test_hugetlb(folio)) {
-		h = folio_hstate(folio);
-	} else {
-		spin_unlock_irq(&hugetlb_lock);
+	/* Not to disrupt normal path by vainly holding hugetlb_lock */
+	if (!folio_test_hugetlb(folio))
		return 0;
-	}
-	spin_unlock_irq(&hugetlb_lock);

	/*
	 * Fence off gigantic pages as there is a cyclic dependency between
	 * alloc_contig_range and them. Return -ENOMEM as this has the effect
	 * of bailing out right away without further retrying.
	 */
-	if (hstate_is_gigantic(h))
+	if (folio_order(folio) > MAX_PAGE_ORDER)
		return -ENOMEM;

	if (folio_ref_count(folio) && folio_isolate_hugetlb(folio, list))
		ret = 0;
	else if (!folio_ref_count(folio))
-		ret = alloc_and_dissolve_hugetlb_folio(h, folio, list);
+		ret = alloc_and_dissolve_hugetlb_folio(folio, list);

	return ret;
}
···
 */
int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn)
{
-	struct hstate *h;
	struct folio *folio;
	int ret = 0;

···
	while (start_pfn < end_pfn) {
		folio = pfn_folio(start_pfn);

-		/*
-		 * The folio might have been dissolved from under our feet, so make sure
-		 * to carefully check the state under the lock.
-		 */
-		spin_lock_irq(&hugetlb_lock);
-		if (folio_test_hugetlb(folio)) {
-			h = folio_hstate(folio);
-		} else {
-			spin_unlock_irq(&hugetlb_lock);
-			start_pfn++;
-			continue;
-		}
-		spin_unlock_irq(&hugetlb_lock);
-
-		if (!folio_ref_count(folio)) {
-			ret = alloc_and_dissolve_hugetlb_folio(h, folio,
-							       &isolate_list);
+		/* Not to disrupt normal path by vainly holding hugetlb_lock */
+		if (folio_test_hugetlb(folio) && !folio_ref_count(folio)) {
+			ret = alloc_and_dissolve_hugetlb_folio(folio, &isolate_list);
			if (ret)
				break;
+14
mm/kmemleak.c
···
EXPORT_SYMBOL(kmemleak_transient_leak);

/**
+ * kmemleak_ignore_percpu - similar to kmemleak_ignore but taking a percpu
+ *			    address argument
+ * @ptr:	percpu address of the object
+ */
+void __ref kmemleak_ignore_percpu(const void __percpu *ptr)
+{
+	pr_debug("%s(0x%px)\n", __func__, ptr);
+
+	if (kmemleak_enabled && ptr && !IS_ERR_PCPU(ptr))
+		make_black_object((unsigned long)ptr, OBJECT_PERCPU);
+}
+EXPORT_SYMBOL_GPL(kmemleak_ignore_percpu);
+
+/**
 * kmemleak_ignore - ignore an allocated object
 * @ptr:	pointer to beginning of the object
 *
-20
mm/memory.c
···
}

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-static inline int non_swapcache_batch(swp_entry_t entry, int max_nr)
-{
-	struct swap_info_struct *si = swp_swap_info(entry);
-	pgoff_t offset = swp_offset(entry);
-	int i;
-
-	/*
-	 * While allocating a large folio and doing swap_read_folio, which is
-	 * the case the being faulted pte doesn't have swapcache. We need to
-	 * ensure all PTEs have no cache as well, otherwise, we might go to
-	 * swap devices while the content is in swapcache.
-	 */
-	for (i = 0; i < max_nr; i++) {
-		if ((si->swap_map[offset + i] & SWAP_HAS_CACHE))
-			return i;
-	}
-
-	return i;
-}
-
/*
 * Check if the PTEs within a range are contiguous swap entries
 * and have consistent swapcache, zeromap.
+5-1
mm/shmem.c
···
	folio = swap_cache_get_folio(swap, NULL, 0);
	order = xa_get_order(&mapping->i_pages, index);
	if (!folio) {
+		int nr_pages = 1 << order;
		bool fallback_order0 = false;

		/* Or update major stats only when swapin succeeds?? */
···
		 * If uffd is active for the vma, we need per-page fault
		 * fidelity to maintain the uffd semantics, then fallback
		 * to swapin order-0 folio, as well as for zswap case.
+		 * Any existing sub folio in the swap cache also blocks
+		 * mTHP swapin.
		 */
		if (order > 0 && ((vma && unlikely(userfaultfd_armed(vma))) ||
-				  !zswap_never_enabled()))
+				  !zswap_never_enabled() ||
+				  non_swapcache_batch(swap, nr_pages) != nr_pages))
			fallback_order0 = true;

		/* Skip swapcache for synchronous device. */
+23
mm/swap.h
···
	return find_next_bit(sis->zeromap, end, start) - start;
}

+static inline int non_swapcache_batch(swp_entry_t entry, int max_nr)
+{
+	struct swap_info_struct *si = swp_swap_info(entry);
+	pgoff_t offset = swp_offset(entry);
+	int i;
+
+	/*
+	 * While allocating a large folio and doing mTHP swapin, we need to
+	 * ensure all entries are not cached, otherwise, the mTHP folio will
+	 * be in conflict with the folio in swap cache.
+	 */
+	for (i = 0; i < max_nr; i++) {
+		if ((si->swap_map[offset + i] & SWAP_HAS_CACHE))
+			return i;
+	}
+
+	return i;
+}
+
#else /* CONFIG_SWAP */
struct swap_iocb;
static inline void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
···
	return 0;
}

+static inline int non_swapcache_batch(swp_entry_t entry, int max_nr)
+{
+	return 0;
+}
#endif /* CONFIG_SWAP */

/**
+31-2
mm/userfaultfd.c
···
			 pte_t orig_dst_pte, pte_t orig_src_pte,
			 pmd_t *dst_pmd, pmd_t dst_pmdval,
			 spinlock_t *dst_ptl, spinlock_t *src_ptl,
-			 struct folio *src_folio)
+			 struct folio *src_folio,
+			 struct swap_info_struct *si, swp_entry_t entry)
{
+	/*
+	 * Check if the folio still belongs to the target swap entry after
+	 * acquiring the lock. Folio can be freed in the swap cache while
+	 * not locked.
+	 */
+	if (src_folio && unlikely(!folio_test_swapcache(src_folio) ||
+				  entry.val != src_folio->swap.val))
+		return -EAGAIN;
+
	double_pt_lock(dst_ptl, src_ptl);

	if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte,
···
	if (src_folio) {
		folio_move_anon_rmap(src_folio, dst_vma);
		src_folio->index = linear_page_index(dst_vma, dst_addr);
+	} else {
+		/*
+		 * Check if the swap entry is cached after acquiring the src_pte
+		 * lock. Otherwise, we might miss a newly loaded swap cache folio.
+		 *
+		 * Check swap_map directly to minimize overhead, READ_ONCE is sufficient.
+		 * We are trying to catch newly added swap cache, the only possible case is
+		 * when a folio is swapped in and out again staying in swap cache, using the
+		 * same entry before the PTE check above. The PTL is acquired and released
+		 * twice, each time after updating the swap_map's flag. So holding
+		 * the PTL here ensures we see the updated value. False positive is possible,
+		 * e.g. SWP_SYNCHRONOUS_IO swapin may set the flag without touching the
+		 * cache, or during the tiny synchronization window between swap cache and
+		 * swap_map, but it will be gone very quickly, worst result is retry jitters.
+		 */
+		if (READ_ONCE(si->swap_map[swp_offset(entry)]) & SWAP_HAS_CACHE) {
+			double_pt_unlock(dst_ptl, src_ptl);
+			return -EAGAIN;
+		}
	}

	orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte);
···
		}
		err = move_swap_pte(mm, dst_vma, dst_addr, src_addr, dst_pte, src_pte,
				orig_dst_pte, orig_src_pte, dst_pmd, dst_pmdval,
-				dst_ptl, src_ptl, src_folio);
+				dst_ptl, src_ptl, src_folio, si, entry);
	}
out:
+5-6
net/atm/clip.c
···

	pr_debug("\n");

-	if (!clip_devs) {
-		atm_return(vcc, skb->truesize);
-		kfree_skb(skb);
-		return;
-	}
-
	if (!skb) {
		pr_debug("removing VCC %p\n", clip_vcc);
		if (clip_vcc->entry)
···
		return;
	}
	atm_return(vcc, skb->truesize);
+	if (!clip_devs) {
+		kfree_skb(skb);
+		return;
+	}
+
	skb->dev = clip_vcc->entry ? clip_vcc->entry->neigh->dev : clip_devs;
	/* clip_vcc->entry == NULL if we don't have an IP address yet */
	if (!skb->dev) {
+1
net/atm/common.c
···

	skb->dev = NULL; /* for paths shared with net_device interfaces */
	if (!copy_from_iter_full(skb_put(skb, size), size, &m->msg_iter)) {
+		atm_return_tx(vcc, skb);
		kfree_skb(skb);
		error = -EFAULT;
		goto out;
+10-2
net/atm/lec.c
···

/* Device structures */
static struct net_device *dev_lec[MAX_LEC_ITF];
+static DEFINE_MUTEX(lec_mutex);

#if IS_ENABLED(CONFIG_BRIDGE)
static void lec_handle_bridge(struct sk_buff *skb, struct net_device *dev)
···
	int bytes_left;
	struct atmlec_ioc ioc_data;

+	lockdep_assert_held(&lec_mutex);
	/* Lecd must be up in this case */
	bytes_left = copy_from_user(&ioc_data, arg, sizeof(struct atmlec_ioc));
	if (bytes_left != 0)
···

static int lec_mcast_attach(struct atm_vcc *vcc, int arg)
{
+	lockdep_assert_held(&lec_mutex);
	if (arg < 0 || arg >= MAX_LEC_ITF)
		return -EINVAL;
	arg = array_index_nospec(arg, MAX_LEC_ITF);
···
	int i;
	struct lec_priv *priv;

+	lockdep_assert_held(&lec_mutex);
	if (arg < 0)
		arg = 0;
	if (arg >= MAX_LEC_ITF)
···
		snprintf(dev_lec[i]->name, IFNAMSIZ, "lec%d", i);
		if (register_netdev(dev_lec[i])) {
			free_netdev(dev_lec[i]);
+			dev_lec[i] = NULL;
			return -EINVAL;
		}

···
	v = (dev && netdev_priv(dev)) ?
		lec_priv_walk(state, l, netdev_priv(dev)) : NULL;
	if (!v && dev) {
-		dev_put(dev);
		/* Partial state reset for the next time we get called */
		dev = NULL;
	}
···
{
	struct lec_state *state = seq->private;

+	mutex_lock(&lec_mutex);
	state->itf = 0;
	state->dev = NULL;
	state->locked = NULL;
···
	if (state->dev) {
		spin_unlock_irqrestore(&state->locked->lec_arp_lock,
				       state->flags);
-		dev_put(state->dev);
+		state->dev = NULL;
	}
+	mutex_unlock(&lec_mutex);
}

static void *lec_seq_next(struct seq_file *seq, void *v, loff_t *pos)
···
		return -ENOIOCTLCMD;
	}

+	mutex_lock(&lec_mutex);
	switch (cmd) {
	case ATMLEC_CTRL:
		err = lecd_attach(vcc, (int)arg);
···
		break;
	}

+	mutex_unlock(&lec_mutex);
	return err;
}
···

/* Get HCI device by index.
 * Device is held on return. */
-struct hci_dev *hci_dev_get(int index)
+static struct hci_dev *__hci_dev_get(int index, int *srcu_index)
{
	struct hci_dev *hdev = NULL, *d;

···
	list_for_each_entry(d, &hci_dev_list, list) {
		if (d->id == index) {
			hdev = hci_dev_hold(d);
+			if (srcu_index)
+				*srcu_index = srcu_read_lock(&d->srcu);
			break;
		}
	}
	read_unlock(&hci_dev_list_lock);
	return hdev;
+}
+
+struct hci_dev *hci_dev_get(int index)
+{
+	return __hci_dev_get(index, NULL);
+}
+
+static struct hci_dev *hci_dev_get_srcu(int index, int *srcu_index)
+{
+	return __hci_dev_get(index, srcu_index);
+}
+
+static void hci_dev_put_srcu(struct hci_dev *hdev, int srcu_index)
+{
+	srcu_read_unlock(&hdev->srcu, srcu_index);
+	hci_dev_put(hdev);
}

/* ---- Inquiry support ---- */
···
int hci_dev_reset(__u16 dev)
{
	struct hci_dev *hdev;
-	int err;
+	int err, srcu_index;

-	hdev = hci_dev_get(dev);
+	hdev = hci_dev_get_srcu(dev, &srcu_index);
	if (!hdev)
		return -ENODEV;

···
	err = hci_dev_do_reset(hdev);

done:
-	hci_dev_put(hdev);
+	hci_dev_put_srcu(hdev, srcu_index);
	return err;
}
···
	if (!hdev)
		return NULL;

+	if (init_srcu_struct(&hdev->srcu)) {
+		kfree(hdev);
+		return NULL;
+	}
+
	hdev->pkt_type = (HCI_DM1 | HCI_DH1 | HCI_HV1);
	hdev->esco_type = (ESCO_HV1);
	hdev->link_mode = (HCI_LM_ACCEPT);
···
	write_lock(&hci_dev_list_lock);
	list_del(&hdev->list);
	write_unlock(&hci_dev_list_lock);
+
+	synchronize_srcu(&hdev->srcu);
+	cleanup_srcu_struct(&hdev->srcu);

	disable_work_sync(&hdev->rx_work);
	disable_work_sync(&hdev->cmd_work);
+8-1
net/bluetooth/l2cap_core.c
···
	struct l2cap_conf_rfc rfc = { .mode = L2CAP_MODE_BASIC };
	struct l2cap_conf_efs efs;
	u8 remote_efs = 0;
-	u16 mtu = L2CAP_DEFAULT_MTU;
+	u16 mtu = 0;
	u16 result = L2CAP_CONF_SUCCESS;
	u16 size;

···
	if (result == L2CAP_CONF_SUCCESS) {
		/* Configure output options and let the other side know
		 * which ones we don't like. */
+
+		/* If MTU is not provided in configure request, use the most recently
+		 * explicitly or implicitly accepted value for the other direction,
+		 * or the default value.
+		 */
+		if (mtu == 0)
+			mtu = chan->imtu ? chan->imtu : L2CAP_DEFAULT_MTU;

		if (mtu < L2CAP_DEFAULT_MIN_MTU)
			result = L2CAP_CONF_UNACCEPT;
+9
net/bridge/br_multicast.c
···

void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pmctx)
{
+	struct net_bridge *br = pmctx->port->br;
+	bool del = false;
+
#if IS_ENABLED(CONFIG_IPV6)
	timer_delete_sync(&pmctx->ip6_mc_router_timer);
#endif
	timer_delete_sync(&pmctx->ip4_mc_router_timer);
+
+	spin_lock_bh(&br->multicast_lock);
+	del |= br_ip6_multicast_rport_del(pmctx);
+	del |= br_ip4_multicast_rport_del(pmctx);
+	br_multicast_rport_del_notify(pmctx, del);
+	spin_unlock_bh(&br->multicast_lock);
}

int br_multicast_add_port(struct net_bridge_port *port)
···
 	if (!pskb_may_pull(skb, write_len))
 		return -ENOMEM;
 
-	if (!skb_frags_readable(skb))
-		return -EFAULT;
-
 	if (!skb_cloned(skb) || skb_clone_writable(skb, write_len))
 		return 0;
 
+3
net/ipv4/tcp_fastopen.c
···
 #include <linux/tcp.h>
 #include <linux/rcupdate.h>
 #include <net/tcp.h>
+#include <net/busy_poll.h>
 
 void tcp_fastopen_init_key_once(struct net *net)
 {
···
 				  req->timeout, false);
 
 	refcount_set(&req->rsk_refcnt, 2);
+
+	sk_mark_napi_id_set(child, skb);
 
 	/* Now finish processing the fastopen child socket. */
 	tcp_init_transfer(child, BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB, skb);
+24-11
net/ipv4/tcp_input.c
···
 {
 	const struct sock *sk = (const struct sock *)tp;
 
-	if (tp->retrans_stamp &&
-	    tcp_tsopt_ecr_before(tp, tp->retrans_stamp))
-		return true;  /* got echoed TS before first retransmission */
+	/* Received an echoed timestamp before the first retransmission? */
+	if (tp->retrans_stamp)
+		return tcp_tsopt_ecr_before(tp, tp->retrans_stamp);
 
-	/* Check if nothing was retransmitted (retrans_stamp==0), which may
-	 * happen in fast recovery due to TSQ. But we ignore zero retrans_stamp
-	 * in TCP_SYN_SENT, since when we set FLAG_SYN_ACKED we also clear
-	 * retrans_stamp even if we had retransmitted the SYN.
+	/* We set tp->retrans_stamp upon the first retransmission of a loss
+	 * recovery episode, so normally if tp->retrans_stamp is 0 then no
+	 * retransmission has happened yet (likely due to TSQ, which can cause
+	 * fast retransmits to be delayed). So if snd_una advanced while
+	 * tp->retrans_stamp is 0 then apparently a packet was merely delayed,
+	 * not lost. But there are exceptions where we retransmit but then
+	 * clear tp->retrans_stamp, so we check for those exceptions.
 	 */
-	if (!tp->retrans_stamp &&	   /* no record of a retransmit/SYN? */
-	    sk->sk_state != TCP_SYN_SENT)  /* not the FLAG_SYN_ACKED case? */
-		return true;  /* nothing was retransmitted */
 
-	return false;
+	/* (1) For non-SACK connections, tcp_is_non_sack_preventing_reopen()
+	 * clears tp->retrans_stamp when snd_una == high_seq.
+	 */
+	if (!tcp_is_sack(tp) && !before(tp->snd_una, tp->high_seq))
+		return false;
+
+	/* (2) In TCP_SYN_SENT, tcp_clean_rtx_queue() clears tp->retrans_stamp
+	 * when it sets FLAG_SYN_ACKED, even if the SYN was retransmitted.
+	 */
+	if (sk->sk_state == TCP_SYN_SENT)
+		return false;
+
+	return true;	/* tp->retrans_stamp is zero; no retransmit yet */
 }
 
 /* Undo procedures. */
+8
net/ipv6/calipso.c
···
 	struct ipv6_opt_hdr *old, *new;
 	struct sock *sk = sk_to_full_sk(req_to_sk(req));
 
+	/* sk is NULL for SYN+ACK w/ SYN Cookie */
+	if (!sk)
+		return -ENOMEM;
+
 	if (req_inet->ipv6_opt && req_inet->ipv6_opt->hopopt)
 		old = req_inet->ipv6_opt->hopopt;
 	else
···
 	struct ipv6_opt_hdr *new;
 	struct ipv6_txoptions *txopts;
 	struct sock *sk = sk_to_full_sk(req_to_sk(req));
+
+	/* sk is NULL for SYN+ACK w/ SYN Cookie */
+	if (!sk)
+		return;
 
 	if (!req_inet->ipv6_opt || !req_inet->ipv6_opt->hopopt)
 		return;
···
 	if (!multicast &&
 	    !ether_addr_equal(sdata->dev->dev_addr, hdr->addr1))
 		return false;
+	/* reject invalid/our STA address */
+	if (!is_valid_ether_addr(hdr->addr2) ||
+	    ether_addr_equal(sdata->dev->dev_addr, hdr->addr2))
+		return false;
 	if (!rx->sta) {
 		int rate_idx;
 		if (status->encoding != RX_ENC_LEGACY)
+21-8
net/mac80211/tx.c
···
  * Copyright 2006-2007	Jiri Benc <jbenc@suse.cz>
  * Copyright 2007	Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
- * Copyright (C) 2018-2024 Intel Corporation
+ * Copyright (C) 2018-2025 Intel Corporation
  *
  * Transmit and frame generation functions.
  */
···
 	}
 }
 
-static u8 __ieee80211_beacon_update_cntdwn(struct beacon_data *beacon)
+static u8 __ieee80211_beacon_update_cntdwn(struct ieee80211_link_data *link,
+					   struct beacon_data *beacon)
 {
-	beacon->cntdwn_current_counter--;
+	if (beacon->cntdwn_current_counter == 1) {
+		/*
+		 * Channel switch handling is done by a worker thread while
+		 * beacons get pulled from hardware timers. It's therefore
+		 * possible that software threads are slow enough to not be
+		 * able to complete CSA handling in a single beacon interval,
+		 * in which case we get here. There isn't much to do about
+		 * it, other than letting the user know that the AP isn't
+		 * behaving correctly.
+		 */
+		link_err_once(link,
+			      "beacon TX faster than countdown (channel/color switch) completion\n");
+		return 0;
+	}
 
-	/* the counter should never reach 0 */
-	WARN_ON_ONCE(!beacon->cntdwn_current_counter);
+	beacon->cntdwn_current_counter--;
 
 	return beacon->cntdwn_current_counter;
 }
···
 	if (!beacon)
 		goto unlock;
 
-	count = __ieee80211_beacon_update_cntdwn(beacon);
+	count = __ieee80211_beacon_update_cntdwn(link, beacon);
 
 unlock:
 	rcu_read_unlock();
···
 
 	if (beacon->cntdwn_counter_offsets[0]) {
 		if (!is_template)
-			__ieee80211_beacon_update_cntdwn(beacon);
+			__ieee80211_beacon_update_cntdwn(link, beacon);
 
 		ieee80211_set_beacon_cntdwn(sdata, beacon, link);
 	}
···
 			 * for now we leave it consistent with overall
 			 * mac80211's behavior.
 			 */
-			__ieee80211_beacon_update_cntdwn(beacon);
+			__ieee80211_beacon_update_cntdwn(link, beacon);
 
 			ieee80211_set_beacon_cntdwn(sdata, beacon, link);
 		}
···
 	struct task_struct *owner;
 	local_lock_t bh_lock;
 };
-DECLARE_PER_CPU(struct ovs_pcpu_storage, ovs_pcpu_storage);
+
+extern struct ovs_pcpu_storage __percpu *ovs_pcpu_storage;
 
 /**
  * enum ovs_pkt_hash_types - hash info to include with a packet
···
     error::{Error, Result},
     ffi::c_void,
     prelude::*,
-    revocable::Revocable,
-    sync::Arc,
+    revocable::{Revocable, RevocableGuard},
+    sync::{rcu, Arc, Completion},
     types::ARef,
 };
-
-use core::ops::Deref;
 
 #[pin_data]
 struct DevresInner<T> {
···
     callback: unsafe extern "C" fn(*mut c_void),
     #[pin]
     data: Revocable<T>,
+    #[pin]
+    revoke: Completion,
 }
 
 /// This abstraction is meant to be used by subsystems to containerize [`Device`] bound resources to
 /// manage their lifetime.
 ///
 /// [`Device`] bound resources should be freed when either the resource goes out of scope or the
-/// [`Device`] is unbound respectively, depending on what happens first.
+/// [`Device`] is unbound respectively, depending on what happens first. In any case, it is always
+/// guaranteed that revoking the device resource is completed before the corresponding [`Device`]
+/// is unbound.
 ///
 /// To achieve that [`Devres`] registers a devres callback on creation, which is called once the
 /// [`Device`] is unbound, revoking access to the encapsulated resource (see also [`Revocable`]).
···
                 dev: dev.into(),
                 callback: Self::devres_callback,
                 data <- Revocable::new(data),
+                revoke <- Completion::new(),
             }),
             flags,
         )?;
···
         self as _
     }
 
-    fn remove_action(this: &Arc<Self>) {
+    fn remove_action(this: &Arc<Self>) -> bool {
         // SAFETY:
         // - `self.inner.dev` is a valid `Device`,
         // - the `action` and `data` pointers are the exact same ones as given to devm_add_action()
         //   previously,
         // - `self` is always valid, even if the action has been released already.
-        let ret = unsafe {
+        let success = unsafe {
             bindings::devm_remove_action_nowarn(
                 this.dev.as_raw(),
                 Some(this.callback),
                 this.as_ptr() as _,
             )
-        };
+        } == 0;
 
-        if ret == 0 {
+        if success {
             // SAFETY: We leaked an `Arc` reference to devm_add_action() in `DevresInner::new`; if
             // devm_remove_action_nowarn() was successful we can (and have to) claim back ownership
             // of this reference.
             let _ = unsafe { Arc::from_raw(this.as_ptr()) };
         }
+
+        success
     }
 
     #[allow(clippy::missing_safety_doc)]
···
         // `DevresInner::new`.
         let inner = unsafe { Arc::from_raw(ptr) };
 
-        inner.data.revoke();
+        if !inner.data.revoke() {
+            // If `revoke()` returns false, it means that `Devres::drop` already started revoking
+            // `inner.data` for us. Hence we have to wait until `Devres::drop()` signals that it
+            // completed revoking `inner.data`.
+            inner.revoke.wait_for_completion();
+        }
     }
 }
···
         // SAFETY: `dev` being the same device as the device this `Devres` has been created for
         // proves that `self.0.data` hasn't been revoked and is guaranteed to not be revoked as
         // long as `dev` lives; `dev` lives at least as long as `self`.
-        Ok(unsafe { self.deref().access() })
+        Ok(unsafe { self.0.data.access() })
     }
-}
 
-impl<T> Deref for Devres<T> {
-    type Target = Revocable<T>;
+    /// [`Devres`] accessor for [`Revocable::try_access`].
+    pub fn try_access(&self) -> Option<RevocableGuard<'_, T>> {
+        self.0.data.try_access()
+    }
 
-    fn deref(&self) -> &Self::Target {
-        &self.0.data
+    /// [`Devres`] accessor for [`Revocable::try_access_with`].
+    pub fn try_access_with<R, F: FnOnce(&T) -> R>(&self, f: F) -> Option<R> {
+        self.0.data.try_access_with(f)
+    }
+
+    /// [`Devres`] accessor for [`Revocable::try_access_with_guard`].
+    pub fn try_access_with_guard<'a>(&'a self, guard: &'a rcu::Guard) -> Option<&'a T> {
+        self.0.data.try_access_with_guard(guard)
     }
 }
 
 impl<T> Drop for Devres<T> {
     fn drop(&mut self) {
-        DevresInner::remove_action(&self.0);
+        // SAFETY: When `drop` runs, it is guaranteed that nobody is accessing the revocable data
+        // anymore, hence it is safe not to wait for the grace period to finish.
+        if unsafe { self.0.data.revoke_nosync() } {
+            // We revoked `self.0.data` before the devres action did, hence try to remove it.
+            if !DevresInner::remove_action(&self.0) {
+                // We could not remove the devres action, which means that it now runs concurrently,
+                // hence signal that `self.0.data` has been revoked successfully.
+                self.0.revoke.complete_all();
+            }
+        }
     }
 }
+14-4
rust/kernel/revocable.rs
···
     /// # Safety
     ///
     /// Callers must ensure that there are no more concurrent users of the revocable object.
-    unsafe fn revoke_internal<const SYNC: bool>(&self) {
-        if self.is_available.swap(false, Ordering::Relaxed) {
+    unsafe fn revoke_internal<const SYNC: bool>(&self) -> bool {
+        let revoke = self.is_available.swap(false, Ordering::Relaxed);
+
+        if revoke {
             if SYNC {
                 // SAFETY: Just an FFI call, there are no further requirements.
                 unsafe { bindings::synchronize_rcu() };
···
             // `compare_exchange` above that takes `is_available` from `true` to `false`.
             unsafe { drop_in_place(self.data.get()) };
         }
+
+        revoke
     }
 
     /// Revokes access to and drops the wrapped object.
···
     /// Access to the object is revoked immediately to new callers of [`Revocable::try_access`],
     /// expecting that there are no concurrent users of the object.
     ///
+    /// Returns `true` if `&self` has been revoked with this call, `false` if it was revoked
+    /// already.
+    ///
     /// # Safety
     ///
     /// Callers must ensure that there are no more concurrent users of the revocable object.
-    pub unsafe fn revoke_nosync(&self) {
+    pub unsafe fn revoke_nosync(&self) -> bool {
         // SAFETY: By the safety requirement of this function, the caller ensures that nobody is
         // accessing the data anymore and hence we don't have to wait for the grace period to
         // finish.
···
     /// If there are concurrent users of the object (i.e., ones that called
     /// [`Revocable::try_access`] beforehand and still haven't dropped the returned guard), this
     /// function waits for the concurrent access to complete before dropping the wrapped object.
-    pub fn revoke(&self) {
+    ///
+    /// Returns `true` if `&self` has been revoked with this call, `false` if it was revoked
+    /// already.
+    pub fn revoke(&self) -> bool {
         // SAFETY: By passing `true` we ask `revoke_internal` to wait for the grace period to
         // finish.
         unsafe { self.revoke_internal::<true>() }
+2
rust/kernel/sync.rs
···
 use pin_init;
 
 mod arc;
+pub mod completion;
 mod condvar;
 pub mod lock;
 mod locked_by;
···
 pub mod rcu;
 
 pub use arc::{Arc, ArcBorrow, UniqueArc};
+pub use completion::Completion;
 pub use condvar::{new_condvar, CondVar, CondVarTimeoutResult};
 pub use lock::global::{global_lock, GlobalGuard, GlobalLock, GlobalLockBackend, GlobalLockedBy};
 pub use lock::mutex::{new_mutex, Mutex, MutexGuard};
+112
rust/kernel/sync/completion.rs
···
+// SPDX-License-Identifier: GPL-2.0
+
+//! Completion support.
+//!
+//! Reference: <https://docs.kernel.org/scheduler/completion.html>
+//!
+//! C header: [`include/linux/completion.h`](srctree/include/linux/completion.h)
+
+use crate::{bindings, prelude::*, types::Opaque};
+
+/// Synchronization primitive to signal when a certain task has been completed.
+///
+/// The [`Completion`] synchronization primitive signals when a certain task has been completed by
+/// waking up other tasks that have been queued up to wait for the [`Completion`] to be completed.
+///
+/// # Examples
+///
+/// ```
+/// use kernel::sync::{Arc, Completion};
+/// use kernel::workqueue::{self, impl_has_work, new_work, Work, WorkItem};
+///
+/// #[pin_data]
+/// struct MyTask {
+///     #[pin]
+///     work: Work<MyTask>,
+///     #[pin]
+///     done: Completion,
+/// }
+///
+/// impl_has_work! {
+///     impl HasWork<Self> for MyTask { self.work }
+/// }
+///
+/// impl MyTask {
+///     fn new() -> Result<Arc<Self>> {
+///         let this = Arc::pin_init(pin_init!(MyTask {
+///             work <- new_work!("MyTask::work"),
+///             done <- Completion::new(),
+///         }), GFP_KERNEL)?;
+///
+///         let _ = workqueue::system().enqueue(this.clone());
+///
+///         Ok(this)
+///     }
+///
+///     fn wait_for_completion(&self) {
+///         self.done.wait_for_completion();
+///
+///         pr_info!("Completion: task complete\n");
+///     }
+/// }
+///
+/// impl WorkItem for MyTask {
+///     type Pointer = Arc<MyTask>;
+///
+///     fn run(this: Arc<MyTask>) {
+///         // process this task
+///         this.done.complete_all();
+///     }
+/// }
+///
+/// let task = MyTask::new()?;
+/// task.wait_for_completion();
+/// # Ok::<(), Error>(())
+/// ```
+#[pin_data]
+pub struct Completion {
+    #[pin]
+    inner: Opaque<bindings::completion>,
+}
+
+// SAFETY: `Completion` is safe to be sent to any task.
+unsafe impl Send for Completion {}
+
+// SAFETY: `Completion` is safe to be accessed concurrently.
+unsafe impl Sync for Completion {}
+
+impl Completion {
+    /// Create an initializer for a new [`Completion`].
+    pub fn new() -> impl PinInit<Self> {
+        pin_init!(Self {
+            inner <- Opaque::ffi_init(|slot: *mut bindings::completion| {
+                // SAFETY: `slot` is a valid pointer to an uninitialized `struct completion`.
+                unsafe { bindings::init_completion(slot) };
+            }),
+        })
+    }
+
+    fn as_raw(&self) -> *mut bindings::completion {
+        self.inner.get()
+    }
+
+    /// Signal all tasks waiting on this completion.
+    ///
+    /// This method wakes up all tasks waiting on this completion; after this operation the
+    /// completion is permanently done, i.e. signals all current and future waiters.
+    pub fn complete_all(&self) {
+        // SAFETY: `self.as_raw()` is a pointer to a valid `struct completion`.
+        unsafe { bindings::complete_all(self.as_raw()) };
+    }
+
+    /// Wait for completion of a task.
+    ///
+    /// This method waits for the completion of a task; it is not interruptible and there is no
+    /// timeout.
+    ///
+    /// See also [`Completion::complete_all`].
+    pub fn wait_for_completion(&self) {
+        // SAFETY: `self.as_raw()` is a pointer to a valid `struct completion`.
+        unsafe { bindings::wait_for_completion(self.as_raw()) };
+    }
+}
+1-1
scripts/gdb/linux/vfs.py
···
         if parent == d or parent == 0:
             return ""
         p = dentry_name(d['d_parent']) + "/"
-        return p + d['d_iname'].string()
+        return p + d['d_shortname']['string'].string()
 
 class DentryName(gdb.Function):
     """Return string of the full path of a dentry.
+11-5
security/selinux/ss/services.c
···
 		goto out_unlock;
 	}
 	/* Obtain the sid for the context. */
-	rc = sidtab_context_to_sid(sidtab, &newcontext, out_sid);
-	if (rc == -ESTALE) {
-		rcu_read_unlock();
-		context_destroy(&newcontext);
-		goto retry;
+	if (context_equal(scontext, &newcontext))
+		*out_sid = ssid;
+	else if (context_equal(tcontext, &newcontext))
+		*out_sid = tsid;
+	else {
+		rc = sidtab_context_to_sid(sidtab, &newcontext, out_sid);
+		if (rc == -ESTALE) {
+			rcu_read_unlock();
+			context_destroy(&newcontext);
+			goto retry;
+		}
 	}
 out_unlock:
 	rcu_read_unlock();
···
 	tristate "Apple Silicon MCA driver"
 	depends on ARCH_APPLE || COMPILE_TEST
 	select SND_DMAENGINE_PCM
-	default ARCH_APPLE
 	help
 	  This option enables an ASoC platform driver for MCA peripherals found
 	  on Apple Silicon SoCs.
+10-8
sound/soc/codecs/cs35l56-sdw.c
···
 	.val_format_endian_default = REGMAP_ENDIAN_BIG,
 };
 
-static int cs35l56_sdw_set_cal_index(struct cs35l56_private *cs35l56)
+static int cs35l56_sdw_get_unique_id(struct cs35l56_private *cs35l56)
 {
 	int ret;
 
-	/* SoundWire UniqueId is used to index the calibration array */
 	ret = sdw_read_no_pm(cs35l56->sdw_peripheral, SDW_SCP_DEVID_0);
 	if (ret < 0)
 		return ret;
 
-	cs35l56->base.cal_index = ret & 0xf;
+	cs35l56->sdw_unique_id = ret & 0xf;
 
 	return 0;
 }
···
 
 	pm_runtime_get_noresume(cs35l56->base.dev);
 
-	if (cs35l56->base.cal_index < 0) {
-		ret = cs35l56_sdw_set_cal_index(cs35l56);
-		if (ret < 0)
-			goto out;
-	}
+	ret = cs35l56_sdw_get_unique_id(cs35l56);
+	if (ret)
+		goto out;
+
+	/* SoundWire UniqueId is used to index the calibration array */
+	if (cs35l56->base.cal_index < 0)
+		cs35l56->base.cal_index = cs35l56->sdw_unique_id;
 
 	ret = cs35l56_init(cs35l56);
 	if (ret < 0) {
···
 
 	cs35l56->base.dev = dev;
 	cs35l56->sdw_peripheral = peripheral;
+	cs35l56->sdw_link_num = peripheral->bus->link_id;
 	INIT_WORK(&cs35l56->sdw_irq_work, cs35l56_sdw_irq_work);
 
 	dev_set_drvdata(dev, cs35l56);
+63-9
sound/soc/codecs/cs35l56.c
···
 	return ret;
 }
 
+static int cs35l56_dsp_download_and_power_up(struct cs35l56_private *cs35l56,
+					     bool load_firmware)
+{
+	int ret;
+
+	/*
+	 * Abort the first load if it didn't find the suffixed bins and
+	 * we have an alternate fallback suffix.
+	 */
+	cs35l56->dsp.bin_mandatory = (load_firmware && cs35l56->fallback_fw_suffix);
+
+	ret = wm_adsp_power_up(&cs35l56->dsp, load_firmware);
+	if ((ret == -ENOENT) && cs35l56->dsp.bin_mandatory) {
+		cs35l56->dsp.fwf_suffix = cs35l56->fallback_fw_suffix;
+		cs35l56->fallback_fw_suffix = NULL;
+		cs35l56->dsp.bin_mandatory = false;
+		ret = wm_adsp_power_up(&cs35l56->dsp, load_firmware);
+	}
+
+	if (ret) {
+		dev_dbg(cs35l56->base.dev, "wm_adsp_power_up ret %d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
 static void cs35l56_reinit_patch(struct cs35l56_private *cs35l56)
 {
 	int ret;
 
-	/* Use wm_adsp to load and apply the firmware patch and coefficient files */
-	ret = wm_adsp_power_up(&cs35l56->dsp, true);
-	if (ret) {
-		dev_dbg(cs35l56->base.dev, "%s: wm_adsp_power_up ret %d\n", __func__, ret);
+	ret = cs35l56_dsp_download_and_power_up(cs35l56, true);
+	if (ret)
 		return;
-	}
 
 	cs35l56_write_cal(cs35l56);
···
 	 * but only if firmware is missing. If firmware is already patched just
 	 * power-up wm_adsp without downloading firmware.
 	 */
-	ret = wm_adsp_power_up(&cs35l56->dsp, !!firmware_missing);
-	if (ret) {
-		dev_dbg(cs35l56->base.dev, "%s: wm_adsp_power_up ret %d\n", __func__, ret);
+	ret = cs35l56_dsp_download_and_power_up(cs35l56, firmware_missing);
+	if (ret)
 		goto err;
-	}
 
 	mutex_lock(&cs35l56->base.irq_lock);
···
 	pm_runtime_put_autosuspend(cs35l56->base.dev);
 }
 
+static int cs35l56_set_fw_suffix(struct cs35l56_private *cs35l56)
+{
+	if (cs35l56->dsp.fwf_suffix)
+		return 0;
+
+	if (!cs35l56->sdw_peripheral)
+		return 0;
+
+	cs35l56->dsp.fwf_suffix = devm_kasprintf(cs35l56->base.dev, GFP_KERNEL,
+						 "l%uu%u",
+						 cs35l56->sdw_link_num,
+						 cs35l56->sdw_unique_id);
+	if (!cs35l56->dsp.fwf_suffix)
+		return -ENOMEM;
+
+	/*
+	 * There are published firmware files for L56 B0 silicon using
+	 * the ALSA prefix as the filename suffix. Default to trying these
+	 * first, with the new name as an alternate.
+	 */
+	if ((cs35l56->base.type == 0x56) && (cs35l56->base.rev == 0xb0)) {
+		cs35l56->fallback_fw_suffix = cs35l56->dsp.fwf_suffix;
+		cs35l56->dsp.fwf_suffix = cs35l56->component->name_prefix;
+	}
+
+	return 0;
+}
+
 static int cs35l56_component_probe(struct snd_soc_component *component)
 {
 	struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(component);
···
 		return -ENOMEM;
 
 	cs35l56->component = component;
+	ret = cs35l56_set_fw_suffix(cs35l56);
+	if (ret)
+		return ret;
+
 	wm_adsp2_component_probe(&cs35l56->dsp, component);
 
 	debugfs_create_bool("init_done", 0444, debugfs_root, &cs35l56->base.init_done);
···
 			break;
 		default:
 			dev_warn(card->dev,
-				 "only -2ch and -4ch are supported for dmic\n");
+				 "unsupported number of dmics: %d\n",
+				 mach_params.dmic_num);
 			continue;
 		}
 		tplg_dev = TPLG_DEVICE_INTEL_PCH_DMIC;
···
 #define ORC_TYPE_REGS			3
 #define ORC_TYPE_REGS_PARTIAL		4
 
-#ifndef __ASSEMBLY__
+#ifndef __ASSEMBLER__
 /*
  * This struct is more or less a vastly simplified version of the DWARF Call
  * Frame Information standard.  It contains only the necessary parts of DWARF
···
 	unsigned int type:3;
 	unsigned int signal:1;
 };
-#endif /* __ASSEMBLY__ */
+#endif /* __ASSEMBLER__ */
 
 #endif /* _ORC_TYPES_H */
+5
tools/arch/x86/include/asm/amd/ibs.h
···
 /* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_AMD_IBS_H
+#define _ASM_X86_AMD_IBS_H
+
 /*
  * From PPR Vol 1 for AMD Family 19h Model 01h B1
  * 55898 Rev 0.35 - Feb 5, 2021
···
 	};
 	u64 regs[MSR_AMD64_IBS_REG_COUNT_MAX];
 };
+
+#endif /* _ASM_X86_AMD_IBS_H */
+10-4
tools/arch/x86/include/asm/cpufeatures.h
···
 #define X86_FEATURE_AMD_IBRS		(13*32+14) /* Indirect Branch Restricted Speculation */
 #define X86_FEATURE_AMD_STIBP		(13*32+15) /* Single Thread Indirect Branch Predictors */
 #define X86_FEATURE_AMD_STIBP_ALWAYS_ON	(13*32+17) /* Single Thread Indirect Branch Predictors always-on preferred */
-#define X86_FEATURE_AMD_IBRS_SAME_MODE	(13*32+19) /* Indirect Branch Restricted Speculation same mode protection*/
+#define X86_FEATURE_AMD_IBRS_SAME_MODE	(13*32+19) /* Indirect Branch Restricted Speculation same mode protection*/
 #define X86_FEATURE_AMD_PPIN		(13*32+23) /* "amd_ppin" Protected Processor Inventory Number */
 #define X86_FEATURE_AMD_SSBD		(13*32+24) /* Speculative Store Bypass Disable */
 #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* "virt_ssbd" Virtualized Speculative Store Bypass Disable */
···
 #define X86_FEATURE_V_SPEC_CTRL		(15*32+20) /* "v_spec_ctrl" Virtual SPEC_CTRL */
 #define X86_FEATURE_VNMI		(15*32+25) /* "vnmi" Virtual NMI */
 #define X86_FEATURE_SVME_ADDR_CHK	(15*32+28) /* SVME addr check */
+#define X86_FEATURE_BUS_LOCK_THRESHOLD	(15*32+29) /* Bus lock threshold */
 #define X86_FEATURE_IDLE_HLT		(15*32+30) /* IDLE HLT intercept */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */
···
 #define X86_FEATURE_DEBUG_SWAP		(19*32+14) /* "debug_swap" SEV-ES full debug state swap support */
 #define X86_FEATURE_RMPREAD		(19*32+21) /* RMPREAD instruction */
 #define X86_FEATURE_SEGMENTED_RMP	(19*32+23) /* Segmented RMP support */
+#define X86_FEATURE_ALLOWED_SEV_FEATURES (19*32+27) /* Allowed SEV Features */
 #define X86_FEATURE_SVSM		(19*32+28) /* "svsm" SVSM present */
 #define X86_FEATURE_HV_INUSE_WR_ALLOWED	(19*32+30) /* Allow Write to in-use hypervisor-owned pages */
···
 #define X86_FEATURE_AUTOIBRS		(20*32+ 8) /* Automatic IBRS */
 #define X86_FEATURE_NO_SMM_CTL_MSR	(20*32+ 9) /* SMM_CTL MSR is not present */
 
+#define X86_FEATURE_PREFETCHI		(20*32+20) /* Prefetch Data/Instruction to Cache Level */
 #define X86_FEATURE_SBPB		(20*32+27) /* Selective Branch Prediction Barrier */
 #define X86_FEATURE_IBPB_BRTYPE		(20*32+28) /* MSR_PRED_CMD[IBPB] flushes all branch type predictions */
 #define X86_FEATURE_SRSO_NO		(20*32+29) /* CPU is not affected by SRSO */
···
 #define X86_FEATURE_AMD_HTR_CORES	(21*32+ 6) /* Heterogeneous Core Topology */
 #define X86_FEATURE_AMD_WORKLOAD_CLASS	(21*32+ 7) /* Workload Classification */
 #define X86_FEATURE_PREFER_YMM		(21*32+ 8) /* Avoid ZMM registers due to downclocking */
-#define X86_FEATURE_INDIRECT_THUNK_ITS	(21*32+ 9) /* Use thunk for indirect branches in lower half of cacheline */
+#define X86_FEATURE_APX			(21*32+ 9) /* Advanced Performance Extensions */
+#define X86_FEATURE_INDIRECT_THUNK_ITS	(21*32+10) /* Use thunk for indirect branches in lower half of cacheline */
 
 /*
  * BUG word(s)
···
 #define X86_BUG_BHI			X86_BUG( 1*32+ 3) /* "bhi" CPU is affected by Branch History Injection */
 #define X86_BUG_IBPB_NO_RET		X86_BUG( 1*32+ 4) /* "ibpb_no_ret" IBPB omits return target predictions */
 #define X86_BUG_SPECTRE_V2_USER		X86_BUG( 1*32+ 5) /* "spectre_v2_user" CPU is affected by Spectre variant 2 attack between user processes */
-#define X86_BUG_ITS			X86_BUG( 1*32+ 6) /* "its" CPU is affected by Indirect Target Selection */
-#define X86_BUG_ITS_NATIVE_ONLY		X86_BUG( 1*32+ 7) /* "its_native_only" CPU is affected by ITS, VMX is not affected */
+#define X86_BUG_OLD_MICROCODE		X86_BUG( 1*32+ 6) /* "old_microcode" CPU has old microcode, it is surely vulnerable to something */
+#define X86_BUG_ITS			X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */
+#define X86_BUG_ITS_NATIVE_ONLY		X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */
+
 #endif /* _ASM_X86_CPUFEATURES_H */
···
 #define KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS	(1 << 6)
 #define KVM_X86_QUIRK_SLOT_ZAP_ALL		(1 << 7)
 #define KVM_X86_QUIRK_STUFF_FEATURE_MSRS	(1 << 8)
+#define KVM_X86_QUIRK_IGNORE_GUEST_PAT		(1 << 9)
 
 #define KVM_STATE_NESTED_FORMAT_VMX	0
 #define KVM_STATE_NESTED_FORMAT_SVM	1
···
 #define KVM_X86_SEV_ES_VM	3
 #define KVM_X86_SNP_VM		4
 #define KVM_X86_TDX_VM		5
+
+/* Trust Domain eXtension sub-ioctl() commands. */
+enum kvm_tdx_cmd_id {
+	KVM_TDX_CAPABILITIES = 0,
+	KVM_TDX_INIT_VM,
+	KVM_TDX_INIT_VCPU,
+	KVM_TDX_INIT_MEM_REGION,
+	KVM_TDX_FINALIZE_VM,
+	KVM_TDX_GET_CPUID,
+
+	KVM_TDX_CMD_NR_MAX,
+};
+
+struct kvm_tdx_cmd {
+	/* enum kvm_tdx_cmd_id */
+	__u32 id;
+	/* flags for sub-command. If sub-command doesn't use this, set zero. */
+	__u32 flags;
+	/*
+	 * data for each sub-command. An immediate or a pointer to the actual
+	 * data in process virtual address. If sub-command doesn't use it,
+	 * set zero.
+	 */
+	__u64 data;
+	/*
+	 * Auxiliary error code. The sub-command may return TDX SEAMCALL
+	 * status code in addition to -Exxx.
+	 */
+	__u64 hw_error;
+};
+
+struct kvm_tdx_capabilities {
+	__u64 supported_attrs;
+	__u64 supported_xfam;
+	__u64 reserved[254];
+
+	/* Configurable CPUID bits for userspace */
+	struct kvm_cpuid2 cpuid;
+};
+
+struct kvm_tdx_init_vm {
+	__u64 attributes;
+	__u64 xfam;
+	__u64 mrconfigid[6];	/* sha384 digest */
+	__u64 mrowner[6];	/* sha384 digest */
+	__u64 mrownerconfig[6];	/* sha384 digest */
+
+	/* The total space for TD_PARAMS before the CPUIDs is 256 bytes */
+	__u64 reserved[12];
+
+	/*
+	 * Call KVM_TDX_INIT_VM before vcpu creation, thus before
+	 * KVM_SET_CPUID2.
+	 * This configuration supersedes KVM_SET_CPUID2s for VCPUs because the
+	 * TDX module directly virtualizes those CPUIDs without VMM. The user
+	 * space VMM, e.g. qemu, should make KVM_SET_CPUID2 consistent with
+	 * those values. If it doesn't, KVM may have wrong idea of vCPUIDs of
+	 * the guest, and KVM may wrongly emulate CPUIDs or MSRs that the TDX
+	 * module doesn't virtualize.
+	 */
+	struct kvm_cpuid2 cpuid;
+};
+
+#define KVM_TDX_MEASURE_MEMORY_REGION	_BITULL(0)
+
+struct kvm_tdx_init_mem_region {
+	__u64 source_addr;
+	__u64 gpa;
+	__u64 nr_pages;
+};
 
 #endif /* _ASM_X86_KVM_H */
···
 #define BIT_ULL_MASK(nr)	(ULL(1) << ((nr) % BITS_PER_LONG_LONG))
 #define BIT_ULL_WORD(nr)	((nr) / BITS_PER_LONG_LONG)
 #define BITS_PER_BYTE		8
+#define BITS_PER_TYPE(type)	(sizeof(type) * BITS_PER_BYTE)
 
 /*
  * Create a contiguous bitmask starting at bit position @l and ending at
···
 * GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000.
 */
 #if !defined(__ASSEMBLY__)
+
+/*
+ * Missing asm support
+ *
+ * GENMASK_U*() and BIT_U*() depend on BITS_PER_TYPE() which relies on sizeof(),
+ * something not available in asm. Nevertheless, fixed width integers is a C
+ * concept. Assembly code can rely on the long and long long versions instead.
+ */
+
 #include <linux/build_bug.h>
 #include <linux/compiler.h>
+#include <linux/overflow.h>
+
 #define GENMASK_INPUT_CHECK(h, l) BUILD_BUG_ON_ZERO(const_true((l) > (h)))
-#else
+
+/*
+ * Generate a mask for the specified type @t. Additional checks are made to
+ * guarantee the value returned fits in that type, relying on
+ * -Wshift-count-overflow compiler check to detect incompatible arguments.
+ * For example, all these create build errors or warnings:
+ *
+ * - GENMASK(15, 20): wrong argument order
+ * - GENMASK(72, 15): doesn't fit unsigned long
+ * - GENMASK_U32(33, 15): doesn't fit in a u32
+ */
+#define GENMASK_TYPE(t, h, l)					\
+	((t)(GENMASK_INPUT_CHECK(h, l) +			\
+	     (type_max(t) << (l) &				\
+	      type_max(t) >> (BITS_PER_TYPE(t) - 1 - (h)))))
+
+#define GENMASK_U8(h, l)	GENMASK_TYPE(u8, h, l)
+#define GENMASK_U16(h, l)	GENMASK_TYPE(u16, h, l)
+#define GENMASK_U32(h, l)	GENMASK_TYPE(u32, h, l)
+#define GENMASK_U64(h, l)	GENMASK_TYPE(u64, h, l)
+
+/*
+ * Fixed-type variants of BIT(), with additional checks like GENMASK_TYPE(). The
+ * following examples generate compiler warnings due to -Wshift-count-overflow:
+ *
+ * - BIT_U8(8)
+ * - BIT_U32(-1)
+ * - BIT_U32(40)
+ */
+#define BIT_INPUT_CHECK(type, nr) \
+	BUILD_BUG_ON_ZERO(const_true((nr) >= BITS_PER_TYPE(type)))
+
+#define BIT_TYPE(type, nr) ((type)(BIT_INPUT_CHECK(type, nr) + BIT_ULL(nr)))
+
+#define BIT_U8(nr)	BIT_TYPE(u8, nr)
+#define BIT_U16(nr)	BIT_TYPE(u16, nr)
+#define BIT_U32(nr)	BIT_TYPE(u32, nr)
+#define BIT_U64(nr)	BIT_TYPE(u64, nr)
+
+#else /* defined(__ASSEMBLY__) */
+
 /*
  * BUILD_BUG_ON_ZERO is not available in h files included from asm files,
  * disable the input check if that is the case.
 */
 #define GENMASK_INPUT_CHECK(h, l) 0
-#endif
+
+#endif /* !defined(__ASSEMBLY__) */
 
 #define GENMASK(h, l) \
	(GENMASK_INPUT_CHECK(h, l) + __GENMASK(h, l))
+5-5
tools/include/linux/build_bug.h
···
 
 #include <linux/compiler.h>
 
-#ifdef __CHECKER__
-#define BUILD_BUG_ON_ZERO(e) (0)
-#else /* __CHECKER__ */
 /*
  * Force a compilation error if condition is true, but also produce a
  * result (of value 0 and type int), so the expression can be used
  * e.g. in a structure initializer (or where-ever else comma expressions
  * aren't permitted).
+ *
+ * Take an error message as an optional second argument. If omitted,
+ * default to the stringification of the tested expression.
  */
-#define BUILD_BUG_ON_ZERO(e) ((int)(sizeof(struct { int:(-!!(e)); })))
-#endif /* __CHECKER__ */
+#define BUILD_BUG_ON_ZERO(e, ...) \
+	__BUILD_BUG_ON_ZERO_MSG(e, ##__VA_ARGS__, #e " is true")
 
 /* Force a compilation error if a constant expression is not a power of 2 */
 #define __BUILD_BUG_ON_NOT_POWER_OF_2(n)	\
···
 	int sym_idx;
 	int btf_id;
 	int sec_btf_id;
-	const char *name;
+	char *name;
 	char *essent_name;
 	bool is_set;
 	bool is_weak;
···
 			return ext->btf_id;
 		}
 		t = btf__type_by_id(obj->btf, ext->btf_id);
-		ext->name = btf__name_by_offset(obj->btf, t->name_off);
+		ext->name = strdup(btf__name_by_offset(obj->btf, t->name_off));
+		if (!ext->name)
+			return -ENOMEM;
 		ext->sym_idx = i;
 		ext->is_weak = ELF64_ST_BIND(sym->st_info) == STB_WEAK;
···
 	zfree(&obj->btf_custom_path);
 	zfree(&obj->kconfig);
 
-	for (i = 0; i < obj->nr_extern; i++)
+	for (i = 0; i < obj->nr_extern; i++) {
+		zfree(&obj->externs[i].name);
 		zfree(&obj->externs[i].essent_name);
+	}
 
 	zfree(&obj->externs);
 	obj->nr_extern = 0;
+17-11
tools/net/ynl/pyynl/lib/ynl.py
···
                 self.extack['unknown'].append(extack)
 
         if attr_space:
-            # We don't have the ability to parse nests yet, so only do global
-            if 'miss-type' in self.extack and 'miss-nest' not in self.extack:
-                miss_type = self.extack['miss-type']
-                if miss_type in attr_space.attrs_by_val:
-                    spec = attr_space.attrs_by_val[miss_type]
-                    self.extack['miss-type'] = spec['name']
-                    if 'doc' in spec:
-                        self.extack['miss-type-doc'] = spec['doc']
+            self.annotate_extack(attr_space)
 
     def _decode_policy(self, raw):
         policy = {}
···
             policy['mask'] = attr.as_scalar('u64')
         return policy
 
+    def annotate_extack(self, attr_space):
+        """ Make extack more human friendly with attribute information """
+
+        # We don't have the ability to parse nests yet, so only do global
+        if 'miss-type' in self.extack and 'miss-nest' not in self.extack:
+            miss_type = self.extack['miss-type']
+            if miss_type in attr_space.attrs_by_val:
+                spec = attr_space.attrs_by_val[miss_type]
+                self.extack['miss-type'] = spec['name']
+                if 'doc' in spec:
+                    self.extack['miss-type-doc'] = spec['doc']
+
     def cmd(self):
         return self.nl_type
···
 
 
 class NlMsgs:
-    def __init__(self, data, attr_space=None):
+    def __init__(self, data):
         self.msgs = []
 
         offset = 0
         while offset < len(data):
-            msg = NlMsg(data, offset, attr_space=attr_space)
+            msg = NlMsg(data, offset)
             offset += msg.nl_len
             self.msgs.append(msg)
···
             op_rsp = []
             while not done:
                 reply = self.sock.recv(self._recv_size)
-                nms = NlMsgs(reply, attr_space=op.attr_set)
+                nms = NlMsgs(reply)
                 self._recv_dbg_print(reply, nms)
                 for nl_msg in nms:
                     if nl_msg.nl_seq in reqs_by_seq:
                         (op, vals, req_msg, req_flags) = reqs_by_seq[nl_msg.nl_seq]
                         if nl_msg.extack:
+                            nl_msg.annotate_extack(op.attr_set)
                             self._decode_extack(req_msg, op, nl_msg.extack, vals)
                         else:
                             op = None
+41-16
tools/perf/Documentation/perf-amd-ibs.txt
···
   # perf mem report
 
 A normal perf mem report output will provide detailed memory access profile.
-However, it can also be aggregated based on output fields. For example:
+New output fields will show related access info together. For example:
 
-  # perf mem report -F mem,sample,snoop
-  Samples: 3M of event 'ibs_op//', Event count (approx.): 23524876
-  Memory access                            Samples  Snoop
-  N/A                                      1903343  N/A
-  L1 hit                                   1056754  N/A
-  L2 hit                                     75231  N/A
-  L3 hit                                      9496  HitM
-  L3 hit                                      2270  N/A
-  RAM hit                                     8710  N/A
-  Remote node, same socket RAM hit            3241  N/A
-  Remote core, same node Any cache hit        1572  HitM
-  Remote core, same node Any cache hit         514  N/A
-  Remote node, same socket Any cache hit      1216  HitM
-  Remote node, same socket Any cache hit       350  N/A
-  Uncached hit                                  18  N/A
+  # perf mem report -F overhead,cache,snoop,comm
+  ...
+  # Samples: 92K of event 'ibs_op//'
+  # Total weight : 531104
+  #
+  #           ---------- Cache -----------   --- Snoop ----
+  # Overhead     L1    L2  L1-buf   Other     HitM   Other  Command
+  # ........  ............................  ..............  ..........
+  #
+      76.07%   5.8% 35.7%    0.0%   34.6%    23.3%   52.8%  cc1
+       5.79%   0.2%  0.0%    0.0%    5.6%     0.1%    5.7%  make
+       5.78%   0.1%  4.4%    0.0%    1.2%     0.5%    5.3%  gcc
+       5.33%   0.3%  3.9%    0.0%    1.1%     0.2%    5.2%  as
+       5.00%   0.1%  3.8%    0.0%    1.0%     0.3%    4.7%  sh
+       1.56%   0.1%  0.1%    0.0%    1.4%     0.6%    0.9%  ld
+       0.28%   0.1%  0.0%    0.0%    0.2%     0.1%    0.2%  pkg-config
+       0.09%   0.0%  0.0%    0.0%    0.1%     0.0%    0.1%  git
+       0.03%   0.0%  0.0%    0.0%    0.0%     0.0%    0.0%  rm
+  ...
+
+Also, it can be aggregated based on various memory access info using the
+sort keys. For example:
+
+  # perf mem report -s mem,snoop
+  ...
+  # Samples: 92K of event 'ibs_op//'
+  # Total weight : 531104
+  # Sort order   : mem,snoop
+  #
+  # Overhead       Samples  Memory access                            Snoop
+  # ........  ............  .......................................  ............
+  #
+      47.99%          1509  L2 hit                                   N/A
+      25.08%           338  core, same node Any cache hit            HitM
+      10.24%         54374  N/A                                      N/A
+       6.77%         35938  L1 hit                                   N/A
+       6.39%           101  core, same node Any cache hit            N/A
+       3.50%            69  RAM hit                                  N/A
+       0.03%           158  LFB/MAB hit                              N/A
+       0.00%             2  Uncached hit                             N/A
 
 Please refer to their man page for more detail.
+50
tools/perf/Documentation/perf-mem.txt
···
 	And the default sort keys are changed to local_weight, mem, sym, dso,
 	symbol_daddr, dso_daddr, snoop, tlb, locked, blocked, local_ins_lat.
 
+-F::
+--fields=::
+	Specify output field - multiple keys can be specified in CSV format.
+	Please see linkperf:perf-report[1] for details.
+
+	In addition to the default fields, 'perf mem report' will provide the
+	following fields to break down sample periods.
+
+	- op: operation in the sample instruction (load, store, prefetch, ...)
+	- cache: location in CPU cache (L1, L2, ...) where the sample hit
+	- mem: location in memory or other places the sample hit
+	- dtlb: location in Data TLB (L1, L2) where the sample hit
+	- snoop: snoop result for the sampled data access
+
+	Please take a look at the OUTPUT FIELD SELECTION section for caveats.
+
 -T::
 --type-profile::
 	Show data-type profile result instead of code symbols. This requires
···
   $ perf mem report -F overhead,symbol
   90%  [k] memcpy
   10%  [.] strcmp
+
+OUTPUT FIELD SELECTION
+----------------------
+"perf mem report" adds a number of new output fields specific to data source
+information in the sample.  Some of them have the same name with the existing
+sort keys ("mem" and "snoop").  So unlike other fields and sort keys, they'll
+behave differently when it's used by -F/--fields or -s/--sort.
+
+Using those two as output fields will aggregate samples altogether and show
+breakdown.
+
+  $ perf mem report -F mem,snoop
+  ...
+  # ------ Memory -------  --- Snoop ----
+  #   RAM  Uncach   Other    HitM  Other
+  # .....................  ..............
+  #
+     3.5%    0.0%   96.5%   25.1%  74.9%
+
+But using the same name for sort keys will aggregate samples for each type
+separately.
+
+  $ perf mem report -s mem,snoop
+  # Overhead       Samples  Memory access                            Snoop
+  # ........  ............  .......................................  ............
+  #
+      47.99%          1509  L2 hit                                   N/A
+      25.08%           338  core, same node Any cache hit            HitM
+      10.24%         54374  N/A                                      N/A
+       6.77%         35938  L1 hit                                   N/A
+       6.39%           101  core, same node Any cache hit            N/A
+       3.50%            69  RAM hit                                  N/A
+       0.03%           158  LFB/MAB hit                              N/A
+       0.00%             2  Uncached hit                             N/A
 
 SEE ALSO
 --------
···
 err=0
 
 test_event_uniquifying() {
-	# We use `clockticks` to verify the uniquify behavior.
+	# We use `clockticks` in `uncore_imc` to verify the uniquify behavior.
+	pmu="uncore_imc"
 	event="clockticks"
 
 	# If the `-A` option is added, the event should be uniquified.
···
 	echo "stat event uniquifying test"
 	uniquified_event_array=()
 
+	# Skip if the machine does not have `uncore_imc` device.
+	if ! ${perf_tool} list pmu | grep -q ${pmu}; then
+		echo "Target does not support PMU ${pmu} [Skipped]"
+		err=2
+		return
+	fi
+
 	# Check how many uniquified events.
 	while IFS= read -r line; do
 		uniquified_event=$(echo "$line" | awk '{print $1}')
 		uniquified_event_array+=("${uniquified_event}")
-	done < <(${perf_tool} list -v ${event} | grep "\[Kernel PMU event\]")
+	done < <(${perf_tool} list -v ${event} | grep ${pmu})
 
 	perf_command="${perf_tool} stat -e $event -A -o ${stat_output} -- true"
 	$perf_command
+1
tools/perf/tests/tests-scripts.c
···
 			continue; /* Skip scripts that have a separate driver. */
 		fd = openat(dir_fd, ent->d_name, O_PATH);
 		append_scripts_in_dir(fd, result, result_sz);
+		close(fd);
 	}
 	for (i = 0; i < n_dirs; i++)	/* Clean up */
 		zfree(&entlist[i]);
···
 # Order correspond to 'make run_tests' order
 TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_progs \
 	test_sockmap \
-	test_tcpnotify_user test_sysctl \
+	test_tcpnotify_user \
 	test_progs-no_alu32
 TEST_INST_SUBDIRS := no_alu32
···
 $(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)")
 endif
 
-# Define simple and short `make test_progs`, `make test_sysctl`, etc targets
+# Define simple and short `make test_progs`, `make test_maps`, etc targets
 # to build individual tests.
 # NOTE: Semicolon at the end is critical to override lib.mk's default static
 # rule for binaries.
···
 $(OUTPUT)/test_sockmap: $(CGROUP_HELPERS) $(TESTING_HELPERS)
 $(OUTPUT)/test_tcpnotify_user: $(CGROUP_HELPERS) $(TESTING_HELPERS) $(TRACE_HELPERS)
 $(OUTPUT)/test_sock_fields: $(CGROUP_HELPERS) $(TESTING_HELPERS)
-$(OUTPUT)/test_sysctl: $(CGROUP_HELPERS) $(TESTING_HELPERS)
 $(OUTPUT)/test_tag: $(TESTING_HELPERS)
 $(OUTPUT)/test_lirc_mode2_user: $(TESTING_HELPERS)
 $(OUTPUT)/xdping: $(TESTING_HELPERS)
···
 
 int percpu_arr[1] SEC(".data.percpu_arr");
 
+/* at least one extern is included, to ensure that a specific
+ * regression is tested whereby resizing resulted in a use-after-free
+ * bug after type information is invalidated by the resize operation.
+ *
+ * There isn't a particularly good API to test for this specific condition,
+ * but by having externs for the resizing tests it will cover this path.
+ */
+extern int LINUX_KERNEL_VERSION __kconfig;
+long version_sink;
+
 SEC("tp/syscalls/sys_enter_getpid")
 int bss_array_sum(void *ctx)
 {
···
 	for (size_t i = 0; i < bss_array_len; ++i)
 		sum += array[i];
+
+	/* see above; ensure this is not optimized out */
+	version_sink = LINUX_KERNEL_VERSION;
 
 	return 0;
 }
···
 	for (size_t i = 0; i < data_array_len; ++i)
 		sum += my_array[i];
+
+	/* see above; ensure this is not optimized out */
+	version_sink = LINUX_KERNEL_VERSION;
 
 	return 0;
 }
···
         raise KsftSkipEx("socket.SO_INCOMING_CPU was added in Python 3.11")
 
     input_xfrm = cfg.ethnl.rss_get(
-        {'header': {'dev-name': cfg.ifname}}).get('input_xfrm')
+        {'header': {'dev-name': cfg.ifname}}).get('input-xfrm')
 
     # Check for symmetric xor/or-xor
     if not input_xfrm or (input_xfrm != 1 and input_xfrm != 2):
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2025 Intel Corporation
+ */
+#define _GNU_SOURCE
+
+#include <err.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ucontext.h>
+
+#ifdef __x86_64__
+# define REG_IP REG_RIP
+#else
+# define REG_IP REG_EIP
+#endif
+
+static void sethandler(int sig, void (*handler)(int, siginfo_t *, void *), int flags)
+{
+	struct sigaction sa;
+
+	memset(&sa, 0, sizeof(sa));
+	sa.sa_sigaction = handler;
+	sa.sa_flags = SA_SIGINFO | flags;
+	sigemptyset(&sa.sa_mask);
+
+	if (sigaction(sig, &sa, 0))
+		err(1, "sigaction");
+
+	return;
+}
+
+static void sigtrap(int sig, siginfo_t *info, void *ctx_void)
+{
+	ucontext_t *ctx = (ucontext_t *)ctx_void;
+	static unsigned int loop_count_on_same_ip;
+	static unsigned long last_trap_ip;
+
+	if (last_trap_ip == ctx->uc_mcontext.gregs[REG_IP]) {
+		printf("\tTrapped at %016lx\n", last_trap_ip);
+
+		/*
+		 * If the same IP is hit more than 10 times in a row, it is
+		 * _considered_ an infinite loop.
+		 */
+		if (++loop_count_on_same_ip > 10) {
+			printf("[FAIL]\tDetected SIGTRAP infinite loop\n");
+			exit(1);
+		}
+
+		return;
+	}
+
+	loop_count_on_same_ip = 0;
+	last_trap_ip = ctx->uc_mcontext.gregs[REG_IP];
+	printf("\tTrapped at %016lx\n", last_trap_ip);
+}
+
+int main(int argc, char *argv[])
+{
+	sethandler(SIGTRAP, sigtrap, 0);
+
+	/*
+	 * Set the Trap Flag (TF) to single-step the test code, therefore to
+	 * trigger a SIGTRAP signal after each instruction until the TF is
+	 * cleared.
+	 *
+	 * Because the arithmetic flags are not significant here, the TF is
+	 * set by pushing 0x302 onto the stack and then popping it into the
+	 * flags register.
+	 *
+	 * Four instructions in the following asm code are executed with the
+	 * TF set, thus the SIGTRAP handler is expected to run four times.
+	 */
+	printf("[RUN]\tSIGTRAP infinite loop detection\n");
+	asm volatile(
+#ifdef __x86_64__
+		/*
+		 * Avoid clobbering the redzone
+		 *
+		 * Equivalent to "sub $128, %rsp", however -128 can be encoded
+		 * in a single byte immediate while 128 uses 4 bytes.
+		 */
+		"add $-128, %rsp\n\t"
+#endif
+		"push $0x302\n\t"
+		"popf\n\t"
+		"nop\n\t"
+		"nop\n\t"
+		"push $0x202\n\t"
+		"popf\n\t"
+#ifdef __x86_64__
+		"sub $-128, %rsp\n\t"
+#endif
+	);
+
+	printf("[OK]\tNo SIGTRAP infinite loop detected\n");
+	return 0;
+}