···
-What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/cap
+What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/cap
 Date: December 3, 2009
 KernelVersion: 2.6.32
 Contact: dmaengine@vger.kernel.org
 Description: Capabilities the DMA supports.Currently there are DMA_PQ, DMA_PQ_VAL,
 	DMA_XOR,DMA_XOR_VAL,DMA_INTERRUPT.

-What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/ring_active
+What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/ring_active
 Date: December 3, 2009
 KernelVersion: 2.6.32
 Contact: dmaengine@vger.kernel.org
 Description: The number of descriptors active in the ring.

-What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/ring_size
+What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/ring_size
 Date: December 3, 2009
 KernelVersion: 2.6.32
 Contact: dmaengine@vger.kernel.org
 Description: Descriptor ring size, total number of descriptors available.

-What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/version
+What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/version
 Date: December 3, 2009
 KernelVersion: 2.6.32
 Contact: dmaengine@vger.kernel.org
 Description: Version of ioatdma device.

-What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/intr_coalesce
+What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/intr_coalesce
 Date: August 8, 2017
 KernelVersion: 4.14
 Contact: dmaengine@vger.kernel.org
+1-1
Documentation/ABI/testing/sysfs-class-net
···
 	When an interface is under test, it cannot be expected
 	to pass packets as normal.

-What: /sys/clas/net/<iface>/duplex
+What: /sys/class/net/<iface>/duplex
 Date: October 2009
 KernelVersion: 2.6.33
 Contact: netdev@vger.kernel.org
+4
Documentation/Makefile
···
 PDFLATEX = xelatex
 LATEXOPTS = -interaction=batchmode

+ifeq ($(KBUILD_VERBOSE),0)
+SPHINXOPTS += "-q"
+endif
+
 # User-friendly check for sphinx-build
 HAVE_SPHINX := $(shell if which $(SPHINXBUILD) >/dev/null 2>&1; then echo 1; else echo 0; fi)
+1-1
Documentation/admin-guide/LSM/SafeSetID.rst
···
 privileges, such as allowing a user to set up user namespace UID/GID mappings.

 Note on GID policies and setgroups()
-==================
+====================================
 In v5.9 we are adding support for limiting CAP_SETGID privileges as was done
 previously for CAP_SETUID. However, for compatibility with common sandboxing
 related code conventions in userspace, we currently allow arbitrary
+2-2
Documentation/admin-guide/pm/cpuidle.rst
···
 statistics of the given idle state. That information is exposed by the kernel
 via ``sysfs``.

-For each CPU in the system, there is a :file:`/sys/devices/system/cpu<N>/cpuidle/`
+For each CPU in the system, there is a :file:`/sys/devices/system/cpu/cpu<N>/cpuidle/`
 directory in ``sysfs``, where the number ``<N>`` is assigned to the given
 CPU at the initialization time. That directory contains a set of subdirectories
 called :file:`state0`, :file:`state1` and so on, up to the number of idle state
···
 	residency.

 ``below``
-	Total number of times this idle state had been asked for, but cerainly
+	Total number of times this idle state had been asked for, but certainly
 	a deeper idle state would have been a better match for the observed idle
 	duration.
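The per-CPU layout the hunk above corrects can be probed from user space. Below is a minimal sketch (a hypothetical helper, not kernel code) that enumerates the ``state0``, ``state1``, ... subdirectories for one CPU; the sysfs root is a parameter only so the sketch can be exercised anywhere:

```python
from pathlib import Path

# Hypothetical helper: list the cpuidle state directories for one CPU,
# following the /sys/devices/system/cpu/cpu<N>/cpuidle layout described
# in the text above.
def cpuidle_states(cpu, sysfs="/sys"):
    base = Path(sysfs) / f"devices/system/cpu/cpu{cpu}/cpuidle"
    if not base.is_dir():
        return []  # kernel built without cpuidle, or no such CPU
    return sorted(p.name for p in base.iterdir() if p.name.startswith("state"))
```

On a system without cpuidle support the directory is simply absent and the sketch returns an empty list.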
···
     * User Manual

       http://dl.linux-sunxi.org/A64/Allwinner%20A64%20User%20Manual%20v1.0.pdf
+
+  - Allwinner H6
+
+    * Datasheet
+
+      https://linux-sunxi.org/images/5/5c/Allwinner_H6_V200_Datasheet_V1.1.pdf
+
+    * User Manual
+
+      https://linux-sunxi.org/images/4/46/Allwinner_H6_V200_User_Manual_V1.1.pdf
+1-1
Documentation/conf.py
···
 	support for Sphinx v3.0 and above is brand new. Be prepared for
 	possible issues in the generated output.
 	''')
-    if minor > 0 or patch >= 2:
+    if (major > 3) or (minor > 0 or patch >= 2):
 	# Sphinx c function parser is more pedantic with regards to type
 	# checking. Due to that, having macros at c:function cause problems.
 	# Those needed to be scaped by using c_id_attributes[] array
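The hunk above fixes a subtle predicate bug: ``minor > 0 or patch >= 2`` alone is false for a future Sphinx 4.0.0, even though any 4.x release is newer than 3.0.2. A small sketch of the corrected logic (a hypothetical standalone function, not the actual conf.py context):

```python
# Hypothetical helper, not the actual conf.py code: should the pedantic
# C-parser workaround for Sphinx >= 3.0.2 be enabled for this version?
def needs_c_id_attributes(major, minor, patch):
    # The old check "minor > 0 or patch >= 2" misses e.g. 4.0.0 and
    # 4.0.1: both have minor == 0 and patch < 2.
    return (major > 3) or (minor > 0 or patch >= 2)

assert needs_c_id_attributes(3, 0, 2)      # first pedantic release
assert not needs_c_id_attributes(3, 0, 1)  # old parser behaviour
assert needs_c_id_attributes(4, 0, 0)      # caught only by "major > 3"
assert needs_c_id_attributes(3, 1, 0)
```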
+2
Documentation/dev-tools/kasan.rst
···
 pass::

 	ok 28 - kmalloc_double_kzfree
+
 or, if kmalloc failed::

 	# kmalloc_large_oob_right: ASSERTION FAILED at lib/test_kasan.c:163
 	Expected ptr is not null, but is
 	not ok 4 - kmalloc_large_oob_right
+
 or, if a KASAN report was expected, but not found::

 	# kmalloc_double_kzfree: EXPECTATION FAILED at lib/test_kasan.c:629
+1-1
Documentation/dev-tools/kunit/start.rst
···

 	config MISC_EXAMPLE_TEST
 		bool "Test for my example"
-		depends on MISC_EXAMPLE && KUNIT
+		depends on MISC_EXAMPLE && KUNIT=y

 and the following to ``drivers/misc/Makefile``:
+5
Documentation/dev-tools/kunit/usage.rst
···

 ...will run the tests.

+.. note::
+   Note that you should make sure your test depends on ``KUNIT=y`` in Kconfig
+   if the test does not support module build. Otherwise, it will trigger
+   compile errors if ``CONFIG_KUNIT`` is ``m``.
+
 Writing new tests for other architectures
 -----------------------------------------
···
 please refer the following document to know more about the binding rules
 for these system controllers:

-Documentation/devicetree/bindings/arm/hisilicon/hisilicon.txt
+Documentation/devicetree/bindings/arm/hisilicon/hisilicon.yaml

 Required Properties:
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/can/fsl,flexcan.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title:
+  Flexcan CAN controller on Freescale's ARM and PowerPC system-on-a-chip (SOC).
+
+maintainers:
+  - Marc Kleine-Budde <mkl@pengutronix.de>
+
+allOf:
+  - $ref: can-controller.yaml#
+
+properties:
+  compatible:
+    oneOf:
+      - enum:
+          - fsl,imx8qm-flexcan
+          - fsl,imx8mp-flexcan
+          - fsl,imx6q-flexcan
+          - fsl,imx53-flexcan
+          - fsl,imx35-flexcan
+          - fsl,imx28-flexcan
+          - fsl,imx25-flexcan
+          - fsl,p1010-flexcan
+          - fsl,vf610-flexcan
+          - fsl,ls1021ar2-flexcan
+          - fsl,lx2160ar1-flexcan
+      - items:
+          - enum:
+              - fsl,imx7d-flexcan
+              - fsl,imx6ul-flexcan
+              - fsl,imx6sx-flexcan
+          - const: fsl,imx6q-flexcan
+      - items:
+          - enum:
+              - fsl,ls1028ar1-flexcan
+          - const: fsl,lx2160ar1-flexcan
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    maxItems: 2
+
+  clock-names:
+    items:
+      - const: ipg
+      - const: per
+
+  clock-frequency:
+    description: |
+      The oscillator frequency driving the flexcan device, filled in by the
+      boot loader. This property should only be used if the operating system
+      doesn't support the clocks and clock-names properties.
+
+  xceiver-supply:
+    description: Regulator that powers the CAN transceiver.
+
+  big-endian:
+    $ref: /schemas/types.yaml#/definitions/flag
+    description: |
+      This means the registers of the FlexCAN controller are big endian. This
+      is an optional property: if it is not present in the device tree node,
+      the controller is assumed to be little endian; if it is present, the
+      controller is assumed to be big endian.
+
+  fsl,stop-mode:
+    description: |
+      Register bits of stop mode control.
+
+      The format should be as follows:
+      <gpr req_gpr req_bit>
+      gpr is the phandle to general purpose register node.
+      req_gpr is the gpr register offset of CAN stop request.
+      req_bit is the bit offset of CAN stop request.
+    $ref: /schemas/types.yaml#/definitions/phandle-array
+    items:
+      - description: The 'gpr' is the phandle to general purpose register node.
+      - description: The 'req_gpr' is the gpr register offset of CAN stop request.
+        maximum: 0xff
+      - description: The 'req_bit' is the bit offset of CAN stop request.
+        maximum: 0x1f
+
+  fsl,clk-source:
+    description: |
+      Select the clock source to the CAN Protocol Engine (PE). It's SoC
+      implementation dependent. Refer to RM for detailed definition. If this
+      property is not set in device tree node then driver selects clock source 1
+      by default.
+      0: clock source 0 (oscillator clock)
+      1: clock source 1 (peripheral clock)
+    $ref: /schemas/types.yaml#/definitions/uint32
+    default: 1
+    minimum: 0
+    maximum: 1
+
+  wakeup-source:
+    $ref: /schemas/types.yaml#/definitions/flag
+    description:
+      Enable CAN remote wakeup.
+
+required:
+  - compatible
+  - reg
+  - interrupts
+
+additionalProperties: false
+
+examples:
+  - |
+    can@1c000 {
+        compatible = "fsl,p1010-flexcan";
+        reg = <0x1c000 0x1000>;
+        interrupts = <48 0x2>;
+        interrupt-parent = <&mpic>;
+        clock-frequency = <200000000>;
+        fsl,clk-source = <0>;
+    };
+  - |
+    #include <dt-bindings/interrupt-controller/irq.h>
+
+    can@2090000 {
+        compatible = "fsl,imx6q-flexcan";
+        reg = <0x02090000 0x4000>;
+        interrupts = <0 110 IRQ_TYPE_LEVEL_HIGH>;
+        clocks = <&clks 1>, <&clks 2>;
+        clock-names = "ipg", "per";
+        fsl,stop-mode = <&gpr 0x34 28>;
+    };
···
-Flexcan CAN controller on Freescale's ARM and PowerPC system-on-a-chip (SOC).
-
-Required properties:
-
-- compatible : Should be "fsl,<processor>-flexcan"
-
-  where <processor> is imx8qm, imx6q, imx28, imx53, imx35, imx25, p1010,
-  vf610, ls1021ar2, lx2160ar1, ls1028ar1.
-
-  The ls1028ar1 must be followed by lx2160ar1, e.g.
-   - "fsl,ls1028ar1-flexcan", "fsl,lx2160ar1-flexcan"
-
-  An implementation should also claim any of the following compatibles
-  that it is fully backwards compatible with:
-
-  - fsl,p1010-flexcan
-
-- reg : Offset and length of the register set for this device
-- interrupts : Interrupt tuple for this device
-
-Optional properties:
-
-- clock-frequency : The oscillator frequency driving the flexcan device
-
-- xceiver-supply: Regulator that powers the CAN transceiver
-
-- big-endian: This means the registers of FlexCAN controller are big endian.
-  This is optional property.i.e. if this property is not present in
-  device tree node then controller is assumed to be little endian.
-  if this property is present then controller is assumed to be big
-  endian.
-
-- fsl,stop-mode: register bits of stop mode control, the format is
-  <&gpr req_gpr req_bit>.
-  gpr is the phandle to general purpose register node.
-  req_gpr is the gpr register offset of CAN stop request.
-  req_bit is the bit offset of CAN stop request.
-
-- fsl,clk-source: Select the clock source to the CAN Protocol Engine (PE).
-  It's SoC Implementation dependent. Refer to RM for detailed
-  definition. If this property is not set in device tree node
-  then driver selects clock source 1 by default.
-  0: clock source 0 (oscillator clock)
-  1: clock source 1 (peripheral clock)
-
-- wakeup-source: enable CAN remote wakeup
-
-Example:
-
-	can@1c000 {
-		compatible = "fsl,p1010-flexcan";
-		reg = <0x1c000 0x1000>;
-		interrupts = <48 0x2>;
-		interrupt-parent = <&mpic>;
-		clock-frequency = <200000000>; // filled in by bootloader
-		fsl,clk-source = <0>; // select clock source 0 for PE
-	};
···
 integrated 12 bit SAR ADC, accessed using a PMBus interface.

 The driver is a client driver to the core PMBus driver. Please see
-Documentation/hwmon/pmbus for details on PMBus client drivers.
+Documentation/hwmon/pmbus.rst for details on PMBus client drivers.


 Sysfs entries
···
 vendor dual-loop, digital, multi-phase controller MP2975.

 This device:
+
 - Supports up to two power rail.
 - Provides 8 pulse-width modulations (PWMs), and can be configured up
   to 8-phase operation for rail 1 and up to 4-phase operation for rail
···
   10-mV DAC, IMVP9 mode with 5-mV DAC.

 Device supports:
+
 - SVID interface.
 - AVSBus interface.

 Device complaint with:
+
 - PMBus rev 1.3 interface.

 Device supports direct format for reading output current, output voltage,
···
 The below VID modes are supported: VR12, VR13, IMVP9.

 The driver provides the next attributes for the current:
+
 - for current in: input, maximum alarm;
 - for current out input, maximum alarm and highest values;
 - for phase current: input and label.
-attributes.
+  attributes.
+
 The driver exports the following attributes via the 'sysfs' files, where
+
 - 'n' is number of telemetry pages (from 1 to 2);
 - 'k' is number of configured phases (from 1 to 8);
 - indexes 1, 1*n for "iin";
···
 **curr[1-{2n+k}]_label**

 The driver provides the next attributes for the voltage:
+
 - for voltage in: input, high critical threshold, high critical alarm, all only
   from page 0;
 - for voltage out: input, low and high critical thresholds, low and high
   critical alarms, from pages 0 and 1;
+
 The driver exports the following attributes via the 'sysfs' files, where
+
 - 'n' is number of telemetry pages (from 1 to 2);
 - indexes 1 for "iin";
 - indexes n+1, n+2 for "vout";
···
 **in[2-{n+1}]_lcrit_alarm**

 The driver provides the next attributes for the power:
+
 - for power in alarm and input.
 - for power out: highest and input.
+
 The driver exports the following attributes via the 'sysfs' files, where
+
 - 'n' is number of telemetry pages (from 1 to 2);
 - indexes 1 for "pin";
 - indexes n+1, n+2 for "pout";
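The index ranges quoted above ('n' telemetry pages, 'k' phases, attribute names up to ``curr{2n+k}``) can be illustrated with a small sketch. This is a hypothetical helper, not driver code, and the split of the 2n+k indexes into iin/iout/phase groups is one plausible reading of the attribute list:

```python
# Hypothetical sketch: the mp2975 current attributes scale with n
# telemetry pages (1..2) and k configured phases (1..8), giving
# curr1 .. curr{2n+k} in total, as in "curr[1-{2n+k}]_label" above.
def curr_attr_names(n, k):
    return [f"curr{i}_input" for i in range(1, 2 * n + k + 1)]

# With 2 pages and 8 phases: curr1_input .. curr12_input.
assert len(curr_attr_names(2, 8)) == 12
assert curr_attr_names(1, 2)[-1] == "curr4_input"
```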
···
 (4 usages * n STATEs + 1) categories:

 where the 4 usages can be:
+
 - 'ever held in STATE context'
 - 'ever held as readlock in STATE context'
 - 'ever held with STATE enabled'
···
 where the n STATEs are coded in kernel/locking/lockdep_states.h and as of
 now they include:
+
 - hardirq
 - softirq

 where the last 1 category is:
+
 - 'ever used' [ == !unused ]

 When locking rules are violated, these usage bits are presented in the
···
 +--------------+-------------+--------------+
 |              | irq enabled | irq disabled |
 +--------------+-------------+--------------+
-| ever in irq  |      ?      |      -       |
+| ever in irq  |     '?'     |     '-'      |
 +--------------+-------------+--------------+
-| never in irq |      +      |      .       |
+| never in irq |     '+'     |     '.'      |
 +--------------+-------------+--------------+

 The character '-' suggests irq is disabled because if otherwise the
···
 	BD_MUTEX_PARTITION
  };

-mutex_lock_nested(&bdev->bd_contains->bd_mutex, BD_MUTEX_PARTITION);
+  mutex_lock_nested(&bdev->bd_contains->bd_mutex, BD_MUTEX_PARTITION);

 In this case the locking is done on a bdev object that is known to be a
 partition.
···
 ----------------

 The validator tracks a maximum of MAX_LOCKDEP_KEYS number of lock classes.
-Exceeding this number will trigger the following lockdep warning:
+Exceeding this number will trigger the following lockdep warning::

 	(DEBUG_LOCKS_WARN_ON(id >= MAX_LOCKDEP_KEYS))
···

 The difference between recursive readers and non-recursive readers is because:
 recursive readers get blocked only by a write lock *holder*, while non-recursive
-readers could get blocked by a write lock *waiter*. Considering the follow example:
+readers could get blocked by a write lock *waiter*. Considering the following
+example::

 	TASK A:			TASK B:
···

 Block condition matrix, Y means the row blocks the column, and N means otherwise.

-	    | E | r | R |
 	+---+---+---+---+
+	|   | E | r | R |
+	+---+---+---+---+
-	  E | Y | Y | Y |
+	| E | Y | Y | Y |
 	+---+---+---+---+
-	  r | Y | Y | N |
+	| r | Y | Y | N |
 	+---+---+---+---+
-	  R | Y | Y | N |
+	| R | Y | Y | N |
+	+---+---+---+---+

 	(W: writers, r: non-recursive readers, R: recursive readers)


 acquired recursively. Unlike non-recursive read locks, recursive read locks
 only get blocked by current write lock *holders* other than write lock
-*waiters*, for example:
+*waiters*, for example::

 	TASK A:			TASK B:
···
 even true for two non-recursive read locks). A non-recursive lock can block the
 corresponding recursive lock, and vice versa.

-A deadlock case with recursive locks involved is as follow:
+A deadlock case with recursive locks involved is as follows::

 	TASK A:			TASK B:
···
 dependencies, but we can show that 4 types of lock dependencies are enough for
 deadlock detection.

-For each lock dependency:
+For each lock dependency::

 	L1 -> L2
···
 With the above combination for simplification, there are 4 types of dependency edges
 in the lockdep graph:

-1) -(ER)->: exclusive writer to recursive reader dependency, "X -(ER)-> Y" means
+1) -(ER)->:
+   exclusive writer to recursive reader dependency, "X -(ER)-> Y" means
    X -> Y and X is a writer and Y is a recursive reader.

-2) -(EN)->: exclusive writer to non-recursive locker dependency, "X -(EN)-> Y" means
+2) -(EN)->:
+   exclusive writer to non-recursive locker dependency, "X -(EN)-> Y" means
    X -> Y and X is a writer and Y is either a writer or non-recursive reader.

-3) -(SR)->: shared reader to recursive reader dependency, "X -(SR)-> Y" means
+3) -(SR)->:
+   shared reader to recursive reader dependency, "X -(SR)-> Y" means
    X -> Y and X is a reader (recursive or not) and Y is a recursive reader.

-4) -(SN)->: shared reader to non-recursive locker dependency, "X -(SN)-> Y" means
+4) -(SN)->:
+   shared reader to non-recursive locker dependency, "X -(SN)-> Y" means
    X -> Y and X is a reader (recursive or not) and Y is either a writer or
    non-recursive reader.

-Note that given two locks, they may have multiple dependencies between them, for example:
+Note that given two locks, they may have multiple dependencies between them,
+for example::

 	TASK A:
···

 Proof for sufficiency (Lemma 1):

-Let's say we have a strong circle:
+Let's say we have a strong circle::

 	L1 -> L2 ... -> Ln -> L1

-, which means we have dependencies:
+, which means we have dependencies::

 	L1 -> L2
 	L2 -> L3
···
 for a lock held by P1. Let's name the lock Px is waiting as Lx, so since P1 is waiting
 for L1 and holding Ln, so we will have Ln -> L1 in the dependency graph. Similarly,
 we have L1 -> L2, L2 -> L3, ..., Ln-1 -> Ln in the dependency graph, which means we
-have a circle:
+have a circle::

 	Ln -> L1 -> L2 -> ... -> Ln
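The block-condition matrix in the hunks above can be modelled directly. The sketch below (illustrative Python, not kernel code) encodes the table and checks the two properties the text relies on: recursive readers are blocked only by exclusive writers, while a writer acquisition is blocked by every kind of holder:

```python
# Illustrative model of lockdep's block-condition matrix: does a held
# lock of kind `row` block an acquirer of kind `col`?
# E = exclusive writer, r = non-recursive reader, R = recursive reader.
BLOCKS = {
    ('E', 'E'): True, ('E', 'r'): True, ('E', 'R'): True,
    ('r', 'E'): True, ('r', 'r'): True, ('r', 'R'): False,
    ('R', 'E'): True, ('R', 'r'): True, ('R', 'R'): False,
}

# Recursive readers (column R) are blocked only by writers...
assert [row for row in 'ErR' if BLOCKS[(row, 'R')]] == ['E']
# ...while every holder kind blocks an incoming writer (column E).
assert all(BLOCKS[(row, 'E')] for row in 'ErR')
```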
···
       that both the name (as reported by ``fw.app.name``) and version are
       required to uniquely identify the package.
    * - ``fw.app.bundle_id``
+     - running
      - 0xc0000001
      - Unique identifier for the DDP package loaded in the device. Also
        referred to as the DDP Track ID. Can be used to uniquely identify
+60-60
Documentation/networking/j1939.rst
···
 SAE J1939 defines a higher layer protocol on CAN. It implements a more
 sophisticated addressing scheme and extends the maximum packet size above 8
 bytes. Several derived specifications exist, which differ from the original
-J1939 on the application level, like MilCAN A, NMEA2000 and especially
+J1939 on the application level, like MilCAN A, NMEA2000, and especially
 ISO-11783 (ISOBUS). This last one specifies the so-called ETP (Extended
-Transport Protocol) which is has been included in this implementation. This
+Transport Protocol), which has been included in this implementation. This
 results in a maximum packet size of ((2 ^ 24) - 1) * 7 bytes == 111 MiB.

 Specifications used
···
 addressing and transport methods used by J1939.

 * **Addressing:** when a process on an ECU communicates via J1939, it should
-  not necessarily know its source address. Although at least one process per
+  not necessarily know its source address. Although, at least one process per
   ECU should know the source address. Other processes should be able to reuse
   that address. This way, address parameters for different processes
   cooperating for the same ECU, are not duplicated. This way of working is
-  closely related to the UNIX concept where programs do just one thing, and do
+  closely related to the UNIX concept, where programs do just one thing and do
   it well.

 * **Dynamic addressing:** Address Claiming in J1939 is time critical.
-  Furthermore data transport should be handled properly during the address
+  Furthermore, data transport should be handled properly during the address
   negotiation. Putting this functionality in the kernel eliminates it as a
   requirement for _every_ user space process that communicates via J1939. This
   results in a consistent J1939 bus with proper addressing.
···

 The J1939 sockets operate on CAN network devices (see SocketCAN). Any J1939
 user space library operating on CAN raw sockets will still operate properly.
-Since such library does not communicate with the in-kernel implementation, care
+Since such a library does not communicate with the in-kernel implementation, care
 must be taken that these two do not interfere. In practice, this means they
 cannot share ECU addresses. A single ECU (or virtual ECU) address is used by
 the library exclusively, or by the in-kernel system exclusively.
···
 8 bits : PS (PDU Specific)

 In J1939-21 distinction is made between PDU1 format (where PF < 240) and PDU2
-format (where PF >= 240). Furthermore, when using PDU2 format, the PS-field
+format (where PF >= 240). Furthermore, when using the PDU2 format, the PS-field
 contains a so-called Group Extension, which is part of the PGN. When using PDU2
 format, the Group Extension is set in the PS-field.

 On the other hand, when using PDU1 format, the PS-field contains a so-called
 Destination Address, which is _not_ part of the PGN. When communicating a PGN
-from user space to kernel (or visa versa) and PDU2 format is used, the PS-field
+from user space to kernel (or vice versa) and PDU2 format is used, the PS-field
 of the PGN shall be set to zero. The Destination Address shall be set
 elsewhere.
···

 Both static and dynamic addressing methods can be used.

-For static addresses, no extra checks are made by the kernel, and provided
+For static addresses, no extra checks are made by the kernel and provided
 addresses are considered right. This responsibility is for the OEM or system
 integrator.

 For dynamic addressing, so-called Address Claiming, extra support is foreseen
-in the kernel. In J1939 any ECU is known by it's 64-bit NAME. At the moment of
+in the kernel. In J1939 any ECU is known by its 64-bit NAME. At the moment of
 a successful address claim, the kernel keeps track of both NAME and source
 address being claimed. This serves as a base for filter schemes. By default,
-packets with a destination that is not locally, will be rejected.
+packets with a destination that is not locally will be rejected.

 Mixed mode packets (from a static to a dynamic address or vice versa) are
 allowed. The BSD sockets define separate API calls for getting/setting the
···
 ---------

 On CAN, you first need to open a socket for communicating over a CAN network.
-To use J1939, #include <linux/can/j1939.h>. From there, <linux/can.h> will be
+To use J1939, ``#include <linux/can/j1939.h>``. From there, ``<linux/can.h>`` will be
 included too. To open a socket, use:

 .. code-block:: C

     s = socket(PF_CAN, SOCK_DGRAM, CAN_J1939);

-J1939 does use SOCK_DGRAM sockets. In the J1939 specification, connections are
+J1939 does use ``SOCK_DGRAM`` sockets. In the J1939 specification, connections are
 mentioned in the context of transport protocol sessions. These still deliver
-packets to the other end (using several CAN packets). SOCK_STREAM is not
+packets to the other end (using several CAN packets). ``SOCK_STREAM`` is not
 supported.

-After the successful creation of the socket, you would normally use the bind(2)
-and/or connect(2) system call to bind the socket to a CAN interface. After
-binding and/or connecting the socket, you can read(2) and write(2) from/to the
-socket or use send(2), sendto(2), sendmsg(2) and the recv*() counterpart
+After the successful creation of the socket, you would normally use the ``bind(2)``
+and/or ``connect(2)`` system call to bind the socket to a CAN interface. After
+binding and/or connecting the socket, you can ``read(2)`` and ``write(2)`` from/to the
+socket or use ``send(2)``, ``sendto(2)``, ``sendmsg(2)`` and the ``recv*()`` counterpart
 operations on the socket as usual. There are also J1939 specific socket options
 described below.

-In order to send data, a bind(2) must have been successful. bind(2) assigns a
+In order to send data, a ``bind(2)`` must have been successful. ``bind(2)`` assigns a
 local address to a socket.

-Different from CAN is that the payload data is just the data that get send,
-without it's header info. The header info is derived from the sockaddr supplied
-to bind(2), connect(2), sendto(2) and recvfrom(2). A write(2) with size 4 will
+Different from CAN is that the payload data is just the data that gets sent,
+without its header info. The header info is derived from the sockaddr supplied
+to ``bind(2)``, ``connect(2)``, ``sendto(2)`` and ``recvfrom(2)``. A ``write(2)`` with size 4 will
 result in a packet with 4 bytes.

 The sockaddr structure has extensions for use with J1939 as specified below:
···
 	} can_addr;
 }

-can_family & can_ifindex serve the same purpose as for other SocketCAN sockets.
+``can_family`` & ``can_ifindex`` serve the same purpose as for other SocketCAN sockets.

-can_addr.j1939.pgn specifies the PGN (max 0x3ffff). Individual bits are
+``can_addr.j1939.pgn`` specifies the PGN (max 0x3ffff). Individual bits are
 specified above.

-can_addr.j1939.name contains the 64-bit J1939 NAME.
+``can_addr.j1939.name`` contains the 64-bit J1939 NAME.

-can_addr.j1939.addr contains the address.
+``can_addr.j1939.addr`` contains the address.

-The bind(2) system call assigns the local address, i.e. the source address when
-sending packages. If a PGN during bind(2) is set, it's used as a RX filter.
-I.e. only packets with a matching PGN are received. If an ADDR or NAME is set
+The ``bind(2)`` system call assigns the local address, i.e. the source address when
+sending packages. If a PGN during ``bind(2)`` is set, it's used as a RX filter.
+I.e. only packets with a matching PGN are received. If an ADDR or NAME is set
 it is used as a receive filter, too. It will match the destination NAME or ADDR
 of the incoming packet. The NAME filter will work only if appropriate Address
 Claiming for this name was done on the CAN bus and registered/cached by the
 kernel.

-On the other hand connect(2) assigns the remote address, i.e. the destination
-address. The PGN from connect(2) is used as the default PGN when sending
+On the other hand ``connect(2)`` assigns the remote address, i.e. the destination
+address. The PGN from ``connect(2)`` is used as the default PGN when sending
 packets. If ADDR or NAME is set it will be used as the default destination ADDR
-or NAME. Further a set ADDR or NAME during connect(2) is used as a receive
+or NAME. Further a set ADDR or NAME during ``connect(2)`` is used as a receive
 filter. It will match the source NAME or ADDR of the incoming packet.

-Both write(2) and send(2) will send a packet with local address from bind(2) and
-the remote address from connect(2). Use sendto(2) to overwrite the destination
+Both ``write(2)`` and ``send(2)`` will send a packet with local address from ``bind(2)`` and the
+remote address from ``connect(2)``. Use ``sendto(2)`` to overwrite the destination
 address.

-If can_addr.j1939.name is set (!= 0) the NAME is looked up by the kernel and
-the corresponding ADDR is used. If can_addr.j1939.name is not set (== 0),
-can_addr.j1939.addr is used.
+If ``can_addr.j1939.name`` is set (!= 0) the NAME is looked up by the kernel and
+the corresponding ADDR is used. If ``can_addr.j1939.name`` is not set (== 0),
+``can_addr.j1939.addr`` is used.

 When creating a socket, reasonable defaults are set. Some options can be
-modified with setsockopt(2) & getsockopt(2).
+modified with ``setsockopt(2)`` & ``getsockopt(2)``.

 RX path related options:

-- SO_J1939_FILTER - configure array of filters
-- SO_J1939_PROMISC - disable filters set by bind(2) and connect(2)
+- ``SO_J1939_FILTER`` - configure array of filters
+- ``SO_J1939_PROMISC`` - disable filters set by ``bind(2)`` and ``connect(2)``

 By default no broadcast packets can be send or received. To enable sending or
-receiving broadcast packets use the socket option SO_BROADCAST:
+receiving broadcast packets use the socket option ``SO_BROADCAST``:

 .. code-block:: C
···
 	+---------------------------+

 TX path related options:
-SO_J1939_SEND_PRIO - change default send priority for the socket
+``SO_J1939_SEND_PRIO`` - change default send priority for the socket

 Message Flags during send() and Related System Calls
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-send(2), sendto(2) and sendmsg(2) take a 'flags' argument. Currently
+``send(2)``, ``sendto(2)`` and ``sendmsg(2)`` take a 'flags' argument. Currently
 supported flags are:

-* MSG_DONTWAIT, i.e. non-blocking operation.
+* ``MSG_DONTWAIT``, i.e. non-blocking operation.

 recvmsg(2)
 ^^^^^^^^^^

-In most cases recvmsg(2) is needed if you want to extract more information than
-recvfrom(2) can provide. For example package priority and timestamp. The
+In most cases ``recvmsg(2)`` is needed if you want to extract more information than
+``recvfrom(2)`` can provide. For example package priority and timestamp. The
 Destination Address, name and packet priority (if applicable) are attached to
-the msghdr in the recvmsg(2) call. They can be extracted using cmsg(3) macros,
-with cmsg_level == SOL_J1939 && cmsg_type == SCM_J1939_DEST_ADDR,
-SCM_J1939_DEST_NAME or SCM_J1939_PRIO. The returned data is a uint8_t for
-priority and dst_addr, and uint64_t for dst_name.
+the msghdr in the ``recvmsg(2)`` call. They can be extracted using ``cmsg(3)`` macros,
+with ``cmsg_level == SOL_J1939 && cmsg_type == SCM_J1939_DEST_ADDR``,
+``SCM_J1939_DEST_NAME`` or ``SCM_J1939_PRIO``. The returned data is a ``uint8_t`` for
+``priority`` and ``dst_addr``, and ``uint64_t`` for ``dst_name``.

 .. code-block:: C
···

 Distinction has to be made between using the claimed address and doing an
 address claim. To use an already claimed address, one has to fill in the
-j1939.name member and provide it to bind(2). If the name had claimed an address
+``j1939.name`` member and provide it to ``bind(2)``. If the name had claimed an address
 earlier, all further messages being sent will use that address. And the
-j1939.addr member will be ignored.
+``j1939.addr`` member will be ignored.

 An exception on this is PGN 0x0ee00. This is the "Address Claim/Cannot Claim
-Address" message and the kernel will use the j1939.addr member for that PGN if
+Address" message and the kernel will use the ``j1939.addr`` member for that PGN if
 necessary.

 To claim an address following code example can be used:
···

 If another ECU claims the address, the kernel will mark the NAME-SA expired.
 No socket bound to the NAME can send packets (other than address claims). To
-claim another address, some socket bound to NAME, must bind(2) again, but with
-only j1939.addr changed to the new SA, and must then send a valid address claim
+claim another address, some socket bound to NAME, must ``bind(2)`` again, but with
+only ``j1939.addr`` changed to the new SA, and must then send a valid address claim
 packet. This restarts the state machine in the kernel (and any other
 participant on the bus) for this NAME.

-can-utils also include the jacd tool, so it can be used as code example or as
+``can-utils`` also includes the ``j1939acd`` tool, so it can be used as code example or as
 default Address Claiming daemon.

 Send Examples
···

 	bind(sock, (struct sockaddr *)&baddr, sizeof(baddr));

-Now, the socket 'sock' is bound to the SA 0x20. Since no connect(2) was called,
-at this point we can use only sendto(2) or sendmsg(2).
+Now, the socket 'sock' is bound to the SA 0x20. Since no ``connect(2)`` was called,
+at this point we can use only ``sendto(2)`` or ``sendmsg(2)``.

 Send:
···
 	struct sockaddr_can saddr = {
 		.can_family = AF_CAN,
 		.can_addr.j1939 = {
 			.name = J1939_NO_NAME;
-			.pgn = 0x30,
-			.addr = 0x12300,
+			.addr = 0x30,
+			.pgn = 0x12300,
 		},
 	};
+1 -2
Documentation/networking/statistics.rst

···
 translated to netlink attributes when dumped. Drivers must not overwrite
 the statistics they don't report with 0.

-.. kernel-doc:: include/linux/ethtool.h
-   :identifiers: ethtool_pause_stats
+- ethtool_pause_stats()
+14 -6
Documentation/sphinx/automarkup.py

···
 from itertools import chain

 #
+# Python 2 lacks re.ASCII...
+#
+try:
+    ascii_p3 = re.ASCII
+except AttributeError:
+    ascii_p3 = 0
+
+#
 # Regex nastiness.  Of course.
 # Try to identify "function()" that's not already marked up some
 # other way.  Sphinx doesn't like a lot of stuff right after a
 # :c:func: block (i.e. ":c:func:`mmap()`s" flakes out), so the last
 # bit tries to restrict matches to things that won't create trouble.
 #
-RE_function = re.compile(r'\b(([a-zA-Z_]\w+)\(\))', flags=re.ASCII)
+RE_function = re.compile(r'\b(([a-zA-Z_]\w+)\(\))', flags=ascii_p3)

 #
 # Sphinx 2 uses the same :c:type role for struct, union, enum and typedef
 #
 RE_generic_type = re.compile(r'\b(struct|union|enum|typedef)\s+([a-zA-Z_]\w+)',
-                             flags=re.ASCII)
+                             flags=ascii_p3)

 #
 # Sphinx 3 uses a different C role for each one of struct, union, enum and
 # typedef
 #
-RE_struct = re.compile(r'\b(struct)\s+([a-zA-Z_]\w+)', flags=re.ASCII)
-RE_union = re.compile(r'\b(union)\s+([a-zA-Z_]\w+)', flags=re.ASCII)
-RE_enum = re.compile(r'\b(enum)\s+([a-zA-Z_]\w+)', flags=re.ASCII)
-RE_typedef = re.compile(r'\b(typedef)\s+([a-zA-Z_]\w+)', flags=re.ASCII)
+RE_struct = re.compile(r'\b(struct)\s+([a-zA-Z_]\w+)', flags=ascii_p3)
+RE_union = re.compile(r'\b(union)\s+([a-zA-Z_]\w+)', flags=ascii_p3)
+RE_enum = re.compile(r'\b(enum)\s+([a-zA-Z_]\w+)', flags=ascii_p3)
+RE_typedef = re.compile(r'\b(typedef)\s+([a-zA-Z_]\w+)', flags=ascii_p3)

 #
 # Detects a reference to a documentation page of the form Documentation/... with
+1
Documentation/userspace-api/index.rst

···
    spec_ctrl
    accelerators/ocxl
    ioctl/index
+   iommu
    media/index

 .. only:: subproject and html
···
 	sr	r5, [ARC_REG_LPB_CTRL]
 1:
 #endif /* CONFIG_ARC_LPB_DISABLE */
-#endif
+
+	/* On HSDK, CCMs need to remapped super early */
+#ifdef CONFIG_ARC_SOC_HSDK
+	mov	r6, 0x60000000
+	lr	r5, [ARC_REG_ICCM_BUILD]
+	breq	r5, 0, 1f
+	sr	r6, [ARC_REG_AUX_ICCM]
+1:
+	lr	r5, [ARC_REG_DCCM_BUILD]
+	breq	r5, 0, 2f
+	sr	r6, [ARC_REG_AUX_DCCM]
+2:
+#endif /* CONFIG_ARC_SOC_HSDK */
+
+#endif /* CONFIG_ISA_ARCV2 */
+
 	; Config DSP_CTRL properly, so kernel may use integer multiply,
 	; multiply-accumulate, and divide operations
 	DSP_EARLY_INIT
+6 -1
arch/arc/kernel/stacktrace.c

···
 		      int (*consumer_fn) (unsigned int, void *), void *arg)
 {
 #ifdef CONFIG_ARC_DW2_UNWIND
-	int ret = 0;
+	int ret = 0, cnt = 0;
 	unsigned int address;
 	struct unwind_frame_info frame_info;

···
 			break;

 		frame_info.regs.r63 = frame_info.regs.r31;
+
+		if (cnt++ > 128) {
+			printk("unwinder looping too long, aborting !\n");
+			return 0;
+		}
 	}

 	return address;		/* return the last address it saw */
-17
arch/arc/plat-hsdk/platform.c

···

 #define ARC_CCM_UNUSED_ADDR	0x60000000

-static void __init hsdk_init_per_cpu(unsigned int cpu)
-{
-	/*
-	 * By default ICCM is mapped to 0x7z while this area is used for
-	 * kernel virtual mappings, so move it to currently unused area.
-	 */
-	if (cpuinfo_arc700[cpu].iccm.sz)
-		write_aux_reg(ARC_REG_AUX_ICCM, ARC_CCM_UNUSED_ADDR);
-
-	/*
-	 * By default DCCM is mapped to 0x8z while this area is used by kernel,
-	 * so move it to currently unused area.
-	 */
-	if (cpuinfo_arc700[cpu].dccm.sz)
-		write_aux_reg(ARC_REG_AUX_DCCM, ARC_CCM_UNUSED_ADDR);
-}

 #define ARC_PERIPHERAL_BASE	0xf0000000
 #define CREG_BASE		(ARC_PERIPHERAL_BASE + 0x1000)
···
 MACHINE_START(SIMULATION, "hsdk")
 	.dt_compat	= hsdk_compat,
 	.init_early	= hsdk_init_early,
-	.init_per_cpu	= hsdk_init_per_cpu,
 MACHINE_END
+2 -2
arch/arm/mm/init.c

···
 	/* set highmem page free */
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
-		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long start = PFN_UP(range_start);
+		unsigned long end = PFN_DOWN(range_end);

 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
+1 -1
arch/arm64/Kconfig

···
 config NODES_SHIFT
 	int "Maximum NUMA Nodes (as a power of 2)"
 	range 1 10
-	default "2"
+	default "4"
 	depends on NEED_MULTIPLE_NODES
 	help
 	  Specify the maximum number of NUMA Nodes available on the target
+2
arch/arm64/include/asm/brk-imm.h

···
  * #imm16 values used for BRK instruction generation
  * 0x004: for installing kprobes
  * 0x005: for installing uprobes
+ * 0x006: for kprobe software single-step
  * Allowed values for kgdb are 0x400 - 0x7ff
  * 0x100: for triggering a fault on purpose (reserved)
  * 0x400: for dynamic BRK instruction
···
  */
 #define KPROBES_BRK_IMM			0x004
 #define UPROBES_BRK_IMM			0x005
+#define KPROBES_BRK_SS_IMM		0x006
 #define FAULT_BRK_IMM			0x100
 #define KGDB_DYN_DBG_BRK_IMM		0x400
 #define KGDB_COMPILED_DBG_BRK_IMM	0x401
···
 #include <linux/percpu.h>

 #define __ARCH_WANT_KPROBES_INSN_SLOT
-#define MAX_INSN_SIZE			1
+#define MAX_INSN_SIZE			2

 #define flush_insn_slot(p)		do { } while (0)
 #define kretprobe_blacklist_size	0
+32 -11
arch/arm64/kernel/kexec_image.c

···
 	u64 flags, value;
 	bool be_image, be_kernel;
 	struct kexec_buf kbuf;
-	unsigned long text_offset;
+	unsigned long text_offset, kernel_segment_number;
 	struct kexec_segment *kernel_segment;
 	int ret;

···
 	/* Adjust kernel segment with TEXT_OFFSET */
 	kbuf.memsz += text_offset;

-	ret = kexec_add_buffer(&kbuf);
-	if (ret)
-		return ERR_PTR(ret);
+	kernel_segment_number = image->nr_segments;

-	kernel_segment = &image->segment[image->nr_segments - 1];
+	/*
+	 * The location of the kernel segment may make it impossible to satisfy
+	 * the other segment requirements, so we try repeatedly to find a
+	 * location that will work.
+	 */
+	while ((ret = kexec_add_buffer(&kbuf)) == 0) {
+		/* Try to load additional data */
+		kernel_segment = &image->segment[kernel_segment_number];
+		ret = load_other_segments(image, kernel_segment->mem,
+					  kernel_segment->memsz, initrd,
+					  initrd_len, cmdline);
+		if (!ret)
+			break;
+
+		/*
+		 * We couldn't find space for the other segments; erase the
+		 * kernel segment and try the next available hole.
+		 */
+		image->nr_segments -= 1;
+		kbuf.buf_min = kernel_segment->mem + kernel_segment->memsz;
+		kbuf.mem = KEXEC_BUF_MEM_UNKNOWN;
+	}
+
+	if (ret) {
+		pr_err("Could not find any suitable kernel location!");
+		return ERR_PTR(ret);
+	}
+
+	kernel_segment = &image->segment[kernel_segment_number];
 	kernel_segment->mem += text_offset;
 	kernel_segment->memsz -= text_offset;
 	image->start = kernel_segment->mem;
···
 				kernel_segment->mem, kbuf.bufsz,
 				kernel_segment->memsz);

-	/* Load additional data */
-	ret = load_other_segments(image,
-				kernel_segment->mem, kernel_segment->memsz,
-				initrd, initrd_len, cmdline);
-
-	return ERR_PTR(ret);
+	return 0;
 }

 #ifdef CONFIG_KEXEC_IMAGE_VERIFY_SIG
+8 -1
arch/arm64/kernel/machine_kexec_file.c

···
 	return ret;
 }

+/*
+ * Tries to add the initrd and DTB to the image. If it is not possible to find
+ * valid locations, this function will undo changes to the image and return non
+ * zero.
+ */
 int load_other_segments(struct kimage *image,
 			unsigned long kernel_load_addr,
 			unsigned long kernel_size,
···
 {
 	struct kexec_buf kbuf;
 	void *headers, *dtb = NULL;
-	unsigned long headers_sz, initrd_load_addr = 0, dtb_len;
+	unsigned long headers_sz, initrd_load_addr = 0, dtb_len,
+		      orig_segments = image->nr_segments;
 	int ret = 0;

 	kbuf.image = image;
···
 	return 0;

 out_err:
+	image->nr_segments = orig_segments;
 	vfree(dtb);
 	return ret;
 }
+24 -47
arch/arm64/kernel/probes/kprobes.c

···
 static void __kprobes
 post_kprobe_handler(struct kprobe_ctlblk *, struct pt_regs *);

-static int __kprobes patch_text(kprobe_opcode_t *addr, u32 opcode)
-{
-	void *addrs[1];
-	u32 insns[1];
-
-	addrs[0] = addr;
-	insns[0] = opcode;
-
-	return aarch64_insn_patch_text(addrs, insns, 1);
-}
-
 static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
 {
-	/* prepare insn slot */
-	patch_text(p->ainsn.api.insn, p->opcode);
+	kprobe_opcode_t *addr = p->ainsn.api.insn;
+	void *addrs[] = {addr, addr + 1};
+	u32 insns[] = {p->opcode, BRK64_OPCODE_KPROBES_SS};

-	flush_icache_range((uintptr_t) (p->ainsn.api.insn),
-			   (uintptr_t) (p->ainsn.api.insn) +
-			   MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+	/* prepare insn slot */
+	aarch64_insn_patch_text(addrs, insns, 2);
+
+	flush_icache_range((uintptr_t)addr, (uintptr_t)(addr + MAX_INSN_SIZE));

 	/*
 	 * Needs restoring of return address after stepping xol.
···
 /* arm kprobe: install breakpoint in text */
 void __kprobes arch_arm_kprobe(struct kprobe *p)
 {
-	patch_text(p->addr, BRK64_OPCODE_KPROBES);
+	void *addr = p->addr;
+	u32 insn = BRK64_OPCODE_KPROBES;
+
+	aarch64_insn_patch_text(&addr, &insn, 1);
 }

 /* disarm kprobe: remove breakpoint from text */
 void __kprobes arch_disarm_kprobe(struct kprobe *p)
 {
-	patch_text(p->addr, p->opcode);
+	void *addr = p->addr;
+
+	aarch64_insn_patch_text(&addr, &p->opcode, 1);
 }

 void __kprobes arch_remove_kprobe(struct kprobe *p)
···
 }

 /*
- * Interrupts need to be disabled before single-step mode is set, and not
- * reenabled until after single-step mode ends.
- * Without disabling interrupt on local CPU, there is a chance of
- * interrupt occurrence in the period of exception return and start of
- * out-of-line single-step, that result in wrongly single stepping
- * into the interrupt handler.
+ * Mask all of DAIF while executing the instruction out-of-line, to keep things
+ * simple and avoid nesting exceptions. Interrupts do have to be disabled since
+ * the kprobe state is per-CPU and doesn't get migrated.
  */
 static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
 						struct pt_regs *regs)
 {
 	kcb->saved_irqflag = regs->pstate & DAIF_MASK;
-	regs->pstate |= PSR_I_BIT;
-	/* Unmask PSTATE.D for enabling software step exceptions. */
-	regs->pstate &= ~PSR_D_BIT;
+	regs->pstate |= DAIF_MASK;
 }

 static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
···
 		slot = (unsigned long)p->ainsn.api.insn;

 		set_ss_context(kcb, slot);	/* mark pending ss */
-
-		/* IRQs and single stepping do not mix well. */
 		kprobes_save_local_irqflag(kcb, regs);
-		kernel_enable_single_step(regs);
 		instruction_pointer_set(regs, slot);
 	} else {
 		/* insn simulation */
···
 	}
 	/* call post handler */
 	kcb->kprobe_status = KPROBE_HIT_SSDONE;
-	if (cur->post_handler)	{
-		/* post_handler can hit breakpoint and single step
-		 * again, so we enable D-flag for recursive exception.
-		 */
+	if (cur->post_handler)
 		cur->post_handler(cur, regs, 0);
-	}

 	reset_current_kprobe();
 }
···
 		instruction_pointer_set(regs, (unsigned long) cur->addr);
 		if (!instruction_pointer(regs))
 			BUG();
-
-		kernel_disable_single_step();

 		if (kcb->kprobe_status == KPROBE_REENTER)
 			restore_previous_kprobe(kcb);
···
 	 * pre-handler and it returned non-zero, it will
 	 * modify the execution path and no need to single
 	 * stepping. Let's just reset current kprobe and exit.
-	 *
-	 * pre_handler can hit a breakpoint and can step thru
-	 * before return, keep PSTATE D-flag enabled until
-	 * pre_handler return back.
 	 */
 	if (!p->pre_handler || !p->pre_handler(p, regs)) {
 		setup_singlestep(p, regs, kcb, 0);
···
 }

 static int __kprobes
-kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr)
+kprobe_breakpoint_ss_handler(struct pt_regs *regs, unsigned int esr)
 {
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
 	int retval;
···

 	if (retval == DBG_HOOK_HANDLED) {
 		kprobes_restore_local_irqflag(kcb, regs);
-		kernel_disable_single_step();
-
 		post_kprobe_handler(kcb, regs);
 	}

 	return retval;
 }

-static struct step_hook kprobes_step_hook = {
-	.fn = kprobe_single_step_handler,
+static struct break_hook kprobes_break_ss_hook = {
+	.imm = KPROBES_BRK_SS_IMM,
+	.fn = kprobe_breakpoint_ss_handler,
 };

 static int __kprobes
···
 int __init arch_init_kprobes(void)
 {
 	register_kernel_break_hook(&kprobes_break_hook);
-	register_kernel_step_hook(&kprobes_step_hook);
+	register_kernel_break_hook(&kprobes_break_ss_hook);

 	return 0;
 }
+1 -1
arch/powerpc/include/asm/nohash/32/kup-8xx.h

···
 static inline bool
 bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 {
-	return WARN(!((regs->kuap ^ MD_APG_KUAP) & 0xf0000000),
+	return WARN(!((regs->kuap ^ MD_APG_KUAP) & 0xff000000),
 		    "Bug: fault blocked by AP register !");
 }
+14 -31
arch/powerpc/include/asm/nohash/32/mmu-8xx.h

···
  * respectively NA for All or X for Supervisor and no access for User.
  * Then we use the APG to say whether accesses are according to Page rules or
  * "all Supervisor" rules (Access to all)
- * Therefore, we define 2 APG groups. lsb is _PMD_USER
- * 0 => Kernel => 01 (all accesses performed according to page definition)
- * 1 => User => 00 (all accesses performed as supervisor iaw page definition)
- * 2-15 => Not Used
+ * _PAGE_ACCESSED is also managed via APG. When _PAGE_ACCESSED is not set, say
+ * "all User" rules, that will lead to NA for all.
+ * Therefore, we define 4 APG groups. lsb is _PAGE_ACCESSED
+ * 0 => Kernel => 11 (all accesses performed according as user iaw page definition)
+ * 1 => Kernel+Accessed => 01 (all accesses performed according to page definition)
+ * 2 => User => 11 (all accesses performed according as user iaw page definition)
+ * 3 => User+Accessed => 00 (all accesses performed as supervisor iaw page definition) for INIT
+ *                   => 10 (all accesses performed according to swaped page definition) for KUEP
+ * 4-15 => Not Used
  */
-#define MI_APG_INIT	0x40000000
-
-/*
- * 0 => Kernel => 01 (all accesses performed according to page definition)
- * 1 => User => 10 (all accesses performed according to swaped page definition)
- * 2-15 => Not Used
- */
-#define MI_APG_KUEP	0x60000000
+#define MI_APG_INIT	0xdc000000
+#define MI_APG_KUEP	0xde000000

 /* The effective page number register.  When read, contains the information
  * about the last instruction TLB miss.  When MI_RPN is written, bits in
···
 #define MD_Ks		0x80000000	/* Should not be set */
 #define MD_Kp		0x40000000	/* Should always be set */

-/*
- * All pages' PP data bits are set to either 000 or 011 or 001, which means
- * respectively RW for Supervisor and no access for User, or RO for
- * Supervisor and no access for user and NA for ALL.
- * Then we use the APG to say whether accesses are according to Page rules or
- * "all Supervisor" rules (Access to all)
- * Therefore, we define 2 APG groups. lsb is _PMD_USER
- * 0 => Kernel => 01 (all accesses performed according to page definition)
- * 1 => User => 00 (all accesses performed as supervisor iaw page definition)
- * 2-15 => Not Used
- */
-#define MD_APG_INIT	0x40000000
-
-/*
- * 0 => No user => 01 (all accesses performed according to page definition)
- * 1 => User => 10 (all accesses performed according to swaped page definition)
- * 2-15 => Not Used
- */
-#define MD_APG_KUAP	0x60000000
+/* See explanation above at the definition of MI_APG_INIT */
+#define MD_APG_INIT	0xdc000000
+#define MD_APG_KUAP	0xde000000

 /* The effective page number register.  When read, contains the information
  * about the last instruction TLB miss.  When MD_RPN is written, bits in
+5 -4
arch/powerpc/include/asm/nohash/32/pte-8xx.h

···
  * into the TLB.
  */
 #define _PAGE_GUARDED	0x0010	/* Copied to L1 G entry in DTLB */
-#define _PAGE_SPECIAL	0x0020	/* SW entry */
+#define _PAGE_ACCESSED	0x0020	/* Copied to L1 APG 1 entry in I/DTLB */
 #define _PAGE_EXEC	0x0040	/* Copied to PP (bit 21) in ITLB */
-#define _PAGE_ACCESSED	0x0080	/* software: page referenced */
+#define _PAGE_SPECIAL	0x0080	/* SW entry */

 #define _PAGE_NA	0x0200	/* Supervisor NA, User no access */
 #define _PAGE_RO	0x0600	/* Supervisor RO, User no access */
···

 #define _PMD_PRESENT	0x0001
 #define _PMD_PRESENT_MASK	_PMD_PRESENT
-#define _PMD_BAD	0x0fd0
+#define _PMD_BAD	0x0f90
 #define _PMD_PAGE_MASK	0x000c
 #define _PMD_PAGE_8M	0x000c
 #define _PMD_PAGE_512K	0x0004
-#define _PMD_USER	0x0020	/* APG 1 */
+#define _PMD_ACCESSED	0x0020	/* APG 1 */
+#define _PMD_USER	0x0040	/* APG 2 */

 #define _PTE_NONE_MASK	0
···

 InstructionTLBMiss:
 	mtspr	SPRN_SPRG_SCRATCH0, r10
-#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP) || defined(CONFIG_HUGETLBFS)
 	mtspr	SPRN_SPRG_SCRATCH1, r11
-#endif

 	/* If we are faulting a kernel address, we have to use the
 	 * kernel page tables.
···
 3:
 	mtcr	r11
 #endif
-#if defined(CONFIG_HUGETLBFS) || !defined(CONFIG_PIN_TLB_TEXT)
 	lwz	r11, (swapper_pg_dir-PAGE_OFFSET)@l(r10)	/* Get level 1 entry */
 	mtspr	SPRN_MD_TWC, r11
-#else
-	lwz	r10, (swapper_pg_dir-PAGE_OFFSET)@l(r10)	/* Get level 1 entry */
-	mtspr	SPRN_MI_TWC, r10	/* Set segment attributes */
-	mtspr	SPRN_MD_TWC, r10
-#endif
 	mfspr	r10, SPRN_MD_TWC
 	lwz	r10, 0(r10)	/* Get the pte */
-#if defined(CONFIG_HUGETLBFS) || !defined(CONFIG_PIN_TLB_TEXT)
+	rlwimi	r11, r10, 0, _PAGE_GUARDED | _PAGE_ACCESSED
 	rlwimi	r11, r10, 32 - 9, _PMD_PAGE_512K
 	mtspr	SPRN_MI_TWC, r11
-#endif
-#ifdef CONFIG_SWAP
-	rlwinm	r11, r10, 32-5, _PAGE_PRESENT
-	and	r11, r11, r10
-	rlwimi	r10, r11, 0, _PAGE_PRESENT
-#endif
 	/* The Linux PTE won't go exactly into the MMU TLB.
 	 * Software indicator bits 20 and 23 must be clear.
 	 * Software indicator bits 22, 24, 25, 26, and 27 must be
···

 	/* Restore registers */
 0:	mfspr	r10, SPRN_SPRG_SCRATCH0
-#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP) || defined(CONFIG_HUGETLBFS)
 	mfspr	r11, SPRN_SPRG_SCRATCH1
-#endif
 	rfi
 	patch_site	0b, patch__itlbmiss_exit_1
···
 	addi	r10, r10, 1
 	stw	r10, (itlb_miss_counter - PAGE_OFFSET)@l(0)
 	mfspr	r10, SPRN_SPRG_SCRATCH0
-#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP)
 	mfspr	r11, SPRN_SPRG_SCRATCH1
-#endif
 	rfi
 #endif
···
 	mfspr	r10, SPRN_MD_TWC
 	lwz	r10, 0(r10)	/* Get the pte */

-	/* Insert the Guarded flag into the TWC from the Linux PTE.
+	/* Insert Guarded and Accessed flags into the TWC from the Linux PTE.
 	 * It is bit 27 of both the Linux PTE and the TWC (at least
 	 * I got that right :-).  It will be better when we can put
 	 * this into the Linux pgd/pmd and load it in the operation
 	 * above.
 	 */
-	rlwimi	r11, r10, 0, _PAGE_GUARDED
+	rlwimi	r11, r10, 0, _PAGE_GUARDED | _PAGE_ACCESSED
 	rlwimi	r11, r10, 32 - 9, _PMD_PAGE_512K
 	mtspr	SPRN_MD_TWC, r11

-	/* Both _PAGE_ACCESSED and _PAGE_PRESENT has to be set.
-	 * We also need to know if the insn is a load/store, so:
-	 * Clear _PAGE_PRESENT and load that which will
-	 * trap into DTLB Error with store bit set accordinly.
-	 */
-	/* PRESENT=0x1, ACCESSED=0x20
-	 * r11 = ((r10 & PRESENT) & ((r10 & ACCESSED) >> 5));
-	 * r10 = (r10 & ~PRESENT) | r11;
-	 */
-#ifdef CONFIG_SWAP
-	rlwinm	r11, r10, 32-5, _PAGE_PRESENT
-	and	r11, r11, r10
-	rlwimi	r10, r11, 0, _PAGE_PRESENT
-#endif
 	/* The Linux PTE won't go exactly into the MMU TLB.
 	 * Software indicator bits 24, 25, 26, and 27 must be
 	 * set.  All other Linux PTE bits control the behavior
···
 	li	r9, 4				/* up to 4 pages of 8M */
 	mtctr	r9
 	lis	r9, KERNELBASE@h		/* Create vaddr for TLB */
-	li	r10, MI_PS8MEG | MI_SVALID	/* Set 8M byte page */
+	li	r10, MI_PS8MEG | _PMD_ACCESSED | MI_SVALID
 	li	r11, MI_BOOTINIT		/* Create RPN for address 0 */
 1:
 	mtspr	SPRN_MI_CTR, r8	/* Set instruction MMU control */
···
 #ifdef CONFIG_PIN_TLB_TEXT
 	LOAD_REG_IMMEDIATE(r5, 28 << 8)
 	LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET)
-	LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG)
+	LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG | _PMD_ACCESSED)
 	LOAD_REG_IMMEDIATE(r8, 0xf0 | _PAGE_RO | _PAGE_SPS | _PAGE_SH | _PAGE_PRESENT)
 	LOAD_REG_ADDR(r9, _sinittext)
 	li	r0, 4
···
 	LOAD_REG_IMMEDIATE(r5, 28 << 8 | MD_TWAM)
 #ifdef CONFIG_PIN_TLB_DATA
 	LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET)
-	LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG)
+	LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG | _PMD_ACCESSED)
 #ifdef CONFIG_PIN_TLB_IMMR
 	li	r0, 3
 #else
···
 #endif
 #ifdef CONFIG_PIN_TLB_IMMR
 	LOAD_REG_IMMEDIATE(r0, VIRT_IMMR_BASE | MD_EVALID)
-	LOAD_REG_IMMEDIATE(r7, MD_SVALID | MD_PS512K | MD_GUARDED)
+	LOAD_REG_IMMEDIATE(r7, MD_SVALID | MD_PS512K | MD_GUARDED | _PMD_ACCESSED)
 	mfspr	r8, SPRN_IMMR
 	rlwinm	r8, r8, 0, 0xfff80000
 	ori	r8, r8, 0xf0 | _PAGE_DIRTY | _PAGE_SPS | _PAGE_SH | \
-12
arch/powerpc/kernel/head_book3s_32.S

···
 	cmplw	0,r1,r3
 #endif
 	mfspr	r2, SPRN_SPRG_PGDIR
-#ifdef CONFIG_SWAP
 	li	r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
-#else
-	li	r1,_PAGE_PRESENT | _PAGE_EXEC
-#endif
 #if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC)
 	bgt-	112f
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
···
 	lis	r1, TASK_SIZE@h		/* check if kernel address */
 	cmplw	0,r1,r3
 	mfspr	r2, SPRN_SPRG_PGDIR
-#ifdef CONFIG_SWAP
 	li	r1, _PAGE_PRESENT | _PAGE_ACCESSED
-#else
-	li	r1, _PAGE_PRESENT
-#endif
 	bgt-	112f
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
···
 	lis	r1, TASK_SIZE@h		/* check if kernel address */
 	cmplw	0,r1,r3
 	mfspr	r2, SPRN_SPRG_PGDIR
-#ifdef CONFIG_SWAP
 	li	r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED
-#else
-	li	r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT
-#endif
 	bgt-	112f
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
+2 -1
arch/powerpc/kernel/smp.c

···
 /* Activate a secondary processor. */
 void start_secondary(void *unused)
 {
-	unsigned int cpu = smp_processor_id();
+	unsigned int cpu = raw_smp_processor_id();

 	mmgrab(&init_mm);
 	current->active_mm = &init_mm;

 	smp_store_cpu_info(cpu);
 	set_dec(tb_ticks_per_jiffy);
+	rcu_cpu_starting(cpu);
 	preempt_disable();
 	cpu_callin_map[cpu] = 1;
+1 -1
arch/riscv/include/asm/uaccess.h

···
 do {									\
 	long __kr_err;							\
 									\
-	__put_user_nocheck(*((type *)(dst)), (type *)(src), __kr_err);	\
+	__put_user_nocheck(*((type *)(src)), (type *)(dst), __kr_err);	\
 	if (unlikely(__kr_err))						\
 		goto err_label;						\
 } while (0)
···
 SYSCFLAGS_vdso.so.dbg = $(c_flags)
 $(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso) FORCE
 	$(call if_changed,vdsold)
+SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
+	-Wl,--build-id -Wl,--hash-style=both

 # We also create a special relocatable object that should mirror the symbol
 # table and layout of the linked DSO. With ld --just-symbols we can then
 # refer to these symbols in the kernel code rather than hand-coded addresses.
-
-SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
-	-Wl,--build-id=sha1 -Wl,--hash-style=both
-$(obj)/vdso-dummy.o: $(src)/vdso.lds $(obj)/rt_sigreturn.o FORCE
-	$(call if_changed,vdsold)
-
-LDFLAGS_vdso-syms.o := -r --just-symbols
-$(obj)/vdso-syms.o: $(obj)/vdso-dummy.o FORCE
-	$(call if_changed,ld)
+$(obj)/vdso-syms.S: $(obj)/vdso.so FORCE
+	$(call if_changed,so2s)

 # strip rule for the .so file
 $(obj)/%.so: OBJCOPYFLAGS := -S
···
 	$(CROSS_COMPILE)objcopy \
 		$(patsubst %, -G __vdso_%, $(vdso-syms)) $@.tmp $@ && \
 	rm $@.tmp
+
+# Extracts symbol offsets from the VDSO, converting them into an assembly file
+# that contains the same symbols at the same offsets.
+quiet_cmd_so2s = SO2S    $@
+      cmd_so2s = $(NM) -D $< | $(srctree)/$(src)/so2s.sh > $@

 # install commands for the unstripped file
 quiet_cmd_vdso_install = INSTALL $@
···
 	pmd_t *pmd, *pmd_k;
 	pte_t *pte_k;
 	int index;
+	unsigned long pfn;

 	/* User mode accesses just cause a SIGSEGV */
 	if (user_mode(regs))
···
 	 * of a task switch.
 	 */
 	index = pgd_index(addr);
-	pgd = (pgd_t *)pfn_to_virt(csr_read(CSR_SATP)) + index;
+	pfn = csr_read(CSR_SATP) & SATP_PPN;
+	pgd = (pgd_t *)pfn_to_virt(pfn) + index;
 	pgd_k = init_mm.pgd + index;

 	if (!pgd_present(*pgd_k)) {
+21 -11
arch/riscv/mm/init.c

···

 void __init setup_bootmem(void)
 {
-	phys_addr_t mem_size = 0;
-	phys_addr_t total_mem = 0;
-	phys_addr_t mem_start, start, end = 0;
+	phys_addr_t mem_start = 0;
+	phys_addr_t start, end = 0;
 	phys_addr_t vmlinux_end = __pa_symbol(&_end);
 	phys_addr_t vmlinux_start = __pa_symbol(&_start);
 	u64 i;

 	/* Find the memory region containing the kernel */
 	for_each_mem_range(i, &start, &end) {
 		phys_addr_t size = end - start;
-		if (!total_mem)
+		if (!mem_start)
 			mem_start = start;
 		if (start <= vmlinux_start && vmlinux_end <= end)
 			BUG_ON(size == 0);
-		total_mem = total_mem + size;
 	}

 	/*
-	 * Remove memblock from the end of usable area to the
-	 * end of region
+	 * The maximal physical memory size is -PAGE_OFFSET.
+	 * Make sure that any memory beyond mem_start + (-PAGE_OFFSET) is removed
+	 * as it is unusable by kernel.
 	 */
-	mem_size = min(total_mem, (phys_addr_t)-PAGE_OFFSET);
-	if (mem_start + mem_size < end)
-		memblock_remove(mem_start + mem_size,
-				end - mem_start - mem_size);
+	memblock_enforce_memory_limit(mem_start - PAGE_OFFSET);

 	/* Reserve from the start of the kernel to the end of the kernel */
 	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
···
 #define NUM_EARLY_PMDS		(1UL + MAX_EARLY_MAPPING_SIZE / PGDIR_SIZE)
 #endif
 pmd_t early_pmd[PTRS_PER_PMD * NUM_EARLY_PMDS] __initdata __aligned(PAGE_SIZE);
+pmd_t early_dtb_pmd[PTRS_PER_PMD] __initdata __aligned(PAGE_SIZE);

 static pmd_t *__init get_pmd_virt_early(phys_addr_t pa)
 {
···
 			   load_pa + (va - PAGE_OFFSET),
 			   map_size, PAGE_KERNEL_EXEC);

+#ifndef __PAGETABLE_PMD_FOLDED
+	/* Setup early PMD for DTB */
+	create_pgd_mapping(early_pg_dir, DTB_EARLY_BASE_VA,
+			   (uintptr_t)early_dtb_pmd, PGDIR_SIZE, PAGE_TABLE);
+	/* Create two consecutive PMD mappings for FDT early scan */
+	pa = dtb_pa & ~(PMD_SIZE - 1);
+	create_pmd_mapping(early_dtb_pmd, DTB_EARLY_BASE_VA,
+			   pa, PMD_SIZE, PAGE_KERNEL);
+	create_pmd_mapping(early_dtb_pmd, DTB_EARLY_BASE_VA + PMD_SIZE,
+			   pa + PMD_SIZE, PMD_SIZE, PAGE_KERNEL);
+	dtb_early_va = (void *)DTB_EARLY_BASE_VA + (dtb_pa & (PMD_SIZE - 1));
+#else
 	/* Create two consecutive PGD mappings for FDT early scan */
 	pa = dtb_pa & ~(PGDIR_SIZE - 1);
 	create_pgd_mapping(early_pg_dir, DTB_EARLY_BASE_VA,
···
 	create_pgd_mapping(early_pg_dir, DTB_EARLY_BASE_VA + PGDIR_SIZE,
 			   pa + PGDIR_SIZE, PGDIR_SIZE, PAGE_KERNEL);
 	dtb_early_va = (void *)DTB_EARLY_BASE_VA + (dtb_pa & (PGDIR_SIZE - 1));
+#endif
 	dtb_early_pa = dtb_pa;

 	/*
+6 -4
arch/s390/configs/debug_defconfig

···
 CONFIG_FRONTSWAP=y
 CONFIG_CMA_DEBUG=y
 CONFIG_CMA_DEBUGFS=y
+CONFIG_CMA_AREAS=7
 CONFIG_MEM_SOFT_DIRTY=y
 CONFIG_ZSWAP=y
-CONFIG_ZSMALLOC=m
+CONFIG_ZSMALLOC=y
 CONFIG_ZSMALLOC_STAT=y
 CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
 CONFIG_IDLE_PAGE_TRACKING=y
···
 CONFIG_CGROUP_NET_PRIO=y
 CONFIG_BPF_JIT=y
 CONFIG_NET_PKTGEN=m
-# CONFIG_NET_DROP_MONITOR is not set
 CONFIG_PCI=y
 # CONFIG_PCIEASPM is not set
 CONFIG_PCI_DEBUG=y
···
 CONFIG_HOTPLUG_PCI_S390=y
 CONFIG_DEVTMPFS=y
 CONFIG_CONNECTOR=y
-CONFIG_ZRAM=m
+CONFIG_ZRAM=y
 CONFIG_BLK_DEV_LOOP=m
 CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_BLK_DEV_DRBD=m
···
 CONFIG_CRYPTO_DH=m
 CONFIG_CRYPTO_ECDH=m
 CONFIG_CRYPTO_ECRDSA=m
+CONFIG_CRYPTO_SM2=m
 CONFIG_CRYPTO_CURVE25519=m
 CONFIG_CRYPTO_GCM=y
 CONFIG_CRYPTO_CHACHA20POLY1305=m
···
 CONFIG_CRYPTO_RMD256=m
 CONFIG_CRYPTO_RMD320=m
 CONFIG_CRYPTO_SHA3=m
-CONFIG_CRYPTO_SM3=m
 CONFIG_CRYPTO_TGR192=m
 CONFIG_CRYPTO_WP512=m
 CONFIG_CRYPTO_AES_TI=m
···
 CONFIG_CRYPTO_AES_S390=m
 CONFIG_CRYPTO_GHASH_S390=m
 CONFIG_CRYPTO_CRC32_S390=y
+CONFIG_CRYPTO_DEV_VIRTIO=m
 CONFIG_CORDIC=m
 CONFIG_CRC32_SELFTEST=y
 CONFIG_CRC4=m
···
 CONFIG_FAULT_INJECTION=y
 CONFIG_FAILSLAB=y
 CONFIG_FAIL_PAGE_ALLOC=y
+CONFIG_FAULT_INJECTION_USERCOPY=y
 CONFIG_FAIL_MAKE_REQUEST=y
 CONFIG_FAIL_IO_TIMEOUT=y
 CONFIG_FAIL_FUTEX=y
+5 -4
arch/s390/configs/defconfig
···
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_CLEANCACHE=y
 CONFIG_FRONTSWAP=y
+CONFIG_CMA_AREAS=7
 CONFIG_MEM_SOFT_DIRTY=y
 CONFIG_ZSWAP=y
-CONFIG_ZSMALLOC=m
+CONFIG_ZSMALLOC=y
 CONFIG_ZSMALLOC_STAT=y
 CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
 CONFIG_IDLE_PAGE_TRACKING=y
···
 CONFIG_CGROUP_NET_PRIO=y
 CONFIG_BPF_JIT=y
 CONFIG_NET_PKTGEN=m
-# CONFIG_NET_DROP_MONITOR is not set
 CONFIG_PCI=y
 # CONFIG_PCIEASPM is not set
 CONFIG_HOTPLUG_PCI=y
···
 CONFIG_UEVENT_HELPER=y
 CONFIG_DEVTMPFS=y
 CONFIG_CONNECTOR=y
-CONFIG_ZRAM=m
+CONFIG_ZRAM=y
 CONFIG_BLK_DEV_LOOP=m
 CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_BLK_DEV_DRBD=m
···
 CONFIG_CRYPTO_DH=m
 CONFIG_CRYPTO_ECDH=m
 CONFIG_CRYPTO_ECRDSA=m
+CONFIG_CRYPTO_SM2=m
 CONFIG_CRYPTO_CURVE25519=m
 CONFIG_CRYPTO_GCM=y
 CONFIG_CRYPTO_CHACHA20POLY1305=m
···
 CONFIG_CRYPTO_RMD256=m
 CONFIG_CRYPTO_RMD320=m
 CONFIG_CRYPTO_SHA3=m
-CONFIG_CRYPTO_SM3=m
 CONFIG_CRYPTO_TGR192=m
 CONFIG_CRYPTO_WP512=m
 CONFIG_CRYPTO_AES_TI=m
···
 CONFIG_CRYPTO_AES_S390=m
 CONFIG_CRYPTO_GHASH_S390=m
 CONFIG_CRYPTO_CRC32_S390=y
+CONFIG_CRYPTO_DEV_VIRTIO=m
 CONFIG_CORDIC=m
 CONFIG_PRIME_NUMBERS=m
 CONFIG_CRC4=m
+1 -1
arch/s390/configs/zfcpdump_defconfig
···
 # CONFIG_CHSC_SCH is not set
 # CONFIG_SCM_BUS is not set
 CONFIG_CRASH_DUMP=y
-# CONFIG_SECCOMP is not set
 # CONFIG_PFAULT is not set
 # CONFIG_S390_HYPFS_FS is not set
 # CONFIG_VIRTUALIZATION is not set
 # CONFIG_S390_GUEST is not set
+# CONFIG_SECCOMP is not set
 CONFIG_PARTITION_ADVANCED=y
 CONFIG_IBM_PARTITION=y
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+30 -22
arch/s390/include/asm/pgtable.h
···
 	return !!(pud_val(pud) & _REGION3_ENTRY_LARGE);
 }

-static inline unsigned long pud_pfn(pud_t pud)
-{
-	unsigned long origin_mask;
-
-	origin_mask = _REGION_ENTRY_ORIGIN;
-	if (pud_large(pud))
-		origin_mask = _REGION3_ENTRY_ORIGIN_LARGE;
-	return (pud_val(pud) & origin_mask) >> PAGE_SHIFT;
-}
-
 #define pmd_leaf	pmd_large
 static inline int pmd_large(pmd_t pmd)
 {
···
 static inline int pmd_none(pmd_t pmd)
 {
 	return pmd_val(pmd) == _SEGMENT_ENTRY_EMPTY;
-}
-
-static inline unsigned long pmd_pfn(pmd_t pmd)
-{
-	unsigned long origin_mask;
-
-	origin_mask = _SEGMENT_ENTRY_ORIGIN;
-	if (pmd_large(pmd))
-		origin_mask = _SEGMENT_ENTRY_ORIGIN_LARGE;
-	return (pmd_val(pmd) & origin_mask) >> PAGE_SHIFT;
 }

 #define pmd_write	pmd_write
···
 #define pud_index(address) (((address) >> PUD_SHIFT) & (PTRS_PER_PUD-1))
 #define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))

-#define pmd_deref(pmd) (pmd_val(pmd) & _SEGMENT_ENTRY_ORIGIN)
-#define pud_deref(pud) (pud_val(pud) & _REGION_ENTRY_ORIGIN)
 #define p4d_deref(pud) (p4d_val(pud) & _REGION_ENTRY_ORIGIN)
 #define pgd_deref(pgd) (pgd_val(pgd) & _REGION_ENTRY_ORIGIN)
+
+static inline unsigned long pmd_deref(pmd_t pmd)
+{
+	unsigned long origin_mask;
+
+	origin_mask = _SEGMENT_ENTRY_ORIGIN;
+	if (pmd_large(pmd))
+		origin_mask = _SEGMENT_ENTRY_ORIGIN_LARGE;
+	return pmd_val(pmd) & origin_mask;
+}
+
+static inline unsigned long pmd_pfn(pmd_t pmd)
+{
+	return pmd_deref(pmd) >> PAGE_SHIFT;
+}
+
+static inline unsigned long pud_deref(pud_t pud)
+{
+	unsigned long origin_mask;
+
+	origin_mask = _REGION_ENTRY_ORIGIN;
+	if (pud_large(pud))
+		origin_mask = _REGION3_ENTRY_ORIGIN_LARGE;
+	return pud_val(pud) & origin_mask;
+}
+
+static inline unsigned long pud_pfn(pud_t pud)
+{
+	return pud_deref(pud) >> PAGE_SHIFT;
+}

 /*
  * The pgd_offset function *always* adds the index for the top-level
arch/s390/include/asm/vdso/vdso.h
-8
arch/s390/kernel/asm-offsets.c
···
 	BLANK();
 	OFFSET(__VDSO_GETCPU_VAL, vdso_per_cpu_data, getcpu_val);
 	BLANK();
-	/* constants used by the vdso */
-	DEFINE(__CLOCK_REALTIME, CLOCK_REALTIME);
-	DEFINE(__CLOCK_MONOTONIC, CLOCK_MONOTONIC);
-	DEFINE(__CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE);
-	DEFINE(__CLOCK_MONOTONIC_COARSE, CLOCK_MONOTONIC_COARSE);
-	DEFINE(__CLOCK_THREAD_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID);
-	DEFINE(__CLOCK_COARSE_RES, LOW_RES_NSEC);
-	BLANK();
 	/* idle data offsets */
 	OFFSET(__CLOCK_IDLE_ENTER, s390_idle_data, clock_idle_enter);
 	OFFSET(__CLOCK_IDLE_EXIT, s390_idle_data, clock_idle_exit);
+2 -1
arch/s390/kernel/smp.c
···

 static void smp_init_secondary(void)
 {
-	int cpu = smp_processor_id();
+	int cpu = raw_smp_processor_id();

 	S390_lowcore.last_update_clock = get_tod_clock();
 	restore_access_regs(S390_lowcore.access_regs_save_area);
 	set_cpu_flag(CIF_ASCE_PRIMARY);
 	set_cpu_flag(CIF_ASCE_SECONDARY);
 	cpu_init();
+	rcu_cpu_starting(cpu);
 	preempt_disable();
 	init_cpu_timer();
 	vtime_init();
+4
arch/s390/pci/pci_event.c
···
 		if (ret)
 			break;

+		/* the PCI function will be scanned once function 0 appears */
+		if (!zdev->zbus->bus)
+			break;
+
 		pdev = pci_scan_single_device(zdev->zbus->bus, zdev->devfn);
 		if (!pdev)
 			break;
+1
arch/x86/boot/compressed/ident_map_64.c
···
 	add_identity_map(cmdline, cmdline + COMMAND_LINE_SIZE);

 	/* Load the new page-table. */
+	sev_verify_cbit(top_level_pgt);
 	write_cr3(top_level_pgt);
 }
+19 -1
arch/x86/boot/compressed/mem_encrypt.S
···
 SYM_FUNC_END(get_sev_encryption_bit)

 	.code64
+
+#include "../../kernel/sev_verify_cbit.S"
+
 SYM_FUNC_START(set_sev_encryption_mask)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	push	%rbp
···
 	jz	.Lno_sev_mask

 	bts	%rax, sme_me_mask(%rip)	/* Create the encryption mask */
+
+	/*
+	 * Read MSR_AMD64_SEV again and store it to sev_status. Can't do this in
+	 * get_sev_encryption_bit() because this function is 32-bit code and
+	 * shared between 64-bit and 32-bit boot path.
+	 */
+	movl	$MSR_AMD64_SEV, %ecx	/* Read the SEV MSR */
+	rdmsr
+
+	/* Store MSR value in sev_status */
+	shlq	$32, %rdx
+	orq	%rdx, %rax
+	movq	%rax, sev_status(%rip)

 .Lno_sev_mask:
 	movq	%rbp, %rsp		/* Restore original stack pointer */
···
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	.balign	8
-SYM_DATA(sme_me_mask, .quad 0)
+SYM_DATA(sme_me_mask,		.quad 0)
+SYM_DATA(sev_status,		.quad 0)
+SYM_DATA(sev_check_data,	.quad 0)
 #endif
+2
arch/x86/boot/compressed/misc.h
···
 void boot_stage1_vc(void);
 void boot_stage2_vc(void);

+unsigned long sev_verify_cbit(unsigned long cr3);
+
 #endif /* BOOT_COMPRESSED_MISC_H */
+9 -5
arch/x86/hyperv/hv_apic.c
···
 	pr_info("Hyper-V: Using enlightened APIC (%s mode)",
 		x2apic_enabled() ? "x2apic" : "xapic");
 	/*
-	 * With x2apic, architectural x2apic MSRs are equivalent to the
-	 * respective synthetic MSRs, so there's no need to override
-	 * the apic accessors.  The only exception is
-	 * hv_apic_eoi_write, because it benefits from lazy EOI when
-	 * available, but it works for both xapic and x2apic modes.
+	 * When in x2apic mode, don't use the Hyper-V specific APIC
+	 * accessors since the field layout in the ICR register is
+	 * different in x2apic mode. Furthermore, the architectural
+	 * x2apic MSRs function just as well as the Hyper-V
+	 * synthetic APIC MSRs, so there's no benefit in having
+	 * separate Hyper-V accessors for x2apic mode. The only
+	 * exception is hv_apic_eoi_write, because it benefits from
+	 * lazy EOI when available, but the same accessor works for
+	 * both xapic and x2apic because the field layout is the same.
 	 */
 	apic_set_eoi_write(hv_apic_eoi_write);
 	if (!x2apic_enabled()) {
+18 -5
arch/x86/kernel/apic/x2apic_uv_x.c
···
 {
 	/* Relies on 'to' being NULL chars so result will be NULL terminated */
 	strncpy(to, from, len-1);
+
+	/* Trim trailing spaces */
+	(void)strim(to);
 }

 /* Find UV arch type entry in UVsystab */
···
 	return ret;
 }

-static int __init uv_set_system_type(char *_oem_id)
+static int __init uv_set_system_type(char *_oem_id, char *_oem_table_id)
 {
 	/* Save OEM_ID passed from ACPI MADT */
 	uv_stringify(sizeof(oem_id), oem_id, _oem_id);
···
 			/* (Not hubless), not a UV */
 			return 0;

+		/* Is UV hubless system */
+		uv_hubless_system = 0x01;
+
+		/* UV5 Hubless */
+		if (strncmp(uv_archtype, "NSGI5", 5) == 0)
+			uv_hubless_system |= 0x20;
+
 		/* UV4 Hubless: CH */
-		if (strncmp(uv_archtype, "NSGI4", 5) == 0)
-			uv_hubless_system = 0x11;
+		else if (strncmp(uv_archtype, "NSGI4", 5) == 0)
+			uv_hubless_system |= 0x10;

 		/* UV3 Hubless: UV300/MC990X w/o hub */
 		else
-			uv_hubless_system = 0x9;
+			uv_hubless_system |= 0x8;
+
+		/* Copy APIC type */
+		uv_stringify(sizeof(oem_table_id), oem_table_id, _oem_table_id);

 	pr_info("UV: OEM IDs %s/%s, SystemType %d, HUBLESS ID %x\n",
 		oem_id, oem_table_id, uv_system_type, uv_hubless_system);
···
 	uv_cpu_info->p_uv_hub_info = &uv_hub_info_node0;

 	/* If not UV, return. */
-	if (likely(uv_set_system_type(_oem_id) == 0))
+	if (uv_set_system_type(_oem_id, _oem_table_id) == 0)
 		return 0;

 	/* Save and Decode OEM Table ID */
+33 -18
arch/x86/kernel/cpu/bugs.c
···
 	return 0;
 }

+static bool is_spec_ib_user_controlled(void)
+{
+	return spectre_v2_user_ibpb == SPECTRE_V2_USER_PRCTL ||
+		spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP ||
+		spectre_v2_user_stibp == SPECTRE_V2_USER_PRCTL ||
+		spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP;
+}
+
 static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
 {
 	switch (ctrl) {
···
 		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
 		    spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
 			return 0;
+
 		/*
-		 * Indirect branch speculation is always disabled in strict
-		 * mode. It can neither be enabled if it was force-disabled
-		 * by a previous prctl call.
+		 * With strict mode for both IBPB and STIBP, the instruction
+		 * code paths avoid checking this task flag and instead,
+		 * unconditionally run the instruction. However, STIBP and IBPB
+		 * are independent and either can be set to conditionally
+		 * enabled regardless of the mode of the other.
+		 *
+		 * If either is set to conditional, allow the task flag to be
+		 * updated, unless it was force-disabled by a previous prctl
+		 * call. Currently, this is possible on an AMD CPU which has the
+		 * feature X86_FEATURE_AMD_STIBP_ALWAYS_ON. In this case, if the
+		 * kernel is booted with 'spectre_v2_user=seccomp', then
+		 * spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP and
+		 * spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED.
 		 */
-		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
-		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
-		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED ||
+		if (!is_spec_ib_user_controlled() ||
 		    task_spec_ib_force_disable(task))
 			return -EPERM;
+
 		task_clear_spec_ib_disable(task);
 		task_update_spec_tif(task);
 		break;
···
 		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
 		    spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
 			return -EPERM;
-		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
-		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
-		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED)
+
+		if (!is_spec_ib_user_controlled())
 			return 0;
+
 		task_set_spec_ib_disable(task);
 		if (ctrl == PR_SPEC_FORCE_DISABLE)
 			task_set_spec_ib_force_disable(task);
···
 	if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
 	    spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
 		return PR_SPEC_ENABLE;
-	else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
-	    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
-	    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED)
-		return PR_SPEC_DISABLE;
-	else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_PRCTL ||
-	    spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP ||
-	    spectre_v2_user_stibp == SPECTRE_V2_USER_PRCTL ||
-	    spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP) {
+	else if (is_spec_ib_user_controlled()) {
 		if (task_spec_ib_force_disable(task))
 			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
 		if (task_spec_ib_disable(task))
 			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
 		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
-	} else
+	} else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
+	    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
+	    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED)
+		return PR_SPEC_DISABLE;
+	else
 		return PR_SPEC_NOT_AFFECTED;
 }
+16
arch/x86/kernel/head_64.S
···

 	/* Setup early boot stage 4-/5-level pagetables. */
 	addq	phys_base(%rip), %rax
+
+	/*
+	 * For SEV guests: Verify that the C-bit is correct. A malicious
+	 * hypervisor could lie about the C-bit position to perform a ROP
+	 * attack on the guest by writing to the unencrypted stack and wait for
+	 * the next RET instruction.
+	 * %rsi carries pointer to realmode data and is callee-clobbered. Save
+	 * and restore it.
+	 */
+	pushq	%rsi
+	movq	%rax, %rdi
+	call	sev_verify_cbit
+	popq	%rsi
+
+	/* Switch to new page-table */
 	movq	%rax, %cr3

 	/* Ensure I am executing from virtual addresses */
···
 SYM_CODE_END(secondary_startup_64)

 #include "verify_cpu.S"
+#include "sev_verify_cbit.S"

 #ifdef CONFIG_HOTPLUG_CPU
 /*
+26
arch/x86/kernel/sev-es-shared.c
···
 		goto fail;
 	regs->dx = val >> 32;

+	/*
+	 * This is a VC handler and the #VC is only raised when SEV-ES is
+	 * active, which means SEV must be active too. Do sanity checks on the
+	 * CPUID results to make sure the hypervisor does not trick the kernel
+	 * into the no-sev path. This could map sensitive data unencrypted and
+	 * make it accessible to the hypervisor.
+	 *
+	 * In particular, check for:
+	 *	- Hypervisor CPUID bit
+	 *	- Availability of CPUID leaf 0x8000001f
+	 *	- SEV CPUID bit.
+	 *
+	 * The hypervisor might still report the wrong C-bit position, but this
+	 * can't be checked here.
+	 */
+
+	if ((fn == 1 && !(regs->cx & BIT(31))))
+		/* Hypervisor bit */
+		goto fail;
+	else if (fn == 0x80000000 && (regs->ax < 0x8000001f))
+		/* SEV leaf check */
+		goto fail;
+	else if ((fn == 0x8000001f && !(regs->ax & BIT(1))))
+		/* SEV bit */
+		goto fail;
+
 	/* Skip over the CPUID two-byte opcode */
 	regs->ip += 2;
+13 -7
arch/x86/kernel/sev-es.c
···
 	return ES_EXCEPTION;
 }

-static bool vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
-				 unsigned long vaddr, phys_addr_t *paddr)
+static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
+					   unsigned long vaddr, phys_addr_t *paddr)
 {
 	unsigned long va = (unsigned long)vaddr;
 	unsigned int level;
···
 		if (user_mode(ctxt->regs))
 			ctxt->fi.error_code |= X86_PF_USER;

-		return false;
+		return ES_EXCEPTION;
 	}
+
+	if (WARN_ON_ONCE(pte_val(*pte) & _PAGE_ENC))
+		/* Emulated MMIO to/from encrypted memory not supported */
+		return ES_UNSUPPORTED;

 	pa = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
 	pa |= va & ~page_level_mask(level);

 	*paddr = pa;

-	return true;
+	return ES_OK;
 }

 /* Include code shared with pre-decompression boot stage */
···
 {
 	u64 exit_code, exit_info_1, exit_info_2;
 	unsigned long ghcb_pa = __pa(ghcb);
+	enum es_result res;
 	phys_addr_t paddr;
 	void __user *ref;

···

 	exit_code = read ? SVM_VMGEXIT_MMIO_READ : SVM_VMGEXIT_MMIO_WRITE;

-	if (!vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr)) {
-		if (!read)
+	res = vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr);
+	if (res != ES_OK) {
+		if (res == ES_EXCEPTION && !read)
 			ctxt->fi.error_code |= X86_PF_WRITE;

-		return ES_EXCEPTION;
+		return res;
 	}

 	exit_info_1 = paddr;
+89
arch/x86/kernel/sev_verify_cbit.S
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *	sev_verify_cbit.S - Code for verification of the C-bit position reported
+ *		by the Hypervisor when running with SEV enabled.
+ *
+ *	Copyright (c) 2020  Joerg Roedel (jroedel@suse.de)
+ *
+ * sev_verify_cbit() is called before switching to a new long-mode page-table
+ * at boot.
+ *
+ * Verify that the C-bit position is correct by writing a random value to
+ * an encrypted memory location while on the current page-table. Then it
+ * switches to the new page-table to verify the memory content is still the
+ * same. After that it switches back to the current page-table and when the
+ * check succeeded it returns. If the check failed the code invalidates the
+ * stack pointer and goes into a hlt loop. The stack-pointer is invalidated to
+ * make sure no interrupt or exception can get the CPU out of the hlt loop.
+ *
+ * New page-table pointer is expected in %rdi (first parameter)
+ *
+ */
+SYM_FUNC_START(sev_verify_cbit)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* First check if a C-bit was detected */
+	movq	sme_me_mask(%rip), %rsi
+	testq	%rsi, %rsi
+	jz	3f
+
+	/* sme_me_mask != 0 could mean SME or SEV - Check also for SEV */
+	movq	sev_status(%rip), %rsi
+	testq	%rsi, %rsi
+	jz	3f
+
+	/* Save CR4 in %rsi */
+	movq	%cr4, %rsi
+
+	/* Disable Global Pages */
+	movq	%rsi, %rdx
+	andq	$(~X86_CR4_PGE), %rdx
+	movq	%rdx, %cr4
+
+	/*
+	 * Verified that running under SEV - now get a random value using
+	 * RDRAND. This instruction is mandatory when running as an SEV guest.
+	 *
+	 * Don't bail out of the loop if RDRAND returns errors. It is better to
+	 * prevent forward progress than to work with a non-random value here.
+	 */
+1:	rdrand	%rdx
+	jnc	1b
+
+	/* Store value to memory and keep it in %rdx */
+	movq	%rdx, sev_check_data(%rip)
+
+	/* Backup current %cr3 value to restore it later */
+	movq	%cr3, %rcx
+
+	/* Switch to new %cr3 - This might unmap the stack */
+	movq	%rdi, %cr3
+
+	/*
+	 * Compare value in %rdx with memory location. If C-bit is incorrect
+	 * this would read the encrypted data and make the check fail.
+	 */
+	cmpq	%rdx, sev_check_data(%rip)
+
+	/* Restore old %cr3 */
+	movq	%rcx, %cr3
+
+	/* Restore previous CR4 */
+	movq	%rsi, %cr4
+
+	/* Check CMPQ result */
+	je	3f
+
+	/*
+	 * The check failed, prevent any forward progress to prevent ROP
+	 * attacks, invalidate the stack and go into a hlt loop.
+	 */
+	xorq	%rsp, %rsp
+	subq	$0x1000, %rsp
+2:	hlt
+	jmp	2b
+3:
+#endif
+	/* Return page-table pointer */
+	movq	%rdi, %rax
+	ret
+SYM_FUNC_END(sev_verify_cbit)
+1 -3
arch/x86/lib/memcpy_64.S
···
  * to a jmp to memcpy_erms which does the REP; MOVSB mem copy.
  */

-.weak memcpy
-
 /*
  * memcpy - Copy a memory block.
  *
···
  * rax original destination
  */
 SYM_FUNC_START_ALIAS(__memcpy)
-SYM_FUNC_START_LOCAL(memcpy)
+SYM_FUNC_START_WEAK(memcpy)
 	ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
 		      "jmp memcpy_erms", X86_FEATURE_ERMS
···
 #include <asm/alternative-asm.h>
 #include <asm/export.h>

-.weak memset
-
 /*
  * ISO C memset - set a memory block to a byte value. This function uses fast
  * string to get better performance than the original function. The code is
···
  *
  * rax   original destination
  */
-SYM_FUNC_START_ALIAS(memset)
+SYM_FUNC_START_WEAK(memset)
 SYM_FUNC_START(__memset)
 	/*
 	 * Some CPUs support enhanced REP MOVSB/STOSB feature. It is recommended
···
 	/* set highmem page free */
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
-		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long start = PFN_UP(range_start);
+		unsigned long end = PFN_DOWN(range_end);

 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
+2 -4
drivers/base/core.c
···
 	dev_dbg(link->consumer, "Dropping the link to %s\n",
 		dev_name(link->supplier));

-	if (link->flags & DL_FLAG_PM_RUNTIME)
-		pm_runtime_drop_link(link->consumer);
+	pm_runtime_drop_link(link);

 	list_del_rcu(&link->s_node);
 	list_del_rcu(&link->c_node);
···
 	dev_info(link->consumer, "Dropping the link to %s\n",
 		 dev_name(link->supplier));

-	if (link->flags & DL_FLAG_PM_RUNTIME)
-		pm_runtime_drop_link(link->consumer);
+	pm_runtime_drop_link(link);

 	list_del(&link->s_node);
 	list_del(&link->c_node);
···
 }

 /**
- * pm_runtime_clean_up_links - Prepare links to consumers for driver removal.
- * @dev: Device whose driver is going to be removed.
- *
- * Check links from this device to any consumers and if any of them have active
- * runtime PM references to the device, drop the usage counter of the device
- * (as many times as needed).
- *
- * Links with the DL_FLAG_MANAGED flag unset are ignored.
- *
- * Since the device is guaranteed to be runtime-active at the point this is
- * called, nothing else needs to be done here.
- *
- * Moreover, this is called after device_links_busy() has returned 'false', so
- * the status of each link is guaranteed to be DL_STATE_SUPPLIER_UNBIND and
- * therefore rpm_active can't be manipulated concurrently.
- */
-void pm_runtime_clean_up_links(struct device *dev)
-{
-	struct device_link *link;
-	int idx;
-
-	idx = device_links_read_lock();
-
-	list_for_each_entry_rcu(link, &dev->links.consumers, s_node,
-				device_links_read_lock_held()) {
-		if (!(link->flags & DL_FLAG_MANAGED))
-			continue;
-
-		while (refcount_dec_not_one(&link->rpm_active))
-			pm_runtime_put_noidle(dev);
-	}
-
-	device_links_read_unlock(idx);
-}
-
-/**
  * pm_runtime_get_suppliers - Resume and reference-count supplier devices.
  * @dev: Consumer device.
  */
···
 	spin_unlock_irq(&dev->power.lock);
 }

-void pm_runtime_drop_link(struct device *dev)
+static void pm_runtime_drop_link_count(struct device *dev)
 {
 	spin_lock_irq(&dev->power.lock);
 	WARN_ON(dev->power.links_count == 0);
 	dev->power.links_count--;
 	spin_unlock_irq(&dev->power.lock);
+}
+
+/**
+ * pm_runtime_drop_link - Prepare for device link removal.
+ * @link: Device link going away.
+ *
+ * Drop the link count of the consumer end of @link and decrement the supplier
+ * device's runtime PM usage counter as many times as needed to drop all of the
+ * PM runtime reference to it from the consumer.
+ */
+void pm_runtime_drop_link(struct device_link *link)
+{
+	if (!(link->flags & DL_FLAG_PM_RUNTIME))
+		return;
+
+	pm_runtime_drop_link_count(link->consumer);
+
+	while (refcount_dec_not_one(&link->rpm_active))
+		pm_runtime_put(link->supplier);
 }

 static bool pm_runtime_need_not_resume(struct device *dev)
+1 -1
drivers/block/null_blk.h
···
 	unsigned int nr_zones_closed;
 	struct blk_zone *zones;
 	sector_t zone_size_sects;
-	spinlock_t zone_dev_lock;
+	spinlock_t zone_lock;
 	unsigned long *zone_locks;

 	unsigned long size; /* device size in MB */
+31 -16
drivers/block/null_blk_zoned.c
···
 	if (!dev->zones)
 		return -ENOMEM;

-	spin_lock_init(&dev->zone_dev_lock);
-	dev->zone_locks = bitmap_zalloc(dev->nr_zones, GFP_KERNEL);
-	if (!dev->zone_locks) {
-		kvfree(dev->zones);
-		return -ENOMEM;
+	/*
+	 * With memory backing, the zone_lock spinlock needs to be temporarily
+	 * released to avoid scheduling in atomic context. To guarantee zone
+	 * information protection, use a bitmap to lock zones with
+	 * wait_on_bit_lock_io(). Sleeping on the lock is OK as memory backing
+	 * implies that the queue is marked with BLK_MQ_F_BLOCKING.
+	 */
+	spin_lock_init(&dev->zone_lock);
+	if (dev->memory_backed) {
+		dev->zone_locks = bitmap_zalloc(dev->nr_zones, GFP_KERNEL);
+		if (!dev->zone_locks) {
+			kvfree(dev->zones);
+			return -ENOMEM;
+		}
 	}

 	if (dev->zone_nr_conv >= dev->nr_zones) {
···

 static inline void null_lock_zone(struct nullb_device *dev, unsigned int zno)
 {
-	wait_on_bit_lock_io(dev->zone_locks, zno, TASK_UNINTERRUPTIBLE);
+	if (dev->memory_backed)
+		wait_on_bit_lock_io(dev->zone_locks, zno, TASK_UNINTERRUPTIBLE);
+	spin_lock_irq(&dev->zone_lock);
 }

 static inline void null_unlock_zone(struct nullb_device *dev, unsigned int zno)
 {
-	clear_and_wake_up_bit(zno, dev->zone_locks);
+	spin_unlock_irq(&dev->zone_lock);
+
+	if (dev->memory_backed)
+		clear_and_wake_up_bit(zno, dev->zone_locks);
 }

 int null_report_zones(struct gendisk *disk, sector_t sector,
···
 		return null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);

 	null_lock_zone(dev, zno);
-	spin_lock(&dev->zone_dev_lock);

 	switch (zone->cond) {
 	case BLK_ZONE_COND_FULL:
···
 	if (zone->cond != BLK_ZONE_COND_EXP_OPEN)
 		zone->cond = BLK_ZONE_COND_IMP_OPEN;

-	spin_unlock(&dev->zone_dev_lock);
+	/*
+	 * Memory backing allocation may sleep: release the zone_lock spinlock
+	 * to avoid scheduling in atomic context. Zone operation atomicity is
+	 * still guaranteed through the zone_locks bitmap.
+	 */
+	if (dev->memory_backed)
+		spin_unlock_irq(&dev->zone_lock);
 	ret = null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
-	spin_lock(&dev->zone_dev_lock);
+	if (dev->memory_backed)
+		spin_lock_irq(&dev->zone_lock);
+
 	if (ret != BLK_STS_OK)
 		goto unlock;

···
 	ret = BLK_STS_OK;

 unlock:
-	spin_unlock(&dev->zone_dev_lock);
 	null_unlock_zone(dev, zno);

 	return ret;
···
 		null_lock_zone(dev, i);
 		zone = &dev->zones[i];
 		if (zone->cond != BLK_ZONE_COND_EMPTY) {
-			spin_lock(&dev->zone_dev_lock);
 			null_reset_zone(dev, zone);
-			spin_unlock(&dev->zone_dev_lock);
 			trace_nullb_zone_op(cmd, i, zone->cond);
 		}
 		null_unlock_zone(dev, i);
···
 	zone = &dev->zones[zone_no];

 	null_lock_zone(dev, zone_no);
-	spin_lock(&dev->zone_dev_lock);

 	switch (op) {
 	case REQ_OP_ZONE_RESET:
···
 		ret = BLK_STS_NOTSUPP;
 		break;
 	}
-
-	spin_unlock(&dev->zone_dev_lock);

 	if (ret == BLK_STS_OK)
 		trace_nullb_zone_op(cmd, zone_no, zone->cond);
+5
drivers/char/tpm/eventlog/efi.c
···
 	log_size = log_tbl->size;
 	memunmap(log_tbl);

+	if (!log_size) {
+		pr_warn("UEFI TPM log area empty\n");
+		return -EIO;
+	}
+
 	log_tbl = memremap(efi.tpm_log, sizeof(*log_tbl) + log_size,
 			   MEMREMAP_WB);
 	if (!log_tbl) {
···
  *
  * This file add support for MD5 and SHA1/SHA224/SHA256/SHA384/SHA512.
  *
- * You could find the datasheet in Documentation/arm/sunxi/README
+ * You could find the datasheet in Documentation/arm/sunxi.rst
  */
 #include <linux/dma-mapping.h>
 #include <linux/pm_runtime.h>
+1 -1
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-prng.c
···
  *
  * This file handle the PRNG
  *
- * You could find a link for the datasheet in Documentation/arm/sunxi/README
+ * You could find a link for the datasheet in Documentation/arm/sunxi.rst
  */
 #include "sun8i-ce.h"
 #include <linux/dma-mapping.h>
+1 -1
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-trng.c
···
  *
  * This file handle the TRNG
  *
- * You could find a link for the datasheet in Documentation/arm/sunxi/README
+ * You could find a link for the datasheet in Documentation/arm/sunxi.rst
  */
 #include "sun8i-ce.h"
 #include <linux/dma-mapping.h>
···
 		return -ENOENT;

 	/*
-	 * Already in the desired write domain? Nothing for us to do!
-	 *
-	 * We apply a little bit of cunning here to catch a broader set of
-	 * no-ops. If obj->write_domain is set, we must be in the same
-	 * obj->read_domains, and only that domain. Therefore, if that
-	 * obj->write_domain matches the request read_domains, we are
-	 * already in the same read/write domain and can skip the operation,
-	 * without having to further check the requested write_domain.
-	 */
-	if (READ_ONCE(obj->write_domain) == read_domains) {
-		err = 0;
-		goto out;
-	}
-
-	/*
 	 * Try to flush the object off the GPU without holding the lock.
 	 * We will repeat the flush holding the lock in the normal manner
 	 * to catch cases where we are gazumped.
···
 	err = i915_gem_object_pin_pages(obj);
 	if (err)
 		goto out;
+
+	/*
+	 * Already in the desired write domain? Nothing for us to do!
+	 *
+	 * We apply a little bit of cunning here to catch a broader set of
+	 * no-ops. If obj->write_domain is set, we must be in the same
+	 * obj->read_domains, and only that domain. Therefore, if that
+	 * obj->write_domain matches the request read_domains, we are
+	 * already in the same read/write domain and can skip the operation,
+	 * without having to further check the requested write_domain.
+	 */
+	if (READ_ONCE(obj->write_domain) == read_domains)
+		goto out_unpin;

 	err = i915_gem_object_lock_interruptible(obj, NULL);
 	if (err)
+35 -20
drivers/gpu/drm/i915/gt/intel_engine.h
···
 }

 static inline u32 *
-__gen8_emit_ggtt_write_rcs(u32 *cs, u32 value, u32 gtt_offset, u32 flags0, u32 flags1)
+__gen8_emit_write_rcs(u32 *cs, u32 value, u32 offset, u32 flags0, u32 flags1)
 {
-	/* We're using qword write, offset should be aligned to 8 bytes. */
-	GEM_BUG_ON(!IS_ALIGNED(gtt_offset, 8));
-
-	/* w/a for post sync ops following a GPGPU operation we
-	 * need a prior CS_STALL, which is emitted by the flush
-	 * following the batch.
-	 */
 	*cs++ = GFX_OP_PIPE_CONTROL(6) | flags0;
-	*cs++ = flags1 | PIPE_CONTROL_QW_WRITE | PIPE_CONTROL_GLOBAL_GTT_IVB;
-	*cs++ = gtt_offset;
+	*cs++ = flags1 | PIPE_CONTROL_QW_WRITE;
+	*cs++ = offset;
 	*cs++ = 0;
 	*cs++ = value;
-	/* We're thrashing one dword of HWS. */
-	*cs++ = 0;
+	*cs++ = 0; /* We're thrashing one extra dword. */

 	return cs;
 }
···
 static inline u32*
 gen8_emit_ggtt_write_rcs(u32 *cs, u32 value, u32 gtt_offset, u32 flags)
 {
-	return __gen8_emit_ggtt_write_rcs(cs, value, gtt_offset, 0, flags);
+	/* We're using qword write, offset should be aligned to 8 bytes. */
+	GEM_BUG_ON(!IS_ALIGNED(gtt_offset, 8));
+
+	return __gen8_emit_write_rcs(cs,
+				     value,
+				     gtt_offset,
+				     0,
+				     flags | PIPE_CONTROL_GLOBAL_GTT_IVB);
 }

 static inline u32*
 gen12_emit_ggtt_write_rcs(u32 *cs, u32 value, u32 gtt_offset, u32 flags0, u32 flags1)
 {
-	return __gen8_emit_ggtt_write_rcs(cs, value, gtt_offset, flags0, flags1);
+	/* We're using qword write, offset should be aligned to 8 bytes. */
+	GEM_BUG_ON(!IS_ALIGNED(gtt_offset, 8));
+
+	return __gen8_emit_write_rcs(cs,
+				     value,
+				     gtt_offset,
+				     flags0,
+				     flags1 | PIPE_CONTROL_GLOBAL_GTT_IVB);
+}
+
+static inline u32 *
+__gen8_emit_flush_dw(u32 *cs, u32 value, u32 gtt_offset, u32 flags)
+{
+	*cs++ = (MI_FLUSH_DW + 1) | flags;
+	*cs++ = gtt_offset;
+	*cs++ = 0;
+	*cs++ = value;
+
+	return cs;
 }

 static inline u32 *
···
 	/* Offset should be aligned to 8 bytes for both (QW/DW) write types */
 	GEM_BUG_ON(!IS_ALIGNED(gtt_offset, 8));

-	*cs++ = (MI_FLUSH_DW + 1) | MI_FLUSH_DW_OP_STOREDW | flags;
-	*cs++ = gtt_offset | MI_FLUSH_DW_USE_GTT;
-	*cs++ = 0;
-	*cs++ = value;
-
-	return cs;
+	return __gen8_emit_flush_dw(cs,
+				    value,
+				    gtt_offset | MI_FLUSH_DW_USE_GTT,
+				    flags | MI_FLUSH_DW_OP_STOREDW);
 }

 static inline void __intel_engine_reset(struct intel_engine_cs *engine,
···
 	return 0;
 }

-static void dw_hdmi_imx_encoder_disable(struct drm_encoder *encoder)
-{
-}
-
 static void dw_hdmi_imx_encoder_enable(struct drm_encoder *encoder)
 {
 	struct imx_hdmi *hdmi = enc_to_imx_hdmi(encoder);
···

 static const struct drm_encoder_helper_funcs dw_hdmi_imx_encoder_helper_funcs = {
 	.enable = dw_hdmi_imx_encoder_enable,
-	.disable = dw_hdmi_imx_encoder_disable,
 	.atomic_check = dw_hdmi_imx_atomic_check,
 };
···
 	hdmi->dev = &pdev->dev;
 	encoder = &hdmi->encoder;

-	encoder->possible_crtcs = drm_of_find_possible_crtcs(drm, dev->of_node);
-	/*
-	 * If we failed to find the CRTC(s) which this encoder is
-	 * supposed to be connected to, it's because the CRTC has
-	 * not been registered yet. Defer probing, and hope that
-	 * the required CRTC is added later.
-	 */
-	if (encoder->possible_crtcs == 0)
-		return -EPROBE_DEFER;
+	ret = imx_drm_encoder_parse_of(drm, encoder, dev->of_node);
+	if (ret)
+		return ret;

 	ret = dw_hdmi_imx_parse_dt(hdmi);
 	if (ret < 0)
···

 int vc4_v3d_get_bin_slot(struct vc4_dev *vc4)
 {
-	struct drm_device *dev = vc4->dev;
+	struct drm_device *dev = &vc4->base;
 	unsigned long irqflags;
 	int slot;
 	uint64_t seqno = 0;
···
 	INIT_LIST_HEAD(&list);

 	while (true) {
-		struct vc4_bo *bo = vc4_bo_create(vc4->dev, size, true,
+		struct vc4_bo *bo = vc4_bo_create(&vc4->base, size, true,
 						  VC4_BO_TYPE_BIN);

 		if (IS_ERR(bo)) {
···
 	struct vc4_v3d *v3d = dev_get_drvdata(dev);
 	struct vc4_dev *vc4 = v3d->vc4;

-	vc4_irq_uninstall(vc4->dev);
+	vc4_irq_uninstall(&vc4->base);

 	clk_disable_unprepare(v3d->clk);
···
 	if (ret != 0)
 		return ret;

-	vc4_v3d_init_hw(vc4->dev);
+	vc4_v3d_init_hw(&vc4->base);

 	/* We disabled the IRQ as part of vc4_irq_uninstall in suspend. */
-	enable_irq(vc4->dev->irq);
-	vc4_irq_postinstall(vc4->dev);
+	enable_irq(vc4->base.irq);
+	vc4_irq_postinstall(&vc4->base);

 	return 0;
 }
-67
drivers/gpu/ipu-v3/ipu-common.c
···
 }
 EXPORT_SYMBOL_GPL(ipu_pixelformat_to_colorspace);

-bool ipu_pixelformat_is_planar(u32 pixelformat)
-{
-	switch (pixelformat) {
-	case V4L2_PIX_FMT_YUV420:
-	case V4L2_PIX_FMT_YVU420:
-	case V4L2_PIX_FMT_YUV422P:
-	case V4L2_PIX_FMT_NV12:
-	case V4L2_PIX_FMT_NV21:
-	case V4L2_PIX_FMT_NV16:
-	case V4L2_PIX_FMT_NV61:
-		return true;
-	}
-
-	return false;
-}
-EXPORT_SYMBOL_GPL(ipu_pixelformat_is_planar);
-
-enum ipu_color_space ipu_mbus_code_to_colorspace(u32 mbus_code)
-{
-	switch (mbus_code & 0xf000) {
-	case 0x1000:
-		return IPUV3_COLORSPACE_RGB;
-	case 0x2000:
-		return IPUV3_COLORSPACE_YUV;
-	default:
-		return IPUV3_COLORSPACE_UNKNOWN;
-	}
-}
-EXPORT_SYMBOL_GPL(ipu_mbus_code_to_colorspace);
-
-int ipu_stride_to_bytes(u32 pixel_stride, u32 pixelformat)
-{
-	switch (pixelformat) {
-	case V4L2_PIX_FMT_YUV420:
-	case V4L2_PIX_FMT_YVU420:
-	case V4L2_PIX_FMT_YUV422P:
-	case V4L2_PIX_FMT_NV12:
-	case V4L2_PIX_FMT_NV21:
-	case V4L2_PIX_FMT_NV16:
-	case V4L2_PIX_FMT_NV61:
-		/*
-		 * for the planar YUV formats, the stride passed to
-		 * cpmem must be the stride in bytes of the Y plane.
-		 * And all the planar YUV formats have an 8-bit
-		 * Y component.
-		 */
-		return (8 * pixel_stride) >> 3;
-	case V4L2_PIX_FMT_RGB565:
-	case V4L2_PIX_FMT_YUYV:
-	case V4L2_PIX_FMT_UYVY:
-		return (16 * pixel_stride) >> 3;
-	case V4L2_PIX_FMT_BGR24:
-	case V4L2_PIX_FMT_RGB24:
-		return (24 * pixel_stride) >> 3;
-	case V4L2_PIX_FMT_BGR32:
-	case V4L2_PIX_FMT_RGB32:
-	case V4L2_PIX_FMT_XBGR32:
-	case V4L2_PIX_FMT_XRGB32:
-		return (32 * pixel_stride) >> 3;
-	default:
-		break;
-	}
-
-	return -EINVAL;
-}
-EXPORT_SYMBOL_GPL(ipu_stride_to_bytes);
-
 int ipu_degrees_to_rot_mode(enum ipu_rotate_mode *mode, int degrees,
 			    bool hflip, bool vflip)
 {
+1-1
drivers/hv/hv_balloon.c
···

 	/* Refuse to balloon below the floor. */
 	if (avail_pages < num_pages || avail_pages - num_pages < floor) {
-		pr_warn("Balloon request will be partially fulfilled. %s\n",
+		pr_info("Balloon request will be partially fulfilled. %s\n",
 			avail_pages < num_pages ? "Not enough memory." :
 			"Balloon floor reached.");

+1-1
drivers/i2c/busses/Kconfig
···

 config I2C_MLXBF
 	tristate "Mellanox BlueField I2C controller"
-	depends on ARM64
+	depends on MELLANOX_PLATFORM && ARM64
 	help
 	  Enabling this option will add I2C SMBus support for Mellanox BlueField
 	  system.
+18-32
drivers/i2c/busses/i2c-designware-slave.c
···
 	u32 raw_stat, stat, enabled, tmp;
 	u8 val = 0, slave_activity;

-	regmap_read(dev->map, DW_IC_INTR_STAT, &stat);
 	regmap_read(dev->map, DW_IC_ENABLE, &enabled);
 	regmap_read(dev->map, DW_IC_RAW_INTR_STAT, &raw_stat);
 	regmap_read(dev->map, DW_IC_STATUS, &tmp);
···
 	if (!enabled || !(raw_stat & ~DW_IC_INTR_ACTIVITY) || !dev->slave)
 		return 0;

+	stat = i2c_dw_read_clear_intrbits_slave(dev);
 	dev_dbg(dev->dev,
 		"%#x STATUS SLAVE_ACTIVITY=%#x : RAW_INTR_STAT=%#x : INTR_STAT=%#x\n",
 		enabled, slave_activity, raw_stat, stat);

-	if ((stat & DW_IC_INTR_RX_FULL) && (stat & DW_IC_INTR_STOP_DET))
-		i2c_slave_event(dev->slave, I2C_SLAVE_WRITE_REQUESTED, &val);
+	if (stat & DW_IC_INTR_RX_FULL) {
+		if (dev->status != STATUS_WRITE_IN_PROGRESS) {
+			dev->status = STATUS_WRITE_IN_PROGRESS;
+			i2c_slave_event(dev->slave, I2C_SLAVE_WRITE_REQUESTED,
+					&val);
+		}
+
+		regmap_read(dev->map, DW_IC_DATA_CMD, &tmp);
+		val = tmp;
+		if (!i2c_slave_event(dev->slave, I2C_SLAVE_WRITE_RECEIVED,
+				     &val))
+			dev_vdbg(dev->dev, "Byte %X acked!", val);
+	}

 	if (stat & DW_IC_INTR_RD_REQ) {
 		if (slave_activity) {
-			if (stat & DW_IC_INTR_RX_FULL) {
-				regmap_read(dev->map, DW_IC_DATA_CMD, &tmp);
-				val = tmp;
+			regmap_read(dev->map, DW_IC_CLR_RD_REQ, &tmp);

-				if (!i2c_slave_event(dev->slave,
-						     I2C_SLAVE_WRITE_RECEIVED,
-						     &val)) {
-					dev_vdbg(dev->dev, "Byte %X acked!",
-						 val);
-				}
-				regmap_read(dev->map, DW_IC_CLR_RD_REQ, &tmp);
-				stat = i2c_dw_read_clear_intrbits_slave(dev);
-			} else {
-				regmap_read(dev->map, DW_IC_CLR_RD_REQ, &tmp);
-				regmap_read(dev->map, DW_IC_CLR_RX_UNDER, &tmp);
-				stat = i2c_dw_read_clear_intrbits_slave(dev);
-			}
+			dev->status = STATUS_READ_IN_PROGRESS;
 			if (!i2c_slave_event(dev->slave,
 					     I2C_SLAVE_READ_REQUESTED,
 					     &val))
···
 		if (!i2c_slave_event(dev->slave, I2C_SLAVE_READ_PROCESSED,
 				     &val))
 			regmap_read(dev->map, DW_IC_CLR_RX_DONE, &tmp);
-
-		i2c_slave_event(dev->slave, I2C_SLAVE_STOP, &val);
-		stat = i2c_dw_read_clear_intrbits_slave(dev);
-		return 1;
 	}

-	if (stat & DW_IC_INTR_RX_FULL) {
-		regmap_read(dev->map, DW_IC_DATA_CMD, &tmp);
-		val = tmp;
-		if (!i2c_slave_event(dev->slave, I2C_SLAVE_WRITE_RECEIVED,
-				     &val))
-			dev_vdbg(dev->dev, "Byte %X acked!", val);
-	} else {
+	if (stat & DW_IC_INTR_STOP_DET) {
+		dev->status = STATUS_IDLE;
 		i2c_slave_event(dev->slave, I2C_SLAVE_STOP, &val);
-		stat = i2c_dw_read_clear_intrbits_slave(dev);
 	}

 	return 1;
···
 	struct dw_i2c_dev *dev = dev_id;
 	int ret;

-	i2c_dw_read_clear_intrbits_slave(dev);
 	ret = i2c_dw_irq_handler_slave(dev);
 	if (ret > 0)
 		complete(&dev->cmd_complete);
+86-118
drivers/i2c/busses/i2c-mlxbf.c
···
 * Master. Default value is set to 400MHz.
 */
#define MLXBF_I2C_TYU_PLL_OUT_FREQ	(400 * 1000 * 1000)
-/* Reference clock for Bluefield 1 - 156 MHz. */
-#define MLXBF_I2C_TYU_PLL_IN_FREQ	(156 * 1000 * 1000)
-/* Reference clock for BlueField 2 - 200 MHz. */
-#define MLXBF_I2C_YU_PLL_IN_FREQ	(200 * 1000 * 1000)
+/* Reference clock for Bluefield - 156 MHz. */
+#define MLXBF_I2C_PLL_IN_FREQ		(156 * 1000 * 1000)

/* Constant used to determine the PLL frequency. */
#define MLNXBF_I2C_COREPLL_CONST	16384
···

#define MLXBF_I2C_FREQUENCY_1GHZ  1000000000

-static void mlxbf_i2c_write(void __iomem *io, int reg, u32 val)
-{
-	writel(val, io + reg);
-}
-
-static u32 mlxbf_i2c_read(void __iomem *io, int reg)
-{
-	return readl(io + reg);
-}
-
-/*
- * This function is used to read data from Master GW Data Descriptor.
- * Data bytes in the Master GW Data Descriptor are shifted left so the
- * data starts at the MSB of the descriptor registers as set by the
- * underlying hardware. TYU_READ_DATA enables byte swapping while
- * reading data bytes, and MUST be called by the SMBus read routines
- * to copy data from the 32 * 32-bit HW Data registers a.k.a Master GW
- * Data Descriptor.
- */
-static u32 mlxbf_i2c_read_data(void __iomem *io, int reg)
-{
-	return (u32)be32_to_cpu(mlxbf_i2c_read(io, reg));
-}
-
-/*
- * This function is used to write data to the Master GW Data Descriptor.
- * Data copied to the Master GW Data Descriptor MUST be shifted left so
- * the data starts at the MSB of the descriptor registers as required by
- * the underlying hardware. TYU_WRITE_DATA enables byte swapping when
- * writing data bytes, and MUST be called by the SMBus write routines to
- * copy data to the 32 * 32-bit HW Data registers a.k.a Master GW Data
- * Descriptor.
- */
-static void mlxbf_i2c_write_data(void __iomem *io, int reg, u32 val)
-{
-	mlxbf_i2c_write(io, reg, (u32)cpu_to_be32(val));
-}
-
/*
 * Function to poll a set of bits at a specific address; it checks whether
 * the bits are equal to zero when eq_zero is set to 'true', and not equal
···
	timeout = (timeout / MLXBF_I2C_POLL_FREQ_IN_USEC) + 1;

	do {
-		bits = mlxbf_i2c_read(io, addr) & mask;
+		bits = readl(io + addr) & mask;
		if (eq_zero ? bits == 0 : bits != 0)
			return eq_zero ? 1 : bits;
		udelay(MLXBF_I2C_POLL_FREQ_IN_USEC);
···
			  MLXBF_I2C_SMBUS_TIMEOUT);

	/* Read cause status bits. */
-	cause_status_bits = mlxbf_i2c_read(priv->mst_cause->io,
-					   MLXBF_I2C_CAUSE_ARBITER);
+	cause_status_bits = readl(priv->mst_cause->io +
+				  MLXBF_I2C_CAUSE_ARBITER);
	cause_status_bits &= MLXBF_I2C_CAUSE_MASTER_ARBITER_BITS_MASK;

	/*
	 * Parse both Cause and Master GW bits, then return transaction status.
	 */

-	master_status_bits = mlxbf_i2c_read(priv->smbus->io,
-					    MLXBF_I2C_SMBUS_MASTER_STATUS);
+	master_status_bits = readl(priv->smbus->io +
+				   MLXBF_I2C_SMBUS_MASTER_STATUS);
	master_status_bits &= MLXBF_I2C_SMBUS_MASTER_STATUS_MASK;

	if (mlxbf_i2c_smbus_transaction_success(master_status_bits,
···

	aligned_length = round_up(length, 4);

-	/* Copy data bytes from 4-byte aligned source buffer. */
+	/*
+	 * Copy data bytes from 4-byte aligned source buffer.
+	 * Data copied to the Master GW Data Descriptor MUST be shifted
+	 * left so the data starts at the MSB of the descriptor registers
+	 * as required by the underlying hardware. Enable byte swapping
+	 * when writing data bytes to the 32 * 32-bit HW Data registers
+	 * a.k.a Master GW Data Descriptor.
+	 */
	for (offset = 0; offset < aligned_length; offset += sizeof(u32)) {
		data32 = *((u32 *)(data + offset));
-		mlxbf_i2c_write_data(priv->smbus->io, addr + offset, data32);
+		iowrite32be(data32, priv->smbus->io + addr + offset);
	}
}
···

	mask = sizeof(u32) - 1;

+	/*
+	 * Data bytes in the Master GW Data Descriptor are shifted left
+	 * so the data starts at the MSB of the descriptor registers as
+	 * set by the underlying hardware. Enable byte swapping while
+	 * reading data bytes from the 32 * 32-bit HW Data registers
+	 * a.k.a Master GW Data Descriptor.
+	 */
+
	for (offset = 0; offset < (length & ~mask); offset += sizeof(u32)) {
-		data32 = mlxbf_i2c_read_data(priv->smbus->io, addr + offset);
+		data32 = ioread32be(priv->smbus->io + addr + offset);
		*((u32 *)(data + offset)) = data32;
	}

	if (!(length & mask))
		return;

-	data32 = mlxbf_i2c_read_data(priv->smbus->io, addr + offset);
+	data32 = ioread32be(priv->smbus->io + addr + offset);

	for (byte = 0; byte < (length & mask); byte++) {
		data[offset + byte] = data32 & GENMASK(7, 0);
···
	command |= rol32(pec_en, MLXBF_I2C_MASTER_SEND_PEC_SHIFT);

	/* Clear status bits. */
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_MASTER_STATUS, 0x0);
+	writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_STATUS);
	/* Set the cause data. */
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_CAUSE_OR_CLEAR, ~0x0);
+	writel(~0x0, priv->smbus->io + MLXBF_I2C_CAUSE_OR_CLEAR);
	/* Zero PEC byte. */
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_MASTER_PEC, 0x0);
+	writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_PEC);
	/* Zero byte count. */
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_RS_BYTES, 0x0);
+	writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_RS_BYTES);

	/* GW activation. */
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_MASTER_GW, command);
+	writel(command, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_GW);

	/*
	 * Poll master status and check status bits. An ACK is sent when
···
	 * needs to be 'manually' reset. This should be removed in
	 * next tag integration.
	 */
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_MASTER_FSM,
-			MLXBF_I2C_SMBUS_MASTER_FSM_PS_STATE_MASK);
+	writel(MLXBF_I2C_SMBUS_MASTER_FSM_PS_STATE_MASK,
+	       priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_FSM);
	}

	return ret;
···
	timer |= mlxbf_i2c_set_timer(priv, timings->scl_low,
				     false, MLXBF_I2C_MASK_16,
				     MLXBF_I2C_SHIFT_16);
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_TIMER_SCL_LOW_SCL_HIGH,
-			timer);
+	writel(timer, priv->smbus->io +
+		MLXBF_I2C_SMBUS_TIMER_SCL_LOW_SCL_HIGH);

	timer = mlxbf_i2c_set_timer(priv, timings->sda_rise, false,
				    MLXBF_I2C_MASK_8, MLXBF_I2C_SHIFT_0);
···
				     MLXBF_I2C_MASK_8, MLXBF_I2C_SHIFT_16);
	timer |= mlxbf_i2c_set_timer(priv, timings->scl_fall, false,
				     MLXBF_I2C_MASK_8, MLXBF_I2C_SHIFT_24);
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_TIMER_FALL_RISE_SPIKE,
-			timer);
+	writel(timer, priv->smbus->io +
+		MLXBF_I2C_SMBUS_TIMER_FALL_RISE_SPIKE);

	timer = mlxbf_i2c_set_timer(priv, timings->hold_start, true,
				    MLXBF_I2C_MASK_16, MLXBF_I2C_SHIFT_0);
	timer |= mlxbf_i2c_set_timer(priv, timings->hold_data, true,
				     MLXBF_I2C_MASK_16, MLXBF_I2C_SHIFT_16);
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_TIMER_THOLD, timer);
+	writel(timer, priv->smbus->io + MLXBF_I2C_SMBUS_TIMER_THOLD);

	timer = mlxbf_i2c_set_timer(priv, timings->setup_start, true,
				    MLXBF_I2C_MASK_16, MLXBF_I2C_SHIFT_0);
	timer |= mlxbf_i2c_set_timer(priv, timings->setup_stop, true,
				     MLXBF_I2C_MASK_16, MLXBF_I2C_SHIFT_16);
-	mlxbf_i2c_write(priv->smbus->io,
-			MLXBF_I2C_SMBUS_TIMER_TSETUP_START_STOP, timer);
+	writel(timer, priv->smbus->io +
+		MLXBF_I2C_SMBUS_TIMER_TSETUP_START_STOP);

	timer = mlxbf_i2c_set_timer(priv, timings->setup_data, true,
				    MLXBF_I2C_MASK_16, MLXBF_I2C_SHIFT_0);
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_TIMER_TSETUP_DATA,
-			timer);
+	writel(timer, priv->smbus->io + MLXBF_I2C_SMBUS_TIMER_TSETUP_DATA);

	timer = mlxbf_i2c_set_timer(priv, timings->buf, false,
				    MLXBF_I2C_MASK_16, MLXBF_I2C_SHIFT_0);
	timer |= mlxbf_i2c_set_timer(priv, timings->thigh_max, false,
				     MLXBF_I2C_MASK_16, MLXBF_I2C_SHIFT_16);
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_THIGH_MAX_TBUF,
-			timer);
+	writel(timer, priv->smbus->io + MLXBF_I2C_SMBUS_THIGH_MAX_TBUF);

	timer = timings->timeout;
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SCL_LOW_TIMEOUT,
-			timer);
+	writel(timer, priv->smbus->io + MLXBF_I2C_SMBUS_SCL_LOW_TIMEOUT);
}

enum mlxbf_i2c_timings_config {
···
	 * platform firmware; disabling the bus might compromise the system
	 * functionality.
	 */
-	config_reg = mlxbf_i2c_read(gpio_res->io,
-				    MLXBF_I2C_GPIO_0_FUNC_EN_0);
+	config_reg = readl(gpio_res->io + MLXBF_I2C_GPIO_0_FUNC_EN_0);
	config_reg = MLXBF_I2C_GPIO_SMBUS_GW_ASSERT_PINS(priv->bus,
							 config_reg);
-	mlxbf_i2c_write(gpio_res->io, MLXBF_I2C_GPIO_0_FUNC_EN_0,
-			config_reg);
+	writel(config_reg, gpio_res->io + MLXBF_I2C_GPIO_0_FUNC_EN_0);

-	config_reg = mlxbf_i2c_read(gpio_res->io,
-				    MLXBF_I2C_GPIO_0_FORCE_OE_EN);
+	config_reg = readl(gpio_res->io + MLXBF_I2C_GPIO_0_FORCE_OE_EN);
	config_reg = MLXBF_I2C_GPIO_SMBUS_GW_RESET_PINS(priv->bus,
							config_reg);
-	mlxbf_i2c_write(gpio_res->io, MLXBF_I2C_GPIO_0_FORCE_OE_EN,
-			config_reg);
+	writel(config_reg, gpio_res->io + MLXBF_I2C_GPIO_0_FORCE_OE_EN);

	mutex_unlock(gpio_res->lock);
···
	u32 corepll_val;
	u16 core_f;

-	pad_frequency = MLXBF_I2C_TYU_PLL_IN_FREQ;
+	pad_frequency = MLXBF_I2C_PLL_IN_FREQ;

-	corepll_val = mlxbf_i2c_read(corepll_res->io,
-				     MLXBF_I2C_CORE_PLL_REG1);
+	corepll_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG1);

	/* Get Core PLL configuration bits. */
	core_f = rol32(corepll_val, MLXBF_I2C_COREPLL_CORE_F_TYU_SHIFT) &
···
	u8 core_od, core_r;
	u32 core_f;

-	pad_frequency = MLXBF_I2C_YU_PLL_IN_FREQ;
+	pad_frequency = MLXBF_I2C_PLL_IN_FREQ;

-	corepll_reg1_val = mlxbf_i2c_read(corepll_res->io,
-					  MLXBF_I2C_CORE_PLL_REG1);
-	corepll_reg2_val = mlxbf_i2c_read(corepll_res->io,
-					  MLXBF_I2C_CORE_PLL_REG2);
+	corepll_reg1_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG1);
+	corepll_reg2_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG2);

	/* Get Core PLL configuration bits */
	core_f = rol32(corepll_reg1_val, MLXBF_I2C_COREPLL_CORE_F_YU_SHIFT) &
···
	 * (7-bit address, 1 status bit (1 if enabled, 0 if not)).
	 */
	for (reg = 0; reg < reg_cnt; reg++) {
-		slave_reg = mlxbf_i2c_read(priv->smbus->io,
+		slave_reg = readl(priv->smbus->io +
			MLXBF_I2C_SMBUS_SLAVE_ADDR_CFG + reg * 0x4);
		/*
		 * Each register holds 4 slave addresses. So, we have to keep
···

	/* Enable the slave address and update the register. */
	slave_reg |= (1 << MLXBF_I2C_SMBUS_SLAVE_ADDR_EN_BIT) << (byte * 8);
-	mlxbf_i2c_write(priv->smbus->io,
-			MLXBF_I2C_SMBUS_SLAVE_ADDR_CFG + reg * 0x4, slave_reg);
+	writel(slave_reg, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_ADDR_CFG +
+		reg * 0x4);

	return 0;
}
···
	 * (7-bit address, 1 status bit (1 if enabled, 0 if not)).
	 */
	for (reg = 0; reg < reg_cnt; reg++) {
-		slave_reg = mlxbf_i2c_read(priv->smbus->io,
+		slave_reg = readl(priv->smbus->io +
			MLXBF_I2C_SMBUS_SLAVE_ADDR_CFG + reg * 0x4);

		/* Check whether the address slots are empty. */
···

	/* Cleanup the slave address slot. */
	slave_reg &= ~(GENMASK(7, 0) << (slave_byte * 8));
-	mlxbf_i2c_write(priv->smbus->io,
-			MLXBF_I2C_SMBUS_SLAVE_ADDR_CFG + reg * 0x4, slave_reg);
+	writel(slave_reg, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_ADDR_CFG +
+		reg * 0x4);

	return 0;
}
···
	int ret;

	/* Reset FSM. */
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SLAVE_FSM, 0);
+	writel(0, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_FSM);

	/*
	 * Enable slave cause interrupt bits. Drive
···
	 * masters issue a Read and Write, respectively. But, clear all
	 * interrupts first.
	 */
-	mlxbf_i2c_write(priv->slv_cause->io,
-			MLXBF_I2C_CAUSE_OR_CLEAR, ~0);
+	writel(~0, priv->slv_cause->io + MLXBF_I2C_CAUSE_OR_CLEAR);
	int_reg = MLXBF_I2C_CAUSE_READ_WAIT_FW_RESPONSE;
	int_reg |= MLXBF_I2C_CAUSE_WRITE_SUCCESS;
-	mlxbf_i2c_write(priv->slv_cause->io,
-			MLXBF_I2C_CAUSE_OR_EVTEN0, int_reg);
+	writel(int_reg, priv->slv_cause->io + MLXBF_I2C_CAUSE_OR_EVTEN0);

	/* Finally, set the 'ready' bit to start handling transactions. */
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SLAVE_READY, 0x1);
+	writel(0x1, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_READY);

	/* Initialize the cause coalesce resource. */
	ret = mlxbf_i2c_init_coalesce(pdev, priv);
···
		MLXBF_I2C_CAUSE_YU_SLAVE_BIT :
		priv->bus + MLXBF_I2C_CAUSE_TYU_SLAVE_BIT;

-	coalesce0_reg = mlxbf_i2c_read(priv->coalesce->io,
-				       MLXBF_I2C_CAUSE_COALESCE_0);
+	coalesce0_reg = readl(priv->coalesce->io + MLXBF_I2C_CAUSE_COALESCE_0);
	is_set = coalesce0_reg & (1 << slave_shift);

	if (!is_set)
		return false;

	/* Check the source of the interrupt, i.e. whether a Read or Write. */
-	cause_reg = mlxbf_i2c_read(priv->slv_cause->io,
-				   MLXBF_I2C_CAUSE_ARBITER);
+	cause_reg = readl(priv->slv_cause->io + MLXBF_I2C_CAUSE_ARBITER);
	if (cause_reg & MLXBF_I2C_CAUSE_READ_WAIT_FW_RESPONSE)
		*read = true;
	else if (cause_reg & MLXBF_I2C_CAUSE_WRITE_SUCCESS)
		*write = true;

	/* Clear cause bits. */
-	mlxbf_i2c_write(priv->slv_cause->io, MLXBF_I2C_CAUSE_OR_CLEAR, ~0x0);
+	writel(~0x0, priv->slv_cause->io + MLXBF_I2C_CAUSE_OR_CLEAR);

	return true;
}
···
	 * address, if supplied.
	 */
	if (recv_bytes > 0) {
-		data32 = mlxbf_i2c_read_data(priv->smbus->io,
-					     MLXBF_I2C_SLAVE_DATA_DESC_ADDR);
+		data32 = ioread32be(priv->smbus->io +
+				    MLXBF_I2C_SLAVE_DATA_DESC_ADDR);

		/* Parse the received bytes. */
		switch (recv_bytes) {
···
	control32 |= rol32(write_size, MLXBF_I2C_SLAVE_WRITE_BYTES_SHIFT);
	control32 |= rol32(pec_en, MLXBF_I2C_SLAVE_SEND_PEC_SHIFT);

-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SLAVE_GW, control32);
+	writel(control32, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_GW);

	/*
	 * Wait until the transfer is completed; the driver will wait
···
	mlxbf_smbus_slave_wait_for_idle(priv, MLXBF_I2C_SMBUS_TIMEOUT);

	/* Release the Slave GW. */
-	mlxbf_i2c_write(priv->smbus->io,
-			MLXBF_I2C_SMBUS_SLAVE_RS_MASTER_BYTES, 0x0);
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SLAVE_PEC, 0x0);
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SLAVE_READY, 0x1);
+	writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_RS_MASTER_BYTES);
+	writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_PEC);
+	writel(0x1, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_READY);

	return 0;
}
···
	i2c_slave_event(slave, I2C_SLAVE_STOP, &value);

	/* Release the Slave GW. */
-	mlxbf_i2c_write(priv->smbus->io,
-			MLXBF_I2C_SMBUS_SLAVE_RS_MASTER_BYTES, 0x0);
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SLAVE_PEC, 0x0);
-	mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SLAVE_READY, 0x1);
+	writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_RS_MASTER_BYTES);
+	writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_PEC);
+	writel(0x1, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_READY);

	return ret;
}
···
	 * slave, if the higher 8 bits are sent then the slave expect N bytes
	 * from the master.
	 */
-	rw_bytes_reg = mlxbf_i2c_read(priv->smbus->io,
-				      MLXBF_I2C_SMBUS_SLAVE_RS_MASTER_BYTES);
+	rw_bytes_reg = readl(priv->smbus->io +
+			     MLXBF_I2C_SMBUS_SLAVE_RS_MASTER_BYTES);
	recv_bytes = (rw_bytes_reg >> 8) & GENMASK(7, 0);

	/*
···

MODULE_DEVICE_TABLE(of, mlxbf_i2c_dt_ids);

+#ifdef CONFIG_ACPI
static const struct acpi_device_id mlxbf_i2c_acpi_ids[] = {
	{ "MLNXBF03", (kernel_ulong_t)&mlxbf_i2c_chip[MLXBF_I2C_CHIP_TYPE_1] },
	{ "MLNXBF23", (kernel_ulong_t)&mlxbf_i2c_chip[MLXBF_I2C_CHIP_TYPE_2] },
···
	return ret;
}
+#else
+static int mlxbf_i2c_acpi_probe(struct device *dev, struct mlxbf_i2c_priv *priv)
+{
+	return -ENOENT;
+}
+#endif /* CONFIG_ACPI */

static int mlxbf_i2c_of_probe(struct device *dev, struct mlxbf_i2c_priv *priv)
{
···
	.driver = {
		.name = "i2c-mlxbf",
		.of_match_table = mlxbf_i2c_dt_ids,
+#ifdef CONFIG_ACPI
		.acpi_match_table = ACPI_PTR(mlxbf_i2c_acpi_ids),
+#endif /* CONFIG_ACPI */
	},
};
···
module_exit(mlxbf_i2c_exit);

MODULE_DESCRIPTION("Mellanox BlueField I2C bus driver");
-MODULE_AUTHOR("Khalil Blaiech <kblaiech@mellanox.com>");
+MODULE_AUTHOR("Khalil Blaiech <kblaiech@nvidia.com>");
MODULE_LICENSE("GPL v2");
···
/**
 * srpt_unregister_mad_agent - unregister MAD callback functions
 * @sdev: SRPT HCA pointer.
+ * @port_cnt: number of ports with registered MAD
 *
 * Note: It is safe to call this function more than once for the same device.
 */
-static void srpt_unregister_mad_agent(struct srpt_device *sdev)
+static void srpt_unregister_mad_agent(struct srpt_device *sdev, int port_cnt)
{
	struct ib_port_modify port_modify = {
		.clr_port_cap_mask = IB_PORT_DEVICE_MGMT_SUP,
···
	struct srpt_port *sport;
	int i;

-	for (i = 1; i <= sdev->device->phys_port_cnt; i++) {
+	for (i = 1; i <= port_cnt; i++) {
		sport = &sdev->port[i - 1];
		WARN_ON(sport->port != i);
		if (sport->mad_agent) {
···
		if (ret) {
			pr_err("MAD registration failed for %s-%d.\n",
			       dev_name(&sdev->device->dev), i);
-			goto err_event;
+			i--;
+			goto err_port;
		}
	}

···
	pr_debug("added %s.\n", dev_name(&device->dev));
	return 0;

-err_event:
+err_port:
+	srpt_unregister_mad_agent(sdev, i);
	ib_unregister_event_handler(&sdev->event_handler);
err_cm:
	if (sdev->cm_id)
···
	struct srpt_device *sdev = client_data;
	int i;

-	srpt_unregister_mad_agent(sdev);
+	srpt_unregister_mad_agent(sdev, sdev->device->phys_port_cnt);

	ib_unregister_event_handler(&sdev->event_handler);

+1
drivers/infiniband/ulp/srpt/ib_srpt.h
···
 * @rdma_cm:		See below.
 * @rdma_cm.cm_id:	RDMA CM ID associated with the channel.
 * @cq:			IB completion queue for this channel.
+ * @cq_size:		Number of CQEs in @cq.
 * @zw_cqe:		Zero-length write CQE.
 * @rcu:		RCU head.
 * @kref:		kref for this channel.
+5-1
drivers/iommu/amd/amd_iommu_types.h
···
/* Only true if all IOMMUs support device IOTLBs */
extern bool amd_iommu_iotlb_sup;

-#define MAX_IRQS_PER_TABLE	256
+/*
+ * AMD IOMMU hardware only support 512 IRTEs despite
+ * the architectural limitation of 2048 entries.
+ */
+#define MAX_IRQS_PER_TABLE	512
#define IRQ_TABLE_ALIGNMENT	128

struct irq_remap_table {
···
 * @base:		Base address of the memory mapped IO registers
 * @pdev:		Pointer to platform device.
 * @ti_sci_id:		TI-SCI device identifier
+ * @unmapped_cnt:	Number of @unmapped_dev_ids entries
+ * @unmapped_dev_ids:	Pointer to an array of TI-SCI device identifiers of
+ *			unmapped event sources.
+ *			Unmapped Events are not part of the Global Event Map and
+ *			they are converted to Global event within INTA to be
+ *			received by the same INTA to generate an interrupt.
+ *			In case an interrupt request comes for a device which is
+ *			generating Unmapped Event, we must use the INTA's TI-SCI
+ *			device identifier in place of the source device
+ *			identifier to let sysfw know where it has to program the
+ *			Global Event number.
 */
struct ti_sci_inta_irq_domain {
	const struct ti_sci_handle *sci;
···
	void __iomem *base;
	struct platform_device *pdev;
	u32 ti_sci_id;
+
+	int unmapped_cnt;
+	u16 *unmapped_dev_ids;
};

#define to_vint_desc(e, i) container_of(e, struct ti_sci_inta_vint_desc, \
					events[i])
+
+static u16 ti_sci_inta_get_dev_id(struct ti_sci_inta_irq_domain *inta, u32 hwirq)
+{
+	u16 dev_id = HWIRQ_TO_DEVID(hwirq);
+	int i;
+
+	if (inta->unmapped_cnt == 0)
+		return dev_id;
+
+	/*
+	 * For devices sending Unmapped Events we must use the INTA's TI-SCI
+	 * device identifier number to be able to convert it to a Global Event
+	 * and map it to an interrupt.
+	 */
+	for (i = 0; i < inta->unmapped_cnt; i++) {
+		if (dev_id == inta->unmapped_dev_ids[i]) {
+			dev_id = inta->ti_sci_id;
+			break;
+		}
+	}
+
+	return dev_id;
+}

/**
 * ti_sci_inta_irq_handler() - Chained IRQ handler for the vint irqs
···
	u16 dev_id, dev_index;
	int err;

-	dev_id = HWIRQ_TO_DEVID(hwirq);
+	dev_id = ti_sci_inta_get_dev_id(inta, hwirq);
	dev_index = HWIRQ_TO_IRQID(hwirq);

	event_desc = &vint_desc->events[free_bit];
···
{
	struct ti_sci_inta_vint_desc *vint_desc;
	struct ti_sci_inta_irq_domain *inta;
+	u16 dev_id;

	vint_desc = to_vint_desc(event_desc, event_desc->vint_bit);
	inta = vint_desc->domain->host_data;
+	dev_id = ti_sci_inta_get_dev_id(inta, hwirq);
	/* free event irq */
	mutex_lock(&inta->vint_mutex);
	inta->sci->ops.rm_irq_ops.free_event_map(inta->sci,
-						 HWIRQ_TO_DEVID(hwirq),
-						 HWIRQ_TO_IRQID(hwirq),
+						 dev_id, HWIRQ_TO_IRQID(hwirq),
						 inta->ti_sci_id,
						 vint_desc->vint_id,
						 event_desc->global_event,
···
	.chip = &ti_sci_inta_msi_irq_chip,
};

+static int ti_sci_inta_get_unmapped_sources(struct ti_sci_inta_irq_domain *inta)
+{
+	struct device *dev = &inta->pdev->dev;
+	struct device_node *node = dev_of_node(dev);
+	struct of_phandle_iterator it;
+	int count, err, ret, i;
+
+	count = of_count_phandle_with_args(node, "ti,unmapped-event-sources", NULL);
+	if (count <= 0)
+		return 0;
+
+	inta->unmapped_dev_ids = devm_kcalloc(dev, count,
+					      sizeof(*inta->unmapped_dev_ids),
+					      GFP_KERNEL);
+	if (!inta->unmapped_dev_ids)
+		return -ENOMEM;
+
+	i = 0;
+	of_for_each_phandle(&it, err, node, "ti,unmapped-event-sources", NULL, 0) {
+		u32 dev_id;
+
+		ret = of_property_read_u32(it.node, "ti,sci-dev-id", &dev_id);
+		if (ret) {
+			dev_err(dev, "ti,sci-dev-id read failure for %pOFf\n", it.node);
+			of_node_put(it.node);
+			return ret;
+		}
+		inta->unmapped_dev_ids[i++] = dev_id;
+	}
+
+	inta->unmapped_cnt = count;
+
+	return 0;
+}
+
static int ti_sci_inta_irq_domain_probe(struct platform_device *pdev)
{
	struct irq_domain *parent_domain, *domain, *msi_domain;
···
	inta->base = devm_ioremap_resource(dev, res);
	if (IS_ERR(inta->base))
		return PTR_ERR(inta->base);
+
+	ret = ti_sci_inta_get_unmapped_sources(inta);
+	if (ret)
+		return ret;

	domain = irq_domain_add_linear(dev_of_node(dev),
				       ti_sci_get_num_resources(inta->vint),
+24-19
drivers/mtd/nand/raw/fsl_ifc_nand.c
···
 {
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	struct fsl_ifc_mtd *priv = nand_get_controller_data(chip);
+	struct fsl_ifc_ctrl *ctrl = priv->ctrl;
+	struct fsl_ifc_global __iomem *ifc_global = ctrl->gregs;
+	u32 csor;
+
+	csor = ifc_in32(&ifc_global->csor_cs[priv->bank].csor);
+
+	/* Must also set CSOR_NAND_ECC_ENC_EN if DEC_EN set */
+	if (csor & CSOR_NAND_ECC_DEC_EN) {
+		chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
+		mtd_set_ooblayout(mtd, &fsl_ifc_ooblayout_ops);
+
+		/* Hardware generates ECC per 512 Bytes */
+		chip->ecc.size = 512;
+		if ((csor & CSOR_NAND_ECC_MODE_MASK) == CSOR_NAND_ECC_MODE_4) {
+			chip->ecc.bytes = 8;
+			chip->ecc.strength = 4;
+		} else {
+			chip->ecc.bytes = 16;
+			chip->ecc.strength = 8;
+		}
+	} else {
+		chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+	}
 
 	dev_dbg(priv->dev, "%s: nand->numchips = %d\n", __func__,
 		nanddev_ntargets(&chip->base));
···
 	default:
 		dev_err(priv->dev, "bad csor %#x: bad page size\n", csor);
 		return -ENODEV;
-	}
-
-	/* Must also set CSOR_NAND_ECC_ENC_EN if DEC_EN set */
-	if (csor & CSOR_NAND_ECC_DEC_EN) {
-		chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
-		mtd_set_ooblayout(mtd, &fsl_ifc_ooblayout_ops);
-
-		/* Hardware generates ECC per 512 Bytes */
-		chip->ecc.size = 512;
-		if ((csor & CSOR_NAND_ECC_MODE_MASK) == CSOR_NAND_ECC_MODE_4) {
-			chip->ecc.bytes = 8;
-			chip->ecc.strength = 4;
-		} else {
-			chip->ecc.bytes = 16;
-			chip->ecc.strength = 8;
-		}
-	} else {
-		chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
-		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
 	}
 
 	ret = fsl_ifc_sram_init(priv);
···
 		return -EINVAL;
 	}
 
+	/* Default ECC settings in case they are not set in the device tree */
+	if (!chip->ecc.size)
+		chip->ecc.size = FMC2_ECC_STEP_SIZE;
+
+	if (!chip->ecc.strength)
+		chip->ecc.strength = FMC2_ECC_BCH8;
+
 	ret = nand_ecc_choose_conf(chip, &stm32_fmc2_nfc_ecc_caps,
 				   mtd->oobsize - FMC2_BBM_LEN);
 	if (ret) {
···
 
 	mtd_set_ooblayout(mtd, &stm32_fmc2_nfc_ooblayout_ops);
 
-	if (chip->options & NAND_BUSWIDTH_16)
-		stm32_fmc2_nfc_set_buswidth_16(nfc, true);
+	stm32_fmc2_nfc_setup(chip);
 
 	return 0;
 }
···
 	chip->controller = &nfc->base;
 	chip->options |= NAND_BUSWIDTH_AUTO | NAND_NO_SUBPAGE_WRITE |
 			 NAND_USES_DMA;
-
-	/* Default ECC settings */
-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
-	chip->ecc.size = FMC2_ECC_STEP_SIZE;
-	chip->ecc.strength = FMC2_ECC_BCH8;
 
 	/* Scan to find existence of the device */
 	ret = nand_scan(chip, nand->ncs);
+7-6
drivers/mtd/spi-nor/core.c
···
 
 	memcpy(&sfdp_params, nor->params, sizeof(sfdp_params));
 
-	if (spi_nor_parse_sfdp(nor, &sfdp_params)) {
+	if (spi_nor_parse_sfdp(nor, nor->params)) {
+		memcpy(nor->params, &sfdp_params, sizeof(*nor->params));
 		nor->addr_width = 0;
 		nor->flags &= ~SNOR_F_4B_OPCODES;
-	} else {
-		memcpy(nor->params, &sfdp_params, sizeof(*nor->params));
 	}
 }
 
···
 		/* already configured from SFDP */
 	} else if (nor->info->addr_width) {
 		nor->addr_width = nor->info->addr_width;
-	} else if (nor->mtd.size > 0x1000000) {
-		/* enable 4-byte addressing if the device exceeds 16MiB */
-		nor->addr_width = 4;
 	} else {
 		nor->addr_width = 3;
+	}
+
+	if (nor->addr_width == 3 && nor->mtd.size > 0x1000000) {
+		/* enable 4-byte addressing if the device exceeds 16MiB */
+		nor->addr_width = 4;
 	}
 
 	if (nor->addr_width > SPI_NOR_MAX_ADDR_WIDTH) {
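The reordered address-width selection above can be condensed into a small standalone model (hypothetical function and parameter names, not the driver's API): an explicitly configured width is honored as-is, and a resulting 3-byte width is promoted to 4-byte only when the part is larger than 16 MiB.

```c
#include <stdint.h>

#define SZ_16M 0x1000000ULL

/* Hypothetical condensed model of the patched selection logic:
 * sfdp_width and info_width stand in for nor->addr_width (set from
 * SFDP) and nor->info->addr_width; 0 means "not set". */
static int pick_addr_width(int sfdp_width, int info_width, uint64_t size)
{
	int width;

	if (sfdp_width)
		width = sfdp_width;	/* already configured from SFDP */
	else if (info_width)
		width = info_width;	/* explicit flash_info value */
	else
		width = 3;		/* default */

	/* enable 4-byte addressing if the device exceeds 16MiB */
	if (width == 3 && size > SZ_16M)
		width = 4;

	return width;
}
```

The key behavioral change mirrored here is ordering: the >16 MiB promotion now runs after the default is chosen, instead of being an alternative branch that explicit settings could bypass inconsistently.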
+11-3
drivers/net/can/dev.c
···
 	 */
 	struct sk_buff *skb = priv->echo_skb[idx];
 	struct canfd_frame *cf = (struct canfd_frame *)skb->data;
-	u8 len = cf->len;
 
-	*len_ptr = len;
+	/* get the real payload length for netdev statistics */
+	if (cf->can_id & CAN_RTR_FLAG)
+		*len_ptr = 0;
+	else
+		*len_ptr = cf->len;
+
 	priv->echo_skb[idx] = NULL;
 
 	return skb;
···
 	if (!skb)
 		return 0;
 
-	netif_rx(skb);
+	skb_get(skb);
+	if (netif_rx(skb) == NET_RX_SUCCESS)
+		dev_consume_skb_any(skb);
+	else
+		dev_kfree_skb_any(skb);
 
 	return len;
 }
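The statistics fix in the first hunk boils down to one rule: a remote transmission request (RTR) frame carries a DLC but no payload bytes on the wire, so it must count as zero. A minimal sketch (the helper name is hypothetical; `CAN_RTR_FLAG` is the standard flag value from `linux/can.h`):

```c
#include <stdint.h>

#define CAN_RTR_FLAG 0x40000000U /* remote transmission request */

/* Sketch of the corrected length accounting: RTR frames contribute
 * 0 bytes to the netdev byte counters, data frames their real length. */
static uint8_t echo_stats_len(uint32_t can_id, uint8_t dlen)
{
	return (can_id & CAN_RTR_FLAG) ? 0 : dlen;
}
```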
+7-5
drivers/net/can/flexcan.c
···
  * MX8MP FlexCAN3 03.00.17.01 yes yes no yes yes yes
  * VF610 FlexCAN3 ? no yes no yes yes? no
  * LS1021A FlexCAN2 03.00.04.00 no yes no no yes no
- * LX2160A FlexCAN3 03.00.23.00 no yes no no yes yes
+ * LX2160A FlexCAN3 03.00.23.00 no yes no yes yes yes
  *
  * Some SOCs do not have the RX_WARN & TX_WARN interrupt line connected.
  */
···
 static const struct flexcan_devtype_data fsl_vf610_devtype_data = {
 	.quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS |
 		FLEXCAN_QUIRK_DISABLE_MECR | FLEXCAN_QUIRK_USE_OFF_TIMESTAMP |
-		FLEXCAN_QUIRK_BROKEN_PERR_STATE,
+		FLEXCAN_QUIRK_BROKEN_PERR_STATE | FLEXCAN_QUIRK_SUPPORT_ECC,
 };
 
 static const struct flexcan_devtype_data fsl_ls1021a_r2_devtype_data = {
 	.quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS |
-		FLEXCAN_QUIRK_DISABLE_MECR | FLEXCAN_QUIRK_BROKEN_PERR_STATE |
-		FLEXCAN_QUIRK_USE_OFF_TIMESTAMP,
+		FLEXCAN_QUIRK_BROKEN_PERR_STATE | FLEXCAN_QUIRK_USE_OFF_TIMESTAMP,
 };
 
 static const struct flexcan_devtype_data fsl_lx2160a_r1_devtype_data = {
 	.quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS |
 		FLEXCAN_QUIRK_DISABLE_MECR | FLEXCAN_QUIRK_BROKEN_PERR_STATE |
-		FLEXCAN_QUIRK_USE_OFF_TIMESTAMP | FLEXCAN_QUIRK_SUPPORT_FD,
+		FLEXCAN_QUIRK_USE_OFF_TIMESTAMP | FLEXCAN_QUIRK_SUPPORT_FD |
+		FLEXCAN_QUIRK_SUPPORT_ECC,
 };
 
 static const struct can_bittiming_const flexcan_bittiming_const = {
···
 {
 	struct net_device *dev = platform_get_drvdata(pdev);
 
+	device_set_wakeup_enable(&pdev->dev, false);
+	device_set_wakeup_capable(&pdev->dev, false);
 	unregister_flexcandev(dev);
 	pm_runtime_disable(&pdev->dev);
 	free_candev(dev);
+8-3
drivers/net/can/peak_canfd/peak_canfd.c
···
 	cf_len = get_can_dlc(pucan_msg_get_dlc(msg));
 
 	/* if this frame is an echo, */
-	if ((rx_msg_flags & PUCAN_MSG_LOOPED_BACK) &&
-	    !(rx_msg_flags & PUCAN_MSG_SELF_RECEIVE)) {
+	if (rx_msg_flags & PUCAN_MSG_LOOPED_BACK) {
 		unsigned long flags;
 
 		spin_lock_irqsave(&priv->echo_lock, flags);
···
 		netif_wake_queue(priv->ndev);
 
 		spin_unlock_irqrestore(&priv->echo_lock, flags);
-		return 0;
+
+		/* if this frame is only an echo, stop here. Otherwise,
+		 * continue to push this application self-received frame into
+		 * its own rx queue.
+		 */
+		if (!(rx_msg_flags & PUCAN_MSG_SELF_RECEIVE))
+			return 0;
 	}
 
 	/* otherwise, it should be pushed into rx fifo */
···
 	priv->port_mtu[port] = new_mtu;
 
 	for (i = 0; i < QCA8K_NUM_PORTS; i++)
-		if (priv->port_mtu[port] > mtu)
-			mtu = priv->port_mtu[port];
+		if (priv->port_mtu[i] > mtu)
+			mtu = priv->port_mtu[i];
 
 	/* Include L2 header / FCS length */
 	qca8k_write(priv, QCA8K_MAX_FRAME_SIZE, mtu + ETH_HLEN + ETH_FCS_LEN);
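The bug fixed above is a classic loop-index slip: the scan indexed the array with `port` (the port being changed) instead of the iterator `i`, so the switch-wide frame size never reflected other ports. A standalone sketch of the corrected scan (self-contained model, not the driver's API):

```c
#define QCA8K_NUM_PORTS 7

/* Sketch of the corrected max-MTU scan: the switch programs a single
 * frame-size register shared by all ports, so it must cover the
 * largest MTU configured on any port. Using the loop variable i,
 * rather than the port currently being changed, is the whole fix. */
static int qca8k_max_mtu(const int port_mtu[QCA8K_NUM_PORTS])
{
	int i, mtu = 0;

	for (i = 0; i < QCA8K_NUM_PORTS; i++)
		if (port_mtu[i] > mtu)
			mtu = port_mtu[i];

	return mtu;
}
```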
+2-1
drivers/net/ethernet/cadence/macb_main.c
···
 
 static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
 {
-	bool cloned = skb_cloned(*skb) || skb_header_cloned(*skb);
+	bool cloned = skb_cloned(*skb) || skb_header_cloned(*skb) ||
+		      skb_is_nonlinear(*skb);
 	int padlen = ETH_ZLEN - (*skb)->len;
 	int headroom = skb_headroom(*skb);
 	int tailroom = skb_tailroom(*skb);
···
 	 */
 #define FEC_QUIRK_HAS_FRREG		(1 << 16)
 
+/* Some FEC hardware blocks need the MMFR cleared at setup time to avoid
+ * the generation of an MII event. This must be avoided in the older
+ * FEC blocks where it will stop MII events being generated.
+ */
+#define FEC_QUIRK_CLEAR_SETUP_MII	(1 << 17)
+
 struct bufdesc_prop {
 	int qid;
 	/* Address of Rx and Tx buffers */
···
 		fcb_len = GMAC_FCB_LEN + GMAC_TXPAL_LEN;
 
 	/* make space for additional header when fcb is needed */
-	if (fcb_len && unlikely(skb_headroom(skb) < fcb_len)) {
-		struct sk_buff *skb_new;
-
-		skb_new = skb_realloc_headroom(skb, fcb_len);
-		if (!skb_new) {
+	if (fcb_len) {
+		if (unlikely(skb_cow_head(skb, fcb_len))) {
 			dev->stats.tx_errors++;
 			dev_kfree_skb_any(skb);
 			return NETDEV_TX_OK;
 		}
-
-		if (skb->sk)
-			skb_set_owner_w(skb_new, skb->sk);
-		dev_consume_skb_any(skb);
-		skb = skb_new;
 	}
 
 	/* total number of fragments in the SKB */
···
 
 	if (dev->features & NETIF_F_IP_CSUM ||
 	    priv->device_flags & FSL_GIANFAR_DEV_HAS_TIMER)
-		dev->needed_headroom = GMAC_FCB_LEN;
+		dev->needed_headroom = GMAC_FCB_LEN + GMAC_TXPAL_LEN;
 
 	/* Initializing some of the rx/tx queue level parameters */
 	for (i = 0; i < priv->num_tx_queues; i++) {
+32-4
drivers/net/ethernet/ibm/ibmvnic.c
···
 	if (adapter->state != VNIC_CLOSED) {
 		rc = ibmvnic_login(netdev);
 		if (rc)
-			return rc;
+			goto out;
 
 		rc = init_resources(adapter);
 		if (rc) {
 			netdev_err(netdev, "failed to initialize resources\n");
 			release_resources(adapter);
-			return rc;
+			goto out;
 		}
 	}
 
 	rc = __ibmvnic_open(netdev);
 
+out:
+	/*
+	 * If open fails due to a pending failover, set device state and
+	 * return. Device operation will be handled by reset routine.
+	 */
+	if (rc && adapter->failover_pending) {
+		adapter->state = VNIC_OPEN;
+		rc = 0;
+	}
 	return rc;
 }
···
 		   rwi->reset_reason);
 
 	rtnl_lock();
+	/*
+	 * Now that we have the rtnl lock, clear any pending failover.
+	 * This will ensure ibmvnic_open() has either completed or will
+	 * block until failover is complete.
+	 */
+	if (rwi->reset_reason == VNIC_RESET_FAILOVER)
+		adapter->failover_pending = false;
 
 	netif_carrier_off(netdev);
 	adapter->reset_reason = rwi->reset_reason;
···
 		/* CHANGE_PARAM requestor holds rtnl_lock */
 		rc = do_change_param_reset(adapter, rwi, reset_state);
 	} else if (adapter->force_reset_recovery) {
+		/*
+		 * Since we are doing a hard reset now, clear the
+		 * failover_pending flag so we don't ignore any
+		 * future MOBILITY or other resets.
+		 */
+		adapter->failover_pending = false;
+
 		/* Transport event occurred during previous reset */
 		if (adapter->wait_for_reset) {
 			/* Previous was CHANGE_PARAM; caller locked */
···
 	unsigned long flags;
 	int ret;
 
+	/*
+	 * If failover is pending don't schedule any other reset.
+	 * Instead let the failover complete. If there is already a
+	 * failover reset scheduled, we will detect and drop the
+	 * duplicate reset when walking the ->rwi_list below.
+	 */
 	if (adapter->state == VNIC_REMOVING ||
 	    adapter->state == VNIC_REMOVED ||
-	    adapter->failover_pending) {
+	    (adapter->failover_pending && reason != VNIC_RESET_FAILOVER)) {
 		ret = EBUSY;
 		netdev_dbg(netdev, "Adapter removing or pending failover, skipping reset\n");
 		goto err;
···
 	case IBMVNIC_CRQ_INIT:
 		dev_info(dev, "Partner initialized\n");
 		adapter->from_passive_init = true;
-		adapter->failover_pending = false;
 		if (!completion_done(&adapter->init_done)) {
 			complete(&adapter->init_done);
 			adapter->init_done_rc = -EIO;
···
 
 	ethtool_link_ksettings_zero_link_mode(ks, supported);
 
+	if (!idev->port_info) {
+		netdev_err(netdev, "port_info not initialized\n");
+		return -EOPNOTSUPP;
+	}
+
 	/* The port_info data is found in a DMA space that the NIC keeps
 	 * up-to-date, so there's no need to request the data from the
 	 * NIC, we already have it in our memory space.
···
 		break;
 	case HWTSTAMP_FILTER_ALL:
 	case HWTSTAMP_FILTER_NTP_ALL:
-		return -ERANGE;
 	case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
 	case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
 	case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
-		priv->rx_ts_enabled = HWTSTAMP_FILTER_PTP_V1_L4_EVENT;
-		cfg.rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_EVENT;
-		break;
+		return -ERANGE;
 	case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
 	case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
 	case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
···
 	u64 dma_end = 0;
 
 	/* Determine the overall bounds of all DMA regions */
-	for (dma_start = ~0ULL; r->size; r++) {
+	for (dma_start = ~0; r->size; r++) {
 		/* Take lower and upper limits */
 		if (r->dma_start < dma_start)
 			dma_start = r->dma_start;
+5-4
drivers/opp/core.c
···
 	struct opp_device *opp_dev, *temp;
 	int i;
 
+	/* Drop the lock as soon as we can */
+	list_del(&opp_table->node);
+	mutex_unlock(&opp_table_lock);
+
 	_of_clear_opp_table(opp_table);
 
 	/* Release clk */
···
 
 	mutex_destroy(&opp_table->genpd_virt_dev_lock);
 	mutex_destroy(&opp_table->lock);
-	list_del(&opp_table->node);
 	kfree(opp_table);
-
-	mutex_unlock(&opp_table_lock);
 }
 
 void dev_pm_opp_put_opp_table(struct opp_table *opp_table)
···
 		return ERR_PTR(-EINVAL);
 
 	opp_table = dev_pm_opp_get_opp_table(dev);
-	if (!IS_ERR(opp_table))
+	if (IS_ERR(opp_table))
 		return opp_table;
 
 	/* This should be called before OPPs are initialized */
···
 	 * ATU, so we should not program the ATU here.
 	 */
 	if (pp->bridge->child_ops == &dw_child_pcie_ops) {
-		struct resource_entry *entry =
-			resource_list_first_type(&pp->bridge->windows, IORESOURCE_MEM);
+		struct resource_entry *tmp, *entry = NULL;
+
+		/* Get last memory resource entry */
+		resource_list_for_each_entry(tmp, &pp->bridge->windows)
+			if (resource_type(tmp->res) == IORESOURCE_MEM)
+				entry = tmp;
 
 		dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX0,
 					  PCIE_ATU_TYPE_MEM, entry->res->start,
+10-13
drivers/pci/controller/pci-mvebu.c
···
 }
 
 /*
- * We can't use devm_of_pci_get_host_bridge_resources() because we
- * need to parse our special DT properties encoding the MEM and IO
- * apertures.
+ * devm_of_pci_get_host_bridge_resources() only sets up translatable
+ * resources, so we need extra resource setup parsing our special DT
+ * properties encoding the MEM and IO apertures.
  */
 static int mvebu_pcie_parse_request_resources(struct mvebu_pcie *pcie)
 {
 	struct device *dev = &pcie->pdev->dev;
-	struct device_node *np = dev->of_node;
 	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
 	int ret;
-
-	/* Get the bus range */
-	ret = of_pci_parse_bus_range(np, &pcie->busn);
-	if (ret) {
-		dev_err(dev, "failed to parse bus-range property: %d\n", ret);
-		return ret;
-	}
-	pci_add_resource(&bridge->windows, &pcie->busn);
 
 	/* Get the PCIe memory aperture */
 	mvebu_mbus_get_pcie_mem_aperture(&pcie->mem);
···
 
 	pcie->mem.name = "PCI MEM";
 	pci_add_resource(&bridge->windows, &pcie->mem);
+	ret = devm_request_resource(dev, &iomem_resource, &pcie->mem);
+	if (ret)
+		return ret;
 
 	/* Get the PCIe IO aperture */
 	mvebu_mbus_get_pcie_io_aperture(&pcie->io);
···
 		pcie->realio.name = "PCI I/O";
 
 		pci_add_resource(&bridge->windows, &pcie->realio);
+		ret = devm_request_resource(dev, &ioport_resource, &pcie->realio);
+		if (ret)
+			return ret;
 	}
 
-	return devm_request_pci_bus_resources(dev, &bridge->windows);
+	return 0;
 }
 
 /*
+7-2
drivers/pci/pci.c
···
 {
 	dev->acs_cap = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS);
 
-	if (dev->acs_cap)
-		pci_enable_acs(dev);
+	/*
+	 * Attempt to enable ACS regardless of capability because some Root
+	 * Ports (e.g. those quirked with *_intel_pch_acs_*) do not have
+	 * the standard ACS capability but still support ACS via those
+	 * quirks.
+	 */
+	pci_enable_acs(dev);
 }
 
 /**
···
 		ret = rdev->desc->fixed_uV;
 	} else if (rdev->supply) {
 		ret = regulator_get_voltage_rdev(rdev->supply->rdev);
+	} else if (rdev->supply_name) {
+		return -EPROBE_DEFER;
 	} else {
 		return -EINVAL;
 	}
+12-2
drivers/s390/crypto/ap_bus.c
···
 {
 	struct ap_device *ap_dev = to_ap_dev(dev);
 	struct ap_driver *ap_drv = to_ap_drv(dev->driver);
-	int card, queue, devres, drvres, rc;
+	int card, queue, devres, drvres, rc = -ENODEV;
+
+	if (!get_device(dev))
+		return rc;
 
 	if (is_queue_dev(dev)) {
 		/*
···
 		mutex_unlock(&ap_perms_mutex);
 		drvres = ap_drv->flags & AP_DRIVER_FLAG_DEFAULT;
 		if (!!devres != !!drvres)
-			return -ENODEV;
+			goto out;
 	}
 
 	/* Add queue/card to list of active queues/cards */
···
 		ap_dev->drv = NULL;
 	}
 
+out:
+	if (rc)
+		put_device(dev);
 	return rc;
 }
 
···
 	if (is_queue_dev(dev))
 		hash_del(&to_ap_queue(dev)->hnode);
 	spin_unlock_bh(&ap_queues_lock);
+
+	put_device(dev);
 
 	return 0;
 }
···
 				  __func__, ac->id, dom);
 			goto put_dev_and_continue;
 		}
+		/* get it and thus adjust reference counter */
+		get_device(dev);
 		if (decfg)
 			AP_DBF_INFO("%s(%d,%d) new (decfg) queue device created\n",
 				    __func__, ac->id, dom);
+16-14
drivers/s390/crypto/pkey_api.c
···
 #define PROTKEYBLOBBUFSIZE 256	/* protected key buffer size used internal */
 #define MAXAPQNSINLIST 64	/* max 64 apqns within a apqn list */
 
-/* mask of available pckmo subfunctions, fetched once at module init */
-static cpacf_mask_t pckmo_functions;
-
 /*
  * debug feature data and functions
  */
···
 			      const struct pkey_clrkey *clrkey,
 			      struct pkey_protkey *protkey)
 {
+	/* mask of available pckmo subfunctions */
+	static cpacf_mask_t pckmo_functions;
+
 	long fc;
 	int keysize;
 	u8 paramblock[64];
···
 		return -EINVAL;
 	}
 
-	/*
-	 * Check if the needed pckmo subfunction is available.
-	 * These subfunctions can be enabled/disabled by customers
-	 * in the LPAR profile or may even change on the fly.
-	 */
+	/* Did we already check for PCKMO ? */
+	if (!pckmo_functions.bytes[0]) {
+		/* no, so check now */
+		if (!cpacf_query(CPACF_PCKMO, &pckmo_functions))
+			return -ENODEV;
+	}
+	/* check for the pckmo subfunction we need now */
 	if (!cpacf_test_func(&pckmo_functions, fc)) {
 		DEBUG_ERR("%s pckmo functions not available\n", __func__);
 		return -ENODEV;
···
  */
 static int __init pkey_init(void)
 {
-	cpacf_mask_t kmc_functions;
+	cpacf_mask_t func_mask;
 
 	/*
 	 * The pckmo instruction should be available - even if we don't
···
 	 * is also the minimum level for the kmc instructions which
 	 * are able to work with protected keys.
 	 */
-	if (!cpacf_query(CPACF_PCKMO, &pckmo_functions))
+	if (!cpacf_query(CPACF_PCKMO, &func_mask))
 		return -ENODEV;
 
 	/* check for kmc instructions available */
-	if (!cpacf_query(CPACF_KMC, &kmc_functions))
+	if (!cpacf_query(CPACF_KMC, &func_mask))
 		return -ENODEV;
-	if (!cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_128) ||
-	    !cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_192) ||
-	    !cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_256))
+	if (!cpacf_test_func(&func_mask, CPACF_KMC_PAES_128) ||
+	    !cpacf_test_func(&func_mask, CPACF_KMC_PAES_192) ||
+	    !cpacf_test_func(&func_mask, CPACF_KMC_PAES_256))
 		return -ENODEV;
 
 	pkey_debug_init();
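The pattern the pkey patch adopts, querying the hardware function mask lazily on first use instead of once at module init, can be sketched with a self-contained model (all names here are hypothetical stand-ins, not the s390 CPACF API):

```c
#include <stdbool.h>

static int hw_queries; /* counts simulated hardware queries */

/* Stand-in for cpacf_query(): pretend the facility reports one
 * available subfunction bit. */
static bool fake_query(unsigned char *mask)
{
	hw_queries++;
	*mask = 0x80;
	return true;
}

/* Sketch of the lazy check: the mask is fetched on first use inside
 * the call path rather than at init time, because the underlying
 * facility can be toggled (e.g. in an LPAR profile) at runtime. */
static int check_fc(unsigned char fc_bit)
{
	static unsigned char mask; /* 0 means "not queried yet" */

	if (!mask && !fake_query(&mask))
		return -19; /* models -ENODEV */

	return (mask & fc_bit) ? 0 : -19;
}
```

Note the trade-off this sketch shares with the real patch: a mask that legitimately reads as all-zero is indistinguishable from "not queried yet", so the query is simply repeated in that case.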
···
 			reply_q->irq_poll_scheduled = false;
 			reply_q->irq_line_enable = true;
 			enable_irq(reply_q->os_irq);
+			/*
+			 * Go for one more round of processing the
+			 * reply descriptor post queue in case the HBA
+			 * firmware has posted some reply descriptors
+			 * while reenabling the IRQ.
+			 */
+			_base_process_reply_queue(reply_q);
 		}
 
 		return num_entries;
+1-14
drivers/spi/spi-bcm2835.c
···
 	struct spi_controller *ctlr = spi->controller;
 	struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr);
 	struct gpio_chip *chip;
-	enum gpio_lookup_flags lflags;
 	u32 cs;
 
 	/*
···
 	if (!chip)
 		return 0;
 
-	/*
-	 * Retrieve the corresponding GPIO line used for CS.
-	 * The inversion semantics will be handled by the GPIO core
-	 * code, so we pass GPIOD_OUT_LOW for "unasserted" and
-	 * the correct flag for inversion semantics. The SPI_CS_HIGH
-	 * on spi->mode cannot be checked for polarity in this case
-	 * as the flag use_gpio_descriptors enforces SPI_CS_HIGH.
-	 */
-	if (of_property_read_bool(spi->dev.of_node, "spi-cs-high"))
-		lflags = GPIO_ACTIVE_HIGH;
-	else
-		lflags = GPIO_ACTIVE_LOW;
 	spi->cs_gpiod = gpiochip_request_own_desc(chip, 8 - spi->chip_select,
 						  DRV_NAME,
-						  lflags,
+						  GPIO_LOOKUP_FLAGS_DEFAULT,
 						  GPIOD_OUT_LOW);
 	if (IS_ERR(spi->cs_gpiod))
 		return PTR_ERR(spi->cs_gpiod);
+4-6
drivers/spi/spi-fsl-dspi.c
···
 #ifdef CONFIG_PM_SLEEP
 static int dspi_suspend(struct device *dev)
 {
-	struct spi_controller *ctlr = dev_get_drvdata(dev);
-	struct fsl_dspi *dspi = spi_controller_get_devdata(ctlr);
+	struct fsl_dspi *dspi = dev_get_drvdata(dev);
 
 	if (dspi->irq)
 		disable_irq(dspi->irq);
-	spi_controller_suspend(ctlr);
+	spi_controller_suspend(dspi->ctlr);
 	clk_disable_unprepare(dspi->clk);
 
 	pinctrl_pm_select_sleep_state(dev);
···
 
 static int dspi_resume(struct device *dev)
 {
-	struct spi_controller *ctlr = dev_get_drvdata(dev);
-	struct fsl_dspi *dspi = spi_controller_get_devdata(ctlr);
+	struct fsl_dspi *dspi = dev_get_drvdata(dev);
 	int ret;
 
 	pinctrl_pm_select_default_state(dev);
···
 	ret = clk_prepare_enable(dspi->clk);
 	if (ret)
 		return ret;
-	spi_controller_resume(ctlr);
+	spi_controller_resume(dspi->ctlr);
 	if (dspi->irq)
 		enable_irq(dspi->irq);
 
+15-8
drivers/spi/spi-imx.c
···
 		goto out_master_put;
 	}
 
-	pm_runtime_enable(spi_imx->dev);
+	ret = clk_prepare_enable(spi_imx->clk_per);
+	if (ret)
+		goto out_master_put;
+
+	ret = clk_prepare_enable(spi_imx->clk_ipg);
+	if (ret)
+		goto out_put_per;
+
 	pm_runtime_set_autosuspend_delay(spi_imx->dev, MXC_RPM_TIMEOUT);
 	pm_runtime_use_autosuspend(spi_imx->dev);
-
-	ret = pm_runtime_get_sync(spi_imx->dev);
-	if (ret < 0) {
-		dev_err(spi_imx->dev, "failed to enable clock\n");
-		goto out_runtime_pm_put;
-	}
+	pm_runtime_set_active(spi_imx->dev);
+	pm_runtime_enable(spi_imx->dev);
 
 	spi_imx->spi_clk = clk_get_rate(spi_imx->clk_per);
 	/*
···
 	spi_imx_sdma_exit(spi_imx);
 out_runtime_pm_put:
 	pm_runtime_dont_use_autosuspend(spi_imx->dev);
-	pm_runtime_put_sync(spi_imx->dev);
+	pm_runtime_set_suspended(&pdev->dev);
 	pm_runtime_disable(spi_imx->dev);
+
+	clk_disable_unprepare(spi_imx->clk_ipg);
+out_put_per:
+	clk_disable_unprepare(spi_imx->clk_per);
 out_master_put:
 	spi_master_put(master);
 
···
 	In addition, it is recommended to declare a mmc-pwrseq on SDIO host above
 	WFx. Without it, you may encounter issues with warm boot. The mmc-pwrseq
 	should be compatible with mmc-pwrseq-simple. Please consult
-	Documentation/devicetree/bindings/mmc/mmc-pwrseq-simple.txt for more
+	Documentation/devicetree/bindings/mmc/mmc-pwrseq-simple.yaml for more
 	information.
 
 For SPI':'
···
 	depends on OF
 	select SERIAL_EARLYCON
 	select SERIAL_CORE_CONSOLE
+	default y if SERIAL_IMX_CONSOLE
 	help
 	  If you have enabled the earlycon on the Freescale IMX
 	  CPU you can make it the earlycon by answering Y to this option.
+3
drivers/tty/serial/serial_txx9.c
···
 
 #ifdef ENABLE_SERIAL_TXX9_PCI
 	ret = pci_register_driver(&serial_txx9_pci_driver);
+	if (ret) {
+		platform_driver_unregister(&serial_txx9_plat_driver);
+	}
 #endif
 	if (ret == 0)
 		goto out;
+4-2
drivers/tty/tty_io.c
···
 		tty->ops->shutdown(tty);
 	tty_save_termios(tty);
 	tty_driver_remove_tty(tty->driver, tty);
-	tty->port->itty = NULL;
+	if (tty->port)
+		tty->port->itty = NULL;
 	if (tty->link)
 		tty->link->port->itty = NULL;
-	tty_buffer_cancel_work(tty->port);
+	if (tty->port)
+		tty_buffer_cancel_work(tty->port);
 	if (tty->link)
 		tty_buffer_cancel_work(tty->link->port);
 
+2-22
drivers/tty/vt/vt.c
···
 	return rc;
 }
 
-static int con_font_copy(struct vc_data *vc, struct console_font_op *op)
-{
-	int con = op->height;
-	int rc;
-
-
-	console_lock();
-	if (vc->vc_mode != KD_TEXT)
-		rc = -EINVAL;
-	else if (!vc->vc_sw->con_font_copy)
-		rc = -ENOSYS;
-	else if (con < 0 || !vc_cons_allocated(con))
-		rc = -ENOTTY;
-	else if (con == vc->vc_num)	/* nothing to do */
-		rc = 0;
-	else
-		rc = vc->vc_sw->con_font_copy(vc, con);
-	console_unlock();
-	return rc;
-}
-
 int con_font_op(struct vc_data *vc, struct console_font_op *op)
 {
 	switch (op->op) {
···
 	case KD_FONT_OP_SET_DEFAULT:
 		return con_font_default(vc, op);
 	case KD_FONT_OP_COPY:
-		return con_font_copy(vc, op);
+		/* was buggy and never really used */
+		return -EINVAL;
 	}
 	return -ENOSYS;
 }
+19-17
drivers/tty/vt/vt_ioctl.c
···
 	return 0;
 }
 
-static inline int do_fontx_ioctl(int cmd,
+static inline int do_fontx_ioctl(struct vc_data *vc, int cmd,
 		struct consolefontdesc __user *user_cfd,
 		struct console_font_op *op)
 {
···
 		op->height = cfdarg.charheight;
 		op->charcount = cfdarg.charcount;
 		op->data = cfdarg.chardata;
-		return con_font_op(vc_cons[fg_console].d, op);
-	case GIO_FONTX: {
+		return con_font_op(vc, op);
+
+	case GIO_FONTX:
 		op->op = KD_FONT_OP_GET;
 		op->flags = KD_FONT_FLAG_OLD;
 		op->width = 8;
 		op->height = cfdarg.charheight;
 		op->charcount = cfdarg.charcount;
 		op->data = cfdarg.chardata;
-		i = con_font_op(vc_cons[fg_console].d, op);
+		i = con_font_op(vc, op);
 		if (i)
 			return i;
 		cfdarg.charheight = op->height;
···
 		if (copy_to_user(user_cfd, &cfdarg, sizeof(struct consolefontdesc)))
 			return -EFAULT;
 		return 0;
-		}
 	}
 	return -EINVAL;
 }
 
-static int vt_io_fontreset(struct console_font_op *op)
+static int vt_io_fontreset(struct vc_data *vc, struct console_font_op *op)
 {
 	int ret;
 
···
 
 	op->op = KD_FONT_OP_SET_DEFAULT;
 	op->data = NULL;
-	ret = con_font_op(vc_cons[fg_console].d, op);
+	ret = con_font_op(vc, op);
 	if (ret)
 		return ret;
 
 	console_lock();
-	con_set_default_unimap(vc_cons[fg_console].d);
+	con_set_default_unimap(vc);
 	console_unlock();
 
 	return 0;
···
 		op.height = 0;
 		op.charcount = 256;
 		op.data = up;
-		return con_font_op(vc_cons[fg_console].d, &op);
+		return con_font_op(vc, &op);
 
 	case GIO_FONT:
 		op.op = KD_FONT_OP_GET;
···
 		op.height = 32;
 		op.charcount = 256;
 		op.data = up;
-		return con_font_op(vc_cons[fg_console].d, &op);
+		return con_font_op(vc, &op);
 
 	case PIO_CMAP:
 		if (!perm)
···
 
 		fallthrough;
 	case GIO_FONTX:
-		return do_fontx_ioctl(cmd, up, &op);
+		return do_fontx_ioctl(vc, cmd, up, &op);
 
 	case PIO_FONTRESET:
 		if (!perm)
 			return -EPERM;
 
-		return vt_io_fontreset(&op);
+		return vt_io_fontreset(vc, &op);
 
 	case PIO_SCRNMAP:
 		if (!perm)
···
 };
 
 static inline int
-compat_fontx_ioctl(int cmd, struct compat_consolefontdesc __user *user_cfd,
-		   int perm, struct console_font_op *op)
+compat_fontx_ioctl(struct vc_data *vc, int cmd,
+		   struct compat_consolefontdesc __user *user_cfd,
+		   int perm, struct console_font_op *op)
 {
 	struct compat_consolefontdesc cfdarg;
 	int i;
···
 		op->height = cfdarg.charheight;
 		op->charcount = cfdarg.charcount;
 		op->data = compat_ptr(cfdarg.chardata);
-		return con_font_op(vc_cons[fg_console].d, op);
+		return con_font_op(vc, op);
+
 	case GIO_FONTX:
 		op->op = KD_FONT_OP_GET;
 		op->flags = KD_FONT_FLAG_OLD;
···
 		op->height = cfdarg.charheight;
 		op->charcount = cfdarg.charcount;
 		op->data = compat_ptr(cfdarg.chardata);
-		i = con_font_op(vc_cons[fg_console].d, op);
+		i = con_font_op(vc, op);
 		if (i)
 			return i;
 		cfdarg.charheight = op->height;
···
 	 */
 	case PIO_FONTX:
 	case GIO_FONTX:
-		return compat_fontx_ioctl(cmd, up, perm, &op);
+		return compat_fontx_ioctl(vc, cmd, up, perm, &op);
 
 	case KDFONTOP:
 		return compat_kdfontop_ioctl(up, perm, &op, vc);
···
 				  dname.len, dname.name);
 
 	mutex_lock(&session->s_mutex);
-	session->s_seq++;
+	inc_session_sequence(session);
 
 	if (!inode) {
 		dout("handle_lease no inode %llx\n", vino.ino);
···
 
 bool check_session_state(struct ceph_mds_session *s)
 {
-	if (s->s_state == CEPH_MDS_SESSION_CLOSING) {
-		dout("resending session close request for mds%d\n",
-		     s->s_mds);
-		request_close_session(s);
-		return false;
-	}
-	if (s->s_ttl && time_after(jiffies, s->s_ttl)) {
-		if (s->s_state == CEPH_MDS_SESSION_OPEN) {
+	switch (s->s_state) {
+	case CEPH_MDS_SESSION_OPEN:
+		if (s->s_ttl && time_after(jiffies, s->s_ttl)) {
 			s->s_state = CEPH_MDS_SESSION_HUNG;
 			pr_info("mds%d hung\n", s->s_mds);
 		}
-	}
-	if (s->s_state == CEPH_MDS_SESSION_NEW ||
-	    s->s_state == CEPH_MDS_SESSION_RESTARTING ||
-	    s->s_state == CEPH_MDS_SESSION_CLOSED ||
-	    s->s_state == CEPH_MDS_SESSION_REJECTED)
-		/* this mds is failed or recovering, just wait */
+		break;
+	case CEPH_MDS_SESSION_CLOSING:
+		/* Should never reach this when we're unmounting */
+		WARN_ON_ONCE(true);
+		fallthrough;
+	case CEPH_MDS_SESSION_NEW:
+	case CEPH_MDS_SESSION_RESTARTING:
+	case CEPH_MDS_SESSION_CLOSED:
+	case CEPH_MDS_SESSION_REJECTED:
 		return false;
+	}
 
 	return true;
 }
+
+/*
+ * If the sequence is incremented while we're waiting on a REQUEST_CLOSE reply,
+ * then we need to retransmit that request.
+ */
+void inc_session_sequence(struct ceph_mds_session *s)
+{
+	lockdep_assert_held(&s->s_mutex);
+
+	s->s_seq++;
+
+	if (s->s_state == CEPH_MDS_SESSION_CLOSING) {
+		int ret;
+
+		dout("resending session close request for mds%d\n", s->s_mds);
+		ret = request_close_session(s);
+		if (ret < 0)
+			pr_err("unable to close session to mds%d: %d\n",
+			       s->s_mds, ret);
+	}
+}
 
 /*
···995995 if (mm) {996996 kthread_unuse_mm(mm);997997 mmput(mm);998998+ current->mm = NULL;998999 }9991000}1000100110011002static int __io_sq_thread_acquire_mm(struct io_ring_ctx *ctx)10021003{10031003- if (!current->mm) {10041004- if (unlikely(!(ctx->flags & IORING_SETUP_SQPOLL) ||10051005- !ctx->sqo_task->mm ||10061006- !mmget_not_zero(ctx->sqo_task->mm)))10071007- return -EFAULT;10081008- kthread_use_mm(ctx->sqo_task->mm);10041004+ struct mm_struct *mm;10051005+10061006+ if (current->mm)10071007+ return 0;10081008+10091009+ /* Should never happen */10101010+ if (unlikely(!(ctx->flags & IORING_SETUP_SQPOLL)))10111011+ return -EFAULT;10121012+10131013+ task_lock(ctx->sqo_task);10141014+ mm = ctx->sqo_task->mm;10151015+ if (unlikely(!mm || !mmget_not_zero(mm)))10161016+ mm = NULL;10171017+ task_unlock(ctx->sqo_task);10181018+10191019+ if (mm) {10201020+ kthread_use_mm(mm);10211021+ return 0;10091022 }1010102310111011- return 0;10241024+ return -EFAULT;10121025}1013102610141027static int io_sq_thread_acquire_mm(struct io_ring_ctx *ctx,···12871274 /* add one for this request */12881275 refcount_inc(&id->count);1289127612901290- /* drop old identity, assign new one. 
one ref for req, one for tctx */12911291- if (req->work.identity != tctx->identity &&12921292- refcount_sub_and_test(2, &req->work.identity->count))12771277+ /* drop tctx and req identity references, if needed */12781278+ if (tctx->identity != &tctx->__identity &&12791279+ refcount_dec_and_test(&tctx->identity->count))12801280+ kfree(tctx->identity);12811281+ if (req->work.identity != &tctx->__identity &&12821282+ refcount_dec_and_test(&req->work.identity->count))12931283 kfree(req->work.identity);1294128412951285 req->work.identity = id;···15931577 }15941578}1595157915961596-static inline bool io_match_files(struct io_kiocb *req,15971597- struct files_struct *files)15801580+static inline bool __io_match_files(struct io_kiocb *req,15811581+ struct files_struct *files)15981582{15831583+ return ((req->flags & REQ_F_WORK_INITIALIZED) &&15841584+ (req->work.flags & IO_WQ_WORK_FILES)) &&15851585+ req->work.identity->files == files;15861586+}15871587+15881588+static bool io_match_files(struct io_kiocb *req,15891589+ struct files_struct *files)15901590+{15911591+ struct io_kiocb *link;15921592+15991593 if (!files)16001594 return true;16011601- if ((req->flags & REQ_F_WORK_INITIALIZED) &&16021602- (req->work.flags & IO_WQ_WORK_FILES))16031603- return req->work.identity->files == files;15951595+ if (__io_match_files(req, files))15961596+ return true;15971597+ if (req->flags & REQ_F_LINK_HEAD) {15981598+ list_for_each_entry(link, &req->link_list, link_list) {15991599+ if (__io_match_files(link, files))16001600+ return true;16011601+ }16021602+ }16041603 return false;16051604}16061605···16991668 WRITE_ONCE(cqe->user_data, req->user_data);17001669 WRITE_ONCE(cqe->res, res);17011670 WRITE_ONCE(cqe->flags, cflags);17021702- } else if (ctx->cq_overflow_flushed || req->task->io_uring->in_idle) {16711671+ } else if (ctx->cq_overflow_flushed ||16721672+ atomic_read(&req->task->io_uring->in_idle)) {17031673 /*17041674 * If we're in ring overflow flush mode, or in task cancel 
mode,17051675 * then we cannot store the request for later flushing, we need···18701838 io_dismantle_req(req);1871183918721840 percpu_counter_dec(&tctx->inflight);18731873- if (tctx->in_idle)18411841+ if (atomic_read(&tctx->in_idle))18741842 wake_up(&tctx->wait);18751843 put_task_struct(req->task);18761844···77277695 xa_init(&tctx->xa);77287696 init_waitqueue_head(&tctx->wait);77297697 tctx->last = NULL;77307730- tctx->in_idle = 0;76987698+ atomic_set(&tctx->in_idle, 0);76997699+ tctx->sqpoll = false;77317700 io_init_identity(&tctx->__identity);77327701 tctx->identity = &tctx->__identity;77337702 task->io_uring = tctx;···84218388 return false;84228389}8423839084248424-static bool io_match_link_files(struct io_kiocb *req,84258425- struct files_struct *files)84268426-{84278427- struct io_kiocb *link;84288428-84298429- if (io_match_files(req, files))84308430- return true;84318431- if (req->flags & REQ_F_LINK_HEAD) {84328432- list_for_each_entry(link, &req->link_list, link_list) {84338433- if (io_match_files(link, files))84348434- return true;84358435- }84368436- }84378437- return false;84388438-}84398439-84408391/*84418392 * We're looking to cancel 'req' because it's holding on to our files, but84428393 * 'req' could be a link to another request. 
See if it is, and cancel that···8470845384718454static bool io_cancel_link_cb(struct io_wq_work *work, void *data)84728455{84738473- return io_match_link(container_of(work, struct io_kiocb, work), data);84568456+ struct io_kiocb *req = container_of(work, struct io_kiocb, work);84578457+ bool ret;84588458+84598459+ if (req->flags & REQ_F_LINK_TIMEOUT) {84608460+ unsigned long flags;84618461+ struct io_ring_ctx *ctx = req->ctx;84628462+84638463+ /* protect against races with linked timeouts */84648464+ spin_lock_irqsave(&ctx->completion_lock, flags);84658465+ ret = io_match_link(req, data);84668466+ spin_unlock_irqrestore(&ctx->completion_lock, flags);84678467+ } else {84688468+ ret = io_match_link(req, data);84698469+ }84708470+ return ret;84748471}8475847284768473static void io_attempt_cancel(struct io_ring_ctx *ctx, struct io_kiocb *req)···85108479}8511848085128481static void io_cancel_defer_files(struct io_ring_ctx *ctx,84828482+ struct task_struct *task,85138483 struct files_struct *files)85148484{85158485 struct io_defer_entry *de = NULL;···8518848685198487 spin_lock_irq(&ctx->completion_lock);85208488 list_for_each_entry_reverse(de, &ctx->defer_list, list) {85218521- if (io_match_link_files(de->req, files)) {84898489+ if (io_task_match(de->req, task) &&84908490+ io_match_files(de->req, files)) {85228491 list_cut_position(&list, &ctx->defer_list, &de->list);85238492 break;85248493 }···85458512 if (list_empty_careful(&ctx->inflight_list))85468513 return false;8547851485488548- io_cancel_defer_files(ctx, files);85498515 /* cancel all at once, should be faster than doing it one by one*/85508516 io_wq_cancel_cb(ctx->io_wq, io_wq_files_match, files, true);85518517···86308598{86318599 struct task_struct *task = current;8632860086338633- if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data)86018601+ if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {86348602 task = ctx->sq_data->thread;86038603+ atomic_inc(&task->io_uring->in_idle);86048604+ 
io_sq_thread_park(ctx->sq_data);86058605+ }86068606+86078607+ if (files)86088608+ io_cancel_defer_files(ctx, NULL, files);86098609+ else86108610+ io_cancel_defer_files(ctx, task, NULL);8635861186368612 io_cqring_overflow_flush(ctx, true, task, files);86378613···86478607 io_run_task_work();86488608 cond_resched();86498609 }86108610+86118611+ if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {86128612+ atomic_dec(&task->io_uring->in_idle);86138613+ /*86148614+ * If the files that are going away are the ones in the thread86158615+ * identity, clear them out.86168616+ */86178617+ if (task->io_uring->identity->files == files)86188618+ task->io_uring->identity->files = NULL;86198619+ io_sq_thread_unpark(ctx->sq_data);86208620+ }86508621}8651862286528623/*86538624 * Note that this task has used io_uring. We use it for cancelation purposes.86548625 */86558655-static int io_uring_add_task_file(struct file *file)86268626+static int io_uring_add_task_file(struct io_ring_ctx *ctx, struct file *file)86568627{86578628 struct io_uring_task *tctx = current->io_uring;86588629···86848633 }86858634 tctx->last = file;86868635 }86368636+86378637+ /*86388638+ * This is race safe in that the task itself is doing this, hence it86398639+ * cannot be going through the exit/cancel paths at the same time.86408640+ * This cannot be modified while exit/cancel is running.86418641+ */86428642+ if (!tctx->sqpoll && (ctx->flags & IORING_SETUP_SQPOLL))86438643+ tctx->sqpoll = true;8687864486888645 return 0;86898646}···87348675 unsigned long index;8735867687368677 /* make sure overflow events are dropped */87378737- tctx->in_idle = true;86788678+ atomic_inc(&tctx->in_idle);8738867987398680 xa_for_each(&tctx->xa, index, file) {87408681 struct io_ring_ctx *ctx = file->private_data;···87438684 if (files)87448685 io_uring_del_task_file(file);87458686 }86878687+86888688+ atomic_dec(&tctx->in_idle);86898689+}86908690+86918691+static s64 tctx_inflight(struct io_uring_task *tctx)86928692+{86938693+ 
unsigned long index;86948694+ struct file *file;86958695+ s64 inflight;86968696+86978697+ inflight = percpu_counter_sum(&tctx->inflight);86988698+ if (!tctx->sqpoll)86998699+ return inflight;87008700+87018701+ /*87028702+ * If we have SQPOLL rings, then we need to iterate and find them, and87038703+ * add the pending count for those.87048704+ */87058705+ xa_for_each(&tctx->xa, index, file) {87068706+ struct io_ring_ctx *ctx = file->private_data;87078707+87088708+ if (ctx->flags & IORING_SETUP_SQPOLL) {87098709+ struct io_uring_task *__tctx = ctx->sqo_task->io_uring;87108710+87118711+ inflight += percpu_counter_sum(&__tctx->inflight);87128712+ }87138713+ }87148714+87158715+ return inflight;87468716}8747871787488718/*···87858697 s64 inflight;8786869887878699 /* make sure overflow events are dropped */87888788- tctx->in_idle = true;87008700+ atomic_inc(&tctx->in_idle);8789870187908702 do {87918703 /* read completions before cancelations */87928792- inflight = percpu_counter_sum(&tctx->inflight);87048704+ inflight = tctx_inflight(tctx);87938705 if (!inflight)87948706 break;87958707 __io_uring_files_cancel(NULL);···88008712 * If we've seen completions, retry. 
This avoids a race where88018713 * a completion comes in before we did prepare_to_wait().88028714 */88038803- if (inflight != percpu_counter_sum(&tctx->inflight))87158715+ if (inflight != tctx_inflight(tctx))88048716 continue;88058717 schedule();88068718 } while (1);8807871988088720 finish_wait(&tctx->wait, &wait);88098809- tctx->in_idle = false;87218721+ atomic_dec(&tctx->in_idle);88108722}8811872388128724static int io_uring_flush(struct file *file, void *data)···89518863 io_sqpoll_wait_sq(ctx);89528864 submitted = to_submit;89538865 } else if (to_submit) {89548954- ret = io_uring_add_task_file(f.file);88668866+ ret = io_uring_add_task_file(ctx, f.file);89558867 if (unlikely(ret))89568868 goto out;89578869 mutex_lock(&ctx->uring_lock);···89888900#ifdef CONFIG_PROC_FS89898901static int io_uring_show_cred(int id, void *p, void *data)89908902{89918991- const struct cred *cred = p;89038903+ struct io_identity *iod = p;89048904+ const struct cred *cred = iod->creds;89928905 struct seq_file *m = data;89938906 struct user_namespace *uns = seq_user_ns(m);89948907 struct group_info *gi;···91819092#if defined(CONFIG_UNIX)91829093 ctx->ring_sock->file = file;91839094#endif91849184- if (unlikely(io_uring_add_task_file(file))) {90959095+ if (unlikely(io_uring_add_task_file(ctx, file))) {91859096 file = ERR_PTR(-ENOMEM);91869097 goto err_fd;91879098 }
+10-20
fs/iomap/buffered-io.c
···13741374 WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list));13751375 WARN_ON_ONCE(!PageLocked(page));13761376 WARN_ON_ONCE(PageWriteback(page));13771377+ WARN_ON_ONCE(PageDirty(page));1377137813781379 /*13791380 * We cannot cancel the ioend directly here on error. We may have···13831382 * appropriately.13841383 */13851384 if (unlikely(error)) {13851385+ /*13861386+ * Let the filesystem know what portion of the current page13871387+ * failed to map. If the page hasn't been added to ioend, it13881388+ * won't be affected by I/O completion and we must unlock it13891389+ * now.13901390+ */13911391+ if (wpc->ops->discard_page)13921392+ wpc->ops->discard_page(page, file_offset);13861393 if (!count) {13871387- /*13881388- * If the current page hasn't been added to ioend, it13891389- * won't be affected by I/O completions and we must13901390- * discard and unlock it right here.13911391- */13921392- if (wpc->ops->discard_page)13931393- wpc->ops->discard_page(page);13941394 ClearPageUptodate(page);13951395 unlock_page(page);13961396 goto done;13971397 }13981398-13991399- /*14001400- * If the page was not fully cleaned, we need to ensure that the14011401- * higher layers come back to it correctly. That means we need14021402- * to keep the page dirty, and for WB_SYNC_ALL writeback we need14031403- * to ensure the PAGECACHE_TAG_TOWRITE index mark is not removed14041404- * so another attempt to write this page in this writeback sweep14051405- * will be made.14061406- */14071407- set_page_writeback_keepwrite(page);14081408- } else {14091409- clear_page_dirty_for_io(page);14101410- set_page_writeback(page);14111398 }1412139914001400+ set_page_writeback(page);14131401 unlock_page(page);1414140214151403 /*
···911911 error = iomap_zero_range(inode, oldsize, newsize - oldsize,912912 &did_zeroing, &xfs_buffered_write_iomap_ops);913913 } else {914914+ /*915915+ * iomap won't detect a dirty page over an unwritten block (or a916916+ * cow block over a hole) and subsequently skips zeroing the917917+ * newly post-EOF portion of the page. Flush the new EOF to918918+ * convert the block before the pagecache truncate.919919+ */920920+ error = filemap_write_and_wait_range(inode->i_mapping, newsize,921921+ newsize);922922+ if (error)923923+ return error;914924 error = iomap_truncate_page(inode, newsize, &did_zeroing,915925 &xfs_buffered_write_iomap_ops);916926 }
+2-1
fs/xfs/xfs_reflink.c
···15021502 &xfs_buffered_write_iomap_ops);15031503 if (error)15041504 goto out;15051505- error = filemap_write_and_wait(inode->i_mapping);15051505+15061506+ error = filemap_write_and_wait_range(inode->i_mapping, offset, len);15061507 if (error)15071508 goto out;15081509
+8-8
include/kunit/test.h
···252252}253253#endif /* IS_BUILTIN(CONFIG_KUNIT) */254254255255+#ifdef MODULE255256/**256256- * kunit_test_suites() - used to register one or more &struct kunit_suite257257- * with KUnit.257257+ * kunit_test_suites_for_module() - used to register one or more258258+ * &struct kunit_suite with KUnit.258259 *259259- * @suites_list...: a statically allocated list of &struct kunit_suite.260260+ * @__suites: a statically allocated list of &struct kunit_suite.260261 *261261- * Registers @suites_list with the test framework. See &struct kunit_suite for262262+ * Registers @__suites with the test framework. See &struct kunit_suite for262263 * more information.263264 *264265 * If a test suite is built-in, module_init() gets translated into···268267 * module_{init|exit} functions for the builtin case when registering269268 * suites via kunit_test_suites() below.270269 */271271-#ifdef MODULE272270#define kunit_test_suites_for_module(__suites) \273271 static int __init kunit_test_suites_init(void) \274272 { \···294294 * kunit_test_suites() - used to register one or more &struct kunit_suite295295 * with KUnit.296296 *297297- * @suites: a statically allocated list of &struct kunit_suite.297297+ * @__suites: a statically allocated list of &struct kunit_suite.298298 *299299 * Registers @suites with the test framework. See &struct kunit_suite for300300 * more information.···308308 * module.309309 *310310 */311311-#define kunit_test_suites(...) \311311+#define kunit_test_suites(__suites...) \312312 __kunit_test_suites(__UNIQUE_ID(array), \313313 __UNIQUE_ID(suites), \314314- __VA_ARGS__)314314+ ##__suites)315315316316#define kunit_test_suite(suite) kunit_test_suites(&suite)317317
+2
include/linux/blk-mq.h
···235235 * @flags: Zero or more BLK_MQ_F_* flags.236236 * @driver_data: Pointer to data owned by the block driver that created this237237 * tag set.238238+ * @active_queues_shared_sbitmap:239239+ * number of active request queues per tag set.238240 * @__bitmap_tags: A shared tags sbitmap, used over all hctx's239241 * @__breserved_tags:240242 * A shared reserved tags sbitmap, used over all hctx's
+8-12
include/linux/can/skb.h
···6161 */6262static inline struct sk_buff *can_create_echo_skb(struct sk_buff *skb)6363{6464- if (skb_shared(skb)) {6565- struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC);6464+ struct sk_buff *nskb;66656767- if (likely(nskb)) {6868- can_skb_set_owner(nskb, skb->sk);6969- consume_skb(skb);7070- return nskb;7171- } else {7272- kfree_skb(skb);7373- return NULL;7474- }6666+ nskb = skb_clone(skb, GFP_ATOMIC);6767+ if (unlikely(!nskb)) {6868+ kfree_skb(skb);6969+ return NULL;7570 }76717777- /* we can assume to have an unshared skb with proper owner */7878- return skb;7272+ can_skb_set_owner(nskb, skb->sk);7373+ consume_skb(skb);7474+ return nskb;7975}80768177#endif /* !_CAN_SKB_H */
···221221 * Optional, allows the file system to discard state on a page where222222 * we failed to submit any I/O.223223 */224224- void (*discard_page)(struct page *page);224224+ void (*discard_page)(struct page *page, loff_t fileoff);225225};226226227227struct iomap_writepage_ctx {
+9
include/linux/mm.h
···27592759 return VM_FAULT_NOPAGE;27602760}2761276127622762+#ifndef io_remap_pfn_range27632763+static inline int io_remap_pfn_range(struct vm_area_struct *vma,27642764+ unsigned long addr, unsigned long pfn,27652765+ unsigned long size, pgprot_t prot)27662766+{27672767+ return remap_pfn_range(vma, addr, pfn, size, pgprot_decrypted(prot));27682768+}27692769+#endif27702770+27622771static inline vm_fault_t vmf_error(int err)27632772{27642773 if (err == -ENOMEM)
+8-1
include/linux/netfilter/nfnetlink.h
···2424 const u_int16_t attr_count; /* number of nlattr's */2525};26262727+enum nfnl_abort_action {2828+ NFNL_ABORT_NONE = 0,2929+ NFNL_ABORT_AUTOLOAD,3030+ NFNL_ABORT_VALIDATE,3131+};3232+2733struct nfnetlink_subsystem {2834 const char *name;2935 __u8 subsys_id; /* nfnetlink subsystem ID */···3731 const struct nfnl_callback *cb; /* callback for individual types */3832 struct module *owner;3933 int (*commit)(struct net *net, struct sk_buff *skb);4040- int (*abort)(struct net *net, struct sk_buff *skb, bool autoload);3434+ int (*abort)(struct net *net, struct sk_buff *skb,3535+ enum nfnl_abort_action action);4136 void (*cleanup)(struct net *net);4237 bool (*valid_genid)(struct net *net, u32 genid);4338};
···4242#if IS_MODULE(CONFIG_IPV6)4343 int (*chk_addr)(struct net *net, const struct in6_addr *addr,4444 const struct net_device *dev, int strict);4545- int (*route_me_harder)(struct net *net, struct sk_buff *skb);4545+ int (*route_me_harder)(struct net *net, struct sock *sk, struct sk_buff *skb);4646 int (*dev_get_saddr)(struct net *net, const struct net_device *dev,4747 const struct in6_addr *daddr, unsigned int srcprefs,4848 struct in6_addr *saddr);···143143#endif144144}145145146146-int ip6_route_me_harder(struct net *net, struct sk_buff *skb);146146+int ip6_route_me_harder(struct net *net, struct sock *sk, struct sk_buff *skb);147147148148-static inline int nf_ip6_route_me_harder(struct net *net, struct sk_buff *skb)148148+static inline int nf_ip6_route_me_harder(struct net *net, struct sock *sk, struct sk_buff *skb)149149{150150#if IS_MODULE(CONFIG_IPV6)151151 const struct nf_ipv6_ops *v6_ops = nf_get_ipv6_ops();···153153 if (!v6_ops)154154 return -EHOSTUNREACH;155155156156- return v6_ops->route_me_harder(net, skb);156156+ return v6_ops->route_me_harder(net, sk, skb);157157#elif IS_BUILTIN(CONFIG_IPV6)158158- return ip6_route_me_harder(net, skb);158158+ return ip6_route_me_harder(net, sk, skb);159159#else160160 return -EHOSTUNREACH;161161#endif
+4-4
include/linux/pagemap.h
···344344/**345345 * find_lock_page - locate, pin and lock a pagecache page346346 * @mapping: the address_space to search347347- * @offset: the page index347347+ * @index: the page index348348 *349349- * Looks up the page cache entry at @mapping & @offset. If there is a349349+ * Looks up the page cache entry at @mapping & @index. If there is a350350 * page cache page, it is returned locked and with an increased351351 * refcount.352352 *···363363/**364364 * find_lock_head - Locate, pin and lock a pagecache page.365365 * @mapping: The address_space to search.366366- * @offset: The page index.366366+ * @index: The page index.367367 *368368- * Looks up the page cache entry at @mapping & @offset. If there is a368368+ * Looks up the page cache entry at @mapping & @index. If there is a369369 * page cache page, its head page is returned locked and with an increased370370 * refcount.371371 *
include/linux/phy.h
···147147 PHY_INTERFACE_MODE_MAX,148148} phy_interface_t;149149150150-/**150150+/*151151 * phy_supported_speeds - return all speeds currently supported by a PHY device152152- * @phy: The PHY device to return supported speeds of.153153- * @speeds: buffer to store supported speeds in.154154- * @size: size of speeds buffer.155155- *156156- * Description: Returns the number of supported speeds, and fills157157- * the speeds buffer with the supported speeds. If speeds buffer is158158- * too small to contain all currently supported speeds, will return as159159- * many speeds as can fit.160152 */161153unsigned int phy_supported_speeds(struct phy_device *phy,162154 unsigned int *speeds,···10141022 regnum, mask, set);10151023}1016102410171017-/**10251025+/*10181026 * phy_read_mmd - Convenience function for reading a register10191027 * from an MMD on a given PHY.10201020- * @phydev: The phy_device struct10211021- * @devad: The MMD to read from10221022- * @regnum: The register on the MMD to read10231023- *10241024- * Same rules as for phy_read();10251028 */10261029int phy_read_mmd(struct phy_device *phydev, int devad, u32 regnum);10271030···10511064 __ret; \10521065})1053106610541054-/**10671067+/*10551068 * __phy_read_mmd - Convenience function for reading a register10561069 * from an MMD on a given PHY.10571057- * @phydev: The phy_device struct10581058- * @devad: The MMD to read from10591059- * @regnum: The register on the MMD to read10601060- *10611061- * Same rules as for __phy_read();10621070 */10631071int __phy_read_mmd(struct phy_device *phydev, int devad, u32 regnum);1064107210651065-/**10731073+/*10661074 * phy_write_mmd - Convenience function for writing a register10671075 * on an MMD on a given PHY.10681068- * @phydev: The phy_device struct10691069- * @devad: The MMD to write to10701070- * @regnum: The register on the MMD to read10711071- * @val: value to write to @regnum10721072- *10731073- * Same rules as for phy_write();10741076 */10751077int 
phy_write_mmd(struct phy_device *phydev, int devad, u32 regnum, u16 val);1076107810771077-/**10791079+/*10781080 * __phy_write_mmd - Convenience function for writing a register10791081 * on an MMD on a given PHY.10801080- * @phydev: The phy_device struct10811081- * @devad: The MMD to write to10821082- * @regnum: The register on the MMD to read10831083- * @val: value to write to @regnum10841084- *10851085- * Same rules as for __phy_write();10861082 */10871083int __phy_write_mmd(struct phy_device *phydev, int devad, u32 regnum, u16 val);10881084
include/linux/refcount.h
···147147 return atomic_read(&r->refs);148148}149149150150-/**151151- * refcount_add_not_zero - add a value to a refcount unless it is 0152152- * @i: the value to add to the refcount153153- * @r: the refcount154154- *155155- * Will saturate at REFCOUNT_SATURATED and WARN.156156- *157157- * Provides no memory ordering, it is assumed the caller has guaranteed the158158- * object memory to be stable (RCU, etc.). It does provide a control dependency159159- * and thereby orders future stores. See the comment on top.160160- *161161- * Use of this function is not recommended for the normal reference counting162162- * use case in which references are taken and released one at a time. In these163163- * cases, refcount_inc(), or one of its variants, should instead be used to164164- * increment a reference count.165165- *166166- * Return: false if the passed refcount is 0, true otherwise167167- */168150static inline __must_check bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)169151{170152 int old = refcount_read(r);···165183 return old;166184}167185186186+/**187187+ * refcount_add_not_zero - add a value to a refcount unless it is 0188188+ * @i: the value to add to the refcount189189+ * @r: the refcount190190+ *191191+ * Will saturate at REFCOUNT_SATURATED and WARN.192192+ *193193+ * Provides no memory ordering, it is assumed the caller has guaranteed the194194+ * object memory to be stable (RCU, etc.). It does provide a control dependency195195+ * and thereby orders future stores. See the comment on top.196196+ *197197+ * Use of this function is not recommended for the normal reference counting198198+ * use case in which references are taken and released one at a time. 
In these199199+ * cases, refcount_inc(), or one of its variants, should instead be used to200200+ * increment a reference count.201201+ *202202+ * Return: false if the passed refcount is 0, true otherwise203203+ */168204static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r)169205{170206 return __refcount_add_not_zero(i, r, NULL);207207+}208208+209209+static inline void __refcount_add(int i, refcount_t *r, int *oldp)210210+{211211+ int old = atomic_fetch_add_relaxed(i, &r->refs);212212+213213+ if (oldp)214214+ *oldp = old;215215+216216+ if (unlikely(!old))217217+ refcount_warn_saturate(r, REFCOUNT_ADD_UAF);218218+ else if (unlikely(old < 0 || old + i < 0))219219+ refcount_warn_saturate(r, REFCOUNT_ADD_OVF);171220}172221173222/**···217204 * cases, refcount_inc(), or one of its variants, should instead be used to218205 * increment a reference count.219206 */220220-static inline void __refcount_add(int i, refcount_t *r, int *oldp)221221-{222222- int old = atomic_fetch_add_relaxed(i, &r->refs);223223-224224- if (oldp)225225- *oldp = old;226226-227227- if (unlikely(!old))228228- refcount_warn_saturate(r, REFCOUNT_ADD_UAF);229229- else if (unlikely(old < 0 || old + i < 0))230230- refcount_warn_saturate(r, REFCOUNT_ADD_OVF);231231-}232232-233207static inline void refcount_add(int i, refcount_t *r)234208{235209 __refcount_add(i, r, NULL);210210+}211211+212212+static inline __must_check bool __refcount_inc_not_zero(refcount_t *r, int *oldp)213213+{214214+ return __refcount_add_not_zero(1, r, oldp);236215}237216238217/**···240235 *241236 * Return: true if the increment was successful, false otherwise242237 */243243-static inline __must_check bool __refcount_inc_not_zero(refcount_t *r, int *oldp)244244-{245245- return __refcount_add_not_zero(1, r, oldp);246246-}247247-248238static inline __must_check bool refcount_inc_not_zero(refcount_t *r)249239{250240 return __refcount_inc_not_zero(r, NULL);241241+}242242+243243+static inline void 
__refcount_inc(refcount_t *r, int *oldp)244244+{245245+ __refcount_add(1, r, oldp);251246}252247253248/**···262257 * Will WARN if the refcount is 0, as this represents a possible use-after-free263258 * condition.264259 */265265-static inline void __refcount_inc(refcount_t *r, int *oldp)266266-{267267- __refcount_add(1, r, oldp);268268-}269269-270260static inline void refcount_inc(refcount_t *r)271261{272262 __refcount_inc(r, NULL);263263+}264264+265265+static inline __must_check bool __refcount_sub_and_test(int i, refcount_t *r, int *oldp)266266+{267267+ int old = atomic_fetch_sub_release(i, &r->refs);268268+269269+ if (oldp)270270+ *oldp = old;271271+272272+ if (old == i) {273273+ smp_acquire__after_ctrl_dep();274274+ return true;275275+ }276276+277277+ if (unlikely(old < 0 || old - i < 0))278278+ refcount_warn_saturate(r, REFCOUNT_SUB_UAF);279279+280280+ return false;273281}274282275283/**···305287 *306288 * Return: true if the resulting refcount is 0, false otherwise307289 */308308-static inline __must_check bool __refcount_sub_and_test(int i, refcount_t *r, int *oldp)309309-{310310- int old = atomic_fetch_sub_release(i, &r->refs);311311-312312- if (oldp)313313- *oldp = old;314314-315315- if (old == i) {316316- smp_acquire__after_ctrl_dep();317317- return true;318318- }319319-320320- if (unlikely(old < 0 || old - i < 0))321321- refcount_warn_saturate(r, REFCOUNT_SUB_UAF);322322-323323- return false;324324-}325325-326290static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r)327291{328292 return __refcount_sub_and_test(i, r, NULL);293293+}294294+295295+static inline __must_check bool __refcount_dec_and_test(refcount_t *r, int *oldp)296296+{297297+ return __refcount_sub_and_test(1, r, oldp);329298}330299331300/**···328323 *329324 * Return: true if the resulting refcount is 0, false otherwise330325 */331331-static inline __must_check bool __refcount_dec_and_test(refcount_t *r, int *oldp)332332-{333333- return __refcount_sub_and_test(1, r, 
oldp);334334-}335335-336326static inline __must_check bool refcount_dec_and_test(refcount_t *r)337327{338328 return __refcount_dec_and_test(r, NULL);329329+}330330+331331+static inline void __refcount_dec(refcount_t *r, int *oldp)332332+{333333+ int old = atomic_fetch_sub_release(1, &r->refs);334334+335335+ if (oldp)336336+ *oldp = old;337337+338338+ if (unlikely(old <= 1))339339+ refcount_warn_saturate(r, REFCOUNT_DEC_LEAK);339340}340341341342/**···354343 * Provides release memory ordering, such that prior loads and stores are done355344 * before.356345 */357357-static inline void __refcount_dec(refcount_t *r, int *oldp)358358-{359359- int old = atomic_fetch_sub_release(1, &r->refs);360360-361361- if (oldp)362362- *oldp = old;363363-364364- if (unlikely(old <= 1))365365- refcount_warn_saturate(r, REFCOUNT_DEC_LEAK);366366-}367367-368346static inline void refcount_dec(refcount_t *r)369347{370348 __refcount_dec(r, NULL);
···14441444 enum cfg80211_station_type statype);1445144514461446/**14471447- * enum station_info_rate_flags - bitrate info flags14471447+ * enum rate_info_flags - bitrate info flags14481448 *14491449 * Used by the driver to indicate the specific rate transmission14501450 * type for 802.11n transmissions.···15171517};1518151815191519/**15201520- * enum station_info_rate_flags - bitrate info flags15201520+ * enum bss_param_flags - bitrate info flags15211521 *15221522 * Used by the driver to indicate the specific rate transmission15231523 * type for 802.11n transmissions.···64676467 struct ieee80211_channel *channel, gfp_t gfp);6468646864696469/**64706470- * cfg80211_notify_new_candidate - notify cfg80211 of a new mesh peer candidate64706470+ * cfg80211_notify_new_peer_candidate - notify cfg80211 of a new mesh peer64716471+ * candidate64716472 *64726473 * @dev: network device64736474 * @macaddr: the MAC address of the new candidate···76077606void cfg80211_unregister_wdev(struct wireless_dev *wdev);7608760776097608/**76107610- * struct cfg80211_ft_event - FT Information Elements76097609+ * struct cfg80211_ft_event_params - FT Information Elements76117610 * @ies: FT IEs76127611 * @ies_len: length of the FT IE in bytes76137612 * @target_ap: target AP's MAC address
+4-3
include/net/mac80211.h
···33113311};3312331233133313/**33143314- * enum ieee80211_reconfig_complete_type - reconfig type33143314+ * enum ieee80211_reconfig_type - reconfig type33153315 *33163316 * This enum is used by the reconfig_complete() callback to indicate what33173317 * reconfiguration type was completed.···63346334 int band, struct ieee80211_sta **sta);6335633563366336/**63376337- * Sanity-check and parse the radiotap header of injected frames63376337+ * ieee80211_parse_tx_radiotap - Sanity-check and parse the radiotap header63386338+ * of injected frames63386339 * @skb: packet injected by userspace63396340 * @dev: the &struct device of this 802.11 device63406341 */···63906389void ieee80211_update_p2p_noa(struct ieee80211_noa_data *data, u32 tsf);6391639063926391/**63936393- * ieee80211_tdls_oper - request userspace to perform a TDLS operation63926392+ * ieee80211_tdls_oper_request - request userspace to perform a TDLS operation63946393 * @vif: virtual interface63956394 * @peer: the peer's destination address63966395 * @oper: the requested TDLS operation
+1-1
include/sound/control.h
···4242 snd_ctl_elem_iface_t iface; /* interface identifier */4343 unsigned int device; /* device/client number */4444 unsigned int subdevice; /* subdevice (substream) number */4545- const unsigned char *name; /* ASCII name of item */4545+ const char *name; /* ASCII name of item */4646 unsigned int index; /* index of item */4747 unsigned int access; /* access rights */4848 unsigned int count; /* count of same elements */
+2-1
include/sound/core.h
···332332#define snd_BUG() WARN(1, "BUG?\n")333333334334/**335335- * Suppress high rates of output when CONFIG_SND_DEBUG is enabled.335335+ * snd_printd_ratelimit - Suppress high rates of output when336336+ * CONFIG_SND_DEBUG is enabled.336337 */337338#define snd_printd_ratelimit() printk_ratelimit()338339
+2-2
include/sound/pcm.h
···12841284}1285128512861286/**12871287- * snd_pcm_sgbuf_chunk_size - Compute the max size that fits within the contig.12881288- * page from the given size12871287+ * snd_pcm_sgbuf_get_chunk_size - Compute the max size that fits within the12881288+ * contig. page from the given size12891289 * @substream: PCM substream12901290 * @ofs: byte offset12911291 * @size: byte size to examine
···337337 * already contains a warning when RCU is not watching, so no point338338 * in having another one here.339339 */340340+ lockdep_hardirqs_off(CALLER_ADDR0);340341 instrumentation_begin();341342 rcu_irq_enter_check_tick();342342- /* Use the combo lockdep/tracing function */343343- trace_hardirqs_off();343343+ trace_hardirqs_off_finish();344344 instrumentation_end();345345346346 return ret;
···21672167 /* ok, now we should be set up.. */21682168 p->pid = pid_nr(pid);21692169 if (clone_flags & CLONE_THREAD) {21702170- p->exit_signal = -1;21712170 p->group_leader = current->group_leader;21722171 p->tgid = current->tgid;21732172 } else {21742174- if (clone_flags & CLONE_PARENT)21752175- p->exit_signal = current->group_leader->exit_signal;21762176- else21772177- p->exit_signal = args->exit_signal;21782173 p->group_leader = p;21792174 p->tgid = p->pid;21802175 }···22132218 if (clone_flags & (CLONE_PARENT|CLONE_THREAD)) {22142219 p->real_parent = current->real_parent;22152220 p->parent_exec_id = current->parent_exec_id;22212221+ if (clone_flags & CLONE_THREAD)22222222+ p->exit_signal = -1;22232223+ else22242224+ p->exit_signal = current->group_leader->exit_signal;22162225 } else {22172226 p->real_parent = current;22182227 p->parent_exec_id = current->self_exec_id;22282228+ p->exit_signal = args->exit_signal;22192229 }2220223022212231 klp_copy_process(p);
+14-2
kernel/futex.c
···23802380 }2381238123822382 /*23832383- * Since we just failed the trylock; there must be an owner.23832383+ * The trylock just failed, so either there is an owner or23842384+ * there is a higher priority waiter than this one.23842385 */23852386 newowner = rt_mutex_owner(&pi_state->pi_mutex);23862386- BUG_ON(!newowner);23872387+ /*23882388+ * If the higher priority waiter has not yet taken over the23892389+ * rtmutex then newowner is NULL. We can't return here with23902390+ * that state because it's inconsistent vs. the user space23912391+ * state. So drop the locks and try again. It's a valid23922392+ * situation and not any different from the other retry23932393+ * conditions.23942394+ */23952395+ if (unlikely(!newowner)) {23962396+ err = -EAGAIN;23972397+ goto handle_err;23982398+ }23872399 } else {23882400 WARN_ON_ONCE(argowner != current);23892401 if (oldowner == current) {
+1-2
kernel/hung_task.c
···225225 * Process updating of timeout sysctl226226 */227227int proc_dohung_task_timeout_secs(struct ctl_table *table, int write,228228- void __user *buffer,229229- size_t *lenp, loff_t *ppos)228228+ void *buffer, size_t *lenp, loff_t *ppos)230229{231230 int ret;232231
···1249124912501250 *head = &kretprobe_inst_table[hash];12511251 hlist_lock = kretprobe_table_lock_ptr(hash);12521252- raw_spin_lock_irqsave(hlist_lock, *flags);12521252+ /*12531253+ * Nested is a workaround that will soon not be needed.12541254+ * There's other protections that make sure the same lock12551255+ * is not taken on the same CPU that lockdep is unaware of.12561256+ * Differentiate when it is taken in NMI context.12571257+ */12581258+ raw_spin_lock_irqsave_nested(hlist_lock, *flags, !!in_nmi());12531259}12541260NOKPROBE_SYMBOL(kretprobe_hash_lock);12551261···12641258__acquires(hlist_lock)12651259{12661260 raw_spinlock_t *hlist_lock = kretprobe_table_lock_ptr(hash);12671267- raw_spin_lock_irqsave(hlist_lock, *flags);12611261+ /*12621262+ * Nested is a workaround that will soon not be needed.12631263+ * There's other protections that make sure the same lock12641264+ * is not taken on the same CPU that lockdep is unaware of.12651265+ * Differentiate when it is taken in NMI context.12661266+ */12671267+ raw_spin_lock_irqsave_nested(hlist_lock, *flags, !!in_nmi());12681268+}12691269NOKPROBE_SYMBOL(kretprobe_table_lock);12701270···2040202820412029 /* TODO: consider to only swap the RA after the last pre_handler fired */20422030 hash = hash_ptr(current, KPROBE_HASH_BITS);20432043- raw_spin_lock_irqsave(&rp->lock, flags);20312031+ /*20322032+ * Nested is a workaround that will soon not be needed.20332033+ * There's other protections that make sure the same lock20342034+ * is not taken on the same CPU that lockdep is unaware of.20352035+ */20362036+ raw_spin_lock_irqsave_nested(&rp->lock, flags, 1);20442037 if (!hlist_empty(&rp->free_instances)) {20452038 ri = hlist_entry(rp->free_instances.first,20462039 struct kretprobe_instance, hlist);···20562039 ri->task = current;2057204020582041 if (rp->entry_handler && rp->entry_handler(ri, regs)) {20592059- raw_spin_lock_irqsave(&rp->lock, flags);20422042+ raw_spin_lock_irqsave_nested(&rp->lock, flags, 1);20602043 hlist_add_head(&ri->hlist, &rp->free_instances);20612044 raw_spin_unlock_irqrestore(&rp->lock, flags);20622045 return 0;
+2-1
kernel/kthread.c
···897897 /* Move the work from worker->delayed_work_list. */898898 WARN_ON_ONCE(list_empty(&work->node));899899 list_del_init(&work->node);900900- kthread_insert_work(worker, work, &worker->work_list);900900+ if (!work->canceling)901901+ kthread_insert_work(worker, work, &worker->work_list);901902902903 raw_spin_unlock_irqrestore(&worker->lock, flags);903904}
+10-12
kernel/sched/cpufreq_schedutil.c
···102102static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,103103 unsigned int next_freq)104104{105105- if (sg_policy->next_freq == next_freq &&106106- !cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS))107107- return false;105105+ if (!sg_policy->need_freq_update) {106106+ if (sg_policy->next_freq == next_freq)107107+ return false;108108+ } else {109109+ sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS);110110+ }108111109112 sg_policy->next_freq = next_freq;110113 sg_policy->last_freq_update_time = time;···165162166163 freq = map_util_freq(util, freq, max);167164168168- if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update &&169169- !cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS))165165+ if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)170166 return sg_policy->next_freq;171167172172- sg_policy->need_freq_update = false;173168 sg_policy->cached_raw_freq = freq;174169 return cpufreq_driver_resolve_freq(policy, freq);175170}···443442 struct sugov_policy *sg_policy = sg_cpu->sg_policy;444443 unsigned long util, max;445444 unsigned int next_f;446446- bool busy;447445 unsigned int cached_freq = sg_policy->cached_raw_freq;448446449447 sugov_iowait_boost(sg_cpu, time, flags);···453453 if (!sugov_should_update_freq(sg_policy, time))454454 return;455455456456- /* Limits may have changed, don't skip frequency update */457457- busy = !sg_policy->need_freq_update && sugov_cpu_is_busy(sg_cpu);458458-459456 util = sugov_get_util(sg_cpu);460457 max = sg_cpu->max;461458 util = sugov_iowait_apply(sg_cpu, time, util, max);···461464 * Do not reduce the frequency if the CPU has not been idle462465 * recently, as the reduction is likely to be premature then.463466 */464464- if (busy && next_f < sg_policy->next_freq) {467467+ if (sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq) {465468 next_f = sg_policy->next_freq;466469467470 /* Restore cached freq as next_freq has changed */···826829 sg_policy->next_freq = 0;827830 sg_policy->work_in_progress = false;828831 sg_policy->limits_changed = false;829829- sg_policy->need_freq_update = false;830832 sg_policy->cached_raw_freq = 0;833833+834834+ sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS);831835832836 for_each_cpu(cpu, policy->cpus) {833837 struct sugov_cpu *sg_cpu = &per_cpu(sugov_cpu, cpu);
+10-9
kernel/signal.c
···391391392392void task_join_group_stop(struct task_struct *task)393393{394394+ unsigned long mask = current->jobctl & JOBCTL_STOP_SIGMASK;395395+ struct signal_struct *sig = current->signal;396396+397397+ if (sig->group_stop_count) {398398+ sig->group_stop_count++;399399+ mask |= JOBCTL_STOP_CONSUME;400400+ } else if (!(sig->flags & SIGNAL_STOP_STOPPED))401401+ return;402402+394403 /* Have the new thread join an on-going signal group stop */395395- unsigned long jobctl = current->jobctl;396396- if (jobctl & JOBCTL_STOP_PENDING) {397397- struct signal_struct *sig = current->signal;398398- unsigned long signr = jobctl & JOBCTL_STOP_SIGMASK;399399- unsigned long gstop = JOBCTL_STOP_PENDING | JOBCTL_STOP_CONSUME;400400- if (task_set_jobctl_pending(task, signr | gstop)) {401401- sig->group_stop_count++;402402- }403403- }404404+ task_set_jobctl_pending(task, mask | JOBCTL_STOP_PENDING);404405}405406406407/*
+46-12
kernel/trace/ring_buffer.c
···438438};439439/*440440 * Used for which event context the event is in.441441- * NMI = 0442442- * IRQ = 1443443- * SOFTIRQ = 2444444- * NORMAL = 3441441+ * TRANSITION = 0442442+ * NMI = 1443443+ * IRQ = 2444444+ * SOFTIRQ = 3445445+ * NORMAL = 4445446 *446447 * See trace_recursive_lock() comment below for more details.447448 */448449enum {450450+ RB_CTX_TRANSITION,449451 RB_CTX_NMI,450452 RB_CTX_IRQ,451453 RB_CTX_SOFTIRQ,···30163014 * a bit of overhead in something as critical as function tracing,30173015 * we use a bitmask trick.30183016 *30193019- * bit 0 = NMI context30203020- * bit 1 = IRQ context30213021- * bit 2 = SoftIRQ context30223022- * bit 3 = normal context.30173017+ * bit 1 = NMI context30183018+ * bit 2 = IRQ context30193019+ * bit 3 = SoftIRQ context30203020+ * bit 4 = normal context.30233021 *30243022 * This works because this is the order of contexts that can30253023 * preempt other contexts. A SoftIRQ never preempts an IRQ···30423040 * The least significant bit can be cleared this way, and it30433041 * just so happens that it is the same bit corresponding to30443042 * the current context.30433043+ *30443044+ * Now the TRANSITION bit breaks the above slightly. The TRANSITION bit30453045+ * is set when a recursion is detected at the current context, and if30463046+ * the TRANSITION bit is already set, it will fail the recursion.30473047+ * This is needed because there's a lag between the changing of30483048+ * interrupt context and updating the preempt count. In this case,30493049+ * a false positive will be found. To handle this, one extra recursion30503050+ * is allowed, and this is done by the TRANSITION bit. If the TRANSITION30513051+ * bit is already set, then it is considered a recursion and the function30523052+ * ends. Otherwise, the TRANSITION bit is set, and that bit is returned.30533053+ *30543054+ * On the trace_recursive_unlock(), the TRANSITION bit will be the first30553055+ * to be cleared. Even if it wasn't the context that set it. That is,30563056+ * if an interrupt comes in while NORMAL bit is set and the ring buffer30573057+ * is called before preempt_count() is updated, since the check will30583058+ * be on the NORMAL bit, the TRANSITION bit will then be set. If an30593059+ * NMI then comes in, it will set the NMI bit, but when the NMI code30603060+ * does the trace_recursive_unlock() it will clear the TRANSITION bit30613061+ * and leave the NMI bit set. But this is fine, because the interrupt30623062+ * code that set the TRANSITION bit will then clear the NMI bit when it30633063+ * calls trace_recursive_unlock(). If another NMI comes in, it will30643064+ * set the TRANSITION bit and continue.30653065+ *30663066+ * Note: The TRANSITION bit only handles a single transition between context.30453067 */3046306830473069static __always_inline int···30813055 bit = pc & NMI_MASK ? RB_CTX_NMI :30823056 pc & HARDIRQ_MASK ? RB_CTX_IRQ : RB_CTX_SOFTIRQ;3083305730843084- if (unlikely(val & (1 << (bit + cpu_buffer->nest))))30853085- return 1;30583058+ if (unlikely(val & (1 << (bit + cpu_buffer->nest)))) {30593059+ /*30603060+ * It is possible that this was called by transitioning30613061+ * between interrupt context, and preempt_count() has not30623062+ * been updated yet. In this case, use the TRANSITION bit.30633063+ */30643064+ bit = RB_CTX_TRANSITION;30653065+ if (val & (1 << (bit + cpu_buffer->nest)))30663066+ return 1;30673067+ }3086306830873069 val |= (1 << (bit + cpu_buffer->nest));30883070 cpu_buffer->current_context = val;···31053071 cpu_buffer->current_context - (1 << cpu_buffer->nest);31063072}3107307331083108-/* The recursive locking above uses 4 bits */31093109-#define NESTED_BITS 430743074+/* The recursive locking above uses 5 bits */30753075+#define NESTED_BITS 53110307631113077/**31123078 * ring_buffer_nest_start - Allow to trace while nested
+3-3
kernel/trace/trace.c
···27502750 /*27512751 * If tracing is off, but we have triggers enabled27522752 * we still need to look at the event data. Use the temp_buffer27532753- * to store the trace event for the tigger to use. It's recusive27532753+ * to store the trace event for the trigger to use. It's recursive27542754 * safe and will not be recorded anywhere.27552755 */27562756 if (!entry && trace_file->flags & EVENT_FILE_FL_TRIGGER_COND) {···29522952 stackidx = __this_cpu_inc_return(ftrace_stack_reserve) - 1;2953295329542954 /* This should never happen. If it does, yell once and skip */29552955- if (WARN_ON_ONCE(stackidx > FTRACE_KSTACK_NESTING))29552955+ if (WARN_ON_ONCE(stackidx >= FTRACE_KSTACK_NESTING))29562956 goto out;2957295729582958 /*···3132313231333133 /* Interrupts must see nesting incremented before we use the buffer */31343134 barrier();31353135- return &buffer->buffer[buffer->nesting][0];31353135+ return &buffer->buffer[buffer->nesting - 1][0];31363136}3137313731383138static void put_trace_buf(void)
+23-3
kernel/trace/trace.h
···637637 * function is called to clear it.638638 */639639 TRACE_GRAPH_NOTRACE_BIT,640640+641641+ /*642642+ * When transitioning between context, the preempt_count() may643643+ * not be correct. Allow for a single recursion to cover this case.644644+ */645645+ TRACE_TRANSITION_BIT,640646};641647642648#define trace_recursion_set(bit) do { (current)->trace_recursion |= (1<<(bit)); } while (0)···697691 return 0;698692699693 bit = trace_get_context_bit() + start;700700- if (unlikely(val & (1 << bit)))701701- return -1;694694+ if (unlikely(val & (1 << bit))) {695695+ /*696696+ * It could be that preempt_count has not been updated during697697+ * a switch between contexts. Allow for a single recursion.698698+ */699699+ bit = TRACE_TRANSITION_BIT;700700+ if (trace_recursion_test(bit))701701+ return -1;702702+ trace_recursion_set(bit);703703+ barrier();704704+ return bit + 1;705705+ }706706+707707+ /* Normal check passed, clear the transition to allow it again */708708+ trace_recursion_clear(TRACE_TRANSITION_BIT);702709703710 val |= 1 << bit;704711 current->trace_recursion = val;705712 barrier();706713707707- return bit;714714+ return bit + 1;708715}709716710717static __always_inline void trace_clear_recursion(int bit)···727708 if (!bit)728709 return;729710711711+ bit--;730712 bit = 1 << bit;731713 val &= ~bit;732714
+7-10
kernel/trace/trace_events_synth.c
···584584{585585 struct synth_field *field;586586 const char *prefix = NULL, *field_type = argv[0], *field_name, *array;587587- int len, ret = 0;587587+ int len, ret = -ENOMEM;588588 struct seq_buf s;589589 ssize_t size;590590···617617 len--;618618619619 field->name = kmemdup_nul(field_name, len, GFP_KERNEL);620620- if (!field->name) {621621- ret = -ENOMEM;620620+ if (!field->name)622621 goto free;623623- }622622+624623 if (!is_good_name(field->name)) {625624 synth_err(SYNTH_ERR_BAD_NAME, errpos(field_name));626625 ret = -EINVAL;···637638 len += strlen(prefix);638639639640 field->type = kzalloc(len, GFP_KERNEL);640640- if (!field->type) {641641- ret = -ENOMEM;641641+ if (!field->type)642642 goto free;643643- }643643+644644 seq_buf_init(&s, field->type, len);645645 if (prefix)646646 seq_buf_puts(&s, prefix);···651653 }652654 if (WARN_ON_ONCE(!seq_buf_buffer_left(&s)))653655 goto free;656656+654657 s.buffer[s.len] = '\0';655658656659 size = synth_field_size(field->type);···665666666667 len = sizeof("__data_loc ") + strlen(field->type) + 1;667668 type = kzalloc(len, GFP_KERNEL);668668- if (!type) {669669- ret = -ENOMEM;669669+ if (!type)670670 goto free;671671- }672671673672 seq_buf_init(&s, type, len);674673 seq_buf_puts(&s, "__data_loc ");
+7-2
kernel/trace/trace_selftest.c
···492492 unregister_ftrace_function(&test_rec_probe);493493494494 ret = -1;495495- if (trace_selftest_recursion_cnt != 1) {496496- pr_cont("*callback not called once (%d)* ",495495+ /*496496+ * Recursion allows for transitions between context,497497+ * and may call the callback twice.498498+ */499499+ if (trace_selftest_recursion_cnt != 1 &&500500+ trace_selftest_recursion_cnt != 2) {501501+ pr_cont("*callback not called once (or twice) (%d)* ",497502 trace_selftest_recursion_cnt);498503 goto out;499504 }
-4
lib/crc32test.c
···683683684684 /* reduce OS noise */685685 local_irq_save(flags);686686- local_irq_disable();687686688687 nsec = ktime_get_ns();689688 for (i = 0; i < 100; i++) {···693694 nsec = ktime_get_ns() - nsec;694695695696 local_irq_restore(flags);696696- local_irq_enable();697697698698 pr_info("crc32c: CRC_LE_BITS = %d\n", CRC_LE_BITS);699699···766768767769 /* reduce OS noise */768770 local_irq_save(flags);769769- local_irq_disable();770771771772 nsec = ktime_get_ns();772773 for (i = 0; i < 100; i++) {···780783 nsec = ktime_get_ns() - nsec;781784782785 local_irq_restore(flags);783783- local_irq_enable();784786785787 pr_info("crc32: CRC_LE_BITS = %d, CRC_BE BITS = %d\n",786788 CRC_LE_BITS, CRC_BE_BITS);
···216216 u64 words[2];217217 } *ptr1, *ptr2;218218219219+ /* This test is specifically crafted for the generic mode. */220220+ if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {221221+ kunit_info(test, "CONFIG_KASAN_GENERIC required\n");222222+ return;223223+ }224224+219225 ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);220226 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);221227···231225 KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);232226 kfree(ptr1);233227 kfree(ptr2);228228+}229229+230230+static void kmalloc_uaf_16(struct kunit *test)231231+{232232+ struct {233233+ u64 words[2];234234+ } *ptr1, *ptr2;235235+236236+ ptr1 = kmalloc(sizeof(*ptr1), GFP_KERNEL);237237+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);238238+239239+ ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);240240+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);241241+ kfree(ptr2);242242+243243+ KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);244244+ kfree(ptr1);234245}235246236247static void kmalloc_oob_memset_2(struct kunit *test)···452429 volatile int i = 3;453430 char *p = &global_array[ARRAY_SIZE(global_array) + i];454431432432+ /* Only generic mode instruments globals. */433433+ if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {434434+ kunit_info(test, "CONFIG_KASAN_GENERIC required");435435+ return;436436+ }437437+455438 KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);456439}457440···496467 char alloca_array[i];497468 char *p = alloca_array - 1;498469470470+ /* Only generic mode instruments dynamic allocas. */471471+ if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {472472+ kunit_info(test, "CONFIG_KASAN_GENERIC required");473473+ return;474474+ }475475+499476 if (!IS_ENABLED(CONFIG_KASAN_STACK)) {500477 kunit_info(test, "CONFIG_KASAN_STACK is not enabled");501478 return;···515480 volatile int i = 10;516481 char alloca_array[i];517482 char *p = alloca_array + i;483483+484484+ /* Only generic mode instruments dynamic allocas. */485485+ if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {486486+ kunit_info(test, "CONFIG_KASAN_GENERIC required");487487+ return;488488+ }518489519490 if (!IS_ENABLED(CONFIG_KASAN_STACK)) {520491 kunit_info(test, "CONFIG_KASAN_STACK is not enabled");···592551 return;593552 }594553554554+ if (OOB_TAG_OFF)555555+ size = round_up(size, OOB_TAG_OFF);556556+595557 ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);596558 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);597559···616572 "str* functions are not instrumented with CONFIG_AMD_MEM_ENCRYPT");617573 return;618574 }575575+576576+ if (OOB_TAG_OFF)577577+ size = round_up(size, OOB_TAG_OFF);619578620579 ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);621580 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);···666619 KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1));667620}668621669669-static void kasan_bitops(struct kunit *test)622622+static void kasan_bitops_modify(struct kunit *test, int nr, void *addr)670623{624624+ KUNIT_EXPECT_KASAN_FAIL(test, set_bit(nr, addr));625625+ KUNIT_EXPECT_KASAN_FAIL(test, __set_bit(nr, addr));626626+ KUNIT_EXPECT_KASAN_FAIL(test, clear_bit(nr, addr));627627+ KUNIT_EXPECT_KASAN_FAIL(test, __clear_bit(nr, addr));628628+ KUNIT_EXPECT_KASAN_FAIL(test, clear_bit_unlock(nr, addr));629629+ KUNIT_EXPECT_KASAN_FAIL(test, __clear_bit_unlock(nr, addr));630630+ KUNIT_EXPECT_KASAN_FAIL(test, change_bit(nr, addr));631631+ KUNIT_EXPECT_KASAN_FAIL(test, __change_bit(nr, addr));632632+}633633+634634+static void kasan_bitops_test_and_modify(struct kunit *test, int nr, void *addr)635635+{636636+ KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit(nr, addr));637637+ KUNIT_EXPECT_KASAN_FAIL(test, __test_and_set_bit(nr, addr));638638+ KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr));639639+ KUNIT_EXPECT_KASAN_FAIL(test, test_and_clear_bit(nr, addr));640640+ KUNIT_EXPECT_KASAN_FAIL(test, __test_and_clear_bit(nr, addr));641641+ KUNIT_EXPECT_KASAN_FAIL(test, test_and_change_bit(nr, addr));642642+ KUNIT_EXPECT_KASAN_FAIL(test, __test_and_change_bit(nr, addr));643643+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr));644644+645645+#if defined(clear_bit_unlock_is_negative_byte)646646+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result =647647+ clear_bit_unlock_is_negative_byte(nr, addr));648648+#endif649649+}650650+651651+static void kasan_bitops_generic(struct kunit *test)652652+{653653+ long *bits;654654+655655+ /* This test is specifically crafted for the generic mode. */656656+ if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {657657+ kunit_info(test, "CONFIG_KASAN_GENERIC required\n");658658+ return;659659+ }660660+671661 /*672662 * Allocate 1 more byte, which causes kzalloc to round up to 16-bytes;673663 * this way we do not actually corrupt other memory.674664 */675675- long *bits = kzalloc(sizeof(*bits) + 1, GFP_KERNEL);665665+ bits = kzalloc(sizeof(*bits) + 1, GFP_KERNEL);676666 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits);677667678668 /*···717633 * below accesses are still out-of-bounds, since bitops are defined to718634 * operate on the whole long the bit is in.719635 */720720- KUNIT_EXPECT_KASAN_FAIL(test, set_bit(BITS_PER_LONG, bits));721721-722722- KUNIT_EXPECT_KASAN_FAIL(test, __set_bit(BITS_PER_LONG, bits));723723-724724- KUNIT_EXPECT_KASAN_FAIL(test, clear_bit(BITS_PER_LONG, bits));725725-726726- KUNIT_EXPECT_KASAN_FAIL(test, __clear_bit(BITS_PER_LONG, bits));727727-728728- KUNIT_EXPECT_KASAN_FAIL(test, clear_bit_unlock(BITS_PER_LONG, bits));729729-730730- KUNIT_EXPECT_KASAN_FAIL(test, __clear_bit_unlock(BITS_PER_LONG, bits));731731-732732- KUNIT_EXPECT_KASAN_FAIL(test, change_bit(BITS_PER_LONG, bits));733733-734734- KUNIT_EXPECT_KASAN_FAIL(test, __change_bit(BITS_PER_LONG, bits));636636+ kasan_bitops_modify(test, BITS_PER_LONG, bits);735637736638 /*737639 * Below calls try to access bit beyond allocated memory.738640 */739739- KUNIT_EXPECT_KASAN_FAIL(test,740740- test_and_set_bit(BITS_PER_LONG + BITS_PER_BYTE, bits));641641+ kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, bits);741642742742- KUNIT_EXPECT_KASAN_FAIL(test,743743- __test_and_set_bit(BITS_PER_LONG + BITS_PER_BYTE, bits));643643+ kfree(bits);644644+}744645745745- KUNIT_EXPECT_KASAN_FAIL(test,746746- test_and_set_bit_lock(BITS_PER_LONG + BITS_PER_BYTE, bits));646646+static void kasan_bitops_tags(struct kunit *test)647647+{648648+ long *bits;747649748748- KUNIT_EXPECT_KASAN_FAIL(test,749749- test_and_clear_bit(BITS_PER_LONG + BITS_PER_BYTE, bits));650650+ /* This test is specifically crafted for the tag-based mode. */651651+ if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {652652+ kunit_info(test, "CONFIG_KASAN_SW_TAGS required\n");653653+ return;654654+ }750655751751- KUNIT_EXPECT_KASAN_FAIL(test,752752- __test_and_clear_bit(BITS_PER_LONG + BITS_PER_BYTE, bits));656656+ /* Allocation size will be rounded up to granule size, which is 16. */657657+ bits = kzalloc(sizeof(*bits), GFP_KERNEL);658658+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits);753659754754- KUNIT_EXPECT_KASAN_FAIL(test,755755- test_and_change_bit(BITS_PER_LONG + BITS_PER_BYTE, bits));660660+ /* Do the accesses past the 16 allocated bytes. */661661+ kasan_bitops_modify(test, BITS_PER_LONG, &bits[1]);662662+ kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, &bits[1]);756663757757- KUNIT_EXPECT_KASAN_FAIL(test,758758- __test_and_change_bit(BITS_PER_LONG + BITS_PER_BYTE, bits));759759-760760- KUNIT_EXPECT_KASAN_FAIL(test,761761- kasan_int_result =762762- test_bit(BITS_PER_LONG + BITS_PER_BYTE, bits));763763-764764-#if defined(clear_bit_unlock_is_negative_byte)765765- KUNIT_EXPECT_KASAN_FAIL(test,766766- kasan_int_result = clear_bit_unlock_is_negative_byte(767767- BITS_PER_LONG + BITS_PER_BYTE, bits));768768-#endif769664 kfree(bits);770665}771666···791728 KUNIT_CASE(kmalloc_oob_krealloc_more),792729 KUNIT_CASE(kmalloc_oob_krealloc_less),793730 KUNIT_CASE(kmalloc_oob_16),731731+ KUNIT_CASE(kmalloc_uaf_16),794732 KUNIT_CASE(kmalloc_oob_in_memset),795733 KUNIT_CASE(kmalloc_oob_memset_2),796734 KUNIT_CASE(kmalloc_oob_memset_4),···815751 KUNIT_CASE(kasan_memchr),816752 KUNIT_CASE(kasan_memcmp),817753 KUNIT_CASE(kasan_strings),818818- KUNIT_CASE(kasan_bitops),754754+ KUNIT_CASE(kasan_bitops_generic),755755+ KUNIT_CASE(kasan_bitops_tags),819756 KUNIT_CASE(kmalloc_double_kzfree),820757 KUNIT_CASE(vmalloc_oob),821758 {}
+11-9
mm/hugetlb.c
···648648 }649649650650 del += t - f;651651+ hugetlb_cgroup_uncharge_file_region(652652+ resv, rg, t - f);651653652654 /* New entry for end of split region */653655 nrg->from = t;···661659662660 /* Original entry is trimmed */663661 rg->to = f;664664-665665- hugetlb_cgroup_uncharge_file_region(666666- resv, rg, nrg->to - nrg->from);667662668663 list_add(&nrg->link, &rg->link);669664 nrg = NULL;···677678 }678679679680 if (f <= rg->from) { /* Trim beginning of region */680680- del += t - rg->from;681681- rg->from = t;682682-683681 hugetlb_cgroup_uncharge_file_region(resv, rg,684682 t - rg->from);685685- } else { /* Trim end of region */686686- del += rg->to - f;687687- rg->to = f;688683684684+ del += t - rg->from;685685+ rg->from = t;686686+ } else { /* Trim end of region */689687 hugetlb_cgroup_uncharge_file_region(resv, rg,690688 rg->to - f);689689+690690+ del += rg->to - f;691691+ rg->to = f;691692 }692693 }693694···2442244324432444 rsv_adjust = hugepage_subpool_put_pages(spool, 1);24442445 hugetlb_acct_memory(h, -rsv_adjust);24462446+ if (deferred_reserve)24472447+ hugetlb_cgroup_uncharge_page_rsvd(hstate_index(h),24482448+ pages_per_huge_page(h), page);24452449 }24462450 return page;24472451
+18-7
mm/memcontrol.c
···41104110 (u64)memsw * PAGE_SIZE);4111411141124112 for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {41134113+ unsigned long nr;41144114+41134115 if (memcg1_stats[i] == MEMCG_SWAP && !do_memsw_account())41144116 continue;41174117+ nr = memcg_page_state(memcg, memcg1_stats[i]);41184118+#ifdef CONFIG_TRANSPARENT_HUGEPAGE41194119+ if (memcg1_stats[i] == NR_ANON_THPS)41204120+ nr *= HPAGE_PMD_NR;41214121+#endif41154122 seq_printf(m, "total_%s %llu\n", memcg1_stat_names[i],41164116- (u64)memcg_page_state(memcg, memcg1_stats[i]) *41174117- PAGE_SIZE);41234123+ (u64)nr * PAGE_SIZE);41184124 }4119412541204126 for (i = 0; i < ARRAY_SIZE(memcg1_events); i++)···53455339 memcg->swappiness = mem_cgroup_swappiness(parent);53465340 memcg->oom_kill_disable = parent->oom_kill_disable;53475341 }53485348- if (parent && parent->use_hierarchy) {53425342+ if (!parent) {53435343+ page_counter_init(&memcg->memory, NULL);53445344+ page_counter_init(&memcg->swap, NULL);53455345+ page_counter_init(&memcg->kmem, NULL);53465346+ page_counter_init(&memcg->tcpmem, NULL);53475347+ } else if (parent->use_hierarchy) {53495348 memcg->use_hierarchy = true;53505349 page_counter_init(&memcg->memory, &parent->memory);53515350 page_counter_init(&memcg->swap, &parent->swap);53525351 page_counter_init(&memcg->kmem, &parent->kmem);53535352 page_counter_init(&memcg->tcpmem, &parent->tcpmem);53545353 } else {53555355- page_counter_init(&memcg->memory, NULL);53565356- page_counter_init(&memcg->swap, NULL);53575357- page_counter_init(&memcg->kmem, NULL);53585358- page_counter_init(&memcg->tcpmem, NULL);53545354+ page_counter_init(&memcg->memory, &root_mem_cgroup->memory);53555355+ page_counter_init(&memcg->swap, &root_mem_cgroup->swap);53565356+ page_counter_init(&memcg->kmem, &root_mem_cgroup->kmem);53575357+ page_counter_init(&memcg->tcpmem, &root_mem_cgroup->tcpmem);53595358 /*53605359 * Deeper hierachy with use_hierarchy == false doesn't make53615360 * much sense so let cgroup subsystem know about this
···6262 communication between CAN nodes via two defined CAN Identifiers.6363 As CAN frames can only transport a small amount of data bytes6464 (max. 8 bytes for 'classic' CAN and max. 64 bytes for CAN FD) this6565- segmentation is needed to transport longer PDUs as needed e.g. for6666- vehicle diagnosis (UDS, ISO 14229) or IP-over-CAN traffic.6565+ segmentation is needed to transport longer Protocol Data Units (PDU)6666+ as needed e.g. for vehicle diagnosis (UDS, ISO 14229) or IP-over-CAN6767+ traffic.6768 This protocol driver implements data transfers according to6869 ISO 15765-2:2016 for 'classic' CAN and CAN FD frame types.6970 If you want to perform automotive vehicle diagnostic services (UDS),
···475475 goto out_release_sock;476476 }477477478478+ if (!(ndev->flags & IFF_UP)) {479479+ dev_put(ndev);480480+ ret = -ENETDOWN;481481+ goto out_release_sock;482482+ }483483+478484 priv = j1939_netdev_start(ndev);479485 dev_put(ndev);480486 if (IS_ERR(priv)) {
+4-2
net/can/proc.c
···462462 */463463void can_remove_proc(struct net *net)464464{465465+ if (!net->can.proc_dir)466466+ return;467467+465468 if (net->can.pde_stats)466469 remove_proc_entry(CAN_PROC_STATS, net->can.proc_dir);467470···489486 if (net->can.pde_rcvlist_sff)490487 remove_proc_entry(CAN_PROC_RCVLIST_SFF, net->can.proc_dir);491488492492- if (net->can.proc_dir)493493- remove_proc_entry("can", net->proc_net);489489+ remove_proc_entry("can", net->proc_net);494490}
···158158 tp = skb_header_pointer(skb,159159 ptr+offsetof(struct icmp6hdr, icmp6_type),160160 sizeof(_type), &_type);161161- if (!tp || !(*tp & ICMPV6_INFOMSG_MASK))161161+162162+ /* Based on RFC 8200, Section 4.5 Fragment Header, return163163+ * false if this is a fragment packet with no icmp header info.164164+ */165165+ if (!tp && frag_off != 0)166166+ return false;167167+ else if (!tp || !(*tp & ICMPV6_INFOMSG_MASK))162168 return true;163169 }164170 return false;
···4242#include <linux/skbuff.h>4343#include <linux/slab.h>4444#include <linux/export.h>4545+#include <linux/tcp.h>4646+#include <linux/udp.h>45474648#include <net/sock.h>4749#include <net/snmp.h>···324322 struct frag_queue *fq;325323 const struct ipv6hdr *hdr = ipv6_hdr(skb);326324 struct net *net = dev_net(skb_dst(skb)->dev);327327- int iif;325325+ __be16 frag_off;326326+ int iif, offset;327327+ u8 nexthdr;328328329329 if (IP6CB(skb)->flags & IP6SKB_FRAGMENTED)330330 goto fail_hdr;···353349 IP6CB(skb)->nhoff = (u8 *)fhdr - skb_network_header(skb);354350 IP6CB(skb)->flags |= IP6SKB_FRAGMENTED;355351 return 1;352352+ }353353+354354+ /* RFC 8200, Section 4.5 Fragment Header:355355+ * If the first fragment does not include all headers through an356356+ * Upper-Layer header, then that fragment should be discarded and357357+ * an ICMP Parameter Problem, Code 3, message should be sent to358358+ * the source of the fragment, with the Pointer field set to zero.359359+ */360360+ nexthdr = hdr->nexthdr;361361+ offset = ipv6_skip_exthdr(skb, skb_transport_offset(skb), &nexthdr, &frag_off);362362+ if (offset >= 0) {363363+ /* Check some common protocols' header */364364+ if (nexthdr == IPPROTO_TCP)365365+ offset += sizeof(struct tcphdr);366366+ else if (nexthdr == IPPROTO_UDP)367367+ offset += sizeof(struct udphdr);368368+ else if (nexthdr == IPPROTO_ICMPV6)369369+ offset += sizeof(struct icmp6hdr);370370+ else371371+ offset += 1;372372+373373+ if (!(frag_off & htons(IP6_OFFSET)) && offset > skb->len) {374374+ __IP6_INC_STATS(net, __in6_dev_get_safely(skb->dev),375375+ IPSTATS_MIB_INHDRERRORS);376376+ icmpv6_param_prob(skb, ICMPV6_HDR_INCOMP, 0);377377+ return -1;378378+ }356379 }357380358381 iif = skb->dev ? skb->dev->ifindex : 0;
···258258 */259259void sta_info_free(struct ieee80211_local *local, struct sta_info *sta)260260{261261+ /*262262+ * If we had used sta_info_pre_move_state() then we might not263263+ * have gone through the state transitions down again, so do264264+ * it here now (and warn if it's inserted).265265+ *266266+ * This will clear state such as fast TX/RX that may have been267267+ * allocated during state transitions.268268+ */269269+ while (sta->sta_state > IEEE80211_STA_NONE) {270270+ int ret;271271+272272+ WARN_ON_ONCE(test_sta_flag(sta, WLAN_STA_INSERTED));273273+274274+ ret = sta_info_move_state(sta, sta->sta_state - 1);275275+ if (WARN_ONCE(ret, "sta_info_move_state() returned %d\n", ret))276276+ break;277277+ }278278+261279 if (sta->rate_ctrl)262280 rate_control_free_sta(sta);263281
+8-1
net/mac80211/sta_info.h
···785785void sta_info_stop(struct ieee80211_local *local);786786787787/**788788- * sta_info_flush - flush matching STA entries from the STA table788788+ * __sta_info_flush - flush matching STA entries from the STA table789789 *790790 * Returns the number of removed STA entries.791791 *···794794 */795795int __sta_info_flush(struct ieee80211_sub_if_data *sdata, bool vlans);796796797797+/**798798+ * sta_info_flush - flush matching STA entries from the STA table799799+ *800800+ * Returns the number of removed STA entries.801801+ *802802+ * @sdata: sdata to remove all stations from803803+ */797804static inline int sta_info_flush(struct ieee80211_sub_if_data *sdata)798805{799806 return __sta_info_flush(sdata, false);
+28-16
net/mac80211/tx.c
···1942194219431943/* device xmit handlers */1944194419451945+enum ieee80211_encrypt {19461946+ ENCRYPT_NO,19471947+ ENCRYPT_MGMT,19481948+ ENCRYPT_DATA,19491949+};19501950+19451951static int ieee80211_skb_resize(struct ieee80211_sub_if_data *sdata,19461952 struct sk_buff *skb,19471947- int head_need, bool may_encrypt)19531953+ int head_need,19541954+ enum ieee80211_encrypt encrypt)19481955{19491956 struct ieee80211_local *local = sdata->local;19501950- struct ieee80211_hdr *hdr;19511957 bool enc_tailroom;19521958 int tail_need = 0;1953195919541954- hdr = (struct ieee80211_hdr *) skb->data;19551955- enc_tailroom = may_encrypt &&19561956- (sdata->crypto_tx_tailroom_needed_cnt ||19571957- ieee80211_is_mgmt(hdr->frame_control));19601960+ enc_tailroom = encrypt == ENCRYPT_MGMT ||19611961+ (encrypt == ENCRYPT_DATA &&19621962+ sdata->crypto_tx_tailroom_needed_cnt);1958196319591964 if (enc_tailroom) {19601965 tail_need = IEEE80211_ENCRYPT_TAILROOM;···19901985{19911986 struct ieee80211_local *local = sdata->local;19921987 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);19931993- struct ieee80211_hdr *hdr;19881988+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;19941989 int headroom;19951995- bool may_encrypt;19901990+ enum ieee80211_encrypt encrypt;1996199119971997- may_encrypt = !(info->flags & IEEE80211_TX_INTFL_DONT_ENCRYPT);19921992+ if (info->flags & IEEE80211_TX_INTFL_DONT_ENCRYPT)19931993+ encrypt = ENCRYPT_NO;19941994+ else if (ieee80211_is_mgmt(hdr->frame_control))19951995+ encrypt = ENCRYPT_MGMT;19961996+ else19971997+ encrypt = ENCRYPT_DATA;1998199819991999 headroom = local->tx_headroom;20002000- if (may_encrypt)20002000+ if (encrypt != ENCRYPT_NO)20012001 headroom += sdata->encrypt_headroom;20022002 headroom -= skb_headroom(skb);20032003 headroom = max_t(int, 0, headroom);2004200420052005- if (ieee80211_skb_resize(sdata, skb, headroom, may_encrypt)) {20052005+ if (ieee80211_skb_resize(sdata, skb, headroom, encrypt)) {20062006 ieee80211_free_txskb(&local->hw, skb);20072007 return;20082008 }2009200920102010+ /* reload after potential resize */20102011 hdr = (struct ieee80211_hdr *) skb->data;20112012 info->control.vif = &sdata->vif;20122013···28392828 head_need += sdata->encrypt_headroom;28402829 head_need += local->tx_headroom;28412830 head_need = max_t(int, 0, head_need);28422842- if (ieee80211_skb_resize(sdata, skb, head_need, true)) {28312831+ if (ieee80211_skb_resize(sdata, skb, head_need, ENCRYPT_DATA)) {28432832 ieee80211_free_txskb(&local->hw, skb);28442833 skb = NULL;28452834 return ERR_PTR(-ENOMEM);···35133502 if (unlikely(ieee80211_skb_resize(sdata, skb,35143503 max_t(int, extra_head + hw_headroom -35153504 skb_headroom(skb), 0),35163516- false))) {35053505+ ENCRYPT_NO))) {35173506 kfree_skb(skb);35183507 return true;35193508 }···36303619 tx.skb = skb;36313620 tx.sdata = vif_to_sdata(info->control.vif);3632362136333633- if (txq->sta && !(info->flags & IEEE80211_TX_CTL_INJECTED)) {36223622+ if (txq->sta) {36343623 tx.sta = container_of(txq->sta, struct sta_info, sta);36353624 /*36363625 * Drop unicast frames to unauthorised stations unless they are36373637- * EAPOL frames from the local station.36263626+ * injected frames or EAPOL frames from the local station.36383627 */36393639- if (unlikely(ieee80211_is_data(hdr->frame_control) &&36283628+ if (unlikely(!(info->flags & IEEE80211_TX_CTL_INJECTED) &&36293629+ ieee80211_is_data(hdr->frame_control) &&36403630 !ieee80211_vif_is_mesh(&tx.sdata->vif) &&36413631 tx.sdata->vif.type != NL80211_IFTYPE_OCB &&36423632 !is_multicast_ether_addr(hdr->addr1) &&
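The bool-to-enum change in the mac80211 hunk above exists because the tailroom question now has three answers instead of two; the rule it encodes can be condensed into a small predicate (a compile-and-test sketch mirroring the new enc_tailroom expression, not the kernel code itself):

```c
#include <assert.h>
#include <stdbool.h>

enum ieee80211_encrypt {
	ENCRYPT_NO,
	ENCRYPT_MGMT,
	ENCRYPT_DATA,
};

/* Mirrors the enc_tailroom expression in ieee80211_skb_resize():
 * management frames always need encryption tailroom, data frames only
 * while some installed key requires it (the counter), plaintext never. */
static bool needs_enc_tailroom(enum ieee80211_encrypt encrypt,
                               unsigned int crypto_tx_tailroom_needed_cnt)
{
	return encrypt == ENCRYPT_MGMT ||
	       (encrypt == ENCRYPT_DATA && crypto_tx_tailroom_needed_cnt);
}
```

The old bool could not distinguish a management frame (which always needs tailroom) from a data frame that only needs it while the counter is nonzero, which is why the callers now classify frames up front.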
+1-1
net/mptcp/token.c
···291291{292292 struct mptcp_sock *ret = NULL;293293 struct hlist_nulls_node *pos;294294- int slot, num;294294+ int slot, num = 0;295295296296 for (slot = *s_slot; slot <= token_mask; *s_num = 0, slot++) {297297 struct token_bucket *bucket = &token_hash[slot];
···38853885 * P2P Device and NAN do not have a netdev, so don't go38863886 * through the netdev notifier and must be added here38873887 */38883888- cfg80211_init_wdev(rdev, wdev);38883888+ cfg80211_init_wdev(wdev);38893889+ cfg80211_register_wdev(rdev, wdev);38893890 break;38903891 default:38913892 break;
···287287sub output_rest {288288 create_labels();289289290290+ my $part = "";291291+290292 foreach my $what (sort {291293 ($data{$a}->{type} eq "File") cmp ($data{$b}->{type} eq "File") ||292294 $a cmp $b···308306 $w =~ s/([\(\)\_\-\*\=\^\~\\])/\\$1/g;309307310308 if ($type ne "File") {309309+ my $cur_part = $what;310310+ if ($what =~ '/') {311311+ if ($what =~ m#^(\/?(?:[\w\-]+\/?){1,2})#) {312312+ $cur_part = "Symbols under $1";313313+ $cur_part =~ s,/$,,;314314+ }315315+ }316316+317317+ if ($cur_part ne "" && $part ne $cur_part) {318318+ $part = $cur_part;319319+ my $bar = $part;320320+ $bar =~ s/./-/g;321321+ print "$part\n$bar\n\n";322322+ }323323+311324 printf ".. _%s:\n\n", $data{$what}->{label};312325313326 my @names = split /, /,$w;···369352370353 if (!($desc =~ /^\s*$/)) {371354 if ($description_is_rst) {355355+ # Remove title markups from the description356356+ # Having titles inside ABI files will only work if extra357357+ # care would be taken in order to strictly follow the same358358+ # level order for each markup.359359+ $desc =~ s/\n[\-\*\=\^\~]+\n/\n\n/g;360360+372361 # Enrich text by creating cross-references373362374363 $desc =~ s,Documentation/(?!devicetree)(\S+)\.rst,:doc:`/$1`,g;
+15-6
scripts/kernel-doc
···10921092 print "\n\n.. c:type:: " . $name . "\n\n";10931093 } else {10941094 my $name = $args{'struct'};10951095- print "\n\n.. c:struct:: " . $name . "\n\n";10951095+ if ($args{'type'} eq 'union') {10961096+ print "\n\n.. c:union:: " . $name . "\n\n";10971097+ } else {10981098+ print "\n\n.. c:struct:: " . $name . "\n\n";10991099+ }10961100 }10971101 print_lineno($declaration_start_line);10981102 $lineprefix = " ";···14311427 }14321428}1433142914301430+my $typedef_type = qr { ((?:\s+[\w\*]+){1,8})\s* }x;14311431+my $typedef_ident = qr { \*?\s*(\w\S+)\s* }x;14321432+my $typedef_args = qr { \s*\((.*)\); }x;14331433+14341434+my $typedef1 = qr { typedef$typedef_type\($typedef_ident\)$typedef_args }x;14351435+my $typedef2 = qr { typedef$typedef_type$typedef_ident$typedef_args }x;14361436+14341437sub dump_typedef($$) {14351438 my $x = shift;14361439 my $file = shift;1437144014381441 $x =~ s@/\*.*?\*/@@gos; # strip comments.1439144214401440- # Parse function prototypes14411441- if ($x =~ /typedef\s+(\w+)\s*\(\*\s*(\w\S+)\s*\)\s*\((.*)\);/ ||14421442- $x =~ /typedef\s+(\w+)\s*(\w\S+)\s*\s*\((.*)\);/) {14431443-14441444- # Function typedefs14431443+ # Parse function typedef prototypes14441444+ if ($x =~ $typedef1 || $x =~ $typedef2) {14451445 $return_type = $1;14461446 $declaration_name = $2;14471447 my $args = $3;14481448+ $return_type =~ s/^\s+//;1448144914491450 create_parameterlist($args, ',', $file, $declaration_name);14501451
+2-2
sound/core/control.c
···1925192519261926#ifdef CONFIG_COMPAT19271927/**19281928- * snd_ctl_unregister_ioctl - de-register the device-specific compat 32bit19291929- * control-ioctls19281928+ * snd_ctl_unregister_ioctl_compat - de-register the device-specific compat19291929+ * 32bit control-ioctls19301930 * @fcn: ioctl callback function to unregister19311931 */19321932int snd_ctl_unregister_ioctl_compat(snd_kctl_ioctl_func_t fcn)
+2-1
sound/core/pcm_dmaengine.c
···356356EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_close);357357358358/**359359- * snd_dmaengine_pcm_release_chan_close - Close a dmaengine based PCM substream and release channel359359+ * snd_dmaengine_pcm_close_release_chan - Close a dmaengine based PCM360360+ * substream and release channel360361 * @substream: PCM substream361362 *362363 * Releases the DMA channel associated with the PCM substream.
+1-1
sound/core/pcm_lib.c
···490490EXPORT_SYMBOL(snd_pcm_set_ops);491491492492/**493493- * snd_pcm_sync - set the PCM sync id493493+ * snd_pcm_set_sync - set the PCM sync id494494 * @substream: the pcm substream495495 *496496 * Sets the PCM sync identifier for the card.
+2-2
sound/core/pcm_native.c
···112112EXPORT_SYMBOL_GPL(snd_pcm_stream_lock);113113114114/**115115- * snd_pcm_stream_lock - Unlock the PCM stream115115+ * snd_pcm_stream_unlock - Unlock the PCM stream116116 * @substream: PCM substream117117 *118118 * This unlocks the PCM stream that has been locked via snd_pcm_stream_lock().···595595}596596597597/**598598- * snd_pcm_hw_param_choose - choose a configuration defined by @params598598+ * snd_pcm_hw_params_choose - choose a configuration defined by @params599599 * @pcm: PCM instance600600 * @params: the hw_params instance601601 *
+2
sound/hda/ext/hdac_ext_controller.c
···148148 return NULL;149149 if (bus->idx != bus_idx)150150 return NULL;151151+ if (addr < 0 || addr > 31)152152+ return NULL;151153152154 list_for_each_entry(hlink, &bus->hlink_list, list) {153155 for (i = 0; i < HDA_MAX_CODECS; i++) {
+29-16
sound/pci/hda/hda_codec.c
···29342934 snd_hdac_leave_pm(&codec->core);29352935}2936293629372937-static int hda_codec_runtime_suspend(struct device *dev)29372937+static int hda_codec_suspend(struct device *dev)29382938{29392939 struct hda_codec *codec = dev_to_hda_codec(dev);29402940 unsigned int state;···29532953 return 0;29542954}2955295529562956-static int hda_codec_runtime_resume(struct device *dev)29562956+static int hda_codec_resume(struct device *dev)29572957{29582958 struct hda_codec *codec = dev_to_hda_codec(dev);29592959···29672967 pm_runtime_mark_last_busy(dev);29682968 return 0;29692969}29702970+29712971+static int hda_codec_runtime_suspend(struct device *dev)29722972+{29732973+ return hda_codec_suspend(dev);29742974+}29752975+29762976+static int hda_codec_runtime_resume(struct device *dev)29772977+{29782978+ return hda_codec_resume(dev);29792979+}29802980+29702981#endif /* CONFIG_PM */2971298229722983#ifdef CONFIG_PM_SLEEP29732973-static int hda_codec_force_resume(struct device *dev)29842984+static int hda_codec_pm_prepare(struct device *dev)29852985+{29862986+ return pm_runtime_suspended(dev);29872987+}29882988+29892989+static void hda_codec_pm_complete(struct device *dev)29742990{29752991 struct hda_codec *codec = dev_to_hda_codec(dev);29762976- int ret;2977299229782978- ret = pm_runtime_force_resume(dev);29792979- /* schedule jackpoll work for jack detection update */29802980- if (codec->jackpoll_interval ||29812981- (pm_runtime_suspended(dev) && hda_codec_need_resume(codec)))29822982- schedule_delayed_work(&codec->jackpoll_work,29832983- codec->jackpoll_interval);29842984- return ret;29932993+ if (pm_runtime_suspended(dev) && (codec->jackpoll_interval ||29942994+ hda_codec_need_resume(codec) || codec->forced_resume))29952995+ pm_request_resume(dev);29852996}2986299729872998static int hda_codec_pm_suspend(struct device *dev)29883999{29893000 dev->power.power_state = PMSG_SUSPEND;29902990- return pm_runtime_force_suspend(dev);30013001+ return hda_codec_suspend(dev);29913002}2992300329933004static int hda_codec_pm_resume(struct device *dev)29943005{29953006 dev->power.power_state = PMSG_RESUME;29962996- return hda_codec_force_resume(dev);30073007+ return hda_codec_resume(dev);29973008}2998300929993010static int hda_codec_pm_freeze(struct device *dev)30003011{30013012 dev->power.power_state = PMSG_FREEZE;30023002- return pm_runtime_force_suspend(dev);30133013+ return hda_codec_suspend(dev);30033014}3004301530053016static int hda_codec_pm_thaw(struct device *dev)30063017{30073018 dev->power.power_state = PMSG_THAW;30083008- return hda_codec_force_resume(dev);30193019+ return hda_codec_resume(dev);30093020}3010302130113022static int hda_codec_pm_restore(struct device *dev)30123023{30133024 dev->power.power_state = PMSG_RESTORE;30143014- return hda_codec_force_resume(dev);30253025+ return hda_codec_resume(dev);30153026}30163027#endif /* CONFIG_PM_SLEEP */3017302830183029/* referred in hda_bind.c */30193030const struct dev_pm_ops hda_codec_driver_pm = {30203031#ifdef CONFIG_PM_SLEEP30323032+ .prepare = hda_codec_pm_prepare,30333033+ .complete = hda_codec_pm_complete,30213034 .suspend = hda_codec_pm_suspend,30223035 .resume = hda_codec_pm_resume,30233036 .freeze = hda_codec_pm_freeze,
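In the reworked PM flow above, hda_codec_pm_prepare() reports an already runtime-suspended codec so the PM core can take the direct-complete path, and hda_codec_pm_complete() undoes that optimism when the codec actually needs attention. The complete-side condition can be read as the following predicate (an illustrative sketch; the helper name is made up):

```c
#include <assert.h>
#include <stdbool.h>

/* Mirrors the test in hda_codec_pm_complete(): a codec left
 * runtime-suspended across system resume is woken up only when jack
 * polling is configured, the hardware state may have changed, or the
 * driver explicitly forces a resume. */
static bool complete_should_resume(bool runtime_suspended,
                                   unsigned int jackpoll_interval,
                                   bool need_resume, bool forced_resume)
{
	return runtime_suspended &&
	       (jackpoll_interval || need_resume || forced_resume);
}
```

Requesting an asynchronous pm_request_resume() here replaces the old scheme of forcing a synchronous resume and then scheduling jack-poll work.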
+2-1
sound/pci/hda/hda_controller.h
···4141/* 24 unused */4242#define AZX_DCAPS_COUNT_LPIB_DELAY (1 << 25) /* Take LPIB as delay */4343#define AZX_DCAPS_PM_RUNTIME (1 << 26) /* runtime PM support */4444-#define AZX_DCAPS_SUSPEND_SPURIOUS_WAKEUP (1 << 27) /* Workaround for spurious wakeups after suspend */4444+/* 27 unused */4545#define AZX_DCAPS_CORBRP_SELF_CLEAR (1 << 28) /* CORBRP clears itself after reset */4646#define AZX_DCAPS_NO_MSI64 (1 << 29) /* Stick to 32-bit MSIs */4747#define AZX_DCAPS_SEPARATE_STREAM_TAG (1 << 30) /* capture and playback use separate stream tag */···143143 unsigned int align_buffer_size:1;144144 unsigned int region_requested:1;145145 unsigned int disabled:1; /* disabled by vga_switcheroo */146146+ unsigned int pm_prepared:1;146147147148 /* GTS present */148149 unsigned int gts_present:1;
+35-28
sound/pci/hda/hda_intel.c
···297297/* PCH for HSW/BDW; with runtime PM */298298/* no i915 binding for this as HSW/BDW has another controller for HDMI */299299#define AZX_DCAPS_INTEL_PCH \300300- (AZX_DCAPS_INTEL_PCH_BASE | AZX_DCAPS_PM_RUNTIME |\301301- AZX_DCAPS_SUSPEND_SPURIOUS_WAKEUP)300300+ (AZX_DCAPS_INTEL_PCH_BASE | AZX_DCAPS_PM_RUNTIME)302301303302/* HSW HDMI */304303#define AZX_DCAPS_INTEL_HASWELL \···984985 display_power(chip, false);985986}986987987987-static void __azx_runtime_resume(struct azx *chip, bool from_rt)988988+static void __azx_runtime_resume(struct azx *chip)988989{989990 struct hda_intel *hda = container_of(chip, struct hda_intel, chip);990991 struct hdac_bus *bus = azx_bus(chip);···10011002 azx_init_pci(chip);10021003 hda_intel_init_chip(chip, true);1003100410041004- if (from_rt) {10051005+ /* Avoid codec resume if runtime resume is for system suspend */10061006+ if (!chip->pm_prepared) {10051007 list_for_each_codec(codec, &chip->bus) {10061008 if (codec->relaxed_resume)10071009 continue;···10181018}1019101910201020#ifdef CONFIG_PM_SLEEP10211021+static int azx_prepare(struct device *dev)10221022+{10231023+ struct snd_card *card = dev_get_drvdata(dev);10241024+ struct azx *chip;10251025+10261026+ chip = card->private_data;10271027+ chip->pm_prepared = 1;10281028+10291029+ /* HDA controller always requires different WAKEEN for runtime suspend10301030+ * and system suspend, so don't use direct-complete here.10311031+ */10321032+ return 0;10331033+}10341034+10351035+static void azx_complete(struct device *dev)10361036+{10371037+ struct snd_card *card = dev_get_drvdata(dev);10381038+ struct azx *chip;10391039+10401040+ chip = card->private_data;10411041+ chip->pm_prepared = 0;10421042+}10431043+10211044static int azx_suspend(struct device *dev)10221045{10231046 struct snd_card *card = dev_get_drvdata(dev);···1052102910531030 chip = card->private_data;10541031 bus = azx_bus(chip);10551055- snd_power_change_state(card, SNDRV_CTL_POWER_D3hot);10561056- /* An ugly workaround: direct call of __azx_runtime_suspend() and10571057- * __azx_runtime_resume() for old Intel platforms that suffer from10581058- * spurious wakeups after S3 suspend10591059- */10601060- if (chip->driver_caps & AZX_DCAPS_SUSPEND_SPURIOUS_WAKEUP)10611061- __azx_runtime_suspend(chip);10621062- else10631063- pm_runtime_force_suspend(dev);10321032+ __azx_runtime_suspend(chip);10641033 if (bus->irq >= 0) {10651034 free_irq(bus->irq, chip);10661035 bus->irq = -1;···10811066 if (azx_acquire_irq(chip, 1) < 0)10821067 return -EIO;1083106810841084- if (chip->driver_caps & AZX_DCAPS_SUSPEND_SPURIOUS_WAKEUP)10851085- __azx_runtime_resume(chip, false);10861086- else10871087- pm_runtime_force_resume(dev);10881088- snd_power_change_state(card, SNDRV_CTL_POWER_D0);10691069+ __azx_runtime_resume(chip);1089107010901071 trace_azx_resume(chip);10911072 return 0;···11291118 chip = card->private_data;1130111911311120 /* enable controller wake up event */11321132- if (snd_power_get_state(card) == SNDRV_CTL_POWER_D0) {11331133- azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) |11341134- STATESTS_INT_MASK);11351135- }11211121+ azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) | STATESTS_INT_MASK);1136112211371123 __azx_runtime_suspend(chip);11381124 trace_azx_runtime_suspend(chip);···11401132{11411133 struct snd_card *card = dev_get_drvdata(dev);11421134 struct azx *chip;11431143- bool from_rt = snd_power_get_state(card) == SNDRV_CTL_POWER_D0;1144113511451136 if (!azx_is_pm_ready(card))11461137 return 0;11471138 chip = card->private_data;11481148- __azx_runtime_resume(chip, from_rt);11391139+ __azx_runtime_resume(chip);1149114011501141 /* disable controller Wake Up event*/11511151- if (from_rt) {11521152- azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) &11531153- ~STATESTS_INT_MASK);11541154- }11421142+ azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) & ~STATESTS_INT_MASK);1155114311561144 trace_azx_runtime_resume(chip);11571145 return 0;···11811177static const struct dev_pm_ops azx_pm = {11821178 SET_SYSTEM_SLEEP_PM_OPS(azx_suspend, azx_resume)11831179#ifdef CONFIG_PM_SLEEP11801180+ .prepare = azx_prepare,11811181+ .complete = azx_complete,11841182 .freeze_noirq = azx_freeze_noirq,11851183 .thaw_noirq = azx_thaw_noirq,11861184#endif···2362235623632357 if (azx_has_pm_runtime(chip)) {23642358 pm_runtime_use_autosuspend(&pci->dev);23592359+ pm_runtime_allow(&pci->dev);23652360 pm_runtime_put_autosuspend(&pci->dev);23662361 }23672362
···401401 struct snd_interval *chan = hw_param_interval(params,402402 SNDRV_PCM_HW_PARAM_CHANNELS);403403 struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT);404404- struct snd_soc_dpcm *dpcm = container_of(405405- params, struct snd_soc_dpcm, hw_params);406406- struct snd_soc_dai_link *fe_dai_link = dpcm->fe->dai_link;407407- struct snd_soc_dai_link *be_dai_link = dpcm->be->dai_link;404404+ struct snd_soc_dpcm *dpcm, *rtd_dpcm = NULL;405405+406406+ /*407407+ * The following loop will be called only for playback stream408408+ * In this platform, there is only one playback device on every SSP409409+ */410410+ for_each_dpcm_fe(rtd, SNDRV_PCM_STREAM_PLAYBACK, dpcm) {411411+ rtd_dpcm = dpcm;412412+ break;413413+ }414414+415415+ /*416416+ * This following loop will be called only for capture stream417417+ * In this platform, there is only one capture device on every SSP418418+ */419419+ for_each_dpcm_fe(rtd, SNDRV_PCM_STREAM_CAPTURE, dpcm) {420420+ rtd_dpcm = dpcm;421421+ break;422422+ }423423+424424+ if (!rtd_dpcm)425425+ return -EINVAL;426426+427427+ /*428428+ * The above 2 loops are mutually exclusive based on the stream direction,429429+ * thus rtd_dpcm variable will never be overwritten430430+ */408431409432 /*410433 * The ADSP will convert the FE rate to 48k, stereo, 24 bit411434 */412412- if (!strcmp(fe_dai_link->name, "Kbl Audio Port") ||413413- !strcmp(fe_dai_link->name, "Kbl Audio Headset Playback") ||414414- !strcmp(fe_dai_link->name, "Kbl Audio Capture Port")) {435435+ if (!strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Port") ||436436+ !strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Headset Playback") ||437437+ !strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Capture Port")) {415438 rate->min = rate->max = 48000;416439 chan->min = chan->max = 2;417440 snd_mask_none(fmt);···444421 * The speaker on the SSP0 supports S16_LE and not S24_LE.445422 * thus changing the mask here446423 */447447- if (!strcmp(be_dai_link->name, "SSP0-Codec"))424424+ if (!strcmp(rtd_dpcm->be->dai_link->name, "SSP0-Codec"))448425 snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S16_LE);449426450427 return 0;
+6-3
sound/soc/intel/catpt/dsp.c
···267267 reg, (reg & CATPT_ISD_DCPWM),268268 500, 10000);269269 if (ret) {270270- dev_err(cdev->dev, "await WAITI timeout\n");271271- mutex_unlock(&cdev->clk_mutex);272272- return ret;270270+ dev_warn(cdev->dev, "await WAITI timeout\n");271271+ /* no signal - only high clock selection allowed */272272+ if (lp) {273273+ mutex_unlock(&cdev->clk_mutex);274274+ return 0;275275+ }273276 }274277 }275278
+10
sound/soc/intel/catpt/pcm.c
···667667 break;668668 }669669670670+ /* see if this is a new configuration */671671+ if (!memcmp(&cdev->devfmt[devfmt.iface], &devfmt, sizeof(devfmt)))672672+ return 0;673673+674674+ pm_runtime_get_sync(cdev->dev);675675+670676 ret = catpt_ipc_set_device_format(cdev, &devfmt);677677+678678+ pm_runtime_mark_last_busy(cdev->dev);679679+ pm_runtime_put_autosuspend(cdev->dev);680680+671681 if (ret)672682 return CATPT_IPC_ERROR(ret);673683
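The catpt pcm.c hunk above adds two things: an early return when the requested device format equals the cached one, and a pm_runtime_get_sync()/put_autosuspend() pair around the IPC. The cache test itself is just a memcmp against the stored format; a sketch with an illustrative, trimmed-down struct (not the driver's real device-format type):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NUM_IFACES 2

/* Illustrative stand-in for the driver's per-interface format record. */
struct devfmt {
	uint32_t iface;     /* index into the cache, as in cdev->devfmt[] */
	uint32_t mclk;
	uint32_t channels;
};

/* Mirrors the early return: only a format that differs from what was
 * last programmed on that interface is worth an IPC round-trip (and
 * the runtime-PM wakeup that the IPC requires). */
static bool devfmt_needs_update(const struct devfmt cache[NUM_IFACES],
                                const struct devfmt *new_fmt)
{
	return memcmp(&cache[new_fmt->iface], new_fmt, sizeof(*new_fmt)) != 0;
}
```

Skipping the IPC for an unchanged configuration also avoids waking the DSP at all, which is the point of pairing the check with the runtime-PM calls.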
···23412341}2342234223432343/**23442344- * snd_soc_unregister_dai - Unregister DAIs from the ASoC core23442344+ * snd_soc_unregister_dais - Unregister DAIs from the ASoC core23452345 *23462346 * @component: The component for which the DAIs should be unregistered23472347 */
+1-1
sound/soc/soc-dapm.c
···12761276}1277127712781278/**12791279- * snd_soc_dapm_get_connected_widgets - query audio path and it's widgets.12791279+ * snd_soc_dapm_dai_get_connected_widgets - query audio path and it's widgets.12801280 * @dai: the soc DAI.12811281 * @stream: stream direction.12821282 * @list: list of active widgets for this stream.
+5
sound/soc/sof/loader.c
···118118 case SOF_IPC_EXT_CC_INFO:119119 ret = get_cc_info(sdev, ext_hdr);120120 break;121121+ case SOF_IPC_EXT_UNUSED:122122+ case SOF_IPC_EXT_PROBE_INFO:123123+ case SOF_IPC_EXT_USER_ABI_INFO:124124+ /* They are supported but we don't do anything here */125125+ break;121126 default:122127 dev_warn(sdev->dev, "warning: unknown ext header type %d size 0x%x\n",123128 ext_hdr->type, ext_hdr->hdr.size);
+6
sound/usb/pcm.c
···336336 switch (subs->stream->chip->usb_id) {337337 case USB_ID(0x0763, 0x2030): /* M-Audio Fast Track C400 */338338 case USB_ID(0x0763, 0x2031): /* M-Audio Fast Track C600 */339339+ case USB_ID(0x22f0, 0x0006): /* Allen&Heath Qu-16 */339340 ep = 0x81;340341 ifnum = 3;341342 goto add_sync_ep_from_ifnum;···346345 ifnum = 2;347346 goto add_sync_ep_from_ifnum;348347 case USB_ID(0x2466, 0x8003): /* Fractal Audio Axe-Fx II */348348+ case USB_ID(0x0499, 0x172a): /* Yamaha MODX */349349 ep = 0x86;350350 ifnum = 2;351351 goto add_sync_ep_from_ifnum;352352 case USB_ID(0x2466, 0x8010): /* Fractal Audio Axe-Fx III */353353 ep = 0x81;354354+ ifnum = 2;355355+ goto add_sync_ep_from_ifnum;356356+ case USB_ID(0x1686, 0xf029): /* Zoom UAC-2 */357357+ ep = 0x82;354358 ifnum = 2;355359 goto add_sync_ep_from_ifnum;356360 case USB_ID(0x1397, 0x0001): /* Behringer UFX1604 */
+1
sound/usb/quirks.c
···18001800 case 0x278b: /* Rotel? */18011801 case 0x292b: /* Gustard/Ess based devices */18021802 case 0x2ab6: /* T+A devices */18031803+ case 0x3353: /* Khadas devices */18031804 case 0x3842: /* EVGA */18041805 case 0xc502: /* HiBy devices */18051806 if (fp->dsd_raw)
+25
tools/arch/arm64/include/uapi/asm/kvm.h
···159159struct kvm_arch_memory_slot {160160};161161162162+/*163163+ * PMU filter structure. Describe a range of events with a particular164164+ * action. To be used with KVM_ARM_VCPU_PMU_V3_FILTER.165165+ */166166+struct kvm_pmu_event_filter {167167+ __u16 base_event;168168+ __u16 nevents;169169+170170+#define KVM_PMU_EVENT_ALLOW 0171171+#define KVM_PMU_EVENT_DENY 1172172+173173+ __u8 action;174174+ __u8 pad[3];175175+};176176+162177/* for KVM_GET/SET_VCPU_EVENTS */163178struct kvm_vcpu_events {164179 struct {···257242#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL 0258243#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL 1259244#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_REQUIRED 2245245+246246+/*247247+ * Only two states can be presented by the host kernel:248248+ * - NOT_REQUIRED: the guest doesn't need to do anything249249+ * - NOT_AVAIL: the guest isn't mitigated (it can still use SSBS if available)250250+ *251251+ * All the other values are deprecated. The host still accepts all252252+ * values (they are ABI), but will narrow them to the above two.253253+ */260254#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2 KVM_REG_ARM_FW_REG(2)261255#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL 0262256#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN 1···353329#define KVM_ARM_VCPU_PMU_V3_CTRL 0354330#define KVM_ARM_VCPU_PMU_V3_IRQ 0355331#define KVM_ARM_VCPU_PMU_V3_INIT 1332332+#define KVM_ARM_VCPU_PMU_V3_FILTER 2356333#define KVM_ARM_VCPU_TIMER_CTRL 1357334#define KVM_ARM_VCPU_TIMER_IRQ_VTIMER 0358335#define KVM_ARM_VCPU_TIMER_IRQ_PTIMER 1
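The new kvm_pmu_event_filter above describes one contiguous range of PMU events plus an action. How such a filter applies to a given event can be sketched like this (an illustrative helper; the real lookup lives in the arm64 KVM PMU code and also handles the no-filter default):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define KVM_PMU_EVENT_ALLOW 0
#define KVM_PMU_EVENT_DENY  1

/* Layout from the header above (tools copy of asm/kvm.h). */
struct kvm_pmu_event_filter {
	uint16_t base_event;
	uint16_t nevents;
	uint8_t  action;
	uint8_t  pad[3];
};

/* Does this filter entry cover @event, and if so, is the event allowed?
 * Events outside the [base_event, base_event + nevents) range keep the
 * caller-provided default. */
static bool pmu_event_allowed(const struct kvm_pmu_event_filter *f,
                              uint16_t event, bool dflt)
{
	if (event < f->base_event || event >= f->base_event + f->nevents)
		return dflt;
	return f->action == KVM_PMU_EVENT_ALLOW;
}
```

Userspace installs such ranges through the KVM_ARM_VCPU_PMU_V3_FILTER attribute that the same hunk adds.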
+1-1
tools/arch/s390/include/uapi/asm/sie.h
···2929 { 0x13, "SIGP conditional emergency signal" }, \3030 { 0x15, "SIGP sense running" }, \3131 { 0x16, "SIGP set multithreading"}, \3232- { 0x17, "SIGP store additional status ait address"}3232+ { 0x17, "SIGP store additional status at address"}33333434#define icpt_prog_codes \3535 { 0x0001, "Prog Operation" }, \
···5454#endif55555656#ifdef CONFIG_X86_645757-#ifdef CONFIG_PARAVIRT5757+#ifdef CONFIG_PARAVIRT_XXL5858/* Paravirtualized systems may not have PSE or PGE available */5959#define NEED_PSE 06060#define NEED_PGE 0
+20
tools/arch/x86/include/uapi/asm/kvm.h
···192192 __u32 indices[0];193193};194194195195+/* Maximum size of any access bitmap in bytes */196196+#define KVM_MSR_FILTER_MAX_BITMAP_SIZE 0x600197197+198198+/* for KVM_X86_SET_MSR_FILTER */199199+struct kvm_msr_filter_range {200200+#define KVM_MSR_FILTER_READ (1 << 0)201201+#define KVM_MSR_FILTER_WRITE (1 << 1)202202+ __u32 flags;203203+ __u32 nmsrs; /* number of msrs in bitmap */204204+ __u32 base; /* MSR index the bitmap starts at */205205+ __u8 *bitmap; /* a 1 bit allows the operations in flags, 0 denies */206206+};207207+208208+#define KVM_MSR_FILTER_MAX_RANGES 16209209+struct kvm_msr_filter {210210+#define KVM_MSR_FILTER_DEFAULT_ALLOW (0 << 0)211211+#define KVM_MSR_FILTER_DEFAULT_DENY (1 << 0)212212+ __u32 flags;213213+ struct kvm_msr_filter_range ranges[KVM_MSR_FILTER_MAX_RANGES];214214+};195215196216struct kvm_cpuid_entry {197217 __u32 function;
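Each kvm_msr_filter_range above is a base MSR index plus a per-MSR bitmap in which a set bit permits the operations named in flags. A userspace sketch of testing one range (illustrative only; whether an MSR matched by no range is allowed is decided by kvm_msr_filter.flags, which this helper does not model):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define KVM_MSR_FILTER_READ  (1 << 0)
#define KVM_MSR_FILTER_WRITE (1 << 1)

/* One bit per MSR starting at @base; a 1 bit allows the operations in
 * @flags, a 0 bit denies them. MSRs outside [base, base + nmsrs) are
 * simply not matched by this range. */
static bool msr_range_allows(uint32_t flags, uint32_t nmsrs, uint32_t base,
                             const uint8_t *bitmap, uint32_t msr, uint32_t op)
{
	uint32_t idx;

	if (!(flags & op) || msr < base || msr >= base + nmsrs)
		return false;
	idx = msr - base;
	return bitmap[idx / 8] & (1u << (idx % 8));
}
```

A full filter (KVM_X86_SET_MSR_FILTER) carries up to KVM_MSR_FILTER_MAX_RANGES such ranges, each with its own bitmap of at most KVM_MSR_FILTER_MAX_BITMAP_SIZE bytes.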
···4747#ifndef noinline4848#define noinline4949#endif5050-#ifndef __no_tail_call5151-#define __no_tail_call5252-#endif53505451/* Are two types/vars the same type (ignoring qualifiers)? */5552#ifndef __same_type
+3-1
tools/include/uapi/asm-generic/unistd.h
···857857__SYSCALL(__NR_pidfd_getfd, sys_pidfd_getfd)858858#define __NR_faccessat2 439859859__SYSCALL(__NR_faccessat2, sys_faccessat2)860860+#define __NR_process_madvise 440861861+__SYSCALL(__NR_process_madvise, sys_process_madvise)860862861863#undef __NR_syscalls862862-#define __NR_syscalls 440864864+#define __NR_syscalls 441863865864866/*865867 * 32 bit systems traditionally used different
+56-3
tools/include/uapi/drm/i915_drm.h
···619619 */620620#define I915_PARAM_PERF_REVISION 54621621622622+/* Query whether DRM_I915_GEM_EXECBUFFER2 supports supplying an array of623623+ * timeline syncobj through drm_i915_gem_execbuffer_ext_timeline_fences. See624624+ * I915_EXEC_USE_EXTENSIONS.625625+ */626626+#define I915_PARAM_HAS_EXEC_TIMELINE_FENCES 55627627+622628/* Must be kept compact -- no holes and well documented */623629624630typedef struct drm_i915_getparam {···10521046 __u32 flags;10531047};1054104810491049+/**10501050+ * See drm_i915_gem_execbuffer_ext_timeline_fences.10511051+ */10521052+#define DRM_I915_GEM_EXECBUFFER_EXT_TIMELINE_FENCES 010531053+10541054+/**10551055+ * This structure describes an array of drm_syncobj and associated points for10561056+ * timeline variants of drm_syncobj. It is invalid to append this structure to10571057+ * the execbuf if I915_EXEC_FENCE_ARRAY is set.10581058+ */10591059+struct drm_i915_gem_execbuffer_ext_timeline_fences {10601060+ struct i915_user_extension base;10611061+10621062+ /**10631063+ * Number of element in the handles_ptr & value_ptr arrays.10641064+ */10651065+ __u64 fence_count;10661066+10671067+ /**10681068+ * Pointer to an array of struct drm_i915_gem_exec_fence of length10691069+ * fence_count.10701070+ */10711071+ __u64 handles_ptr;10721072+10731073+ /**10741074+ * Pointer to an array of u64 values of length fence_count. Values10751075+ * must be 0 for a binary drm_syncobj. A Value of 0 for a timeline10761076+ * drm_syncobj is invalid as it turns a drm_syncobj into a binary one.10771077+ */10781078+ __u64 values_ptr;10791079+};10801080+10551081struct drm_i915_gem_execbuffer2 {10561082 /**10571083 * List of gem_exec_object2 structs···11001062 __u32 num_cliprects;11011063 /**11021064 * This is a struct drm_clip_rect *cliprects if I915_EXEC_FENCE_ARRAY11031103- * is not set. If I915_EXEC_FENCE_ARRAY is set, then this is a11041104- * struct drm_i915_gem_exec_fence *fences.10651065+ * & I915_EXEC_USE_EXTENSIONS are not set.10661066+ *10671067+ * If I915_EXEC_FENCE_ARRAY is set, then this is a pointer to an array10681068+ * of struct drm_i915_gem_exec_fence and num_cliprects is the length10691069+ * of the array.10701070+ *10711071+ * If I915_EXEC_USE_EXTENSIONS is set, then this is a pointer to a10721072+ * single struct i915_user_extension and num_cliprects is 0.11051073 */11061074 __u64 cliprects_ptr;11071075#define I915_EXEC_RING_MASK (0x3f)···12251181 */12261182#define I915_EXEC_FENCE_SUBMIT (1 << 20)1227118312281228-#define __I915_EXEC_UNKNOWN_FLAGS (-(I915_EXEC_FENCE_SUBMIT << 1))11841184+/*11851185+ * Setting I915_EXEC_USE_EXTENSIONS implies that11861186+ * drm_i915_gem_execbuffer2.cliprects_ptr is treated as a pointer to an linked11871187+ * list of i915_user_extension. Each i915_user_extension node is the base of a11881188+ * larger structure. The list of supported structures are listed in the11891189+ * drm_i915_gem_execbuffer_ext enum.11901190+ */11911191+#define I915_EXEC_USE_EXTENSIONS (1 << 21)11921192+11931193+#define __I915_EXEC_UNKNOWN_FLAGS (-(I915_EXEC_USE_EXTENSIONS << 1))1229119412301195#define I915_EXEC_CONTEXT_ID_MASK (0xffffffff)12311196#define i915_execbuffer2_set_context_id(eb2, context) \
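With I915_EXEC_USE_EXTENSIONS, cliprects_ptr carries a chain of i915_user_extension nodes, each embedded at the start of a larger struct. A sketch of building a single timeline-fences node (the i915_user_extension layout is reproduced from the rest of i915_drm.h and should be treated as an assumption here, as it is not part of the hunk above):

```c
#include <assert.h>
#include <stdint.h>

/* Base node of every execbuffer2 extension; layout as defined elsewhere
 * in i915_drm.h (assumption for this sketch). */
struct i915_user_extension {
	uint64_t next_extension; /* user pointer to next node; 0 terminates */
	uint32_t name;
	uint32_t flags;          /* must be zero */
	uint32_t rsvd[4];        /* must be zero */
};

#define DRM_I915_GEM_EXECBUFFER_EXT_TIMELINE_FENCES 0

struct drm_i915_gem_execbuffer_ext_timeline_fences {
	struct i915_user_extension base;
	uint64_t fence_count;
	uint64_t handles_ptr;    /* array of drm_i915_gem_exec_fence */
	uint64_t values_ptr;     /* timeline points, one per fence */
};

/* Build a one-node list: execbuffer2.cliprects_ptr would then point at
 * &ext.base with num_cliprects == 0 and I915_EXEC_USE_EXTENSIONS set.
 * Unnamed fields (next_extension, flags, rsvd) are zero-initialized. */
static struct drm_i915_gem_execbuffer_ext_timeline_fences
make_timeline_fences_ext(uint64_t count, uint64_t handles, uint64_t values)
{
	return (struct drm_i915_gem_execbuffer_ext_timeline_fences){
		.base.name = DRM_I915_GEM_EXECBUFFER_EXT_TIMELINE_FENCES,
		.fence_count = count,
		.handles_ptr = handles,
		.values_ptr = values,
	};
}
```

Note the hunk's rule that this extension and I915_EXEC_FENCE_ARRAY are mutually exclusive: the two features share cliprects_ptr.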
···1616#define MS_REMOUNT 32 /* Alter flags of a mounted FS */1717#define MS_MANDLOCK 64 /* Allow mandatory locks on an FS */1818#define MS_DIRSYNC 128 /* Directory modifications are synchronous */1919+#define MS_NOSYMFOLLOW 256 /* Do not follow symlinks */1920#define MS_NOATIME 1024 /* Do not update access times. */2021#define MS_NODIRATIME 2048 /* Do not update directory access times */2122#define MS_BIND 4096
···146146147147/* Set event fd for config interrupt*/148148#define VHOST_VDPA_SET_CONFIG_CALL _IOW(VHOST_VIRTIO, 0x77, int)149149+150150+/* Get the valid iova range */151151+#define VHOST_VDPA_GET_IOVA_RANGE _IOR(VHOST_VIRTIO, 0x78, \152152+ struct vhost_vdpa_iova_range)149153#endif
arch/x86/entry/syscalls/syscall_64.tbl

···
437	common	openat2			sys_openat2
438	common	pidfd_getfd		sys_pidfd_getfd
439	common	faccessat2		sys_faccessat2
+440	common	process_madvise		sys_process_madvise

#
-# x32-specific system call numbers start at 512 to avoid cache impact
-# for native 64-bit operation. The __x32_compat_sys stubs are created
-# on-the-fly for compat_sys_*() compatibility system calls if X86_X32
-# is defined.
+# Due to a historical design error, certain syscalls are numbered differently
+# in x32 as compared to native x86_64.  These syscalls have numbers 512-547.
+# Do not add new syscalls to this range.  Numbers 548 and above are available
+# for non-x32 use.
#
512	x32	rt_sigaction		compat_sys_rt_sigaction
513	x32	rt_sigreturn		compat_sys_x32_rt_sigreturn
···
545	x32	execveat		compat_sys_execveat
546	x32	preadv2			compat_sys_preadv64v2
547	x32	pwritev2		compat_sys_pwritev64v2
+# This is the end of the legacy x32 range.  Numbers 548 and above are
+# not special and are not to be used for x32-specific syscalls.
+9-6
tools/perf/builtin-trace.c
···
	err = 0;

	if (lists[0]) {
-		struct option o = OPT_CALLBACK('e', "event", &trace->evlist, "event",
-					       "event selector. use 'perf list' to list available events",
-					       parse_events_option);
+		struct option o = {
+			.value = &trace->evlist,
+		};
		err = parse_events_option(&o, lists[0], 0);
	}
out:
···
{
	struct trace *trace = opt->value;

-	if (!list_empty(&trace->evlist->core.entries))
-		return parse_cgroups(opt, str, unset);
-
+	if (!list_empty(&trace->evlist->core.entries)) {
+		struct option o = {
+			.value = &trace->evlist,
+		};
+		return parse_cgroups(&o, str, unset);
+	}
	trace->cgroup = evlist__findnew_cgroup(trace->evlist, str);

	return 0;
tools/perf/util/symbol.c

···
	}
}

+void dso__delete_symbol(struct dso *dso, struct symbol *sym)
+{
+	rb_erase_cached(&sym->rb_node, &dso->symbols);
+	symbol__delete(sym);
+	dso__reset_find_symbol_cache(dso);
+}
+
struct symbol *dso__find_symbol(struct dso *dso, u64 addr)
{
	if (dso->last_find_result.addr != addr || dso->last_find_result.symbol == NULL) {
+2
tools/perf/util/symbol.h
···

void dso__insert_symbol(struct dso *dso,
			struct symbol *sym);
+void dso__delete_symbol(struct dso *dso,
+			struct symbol *sym);

struct symbol *dso__find_symbol(struct dso *dso, u64 addr);
struct symbol *dso__find_symbol_by_name(struct dso *dso, const char *name);
+1-2
tools/testing/kunit/kunit_parser.py
···
def raw_output(kernel_output):
	for line in kernel_output:
		print(line)
-		yield line

DIVIDER = '=' * 60
···
		return None
	test_suite.name = name
	expected_test_case_num = parse_subtest_plan(lines)
-	if not expected_test_case_num:
+	if expected_test_case_num is None:
		return None
	while expected_test_case_num > 0:
		test_case = parse_test_case(lines)
+25-7
tools/testing/kunit/kunit_tool_test.py
···
	print_mock = mock.patch('builtins.print').start()
	result = kunit_parser.parse_run_tests(
		kunit_parser.isolate_kunit_output(file.readlines()))
-	print_mock.assert_any_call(StrContains("no kunit output detected"))
+	print_mock.assert_any_call(StrContains('no tests run!'))
	print_mock.stop()
	file.close()
···
		'test_data/test_config_printk_time.log')
	with open(prefix_log) as file:
		result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual('kunit-resource-test', result.suites[0].name)
+		self.assertEqual(
+			kunit_parser.TestStatus.SUCCESS,
+			result.status)
+		self.assertEqual('kunit-resource-test', result.suites[0].name)

def test_ignores_multiple_prefixes(self):
	prefix_log = get_absolute_path(
		'test_data/test_multiple_prefixes.log')
	with open(prefix_log) as file:
		result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual('kunit-resource-test', result.suites[0].name)
+		self.assertEqual(
+			kunit_parser.TestStatus.SUCCESS,
+			result.status)
+		self.assertEqual('kunit-resource-test', result.suites[0].name)

def test_prefix_mixed_kernel_output(self):
	mixed_prefix_log = get_absolute_path(
		'test_data/test_interrupted_tap_output.log')
	with open(mixed_prefix_log) as file:
		result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual('kunit-resource-test', result.suites[0].name)
+		self.assertEqual(
+			kunit_parser.TestStatus.SUCCESS,
+			result.status)
+		self.assertEqual('kunit-resource-test', result.suites[0].name)

def test_prefix_poundsign(self):
	pound_log = get_absolute_path('test_data/test_pound_sign.log')
	with open(pound_log) as file:
		result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual('kunit-resource-test', result.suites[0].name)
+		self.assertEqual(
+			kunit_parser.TestStatus.SUCCESS,
+			result.status)
+		self.assertEqual('kunit-resource-test', result.suites[0].name)

def test_kernel_panic_end(self):
	panic_log = get_absolute_path('test_data/test_kernel_panic_interrupt.log')
	with open(panic_log) as file:
		result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual('kunit-resource-test', result.suites[0].name)
+		self.assertEqual(
+			kunit_parser.TestStatus.TEST_CRASHED,
+			result.status)
+		self.assertEqual('kunit-resource-test', result.suites[0].name)

def test_pound_no_prefix(self):
	pound_log = get_absolute_path('test_data/test_pound_no_prefix.log')
	with open(pound_log) as file:
		result = kunit_parser.parse_run_tests(file.readlines())
-		self.assertEqual('kunit-resource-test', result.suites[0].name)
+		self.assertEqual(
+			kunit_parser.TestStatus.SUCCESS,
+			result.status)
+		self.assertEqual('kunit-resource-test', result.suites[0].name)

class KUnitJsonTest(unittest.TestCase):
···
printk: console [mc-1] enabled
random: get_random_bytes called from init_oops_id+0x35/0x40 with crng_init=0
TAP version 14
+1..3
# Subtest: kunit-resource-test
1..5
ok 1 - kunit_resource_test_init_resources
+1
tools/testing/kunit/test_data/test_pound_sign.log
···
[ 0.060000] printk: console [mc-1] enabled
[ 0.060000] random: get_random_bytes called from init_oops_id+0x35/0x40 with crng_init=0
[ 0.060000] TAP version 14
+[ 0.060000] 1..3
[ 0.060000] # Subtest: kunit-resource-test
[ 0.060000] 1..5
[ 0.060000] ok 1 - kunit_resource_test_init_resources
tools/testing/selftests/clone3/clone3_set_tid.c

···
	test_clone3_supported();

	EXPECT_EQ(getuid(), 0)
-		XFAIL(return, "Skipping all tests as non-root\n");
+		SKIP(return, "Skipping all tests as non-root");

	memset(&set_tid, 0, sizeof(set_tid));
+4-4
tools/testing/selftests/core/close_range_test.c
···
	fd = open("/dev/null", O_RDONLY | O_CLOEXEC);
	ASSERT_GE(fd, 0) {
		if (errno == ENOENT)
-			XFAIL(return, "Skipping test since /dev/null does not exist");
+			SKIP(return, "Skipping test since /dev/null does not exist");
	}

	open_fds[i] = fd;
···

	EXPECT_EQ(-1, sys_close_range(open_fds[0], open_fds[100], -1)) {
		if (errno == ENOSYS)
-			XFAIL(return, "close_range() syscall not supported");
+			SKIP(return, "close_range() syscall not supported");
	}

	EXPECT_EQ(0, sys_close_range(open_fds[0], open_fds[50], 0));
···
	fd = open("/dev/null", O_RDONLY | O_CLOEXEC);
	ASSERT_GE(fd, 0) {
		if (errno == ENOENT)
-			XFAIL(return, "Skipping test since /dev/null does not exist");
+			SKIP(return, "Skipping test since /dev/null does not exist");
	}

	open_fds[i] = fd;
···
	fd = open("/dev/null", O_RDONLY | O_CLOEXEC);
	ASSERT_GE(fd, 0) {
		if (errno == ENOENT)
-			XFAIL(return, "Skipping test since /dev/null does not exist");
+			SKIP(return, "Skipping test since /dev/null does not exist");
	}

	open_fds[i] = fd;
tools/testing/selftests/filesystems/binderfs/binderfs_test.c

···
	ret = mount(NULL, binderfs_mntpt, "binder", 0, 0);
	EXPECT_EQ(ret, 0) {
		if (errno == ENODEV)
-			XFAIL(goto out, "binderfs missing");
+			SKIP(goto out, "binderfs missing");
		TH_LOG("%s - Failed to mount binderfs", strerror(errno));
		goto rmdir;
	}
···
TEST(binderfs_test_privileged)
{
	if (geteuid() != 0)
-		XFAIL(return, "Tests are not run as root. Skipping privileged tests");
+		SKIP(return, "Tests are not run as root. Skipping privileged tests");

	if (__do_binderfs_test(_metadata))
-		XFAIL(return, "The Android binderfs filesystem is not available");
+		SKIP(return, "The Android binderfs filesystem is not available");
}

TEST(binderfs_test_unprivileged)
···
	ret = wait_for_pid(pid);
	if (ret) {
		if (ret == 2)
-			XFAIL(return, "The Android binderfs filesystem is not available");
+			SKIP(return, "The Android binderfs filesystem is not available");
		ASSERT_EQ(ret, 0) {
			TH_LOG("wait_for_pid() failed");
		}
tools/testing/selftests/filesystems/epoll/epoll_wakeup_test.c

···
	close(ctx.epfd);
}

+struct epoll61_ctx {
+	int epfd;
+	int evfd;
+};
+
+static void *epoll61_write_eventfd(void *ctx_)
+{
+	struct epoll61_ctx *ctx = ctx_;
+	int64_t l = 1;
+
+	usleep(10950);
+	write(ctx->evfd, &l, sizeof(l));
+	return NULL;
+}
+
+static void *epoll61_epoll_with_timeout(void *ctx_)
+{
+	struct epoll61_ctx *ctx = ctx_;
+	struct epoll_event events[1];
+	int n;
+
+	n = epoll_wait(ctx->epfd, events, 1, 11);
+	/*
+	 * If epoll returned the eventfd, write on the eventfd to wake up the
+	 * blocking poller.
+	 */
+	if (n == 1) {
+		int64_t l = 1;
+
+		write(ctx->evfd, &l, sizeof(l));
+	}
+	return NULL;
+}
+
+static void *epoll61_blocking_epoll(void *ctx_)
+{
+	struct epoll61_ctx *ctx = ctx_;
+	struct epoll_event events[1];
+
+	epoll_wait(ctx->epfd, events, 1, -1);
+	return NULL;
+}
+
+TEST(epoll61)
+{
+	struct epoll61_ctx ctx;
+	struct epoll_event ev;
+	int i, r;
+
+	ctx.epfd = epoll_create1(0);
+	ASSERT_GE(ctx.epfd, 0);
+	ctx.evfd = eventfd(0, EFD_NONBLOCK);
+	ASSERT_GE(ctx.evfd, 0);
+
+	ev.events = EPOLLIN | EPOLLET | EPOLLERR | EPOLLHUP;
+	ev.data.ptr = NULL;
+	r = epoll_ctl(ctx.epfd, EPOLL_CTL_ADD, ctx.evfd, &ev);
+	ASSERT_EQ(r, 0);
+
+	/*
+	 * We are testing a race. Repeat the test case 1000 times to make it
+	 * more likely to fail in case of a bug.
+	 */
+	for (i = 0; i < 1000; i++) {
+		pthread_t threads[3];
+		int n;
+
+		/*
+		 * Start 3 threads:
+		 * Thread 1 sleeps for 10.9ms and writes to the eventfd.
+		 * Thread 2 calls epoll with a timeout of 11ms.
+		 * Thread 3 calls epoll with a timeout of -1.
+		 *
+		 * The eventfd write by Thread 1 should either wake up Thread 2
+		 * or Thread 3. If it wakes up Thread 2, Thread 2 writes on the
+		 * eventfd to wake up Thread 3.
+		 *
+		 * If no events are missed, all three threads should eventually
+		 * be joinable.
+		 */
+		ASSERT_EQ(pthread_create(&threads[0], NULL,
+					 epoll61_write_eventfd, &ctx), 0);
+		ASSERT_EQ(pthread_create(&threads[1], NULL,
+					 epoll61_epoll_with_timeout, &ctx), 0);
+		ASSERT_EQ(pthread_create(&threads[2], NULL,
+					 epoll61_blocking_epoll, &ctx), 0);
+
+		for (n = 0; n < ARRAY_SIZE(threads); ++n)
+			ASSERT_EQ(pthread_join(threads[n], NULL), 0);
+	}
+
+	close(ctx.epfd);
+	close(ctx.evfd);
+}
+
TEST_HARNESS_MAIN
tools/testing/selftests/ftrace/test.d/functions

···
	ping $LOCALHOST -c 1 || sleep .001 || usleep 1 || sleep 1
}

+# The fork function in the kernel was renamed from "_do_fork" to
+# "kernel_clone". As older tests should still work with older kernels
+# as well as newer kernels, check which version of fork is used on this
+# kernel so that the tests can use the fork function for the running kernel.
+FUNCTION_FORK=`(if grep '\bkernel_clone\b' /proc/kallsyms > /dev/null; then
+	echo kernel_clone; else echo '_do_fork'; fi)`
+
# Since probe event command may include backslash, explicitly use printf "%s"
# to NOT interpret it.
ftrace_errlog_check() { # err-prefix command-with-error-pos-by-^ command-file
tools/testing/selftests/pidfd/pidfd_test.c

···
		ksft_exit_fail_msg("%s test: Failed to recycle pid %d\n",
				   test_name, PID_RECYCLE);
	case PIDFD_SKIP:
-		ksft_print_msg("%s test: Skipping test\n", test_name);
+		ksft_test_result_skip("%s test: Skipping test\n", test_name);
		ret = 0;
		break;
	case PIDFD_XFAIL:
-1
tools/testing/selftests/proc/proc-loadavg-001.c
···
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */
/* Test that /proc/loadavg correctly reports last pid in pid namespace. */
-#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <sys/types.h>
-1
tools/testing/selftests/proc/proc-self-syscall.c
···
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */
-#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/types.h>
-1
tools/testing/selftests/proc/proc-uptime-002.c
···
 */
// Test that values in /proc/uptime increment monotonically
// while shifting across CPUs.
-#define _GNU_SOURCE
#undef NDEBUG
#include <assert.h>
#include <unistd.h>
+8
tools/testing/selftests/wireguard/netns.sh
···
n2 ping -W 1 -c 1 192.168.241.1
n1 wg set wg0 peer "$pub2" persistent-keepalive 0

+# Test that sk_bound_dev_if works
+n1 ping -I wg0 -c 1 -W 1 192.168.241.2
+# What about when the mark changes and the packet must be rerouted?
+n1 iptables -t mangle -I OUTPUT -j MARK --set-xmark 1
+n1 ping -c 1 -W 1 192.168.241.2 # First the boring case
+n1 ping -I wg0 -c 1 -W 1 192.168.241.2 # Then the sk_bound_dev_if case
+n1 iptables -t mangle -D OUTPUT -j MARK --set-xmark 1
+
# Test that onion routing works, even when it loops
n1 wg set wg0 peer "$pub3" allowed-ips 192.168.242.2/32 endpoint 192.168.241.2:5
ip1 addr add 192.168.242.1/24 dev wg0