···
      0-|   / \/ \/
       +---0----1----2----3----4----5----6------------> time (s)

- 2. To make the LED go instantly from one brigntess value to another,
-    we should use use zero-time lengths (the brightness must be same as
+ 2. To make the LED go instantly from one brightness value to another,
+    we should use zero-time lengths (the brightness must be same as
     the previous tuple's). So the format should be:
     "brightness_1 duration_1 brightness_1 0 brightness_2 duration_2
     brightness_2 0 ...". For example:
+62 -3
Documentation/admin-guide/kernel-parameters.txt
···
 			causing system reset or hang due to sending
 			INIT from AP to BSP.

-	disable_counter_freezing [HW]
+	perf_v4_pmi=	[X86,INTEL]
+			Format: <bool>
 			Disable Intel PMU counter freezing feature.
 			The feature only exists starting from
 			Arch Perfmon v4 (Skylake and newer).
···
 			before loading.
 			See Documentation/blockdev/ramdisk.txt.

+	psi=		[KNL] Enable or disable pressure stall information
+			tracking.
+			Format: <bool>
+
 	psmouse.proto=	[HW,MOUSE] Highest PS2 mouse protocol extension to
 			probe for; one of (bare|imps|exps|lifebook|any).
 	psmouse.rate=	[HW,MOUSE] Set desired mouse report rate, in reports
···

 	spectre_v2=	[X86] Control mitigation of Spectre variant 2
 			(indirect branch speculation) vulnerability.
+			The default operation protects the kernel from
+			user space attacks.

-			on   - unconditionally enable
-			off  - unconditionally disable
+			on   - unconditionally enable, implies
+			       spectre_v2_user=on
+			off  - unconditionally disable, implies
+			       spectre_v2_user=off
 			auto - kernel detects whether your CPU model is
 			       vulnerable
···
 			CONFIG_RETPOLINE configuration option, and the
 			compiler with which the kernel was built.

+			Selecting 'on' will also enable the mitigation
+			against user space to user space task attacks.
+
+			Selecting 'off' will disable both the kernel and
+			the user space protections.
+
 			Specific mitigations can also be selected manually:

 			retpoline - replace indirect branches
···

 			Not specifying this option is equivalent to
 			spectre_v2=auto.
+
+	spectre_v2_user=
+			[X86] Control mitigation of Spectre variant 2
+			(indirect branch speculation) vulnerability between
+			user space tasks
+
+			on	- Unconditionally enable mitigations. Is
+				  enforced by spectre_v2=on
+
+			off	- Unconditionally disable mitigations. Is
+				  enforced by spectre_v2=off
+
+			prctl	- Indirect branch speculation is enabled,
+				  but mitigation can be enabled via prctl
+				  per thread. The mitigation control state
+				  is inherited on fork.
+
+			prctl,ibpb
+				- Like "prctl" above, but only STIBP is
+				  controlled per thread. IBPB is issued
+				  always when switching between different user
+				  space processes.
+
+			seccomp
+				- Same as "prctl" above, but all seccomp
+				  threads will enable the mitigation unless
+				  they explicitly opt out.
+
+			seccomp,ibpb
+				- Like "seccomp" above, but only STIBP is
+				  controlled per thread. IBPB is issued
+				  always when switching between different
+				  user space processes.
+
+			auto	- Kernel selects the mitigation depending on
+				  the available CPU features and vulnerability.
+
+			Default mitigation:
+			If CONFIG_SECCOMP=y then "seccomp", otherwise "prctl"
+
+			Not specifying this option is equivalent to
+			spectre_v2_user=auto.

 	spec_store_bypass_disable=
 			[HW] Control Speculative Store Bypass (SSB) Disable mitigation
···
 			prevent spurious wakeup);
 			n = USB_QUIRK_DELAY_CTRL_MSG (Device needs a
 			pause after every control message);
+			o = USB_QUIRK_HUB_SLOW_RESET (Hub needs extra
+			delay after resetting its port);
 			Example: quirks=0781:5580:bk,0a5c:5834:gij

 	usbhid.mousepoll=
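The spectre_v2_user= prctl/seccomp modes above are driven from user space
through the PR_SET_SPECULATION_CTRL interface (see
Documentation/userspace-api/spec_ctrl.rst). A minimal sketch, not part of
this diff, using the PR_SPEC_* constants from include/uapi/linux/prctl.h:

	#include <stdio.h>
	#include <sys/prctl.h>
	#include <linux/prctl.h>

	int main(void)
	{
		/* Opt this thread out of indirect branch speculation;
		 * only effective with spectre_v2_user=prctl or seccomp.
		 */
		if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
			  PR_SPEC_DISABLE, 0, 0))
			perror("PR_SET_SPECULATION_CTRL");

		/* The control state is inherited on fork(); read it back. */
		printf("ibspec state: %d\n",
		       (int)prctl(PR_GET_SPECULATION_CTRL,
				  PR_SPEC_INDIRECT_BRANCH, 0, 0, 0));
		return 0;
	}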
+1 -1
Documentation/admin-guide/pm/cpufreq.rst
···
 a governor ``sysfs`` interface to it.  Next, the governor is started by
 invoking its ``->start()`` callback.

-That callback it expected to register per-CPU utilization update callbacks for
+That callback is expected to register per-CPU utilization update callbacks for
 all of the online CPUs belonging to the given policy with the CPU scheduler.
 The utilization update callbacks will be invoked by the CPU scheduler on
 important events, like task enqueue and dequeue, on every iteration of the
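The registration the corrected sentence describes goes through
cpufreq_add_update_util_hook() from include/linux/sched/cpufreq.h. A rough
sketch of a governor's start path, with hypothetical names (gov_update,
gov_update_util); this is not the schedutil implementation:

	#include <linux/cpufreq.h>
	#include <linux/percpu-defs.h>
	#include <linux/sched/cpufreq.h>

	static DEFINE_PER_CPU(struct update_util_data, gov_update_util);

	/* Invoked by the scheduler on task enqueue/dequeue and on ticks. */
	static void gov_update(struct update_util_data *data, u64 time,
			       unsigned int flags)
	{
		/* evaluate utilization, possibly kick a frequency change */
	}

	static int gov_start(struct cpufreq_policy *policy)
	{
		unsigned int cpu;

		/* one callback per online CPU belonging to the policy */
		for_each_cpu(cpu, policy->cpus)
			cpufreq_add_update_util_hook(cpu,
					&per_cpu(gov_update_util, cpu),
					gov_update);
		return 0;
	}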
+10 -9
Documentation/admin-guide/security-bugs.rst
···
 The security list is not a disclosure channel. For that, see Coordination
 below.

-Once a robust fix has been developed, our preference is to release the
-fix in a timely fashion, treating it no differently than any of the other
-thousands of changes and fixes the Linux kernel project releases every
-month.
+Once a robust fix has been developed, the release process starts. Fixes
+for publicly known bugs are released immediately.

-However, at the request of the reporter, we will postpone releasing the
-fix for up to 5 business days after the date of the report or after the
-embargo has lifted; whichever comes first. The only exception to that
-rule is if the bug is publicly known, in which case the preference is to
-release the fix as soon as it's available.
+Although our preference is to release fixes for publicly undisclosed bugs
+as soon as they become available, this may be postponed at the request of
+the reporter or an affected party for up to 7 calendar days from the start
+of the release process, with an exceptional extension to 14 calendar days
+if it is agreed that the criticality of the bug requires more time. The
+only valid reason for deferring the publication of a fix is to accommodate
+the logistics of QA and large scale rollouts which require release
+coordination.

 Whilst embargoed information may be shared with trusted individuals in
 order to develop a fix, such information will not be published alongside
+1
Documentation/arm64/silicon-errata.txt
···
 | ARM            | Cortex-A73      | #858921         | ARM64_ERRATUM_858921    |
 | ARM            | Cortex-A55      | #1024718        | ARM64_ERRATUM_1024718   |
 | ARM            | Cortex-A76      | #1188873        | ARM64_ERRATUM_1188873   |
+| ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_1286807   |
 | ARM            | MMU-500         | #841119,#826419 | N/A                     |
 |                |                 |                 |                         |
 | Cavium         | ThunderX ITS    | #22375, #24313  | CAVIUM_ERRATUM_22375    |
+41 -11
Documentation/core-api/xarray.rst
···
 new entry and return the previous entry stored at that index.  You can
 use :c:func:`xa_erase` instead of calling :c:func:`xa_store` with a
 ``NULL`` entry.  There is no difference between an entry that has never
-been stored to and one that has most recently had ``NULL`` stored to it.
+been stored to, one that has been erased and one that has most recently
+had ``NULL`` stored to it.

 You can conditionally replace an entry at an index by using
 :c:func:`xa_cmpxchg`.  Like :c:func:`cmpxchg`, it will only succeed if
···
 indices.  Storing into one index may result in the entry retrieved by
 some, but not all of the other indices changing.

+Sometimes you need to ensure that a subsequent call to :c:func:`xa_store`
+will not need to allocate memory.  The :c:func:`xa_reserve` function
+will store a reserved entry at the indicated index.  Users of the normal
+API will see this entry as containing ``NULL``.  If you do not need to
+use the reserved entry, you can call :c:func:`xa_release` to remove the
+unused entry.  If another user has stored to the entry in the meantime,
+:c:func:`xa_release` will do nothing; if instead you want the entry to
+become ``NULL``, you should use :c:func:`xa_erase`.
+
+If all entries in the array are ``NULL``, the :c:func:`xa_empty` function
+will return ``true``.
+
 Finally, you can remove all entries from an XArray by calling
 :c:func:`xa_destroy`.  If the XArray entries are pointers, you may wish
 to free the entries first.  You can do this by iterating over all present
 entries in the XArray using the :c:func:`xa_for_each` iterator.

-ID assignment
--------------
+Allocating XArrays
+------------------
+
+If you use :c:func:`DEFINE_XARRAY_ALLOC` to define the XArray, or
+initialise it by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
+the XArray changes to track whether entries are in use or not.

 You can call :c:func:`xa_alloc` to store the entry at any unused index
 in the XArray.  If you need to modify the array from interrupt context,
 you can use :c:func:`xa_alloc_bh` or :c:func:`xa_alloc_irq` to disable
-interrupts while allocating the ID.  Unlike :c:func:`xa_store`, allocating
-a ``NULL`` pointer does not delete an entry.  Instead it reserves an
-entry like :c:func:`xa_reserve` and you can release it using either
-:c:func:`xa_erase` or :c:func:`xa_release`.  To use ID assignment, the
-XArray must be defined with :c:func:`DEFINE_XARRAY_ALLOC`, or initialised
-by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
+interrupts while allocating the ID.
+
+Using :c:func:`xa_store`, :c:func:`xa_cmpxchg` or :c:func:`xa_insert`
+will mark the entry as being allocated.  Unlike a normal XArray, storing
+``NULL`` will mark the entry as being in use, like :c:func:`xa_reserve`.
+To free an entry, use :c:func:`xa_erase` (or :c:func:`xa_release` if
+you only want to free the entry if it's ``NULL``).
+
+You cannot use ``XA_MARK_0`` with an allocating XArray as this mark
+is used to track whether an entry is free or not.  The other marks are
+available for your use.

 Memory allocation
 -----------------
···

 Takes xa_lock internally:
  * :c:func:`xa_store`
+ * :c:func:`xa_store_bh`
+ * :c:func:`xa_store_irq`
  * :c:func:`xa_insert`
  * :c:func:`xa_erase`
  * :c:func:`xa_erase_bh`
···
  * :c:func:`xa_alloc`
  * :c:func:`xa_alloc_bh`
  * :c:func:`xa_alloc_irq`
+ * :c:func:`xa_reserve`
+ * :c:func:`xa_reserve_bh`
+ * :c:func:`xa_reserve_irq`
  * :c:func:`xa_destroy`
  * :c:func:`xa_set_mark`
  * :c:func:`xa_clear_mark`
···
  * :c:func:`__xa_erase`
  * :c:func:`__xa_cmpxchg`
  * :c:func:`__xa_alloc`
+ * :c:func:`__xa_reserve`
  * :c:func:`__xa_set_mark`
  * :c:func:`__xa_clear_mark`
···
 using :c:func:`xa_lock_irqsave` in both the interrupt handler and process
 context, or :c:func:`xa_lock_irq` in process context and :c:func:`xa_lock`
 in the interrupt handler.  Some of the more common patterns have helper
-functions such as :c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
+functions such as :c:func:`xa_store_bh`, :c:func:`xa_store_irq`,
+:c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.

 Sometimes you need to protect access to the XArray with a mutex because
 that lock sits above another mutex in the locking hierarchy.  That does
···
    - :c:func:`xa_is_zero`
    - Zero entries appear as ``NULL`` through the Normal API, but occupy
      an entry in the XArray which can be used to reserve the index for
-     future use.
+     future use.  This is used by allocating XArrays for allocated entries
+     which are ``NULL``.

 Other internal entries may be added in the future.  As far as possible, they
 will be handled by :c:func:`xas_retry`.
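As a companion to the reservation text added above, a minimal sketch of
the flow under the 4.20 signatures; struct thing and the thing_* helpers
are illustrative, not part of the patch:

	#include <linux/xarray.h>

	struct thing;
	static DEFINE_XARRAY(things);

	/* Guarantee that a later xa_store() at @index needs no allocation. */
	static int thing_prepare(unsigned long index)
	{
		return xa_reserve(&things, index, GFP_KERNEL);
	}

	/* Readers see NULL at @index until this store happens. */
	static void thing_commit(unsigned long index, struct thing *t)
	{
		xa_store(&things, index, t, GFP_KERNEL);
	}

	/* Drop an unused reservation; a no-op if someone stored meanwhile. */
	static void thing_abort(unsigned long index)
	{
		xa_release(&things, index);
	}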
+5 -3
Documentation/cpu-freq/cpufreq-stats.txt
···
 This will give a fine grained information about all the CPU frequency
 transitions. The cat output here is a two dimensional matrix, where an entry
 <i,j> (row i, column j) represents the count of number of transitions from
-Freq_i to Freq_j. Freq_i is in descending order with increasing rows and
-Freq_j is in descending order with increasing columns. The output here also
-contains the actual freq values for each row and column for better readability.
+Freq_i to Freq_j. Freq_i rows and Freq_j columns follow the sorting order in
+which the driver has provided the frequency table initially to the cpufreq core
+and so can be sorted (ascending or descending) or unsorted. The output here
+also contains the actual freq values for each row and column for better
+readability.

 If the transition table is bigger than PAGE_SIZE, reading this will
 return an -EFBIG error.
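For reference, the table is consumed like any other sysfs attribute; a
small sketch (cpu0 path assumed, stats require CONFIG_CPU_FREQ_STAT):

	#include <stdio.h>

	int main(void)
	{
		char buf[4096];
		FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/"
				"stats/trans_table", "r");

		if (!f) {
			perror("trans_table");	/* -EFBIG if > PAGE_SIZE */
			return 1;
		}
		/* entry <i,j>: transitions from Freq_i (row) to Freq_j (col) */
		while (fgets(buf, sizeof(buf), f))
			fputs(buf, stdout);
		fclose(f);
		return 0;
	}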
Documentation/devicetree/bindings/cpufreq/arm_big_little_dt.txt (deleted)

···
-Generic ARM big LITTLE cpufreq driver's DT glue
------------------------------------------------
-
-This is DT specific glue layer for generic cpufreq driver for big LITTLE
-systems.
-
-Both required and optional properties listed below must be defined
-under node /cpus/cpu@x. Where x is the first cpu inside a cluster.
-
-FIXME: Cpus should boot in the order specified in DT and all cpus for a cluster
-must be present contiguously. Generic DT driver will check only node 'x' for
-cpu:x.
-
-Required properties:
-- operating-points: Refer to Documentation/devicetree/bindings/opp/opp.txt
-  for details
-
-Optional properties:
-- clock-latency: Specify the possible maximum transition latency for clock,
-  in unit of nanoseconds.
-
-Examples:
-
-cpus {
-	#address-cells = <1>;
-	#size-cells = <0>;
-
-	cpu@0 {
-		compatible = "arm,cortex-a15";
-		reg = <0>;
-		next-level-cache = <&L2>;
-		operating-points = <
-			/* kHz    uV */
-			792000  1100000
-			396000  950000
-			198000  850000
-		>;
-		clock-latency = <61036>; /* two CLK32 periods */
-	};
-
-	cpu@1 {
-		compatible = "arm,cortex-a15";
-		reg = <1>;
-		next-level-cache = <&L2>;
-	};
-
-	cpu@100 {
-		compatible = "arm,cortex-a7";
-		reg = <100>;
-		next-level-cache = <&L2>;
-		operating-points = <
-			/* kHz    uV */
-			792000  950000
-			396000  750000
-			198000  450000
-		>;
-		clock-latency = <61036>; /* two CLK32 periods */
-	};
-
-	cpu@101 {
-		compatible = "arm,cortex-a7";
-		reg = <101>;
-		next-level-cache = <&L2>;
-	};
-};
Documentation/devicetree/bindings/i2c/i2c-omap.txt

···
 I2C for OMAP platforms

 Required properties :
-- compatible : Must be "ti,omap2420-i2c", "ti,omap2430-i2c", "ti,omap3-i2c"
-  or "ti,omap4-i2c"
+- compatible : Must be
+	"ti,omap2420-i2c" for OMAP2420 SoCs
+	"ti,omap2430-i2c" for OMAP2430 SoCs
+	"ti,omap3-i2c" for OMAP3 SoCs
+	"ti,omap4-i2c" for OMAP4+ SoCs
+	"ti,am654-i2c", "ti,omap4-i2c" for AM654 SoCs
 - ti,hwmods : Must be "i2c<n>", n being the instance number (1-based)
 - #address-cells = <1>;
 - #size-cells = <0>;
Documentation/devicetree/bindings/net/can/rcar_can.txt

···
 - compatible: "renesas,can-r8a7743" if CAN controller is a part of R8A7743 SoC.
 	      "renesas,can-r8a7744" if CAN controller is a part of R8A7744 SoC.
 	      "renesas,can-r8a7745" if CAN controller is a part of R8A7745 SoC.
+	      "renesas,can-r8a774a1" if CAN controller is a part of R8A774A1 SoC.
 	      "renesas,can-r8a7778" if CAN controller is a part of R8A7778 SoC.
 	      "renesas,can-r8a7779" if CAN controller is a part of R8A7779 SoC.
 	      "renesas,can-r8a7790" if CAN controller is a part of R8A7790 SoC.
···
 	      "renesas,can-r8a7794" if CAN controller is a part of R8A7794 SoC.
 	      "renesas,can-r8a7795" if CAN controller is a part of R8A7795 SoC.
 	      "renesas,can-r8a7796" if CAN controller is a part of R8A7796 SoC.
+	      "renesas,can-r8a77965" if CAN controller is a part of R8A77965 SoC.
 	      "renesas,rcar-gen1-can" for a generic R-Car Gen1 compatible device.
 	      "renesas,rcar-gen2-can" for a generic R-Car Gen2 or RZ/G1
 	      compatible device.
-	      "renesas,rcar-gen3-can" for a generic R-Car Gen3 compatible device.
+	      "renesas,rcar-gen3-can" for a generic R-Car Gen3 or RZ/G2
+	      compatible device.
 	      When compatible with the generic version, nodes must list the
 	      SoC-specific version corresponding to the platform first
 	      followed by the generic version.

 - reg: physical base address and size of the R-Car CAN register map.
 - interrupts: interrupt specifier for the sole interrupt.
-- clocks: phandles and clock specifiers for 3 CAN clock inputs.
-- clock-names: 3 clock input name strings: "clkp1", "clkp2", "can_clk".
+- clocks: phandles and clock specifiers for 2 CAN clock inputs for RZ/G2
+	  devices.
+	  phandles and clock specifiers for 3 CAN clock inputs for every other
+	  SoC.
+- clock-names: 2 clock input name strings for RZ/G2: "clkp1", "can_clk".
+	       3 clock input name strings for every other SoC: "clkp1", "clkp2",
+	       "can_clk".
 - pinctrl-0: pin control group to be used for this controller.
 - pinctrl-names: must be "default".

-Required properties for "renesas,can-r8a7795" and "renesas,can-r8a7796"
-compatible:
-In R8A7795 and R8A7796 SoCs, "clkp2" can be CANFD clock. This is a div6 clock
-and can be used by both CAN and CAN FD controller at the same time. It needs to
-be scaled to maximum frequency if any of these controllers use it. This is done
+Required properties for R8A7795, R8A7796 and R8A77965:
+For the denoted SoCs, "clkp2" can be CANFD clock. This is a div6 clock and can
+be used by both CAN and CAN FD controller at the same time. It needs to be
+scaled to maximum frequency if any of these controllers use it. This is done
 using the below properties:

 - assigned-clocks: phandle of clkp2(CANFD) clock.
···
 Optional properties:
 - renesas,can-clock-select: R-Car CAN Clock Source Select. Valid values are:
 			    <0x0> (default) : Peripheral clock (clkp1)
-			    <0x1> : Peripheral clock (clkp2)
-			    <0x3> : Externally input clock
+			    <0x1> : Peripheral clock (clkp2) (not supported by
+			    RZ/G2 devices)
+			    <0x3> : External input clock

 Example
 -------
+1 -1
Documentation/devicetree/bindings/net/dsa/dsa.txt
···
 Current Binding
 ---------------

-Switches are true Linux devices and can be probes by any means. Once
+Switches are true Linux devices and can be probed by any means. Once
 probed, they register to the DSA framework, passing a node
 pointer. This node is expected to fulfil the following binding, and
 may contain additional properties as required by the device it is
···4040 "ref" for 19.2 MHz ref clk,4141 "com_aux" for phy common block aux clock,4242 "ref_aux" for phy reference aux clock,4343+4444+ For "qcom,ipq8074-qmp-pcie-phy": no clocks are listed.4345 For "qcom,msm8996-qmp-pcie-phy" must contain:4446 "aux", "cfg_ahb", "ref".4547 For "qcom,msm8996-qmp-usb3-phy" must contain:4648 "aux", "cfg_ahb", "ref".4747- For "qcom,qmp-v3-usb3-phy" must contain:4949+ For "qcom,sdm845-qmp-usb3-phy" must contain:4850 "aux", "cfg_ahb", "ref", "com_aux".5151+ For "qcom,sdm845-qmp-usb3-uni-phy" must contain:5252+ "aux", "cfg_ahb", "ref", "com_aux".5353+ For "qcom,sdm845-qmp-ufs-phy" must contain:5454+ "ref", "ref_aux".49555056 - resets: a list of phandles and reset controller specifier pairs,5157 one for each entry in reset-names.5258 - reset-names: "phy" for reset of phy block,5359 "common" for phy common block reset,5454- "cfg" for phy's ahb cfg block reset (Optional).5555- For "qcom,msm8996-qmp-pcie-phy" must contain:5656- "phy", "common", "cfg".5757- For "qcom,msm8996-qmp-usb3-phy" must contain5858- "phy", "common".6060+ "cfg" for phy's ahb cfg block reset.6161+5962 For "qcom,ipq8074-qmp-pcie-phy" must contain:6060- "phy", "common".6363+ "phy", "common".6464+ For "qcom,msm8996-qmp-pcie-phy" must contain:6565+ "phy", "common", "cfg".6666+ For "qcom,msm8996-qmp-usb3-phy" must contain6767+ "phy", "common".6868+ For "qcom,sdm845-qmp-usb3-phy" must contain:6969+ "phy", "common".7070+ For "qcom,sdm845-qmp-usb3-uni-phy" must contain:7171+ "phy", "common".7272+ For "qcom,sdm845-qmp-ufs-phy": no resets are listed.61736274 - vdda-phy-supply: Phandle to a regulator supply to PHY core block.6375 - vdda-pll-supply: Phandle to 1.8V regulator supply to PHY refclk pll block.···91799280 - #phy-cells: must be 093818282+Required properties child node of pcie and usb3 qmp phys:9483 - clocks: a list of phandles and clock-specifier pairs,9584 one for each entry in clock-names.9696- - clock-names: Must contain following for pcie and usb qmp phys:8585+ - clock-names: Must contain following:9786 "pipe<lane-number>" for pipe clock specific to each lane.9887 - clock-output-names: Name of the PHY clock that will be the parent for9988 the above pipe clock.···10491 (or)10592 "pcie20_phy1_pipe_clk"106939494+Required properties for child node of PHYs with lane reset, AKA:9595+ "qcom,msm8996-qmp-pcie-phy"10796 - resets: a list of phandles and reset controller specifier pairs,10897 one for each entry in reset-names.109109- - reset-names: Must contain following for pcie qmp phys:9898+ - reset-names: Must contain following:11099 "lane<lane-number>" for reset specific to each lane.111100112101Example:
Documentation/devicetree/bindings/spi/spi-uniphier.txt

···
 Required properties:
  - compatible: should be "socionext,uniphier-scssi"
  - reg: address and length of the spi master registers
- - #address-cells: must be <1>, see spi-bus.txt
- - #size-cells: must be <0>, see spi-bus.txt
- - clocks: A phandle to the clock for the device.
- - resets: A phandle to the reset control for the device.
+ - interrupts: a single interrupt specifier
+ - pinctrl-names: should be "default"
+ - pinctrl-0: pin control state for the default mode
+ - clocks: a phandle to the clock for the device
+ - resets: a phandle to the reset control for the device

 Example:

 spi0: spi@54006000 {
	compatible = "socionext,uniphier-scssi";
	reg = <0x54006000 0x100>;
-	#address-cells = <1>;
-	#size-cells = <0>;
+	interrupts = <0 39 4>;
+	pinctrl-names = "default";
+	pinctrl-0 = <&pinctrl_spi0>;
	clocks = <&peri_clk 11>;
	resets = <&peri_rst 11>;
 };
+18
Documentation/i2c/busses/i2c-nvidia-gpu
···
+Kernel driver i2c-nvidia-gpu
+
+Datasheet: not publicly available.
+
+Authors:
+	Ajay Gupta <ajayg@nvidia.com>
+
+Description
+-----------
+
+i2c-nvidia-gpu is a driver for I2C controller included in NVIDIA Turing
+and later GPUs and it is used to communicate with Type-C controller on GPUs.
+
+If your 'lspci -v' listing shows something like the following,
+
+01:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device 1ad9 (rev a1)
+
+then this driver should support the I2C controller of your GPU.
+1 -10
Documentation/input/event-codes.rst
···
 * REL_WHEEL, REL_HWHEEL:

   - These codes are used for vertical and horizontal scroll wheels,
-    respectively. The value is the number of "notches" moved on the wheel, the
-    physical size of which varies by device. For high-resolution wheels (which
-    report multiple events for each notch of movement, or do not have notches)
-    this may be an approximation based on the high-resolution scroll events.
-
-* REL_WHEEL_HI_RES:
-
-  - If a vertical scroll wheel supports high-resolution scrolling, this code
-    will be emitted in addition to REL_WHEEL. The value is the (approximate)
-    distance travelled by the user's finger, in microns.
+    respectively.

 EV_ABS
 ------
+1 -1
Documentation/media/uapi/v4l/dev-meta.rst
···
 the desired operation. Both drivers and applications must set the remainder of
 the :c:type:`v4l2_format` structure to 0.

-.. _v4l2-meta-format:
+.. c:type:: v4l2_meta_format

 .. tabularcolumns:: |p{1.4cm}|p{2.2cm}|p{13.9cm}|
+5
Documentation/media/uapi/v4l/vidioc-g-fmt.rst
···
       - Definition of a data format, see :ref:`pixfmt`, used by SDR
	 capture and output devices.
     * -
+      - struct :c:type:`v4l2_meta_format`
+      - ``meta``
+      - Definition of a metadata format, see :ref:`meta-formats`, used by
+	 metadata capture devices.
+    * -
       - __u8
       - ``raw_data``\ [200]
       - Place holder for future extensions.
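A short sketch of how an application reads a metadata format through the
``meta`` union member added above (get_meta_format is a hypothetical
helper; error handling trimmed):

	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	int get_meta_format(int fd, struct v4l2_meta_format *meta)
	{
		struct v4l2_format fmt;

		memset(&fmt, 0, sizeof(fmt));	/* zero the remainder, as required */
		fmt.type = V4L2_BUF_TYPE_META_CAPTURE;
		if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0)
			return -1;
		*meta = fmt.fmt.meta;		/* the ``meta`` member */
		return 0;
	}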
+11 -6
Documentation/networking/rxrpc.txt
···

	u32 rxrpc_kernel_check_life(struct socket *sock,
				    struct rxrpc_call *call);
+	void rxrpc_kernel_probe_life(struct socket *sock,
+				     struct rxrpc_call *call);

-    This returns a number that is updated when ACKs are received from the peer
-    (notably including PING RESPONSE ACKs which we can elicit by sending PING
-    ACKs to see if the call still exists on the server).  The caller should
-    compare the numbers of two calls to see if the call is still alive after
-    waiting for a suitable interval.
+    The first function returns a number that is updated when ACKs are received
+    from the peer (notably including PING RESPONSE ACKs which we can elicit by
+    sending PING ACKs to see if the call still exists on the server).  The
+    caller should compare the numbers of two calls to see if the call is still
+    alive after waiting for a suitable interval.

     This allows the caller to work out if the server is still contactable and
     if the call is still alive on the server whilst waiting for the server to
     process a client operation.

-    This function may transmit a PING ACK.
+    The second function causes a ping ACK to be transmitted to try to provoke
+    the peer into responding, which would then cause the value returned by the
+    first function to change.  Note that this must be called in TASK_RUNNING
+    state.

 (*) Get reply timestamp.
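Putting the two calls together, the liveness check the text describes
looks roughly like this (the caller name is illustrative; only the two
signatures quoted above are assumed):

	#include <net/af_rxrpc.h>

	/* Returns true if ACK traffic has been seen since *@life was last
	 * recorded; otherwise pokes the peer and lets the caller retry
	 * after a suitable interval.
	 */
	static bool call_still_alive(struct socket *sock,
				     struct rxrpc_call *call, u32 *life)
	{
		u32 now = rxrpc_kernel_check_life(sock, call);

		if (now != *life) {
			*life = now;
			return true;
		}
		rxrpc_kernel_probe_life(sock, call);	/* needs TASK_RUNNING */
		return false;
	}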
Documentation/x86/boot.txt

···
		to struct boot_params for loading bzImage and ramdisk
		above 4G in 64bit.

-Protocol 2.13:	(Kernel 3.14) Support 32- and 64-bit flags being set in
-		xloadflags to support booting a 64-bit kernel from 32-bit
-		EFI
-
-Protocol 2.14:	(Kernel 4.20) Added acpi_rsdp_addr holding the physical
-		address of the ACPI RSDP table.
-		The bootloader updates version with:
-		0x8000 | min(kernel-version, bootloader-version)
-		kernel-version being the protocol version supported by
-		the kernel and bootloader-version the protocol version
-		supported by the bootloader.
-
 **** MEMORY LAYOUT

 The traditional memory map for the kernel loader, used for Image or
···
 0258/8	2.10+	pref_address	Preferred loading address
 0260/4	2.10+	init_size	Linear memory required during initialization
 0264/4	2.11+	handover_offset	Offset of handover entry point
-0268/8	2.14+	acpi_rsdp_addr	Physical address of RSDP table

 (1) For backwards compatibility, if the setup_sects field contains 0, the
     real value is 4.
···
   Contains the magic number "HdrS" (0x53726448).

 Field name:	version
-Type:		modify
+Type:		read
 Offset/size:	0x206/2
 Protocol:	2.00+

   Contains the boot protocol version, in (major << 8)+minor format,
   e.g. 0x0204 for version 2.04, and 0x0a11 for a hypothetical version
   10.17.
-
-  Up to protocol version 2.13 this information is only read by the
-  bootloader. From protocol version 2.14 onwards the bootloader will
-  write the used protocol version or-ed with 0x8000 to the field. The
-  used protocol version will be the minimum of the supported protocol
-  versions of the bootloader and the kernel.

 Field name:	realmode_swtch
 Type:		modify (optional)
···
   handover protocol to boot the kernel should jump to this offset.

   See EFI HANDOVER PROTOCOL below for more details.
-
-Field name:	acpi_rsdp_addr
-Type:		write
-Offset/size:	0x268/8
-Protocol:	2.14+
-
-  This field can be set by the boot loader to tell the kernel the
-  physical address of the ACPI RSDP table.
-
-  A value of 0 indicates the kernel should fall back to the standard
-  methods to locate the RSDP.


 **** THE IMAGE CHECKSUM
Documentation/x86/zero-page.txt

···
 0C8/004	ALL	ext_cmd_line_ptr  cmd_line_ptr high 32bits
 140/080	ALL	edid_info	Video mode setup (struct edid_info)
 1C0/020	ALL	efi_info	EFI 32 information (struct efi_info)
-1E0/004	ALL	alk_mem_k	Alternative mem check, in KB
+1E0/004	ALL	alt_mem_k	Alternative mem check, in KB
 1E4/004	ALL	scratch		Scratch field for the kernel setup code
 1E8/001	ALL	e820_entries	Number of entries in e820_table (below)
 1E9/001	ALL	eddbuf_entries	Number of entries in eddbuf (below)
+151 -57
MAINTAINERS
···

 8169 10/100/1000 GIGABIT ETHERNET DRIVER
 M:	Realtek linux nic maintainers <nic_swsd@realtek.com>
+M:	Heiner Kallweit <hkallweit1@gmail.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/ethernet/realtek/r8169.c
···
 F:	include/dt-bindings/reset/altr,rst-mgr-a10sr.h

 ALTERA TRIPLE SPEED ETHERNET DRIVER
-M:	Vince Bridgers <vbridger@opensource.altera.com>
+M:	Thor Thayer <thor.thayer@linux.intel.com>
 L:	netdev@vger.kernel.org
 L:	nios2-dev@lists.rocketboards.org (moderated for non-subscribers)
 S:	Maintained
···
 M:	Andy Gross <andy.gross@linaro.org>
 M:	David Brown <david.brown@linaro.org>
 L:	linux-arm-msm@vger.kernel.org
-L:	linux-soc@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/soc/qcom/
 F:	arch/arm/boot/dts/qcom-*.dts
···
 ATHEROS ATH5K WIRELESS DRIVER
 M:	Jiri Slaby <jirislaby@gmail.com>
 M:	Nick Kossifidis <mickflemm@gmail.com>
-M:	"Luis R. Rodriguez" <mcgrof@do-not-panic.com>
+M:	Luis Chamberlain <mcgrof@kernel.org>
 L:	linux-wireless@vger.kernel.org
 W:	http://wireless.kernel.org/en/users/Drivers/ath5k
 S:	Maintained
···
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
 Q:	https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147
 S:	Supported
-F:	arch/x86/net/bpf_jit*
+F:	arch/*/net/*
 F:	Documentation/networking/filter.txt
 F:	Documentation/bpf/
 F:	include/linux/bpf*
···
 F:	tools/bpf/
 F:	tools/lib/bpf/
 F:	tools/testing/selftests/bpf/
+
+BPF JIT for ARM
+M:	Shubham Bansal <illusionist.neo@gmail.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/arm/net/
+
+BPF JIT for ARM64
+M:	Daniel Borkmann <daniel@iogearbox.net>
+M:	Alexei Starovoitov <ast@kernel.org>
+M:	Zi Shen Lim <zlim.lnx@gmail.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	arch/arm64/net/
+
+BPF JIT for MIPS (32-BIT AND 64-BIT)
+M:	Paul Burton <paul.burton@mips.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/mips/net/
+
+BPF JIT for NFP NICs
+M:	Jakub Kicinski <jakub.kicinski@netronome.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	drivers/net/ethernet/netronome/nfp/bpf/
+
+BPF JIT for POWERPC (32-BIT AND 64-BIT)
+M:	Naveen N. Rao <naveen.n.rao@linux.ibm.com>
+M:	Sandipan Das <sandipan@linux.ibm.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/powerpc/net/
+
+BPF JIT for S390
+M:	Martin Schwidefsky <schwidefsky@de.ibm.com>
+M:	Heiko Carstens <heiko.carstens@de.ibm.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/s390/net/
+X:	arch/s390/net/pnet.c
+
+BPF JIT for SPARC (32-BIT AND 64-BIT)
+M:	David S. Miller <davem@davemloft.net>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/sparc/net/
+
+BPF JIT for X86 32-BIT
+M:	Wang YanQing <udknight@gmail.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/x86/net/bpf_jit_comp32.c
+
+BPF JIT for X86 64-BIT
+M:	Alexei Starovoitov <ast@kernel.org>
+M:	Daniel Borkmann <daniel@iogearbox.net>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	arch/x86/net/
+X:	arch/x86/net/bpf_jit_comp32.c

 BROADCOM B44 10/100 ETHERNET DRIVER
 M:	Michael Chan <michael.chan@broadcom.com>
···
 BROADCOM BCM47XX MIPS ARCHITECTURE
 M:	Hauke Mehrtens <hauke@hauke-m.de>
 M:	Rafał Miłecki <zajec5@gmail.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/mips/brcm/
 F:	arch/mips/bcm47xx/*
···
 BROADCOM BCM5301X ARM ARCHITECTURE
 M:	Hauke Mehrtens <hauke@hauke-m.de>
 M:	Rafał Miłecki <zajec5@gmail.com>
-M:	Jon Mason <jonmason@broadcom.com>
 M:	bcm-kernel-feedback-list@broadcom.com
 L:	linux-arm-kernel@lists.infradead.org
 S:	Maintained
···
 BROADCOM BMIPS MIPS ARCHITECTURE
 M:	Kevin Cernekee <cernekee@gmail.com>
 M:	Florian Fainelli <f.fainelli@gmail.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 T:	git git://github.com/broadcom/stblinux.git
 S:	Maintained
 F:	arch/mips/bmips/*
···
 BROADCOM IPROC ARM ARCHITECTURE
 M:	Ray Jui <rjui@broadcom.com>
 M:	Scott Branden <sbranden@broadcom.com>
-M:	Jon Mason <jonmason@broadcom.com>
 M:	bcm-kernel-feedback-list@broadcom.com
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 T:	git git://github.com/broadcom/cygnus-linux.git
···

 BROADCOM NVRAM DRIVER
 M:	Rafał Miłecki <zajec5@gmail.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	drivers/firmware/broadcom/*
···
 F:	include/uapi/linux/caif/
 F:	include/net/caif/
 F:	net/caif/
+
+CAKE QDISC
+M:	Toke Høiland-Jørgensen <toke@toke.dk>
+L:	cake@lists.bufferbloat.net (moderated for non-subscribers)
+S:	Maintained
+F:	net/sched/sch_cake.c

 CALGARY x86-64 IOMMU
 M:	Muli Ben-Yehuda <mulix@mulix.org>
···

 DECSTATION PLATFORM SUPPORT
 M:	"Maciej W. Rozycki" <macro@linux-mips.org>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 W:	http://www.linux-mips.org/wiki/DECstation
 S:	Maintained
 F:	arch/mips/dec/
···
 M:	Ralf Baechle <ralf@linux-mips.org>
 M:	David Daney <david.daney@cavium.com>
 L:	linux-edac@vger.kernel.org
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Supported
 F:	drivers/edac/octeon_edac*
···
 ETHERNET PHY LIBRARY
 M:	Andrew Lunn <andrew@lunn.ch>
 M:	Florian Fainelli <f.fainelli@gmail.com>
+M:	Heiner Kallweit <hkallweit1@gmail.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	Documentation/ABI/testing/sysfs-bus-mdio
···
 F:	tools/firewire/

 FIRMWARE LOADER (request_firmware)
-M:	Luis R. Rodriguez <mcgrof@kernel.org>
+M:	Luis Chamberlain <mcgrof@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	Documentation/firmware_class/
···

 GPIO SUBSYSTEM
 M:	Linus Walleij <linus.walleij@linaro.org>
+M:	Bartosz Golaszewski <bgolaszewski@baylibre.com>
 L:	linux-gpio@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git
 S:	Maintained
···

 HID CORE LAYER
 M:	Jiri Kosina <jikos@kernel.org>
-R:	Benjamin Tissoires <benjamin.tissoires@redhat.com>
+M:	Benjamin Tissoires <benjamin.tissoires@redhat.com>
 L:	linux-input@vger.kernel.org
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git
 S:	Maintained
 F:	drivers/hid/
 F:	include/linux/hid*
···
 L:	linux-acpi@vger.kernel.org
 S:	Maintained
 F:	drivers/i2c/i2c-core-acpi.c
+
+I2C CONTROLLER DRIVER FOR NVIDIA GPU
+M:	Ajay Gupta <ajayg@nvidia.com>
+L:	linux-i2c@vger.kernel.org
+S:	Maintained
+F:	Documentation/i2c/busses/i2c-nvidia-gpu
+F:	drivers/i2c/busses/i2c-nvidia-gpu.c

 I2C MUXES
 M:	Peter Rosin <peda@axentia.se>
···
 F:	Documentation/fb/intelfb.txt
 F:	drivers/video/fbdev/intelfb/

+INTEL GPIO DRIVERS
+M:	Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+L:	linux-gpio@vger.kernel.org
+S:	Maintained
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/andy/linux-gpio-intel.git
+F:	drivers/gpio/gpio-ich.c
+F:	drivers/gpio/gpio-intel-mid.c
+F:	drivers/gpio/gpio-lynxpoint.c
+F:	drivers/gpio/gpio-merrifield.c
+F:	drivers/gpio/gpio-ml-ioh.c
+F:	drivers/gpio/gpio-pch.c
+F:	drivers/gpio/gpio-sch.c
+F:	drivers/gpio/gpio-sodaville.c
+
 INTEL GVT-g DRIVERS (Intel GPU Virtualization)
 M:	Zhenyu Wang <zhenyuw@linux.intel.com>
 M:	Zhi Wang <zhi.a.wang@intel.com>
···
 T:	git https://github.com/intel/gvt-linux.git
 S:	Supported
 F:	drivers/gpu/drm/i915/gvt/
-
-INTEL PMIC GPIO DRIVER
-R:	Andy Shevchenko <andriy.shevchenko@linux.intel.com>
-S:	Maintained
-F:	drivers/gpio/gpio-*cove.c
-F:	drivers/gpio/gpio-msic.c

 INTEL HID EVENT DRIVER
 M:	Alex Hung <alex.hung@canonical.com>
···
 S:	Supported
 F:	drivers/platform/x86/intel_menlow.c

-INTEL MERRIFIELD GPIO DRIVER
-M:	Andy Shevchenko <andriy.shevchenko@linux.intel.com>
-L:	linux-gpio@vger.kernel.org
-S:	Maintained
-F:	drivers/gpio/gpio-merrifield.c
-
 INTEL MIC DRIVERS (mic)
 M:	Sudeep Dutt <sudeep.dutt@intel.com>
 M:	Ashutosh Dixit <ashutosh.dixit@intel.com>
···
 F:	drivers/platform/x86/intel_punit_ipc.c
 F:	arch/x86/include/asm/intel_pmc_ipc.h
 F:	arch/x86/include/asm/intel_punit_ipc.h
+
+INTEL PMIC GPIO DRIVERS
+M:	Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+S:	Maintained
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/andy/linux-gpio-intel.git
+F:	drivers/gpio/gpio-*cove.c
+F:	drivers/gpio/gpio-msic.c

 INTEL MULTIFUNCTION PMIC DEVICE DRIVERS
 R:	Andy Shevchenko <andriy.shevchenko@linux.intel.com>
···

 IOC3 ETHERNET DRIVER
 M:	Ralf Baechle <ralf@linux-mips.org>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	drivers/net/ethernet/sgi/ioc3-eth.c
···
 F:	Documentation/dev-tools/kselftest*

 KERNEL USERMODE HELPER
-M:	"Luis R. Rodriguez" <mcgrof@kernel.org>
+M:	Luis Chamberlain <mcgrof@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	kernel/umh.c
···

 KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)
 M:	James Hogan <jhogan@kernel.org>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Supported
 F:	arch/mips/include/uapi/asm/kvm*
 F:	arch/mips/include/asm/kvm*
···
 F:	mm/kmemleak-test.c

 KMOD KERNEL MODULE LOADER - USERMODE HELPER
-M:	"Luis R. Rodriguez" <mcgrof@kernel.org>
+M:	Luis Chamberlain <mcgrof@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	kernel/kmod.c
···

 LANTIQ MIPS ARCHITECTURE
 M:	John Crispin <john@phrozen.org>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/lantiq
 F:	drivers/soc/lantiq
···
 LIBATA PATA ARASAN COMPACT FLASH CONTROLLER
 M:	Viresh Kumar <vireshk@kernel.org>
 L:	linux-ide@vger.kernel.org
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
 S:	Maintained
 F:	include/linux/pata_arasan_cf_data.h
 F:	drivers/ata/pata_arasan_cf.c
···
 LIBATA PATA FARADAY FTIDE010 AND GEMINI SATA BRIDGE DRIVERS
 M:	Linus Walleij <linus.walleij@linaro.org>
 L:	linux-ide@vger.kernel.org
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
 S:	Maintained
 F:	drivers/ata/pata_ftide010.c
 F:	drivers/ata/sata_gemini.c
···
 LIBATA SATA PROMISE TX2/TX4 CONTROLLER DRIVER
 M:	Mikael Pettersson <mikpelinux@gmail.com>
 L:	linux-ide@vger.kernel.org
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
 S:	Maintained
 F:	drivers/ata/sata_promise.*
···

 MARDUK (CREATOR CI40) DEVICE TREE SUPPORT
 M:	Rahul Bedarkar <rahulbedarkar89@gmail.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/boot/dts/img/pistachio_marduk.dts
···

 MICROSEMI MIPS SOCS
 M:	Alexandre Belloni <alexandre.belloni@bootlin.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/generic/board-ocelot.c
 F:	arch/mips/configs/generic/board-ocelot.config
···
 M:	Ralf Baechle <ralf@linux-mips.org>
 M:	Paul Burton <paul.burton@mips.com>
 M:	James Hogan <jhogan@kernel.org>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 W:	http://www.linux-mips.org/
 T:	git git://git.linux-mips.org/pub/scm/ralf/linux.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux.git
···

 MIPS BOSTON DEVELOPMENT BOARD
 M:	Paul Burton <paul.burton@mips.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/clock/img,boston-clock.txt
 F:	arch/mips/boot/dts/img/boston.dts
···

 MIPS GENERIC PLATFORM
 M:	Paul Burton <paul.burton@mips.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Supported
 F:	Documentation/devicetree/bindings/power/mti,mips-cpc.txt
 F:	arch/mips/generic/
···

 MIPS/LOONGSON1 ARCHITECTURE
 M:	Keguang Zhang <keguang.zhang@gmail.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/loongson32/
 F:	arch/mips/include/asm/mach-loongson32/
···

 MIPS/LOONGSON2 ARCHITECTURE
 M:	Jiaxun Yang <jiaxun.yang@flygoat.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/loongson64/fuloong-2e/
 F:	arch/mips/loongson64/lemote-2f/
···

 MIPS/LOONGSON3 ARCHITECTURE
 M:	Huacai Chen <chenhc@lemote.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/loongson64/
 F:	arch/mips/include/asm/mach-loongson64/
···

 MIPS RINT INSTRUCTION EMULATION
 M:	Aleksandar Markovic <aleksandar.markovic@mips.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Supported
 F:	arch/mips/math-emu/sp_rint.c
 F:	arch/mips/math-emu/dp_rint.c
···
 S:	Maintained
 F:	arch/arm/mach-omap2/omap_hwmod.*

+OMAP I2C DRIVER
+M:	Vignesh R <vigneshr@ti.com>
+L:	linux-omap@vger.kernel.org
+L:	linux-i2c@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/i2c/i2c-omap.txt
+F:	drivers/i2c/busses/i2c-omap.c
+
 OMAP IMAGING SUBSYSTEM (OMAP3 ISP and OMAP4 ISS)
 M:	Laurent Pinchart <laurent.pinchart@ideasonboard.com>
 L:	linux-media@vger.kernel.org
···
 F:	drivers/staging/media/omap4iss/

 OMAP MMC SUPPORT
-M:	Jarkko Lavinen <jarkko.lavinen@nokia.com>
+M:	Aaro Koskinen <aaro.koskinen@iki.fi>
 L:	linux-omap@vger.kernel.org
-S:	Maintained
+S:	Odd Fixes
 F:	drivers/mmc/host/omap.c

 OMAP POWER MANAGEMENT SUPPORT
···

 ONION OMEGA2+ BOARD
 M:	Harvey Hunt <harveyhuntnexus@gmail.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/boot/dts/ralink/omega2p.dts
···
 PIN CONTROLLER - INTEL
 M:	Mika Westerberg <mika.westerberg@linux.intel.com>
 M:	Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/pinctrl/intel.git
 S:	Maintained
 F:	drivers/pinctrl/intel/
···

 PISTACHIO SOC SUPPORT
 M:	James Hartley <james.hartley@sondrel.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Odd Fixes
 F:	arch/mips/pistachio/
 F:	arch/mips/include/asm/mach-pistachio/
···
 F:	include/linux/printk.h

 PRISM54 WIRELESS DRIVER
-M:	"Luis R. Rodriguez" <mcgrof@gmail.com>
+M:	Luis Chamberlain <mcgrof@kernel.org>
 L:	linux-wireless@vger.kernel.org
 W:	http://wireless.kernel.org/en/users/Drivers/p54
 S:	Obsolete
···
 F:	fs/proc/
 F:	include/linux/proc_fs.h
 F:	tools/testing/selftests/proc/
+F:	Documentation/filesystems/proc.txt

 PROC SYSCTL
-M:	"Luis R. Rodriguez" <mcgrof@kernel.org>
+M:	Luis Chamberlain <mcgrof@kernel.org>
 M:	Kees Cook <keescook@chromium.org>
 L:	linux-kernel@vger.kernel.org
 L:	linux-fsdevel@vger.kernel.org
···

 RALINK MIPS ARCHITECTURE
 M:	John Crispin <john@phrozen.org>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/ralink
···

 RANCHU VIRTUAL BOARD FOR MIPS
 M:	Miodrag Dinic <miodrag.dinic@mips.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Supported
 F:	arch/mips/generic/board-ranchu.c
 F:	arch/mips/configs/generic/board-ranchu.config
···
 F:	Documentation/devicetree/bindings/sound/
 F:	Documentation/sound/soc/
 F:	sound/soc/
+F:	include/dt-bindings/sound/
 F:	include/sound/soc*

 SOUNDWIRE SUBSYSTEM
···
 F:	drivers/tty/vcc.c

 SPARSE CHECKER
-M:	"Christopher Li" <sparse@chrisli.org>
+M:	"Luc Van Oostenryck" <luc.vanoostenryck@gmail.com>
 L:	linux-sparse@vger.kernel.org
 W:	https://sparse.wiki.kernel.org/
 T:	git git://git.kernel.org/pub/scm/devel/sparse/sparse.git
-T:	git git://git.kernel.org/pub/scm/devel/sparse/chrisl/sparse.git
 S:	Maintained
 F:	include/linux/compiler.h
···

 STABLE BRANCH
 M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+M:	Sasha Levin <sashal@kernel.org>
 L:	stable@vger.kernel.org
 S:	Supported
 F:	Documentation/process/stable-kernel-rules.rst
···
 TURBOCHANNEL SUBSYSTEM
 M:	"Maciej W. Rozycki" <macro@linux-mips.org>
 M:	Ralf Baechle <ralf@linux-mips.org>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 Q:	http://patchwork.linux-mips.org/project/linux-mips/list/
 S:	Maintained
 F:	drivers/tc/
···

 USB HID/HIDBP DRIVERS (USB KEYBOARDS, MICE, REMOTE CONTROLS, ...)
 M:	Jiri Kosina <jikos@kernel.org>
-R:	Benjamin Tissoires <benjamin.tissoires@redhat.com>
+M:	Benjamin Tissoires <benjamin.tissoires@redhat.com>
 L:	linux-usb@vger.kernel.org
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git
 S:	Maintained
 F:	Documentation/hid/hiddev.txt
 F:	drivers/hid/usbhid/
···

 VOCORE VOCORE2 BOARD
 M:	Harvey Hunt <harveyhuntnexus@gmail.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/boot/dts/ralink/vocore2.dts
+2 -2
Makefile
···
 VERSION = 4
 PATCHLEVEL = 20
 SUBLEVEL = 0
-EXTRAVERSION = -rc1
-NAME = "People's Front"
+EXTRAVERSION = -rc5
+NAME = Shy Crocodile

 # *DOCUMENTATION*
 # To see a list of typical targets execute "make help"
arch/arm/include/asm/pgtable-2level.h

···
 #ifndef _ASM_PGTABLE_2LEVEL_H
 #define _ASM_PGTABLE_2LEVEL_H

-#define __PAGETABLE_PMD_FOLDED
+#define __PAGETABLE_PMD_FOLDED 1

 /*
  * Hardware-wise, we have a two level page table structure, where the first
+49 -12
arch/arm/include/asm/proc-fns.h
···
 /*
  * Don't change this structure - ASM code relies on it.
  */
-extern struct processor {
+struct processor {
	/* MISC
	 * get data abort address/flags
	 */
···
	unsigned int suspend_size;
	void (*do_suspend)(void *);
	void (*do_resume)(void *);
-} processor;
+};

 #ifndef MULTI_CPU
+static inline void init_proc_vtable(const struct processor *p)
+{
+}
+
 extern void cpu_proc_init(void);
 extern void cpu_proc_fin(void);
 extern int cpu_do_idle(void);
···
 extern void cpu_do_suspend(void *);
 extern void cpu_do_resume(void *);
 #else
-#define cpu_proc_init			processor._proc_init
-#define cpu_proc_fin			processor._proc_fin
-#define cpu_reset			processor.reset
-#define cpu_do_idle			processor._do_idle
-#define cpu_dcache_clean_area		processor.dcache_clean_area
-#define cpu_set_pte_ext			processor.set_pte_ext
-#define cpu_do_switch_mm		processor.switch_mm

-/* These three are private to arch/arm/kernel/suspend.c */
-#define cpu_do_suspend			processor.do_suspend
-#define cpu_do_resume			processor.do_resume
+extern struct processor processor;
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+#include <linux/smp.h>
+/*
+ * This can't be a per-cpu variable because we need to access it before
+ * per-cpu has been initialised.  We have a couple of functions that are
+ * called in a pre-emptible context, and so can't use smp_processor_id()
+ * there, hence PROC_TABLE().  We insist in init_proc_vtable() that the
+ * function pointers for these are identical across all CPUs.
+ */
+extern struct processor *cpu_vtable[];
+#define PROC_VTABLE(f)			cpu_vtable[smp_processor_id()]->f
+#define PROC_TABLE(f)			cpu_vtable[0]->f
+static inline void init_proc_vtable(const struct processor *p)
+{
+	unsigned int cpu = smp_processor_id();
+	*cpu_vtable[cpu] = *p;
+	WARN_ON_ONCE(cpu_vtable[cpu]->dcache_clean_area !=
+		     cpu_vtable[0]->dcache_clean_area);
+	WARN_ON_ONCE(cpu_vtable[cpu]->set_pte_ext !=
+		     cpu_vtable[0]->set_pte_ext);
+}
+#else
+#define PROC_VTABLE(f)			processor.f
+#define PROC_TABLE(f)			processor.f
+static inline void init_proc_vtable(const struct processor *p)
+{
+	processor = *p;
+}
+#endif
+
+#define cpu_proc_init			PROC_VTABLE(_proc_init)
+#define cpu_check_bugs			PROC_VTABLE(check_bugs)
+#define cpu_proc_fin			PROC_VTABLE(_proc_fin)
+#define cpu_reset			PROC_VTABLE(reset)
+#define cpu_do_idle			PROC_VTABLE(_do_idle)
+#define cpu_dcache_clean_area		PROC_TABLE(dcache_clean_area)
+#define cpu_set_pte_ext			PROC_TABLE(set_pte_ext)
+#define cpu_do_switch_mm		PROC_VTABLE(switch_mm)
+
+/* These two are private to arch/arm/kernel/suspend.c */
+#define cpu_do_suspend			PROC_VTABLE(do_suspend)
+#define cpu_do_resume			PROC_VTABLE(do_resume)
 #endif

 extern void cpu_resume(void);
+2 -2
arch/arm/kernel/bugs.c
···
 void check_other_bugs(void)
 {
 #ifdef MULTI_CPU
-	if (processor.check_bugs)
-		processor.check_bugs();
+	if (cpu_check_bugs)
+		cpu_check_bugs();
 #endif
 }
+1 -16
arch/arm/kernel/ftrace.c
···
			   unsigned long frame_pointer)
 {
	unsigned long return_hooker = (unsigned long) &return_to_handler;
-	struct ftrace_graph_ent trace;
	unsigned long old;
-	int err;

	if (unlikely(atomic_read(&current->tracing_graph_pause)))
		return;
···
	old = *parent;
	*parent = return_hooker;

-	trace.func = self_addr;
-	trace.depth = current->curr_ret_stack + 1;
-
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace)) {
+	if (function_graph_enter(old, self_addr, frame_pointer, NULL))
		*parent = old;
-		return;
-	}
-
-	err = ftrace_push_return_trace(old, self_addr, &trace.depth,
-				       frame_pointer, NULL);
-	if (err == -EBUSY) {
-		*parent = old;
-		return;
-	}
 }

 #ifdef CONFIG_DYNAMIC_FTRACE
+3 -3
arch/arm/kernel/head-common.S
···
 #endif
	.size	__mmap_switched_data, . - __mmap_switched_data

+	__FINIT
+	.text
+
 /*
  * This provides a C-API version of __lookup_processor_type
  */
···
	mov	r0, r5
	ldmfd	sp!, {r4 - r6, r9, pc}
 ENDPROC(lookup_processor_type)
-
-	__FINIT
-	.text

 /*
  * Read processor ID register (CP#15, CR0), and look up in the linker-built
+27 -17
arch/arm/kernel/setup.c
···

 #ifdef MULTI_CPU
 struct processor processor __ro_after_init;
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+struct processor *cpu_vtable[NR_CPUS] = {
+	[0] = &processor,
+};
+#endif
 #endif
 #ifdef MULTI_TLB
 struct cpu_tlb_fns cpu_tlb __ro_after_init;
···
 }
 #endif

+/*
+ * locate processor in the list of supported processor types.  The linker
+ * builds this table for us from the entries in arch/arm/mm/proc-*.S
+ */
+struct proc_info_list *lookup_processor(u32 midr)
+{
+	struct proc_info_list *list = lookup_processor_type(midr);
+
+	if (!list) {
+		pr_err("CPU%u: configuration botched (ID %08x), CPU halted\n",
+		       smp_processor_id(), midr);
+		while (1)
+		/* can't use cpu_relax() here as it may require MMU setup */;
+	}
+
+	return list;
+}
+
 static void __init setup_processor(void)
 {
-	struct proc_info_list *list;
-
-	/*
-	 * locate processor in the list of supported processor
-	 * types.  The linker builds this table for us from the
-	 * entries in arch/arm/mm/proc-*.S
-	 */
-	list = lookup_processor_type(read_cpuid_id());
-	if (!list) {
-		pr_err("CPU configuration botched (ID %08x), unable to continue.\n",
-		       read_cpuid_id());
-		while (1);
-	}
+	unsigned int midr = read_cpuid_id();
+	struct proc_info_list *list = lookup_processor(midr);

	cpu_name = list->cpu_name;
	__cpu_architecture = __get_cpu_architecture();

-#ifdef MULTI_CPU
-	processor = *list->proc;
-#endif
+	init_proc_vtable(list->proc);
 #ifdef MULTI_TLB
	cpu_tlb = *list->tlb;
 #endif
···
 #endif

	pr_info("CPU: %s [%08x] revision %d (ARMv%s), cr=%08lx\n",
-		cpu_name, read_cpuid_id(), read_cpuid_id() & 15,
+		list->cpu_name, midr, midr & 15,
		proc_arch[cpu_architecture()], get_cr());

	snprintf(init_utsname()->machine, __NEW_UTS_LEN + 1, "%s%c",
+31
arch/arm/kernel/smp.c
···
 #include <asm/mmu_context.h>
 #include <asm/pgtable.h>
 #include <asm/pgalloc.h>
+#include <asm/procinfo.h>
 #include <asm/processor.h>
 #include <asm/sections.h>
 #include <asm/tlbflush.h>
···
 #endif
 }

+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+static int secondary_biglittle_prepare(unsigned int cpu)
+{
+	if (!cpu_vtable[cpu])
+		cpu_vtable[cpu] = kzalloc(sizeof(*cpu_vtable[cpu]), GFP_KERNEL);
+
+	return cpu_vtable[cpu] ? 0 : -ENOMEM;
+}
+
+static void secondary_biglittle_init(void)
+{
+	init_proc_vtable(lookup_processor(read_cpuid_id())->proc);
+}
+#else
+static int secondary_biglittle_prepare(unsigned int cpu)
+{
+	return 0;
+}
+
+static void secondary_biglittle_init(void)
+{
+}
+#endif
+
 int __cpu_up(unsigned int cpu, struct task_struct *idle)
 {
	int ret;

	if (!smp_ops.smp_boot_secondary)
		return -ENOSYS;
+
+	ret = secondary_biglittle_prepare(cpu);
+	if (ret)
+		return ret;

	/*
	 * We need to tell the secondary core where to find
···
 {
	struct mm_struct *mm = &init_mm;
	unsigned int cpu;
+
+	secondary_biglittle_init();

	/*
	 * The identity mapping is uncached (strongly ordered), so
@@
 
 	return 0;
 }
-#else
-static inline int omapdss_init_fbdev(void)
+
+static const char * const omapdss_compat_names[] __initconst = {
+	"ti,omap2-dss",
+	"ti,omap3-dss",
+	"ti,omap4-dss",
+	"ti,omap5-dss",
+	"ti,dra7-dss",
+};
+
+static struct device_node * __init omapdss_find_dss_of_node(void)
 {
-	return 0;
+	struct device_node *node;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(omapdss_compat_names); ++i) {
+		node = of_find_compatible_node(NULL, NULL,
+					       omapdss_compat_names[i]);
+		if (node)
+			return node;
+	}
+
+	return NULL;
 }
+
+static int __init omapdss_init_of(void)
+{
+	int r;
+	struct device_node *node;
+	struct platform_device *pdev;
+
+	/* only create dss helper devices if dss is enabled in the .dts */
+
+	node = omapdss_find_dss_of_node();
+	if (!node)
+		return 0;
+
+	if (!of_device_is_available(node))
+		return 0;
+
+	pdev = of_find_device_by_node(node);
+
+	if (!pdev) {
+		pr_err("Unable to find DSS platform device\n");
+		return -ENODEV;
+	}
+
+	r = of_platform_populate(node, NULL, NULL, &pdev->dev);
+	if (r) {
+		pr_err("Unable to populate DSS submodule devices\n");
+		return r;
+	}
+
+	return omapdss_init_fbdev();
+}
+omap_device_initcall(omapdss_init_of);
 #endif /* CONFIG_FB_OMAP2 */
 
 static void dispc_disable_outputs(void)
@@
 
 	return r;
 }
-
-static const char * const omapdss_compat_names[] __initconst = {
-	"ti,omap2-dss",
-	"ti,omap3-dss",
-	"ti,omap4-dss",
-	"ti,omap5-dss",
-	"ti,dra7-dss",
-};
-
-static struct device_node * __init omapdss_find_dss_of_node(void)
-{
-	struct device_node *node;
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(omapdss_compat_names); ++i) {
-		node = of_find_compatible_node(NULL, NULL,
-					       omapdss_compat_names[i]);
-		if (node)
-			return node;
-	}
-
-	return NULL;
-}
-
-static int __init omapdss_init_of(void)
-{
-	int r;
-	struct device_node *node;
-	struct platform_device *pdev;
-
-	/* only create dss helper devices if dss is enabled in the .dts */
-
-	node = omapdss_find_dss_of_node();
-	if (!node)
-		return 0;
-
-	if (!of_device_is_available(node))
-		return 0;
-
-	pdev = of_find_device_by_node(node);
-
-	if (!pdev) {
-		pr_err("Unable to find DSS platform device\n");
-		return -ENODEV;
-	}
-
-	r = of_platform_populate(node, NULL, NULL, &pdev->dev);
-	if (r) {
-		pr_err("Unable to populate DSS submodule devices\n");
-		return r;
-	}
-
-	return omapdss_init_fbdev();
-}
-omap_device_initcall(omapdss_init_of);
+1-1
arch/arm/mach-omap2/prm44xx.c
@@
  * to occur, WAKEUPENABLE bits must be set in the pad mux registers, and
  * omap44xx_prm_reconfigure_io_chain() must be called.  No return value.
  */
-static void __init omap44xx_prm_enable_io_wakeup(void)
+static void omap44xx_prm_enable_io_wakeup(void)
 {
 	s32 inst = omap4_prmst_get_prm_dev_inst();
 
+2-15
arch/arm/mm/proc-v7-bugs.c
@@
 	case ARM_CPU_PART_CORTEX_A17:
 	case ARM_CPU_PART_CORTEX_A73:
 	case ARM_CPU_PART_CORTEX_A75:
-		if (processor.switch_mm != cpu_v7_bpiall_switch_mm)
-			goto bl_error;
 		per_cpu(harden_branch_predictor_fn, cpu) =
 			harden_branch_predictor_bpiall;
 		spectre_v2_method = "BPIALL";
@@
 
 	case ARM_CPU_PART_CORTEX_A15:
 	case ARM_CPU_PART_BRAHMA_B15:
-		if (processor.switch_mm != cpu_v7_iciallu_switch_mm)
-			goto bl_error;
 		per_cpu(harden_branch_predictor_fn, cpu) =
 			harden_branch_predictor_iciallu;
 		spectre_v2_method = "ICIALLU";
@@
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 != 0)
 			break;
-		if (processor.switch_mm != cpu_v7_hvc_switch_mm && cpu)
-			goto bl_error;
 		per_cpu(harden_branch_predictor_fn, cpu) =
 			call_hvc_arch_workaround_1;
-		processor.switch_mm = cpu_v7_hvc_switch_mm;
+		cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
 		spectre_v2_method = "hypervisor";
 		break;
 
@@
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 != 0)
 			break;
-		if (processor.switch_mm != cpu_v7_smc_switch_mm && cpu)
-			goto bl_error;
 		per_cpu(harden_branch_predictor_fn, cpu) =
 			call_smc_arch_workaround_1;
-		processor.switch_mm = cpu_v7_smc_switch_mm;
+		cpu_do_switch_mm = cpu_v7_smc_switch_mm;
 		spectre_v2_method = "firmware";
 		break;
 
@@
 	if (spectre_v2_method)
 		pr_info("CPU%u: Spectre v2: using %s workaround\n",
 			smp_processor_id(), spectre_v2_method);
-	return;
-
-bl_error:
-	pr_err("CPU%u: Spectre v2: incorrect context switching function, system vulnerable\n",
-	       cpu);
 }
 #else
 static void cpu_v7_spectre_init(void)
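A note on why the bl_error paths could go: once each core resolves cpu_do_switch_mm through its own per-CPU vtable (see the setup.c and smp.c hunks above), installing the hvc or smc variant no longer risks clobbering the switch_mm pointer that a differently-affected core is using, so the cross-checks against the expected function are redundant.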
@@
 
 	  If unsure, say Y.
 
+config ARM64_ERRATUM_1286807
+	bool "Cortex-A76: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
+	default y
+	select ARM64_WORKAROUND_REPEAT_TLBI
+	help
+	  This option adds workaround for ARM Cortex-A76 erratum 1286807
+
+	  On the affected Cortex-A76 cores (r0p0 to r3p0), if a virtual
+	  address for a cacheable mapping of a location is being
+	  accessed by a core while another core is remapping the virtual
+	  address to a new physical page using the recommended
+	  break-before-make sequence, then under very rare circumstances
+	  TLBI+DSB completes before a read using the translation being
+	  invalidated has been observed by other observers. The
+	  workaround repeats the TLBI+DSB operation.
+
+	  If unsure, say Y.
+
 config CAVIUM_ERRATUM_22375
 	bool "Cavium erratum 22375, 24313"
 	default y
@@
 	  is unchanged. Work around the erratum by invalidating the walk cache
 	  entries for the trampoline before entering the kernel proper.
 
+config ARM64_WORKAROUND_REPEAT_TLBI
+	bool
+	help
+	  Enable the repeat TLBI workaround for Falkor erratum 1009 and
+	  Cortex-A76 erratum 1286807.
+
 config QCOM_FALKOR_ERRATUM_1009
 	bool "Falkor E1009: Prematurely complete a DSB after a TLBI"
 	default y
+	select ARM64_WORKAROUND_REPEAT_TLBI
 	help
 	  On Falkor v1, the CPU may prematurely complete a DSB following a
 	  TLBI xxIS invalidate maintenance operation. Repeat the TLBI operation
@@
 {
 	return is_compat_task();
 }
+
+#define ARCH_HAS_SYSCALL_MATCH_SYM_NAME
+
+static inline bool arch_syscall_match_sym_name(const char *sym,
+					       const char *name)
+{
+	/*
+	 * Since all syscall functions have __arm64_ prefix, we must skip it.
+	 * However, as we described above, we decided to ignore compat
+	 * syscalls, so we don't care about __arm64_compat_ prefix here.
+	 */
+	return !strcmp(sym + 8, name);
+}
 #endif /* ifndef __ASSEMBLY__ */
 
 #endif /* __ASM_FTRACE_H */
+8
arch/arm64/include/asm/processor.h
@@
 #define KERNEL_DS	UL(-1)
 #define USER_DS		(TASK_SIZE_64 - 1)
 
+/*
+ * On arm64 systems, unaligned accesses by the CPU are cheap, and so there is
+ * no point in shifting all network buffers by 2 bytes just to make some IP
+ * header fields appear aligned in memory, potentially sacrificing some DMA
+ * performance on some platforms.
+ */
+#define NET_IP_ALIGN	0
+
 #ifndef __ASSEMBLY__
 #ifdef __KERNEL__
 
@@
 {
 	unsigned long return_hooker = (unsigned long)&return_to_handler;
 	unsigned long old;
-	struct ftrace_graph_ent trace;
-	int err;
 
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return;
@@
 	 */
 	old = *parent;
 
-	trace.func = self_addr;
-	trace.depth = current->curr_ret_stack + 1;
-
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace))
-		return;
-
-	err = ftrace_push_return_trace(old, self_addr, &trace.depth,
-				       frame_pointer, NULL);
-	if (err == -EBUSY)
-		return;
-	else
+	if (!function_graph_enter(old, self_addr, frame_pointer, NULL))
 		*parent = return_hooker;
 }
 
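This is the first of many identical conversions in this series (mips, nds32, parisc, powerpc, riscv, s390, sh and sparc all follow below). As a hedged sketch of what the new helper centralizes, here is the open-coded sequence the architectures used to duplicate, folded into one function; this is reconstructed from the removed code above, not copied from the core tracing implementation, which may differ in its error paths and curr_ret_stack bookkeeping:

/* Sketch only: pieced together from the duplicated per-arch code this
 * series removes. Returns 0 when the shadow return stack entry was
 * pushed and the caller may redirect the return address to the
 * return_to_handler trampoline. */
int function_graph_enter(unsigned long ret, unsigned long func,
			 unsigned long frame_pointer, unsigned long *retp)
{
	struct ftrace_graph_ent trace;

	trace.func = func;
	trace.depth = current->curr_ret_stack + 1;

	/* Only trace if the calling function expects to */
	if (!ftrace_graph_entry(&trace))
		return -EBUSY;

	return ftrace_push_return_trace(ret, func, &trace.depth,
					frame_pointer, retp);
}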
@@
  *	  >0 - successfully JITed a 16-byte eBPF instruction.
  *	  <0 - failed to JIT.
  */
-static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
+		      bool extra_pass)
 {
 	const u8 code = insn->code;
 	const u8 dst = bpf2a64[insn->dst_reg];
@@
 	case BPF_JMP | BPF_CALL:
 	{
 		const u8 r0 = bpf2a64[BPF_REG_0];
-		const u64 func = (u64)__bpf_call_base + imm;
+		bool func_addr_fixed;
+		u64 func_addr;
+		int ret;
 
-		if (ctx->prog->is_func)
-			emit_addr_mov_i64(tmp, func, ctx);
+		ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
+					    &func_addr, &func_addr_fixed);
+		if (ret < 0)
+			return ret;
+		if (func_addr_fixed)
+			/* We can use optimized emission here. */
+			emit_a64_mov_i64(tmp, func_addr, ctx);
 		else
-			emit_a64_mov_i64(tmp, func, ctx);
+			emit_addr_mov_i64(tmp, func_addr, ctx);
 		emit(A64_BLR(tmp), ctx);
 		emit(A64_MOV(1, r0, A64_R(0)), ctx);
 		break;
@@
 	return 0;
 }
 
-static int build_body(struct jit_ctx *ctx)
+static int build_body(struct jit_ctx *ctx, bool extra_pass)
 {
 	const struct bpf_prog *prog = ctx->prog;
 	int i;
@@
 		const struct bpf_insn *insn = &prog->insnsi[i];
 		int ret;
 
-		ret = build_insn(insn, ctx);
+		ret = build_insn(insn, ctx, extra_pass);
 		if (ret > 0) {
 			i++;
 			if (ctx->image == NULL)
@@
 	/* 1. Initial fake pass to compute ctx->idx. */
 
 	/* Fake pass to fill in ctx->offset. */
-	if (build_body(&ctx)) {
+	if (build_body(&ctx, extra_pass)) {
 		prog = orig_prog;
 		goto out_off;
 	}
@@
 
 	build_prologue(&ctx, was_classic);
 
-	if (build_body(&ctx)) {
+	if (build_body(&ctx, extra_pass)) {
 		bpf_jit_binary_free(header);
 		prog = orig_prog;
 		goto out_off;
+3-1
arch/ia64/include/asm/numa.h
@@
  */
 
 extern u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES];
-#define node_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)])
+#define slit_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)])
+extern int __node_distance(int from, int to);
+#define node_distance(from,to) __node_distance(from, to)
 
 extern int paddr_to_nid(unsigned long paddr);
 
+3-3
arch/ia64/kernel/acpi.c
@@
 	if (!slit_table) {
 		for (i = 0; i < MAX_NUMNODES; i++)
 			for (j = 0; j < MAX_NUMNODES; j++)
-				node_distance(i, j) = i == j ? LOCAL_DISTANCE :
-							REMOTE_DISTANCE;
+				slit_distance(i, j) = i == j ?
+					LOCAL_DISTANCE : REMOTE_DISTANCE;
 		return;
 	}
 
@@
 			if (!pxm_bit_test(j))
 				continue;
 			node_to = pxm_to_node(j);
-			node_distance(node_from, node_to) =
+			slit_distance(node_from, node_to) =
 				slit_table->entry[i * slit_table->locality_count + j];
 		}
 	}
+6
arch/ia64/mm/numa.c
@@
  */
 u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES];
 
+int __node_distance(int from, int to)
+{
+	return slit_distance(from, to);
+}
+EXPORT_SYMBOL(__node_distance);
+
 /* Identify which cnode a physical address resides on */
 int
 paddr_to_nid(unsigned long paddr)
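Since the point of the export is to make node_distance() usable from modular code again, a hypothetical out-of-tree fragment illustrates the effect; the distdemo module is made up for illustration:

/* Hypothetical module fragment: node_distance() now expands to the
 * exported __node_distance(), so this links as a module on ia64. */
#include <linux/module.h>
#include <linux/topology.h>

static int __init distdemo_init(void)
{
	pr_info("node_distance(0, 0) = %d\n", node_distance(0, 0));
	return 0;
}
module_init(distdemo_init);
MODULE_LICENSE("GPL");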
@@
 void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
 {
 	unsigned long old;
-	int faulted, err;
-	struct ftrace_graph_ent trace;
+	int faulted;
 	unsigned long return_hooker = (unsigned long)
 				&return_to_handler;
 
@@
 		return;
 	}
 
-	err = ftrace_push_return_trace(old, self_addr, &trace.depth, 0, NULL);
-	if (err == -EBUSY) {
+	if (function_graph_enter(old, self_addr, 0, NULL))
 		*parent = old;
-		return;
-	}
-
-	trace.func = self_addr;
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace)) {
-		current->curr_ret_stack--;
-		*parent = old;
-	}
 }
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 
+1-1
arch/mips/cavium-octeon/executive/cvmx-helper.c
@@
 void (*cvmx_override_ipd_port_setup) (int ipd_port);
 
 /* Port count per interface */
-static int interface_port_count[5];
+static int interface_port_count[9];
 
 /**
  * Return the number of interfaces the chip has. Each interface
+1
arch/mips/configs/cavium_octeon_defconfig
@@
 CONFIG_RTC_DRV_DS1307=y
 CONFIG_STAGING=y
 CONFIG_OCTEON_ETHERNET=y
+CONFIG_OCTEON_USB=y
 # CONFIG_IOMMU_SUPPORT is not set
 CONFIG_RAS=y
 CONFIG_EXT4_FS=y
+1-1
arch/mips/include/asm/syscall.h
@@
 #ifdef CONFIG_64BIT
 	case 4: case 5: case 6: case 7:
 #ifdef CONFIG_MIPS32_O32
-		if (test_thread_flag(TIF_32BIT_REGS))
+		if (test_tsk_thread_flag(task, TIF_32BIT_REGS))
 			return get_user(*arg, (int *)usp + n);
 		else
 #endif
+2-12
arch/mips/kernel/ftrace.c
@@
 			   unsigned long fp)
 {
 	unsigned long old_parent_ra;
-	struct ftrace_graph_ent trace;
 	unsigned long return_hooker = (unsigned long)
 	    &return_to_handler;
 	int faulted, insns;
@@
 	if (unlikely(faulted))
 		goto out;
 
-	if (ftrace_push_return_trace(old_parent_ra, self_ra, &trace.depth, fp,
-				     NULL) == -EBUSY) {
-		*parent_ra_addr = old_parent_ra;
-		return;
-	}
-
 	/*
 	 * Get the recorded ip of the current mcount calling site in the
 	 * __mcount_loc section, which will be used to filter the function
@@
 	 */
 
 	insns = core_kernel_text(self_ra) ? 2 : MCOUNT_OFFSET_INSNS + 1;
-	trace.func = self_ra - (MCOUNT_INSN_SIZE * insns);
+	self_ra -= (MCOUNT_INSN_SIZE * insns);
 
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace)) {
-		current->curr_ret_stack--;
+	if (function_graph_enter(old_parent_ra, self_ra, fp, NULL))
 		*parent_ra_addr = old_parent_ra;
-	}
 	return;
 out:
 	ftrace_graph_stop();
+1
arch/mips/kernel/setup.c
@@
 
 	/* call board setup routine */
 	plat_mem_setup();
+	memblock_set_bottom_up(true);
 
 	/*
 	 * Make sure all kernel memory is in the maps.  The "UP" and
+1-2
arch/mips/kernel/traps.c
@@
 	unsigned long size = 0x200 + VECTORSPACING*64;
 	phys_addr_t ebase_pa;
 
-	memblock_set_bottom_up(true);
 	ebase = (unsigned long)
 		memblock_alloc_from(size, 1 << fls(size), 0);
-	memblock_set_bottom_up(false);
 
 	/*
 	 * Try to ensure ebase resides in KSeg0 if possible.
@@
 	if (board_ebase_setup)
 		board_ebase_setup();
 	per_cpu_trap_init(true);
+	memblock_set_bottom_up(false);
 
 	/*
 	 * Copy the generic exception handlers to their final destination.
+2-10
arch/mips/loongson64/loongson-3/numa.c
@@
 			cpumask_clear(&__node_data[(node)]->cpumask);
 		}
 	}
+	max_low_pfn = PHYS_PFN(memblock_end_of_DRAM());
+
 	for (cpu = 0; cpu < loongson_sysconf.nr_cpus; cpu++) {
 		node = cpu / loongson_sysconf.cores_per_node;
 		if (node >= num_online_nodes())
@@
 
 void __init paging_init(void)
 {
-	unsigned node;
 	unsigned long zones_size[MAX_NR_ZONES] = {0, };
 
 	pagetable_init();
-
-	for_each_online_node(node) {
-		unsigned long  start_pfn, end_pfn;
-
-		get_pfn_range_for_nid(node, &start_pfn, &end_pfn);
-
-		if (end_pfn > max_low_pfn)
-			max_low_pfn = end_pfn;
-	}
 #ifdef CONFIG_ZONE_DMA32
 	zones_size[ZONE_DMA32] = MAX_DMA32_PFN;
 #endif
+1-1
arch/mips/mm/dma-noncoherent.c
@@
 	void *ret;
 
 	ret = dma_direct_alloc_pages(dev, size, dma_handle, gfp, attrs);
-	if (!ret && !(attrs & DMA_ATTR_NON_CONSISTENT)) {
+	if (ret && !(attrs & DMA_ATTR_NON_CONSISTENT)) {
 		dma_cache_wback_inv((unsigned long) ret, size);
 		ret = (void *)UNCAC_ADDR(ret);
 	}
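Worth spelling out: before this one-character fix, the cache writeback and the UNCAC_ADDR remap ran only when dma_direct_alloc_pages() had returned NULL, i.e. on the failure path, while successful allocations were handed out still cacheable. The corrected test performs the maintenance exactly when there is a buffer to maintain.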
@@
 			   unsigned long frame_pointer)
 {
 	unsigned long return_hooker = (unsigned long)&return_to_handler;
-	struct ftrace_graph_ent trace;
 	unsigned long old;
-	int err;
 
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return;
 
 	old = *parent;
 
-	trace.func = self_addr;
-	trace.depth = current->curr_ret_stack + 1;
-
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace))
-		return;
-
-	err = ftrace_push_return_trace(old, self_addr, &trace.depth,
-				       frame_pointer, NULL);
-
-	if (err == -EBUSY)
-		return;
-
-	*parent = return_hooker;
+	if (!function_graph_enter(old, self_addr, frame_pointer, NULL))
+		*parent = return_hooker;
 }
 
 noinline void ftrace_graph_caller(void)
@@
 	volatile unsigned int *a;
 
 	a = __ldcw_align(x);
-	/* Release with ordered store. */
-	__asm__ __volatile__("stw,ma %0,0(%1)" : : "r"(1), "r"(a) : "memory");
+	mb();
+	*a = 1;
 }
 
 static inline int arch_spin_trylock(arch_spinlock_t *x)
+3-14
arch/parisc/kernel/ftrace.c
@@
 				unsigned long self_addr)
 {
 	unsigned long old;
-	struct ftrace_graph_ent trace;
 	extern int parisc_return_to_handler;
 
 	if (unlikely(ftrace_graph_is_dead()))
@@
 
 	old = *parent;
 
-	trace.func = self_addr;
-	trace.depth = current->curr_ret_stack + 1;
-
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace))
-		return;
-
-	if (ftrace_push_return_trace(old, self_addr, &trace.depth,
-				     0, NULL) == -EBUSY)
-		return;
-
-	/* activate parisc_return_to_handler() as return point */
-	*parent = (unsigned long) &parisc_return_to_handler;
+	if (!function_graph_enter(old, self_addr, 0, NULL))
+		/* activate parisc_return_to_handler() as return point */
+		*parent = (unsigned long) &parisc_return_to_handler;
 }
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 
@@
  * their hooks, a bitfield is reserved for use by the platform near the
  * top of MMIO addresses (not PIO, those have to cope the hard way).
  *
- * This bit field is 12 bits and is at the top of the IO virtual
- * addresses PCI_IO_INDIRECT_TOKEN_MASK.
- *
- * The kernel virtual space is thus:
- *
- *  0xD000000000000000		: vmalloc
- *  0xD000080000000000		: PCI PHB IO space
- *  0xD000080080000000		: ioremap
- *  0xD0000fffffffffff		: end of ioremap region
+ * The highest address in the kernel virtual space are:
  *
- * Since the top 4 bits are reserved as the region ID, we use thus
- * the next 12 bits and keep 4 bits available for the future if the
- * virtual address space is ever to be extended.
+ *  d0003fffffffffff		# with Hash MMU
+ *  c00fffffffffffff		# with Radix MMU
+ *
+ * The top 4 bits are reserved as the region ID on hash, leaving us 8 bits
+ * that can be used for the field.
  *
  * The direct IO mapping operations will then mask off those bits
  * before doing the actual access, though that only happen when
@@
  */
 
 #ifdef CONFIG_PPC_INDIRECT_MMIO
-#define PCI_IO_IND_TOKEN_MASK	0x0fff000000000000ul
-#define PCI_IO_IND_TOKEN_SHIFT	48
+#define PCI_IO_IND_TOKEN_SHIFT	52
+#define PCI_IO_IND_TOKEN_MASK	(0xfful << PCI_IO_IND_TOKEN_SHIFT)
 #define PCI_FIX_ADDR(addr) \
 	((PCI_IO_ADDR)(((unsigned long)(addr)) & ~PCI_IO_IND_TOKEN_MASK))
 #define PCI_GET_ADDR_TOKEN(addr) \
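For reference, the arithmetic behind the new mask: the old 0x0fff000000000000 claimed virtual-address bits 48-59, while (0xff << 52) works out to 0x0ff0000000000000, bits 52-59. That shrinks the token field from 12 bits to 8 but keeps it clear of the hash MMU's top-nibble region ID, matching the comment's point that only 8 bits are safely available under the new address layout.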
@@
 
 #ifdef CONFIG_PPC64
 	unsigned long ppr;
+	unsigned long __pad;	/* Maintain 16 byte interrupt stack alignment */
 #endif
 };
 #endif
+2
arch/powerpc/kernel/setup_64.c
@@
 {
 	unsigned long pa;
 
+	BUILD_BUG_ON(STACK_INT_FRAME_SIZE % 16);
+
 	pa = memblock_alloc_base_nid(THREAD_SIZE, THREAD_SIZE, limit,
 					early_cpu_to_node(cpu), MEMBLOCK_NONE);
 	if (!pa) {
+2-13
arch/powerpc/kernel/trace/ftrace.c
@@
  */
 unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip)
 {
-	struct ftrace_graph_ent trace;
 	unsigned long return_hooker;
 
 	if (unlikely(ftrace_graph_is_dead()))
@@
 
 	return_hooker = ppc_function_entry(return_to_handler);
 
-	trace.func = ip;
-	trace.depth = current->curr_ret_stack + 1;
-
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace))
-		goto out;
-
-	if (ftrace_push_return_trace(parent, ip, &trace.depth, 0,
-				     NULL) == -EBUSY)
-		goto out;
-
-	parent = return_hooker;
+	if (!function_graph_enter(parent, ip, 0, NULL))
+		parent = return_hooker;
 out:
 	return parent;
 }
+1
arch/powerpc/kvm/book3s_hv.c
@@
 		ret = kvmhv_enter_nested_guest(vcpu);
 		if (ret == H_INTERRUPT) {
 			kvmppc_set_gpr(vcpu, 3, 0);
+			vcpu->arch.hcall_needed = 0;
 			return -EINTR;
 		}
 		break;
+6-2
arch/powerpc/kvm/trace.h
@@
 #undef TRACE_SYSTEM
 #define TRACE_SYSTEM kvm
-#define TRACE_INCLUDE_PATH .
-#define TRACE_INCLUDE_FILE trace
 
 /*
  * Tracepoint for guest mode entry.
@@
 #endif /* _TRACE_KVM_H */
 
 /* This part must be outside protection */
+#undef TRACE_INCLUDE_PATH
+#undef TRACE_INCLUDE_FILE
+
+#define TRACE_INCLUDE_PATH .
+#define TRACE_INCLUDE_FILE trace
+
 #include <trace/define_trace.h>
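The same fix repeats in trace_booke.h, trace_hv.h and trace_pr.h below. A hedged illustration of the failure mode, not code from the tree: trace/define_trace.h consumes TRACE_INCLUDE_PATH and TRACE_INCLUDE_FILE at the moment it is included, so each trace header must (re)define them immediately before that include and must not let stale values leak to whichever trace header a C file includes next:

/* Hypothetical translation unit, for illustration only. */
#include "trace.h"	/* used to leave TRACE_INCLUDE_FILE == trace defined */
#include "trace_hv.h"	/* its own #define of the same macros then collided
			 * with the leftovers, producing redefinition warnings
			 * and risking the wrong values reaching
			 * define_trace.h */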
+7-2
arch/powerpc/kvm/trace_booke.h
@@
 #undef TRACE_SYSTEM
 #define TRACE_SYSTEM kvm_booke
-#define TRACE_INCLUDE_PATH .
-#define TRACE_INCLUDE_FILE trace_booke
 
 #define kvm_trace_symbol_exit \
 	{0, "CRITICAL"}, \
@@
 #endif
 
 /* This part must be outside protection */
+
+#undef TRACE_INCLUDE_PATH
+#undef TRACE_INCLUDE_FILE
+
+#define TRACE_INCLUDE_PATH .
+#define TRACE_INCLUDE_FILE trace_booke
+
 #include <trace/define_trace.h>
+7-2
arch/powerpc/kvm/trace_hv.h
@@
 
 #undef TRACE_SYSTEM
 #define TRACE_SYSTEM kvm_hv
-#define TRACE_INCLUDE_PATH .
-#define TRACE_INCLUDE_FILE trace_hv
 
 #define kvm_trace_symbol_hcall \
 	{H_REMOVE, "H_REMOVE"}, \
@@
 #endif /* _TRACE_KVM_HV_H */
 
 /* This part must be outside protection */
+
+#undef TRACE_INCLUDE_PATH
+#undef TRACE_INCLUDE_FILE
+
+#define TRACE_INCLUDE_PATH .
+#define TRACE_INCLUDE_FILE trace_hv
+
 #include <trace/define_trace.h>
+7-2
arch/powerpc/kvm/trace_pr.h
@@
 
 #undef TRACE_SYSTEM
 #define TRACE_SYSTEM kvm_pr
-#define TRACE_INCLUDE_PATH .
-#define TRACE_INCLUDE_FILE trace_pr
 
 TRACE_EVENT(kvm_book3s_reenter,
 	TP_PROTO(int r, struct kvm_vcpu *vcpu),
@@
 #endif /* _TRACE_KVM_H */
 
 /* This part must be outside protection */
+
+#undef TRACE_INCLUDE_PATH
+#undef TRACE_INCLUDE_FILE
+
+#define TRACE_INCLUDE_PATH .
+#define TRACE_INCLUDE_FILE trace_pr
+
 #include <trace/define_trace.h>
@@
 #include <asm/mmu.h>
 #include <asm/mmu_context.h>
 #include <asm/paca.h>
+#include <asm/ppc-opcode.h>
 #include <asm/cputable.h>
 #include <asm/cacheflush.h>
 #include <asm/smp.h>
@@
 	return __mk_vsid_data(get_kernel_vsid(ea, ssize), ssize, flags);
 }
 
-static void assert_slb_exists(unsigned long ea)
+static void assert_slb_presence(bool present, unsigned long ea)
 {
 #ifdef CONFIG_DEBUG_VM
 	unsigned long tmp;
 
 	WARN_ON_ONCE(mfmsr() & MSR_EE);
 
-	asm volatile("slbfee. %0, %1" : "=r"(tmp) : "r"(ea) : "cr0");
-	WARN_ON(tmp == 0);
-#endif
-}
-
-static void assert_slb_notexists(unsigned long ea)
-{
-#ifdef CONFIG_DEBUG_VM
-	unsigned long tmp;
-
-	WARN_ON_ONCE(mfmsr() & MSR_EE);
+	if (!cpu_has_feature(CPU_FTR_ARCH_206))
+		return;
 
-	asm volatile("slbfee. %0, %1" : "=r"(tmp) : "r"(ea) : "cr0");
-	WARN_ON(tmp != 0);
+	asm volatile(__PPC_SLBFEE_DOT(%0, %1) : "=r"(tmp) : "r"(ea) : "cr0");
+
+	WARN_ON(present == (tmp == 0));
 #endif
 }
 
@@
 	 */
 	slb_shadow_update(ea, ssize, flags, index);
 
-	assert_slb_notexists(ea);
+	assert_slb_presence(false, ea);
 	asm volatile("slbmte  %0,%1" :
 		     : "r" (mk_vsid_data(ea, ssize, flags)),
 		       "r" (mk_esid_data(ea, ssize, index))
@@
 			     "r" (be64_to_cpu(p->save_area[index].esid)));
 	}
 
-	assert_slb_exists(local_paca->kstack);
+	assert_slb_presence(true, local_paca->kstack);
 }
 
 /*
@@
 		     :: "r" (be64_to_cpu(p->save_area[KSTACK_INDEX].vsid)),
 			"r" (be64_to_cpu(p->save_area[KSTACK_INDEX].esid))
 		     : "memory");
-	assert_slb_exists(get_paca()->kstack);
+	assert_slb_presence(true, get_paca()->kstack);
 
 	get_paca()->slb_cache_ptr = 0;
 
@@
 			ea = (unsigned long)
 				get_paca()->slb_cache[i] << SID_SHIFT;
 			/*
-			 * Could assert_slb_exists here, but hypervisor
-			 * or machine check could have come in and
-			 * removed the entry at this point.
+			 * Could assert_slb_presence(true) here, but
+			 * hypervisor or machine check could have come
+			 * in and removed the entry at this point.
 			 */
 
 			slbie_data = ea;
@@
 	 * User preloads should add isync afterwards in case the kernel
 	 * accesses user memory before it returns to userspace with rfid.
 	 */
-	assert_slb_notexists(ea);
+	assert_slb_presence(false, ea);
 	asm volatile("slbmte %0, %1" : : "r" (vsid_data), "r" (esid_data));
 
 	barrier();
@@
 		return -EFAULT;
 
 	if (ea < H_VMALLOC_END)
-		flags = get_paca()->vmalloc_sllp;
+		flags = local_paca->vmalloc_sllp;
 	else
 		flags = SLB_VSID_KERNEL | mmu_psize_defs[mmu_io_psize].sllp;
 	} else {
+38-19
arch/powerpc/net/bpf_jit_comp64.c
@@
 	PPC_BLR();
 }
 
-static void bpf_jit_emit_func_call(u32 *image, struct codegen_context *ctx, u64 func)
+static void bpf_jit_emit_func_call_hlp(u32 *image, struct codegen_context *ctx,
+				       u64 func)
+{
+#ifdef PPC64_ELF_ABI_v1
+	/* func points to the function descriptor */
+	PPC_LI64(b2p[TMP_REG_2], func);
+	/* Load actual entry point from function descriptor */
+	PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_2], 0);
+	/* ... and move it to LR */
+	PPC_MTLR(b2p[TMP_REG_1]);
+	/*
+	 * Load TOC from function descriptor at offset 8.
+	 * We can clobber r2 since we get called through a
+	 * function pointer (so caller will save/restore r2)
+	 * and since we don't use a TOC ourself.
+	 */
+	PPC_BPF_LL(2, b2p[TMP_REG_2], 8);
+#else
+	/* We can clobber r12 */
+	PPC_FUNC_ADDR(12, func);
+	PPC_MTLR(12);
+#endif
+	PPC_BLRL();
+}
+
+static void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx,
+				       u64 func)
 {
 	unsigned int i, ctx_idx = ctx->idx;
 
@@
 {
 	const struct bpf_insn *insn = fp->insnsi;
 	int flen = fp->len;
-	int i;
+	int i, ret;
 
 	/* Start of epilogue code - will only be valid 2nd pass onwards */
 	u32 exit_addr = addrs[flen];
@@
 		u32 src_reg = b2p[insn[i].src_reg];
 		s16 off = insn[i].off;
 		s32 imm = insn[i].imm;
+		bool func_addr_fixed;
+		u64 func_addr;
 		u64 imm64;
-		u8 *func;
 		u32 true_cond;
 		u32 tmp_idx;
 
@@
 		case BPF_JMP | BPF_CALL:
 			ctx->seen |= SEEN_FUNC;
 
-			/* bpf function call */
-			if (insn[i].src_reg == BPF_PSEUDO_CALL)
-				if (!extra_pass)
-					func = NULL;
-				else if (fp->aux->func && off < fp->aux->func_cnt)
-					/* use the subprog id from the off
-					 * field to lookup the callee address
-					 */
-					func = (u8 *) fp->aux->func[off]->bpf_func;
-				else
-					return -EINVAL;
-			/* kernel helper call */
+			ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass,
+						    &func_addr, &func_addr_fixed);
+			if (ret < 0)
+				return ret;
+
+			if (func_addr_fixed)
+				bpf_jit_emit_func_call_hlp(image, ctx, func_addr);
 			else
-				func = (u8 *) __bpf_call_base + imm;
-
-			bpf_jit_emit_func_call(image, ctx, (u64)func);
-
+				bpf_jit_emit_func_call_rel(image, ctx, func_addr);
 			/* move return value from r3 to BPF_REG_0 */
 			PPC_MR(b2p[BPF_REG_0], 3);
 			break;
+4-60
arch/powerpc/platforms/powernv/npu-dma.c
@@
 }
 EXPORT_SYMBOL(pnv_pci_get_npu_dev);
 
-#define NPU_DMA_OP_UNSUPPORTED()	\
-	dev_err_once(dev, "%s operation unsupported for NVLink devices\n", \
-		__func__)
-
-static void *dma_npu_alloc(struct device *dev, size_t size,
-			   dma_addr_t *dma_handle, gfp_t flag,
-			   unsigned long attrs)
-{
-	NPU_DMA_OP_UNSUPPORTED();
-	return NULL;
-}
-
-static void dma_npu_free(struct device *dev, size_t size,
-			 void *vaddr, dma_addr_t dma_handle,
-			 unsigned long attrs)
-{
-	NPU_DMA_OP_UNSUPPORTED();
-}
-
-static dma_addr_t dma_npu_map_page(struct device *dev, struct page *page,
-				   unsigned long offset, size_t size,
-				   enum dma_data_direction direction,
-				   unsigned long attrs)
-{
-	NPU_DMA_OP_UNSUPPORTED();
-	return 0;
-}
-
-static int dma_npu_map_sg(struct device *dev, struct scatterlist *sglist,
-			  int nelems, enum dma_data_direction direction,
-			  unsigned long attrs)
-{
-	NPU_DMA_OP_UNSUPPORTED();
-	return 0;
-}
-
-static int dma_npu_dma_supported(struct device *dev, u64 mask)
-{
-	NPU_DMA_OP_UNSUPPORTED();
-	return 0;
-}
-
-static u64 dma_npu_get_required_mask(struct device *dev)
-{
-	NPU_DMA_OP_UNSUPPORTED();
-	return 0;
-}
-
-static const struct dma_map_ops dma_npu_ops = {
-	.map_page = dma_npu_map_page,
-	.map_sg = dma_npu_map_sg,
-	.alloc = dma_npu_alloc,
-	.free = dma_npu_free,
-	.dma_supported = dma_npu_dma_supported,
-	.get_required_mask = dma_npu_get_required_mask,
-};
-
 /*
  * Returns the PE assoicated with the PCI device of the given
  * NPU. Returns the linked pci device if pci_dev != NULL.
@@
 	rc = pnv_npu_set_window(npe, 0, gpe->table_group.tables[0]);
 
 	/*
-	 * We don't initialise npu_pe->tce32_table as we always use
-	 * dma_npu_ops which are nops.
+	 * NVLink devices use the same TCE table configuration as
+	 * their parent device so drivers shouldn't be doing DMA
+	 * operations directly on these devices.
 	 */
-	set_dma_ops(&npe->pdev->dev, &dma_npu_ops);
+	set_dma_ops(&npe->pdev->dev, NULL);
 }
 
 /*
+#
+# arch/riscv/boot/Makefile
+#
+# This file is included by the global makefile so that you can add your own
+# architecture-specific flags and dependencies.
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License.  See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2018, Anup Patel.
+# Author: Anup Patel <anup@brainfault.org>
+#
+# Based on the ia64 and arm64 boot/Makefile.
+#
+
+OBJCOPYFLAGS_Image :=-O binary -R .note -R .note.gnu.build-id -R .comment -S
+
+targets := Image
+
+$(obj)/Image: vmlinux FORCE
+	$(call if_changed,objcopy)
+
+$(obj)/Image.gz: $(obj)/Image FORCE
+	$(call if_changed,gzip)
+
+install:
+	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
+	$(obj)/Image System.map "$(INSTALL_PATH)"
+
+zinstall:
+	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
+	$(obj)/Image.gz System.map "$(INSTALL_PATH)"
+60
arch/riscv/boot/install.sh
+#!/bin/sh
+#
+# arch/riscv/boot/install.sh
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License.  See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 1995 by Linus Torvalds
+#
+# Adapted from code in arch/i386/boot/Makefile by H. Peter Anvin
+# Adapted from code in arch/i386/boot/install.sh by Russell King
+#
+# "make install" script for the RISC-V Linux port
+#
+# Arguments:
+#   $1 - kernel version
+#   $2 - kernel image file
+#   $3 - kernel map file
+#   $4 - default install path (blank if root directory)
+#
+
+verify () {
+	if [ ! -f "$1" ]; then
+		echo ""                                                   1>&2
+		echo " *** Missing file: $1"                              1>&2
+		echo ' *** You need to run "make" before "make install".' 1>&2
+		echo ""                                                   1>&2
+		exit 1
+	fi
+}
+
+# Make sure the files actually exist
+verify "$2"
+verify "$3"
+
+# User may have a custom install script
+if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
+if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
+
+if [ "$(basename $2)" = "Image.gz" ]; then
+# Compressed install
+  echo "Installing compressed kernel"
+  base=vmlinuz
+else
+# Normal install
+  echo "Installing normal kernel"
+  base=vmlinux
+fi
+
+if [ -f $4/$base-$1 ]; then
+	mv $4/$base-$1 $4/$base-$1.old
+fi
+cat $2 > $4/$base-$1
+
+# Install system map file
+if [ -f $4/System.map-$1 ]; then
+	mv $4/System.map-$1 $4/System.map-$1.old
+fi
+cp $3 $4/System.map-$1
+1
arch/riscv/configs/defconfig
@@
 CONFIG_NFS_V4_2=y
 CONFIG_ROOT_NFS=y
 CONFIG_CRYPTO_USER_API_HASH=y
+CONFIG_PRINTK_TIME=y
 # CONFIG_RCU_TRACE is not set
@@
 	unsigned long sstatus;
 	unsigned long sbadaddr;
 	unsigned long scause;
-        /* a0 value before the syscall */
-        unsigned long orig_a0;
+	/* a0 value before the syscall */
+	unsigned long orig_a0;
 };
 
 #ifdef CONFIG_64BIT
@@
 
 /*
  * There is explicitly no include guard here because this file is expected to
- * be included multiple times. See uapi/asm/syscalls.h for more info.
+ * be included multiple times.
  */
 
-#define __ARCH_WANT_NEW_STAT
 #define __ARCH_WANT_SYS_CLONE
+
 #include <uapi/asm/unistd.h>
-#include <uapi/asm/syscalls.h>
-/* SPDX-License-Identifier: GPL-2.0 */
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
 /*
- * Copyright (C) 2017-2018 SiFive
+ * Copyright (C) 2018 David Abdurachmanov <david.abdurachmanov@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
-/*
- * There is explicitly no include guard here because this file is expected to
- * be included multiple times in order to define the syscall macros via
- * __SYSCALL.
- */
+#ifdef __LP64__
+#define __ARCH_WANT_NEW_STAT
+#endif /* __LP64__ */
+
+#include <asm-generic/unistd.h>
 
 /*
  * Allows the instruction cache to be flushed from userspace.  Despite RISC-V
+6-3
arch/riscv/kernel/cpu.c
@@
 
 static void print_isa(struct seq_file *f, const char *orig_isa)
 {
-	static const char *ext = "mafdc";
+	static const char *ext = "mafdcsu";
 	const char *isa = orig_isa;
 	const char *e;
 
@@
 	/*
 	 * Check the rest of the ISA string for valid extensions, printing those
 	 * we find.  RISC-V ISA strings define an order, so we only print the
-	 * extension bits when they're in order.
+	 * extension bits when they're in order. Hide the supervisor (S)
+	 * extension from userspace as it's not accessible from there.
 	 */
 	for (e = ext; *e != '\0'; ++e) {
 		if (isa[0] == e[0]) {
-			seq_write(f, isa, 1);
+			if (isa[0] != 's')
+				seq_write(f, isa, 1);
+
 			isa++;
 		}
 	}
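Concretely: a device-tree ISA string such as rv64imafdcsu is now accepted by the in-order validator (the trailing s and u bits match the extended "mafdcsu" list), but the s is consumed without being echoed, so the /proc/cpuinfo isa line no longer advertises a supervisor extension that userspace cannot use.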
+2-12
arch/riscv/kernel/ftrace.c
@@
 {
 	unsigned long return_hooker = (unsigned long)&return_to_handler;
 	unsigned long old;
-	struct ftrace_graph_ent trace;
 	int err;
 
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
@@
 	 */
 	old = *parent;
 
-	trace.func = self_addr;
-	trace.depth = current->curr_ret_stack + 1;
-
-	if (!ftrace_graph_entry(&trace))
-		return;
-
-	err = ftrace_push_return_trace(old, self_addr, &trace.depth,
-				       frame_pointer, parent);
-	if (err == -EBUSY)
-		return;
-	*parent = return_hooker;
+	if (function_graph_enter(old, self_addr, frame_pointer, parent))
+		*parent = return_hooker;
 }
 
 #ifdef CONFIG_DYNAMIC_FTRACE
+10
arch/riscv/kernel/head.S
@@
 	amoadd.w a3, a2, (a3)
 	bnez a3, .Lsecondary_start
 
+	/* Clear BSS for flat non-ELF images */
+	la a3, __bss_start
+	la a4, __bss_stop
+	ble a4, a3, clear_bss_done
+clear_bss:
+	REG_S zero, (a3)
+	add a3, a3, RISCV_SZPTR
+	blt a3, a4, clear_bss
+clear_bss_done:
+
 	/* Save hart ID and DTB physical address */
 	mv s0, a0
 	mv s1, a1
+6-6
arch/riscv/kernel/module.c
@@
 {
 	if (v != (u32)v) {
 		pr_err("%s: value %016llx out of range for 32-bit field\n",
-		       me->name, v);
+		       me->name, (long long)v);
 		return -EINVAL;
 	}
 	*location = v;
@@
 	if (offset != (s32)offset) {
 		pr_err(
 		  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
-		  me->name, v, location);
+		  me->name, (long long)v, location);
 		return -EINVAL;
 	}
 
@@
 	if (IS_ENABLED(CMODEL_MEDLOW)) {
 		pr_err(
 		  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
-		  me->name, v, location);
+		  me->name, (long long)v, location);
 		return -EINVAL;
 	}
 
@@
 	} else {
 		pr_err(
 		  "%s: can not generate the GOT entry for symbol = %016llx from PC = %p\n",
-		  me->name, v, location);
+		  me->name, (long long)v, location);
 		return -EINVAL;
 	}
 
@@
 	} else {
 		pr_err(
 		  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
-		  me->name, v, location);
+		  me->name, (long long)v, location);
 		return -EINVAL;
 	}
 	}
@@
 	if (offset != fill_v) {
 		pr_err(
 		  "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
-		  me->name, v, location);
+		  me->name, (long long)v, location);
 		return -EINVAL;
 	}
 
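A likely reason for the casts, for anyone puzzled by them: on riscv64 the Elf64/u64 values printed here resolve to unsigned long, while the %016llx conversions want a long long argument. The two types have the same width, but gcc's format checking still warns, so each argument is cast explicitly.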
@@
 CONFIG_PREEMPT=y
 CONFIG_HZ_100=y
 CONFIG_KEXEC_FILE=y
+CONFIG_EXPOLINE=y
+CONFIG_EXPOLINE_AUTO=y
 CONFIG_MEMORY_HOTPLUG=y
 CONFIG_MEMORY_HOTREMOVE=y
 CONFIG_KSM=y
@@
 CONFIG_HOTPLUG_PCI=y
 CONFIG_HOTPLUG_PCI_S390=y
 CONFIG_CHSC_SCH=y
+CONFIG_VFIO_AP=m
 CONFIG_CRASH_DUMP=y
 CONFIG_BINFMT_MISC=m
 CONFIG_HIBERNATION=y
+CONFIG_PM_DEBUG=y
 CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_PACKET_DIAG=m
@@
 CONFIG_NF_CT_NETLINK=m
 CONFIG_NF_CT_NETLINK_TIMEOUT=m
 CONFIG_NF_TABLES=m
-CONFIG_NFT_EXTHDR=m
-CONFIG_NFT_META=m
 CONFIG_NFT_CT=m
 CONFIG_NFT_COUNTER=m
 CONFIG_NFT_LOG=m
@@
 CONFIG_NET_ACT_CSUM=m
 CONFIG_DNS_RESOLVER=y
 CONFIG_OPENVSWITCH=m
+CONFIG_VSOCKETS=m
+CONFIG_VIRTIO_VSOCKETS=m
 CONFIG_NETLINK_DIAG=m
 CONFIG_CGROUP_NET_PRIO=y
 CONFIG_BPF_JIT=y
@@
 CONFIG_PPPOL2TP=m
 CONFIG_PPP_ASYNC=m
 CONFIG_PPP_SYNC_TTY=m
+CONFIG_ISM=m
 CONFIG_INPUT_EVDEV=y
 # CONFIG_INPUT_KEYBOARD is not set
 # CONFIG_INPUT_MOUSE is not set
@@
 CONFIG_MLX5_INFINIBAND=m
 CONFIG_VFIO=m
 CONFIG_VFIO_PCI=m
+CONFIG_VFIO_MDEV=m
+CONFIG_VFIO_MDEV_DEVICE=m
 CONFIG_VIRTIO_PCI=m
 CONFIG_VIRTIO_BALLOON=m
 CONFIG_VIRTIO_INPUT=y
+CONFIG_S390_AP_IOMMU=y
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_POSIX_ACL=y
 CONFIG_EXT4_FS_SECURITY=y
@@
 CONFIG_RCU_TORTURE_TEST=m
 CONFIG_RCU_CPU_STALL_TIMEOUT=300
 CONFIG_NOTIFIER_ERROR_INJECTION=m
-CONFIG_PM_NOTIFIER_ERROR_INJECT=m
 CONFIG_NETDEV_NOTIFIER_ERROR_INJECT=m
 CONFIG_FAULT_INJECTION=y
 CONFIG_FAILSLAB=y
@@
 CONFIG_KVM=m
 CONFIG_KVM_S390_UCONTROL=y
 CONFIG_VHOST_NET=m
+CONFIG_VHOST_VSOCK=m
+11-2
arch/s390/configs/performance_defconfig
@@
 CONFIG_NUMA=y
 CONFIG_HZ_100=y
 CONFIG_KEXEC_FILE=y
+CONFIG_EXPOLINE=y
+CONFIG_EXPOLINE_AUTO=y
 CONFIG_MEMORY_HOTPLUG=y
 CONFIG_MEMORY_HOTREMOVE=y
 CONFIG_KSM=y
@@
 CONFIG_HOTPLUG_PCI=y
 CONFIG_HOTPLUG_PCI_S390=y
 CONFIG_CHSC_SCH=y
+CONFIG_VFIO_AP=m
 CONFIG_CRASH_DUMP=y
 CONFIG_BINFMT_MISC=m
 CONFIG_HIBERNATION=y
+CONFIG_PM_DEBUG=y
 CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_PACKET_DIAG=m
@@
 CONFIG_NF_CT_NETLINK=m
 CONFIG_NF_CT_NETLINK_TIMEOUT=m
 CONFIG_NF_TABLES=m
-CONFIG_NFT_EXTHDR=m
-CONFIG_NFT_META=m
 CONFIG_NFT_CT=m
 CONFIG_NFT_COUNTER=m
 CONFIG_NFT_LOG=m
@@
 CONFIG_NET_ACT_CSUM=m
 CONFIG_DNS_RESOLVER=y
 CONFIG_OPENVSWITCH=m
+CONFIG_VSOCKETS=m
+CONFIG_VIRTIO_VSOCKETS=m
 CONFIG_NETLINK_DIAG=m
 CONFIG_CGROUP_NET_PRIO=y
 CONFIG_BPF_JIT=y
@@
 CONFIG_PPPOL2TP=m
 CONFIG_PPP_ASYNC=m
 CONFIG_PPP_SYNC_TTY=m
+CONFIG_ISM=m
 CONFIG_INPUT_EVDEV=y
 # CONFIG_INPUT_KEYBOARD is not set
 # CONFIG_INPUT_MOUSE is not set
@@
 CONFIG_MLX5_INFINIBAND=m
 CONFIG_VFIO=m
 CONFIG_VFIO_PCI=m
+CONFIG_VFIO_MDEV=m
+CONFIG_VFIO_MDEV_DEVICE=m
 CONFIG_VIRTIO_PCI=m
 CONFIG_VIRTIO_BALLOON=m
 CONFIG_VIRTIO_INPUT=y
+CONFIG_S390_AP_IOMMU=y
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_POSIX_ACL=y
 CONFIG_EXT4_FS_SECURITY=y
@@
 CONFIG_KVM=m
 CONFIG_KVM_S390_UCONTROL=y
 CONFIG_VHOST_NET=m
+CONFIG_VHOST_VSOCK=m
+41-38
arch/s390/defconfig
@@
 CONFIG_CGROUP_PERF=y
 CONFIG_NAMESPACES=y
 CONFIG_USER_NS=y
+CONFIG_CHECKPOINT_RESTORE=y
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_EXPERT=y
 # CONFIG_SYSFS_SYSCALL is not set
-CONFIG_CHECKPOINT_RESTORE=y
 CONFIG_BPF_SYSCALL=y
 CONFIG_USERFAULTFD=y
 # CONFIG_COMPAT_BRK is not set
 CONFIG_PROFILING=y
+CONFIG_LIVEPATCH=y
+CONFIG_NR_CPUS=256
+CONFIG_NUMA=y
+CONFIG_HZ_100=y
+CONFIG_KEXEC_FILE=y
+CONFIG_CRASH_DUMP=y
+CONFIG_HIBERNATION=y
+CONFIG_PM_DEBUG=y
+CONFIG_CMM=m
 CONFIG_OPROFILE=y
 CONFIG_KPROBES=y
 CONFIG_JUMP_LABEL=y
@@
 CONFIG_PARTITION_ADVANCED=y
 CONFIG_IBM_PARTITION=y
 CONFIG_DEFAULT_DEADLINE=y
-CONFIG_LIVEPATCH=y
-CONFIG_NR_CPUS=256
-CONFIG_NUMA=y
-CONFIG_HZ_100=y
-CONFIG_KEXEC_FILE=y
+CONFIG_BINFMT_MISC=m
 CONFIG_MEMORY_HOTPLUG=y
 CONFIG_MEMORY_HOTREMOVE=y
 CONFIG_KSM=y
@@
 CONFIG_ZSMALLOC=m
 CONFIG_ZSMALLOC_STAT=y
 CONFIG_IDLE_PAGE_TRACKING=y
-CONFIG_CRASH_DUMP=y
-CONFIG_BINFMT_MISC=m
-CONFIG_HIBERNATION=y
 CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
@@
 CONFIG_BLK_DEV_RAM=y
 CONFIG_VIRTIO_BLK=y
 CONFIG_SCSI=y
+# CONFIG_SCSI_MQ_DEFAULT is not set
 CONFIG_BLK_DEV_SD=y
 CONFIG_CHR_DEV_ST=y
 CONFIG_BLK_DEV_SR=y
@@
 CONFIG_TUN=m
 CONFIG_VIRTIO_NET=y
 # CONFIG_NET_VENDOR_ALACRITECH is not set
+# CONFIG_NET_VENDOR_AURORA is not set
 # CONFIG_NET_VENDOR_CORTINA is not set
 # CONFIG_NET_VENDOR_SOLARFLARE is not set
 # CONFIG_NET_VENDOR_SOCIONEXT is not set
@@
 CONFIG_TMPFS_POSIX_ACL=y
 CONFIG_HUGETLBFS=y
 # CONFIG_NETWORK_FILESYSTEMS is not set
-CONFIG_DEBUG_INFO=y
-CONFIG_DEBUG_INFO_DWARF4=y
-CONFIG_GDB_SCRIPTS=y
-CONFIG_UNUSED_SYMBOLS=y
-CONFIG_DEBUG_SECTION_MISMATCH=y
-CONFIG_DEBUG_FORCE_WEAK_PER_CPU=y
-CONFIG_MAGIC_SYSRQ=y
-CONFIG_DEBUG_PAGEALLOC=y
-CONFIG_DETECT_HUNG_TASK=y
-CONFIG_PANIC_ON_OOPS=y
-CONFIG_PROVE_LOCKING=y
-CONFIG_LOCK_STAT=y
-CONFIG_DEBUG_LOCKDEP=y
-CONFIG_DEBUG_ATOMIC_SLEEP=y
-CONFIG_DEBUG_LIST=y
-CONFIG_DEBUG_SG=y
-CONFIG_DEBUG_NOTIFIERS=y
-CONFIG_RCU_CPU_STALL_TIMEOUT=60
-CONFIG_LATENCYTOP=y
-CONFIG_SCHED_TRACER=y
-CONFIG_FTRACE_SYSCALLS=y
-CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y
-CONFIG_STACK_TRACER=y
-CONFIG_BLK_DEV_IO_TRACE=y
-CONFIG_FUNCTION_PROFILER=y
-# CONFIG_RUNTIME_TESTING_MENU is not set
-CONFIG_S390_PTDUMP=y
 CONFIG_CRYPTO_CRYPTD=m
 CONFIG_CRYPTO_AUTHENC=m
 CONFIG_CRYPTO_TEST=m
@@
 CONFIG_CRYPTO_CFB=m
 CONFIG_CRYPTO_CTS=m
 CONFIG_CRYPTO_LRW=m
+CONFIG_CRYPTO_OFB=m
 CONFIG_CRYPTO_PCBC=m
 CONFIG_CRYPTO_XTS=m
 CONFIG_CRYPTO_CMAC=m
@@
 CONFIG_CRYPTO_USER_API_SKCIPHER=m
 CONFIG_CRYPTO_USER_API_RNG=m
 CONFIG_ZCRYPT=m
-CONFIG_ZCRYPT_MULTIDEVNODES=y
 CONFIG_PKEY=m
 CONFIG_CRYPTO_PAES_S390=m
 CONFIG_CRYPTO_SHA1_S390=m
@@
 # CONFIG_XZ_DEC_ARM is not set
 # CONFIG_XZ_DEC_ARMTHUMB is not set
 # CONFIG_XZ_DEC_SPARC is not set
-CONFIG_CMM=m
+CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_INFO_DWARF4=y
+CONFIG_GDB_SCRIPTS=y
+CONFIG_UNUSED_SYMBOLS=y
+CONFIG_DEBUG_SECTION_MISMATCH=y
+CONFIG_DEBUG_FORCE_WEAK_PER_CPU=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_DEBUG_PAGEALLOC=y
+CONFIG_DETECT_HUNG_TASK=y
+CONFIG_PANIC_ON_OOPS=y
+CONFIG_PROVE_LOCKING=y
+CONFIG_LOCK_STAT=y
+CONFIG_DEBUG_LOCKDEP=y
+CONFIG_DEBUG_ATOMIC_SLEEP=y
+CONFIG_DEBUG_LIST=y
+CONFIG_DEBUG_SG=y
+CONFIG_DEBUG_NOTIFIERS=y
+CONFIG_RCU_CPU_STALL_TIMEOUT=60
+CONFIG_LATENCYTOP=y
+CONFIG_SCHED_TRACER=y
+CONFIG_FTRACE_SYSCALLS=y
+CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y
+CONFIG_STACK_TRACER=y
+CONFIG_BLK_DEV_IO_TRACE=y
+CONFIG_FUNCTION_PROFILER=y
+# CONFIG_RUNTIME_TESTING_MENU is not set
+CONFIG_S390_PTDUMP=y
-5
arch/s390/include/asm/mmu_context.h
@@
 		mm->context.asce_limit = STACK_TOP_MAX;
 		mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
 				   _ASCE_USER_BITS | _ASCE_TYPE_REGION3;
-		/* pgd_alloc() did not account this pud */
-		mm_inc_nr_puds(mm);
 		break;
 	case -PAGE_SIZE:
 		/* forked 5-level task, set new asce with new_mm->pgd */
@@
 		/* forked 2-level compat task, set new asce with new mm->pgd */
 		mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
 				   _ASCE_USER_BITS | _ASCE_TYPE_SEGMENT;
-		/* pgd_alloc() did not account this pmd */
-		mm_inc_nr_pmds(mm);
-		mm_inc_nr_puds(mm);
 	}
 	crst_table_init((unsigned long *) mm->pgd, pgd_entry_type(mm));
 	return 0;
+3-3
arch/s390/include/asm/pgalloc.h
@@
 
 static inline unsigned long pgd_entry_type(struct mm_struct *mm)
 {
-	if (mm->context.asce_limit <= _REGION3_SIZE)
+	if (mm_pmd_folded(mm))
 		return _SEGMENT_ENTRY_EMPTY;
-	if (mm->context.asce_limit <= _REGION2_SIZE)
+	if (mm_pud_folded(mm))
 		return _REGION3_ENTRY_EMPTY;
-	if (mm->context.asce_limit <= _REGION1_SIZE)
+	if (mm_p4d_folded(mm))
 		return _REGION2_ENTRY_EMPTY;
 	return _REGION1_ENTRY_EMPTY;
 }
@@
 	return sp;
 }
 
-static __no_sanitize_address_or_inline unsigned short stap(void)
+static __no_kasan_or_inline unsigned short stap(void)
 {
 	unsigned short cpu_address;
 
@@
  * Set PSW mask to specified value, while leaving the
  * PSW addr pointing to the next instruction.
  */
-static __no_sanitize_address_or_inline void __load_psw_mask(unsigned long mask)
+static __no_kasan_or_inline void __load_psw_mask(unsigned long mask)
 {
 	unsigned long addr;
 	psw_t psw;
+1-1
arch/s390/include/asm/thread_info.h
@@
  * General size of kernel stacks
  */
 #ifdef CONFIG_KASAN
-#define THREAD_SIZE_ORDER 3
+#define THREAD_SIZE_ORDER 4
 #else
 #define THREAD_SIZE_ORDER 2
 #endif
+3-3
arch/s390/include/asm/tlb.h
@@
 static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
 				unsigned long address)
 {
-	if (tlb->mm->context.asce_limit <= _REGION3_SIZE)
+	if (mm_pmd_folded(tlb->mm))
 		return;
 	pgtable_pmd_page_dtor(virt_to_page(pmd));
 	tlb_remove_table(tlb, pmd);
@@
 static inline void p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
 				unsigned long address)
 {
-	if (tlb->mm->context.asce_limit <= _REGION1_SIZE)
+	if (mm_p4d_folded(tlb->mm))
 		return;
 	tlb_remove_table(tlb, p4d);
 }
@@
 static inline void pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
 				unsigned long address)
 {
-	if (tlb->mm->context.asce_limit <= _REGION2_SIZE)
+	if (mm_pud_folded(tlb->mm))
 		return;
 	tlb_remove_table(tlb, pud);
 }
+3-3
arch/s390/kernel/entry.S
@@
 	stmg	%r6,%r15,__SF_GPRS(%r15)	# store gprs of prev task
 	lghi	%r4,__TASK_stack
 	lghi	%r1,__TASK_thread
-	lg	%r5,0(%r4,%r3)			# start of kernel stack of next
+	llill	%r5,STACK_INIT
 	stg	%r15,__THREAD_ksp(%r1,%r2)	# store kernel stack of prev
-	lgr	%r15,%r5
-	aghi	%r15,STACK_INIT			# end of kernel stack of next
+	lg	%r15,0(%r4,%r3)			# start of kernel stack of next
+	agr	%r15,%r5			# end of kernel stack of next
 	stg	%r3,__LC_CURRENT		# store task struct of next
 	stg	%r15,__LC_KERNEL_STACK		# store end of kernel stack
 	lg	%r15,__THREAD_ksp(%r1,%r3)	# load kernel stack of next
+2-11
arch/s390/kernel/ftrace.c
@@
  */
 unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip)
 {
-	struct ftrace_graph_ent trace;
-
 	if (unlikely(ftrace_graph_is_dead()))
 		goto out;
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		goto out;
 	ip -= MCOUNT_INSN_SIZE;
-	trace.func = ip;
-	trace.depth = current->curr_ret_stack + 1;
-	/* Only trace if the calling function expects to. */
-	if (!ftrace_graph_entry(&trace))
-		goto out;
-	if (ftrace_push_return_trace(parent, ip, &trace.depth, 0,
-				     NULL) == -EBUSY)
-		goto out;
-	parent = (unsigned long) return_to_handler;
+	if (!function_graph_enter(parent, ip, 0, NULL))
+		parent = (unsigned long) return_to_handler;
 out:
 	return parent;
 }
+3-1
arch/s390/kernel/perf_cpum_cf.c
@@
 		break;
 
 	case PERF_TYPE_HARDWARE:
+		if (is_sampling_event(event))	/* No sampling support */
+			return -ENOENT;
 		ev = attr->config;
 		/* Count user space (problem-state) only */
 		if (!attr->exclude_user && attr->exclude_kernel) {
@@
 		return -ENOENT;
 
 	if (ev > PERF_CPUM_CF_MAX_CTR)
-		return -EINVAL;
+		return -ENOENT;
 
 	/* Obtain the counter set to which the specified counter belongs */
 	set = get_counter_set(ev);
+28-5
arch/s390/kernel/perf_cpum_sf.c
@@
 CPUMF_EVENT_ATTR(SF, SF_CYCLES_BASIC, PERF_EVENT_CPUM_SF);
 CPUMF_EVENT_ATTR(SF, SF_CYCLES_BASIC_DIAG, PERF_EVENT_CPUM_SF_DIAG);
 
-static struct attribute *cpumsf_pmu_events_attr[] = {
-	CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC),
-	NULL,
-	NULL,
+/* Attribute list for CPU_SF.
+ *
+ * The availablitiy depends on the CPU_MF sampling facility authorization
+ * for basic + diagnositic samples. This is determined at initialization
+ * time by the sampling facility device driver.
+ * If the authorization for basic samples is turned off, it should be
+ * also turned off for diagnostic sampling.
+ *
+ * During initialization of the device driver, check the authorization
+ * level for diagnostic sampling and installs the attribute
+ * file for diagnostic sampling if necessary.
+ *
+ * For now install a placeholder to reference all possible attributes:
+ * SF_CYCLES_BASIC and SF_CYCLES_BASIC_DIAG.
+ * Add another entry for the final NULL pointer.
+ */
+enum {
+	SF_CYCLES_BASIC_ATTR_IDX = 0,
+	SF_CYCLES_BASIC_DIAG_ATTR_IDX,
+	SF_CYCLES_ATTR_MAX
+};
+
+static struct attribute *cpumsf_pmu_events_attr[SF_CYCLES_ATTR_MAX + 1] = {
+	[SF_CYCLES_BASIC_ATTR_IDX] = CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC)
 };
 
 PMU_FORMAT_ATTR(event, "config:0-63");
@@
 
 	if (si.ad) {
 		sfb_set_limits(CPUM_SF_MIN_SDB, CPUM_SF_MAX_SDB);
-		cpumsf_pmu_events_attr[1] =
+		/* Sampling of diagnostic data authorized,
+		 * install event into attribute list of PMU device.
+		 */
+		cpumsf_pmu_events_attr[SF_CYCLES_BASIC_DIAG_ATTR_IDX] =
 			CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC_DIAG);
 	}
 
+3-3
arch/s390/kernel/vdso32/Makefile
@@
 $(obj)/vdso32_wrapper.o : $(obj)/vdso32.so
 
 # link rule for the .so file, .lds has to be first
-$(obj)/vdso32.so.dbg: $(src)/vdso32.lds $(obj-vdso32)
+$(obj)/vdso32.so.dbg: $(src)/vdso32.lds $(obj-vdso32) FORCE
 	$(call if_changed,vdso32ld)
 
 # strip rule for the .so file
@@
 	$(call if_changed,objcopy)
 
 # assembly rules for the .S files
-$(obj-vdso32): %.o: %.S
+$(obj-vdso32): %.o: %.S FORCE
 	$(call if_changed_dep,vdso32as)
 
 # actual build commands
 quiet_cmd_vdso32ld = VDSO32L $@
-      cmd_vdso32ld = $(CC) $(c_flags) -Wl,-T $^ -o $@
+      cmd_vdso32ld = $(CC) $(c_flags) -Wl,-T $(filter %.lds %.o,$^) -o $@
 quiet_cmd_vdso32as = VDSO32A $@
       cmd_vdso32as = $(CC) $(a_flags) -c -o $@ $<
 
+3-3
arch/s390/kernel/vdso64/Makefile
@@
 $(obj)/vdso64_wrapper.o : $(obj)/vdso64.so
 
 # link rule for the .so file, .lds has to be first
-$(obj)/vdso64.so.dbg: $(src)/vdso64.lds $(obj-vdso64)
+$(obj)/vdso64.so.dbg: $(src)/vdso64.lds $(obj-vdso64) FORCE
 	$(call if_changed,vdso64ld)
 
 # strip rule for the .so file
@@
 	$(call if_changed,objcopy)
 
 # assembly rules for the .S files
-$(obj-vdso64): %.o: %.S
+$(obj-vdso64): %.o: %.S FORCE
 	$(call if_changed_dep,vdso64as)
 
 # actual build commands
 quiet_cmd_vdso64ld = VDSO64L $@
-      cmd_vdso64ld = $(CC) $(c_flags) -Wl,-T $^ -o $@
+      cmd_vdso64ld = $(CC) $(c_flags) -Wl,-T $(filter %.lds %.o,$^) -o $@
 quiet_cmd_vdso64as = VDSO64A $@
       cmd_vdso64as = $(CC) $(a_flags) -c -o $@ $<
 
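Both vdso Makefile fixes hinge on the same kbuild detail: $(call if_changed,...) only re-runs its command when a prerequisite or the recorded command line changes, and it relies on the phony FORCE target being among the prerequisites to get evaluated at all. Since FORCE then shows up in $^, the link rules switch to $(filter %.lds %.o,$^) so that only the linker script and the objects, not FORCE itself, end up on the linker command line.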
+2-2
arch/s390/kernel/vmlinux.lds.S
@@
 	 * uncompressed image info used by the decompressor
 	 * it should match struct vmlinux_info
 	 */
-	.vmlinux.info 0 : {
+	.vmlinux.info 0 (INFO) : {
 		QUAD(_stext)					/* default_lma */
 		QUAD(startup_continue)				/* entry */
 		QUAD(__bss_start - _stext)			/* image_size */
 		QUAD(__bss_stop - __bss_start)			/* bss_size */
 		QUAD(__boot_data_start)				/* bootdata_off */
 		QUAD(__boot_data_end - __boot_data_start)	/* bootdata_size */
-	}
+	} :NONE
 
 	/* Debugging sections. */
 	STABS_DEBUG
@@
 {
 	return mode->distance ? mode->distance(a, b) : 0;
 }
+EXPORT_SYMBOL(__node_distance);
 
 int numa_debug_enabled;
 
+2-14
arch/sh/kernel/ftrace.c
@@
 void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
 {
 	unsigned long old;
-	int faulted, err;
-	struct ftrace_graph_ent trace;
+	int faulted;
 	unsigned long return_hooker = (unsigned long)&return_to_handler;
 
 	if (unlikely(ftrace_graph_is_dead()))
@@
 		return;
 	}
 
-	err = ftrace_push_return_trace(old, self_addr, &trace.depth, 0, NULL);
-	if (err == -EBUSY) {
+	if (function_graph_enter(old, self_addr, 0, NULL))
 		__raw_writel(old, parent);
-		return;
-	}
-
-	trace.func = self_addr;
-
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace)) {
-		current->curr_ret_stack--;
-		__raw_writel(old, parent);
-	}
 }
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+1-10
arch/sparc/kernel/ftrace.c
@@
 			unsigned long frame_pointer)
 {
 	unsigned long return_hooker = (unsigned long) &return_to_handler;
-	struct ftrace_graph_ent trace;
 
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return parent + 8UL;
 
-	trace.func = self_addr;
-	trace.depth = current->curr_ret_stack + 1;
-
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace))
-		return parent + 8UL;
-
-	if (ftrace_push_return_trace(parent, self_addr, &trace.depth,
-				     frame_pointer, NULL) == -EBUSY)
+	if (function_graph_enter(parent, self_addr, frame_pointer, NULL))
 		return parent + 8UL;
 
 	return return_hooker;
···444444 branches. Requires a compiler with -mindirect-branch=thunk-extern445445 support for full protection. The kernel may run slower.446446447447- Without compiler support, at least indirect branches in assembler448448- code are eliminated. Since this includes the syscall entry path,449449- it is not entirely pointless.450450-451447config INTEL_RDT452448 bool "Intel Resource Director Technology support"453449 depends on X86 && CPU_SUP_INTEL···521525 bool "ScaleMP vSMP"522526 select HYPERVISOR_GUEST523527 select PARAVIRT524524- select PARAVIRT_XXL525528 depends on X86_64 && PCI526529 depends on X86_EXTENDED_PLATFORM527530 depends on SMP···10001005 to the kernel image.1001100610021007config SCHED_SMT10031003- bool "SMT (Hyperthreading) scheduler support"10041004- depends on SMP10051005- ---help---10061006- SMT scheduler support improves the CPU scheduler's decision making10071007- when dealing with Intel Pentium 4 chips with HyperThreading at a10081008- cost of slightly increased overhead in some places. If unsure say10091009- N here.10081008+ def_bool y if SMP1010100910111010config SCHED_MC10121011 def_bool y
+4-5
arch/x86/Makefile
···213213KBUILD_LDFLAGS += $(call ld-option, -z max-page-size=0x200000)214214endif215215216216-# Speed up the build217217-KBUILD_CFLAGS += -pipe218216# Workaround for a gcc prelease that unfortunately was shipped in a suse release219217KBUILD_CFLAGS += -Wno-sign-compare220218#···220222221223# Avoid indirect branches in kernel to deal with Spectre222224ifdef CONFIG_RETPOLINE223223-ifneq ($(RETPOLINE_CFLAGS),)224224- KBUILD_CFLAGS += $(RETPOLINE_CFLAGS) -DRETPOLINE225225+ifeq ($(RETPOLINE_CFLAGS),)226226+ $(error You are building kernel with non-retpoline compiler, please update your compiler.)225227endif228228+ KBUILD_CFLAGS += $(RETPOLINE_CFLAGS)226229endif227230228231archscripts: scripts_basic···238239archmacros:239240 $(Q)$(MAKE) $(build)=arch/x86/kernel arch/x86/kernel/macros.s240241241241-ASM_MACRO_FLAGS = -Wa,arch/x86/kernel/macros.s -Wa,-242242+ASM_MACRO_FLAGS = -Wa,arch/x86/kernel/macros.s242243export ASM_MACRO_FLAGS243244KBUILD_CFLAGS += $(ASM_MACRO_FLAGS)244245
+1-5
arch/x86/boot/header.S
···300300 # Part 2 of the header, from the old setup.S301301302302 .ascii "HdrS" # header signature303303- .word 0x020e # header version number (>= 0x0105)303303+ .word 0x020d # header version number (>= 0x0105)304304 # or else old loadlin-1.5 will fail)305305 .globl realmode_swtch306306realmode_swtch: .word 0, 0 # default_switch, SETUPSEG···557557558558init_size: .long INIT_SIZE # kernel initialization size559559handover_offset: .long 0 # Filled in by build.c560560-561561-acpi_rsdp_addr: .quad 0 # 64-bit physical pointer to the562562- # ACPI RSDP table, added with563563- # version 2.14564560565561# End of setup header #####################################################566562
-20
arch/x86/events/core.c
···438438 if (config == -1LL)439439 return -EINVAL;440440441441- /*442442- * Branch tracing:443443- */444444- if (attr->config == PERF_COUNT_HW_BRANCH_INSTRUCTIONS &&445445- !attr->freq && hwc->sample_period == 1) {446446- /* BTS is not supported by this architecture. */447447- if (!x86_pmu.bts_active)448448- return -EOPNOTSUPP;449449-450450- /* BTS is currently only allowed for user-mode. */451451- if (!attr->exclude_kernel)452452- return -EOPNOTSUPP;453453-454454- /* disallow bts if conflicting events are present */455455- if (x86_add_exclusive(x86_lbr_exclusive_lbr))456456- return -EBUSY;457457-458458- event->destroy = hw_perf_lbr_event_destroy;459459- }460460-461441 hwc->config |= config;462442463443 return 0;
+52-16
arch/x86/events/intel/core.c
···23062306 return handled;23072307}2308230823092309-static bool disable_counter_freezing;23092309+static bool disable_counter_freezing = true;23102310static int __init intel_perf_counter_freezing_setup(char *s)23112311{23122312- disable_counter_freezing = true;23132313- pr_info("Intel PMU Counter freezing feature disabled\n");23122312+ bool res;23132313+23142314+ if (kstrtobool(s, &res))23152315+ return -EINVAL;23162316+23172317+ disable_counter_freezing = !res;23142318 return 1;23152319}23162316-__setup("disable_counter_freezing", intel_perf_counter_freezing_setup);23202320+__setup("perf_v4_pmi=", intel_perf_counter_freezing_setup);2317232123182322/*23192323 * Simplified handler for Arch Perfmon v4:···24742470static struct event_constraint *24752471intel_bts_constraints(struct perf_event *event)24762472{24772477- struct hw_perf_event *hwc = &event->hw;24782478- unsigned int hw_event, bts_event;24792479-24802480- if (event->attr.freq)24812481- return NULL;24822482-24832483- hw_event = hwc->config & INTEL_ARCH_EVENT_MASK;24842484- bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);24852485-24862486- if (unlikely(hw_event == bts_event && hwc->sample_period == 1))24732473+ if (unlikely(intel_pmu_has_bts(event)))24872474 return &bts_constraint;2488247524892476 return NULL;···30933098 return flags;30943099}3095310031013101+static int intel_pmu_bts_config(struct perf_event *event)31023102+{31033103+ struct perf_event_attr *attr = &event->attr;31043104+31053105+ if (unlikely(intel_pmu_has_bts(event))) {31063106+ /* BTS is not supported by this architecture. */31073107+ if (!x86_pmu.bts_active)31083108+ return -EOPNOTSUPP;31093109+31103110+ /* BTS is currently only allowed for user-mode. */31113111+ if (!attr->exclude_kernel)31123112+ return -EOPNOTSUPP;31133113+31143114+ /* BTS is not allowed for precise events. */31153115+ if (attr->precise_ip)31163116+ return -EOPNOTSUPP;31173117+31183118+ /* disallow bts if conflicting events are present */31193119+ if (x86_add_exclusive(x86_lbr_exclusive_lbr))31203120+ return -EBUSY;31213121+31223122+ event->destroy = hw_perf_lbr_event_destroy;31233123+ }31243124+31253125+ return 0;31263126+}31273127+31283128+static int core_pmu_hw_config(struct perf_event *event)31293129+{31303130+ int ret = x86_pmu_hw_config(event);31313131+31323132+ if (ret)31333133+ return ret;31343134+31353135+ return intel_pmu_bts_config(event);31363136+}31373137+30963138static int intel_pmu_hw_config(struct perf_event *event)30973139{30983140 int ret = x86_pmu_hw_config(event);3099314131423142+ if (ret)31433143+ return ret;31443144+31453145+ ret = intel_pmu_bts_config(event);31003146 if (ret)31013147 return ret;31023148···31633127 /*31643128 * BTS is set up earlier in this path, so don't account twice31653129 */31663166- if (!intel_pmu_has_bts(event)) {31303130+ if (!unlikely(intel_pmu_has_bts(event))) {31673131 /* disallow lbr if conflicting events are present */31683132 if (x86_add_exclusive(x86_lbr_exclusive_lbr))31693133 return -EBUSY;···36323596 .enable_all = core_pmu_enable_all,36333597 .enable = core_pmu_enable_event,36343598 .disable = x86_pmu_disable_event,36353635- .hw_config = x86_pmu_hw_config,35993599+ .hw_config = core_pmu_hw_config,36363600 .schedule_events = x86_schedule_events,36373601 .eventsel = MSR_ARCH_PERFMON_EVENTSEL0,36383602 .perfctr = MSR_ARCH_PERFMON_PERFCTR0,
···33#ifndef _ASM_X86_NOSPEC_BRANCH_H_44#define _ASM_X86_NOSPEC_BRANCH_H_5566+#include <linux/static_key.h>77+68#include <asm/alternative.h>79#include <asm/alternative-asm.h>810#include <asm/cpufeatures.h>···164162 _ASM_PTR " 999b\n\t" \165163 ".popsection\n\t"166164167167-#if defined(CONFIG_X86_64) && defined(RETPOLINE)165165+#ifdef CONFIG_RETPOLINE166166+#ifdef CONFIG_X86_64168167169168/*170170- * Since the inline asm uses the %V modifier which is only in newer GCC,171171- * the 64-bit one is dependent on RETPOLINE not CONFIG_RETPOLINE.169169+ * Inline asm uses the %V modifier which is only in newer GCC170170+ * which is ensured when CONFIG_RETPOLINE is defined.172171 */173172# define CALL_NOSPEC \174173 ANNOTATE_NOSPEC_ALTERNATIVE \···184181 X86_FEATURE_RETPOLINE_AMD)185182# define THUNK_TARGET(addr) [thunk_target] "r" (addr)186183187187-#elif defined(CONFIG_X86_32) && defined(CONFIG_RETPOLINE)184184+#else /* CONFIG_X86_32 */188185/*189186 * For i386 we use the original ret-equivalent retpoline, because190187 * otherwise we'll run out of registers. We don't care about CET···214211 X86_FEATURE_RETPOLINE_AMD)215212216213# define THUNK_TARGET(addr) [thunk_target] "rm" (addr)214214+#endif217215#else /* No retpoline for C / inline asm */218216# define CALL_NOSPEC "call *%[thunk_target]\n"219217# define THUNK_TARGET(addr) [thunk_target] "rm" (addr)···223219/* The Spectre V2 mitigation variants */224220enum spectre_v2_mitigation {225221 SPECTRE_V2_NONE,226226- SPECTRE_V2_RETPOLINE_MINIMAL,227227- SPECTRE_V2_RETPOLINE_MINIMAL_AMD,228222 SPECTRE_V2_RETPOLINE_GENERIC,229223 SPECTRE_V2_RETPOLINE_AMD,230224 SPECTRE_V2_IBRS_ENHANCED,225225+};226226+227227+/* The indirect branch speculation control variants */228228+enum spectre_v2_user_mitigation {229229+ SPECTRE_V2_USER_NONE,230230+ SPECTRE_V2_USER_STRICT,231231+ SPECTRE_V2_USER_PRCTL,232232+ SPECTRE_V2_USER_SECCOMP,231233};232234233235/* The Speculative Store Bypass disable variants */···312302 X86_FEATURE_USE_IBRS_FW); \313303 preempt_enable(); \314304} while (0)305305+306306+DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);307307+DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);308308+DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);315309316310#endif /* __ASSEMBLY__ */317311
+7-5
arch/x86/include/asm/page_64_types.h
···33333434/*3535 * Set __PAGE_OFFSET to the most negative possible address +3636- * PGDIR_SIZE*16 (pgd slot 272). The gap is to allow a space for a3737- * hypervisor to fit. Choosing 16 slots here is arbitrary, but it's3838- * what Xen requires.3636+ * PGDIR_SIZE*17 (pgd slot 273).3737+ *3838+ * The gap is to allow a space for LDT remap for PTI (1 pgd slot) and space for3939+ * a hypervisor (16 slots). Choosing 16 slots for a hypervisor is arbitrary,4040+ * but it's what Xen requires.3941 */4040-#define __PAGE_OFFSET_BASE_L5 _AC(0xff10000000000000, UL)4141-#define __PAGE_OFFSET_BASE_L4 _AC(0xffff880000000000, UL)4242+#define __PAGE_OFFSET_BASE_L5 _AC(0xff11000000000000, UL)4343+#define __PAGE_OFFSET_BASE_L4 _AC(0xffff888000000000, UL)42444345#ifdef CONFIG_DYNAMIC_MEMORY_LAYOUT4446#define __PAGE_OFFSET page_offset_base
···7979#define TIF_SIGPENDING 2 /* signal pending */8080#define TIF_NEED_RESCHED 3 /* rescheduling necessary */8181#define TIF_SINGLESTEP 4 /* reenable singlestep on user return*/8282-#define TIF_SSBD 5 /* Reduced data speculation */8282+#define TIF_SSBD 5 /* Speculative store bypass disable */8383#define TIF_SYSCALL_EMU 6 /* syscall emulation active */8484#define TIF_SYSCALL_AUDIT 7 /* syscall auditing active */8585#define TIF_SECCOMP 8 /* secure computing */8686+#define TIF_SPEC_IB 9 /* Indirect branch speculation mitigation */8787+#define TIF_SPEC_FORCE_UPDATE 10 /* Force speculation MSR update in context switch */8688#define TIF_USER_RETURN_NOTIFY 11 /* notify kernel of userspace return */8789#define TIF_UPROBE 12 /* breakpointed or singlestepping */8890#define TIF_PATCH_PENDING 13 /* pending live patching update */···112110#define _TIF_SYSCALL_EMU (1 << TIF_SYSCALL_EMU)113111#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)114112#define _TIF_SECCOMP (1 << TIF_SECCOMP)113113+#define _TIF_SPEC_IB (1 << TIF_SPEC_IB)114114+#define _TIF_SPEC_FORCE_UPDATE (1 << TIF_SPEC_FORCE_UPDATE)115115#define _TIF_USER_RETURN_NOTIFY (1 << TIF_USER_RETURN_NOTIFY)116116#define _TIF_UPROBE (1 << TIF_UPROBE)117117#define _TIF_PATCH_PENDING (1 << TIF_PATCH_PENDING)···149145 _TIF_FSCHECK)150146151147/* flags to check in __switch_to() */152152-#define _TIF_WORK_CTXSW \153153- (_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD)148148+#define _TIF_WORK_CTXSW_BASE \149149+ (_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP| \150150+ _TIF_SSBD | _TIF_SPEC_FORCE_UPDATE)151151+152152+/*153153+ * Avoid calls to __switch_to_xtra() on UP as STIBP is not evaluated.154154+ */155155+#ifdef CONFIG_SMP156156+# define _TIF_WORK_CTXSW (_TIF_WORK_CTXSW_BASE | _TIF_SPEC_IB)157157+#else158158+# define _TIF_WORK_CTXSW (_TIF_WORK_CTXSW_BASE)159159+#endif154160155161#define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)156162#define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
+6-2
arch/x86/include/asm/tlbflush.h
···169169170170#define LOADED_MM_SWITCHING ((struct mm_struct *)1)171171172172+ /* Last user mm for optimizing IBPB */173173+ union {174174+ struct mm_struct *last_user_mm;175175+ unsigned long last_user_mm_ibpb;176176+ };177177+172178 u16 loaded_mm_asid;173179 u16 next_asid;174174- /* last user mm's ctx id */175175- u64 last_ctx_id;176180177181 /*178182 * We can be in one of several states:
-2
arch/x86/include/asm/x86_init.h
···303303extern void x86_init_uint_noop(unsigned int unused);304304extern bool x86_pnpbios_disabled(void);305305306306-void x86_verify_bootdata_version(void);307307-308306#endif
+31-4
arch/x86/include/asm/xen/page.h
···99#include <linux/mm.h>1010#include <linux/device.h>11111212-#include <linux/uaccess.h>1212+#include <asm/extable.h>1313#include <asm/page.h>1414#include <asm/pgtable.h>1515···9393 */9494static inline int xen_safe_write_ulong(unsigned long *addr, unsigned long val)9595{9696- return __put_user(val, (unsigned long __user *)addr);9696+ int ret = 0;9797+9898+ asm volatile("1: mov %[val], %[ptr]\n"9999+ "2:\n"100100+ ".section .fixup, \"ax\"\n"101101+ "3: sub $1, %[ret]\n"102102+ " jmp 2b\n"103103+ ".previous\n"104104+ _ASM_EXTABLE(1b, 3b)105105+ : [ret] "+r" (ret), [ptr] "=m" (*addr)106106+ : [val] "r" (val));107107+108108+ return ret;97109}981109999-static inline int xen_safe_read_ulong(unsigned long *addr, unsigned long *val)111111+static inline int xen_safe_read_ulong(const unsigned long *addr,112112+ unsigned long *val)100113{101101- return __get_user(*val, (unsigned long __user *)addr);114114+ int ret = 0;115115+ unsigned long rval = ~0ul;116116+117117+ asm volatile("1: mov %[ptr], %[rval]\n"118118+ "2:\n"119119+ ".section .fixup, \"ax\"\n"120120+ "3: sub $1, %[ret]\n"121121+ " jmp 2b\n"122122+ ".previous\n"123123+ _ASM_EXTABLE(1b, 3b)124124+ : [ret] "+r" (ret), [rval] "+r" (rval)125125+ : [ptr] "m" (*addr));126126+ *val = rval;127127+128128+ return ret;102129}103130104131#ifdef CONFIG_XEN_PV
···1414#include <linux/module.h>1515#include <linux/nospec.h>1616#include <linux/prctl.h>1717+#include <linux/sched/smt.h>17181819#include <asm/spec-ctrl.h>1920#include <asm/cmdline.h>···5352 */5453u64 __ro_after_init x86_amd_ls_cfg_base;5554u64 __ro_after_init x86_amd_ls_cfg_ssbd_mask;5555+5656+/* Control conditional STIPB in switch_to() */5757+DEFINE_STATIC_KEY_FALSE(switch_to_cond_stibp);5858+/* Control conditional IBPB in switch_mm() */5959+DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);6060+/* Control unconditional IBPB in switch_mm() */6161+DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);56625763void __init check_bugs(void)5864{···131123#endif132124}133125134134-/* The kernel command line selection */135135-enum spectre_v2_mitigation_cmd {136136- SPECTRE_V2_CMD_NONE,137137- SPECTRE_V2_CMD_AUTO,138138- SPECTRE_V2_CMD_FORCE,139139- SPECTRE_V2_CMD_RETPOLINE,140140- SPECTRE_V2_CMD_RETPOLINE_GENERIC,141141- SPECTRE_V2_CMD_RETPOLINE_AMD,142142-};143143-144144-static const char *spectre_v2_strings[] = {145145- [SPECTRE_V2_NONE] = "Vulnerable",146146- [SPECTRE_V2_RETPOLINE_MINIMAL] = "Vulnerable: Minimal generic ASM retpoline",147147- [SPECTRE_V2_RETPOLINE_MINIMAL_AMD] = "Vulnerable: Minimal AMD ASM retpoline",148148- [SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline",149149- [SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline",150150- [SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS",151151-};152152-153153-#undef pr_fmt154154-#define pr_fmt(fmt) "Spectre V2 : " fmt155155-156156-static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =157157- SPECTRE_V2_NONE;158158-159126void160127x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)161128{···151168 if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||152169 static_cpu_has(X86_FEATURE_AMD_SSBD))153170 hostval |= ssbd_tif_to_spec_ctrl(ti->flags);171171+172172+ /* Conditional STIBP enabled? */173173+ if (static_branch_unlikely(&switch_to_cond_stibp))174174+ hostval |= stibp_tif_to_spec_ctrl(ti->flags);154175155176 if (hostval != guestval) {156177 msrval = setguest ? guestval : hostval;···189202 tif = setguest ? 
ssbd_spec_ctrl_to_tif(guestval) :190203 ssbd_spec_ctrl_to_tif(hostval);191204192192- speculative_store_bypass_update(tif);205205+ speculation_ctrl_update(tif);193206 }194207}195208EXPORT_SYMBOL_GPL(x86_virt_spec_ctrl);···203216 else if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD))204217 wrmsrl(MSR_AMD64_LS_CFG, msrval);205218}219219+220220+#undef pr_fmt221221+#define pr_fmt(fmt) "Spectre V2 : " fmt222222+223223+static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =224224+ SPECTRE_V2_NONE;225225+226226+static enum spectre_v2_user_mitigation spectre_v2_user __ro_after_init =227227+ SPECTRE_V2_USER_NONE;206228207229#ifdef RETPOLINE208230static bool spectre_v2_bad_module;···234238static inline const char *spectre_v2_module_string(void) { return ""; }235239#endif236240237237-static void __init spec2_print_if_insecure(const char *reason)238238-{239239- if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2))240240- pr_info("%s selected on command line.\n", reason);241241-}242242-243243-static void __init spec2_print_if_secure(const char *reason)244244-{245245- if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))246246- pr_info("%s selected on command line.\n", reason);247247-}248248-249249-static inline bool retp_compiler(void)250250-{251251- return __is_defined(RETPOLINE);252252-}253253-254241static inline bool match_option(const char *arg, int arglen, const char *opt)255242{256243 int len = strlen(opt);···241262 return len == arglen && !strncmp(arg, opt, len);242263}243264265265+/* The kernel command line selection for spectre v2 */266266+enum spectre_v2_mitigation_cmd {267267+ SPECTRE_V2_CMD_NONE,268268+ SPECTRE_V2_CMD_AUTO,269269+ SPECTRE_V2_CMD_FORCE,270270+ SPECTRE_V2_CMD_RETPOLINE,271271+ SPECTRE_V2_CMD_RETPOLINE_GENERIC,272272+ SPECTRE_V2_CMD_RETPOLINE_AMD,273273+};274274+275275+enum spectre_v2_user_cmd {276276+ SPECTRE_V2_USER_CMD_NONE,277277+ SPECTRE_V2_USER_CMD_AUTO,278278+ SPECTRE_V2_USER_CMD_FORCE,279279+ SPECTRE_V2_USER_CMD_PRCTL,280280+ SPECTRE_V2_USER_CMD_PRCTL_IBPB,281281+ SPECTRE_V2_USER_CMD_SECCOMP,282282+ SPECTRE_V2_USER_CMD_SECCOMP_IBPB,283283+};284284+285285+static const char * const spectre_v2_user_strings[] = {286286+ [SPECTRE_V2_USER_NONE] = "User space: Vulnerable",287287+ [SPECTRE_V2_USER_STRICT] = "User space: Mitigation: STIBP protection",288288+ [SPECTRE_V2_USER_PRCTL] = "User space: Mitigation: STIBP via prctl",289289+ [SPECTRE_V2_USER_SECCOMP] = "User space: Mitigation: STIBP via seccomp and prctl",290290+};291291+292292+static const struct {293293+ const char *option;294294+ enum spectre_v2_user_cmd cmd;295295+ bool secure;296296+} v2_user_options[] __initdata = {297297+ { "auto", SPECTRE_V2_USER_CMD_AUTO, false },298298+ { "off", SPECTRE_V2_USER_CMD_NONE, false },299299+ { "on", SPECTRE_V2_USER_CMD_FORCE, true },300300+ { "prctl", SPECTRE_V2_USER_CMD_PRCTL, false },301301+ { "prctl,ibpb", SPECTRE_V2_USER_CMD_PRCTL_IBPB, false },302302+ { "seccomp", SPECTRE_V2_USER_CMD_SECCOMP, false },303303+ { "seccomp,ibpb", SPECTRE_V2_USER_CMD_SECCOMP_IBPB, false },304304+};305305+306306+static void __init spec_v2_user_print_cond(const char *reason, bool secure)307307+{308308+ if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure)309309+ pr_info("spectre_v2_user=%s forced on command line.\n", reason);310310+}311311+312312+static enum spectre_v2_user_cmd __init313313+spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)314314+{315315+ char arg[20];316316+ int ret, i;317317+318318+ switch (v2_cmd) {319319+ case SPECTRE_V2_CMD_NONE:320320+ return SPECTRE_V2_USER_CMD_NONE;321321+ 
case SPECTRE_V2_CMD_FORCE:322322+ return SPECTRE_V2_USER_CMD_FORCE;323323+ default:324324+ break;325325+ }326326+327327+ ret = cmdline_find_option(boot_command_line, "spectre_v2_user",328328+ arg, sizeof(arg));329329+ if (ret < 0)330330+ return SPECTRE_V2_USER_CMD_AUTO;331331+332332+ for (i = 0; i < ARRAY_SIZE(v2_user_options); i++) {333333+ if (match_option(arg, ret, v2_user_options[i].option)) {334334+ spec_v2_user_print_cond(v2_user_options[i].option,335335+ v2_user_options[i].secure);336336+ return v2_user_options[i].cmd;337337+ }338338+ }339339+340340+ pr_err("Unknown user space protection option (%s). Switching to AUTO select\n", arg);341341+ return SPECTRE_V2_USER_CMD_AUTO;342342+}343343+344344+static void __init345345+spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)346346+{347347+ enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;348348+ bool smt_possible = IS_ENABLED(CONFIG_SMP);349349+ enum spectre_v2_user_cmd cmd;350350+351351+ if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))352352+ return;353353+354354+ if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||355355+ cpu_smt_control == CPU_SMT_NOT_SUPPORTED)356356+ smt_possible = false;357357+358358+ cmd = spectre_v2_parse_user_cmdline(v2_cmd);359359+ switch (cmd) {360360+ case SPECTRE_V2_USER_CMD_NONE:361361+ goto set_mode;362362+ case SPECTRE_V2_USER_CMD_FORCE:363363+ mode = SPECTRE_V2_USER_STRICT;364364+ break;365365+ case SPECTRE_V2_USER_CMD_PRCTL:366366+ case SPECTRE_V2_USER_CMD_PRCTL_IBPB:367367+ mode = SPECTRE_V2_USER_PRCTL;368368+ break;369369+ case SPECTRE_V2_USER_CMD_AUTO:370370+ case SPECTRE_V2_USER_CMD_SECCOMP:371371+ case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:372372+ if (IS_ENABLED(CONFIG_SECCOMP))373373+ mode = SPECTRE_V2_USER_SECCOMP;374374+ else375375+ mode = SPECTRE_V2_USER_PRCTL;376376+ break;377377+ }378378+379379+ /* Initialize Indirect Branch Prediction Barrier */380380+ if (boot_cpu_has(X86_FEATURE_IBPB)) {381381+ setup_force_cpu_cap(X86_FEATURE_USE_IBPB);382382+383383+ switch (cmd) {384384+ case SPECTRE_V2_USER_CMD_FORCE:385385+ case SPECTRE_V2_USER_CMD_PRCTL_IBPB:386386+ case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:387387+ static_branch_enable(&switch_mm_always_ibpb);388388+ break;389389+ case SPECTRE_V2_USER_CMD_PRCTL:390390+ case SPECTRE_V2_USER_CMD_AUTO:391391+ case SPECTRE_V2_USER_CMD_SECCOMP:392392+ static_branch_enable(&switch_mm_cond_ibpb);393393+ break;394394+ default:395395+ break;396396+ }397397+398398+ pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",399399+ static_key_enabled(&switch_mm_always_ibpb) ?400400+ "always-on" : "conditional");401401+ }402402+403403+ /* If enhanced IBRS is enabled no STIPB required */404404+ if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)405405+ return;406406+407407+ /*408408+ * If SMT is not possible or STIBP is not available clear the STIPB409409+ * mode.410410+ */411411+ if (!smt_possible || !boot_cpu_has(X86_FEATURE_STIBP))412412+ mode = SPECTRE_V2_USER_NONE;413413+set_mode:414414+ spectre_v2_user = mode;415415+ /* Only print the STIBP mode when SMT possible */416416+ if (smt_possible)417417+ pr_info("%s\n", spectre_v2_user_strings[mode]);418418+}419419+420420+static const char * const spectre_v2_strings[] = {421421+ [SPECTRE_V2_NONE] = "Vulnerable",422422+ [SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline",423423+ [SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline",424424+ [SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS",425425+};426426+244427static const struct 
{245428 const char *option;246429 enum spectre_v2_mitigation_cmd cmd;247430 bool secure;248248-} mitigation_options[] = {249249- { "off", SPECTRE_V2_CMD_NONE, false },250250- { "on", SPECTRE_V2_CMD_FORCE, true },251251- { "retpoline", SPECTRE_V2_CMD_RETPOLINE, false },252252- { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_AMD, false },253253- { "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },254254- { "auto", SPECTRE_V2_CMD_AUTO, false },431431+} mitigation_options[] __initdata = {432432+ { "off", SPECTRE_V2_CMD_NONE, false },433433+ { "on", SPECTRE_V2_CMD_FORCE, true },434434+ { "retpoline", SPECTRE_V2_CMD_RETPOLINE, false },435435+ { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_AMD, false },436436+ { "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },437437+ { "auto", SPECTRE_V2_CMD_AUTO, false },255438};439439+440440+static void __init spec_v2_print_cond(const char *reason, bool secure)441441+{442442+ if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure)443443+ pr_info("%s selected on command line.\n", reason);444444+}256445257446static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)258447{448448+ enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO;259449 char arg[20];260450 int ret, i;261261- enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO;262451263452 if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))264453 return SPECTRE_V2_CMD_NONE;265265- else {266266- ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));267267- if (ret < 0)268268- return SPECTRE_V2_CMD_AUTO;269454270270- for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {271271- if (!match_option(arg, ret, mitigation_options[i].option))272272- continue;273273- cmd = mitigation_options[i].cmd;274274- break;275275- }455455+ ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));456456+ if (ret < 0)457457+ return SPECTRE_V2_CMD_AUTO;276458277277- if (i >= ARRAY_SIZE(mitigation_options)) {278278- pr_err("unknown option (%s). Switching to AUTO select\n", arg);279279- return SPECTRE_V2_CMD_AUTO;280280- }459459+ for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {460460+ if (!match_option(arg, ret, mitigation_options[i].option))461461+ continue;462462+ cmd = mitigation_options[i].cmd;463463+ break;464464+ }465465+466466+ if (i >= ARRAY_SIZE(mitigation_options)) {467467+ pr_err("unknown option (%s). 
Switching to AUTO select\n", arg);468468+ return SPECTRE_V2_CMD_AUTO;281469 }282470283471 if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||···462316 return SPECTRE_V2_CMD_AUTO;463317 }464318465465- if (mitigation_options[i].secure)466466- spec2_print_if_secure(mitigation_options[i].option);467467- else468468- spec2_print_if_insecure(mitigation_options[i].option);469469-319319+ spec_v2_print_cond(mitigation_options[i].option,320320+ mitigation_options[i].secure);470321 return cmd;471471-}472472-473473-static bool stibp_needed(void)474474-{475475- if (spectre_v2_enabled == SPECTRE_V2_NONE)476476- return false;477477-478478- if (!boot_cpu_has(X86_FEATURE_STIBP))479479- return false;480480-481481- return true;482482-}483483-484484-static void update_stibp_msr(void *info)485485-{486486- wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);487487-}488488-489489-void arch_smt_update(void)490490-{491491- u64 mask;492492-493493- if (!stibp_needed())494494- return;495495-496496- mutex_lock(&spec_ctrl_mutex);497497- mask = x86_spec_ctrl_base;498498- if (cpu_smt_control == CPU_SMT_ENABLED)499499- mask |= SPEC_CTRL_STIBP;500500- else501501- mask &= ~SPEC_CTRL_STIBP;502502-503503- if (mask != x86_spec_ctrl_base) {504504- pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",505505- cpu_smt_control == CPU_SMT_ENABLED ?506506- "Enabling" : "Disabling");507507- x86_spec_ctrl_base = mask;508508- on_each_cpu(update_stibp_msr, NULL, 1);509509- }510510- mutex_unlock(&spec_ctrl_mutex);511322}512323513324static void __init spectre_v2_select_mitigation(void)···520417 pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n");521418 goto retpoline_generic;522419 }523523- mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_AMD :524524- SPECTRE_V2_RETPOLINE_MINIMAL_AMD;420420+ mode = SPECTRE_V2_RETPOLINE_AMD;525421 setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD);526422 setup_force_cpu_cap(X86_FEATURE_RETPOLINE);527423 } else {528424 retpoline_generic:529529- mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_GENERIC :530530- SPECTRE_V2_RETPOLINE_MINIMAL;425425+ mode = SPECTRE_V2_RETPOLINE_GENERIC;531426 setup_force_cpu_cap(X86_FEATURE_RETPOLINE);532427 }533428···544443 setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);545444 pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");546445547547- /* Initialize Indirect Branch Prediction Barrier if supported */548548- if (boot_cpu_has(X86_FEATURE_IBPB)) {549549- setup_force_cpu_cap(X86_FEATURE_USE_IBPB);550550- pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");551551- }552552-553446 /*554447 * Retpoline means the kernel is safe because it has no indirect555448 * branches. Enhanced IBRS protects firmware too, so, enable restricted···560465 pr_info("Enabling Restricted Speculation for firmware calls\n");561466 }562467468468+ /* Set up IBPB and STIBP depending on the general spectre V2 command */469469+ spectre_v2_user_select_mitigation(cmd);470470+563471 /* Enable STIBP if appropriate */564472 arch_smt_update();473473+}474474+475475+static void update_stibp_msr(void * __unused)476476+{477477+ wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);478478+}479479+480480+/* Update x86_spec_ctrl_base in case SMT state changed. 
*/481481+static void update_stibp_strict(void)482482+{483483+ u64 mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;484484+485485+ if (sched_smt_active())486486+ mask |= SPEC_CTRL_STIBP;487487+488488+ if (mask == x86_spec_ctrl_base)489489+ return;490490+491491+ pr_info("Update user space SMT mitigation: STIBP %s\n",492492+ mask & SPEC_CTRL_STIBP ? "always-on" : "off");493493+ x86_spec_ctrl_base = mask;494494+ on_each_cpu(update_stibp_msr, NULL, 1);495495+}496496+497497+/* Update the static key controlling the evaluation of TIF_SPEC_IB */498498+static void update_indir_branch_cond(void)499499+{500500+ if (sched_smt_active())501501+ static_branch_enable(&switch_to_cond_stibp);502502+ else503503+ static_branch_disable(&switch_to_cond_stibp);504504+}505505+506506+void arch_smt_update(void)507507+{508508+ /* Enhanced IBRS implies STIBP. No update required. */509509+ if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)510510+ return;511511+512512+ mutex_lock(&spec_ctrl_mutex);513513+514514+ switch (spectre_v2_user) {515515+ case SPECTRE_V2_USER_NONE:516516+ break;517517+ case SPECTRE_V2_USER_STRICT:518518+ update_stibp_strict();519519+ break;520520+ case SPECTRE_V2_USER_PRCTL:521521+ case SPECTRE_V2_USER_SECCOMP:522522+ update_indir_branch_cond();523523+ break;524524+ }525525+526526+ mutex_unlock(&spec_ctrl_mutex);565527}566528567529#undef pr_fmt···635483 SPEC_STORE_BYPASS_CMD_SECCOMP,636484};637485638638-static const char *ssb_strings[] = {486486+static const char * const ssb_strings[] = {639487 [SPEC_STORE_BYPASS_NONE] = "Vulnerable",640488 [SPEC_STORE_BYPASS_DISABLE] = "Mitigation: Speculative Store Bypass disabled",641489 [SPEC_STORE_BYPASS_PRCTL] = "Mitigation: Speculative Store Bypass disabled via prctl",···645493static const struct {646494 const char *option;647495 enum ssb_mitigation_cmd cmd;648648-} ssb_mitigation_options[] = {496496+} ssb_mitigation_options[] __initdata = {649497 { "auto", SPEC_STORE_BYPASS_CMD_AUTO }, /* Platform decides */650498 { "on", SPEC_STORE_BYPASS_CMD_ON }, /* Disable Speculative Store Bypass */651499 { "off", SPEC_STORE_BYPASS_CMD_NONE }, /* Don't touch Speculative Store Bypass */···756604#undef pr_fmt757605#define pr_fmt(fmt) "Speculation prctl: " fmt758606607607+static void task_update_spec_tif(struct task_struct *tsk)608608+{609609+ /* Force the update of the real TIF bits */610610+ set_tsk_thread_flag(tsk, TIF_SPEC_FORCE_UPDATE);611611+612612+ /*613613+ * Immediately update the speculation control MSRs for the current614614+ * task, but for a non-current task delay setting the CPU615615+ * mitigation until it is scheduled next.616616+ *617617+ * This can only happen for SECCOMP mitigation. 
For PRCTL it's618618+ * always the current task.619619+ */620620+ if (tsk == current)621621+ speculation_ctrl_update_current();622622+}623623+759624static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)760625{761761- bool update;762762-763626 if (ssb_mode != SPEC_STORE_BYPASS_PRCTL &&764627 ssb_mode != SPEC_STORE_BYPASS_SECCOMP)765628 return -ENXIO;···785618 if (task_spec_ssb_force_disable(task))786619 return -EPERM;787620 task_clear_spec_ssb_disable(task);788788- update = test_and_clear_tsk_thread_flag(task, TIF_SSBD);621621+ task_update_spec_tif(task);789622 break;790623 case PR_SPEC_DISABLE:791624 task_set_spec_ssb_disable(task);792792- update = !test_and_set_tsk_thread_flag(task, TIF_SSBD);625625+ task_update_spec_tif(task);793626 break;794627 case PR_SPEC_FORCE_DISABLE:795628 task_set_spec_ssb_disable(task);796629 task_set_spec_ssb_force_disable(task);797797- update = !test_and_set_tsk_thread_flag(task, TIF_SSBD);630630+ task_update_spec_tif(task);798631 break;799632 default:800633 return -ERANGE;801634 }635635+ return 0;636636+}802637803803- /*804804- * If being set on non-current task, delay setting the CPU805805- * mitigation until it is next scheduled.806806- */807807- if (task == current && update)808808- speculative_store_bypass_update_current();809809-638638+static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)639639+{640640+ switch (ctrl) {641641+ case PR_SPEC_ENABLE:642642+ if (spectre_v2_user == SPECTRE_V2_USER_NONE)643643+ return 0;644644+ /*645645+ * Indirect branch speculation is always disabled in strict646646+ * mode.647647+ */648648+ if (spectre_v2_user == SPECTRE_V2_USER_STRICT)649649+ return -EPERM;650650+ task_clear_spec_ib_disable(task);651651+ task_update_spec_tif(task);652652+ break;653653+ case PR_SPEC_DISABLE:654654+ case PR_SPEC_FORCE_DISABLE:655655+ /*656656+ * Indirect branch speculation is always allowed when657657+ * mitigation is force disabled.658658+ */659659+ if (spectre_v2_user == SPECTRE_V2_USER_NONE)660660+ return -EPERM;661661+ if (spectre_v2_user == SPECTRE_V2_USER_STRICT)662662+ return 0;663663+ task_set_spec_ib_disable(task);664664+ if (ctrl == PR_SPEC_FORCE_DISABLE)665665+ task_set_spec_ib_force_disable(task);666666+ task_update_spec_tif(task);667667+ break;668668+ default:669669+ return -ERANGE;670670+ }810671 return 0;811672}812673···844649 switch (which) {845650 case PR_SPEC_STORE_BYPASS:846651 return ssb_prctl_set(task, ctrl);652652+ case PR_SPEC_INDIRECT_BRANCH:653653+ return ib_prctl_set(task, ctrl);847654 default:848655 return -ENODEV;849656 }···856659{857660 if (ssb_mode == SPEC_STORE_BYPASS_SECCOMP)858661 ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE);662662+ if (spectre_v2_user == SPECTRE_V2_USER_SECCOMP)663663+ ib_prctl_set(task, PR_SPEC_FORCE_DISABLE);859664}860665#endif861666···880681 }881682}882683684684+static int ib_prctl_get(struct task_struct *task)685685+{686686+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))687687+ return PR_SPEC_NOT_AFFECTED;688688+689689+ switch (spectre_v2_user) {690690+ case SPECTRE_V2_USER_NONE:691691+ return PR_SPEC_ENABLE;692692+ case SPECTRE_V2_USER_PRCTL:693693+ case SPECTRE_V2_USER_SECCOMP:694694+ if (task_spec_ib_force_disable(task))695695+ return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;696696+ if (task_spec_ib_disable(task))697697+ return PR_SPEC_PRCTL | PR_SPEC_DISABLE;698698+ return PR_SPEC_PRCTL | PR_SPEC_ENABLE;699699+ case SPECTRE_V2_USER_STRICT:700700+ return PR_SPEC_DISABLE;701701+ default:702702+ return PR_SPEC_NOT_AFFECTED;703703+ }704704+}705705+883706int 
arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)884707{885708 switch (which) {886709 case PR_SPEC_STORE_BYPASS:887710 return ssb_prctl_get(task);711711+ case PR_SPEC_INDIRECT_BRANCH:712712+ return ib_prctl_get(task);888713 default:889714 return -ENODEV;890715 }···1046823#define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion"10478241048825#if IS_ENABLED(CONFIG_KVM_INTEL)10491049-static const char *l1tf_vmx_states[] = {826826+static const char * const l1tf_vmx_states[] = {1050827 [VMENTER_L1D_FLUSH_AUTO] = "auto",1051828 [VMENTER_L1D_FLUSH_NEVER] = "vulnerable",1052829 [VMENTER_L1D_FLUSH_COND] = "conditional cache flushes",···10628391063840 if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_EPT_DISABLED ||1064841 (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER &&10651065- cpu_smt_control == CPU_SMT_ENABLED))842842+ sched_smt_active())) {1066843 return sprintf(buf, "%s; VMX: %s\n", L1TF_DEFAULT_MSG,1067844 l1tf_vmx_states[l1tf_vmx_mitigation]);845845+ }10688461069847 return sprintf(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG,1070848 l1tf_vmx_states[l1tf_vmx_mitigation],10711071- cpu_smt_control == CPU_SMT_ENABLED ? "vulnerable" : "disabled");849849+ sched_smt_active() ? "vulnerable" : "disabled");1072850}1073851#else1074852static ssize_t l1tf_show_state(char *buf)···1078854}1079855#endif1080856857857+static char *stibp_state(void)858858+{859859+ if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)860860+ return "";861861+862862+ switch (spectre_v2_user) {863863+ case SPECTRE_V2_USER_NONE:864864+ return ", STIBP: disabled";865865+ case SPECTRE_V2_USER_STRICT:866866+ return ", STIBP: forced";867867+ case SPECTRE_V2_USER_PRCTL:868868+ case SPECTRE_V2_USER_SECCOMP:869869+ if (static_key_enabled(&switch_to_cond_stibp))870870+ return ", STIBP: conditional";871871+ }872872+ return "";873873+}874874+875875+static char *ibpb_state(void)876876+{877877+ if (boot_cpu_has(X86_FEATURE_IBPB)) {878878+ if (static_key_enabled(&switch_mm_always_ibpb))879879+ return ", IBPB: always-on";880880+ if (static_key_enabled(&switch_mm_cond_ibpb))881881+ return ", IBPB: conditional";882882+ return ", IBPB: disabled";883883+ }884884+ return "";885885+}886886+1081887static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,1082888 char *buf, unsigned int bug)1083889{10841084- int ret;10851085-1086890 if (!boot_cpu_has_bug(bug))1087891 return sprintf(buf, "Not affected\n");1088892···1128876 return sprintf(buf, "Mitigation: __user pointer sanitization\n");11298771130878 case X86_BUG_SPECTRE_V2:11311131- ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],11321132- boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",879879+ return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],880880+ ibpb_state(),1133881 boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",11341134- (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",882882+ stibp_state(),1135883 boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",1136884 spectre_v2_module_string());11371137- return ret;11388851139886 case X86_BUG_SPEC_STORE_BYPASS:1140887 return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
+4-2
arch/x86/kernel/cpu/mcheck/mce.c
···485485 * be somewhat complicated (e.g. segment offset would require an instruction486486 * parser). So only support physical addresses up to page granuality for now.487487 */488488-static int mce_usable_address(struct mce *m)488488+int mce_usable_address(struct mce *m)489489{490490 if (!(m->status & MCI_STATUS_ADDRV))491491 return 0;···505505506506 return 1;507507}508508+EXPORT_SYMBOL_GPL(mce_usable_address);508509509510bool mce_is_memory_error(struct mce *m)510511{···535534}536535EXPORT_SYMBOL_GPL(mce_is_memory_error);537536538538-static bool mce_is_correctable(struct mce *m)537537+bool mce_is_correctable(struct mce *m)539538{540539 if (m->cpuvendor == X86_VENDOR_AMD && m->status & MCI_STATUS_DEFERRED)541540 return false;···548547549548 return true;550549}550550+EXPORT_SYMBOL_GPL(mce_is_correctable);551551552552static bool cec_add_mce(struct mce *m)553553{
+6-13
arch/x86/kernel/cpu/mcheck/mce_amd.c
···5656/* Threshold LVT offset is at MSR0xC0000410[15:12] */5757#define SMCA_THR_LVT_OFF 0xF00058585959-static bool thresholding_en;5959+static bool thresholding_irq_en;60606161static const char * const th_names[] = {6262 "load_store",···534534535535set_offset:536536 offset = setup_APIC_mce_threshold(offset, new);537537-538538- if ((offset == new) && (mce_threshold_vector != amd_threshold_interrupt))539539- mce_threshold_vector = amd_threshold_interrupt;537537+ if (offset == new)538538+ thresholding_irq_en = true;540539541540done:542541 mce_threshold_block_init(&b, offset);···13561357{13571358 unsigned int bank;1358135913591359- if (!thresholding_en)13601360- return 0;13611361-13621360 for (bank = 0; bank < mca_cfg.banks; ++bank) {13631361 if (!(per_cpu(bank_map, cpu) & (1 << bank)))13641362 continue;···13721376 unsigned int bank;13731377 struct threshold_bank **bp;13741378 int err = 0;13751375-13761376- if (!thresholding_en)13771377- return 0;1378137913791380 bp = per_cpu(threshold_banks, cpu);13801381 if (bp)···14011408{14021409 unsigned lcpu = 0;1403141014041404- if (mce_threshold_vector == amd_threshold_interrupt)14051405- thresholding_en = true;14061406-14071411 /* to hit CPUs online before the notifier is up */14081412 for_each_online_cpu(lcpu) {14091413 int err = mce_threshold_create_device(lcpu);···14081418 if (err)14091419 return err;14101420 }14211421+14221422+ if (thresholding_irq_en)14231423+ mce_threshold_vector = amd_threshold_interrupt;1411142414121425 return 0;14131426}
+11
arch/x86/kernel/cpu/mshyperv.c
···2020#include <linux/interrupt.h>2121#include <linux/irq.h>2222#include <linux/kexec.h>2323+#include <linux/i8253.h>2324#include <asm/processor.h>2425#include <asm/hypervisor.h>2526#include <asm/hyperv-tlfs.h>···295294 */296295 if (efi_enabled(EFI_BOOT))297296 x86_platform.get_nmi_reason = hv_get_nmi_reason;297297+298298+ /*299299+ * Hyper-V VMs have a PIT emulation quirk such that zeroing the300300+ * counter register during PIT shutdown restarts the PIT. So it301301+ * continues to interrupt @18.2 HZ. Setting i8253_clear_counter302302+ * to false tells pit_shutdown() not to zero the counter so that303303+ * the PIT really is shutdown. Generation 2 VMs don't have a PIT,304304+ * and setting this value has no effect.305305+ */306306+ i8253_clear_counter_on_shutdown = false;298307299308#if IS_ENABLED(CONFIG_HYPERV)300309 /*
+1-1
arch/x86/kernel/cpu/vmware.c
···7777}7878early_param("no-vmw-sched-clock", setup_vmw_sched_clock);79798080-static unsigned long long vmware_sched_clock(void)8080+static unsigned long long notrace vmware_sched_clock(void)8181{8282 unsigned long long ns;8383
···457457 if (!boot_params.hdr.version)458458 copy_bootdata(__va(real_mode_data));459459460460- x86_verify_bootdata_version();461461-462460 x86_early_init_platform_quirks();463461464462 switch (boot_params.hdr.hardware_subarch) {
+38-21
arch/x86/kernel/ldt.c
···199199/*200200 * If PTI is enabled, this maps the LDT into the kernelmode and201201 * usermode tables for the given mm.202202- *203203- * There is no corresponding unmap function. Even if the LDT is freed, we204204- * leave the PTEs around until the slot is reused or the mm is destroyed.205205- * This is harmless: the LDT is always in ordinary memory, and no one will206206- * access the freed slot.207207- *208208- * If we wanted to unmap freed LDTs, we'd also need to do a flush to make209209- * it useful, and the flush would slow down modify_ldt().210202 */211203static int212204map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)···206214 unsigned long va;207215 bool is_vmalloc;208216 spinlock_t *ptl;209209- pgd_t *pgd;210210- int i;217217+ int i, nr_pages;211218212219 if (!static_cpu_has(X86_FEATURE_PTI))213220 return 0;···220229 /* Check if the current mappings are sane */221230 sanity_check_ldt_mapping(mm);222231223223- /*224224- * Did we already have the top level entry allocated? We can't225225- * use pgd_none() for this because it doens't do anything on226226- * 4-level page table kernels.227227- */228228- pgd = pgd_offset(mm, LDT_BASE_ADDR);229229-230232 is_vmalloc = is_vmalloc_addr(ldt->entries);231233232232- for (i = 0; i * PAGE_SIZE < ldt->nr_entries * LDT_ENTRY_SIZE; i++) {234234+ nr_pages = DIV_ROUND_UP(ldt->nr_entries * LDT_ENTRY_SIZE, PAGE_SIZE);235235+236236+ for (i = 0; i < nr_pages; i++) {233237 unsigned long offset = i << PAGE_SHIFT;234238 const void *src = (char *)ldt->entries + offset;235239 unsigned long pfn;···258272 /* Propagate LDT mapping to the user page-table */259273 map_ldt_struct_to_user(mm);260274261261- va = (unsigned long)ldt_slot_va(slot);262262- flush_tlb_mm_range(mm, va, va + LDT_SLOT_STRIDE, PAGE_SHIFT, false);263263-264275 ldt->slot = slot;265276 return 0;277277+}278278+279279+static void unmap_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt)280280+{281281+ unsigned long va;282282+ int i, nr_pages;283283+284284+ if (!ldt)285285+ return;286286+287287+ /* LDT map/unmap is only required for PTI */288288+ if (!static_cpu_has(X86_FEATURE_PTI))289289+ return;290290+291291+ nr_pages = DIV_ROUND_UP(ldt->nr_entries * LDT_ENTRY_SIZE, PAGE_SIZE);292292+293293+ for (i = 0; i < nr_pages; i++) {294294+ unsigned long offset = i << PAGE_SHIFT;295295+ spinlock_t *ptl;296296+ pte_t *ptep;297297+298298+ va = (unsigned long)ldt_slot_va(ldt->slot) + offset;299299+ ptep = get_locked_pte(mm, va, &ptl);300300+ pte_clear(mm, va, ptep);301301+ pte_unmap_unlock(ptep, ptl);302302+ }303303+304304+ va = (unsigned long)ldt_slot_va(ldt->slot);305305+ flush_tlb_mm_range(mm, va, va + nr_pages * PAGE_SIZE, PAGE_SHIFT, false);266306}267307268308#else /* !CONFIG_PAGE_TABLE_ISOLATION */···297285map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)298286{299287 return 0;288288+}289289+290290+static void unmap_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt)291291+{300292}301293#endif /* CONFIG_PAGE_TABLE_ISOLATION */302294···540524 }541525542526 install_ldt(mm, new_ldt);527527+ unmap_ldt_struct(mm, old_ldt);543528 free_ldt_struct(old_ldt);544529 error = 0;545530
+82-19
arch/x86/kernel/process.c
···4040#include <asm/prctl.h>4141#include <asm/spec-ctrl.h>42424343+#include "process.h"4444+4345/*4446 * per-CPU TSS segments. Threads are completely 'soft' on Linux,4547 * no more per-task TSS's. The TSS size is kept cacheline-aligned···254252 enable_cpuid();255253}256254257257-static inline void switch_to_bitmap(struct tss_struct *tss,258258- struct thread_struct *prev,255255+static inline void switch_to_bitmap(struct thread_struct *prev,259256 struct thread_struct *next,260257 unsigned long tifp, unsigned long tifn)261258{259259+ struct tss_struct *tss = this_cpu_ptr(&cpu_tss_rw);260260+262261 if (tifn & _TIF_IO_BITMAP) {263262 /*264263 * Copy the relevant range of the IO bitmap.···398395 wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, ssbd_tif_to_spec_ctrl(tifn));399396}400397401401-static __always_inline void intel_set_ssb_state(unsigned long tifn)398398+/*399399+ * Update the MSRs managing speculation control, during context switch.400400+ *401401+ * tifp: Previous task's thread flags402402+ * tifn: Next task's thread flags403403+ */404404+static __always_inline void __speculation_ctrl_update(unsigned long tifp,405405+ unsigned long tifn)402406{403403- u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);407407+ unsigned long tif_diff = tifp ^ tifn;408408+ u64 msr = x86_spec_ctrl_base;409409+ bool updmsr = false;404410405405- wrmsrl(MSR_IA32_SPEC_CTRL, msr);411411+ /*412412+ * If TIF_SSBD is different, select the proper mitigation413413+ * method. Note that if SSBD mitigation is disabled or permanentely414414+ * enabled this branch can't be taken because nothing can set415415+ * TIF_SSBD.416416+ */417417+ if (tif_diff & _TIF_SSBD) {418418+ if (static_cpu_has(X86_FEATURE_VIRT_SSBD)) {419419+ amd_set_ssb_virt_state(tifn);420420+ } else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {421421+ amd_set_core_ssb_state(tifn);422422+ } else if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||423423+ static_cpu_has(X86_FEATURE_AMD_SSBD)) {424424+ msr |= ssbd_tif_to_spec_ctrl(tifn);425425+ updmsr = true;426426+ }427427+ }428428+429429+ /*430430+ * Only evaluate TIF_SPEC_IB if conditional STIBP is enabled,431431+ * otherwise avoid the MSR write.432432+ */433433+ if (IS_ENABLED(CONFIG_SMP) &&434434+ static_branch_unlikely(&switch_to_cond_stibp)) {435435+ updmsr |= !!(tif_diff & _TIF_SPEC_IB);436436+ msr |= stibp_tif_to_spec_ctrl(tifn);437437+ }438438+439439+ if (updmsr)440440+ wrmsrl(MSR_IA32_SPEC_CTRL, msr);406441}407442408408-static __always_inline void __speculative_store_bypass_update(unsigned long tifn)443443+static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk)409444{410410- if (static_cpu_has(X86_FEATURE_VIRT_SSBD))411411- amd_set_ssb_virt_state(tifn);412412- else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))413413- amd_set_core_ssb_state(tifn);414414- else415415- intel_set_ssb_state(tifn);445445+ if (test_and_clear_tsk_thread_flag(tsk, TIF_SPEC_FORCE_UPDATE)) {446446+ if (task_spec_ssb_disable(tsk))447447+ set_tsk_thread_flag(tsk, TIF_SSBD);448448+ else449449+ clear_tsk_thread_flag(tsk, TIF_SSBD);450450+451451+ if (task_spec_ib_disable(tsk))452452+ set_tsk_thread_flag(tsk, TIF_SPEC_IB);453453+ else454454+ clear_tsk_thread_flag(tsk, TIF_SPEC_IB);455455+ }456456+ /* Return the updated threadinfo flags*/457457+ return task_thread_info(tsk)->flags;416458}417459418418-void speculative_store_bypass_update(unsigned long tif)460460+void speculation_ctrl_update(unsigned long tif)419461{462462+ /* Forced update. 
Make sure all relevant TIF flags are different */420463 preempt_disable();421421- __speculative_store_bypass_update(tif);464464+ __speculation_ctrl_update(~tif, tif);422465 preempt_enable();423466}424467425425-void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,426426- struct tss_struct *tss)468468+/* Called from seccomp/prctl update */469469+void speculation_ctrl_update_current(void)470470+{471471+ preempt_disable();472472+ speculation_ctrl_update(speculation_ctrl_update_tif(current));473473+ preempt_enable();474474+}475475+476476+void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)427477{428478 struct thread_struct *prev, *next;429479 unsigned long tifp, tifn;···486430487431 tifn = READ_ONCE(task_thread_info(next_p)->flags);488432 tifp = READ_ONCE(task_thread_info(prev_p)->flags);489489- switch_to_bitmap(tss, prev, next, tifp, tifn);433433+ switch_to_bitmap(prev, next, tifp, tifn);490434491435 propagate_user_return_notify(prev_p, next_p);492436···507451 if ((tifp ^ tifn) & _TIF_NOCPUID)508452 set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));509453510510- if ((tifp ^ tifn) & _TIF_SSBD)511511- __speculative_store_bypass_update(tifn);454454+ if (likely(!((tifp | tifn) & _TIF_SPEC_FORCE_UPDATE))) {455455+ __speculation_ctrl_update(tifp, tifn);456456+ } else {457457+ speculation_ctrl_update_tif(prev_p);458458+ tifn = speculation_ctrl_update_tif(next_p);459459+460460+ /* Enforce MSR update to ensure consistent state */461461+ __speculation_ctrl_update(~tifn, tifn);462462+ }512463}513464514465/*
+39
arch/x86/kernel/process.h
···11+// SPDX-License-Identifier: GPL-2.022+//33+// Code shared between 32 and 64 bit44+55+#include <asm/spec-ctrl.h>66+77+void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p);88+99+/*1010+ * This needs to be inline to optimize for the common case where no extra1111+ * work needs to be done.1212+ */1313+static inline void switch_to_extra(struct task_struct *prev,1414+ struct task_struct *next)1515+{1616+ unsigned long next_tif = task_thread_info(next)->flags;1717+ unsigned long prev_tif = task_thread_info(prev)->flags;1818+1919+ if (IS_ENABLED(CONFIG_SMP)) {2020+ /*2121+ * Avoid __switch_to_xtra() invocation when conditional2222+ * STIPB is disabled and the only different bit is2323+ * TIF_SPEC_IB. For CONFIG_SMP=n TIF_SPEC_IB is not2424+ * in the TIF_WORK_CTXSW masks.2525+ */2626+ if (!static_branch_likely(&switch_to_cond_stibp)) {2727+ prev_tif &= ~_TIF_SPEC_IB;2828+ next_tif &= ~_TIF_SPEC_IB;2929+ }3030+ }3131+3232+ /*3333+ * __switch_to_xtra() handles debug registers, i/o bitmaps,3434+ * speculation mitigations etc.3535+ */3636+ if (unlikely(next_tif & _TIF_WORK_CTXSW_NEXT ||3737+ prev_tif & _TIF_WORK_CTXSW_PREV))3838+ __switch_to_xtra(prev, next);3939+}
+3-7
arch/x86/kernel/process_32.c
···5959#include <asm/intel_rdt_sched.h>6060#include <asm/proto.h>61616262+#include "process.h"6363+6264void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)6365{6466 unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L;···234232 struct fpu *prev_fpu = &prev->fpu;235233 struct fpu *next_fpu = &next->fpu;236234 int cpu = smp_processor_id();237237- struct tss_struct *tss = &per_cpu(cpu_tss_rw, cpu);238235239236 /* never put a printk in __switch_to... printk() calls wake_up*() indirectly */240237···265264 if (get_kernel_rpl() && unlikely(prev->iopl != next->iopl))266265 set_iopl_mask(next->iopl);267266268268- /*269269- * Now maybe handle debug registers and/or IO bitmaps270270- */271271- if (unlikely(task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV ||272272- task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT))273273- __switch_to_xtra(prev_p, next_p, tss);267267+ switch_to_extra(prev_p, next_p);274268275269 /*276270 * Leave lazy mode, flushing any hypercalls made here.
+3-7
arch/x86/kernel/process_64.c
···6060#include <asm/unistd_32_ia32.h>6161#endif62626363+#include "process.h"6464+6365/* Prints also some state that isn't saved in the pt_regs */6466void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)6567{···555553 struct fpu *prev_fpu = &prev->fpu;556554 struct fpu *next_fpu = &next->fpu;557555 int cpu = smp_processor_id();558558- struct tss_struct *tss = &per_cpu(cpu_tss_rw, cpu);559556560557 WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ENTRY) &&561558 this_cpu_read(irq_count) != -1);···618617 /* Reload sp0. */619618 update_task_stack(next_p);620619621621- /*622622- * Now maybe reload the debug registers and handle I/O bitmaps623623- */624624- if (unlikely(task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT ||625625- task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV))626626- __switch_to_xtra(prev_p, next_p, tss);620620+ switch_to_extra(prev_p, next_p);627621628622#ifdef CONFIG_XEN_PV629623 /*
-17
arch/x86/kernel/setup.c
···12801280 unwind_init();12811281}1282128212831283-/*12841284- * From boot protocol 2.14 onwards we expect the bootloader to set the12851285- * version to "0x8000 | <used version>". In case we find a version >= 2.1412861286- * without the 0x8000 we assume the boot loader supports 2.13 only and12871287- * reset the version accordingly. The 0x8000 flag is removed in any case.12881288- */12891289-void __init x86_verify_bootdata_version(void)12901290-{12911291- if (boot_params.hdr.version & VERSION_WRITTEN)12921292- boot_params.hdr.version &= ~VERSION_WRITTEN;12931293- else if (boot_params.hdr.version >= 0x020e)12941294- boot_params.hdr.version = 0x020d;12951295-12961296- if (boot_params.hdr.version < 0x020e)12971297- boot_params.hdr.acpi_rsdp_addr = 0;12981298-}12991299-13001283#ifdef CONFIG_X86_321301128413021285static struct resource video_ram_resource = {
+7-77
arch/x86/kernel/vsmp_64.c
···26262727#define TOPOLOGY_REGISTER_OFFSET 0x1028282929-#if defined CONFIG_PCI && defined CONFIG_PARAVIRT_XXL3030-/*3131- * Interrupt control on vSMPowered systems:3232- * ~AC is a shadow of IF. If IF is 'on' AC should be 'off'3333- * and vice versa.3434- */3535-3636-asmlinkage __visible unsigned long vsmp_save_fl(void)3737-{3838- unsigned long flags = native_save_fl();3939-4040- if (!(flags & X86_EFLAGS_IF) || (flags & X86_EFLAGS_AC))4141- flags &= ~X86_EFLAGS_IF;4242- return flags;4343-}4444-PV_CALLEE_SAVE_REGS_THUNK(vsmp_save_fl);4545-4646-__visible void vsmp_restore_fl(unsigned long flags)4747-{4848- if (flags & X86_EFLAGS_IF)4949- flags &= ~X86_EFLAGS_AC;5050- else5151- flags |= X86_EFLAGS_AC;5252- native_restore_fl(flags);5353-}5454-PV_CALLEE_SAVE_REGS_THUNK(vsmp_restore_fl);5555-5656-asmlinkage __visible void vsmp_irq_disable(void)5757-{5858- unsigned long flags = native_save_fl();5959-6060- native_restore_fl((flags & ~X86_EFLAGS_IF) | X86_EFLAGS_AC);6161-}6262-PV_CALLEE_SAVE_REGS_THUNK(vsmp_irq_disable);6363-6464-asmlinkage __visible void vsmp_irq_enable(void)6565-{6666- unsigned long flags = native_save_fl();6767-6868- native_restore_fl((flags | X86_EFLAGS_IF) & (~X86_EFLAGS_AC));6969-}7070-PV_CALLEE_SAVE_REGS_THUNK(vsmp_irq_enable);7171-7272-static unsigned __init vsmp_patch(u8 type, void *ibuf,7373- unsigned long addr, unsigned len)7474-{7575- switch (type) {7676- case PARAVIRT_PATCH(irq.irq_enable):7777- case PARAVIRT_PATCH(irq.irq_disable):7878- case PARAVIRT_PATCH(irq.save_fl):7979- case PARAVIRT_PATCH(irq.restore_fl):8080- return paravirt_patch_default(type, ibuf, addr, len);8181- default:8282- return native_patch(type, ibuf, addr, len);8383- }8484-8585-}8686-8787-static void __init set_vsmp_pv_ops(void)2929+#ifdef CONFIG_PCI3030+static void __init set_vsmp_ctl(void)8831{8932 void __iomem *address;9033 unsigned int cap, ctl, cfg;···52109 }53110#endif541115555- if (cap & ctl & (1 << 4)) {5656- /* Setup irq ops and turn on vSMP IRQ fastpath handling */5757- pv_ops.irq.irq_disable = PV_CALLEE_SAVE(vsmp_irq_disable);5858- pv_ops.irq.irq_enable = PV_CALLEE_SAVE(vsmp_irq_enable);5959- pv_ops.irq.save_fl = PV_CALLEE_SAVE(vsmp_save_fl);6060- pv_ops.irq.restore_fl = PV_CALLEE_SAVE(vsmp_restore_fl);6161- pv_ops.init.patch = vsmp_patch;6262- ctl &= ~(1 << 4);6363- }64112 writel(ctl, address + 4);65113 ctl = readl(address + 4);66114 pr_info("vSMP CTL: control set to:0x%08x\n", ctl);6711568116 early_iounmap(address, 8);69117}7070-#else7171-static void __init set_vsmp_pv_ops(void)7272-{7373-}7474-#endif7575-7676-#ifdef CONFIG_PCI77118static int is_vsmp = -1;7811979120static void __init detect_vsmp_box(void)···91164{92165 return 0;93166}167167+static void __init set_vsmp_ctl(void)168168+{169169+}94170#endif9517196172static void __init vsmp_cap_cpus(void)97173{9898-#if !defined(CONFIG_X86_VSMP) && defined(CONFIG_SMP)174174+#if !defined(CONFIG_X86_VSMP) && defined(CONFIG_SMP) && defined(CONFIG_PCI)99175 void __iomem *address;100176 unsigned int cfg, topology, node_shift, maxcpus;101177···151221152222 vsmp_cap_cpus();153223154154- set_vsmp_pv_ops();224224+ set_vsmp_ctl();155225 return;156226}
+6-1
arch/x86/kvm/lapic.c
···5555#define PRIo64 "o"56565757/* #define apic_debug(fmt,arg...) printk(KERN_WARNING fmt,##arg) */5858-#define apic_debug(fmt, arg...)5858+#define apic_debug(fmt, arg...) do {} while (0)59596060/* 14 is the version for Xeon and Pentium 8.4.8*/6161#define APIC_VERSION (0x14UL | ((KVM_APIC_LVT_NUM - 1) << 16))···575575576576 rcu_read_lock();577577 map = rcu_dereference(kvm->arch.apic_map);578578+579579+ if (unlikely(!map)) {580580+ count = -EOPNOTSUPP;581581+ goto out;582582+ }578583579584 if (min > map->max_apic_id)580585 goto out;
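The apic_debug() change above is the standard fix for statement-like macros that may expand to nothing. A minimal userspace sketch (the dbg() macro is invented) of why a "do {} while (0)" body is safer than an empty expansion:

    #include <stdio.h>

    /*
     * With a truly empty expansion, "if (x) dbg(...);" collapses to
     * "if (x) ;", which trips -Wempty-body and reads ambiguously next
     * to an else. "do {} while (0)" is a real statement that cleanly
     * consumes the trailing semicolon in every context.
     */
    #define dbg(fmt, ...) do {} while (0)

    int main(void)
    {
        int fail = 0;

        if (fail)
            dbg("something went wrong: %d\n", fail);
        else
            printf("all good\n");

        return 0;
    }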
+9-18
arch/x86/kvm/mmu.c
···50745074}5075507550765076static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,50775077- const u8 *new, int *bytes)50775077+ int *bytes)50785078{50795079- u64 gentry;50795079+ u64 gentry = 0;50805080 int r;5081508150825082 /*···50885088 /* Handle a 32-bit guest writing two halves of a 64-bit gpte */50895089 *gpa &= ~(gpa_t)7;50905090 *bytes = 8;50915091- r = kvm_vcpu_read_guest(vcpu, *gpa, &gentry, 8);50925092- if (r)50935093- gentry = 0;50945094- new = (const u8 *)&gentry;50955091 }5096509250975097- switch (*bytes) {50985098- case 4:50995099- gentry = *(const u32 *)new;51005100- break;51015101- case 8:51025102- gentry = *(const u64 *)new;51035103- break;51045104- default:51055105- gentry = 0;51065106- break;50935093+ if (*bytes == 4 || *bytes == 8) {50945094+ r = kvm_vcpu_read_guest_atomic(vcpu, *gpa, &gentry, *bytes);50955095+ if (r)50965096+ gentry = 0;51075097 }5108509851095099 return gentry;···5197520751985208 pgprintk("%s: gpa %llx bytes %d\n", __func__, gpa, bytes);5199520952005200- gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, new, &bytes);52015201-52025210 /*52035211 * No need to care whether memory allocation is successful52045212 * or not since pte prefetch is skipped if it does not have···52055217 mmu_topup_memory_caches(vcpu);5206521852075219 spin_lock(&vcpu->kvm->mmu_lock);52205220+52215221+ gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, &bytes);52225222+52085223 ++vcpu->kvm->stat.mmu_pte_write;52095224 kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE);52105225
+29-15
arch/x86/kvm/svm.c
···14461446 return vcpu->arch.tsc_offset;14471447}1448144814491449-static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)14491449+static u64 svm_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)14501450{14511451 struct vcpu_svm *svm = to_svm(vcpu);14521452 u64 g_tsc_offset = 0;···14641464 svm->vmcb->control.tsc_offset = offset + g_tsc_offset;1465146514661466 mark_dirty(svm->vmcb, VMCB_INTERCEPTS);14671467+ return svm->vmcb->control.tsc_offset;14671468}1468146914691470static void avic_init_vmcb(struct vcpu_svm *svm)···16651664static int avic_init_access_page(struct kvm_vcpu *vcpu)16661665{16671666 struct kvm *kvm = vcpu->kvm;16681668- int ret;16671667+ int ret = 0;1669166816691669+ mutex_lock(&kvm->slots_lock);16701670 if (kvm->arch.apic_access_page_done)16711671- return 0;16711671+ goto out;1672167216731673- ret = x86_set_memory_region(kvm,16741674- APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,16751675- APIC_DEFAULT_PHYS_BASE,16761676- PAGE_SIZE);16731673+ ret = __x86_set_memory_region(kvm,16741674+ APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,16751675+ APIC_DEFAULT_PHYS_BASE,16761676+ PAGE_SIZE);16771677 if (ret)16781678- return ret;16781678+ goto out;1679167916801680 kvm->arch.apic_access_page_done = true;16811681- return 0;16811681+out:16821682+ mutex_unlock(&kvm->slots_lock);16831683+ return ret;16821684}1683168516841686static int avic_init_backing_page(struct kvm_vcpu *vcpu)···21932189 return ERR_PTR(err);21942190}2195219121922192+static void svm_clear_current_vmcb(struct vmcb *vmcb)21932193+{21942194+ int i;21952195+21962196+ for_each_online_cpu(i)21972197+ cmpxchg(&per_cpu(svm_data, i)->current_vmcb, vmcb, NULL);21982198+}21992199+21962200static void svm_free_vcpu(struct kvm_vcpu *vcpu)21972201{21982202 struct vcpu_svm *svm = to_svm(vcpu);22032203+22042204+ /*22052205+ * The vmcb page can be recycled, causing a false negative in22062206+ * svm_vcpu_load(). So, ensure that no logical CPU has this22072207+ * vmcb page recorded as its current vmcb.22082208+ */22092209+ svm_clear_current_vmcb(svm->vmcb);2199221022002211 __free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT));22012212 __free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);···22182199 __free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);22192200 kvm_vcpu_uninit(vcpu);22202201 kmem_cache_free(kvm_vcpu_cache, svm);22212221- /*22222222- * The vmcb page can be recycled, causing a false negative in22232223- * svm_vcpu_load(). So do a full IBPB now.22242224- */22252225- indirect_branch_prediction_barrier();22262202}2227220322282204static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)···71637149 .has_wbinvd_exit = svm_has_wbinvd_exit,7164715071657151 .read_l1_tsc_offset = svm_read_l1_tsc_offset,71667166- .write_tsc_offset = svm_write_tsc_offset,71527152+ .write_l1_tsc_offset = svm_write_l1_tsc_offset,7167715371687154 .set_tdp_cr3 = set_tdp_cr3,71697155
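The svm_clear_current_vmcb() hunk trades an unconditional IBPB on vCPU free for a targeted scrub of the per-CPU "current vmcb" cache, so a recycled page can never be mistaken for one that is still loaded. A simplified single-file analogue (all names invented) built on C11 atomics:

    #include <stdatomic.h>
    #include <stddef.h>

    #define NR_CPUS 4

    /* Per-CPU cache of the object most recently loaded on that CPU. */
    static _Atomic(void *) current_obj[NR_CPUS];

    /*
     * On free, scrub every per-CPU slot that still points at @obj.
     * The compare-and-swap clears only matching entries; slots that
     * already moved on to another object are left untouched.
     */
    static void clear_current_obj(void *obj)
    {
        for (int i = 0; i < NR_CPUS; i++) {
            void *expected = obj;

            atomic_compare_exchange_strong(&current_obj[i], &expected, NULL);
        }
    }

    /*
     * Load path: skip expensive re-initialization only when the cached
     * pointer still refers to a live object. Without the scrub above,
     * a freed-and-recycled allocation could alias the cached pointer
     * and wrongly skip this work.
     */
    static int obj_load(int cpu, void *obj)
    {
        if (atomic_load(&current_obj[cpu]) == obj)
            return 0;               /* nothing to do */
        atomic_store(&current_obj[cpu], obj);
        return 1;                   /* full (re)initialization needed */
    }

    int main(void)
    {
        int obj;

        atomic_store(&current_obj[2], &obj);
        clear_current_obj(&obj);    /* slot 2 is NULL again */
        return obj_load(2, &obj) == 1 ? 0 : 1;
    }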
+65-33
arch/x86/kvm/vmx.c
···174174 * refer SDM volume 3b section 21.6.13 & 22.1.3.175175 */176176static unsigned int ple_gap = KVM_DEFAULT_PLE_GAP;177177+module_param(ple_gap, uint, 0444);177178178179static unsigned int ple_window = KVM_VMX_DEFAULT_PLE_WINDOW;179180module_param(ple_window, uint, 0444);···985984 struct shared_msr_entry *guest_msrs;986985 int nmsrs;987986 int save_nmsrs;987987+ bool guest_msrs_dirty;988988 unsigned long host_idt_base;989989#ifdef CONFIG_X86_64990990 u64 msr_host_kernel_gs_base;···13081306static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12,13091307 u16 error_code);13101308static void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu);13111311-static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,13091309+static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,13121310 u32 msr, int type);1313131113141312static DEFINE_PER_CPU(struct vmcs *, vmxarea);···16121610{16131611 struct vcpu_vmx *vmx = to_vmx(vcpu);1614161216151615- /* We don't support disabling the feature for simplicity. */16161616- if (vmx->nested.enlightened_vmcs_enabled)16171617- return 0;16181618-16191619- vmx->nested.enlightened_vmcs_enabled = true;16201620-16211613 /*16221614 * vmcs_version represents the range of supported Enlightened VMCS16231615 * versions: lower 8 bits is the minimal version, higher 8 bits is the···16201624 */16211625 if (vmcs_version)16221626 *vmcs_version = (KVM_EVMCS_VERSION << 8) | 1;16271627+16281628+ /* We don't support disabling the feature for simplicity. */16291629+ if (vmx->nested.enlightened_vmcs_enabled)16301630+ return 0;16311631+16321632+ vmx->nested.enlightened_vmcs_enabled = true;1623163316241634 vmx->nested.msrs.pinbased_ctls_high &= ~EVMCS1_UNSUPPORTED_PINCTRL;16251635 vmx->nested.msrs.entry_ctls_high &= ~EVMCS1_UNSUPPORTED_VMENTRY_CTRL;···2899289729002898 vmx->req_immediate_exit = false;2901289929002900+ /*29012901+ * Note that guest MSRs to be saved/restored can also be changed29022902+ * when guest state is loaded. This happens when guest transitions29032903+ * to/from long-mode by setting MSR_EFER.LMA.29042904+ */29052905+ if (!vmx->loaded_cpu_state || vmx->guest_msrs_dirty) {29062906+ vmx->guest_msrs_dirty = false;29072907+ for (i = 0; i < vmx->save_nmsrs; ++i)29082908+ kvm_set_shared_msr(vmx->guest_msrs[i].index,29092909+ vmx->guest_msrs[i].data,29102910+ vmx->guest_msrs[i].mask);29112911+29122912+ }29132913+29022914 if (vmx->loaded_cpu_state)29032915 return;29042916···29732957 vmcs_writel(HOST_GS_BASE, gs_base);29742958 host_state->gs_base = gs_base;29752959 }29762976-29772977- for (i = 0; i < vmx->save_nmsrs; ++i)29782978- kvm_set_shared_msr(vmx->guest_msrs[i].index,29792979- vmx->guest_msrs[i].data,29802980- vmx->guest_msrs[i].mask);29812960}2982296129832962static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)···34473436 move_msr_up(vmx, index, save_nmsrs++);3448343734493438 vmx->save_nmsrs = save_nmsrs;34393439+ vmx->guest_msrs_dirty = true;3450344034513441 if (cpu_has_vmx_msr_bitmap())34523442 vmx_update_msr_bitmap(&vmx->vcpu);···34643452 return vcpu->arch.tsc_offset;34653453}3466345434673467-/*34683468- * writes 'offset' into guest's timestamp counter offset register34693469- */34703470-static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)34553455+static u64 vmx_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)34713456{34573457+ u64 active_offset = offset;34723458 if (is_guest_mode(vcpu)) {34733459 /*34743460 * We're here if L1 chose not to trap WRMSR to TSC. 
According···34743464 * set for L2 remains unchanged, and still needs to be added34753465 * to the newly set TSC to get L2's TSC.34763466 */34773477- struct vmcs12 *vmcs12;34783478- /* recalculate vmcs02.TSC_OFFSET: */34793479- vmcs12 = get_vmcs12(vcpu);34803480- vmcs_write64(TSC_OFFSET, offset +34813481- (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING) ?34823482- vmcs12->tsc_offset : 0));34673467+ struct vmcs12 *vmcs12 = get_vmcs12(vcpu);34683468+ if (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING))34693469+ active_offset += vmcs12->tsc_offset;34833470 } else {34843471 trace_kvm_write_tsc_offset(vcpu->vcpu_id,34853472 vmcs_read64(TSC_OFFSET), offset);34863486- vmcs_write64(TSC_OFFSET, offset);34873473 }34743474+34753475+ vmcs_write64(TSC_OFFSET, active_offset);34763476+ return active_offset;34883477}3489347834903479/*···59535944 spin_unlock(&vmx_vpid_lock);59545945}5955594659565956-static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,59475947+static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,59575948 u32 msr, int type)59585949{59595950 int f = sizeof(unsigned long);···59915982 }59925983}5993598459945994-static void __always_inline vmx_enable_intercept_for_msr(unsigned long *msr_bitmap,59855985+static __always_inline void vmx_enable_intercept_for_msr(unsigned long *msr_bitmap,59955986 u32 msr, int type)59965987{59975988 int f = sizeof(unsigned long);···60296020 }60306021}6031602260326032-static void __always_inline vmx_set_intercept_for_msr(unsigned long *msr_bitmap,60236023+static __always_inline void vmx_set_intercept_for_msr(unsigned long *msr_bitmap,60336024 u32 msr, int type, bool value)60346025{60356026 if (value)···86738664 struct vmcs12 *vmcs12 = vmx->nested.cached_vmcs12;86748665 struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs;8675866686768676- vmcs12->hdr.revision_id = evmcs->revision_id;86778677-86788667 /* HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE */86798668 vmcs12->tpr_threshold = evmcs->tpr_threshold;86808669 vmcs12->guest_rip = evmcs->guest_rip;···9376936993779370 vmx->nested.hv_evmcs = kmap(vmx->nested.hv_evmcs_page);9378937193799379- if (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION) {93729372+ /*93739373+ * Currently, KVM only supports eVMCS version 193749374+ * (== KVM_EVMCS_VERSION) and thus we expect guest to set this93759375+ * value to first u32 field of eVMCS which should specify eVMCS93769376+ * VersionNumber.93779377+ *93789378+ * Guest should be aware of supported eVMCS versions by host by93799379+ * examining CPUID.0x4000000A.EAX[0:15]. Host userspace VMM is93809380+ * expected to set this CPUID leaf according to the value93819381+ * returned in vmcs_version from nested_enable_evmcs().93829382+ *93839383+ * However, it turns out that Microsoft Hyper-V fails to comply93849384+ * to their own invented interface: When Hyper-V use eVMCS, it93859385+ * just sets first u32 field of eVMCS to revision_id specified93869386+ * in MSR_IA32_VMX_BASIC. 
Instead of the eVMCS version number,93879387+ * which is one of the supported versions specified in93889388+ * CPUID.0x4000000A.EAX[0:15].93899389+ *93909390+ * To work around this Hyper-V bug, we accept here either a supported93919391+ * eVMCS version or the VMCS12 revision_id as valid values for the93929392+ * first u32 field of eVMCS.93939393+ */93949394+ if ((vmx->nested.hv_evmcs->revision_id != KVM_EVMCS_VERSION) &&93959395+ (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION)) {93809396 nested_release_evmcs(vcpu);93819397 return 0;93829398 }···94209390 * present in struct hv_enlightened_vmcs, ...). Make sure there94219391 * are no leftovers.94229392 */94239423- if (from_launch)94249424- memset(vmx->nested.cached_vmcs12, 0,94259425- sizeof(*vmx->nested.cached_vmcs12));93939393+ if (from_launch) {93949394+ struct vmcs12 *vmcs12 = get_vmcs12(vcpu);93959395+ memset(vmcs12, 0, sizeof(*vmcs12));93969396+ vmcs12->hdr.revision_id = VMCS12_REVISION;93979397+ }9426939894279399 }94289400 return 1;···1509415062 .has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,15095150631509615064 .read_l1_tsc_offset = vmx_read_l1_tsc_offset,1509715097- .write_tsc_offset = vmx_write_tsc_offset,1506515065+ .write_l1_tsc_offset = vmx_write_l1_tsc_offset,15098150661509915067 .set_tdp_cr3 = vmx_set_cr3,1510015068
···77#include <linux/export.h>88#include <linux/cpu.h>99#include <linux/debugfs.h>1010-#include <linux/ptrace.h>11101211#include <asm/tlbflush.h>1312#include <asm/mmu_context.h>···2829 *2930 * Implement flush IPI by CALL_FUNCTION_VECTOR, Alex Shi3031 */3232+3333+/*3434+ * Use bit 0 to mangle the TIF_SPEC_IB state into the mm pointer which is3535+ * stored in cpu_tlb_state.last_user_mm_ibpb.3636+ */3737+#define LAST_USER_MM_IBPB 0x1UL31383239/*3340 * We get here when we do something requiring a TLB invalidation···186181 }187182}188183189189-static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)184184+static inline unsigned long mm_mangle_tif_spec_ib(struct task_struct *next)190185{186186+ unsigned long next_tif = task_thread_info(next)->flags;187187+ unsigned long ibpb = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_IBPB;188188+189189+ return (unsigned long)next->mm | ibpb;190190+}191191+192192+static void cond_ibpb(struct task_struct *next)193193+{194194+ if (!next || !next->mm)195195+ return;196196+191197 /*192192- * Check if the current (previous) task has access to the memory193193- * of the @tsk (next) task. If access is denied, make sure to194194- * issue a IBPB to stop user->user Spectre-v2 attacks.195195- *196196- * Note: __ptrace_may_access() returns 0 or -ERRNO.198198+ * Both, the conditional and the always IBPB mode use the mm199199+ * pointer to avoid the IBPB when switching between tasks of the200200+ * same process. Using the mm pointer instead of mm->context.ctx_id201201+ * opens a hypothetical hole vs. mm_struct reuse, which is more or202202+ * less impossible to control by an attacker. Aside of that it203203+ * would only affect the first schedule so the theoretically204204+ * exposed data is not really interesting.197205 */198198- return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&199199- ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB));206206+ if (static_branch_likely(&switch_mm_cond_ibpb)) {207207+ unsigned long prev_mm, next_mm;208208+209209+ /*210210+ * This is a bit more complex than the always mode because211211+ * it has to handle two cases:212212+ *213213+ * 1) Switch from a user space task (potential attacker)214214+ * which has TIF_SPEC_IB set to a user space task215215+ * (potential victim) which has TIF_SPEC_IB not set.216216+ *217217+ * 2) Switch from a user space task (potential attacker)218218+ * which has TIF_SPEC_IB not set to a user space task219219+ * (potential victim) which has TIF_SPEC_IB set.220220+ *221221+ * This could be done by unconditionally issuing IBPB when222222+ * a task which has TIF_SPEC_IB set is either scheduled in223223+ * or out. Though that results in two flushes when:224224+ *225225+ * - the same user space task is scheduled out and later226226+ * scheduled in again and only a kernel thread ran in227227+ * between.228228+ *229229+ * - a user space task belonging to the same process is230230+ * scheduled in after a kernel thread ran in between231231+ *232232+ * - a user space task belonging to the same process is233233+ * scheduled in immediately.234234+ *235235+ * Optimize this with reasonably small overhead for the236236+ * above cases. 
Mangle the TIF_SPEC_IB bit into the mm237237+ * pointer of the incoming task which is stored in238238+ * cpu_tlbstate.last_user_mm_ibpb for comparison.239239+ */240240+ next_mm = mm_mangle_tif_spec_ib(next);241241+ prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_ibpb);242242+243243+ /*244244+ * Issue IBPB only if the mm's are different and one or245245+ * both have the IBPB bit set.246246+ */247247+ if (next_mm != prev_mm &&248248+ (next_mm | prev_mm) & LAST_USER_MM_IBPB)249249+ indirect_branch_prediction_barrier();250250+251251+ this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, next_mm);252252+ }253253+254254+ if (static_branch_unlikely(&switch_mm_always_ibpb)) {255255+ /*256256+ * Only flush when switching to a user space task with a257257+ * different context than the user space task which ran258258+ * last on this CPU.259259+ */260260+ if (this_cpu_read(cpu_tlbstate.last_user_mm) != next->mm) {261261+ indirect_branch_prediction_barrier();262262+ this_cpu_write(cpu_tlbstate.last_user_mm, next->mm);263263+ }264264+ }200265}201266202267void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,···367292 new_asid = prev_asid;368293 need_flush = true;369294 } else {370370- u64 last_ctx_id = this_cpu_read(cpu_tlbstate.last_ctx_id);371371-372295 /*373296 * Avoid user/user BTB poisoning by flushing the branch374297 * predictor when switching between processes. This stops375298 * one process from doing Spectre-v2 attacks on another.376376- *377377- * As an optimization, flush indirect branches only when378378- * switching into a processes that can't be ptrace by the379379- * current one (as in such case, attacker has much more380380- * convenient way how to tamper with the next process than381381- * branch buffer poisoning).382299 */383383- if (static_cpu_has(X86_FEATURE_USE_IBPB) &&384384- ibpb_needed(tsk, last_ctx_id))385385- indirect_branch_prediction_barrier();300300+ cond_ibpb(tsk);386301387302 if (IS_ENABLED(CONFIG_VMAP_STACK)) {388303 /*···429364 /* See above wrt _rcuidle. */430365 trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 0);431366 }432432-433433- /*434434- * Record last user mm's context id, so we can avoid435435- * flushing branch buffer with IBPB if we switch back436436- * to the same user.437437- */438438- if (next != &init_mm)439439- this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id);440367441368 /* Make sure we write CR3 before loaded_mm. */442369 barrier();···498441 write_cr3(build_cr3(mm->pgd, 0));499442500443 /* Reinitialize tlbstate. */501501- this_cpu_write(cpu_tlbstate.last_ctx_id, mm->context.ctx_id);444444+ this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, LAST_USER_MM_IBPB);502445 this_cpu_write(cpu_tlbstate.loaded_mm_asid, 0);503446 this_cpu_write(cpu_tlbstate.next_asid, 1);504447 this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, mm->context.ctx_id);
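The tlb.c rewrite replaces the ptrace-based heuristic with a single comparison by folding the incoming task's TIF_SPEC_IB bit into bit 0 of its mm pointer. A standalone sketch of the tagging trick (names invented; it assumes only that the pointers are at least 2-byte aligned, as allocator-returned pointers are):

    #include <stdint.h>
    #include <stdio.h>

    #define IBPB_BIT 0x1UL

    /* Aligned objects guarantee that bit 0 of their address is free. */
    struct mm { int dummy; } __attribute__((aligned(8)));

    static unsigned long mangle(struct mm *mm, int spec_ib)
    {
        return (unsigned long)mm | (spec_ib ? IBPB_BIT : 0);
    }

    /*
     * One comparison now answers "same process and same IBPB mode?";
     * a barrier is needed only when the values differ and at least
     * one side carries the IBPB bit.
     */
    static int needs_barrier(unsigned long prev, unsigned long next)
    {
        return prev != next && ((prev | next) & IBPB_BIT);
    }

    int main(void)
    {
        struct mm a, b;

        printf("%d\n", needs_barrier(mangle(&a, 1), mangle(&a, 1))); /* 0 */
        printf("%d\n", needs_barrier(mangle(&a, 0), mangle(&b, 0))); /* 0 */
        printf("%d\n", needs_barrier(mangle(&a, 0), mangle(&b, 1))); /* 1 */
        return 0;
    }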
-78
arch/x86/xen/enlighten.c
···1010#include <xen/xen.h>1111#include <xen/features.h>1212#include <xen/page.h>1313-#include <xen/interface/memory.h>14131514#include <asm/xen/hypercall.h>1615#include <asm/xen/hypervisor.h>···345346}346347EXPORT_SYMBOL(xen_arch_unregister_cpu);347348#endif348348-349349-#ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG350350-void __init arch_xen_balloon_init(struct resource *hostmem_resource)351351-{352352- struct xen_memory_map memmap;353353- int rc;354354- unsigned int i, last_guest_ram;355355- phys_addr_t max_addr = PFN_PHYS(max_pfn);356356- struct e820_table *xen_e820_table;357357- const struct e820_entry *entry;358358- struct resource *res;359359-360360- if (!xen_initial_domain())361361- return;362362-363363- xen_e820_table = kmalloc(sizeof(*xen_e820_table), GFP_KERNEL);364364- if (!xen_e820_table)365365- return;366366-367367- memmap.nr_entries = ARRAY_SIZE(xen_e820_table->entries);368368- set_xen_guest_handle(memmap.buffer, xen_e820_table->entries);369369- rc = HYPERVISOR_memory_op(XENMEM_machine_memory_map, &memmap);370370- if (rc) {371371- pr_warn("%s: Can't read host e820 (%d)\n", __func__, rc);372372- goto out;373373- }374374-375375- last_guest_ram = 0;376376- for (i = 0; i < memmap.nr_entries; i++) {377377- if (xen_e820_table->entries[i].addr >= max_addr)378378- break;379379- if (xen_e820_table->entries[i].type == E820_TYPE_RAM)380380- last_guest_ram = i;381381- }382382-383383- entry = &xen_e820_table->entries[last_guest_ram];384384- if (max_addr >= entry->addr + entry->size)385385- goto out; /* No unallocated host RAM. */386386-387387- hostmem_resource->start = max_addr;388388- hostmem_resource->end = entry->addr + entry->size;389389-390390- /*391391- * Mark non-RAM regions between the end of dom0 RAM and end of host RAM392392- * as unavailable. The rest of that region can be used for hotplug-based393393- * ballooning.394394- */395395- for (; i < memmap.nr_entries; i++) {396396- entry = &xen_e820_table->entries[i];397397-398398- if (entry->type == E820_TYPE_RAM)399399- continue;400400-401401- if (entry->addr >= hostmem_resource->end)402402- break;403403-404404- res = kzalloc(sizeof(*res), GFP_KERNEL);405405- if (!res)406406- goto out;407407-408408- res->name = "Unavailable host RAM";409409- res->start = entry->addr;410410- res->end = (entry->addr + entry->size < hostmem_resource->end) ?411411- entry->addr + entry->size : hostmem_resource->end;412412- rc = insert_resource(hostmem_resource, res);413413- if (rc) {414414- pr_warn("%s: Can't insert [%llx - %llx) (%d)\n",415415- __func__, res->start, res->end, rc);416416- kfree(res);417417- goto out;418418- }419419- }420420-421421- out:422422- kfree(xen_e820_table);423423-}424424-#endif /* CONFIG_XEN_BALLOON_MEMORY_HOTPLUG */
+3-3
arch/x86/xen/mmu_pv.c
···19051905 init_top_pgt[0] = __pgd(0);1906190619071907 /* Pre-constructed entries are in pfn, so convert to mfn */19081908- /* L4[272] -> level3_ident_pgt */19081908+ /* L4[273] -> level3_ident_pgt */19091909 /* L4[511] -> level3_kernel_pgt */19101910 convert_pfn_mfn(init_top_pgt);19111911···19251925 addr[0] = (unsigned long)pgd;19261926 addr[1] = (unsigned long)l3;19271927 addr[2] = (unsigned long)l2;19281928- /* Graft it onto L4[272][0]. Note that we creating an aliasing problem:19291929- * Both L4[272][0] and L4[511][510] have entries that point to the same19281928+ /* Graft it onto L4[273][0]. Note that we creating an aliasing problem:19291929+ * Both L4[273][0] and L4[511][510] have entries that point to the same19301930 * L2 (PMD) tables. Meaning that if you modify it in __va space19311931 * it will be also modified in the __ka space! (But if you just19321932 * modify the PMD table to point to other PTE's or none, then you
+20-15
arch/x86/xen/multicalls.c
···69697070 trace_xen_mc_flush(b->mcidx, b->argidx, b->cbidx);71717272+#if MC_DEBUG7373+ memcpy(b->debug, b->entries,7474+ b->mcidx * sizeof(struct multicall_entry));7575+#endif7676+7277 switch (b->mcidx) {7378 case 0:7479 /* no-op */···9287 break;93889489 default:9595-#if MC_DEBUG9696- memcpy(b->debug, b->entries,9797- b->mcidx * sizeof(struct multicall_entry));9898-#endif9999-10090 if (HYPERVISOR_multicall(b->entries, b->mcidx) != 0)10191 BUG();10292 for (i = 0; i < b->mcidx; i++)10393 if (b->entries[i].result < 0)10494 ret++;9595+ }105969797+ if (WARN_ON(ret)) {9898+ pr_err("%d of %d multicall(s) failed: cpu %d\n",9999+ ret, b->mcidx, smp_processor_id());100100+ for (i = 0; i < b->mcidx; i++) {101101+ if (b->entries[i].result < 0) {106102#if MC_DEBUG107107- if (ret) {108108- printk(KERN_ERR "%d multicall(s) failed: cpu %d\n",109109- ret, smp_processor_id());110110- dump_stack();111111- for (i = 0; i < b->mcidx; i++) {112112- printk(KERN_DEBUG " call %2d/%d: op=%lu arg=[%lx] result=%ld\t%pF\n",113113- i+1, b->mcidx,103103+ pr_err(" call %2d: op=%lu arg=[%lx] result=%ld\t%pF\n",104104+ i + 1,114105 b->debug[i].op,115106 b->debug[i].args[0],116107 b->entries[i].result,117108 b->caller[i]);109109+#else110110+ pr_err(" call %2d: op=%lu arg=[%lx] result=%ld\n",111111+ i + 1,112112+ b->entries[i].op,113113+ b->entries[i].args[0],114114+ b->entries[i].result);115115+#endif118116 }119117 }120120-#endif121118 }122119123120 b->mcidx = 0;···133126 b->cbidx = 0;134127135128 local_irq_restore(flags);136136-137137- WARN_ON(ret);138129}139130140131struct multicall_space __xen_mc_entry(size_t args)
+1-2
arch/x86/xen/p2m.c
···656656657657 /*658658 * The interface requires atomic updates on p2m elements.659659- * xen_safe_write_ulong() is using __put_user which does an atomic660660- * store via asm().659659+ * xen_safe_write_ulong() is using an atomic store via asm().661660 */662661 if (likely(!xen_safe_write_ulong(xen_p2m_addr + pfn, mfn)))663662 return true;
···33 * Split spinlock implementation out into its own file, so it can be44 * compiled in a FTRACE-compatible way.55 */66-#include <linux/kernel_stat.h>66+#include <linux/kernel.h>77#include <linux/spinlock.h>88-#include <linux/debugfs.h>99-#include <linux/log2.h>1010-#include <linux/gfp.h>118#include <linux/slab.h>99+#include <linux/atomic.h>12101311#include <asm/paravirt.h>1412#include <asm/qspinlock.h>15131616-#include <xen/interface/xen.h>1714#include <xen/events.h>18151916#include "xen-ops.h"2020-#include "debugfs.h"21172218static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;2319static DEFINE_PER_CPU(char *, irq_name);2020+static DEFINE_PER_CPU(atomic_t, xen_qlock_wait_nest);2421static bool xen_pvspin = true;25222623static void xen_qlock_kick(int cpu)···3639 */3740static void xen_qlock_wait(u8 *byte, u8 val)3841{3939- unsigned long flags;4042 int irq = __this_cpu_read(lock_kicker_irq);4343+ atomic_t *nest_cnt = this_cpu_ptr(&xen_qlock_wait_nest);41444245 /* If kicker interrupts not initialized yet, just spin */4346 if (irq == -1 || in_nmi())4447 return;45484646- /* Guard against reentry. */4747- local_irq_save(flags);4949+ /* Detect reentry. */5050+ atomic_inc(nest_cnt);48514949- /* If irq pending already clear it. */5050- if (xen_test_irq_pending(irq)) {5252+ /* If the irq is already pending and this is not a nested call, clear it. */5353+ if (atomic_read(nest_cnt) == 1 && xen_test_irq_pending(irq)) {5154 xen_clear_irq_pending(irq);5255 } else if (READ_ONCE(*byte) == val) {5356 /* Block until irq becomes pending (or a spurious wakeup) */5457 xen_poll_irq(irq);5558 }56595757- local_irq_restore(flags);6060+ atomic_dec(nest_cnt);5861}59626063static irqreturn_t dummy_handler(int irq, void *dev_id)
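The spinlock.c hunk swaps irq-disabling around the wait for a per-CPU nesting counter, so re-entrant calls are merely detected rather than prevented. A compilable userspace analogue (the event callbacks are stand-ins for the Xen irq primitives):

    #include <stdatomic.h>
    #include <stdio.h>

    static _Thread_local atomic_int nest_cnt;

    static int  ev_pending(void) { return 1; }
    static void ev_clear(void)   { puts("consumed pending event"); }
    static void ev_poll(void)    { puts("polling for event"); }

    /*
     * Instead of disabling interrupts around the whole wait, count the
     * nesting depth: only the outermost caller (depth 1) may consume a
     * pending event; re-entrant callers fall through to polling.
     */
    static void qlock_wait(void)
    {
        atomic_fetch_add(&nest_cnt, 1);

        if (atomic_load(&nest_cnt) == 1 && ev_pending())
            ev_clear();
        else
            ev_poll();

        atomic_fetch_sub(&nest_cnt, 1);
    }

    int main(void)
    {
        qlock_wait();
        return 0;
    }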
+5-1
arch/xtensa/include/asm/processor.h
···2323# error Linux requires the Xtensa Windowed Registers Option.2424#endif25252626-#define ARCH_SLAB_MINALIGN XCHAL_DATA_WIDTH2626+/* Xtensa ABI requires stack alignment to be at least 16 */2727+2828+#define STACK_ALIGN (XCHAL_DATA_WIDTH > 16 ? XCHAL_DATA_WIDTH : 16)2929+3030+#define ARCH_SLAB_MINALIGN STACK_ALIGN27312832/*2933 * User space process size: 1 GB.
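The xtensa change derives both STACK_ALIGN and ARCH_SLAB_MINALIGN from a max-of-constants expression. The same pattern in isolation (DATA_WIDTH stands in for XCHAL_DATA_WIDTH):

    #include <stdio.h>

    #define DATA_WIDTH 8    /* stand-in for XCHAL_DATA_WIDTH */

    /*
     * The ABI minimum is 16; a wider data bus raises it. The ternary
     * folds at compile time, so the result stays an integer constant
     * expression usable in alignment attributes.
     */
    #define STACK_ALIGN (DATA_WIDTH > 16 ? DATA_WIDTH : 16)

    _Static_assert(STACK_ALIGN >= 16, "ABI minimum violated");

    int main(void)
    {
        printf("STACK_ALIGN = %d\n", STACK_ALIGN);
        return 0;
    }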
···798798 * dispatch may still be in-progress since we dispatch requests799799 * from more than one context.800800 *801801- * No need to quiesce queue if it isn't initialized yet since802802- * blk_freeze_queue() should be enough for cases of passthrough803803- * request.801801+ * We rely on the driver to deal with the race if queue802802+ * initialization isn't done.804803 */805804 if (q->mq_ops && blk_queue_init_done(q))806805 blk_mq_quiesce_queue(q);
+8-18
block/blk-lib.c
···5151 if ((sector | nr_sects) & bs_mask)5252 return -EINVAL;53535454+ if (!nr_sects)5555+ return -EINVAL;5656+5457 while (nr_sects) {5555- unsigned int req_sects = nr_sects;5656- sector_t end_sect;5858+ sector_t req_sects = min_t(sector_t, nr_sects,5959+ bio_allowed_max_sectors(q));57605858- if (!req_sects)5959- goto fail;6060- if (req_sects > UINT_MAX >> 9)6161- req_sects = UINT_MAX >> 9;6262-6363- end_sect = sector + req_sects;6161+ WARN_ON_ONCE((req_sects << 9) > UINT_MAX);64626563 bio = blk_next_bio(bio, 0, gfp_mask);6664 bio->bi_iter.bi_sector = sector;···6668 bio_set_op_attrs(bio, op, 0);67696870 bio->bi_iter.bi_size = req_sects << 9;7171+ sector += req_sects;6972 nr_sects -= req_sects;7070- sector = end_sect;71737274 /*7375 * We can loop for a long time in here, if someone does···80828183 *biop = bio;8284 return 0;8383-8484-fail:8585- if (bio) {8686- submit_bio_wait(bio);8787- bio_put(bio);8888- }8989- *biop = NULL;9090- return -EOPNOTSUPP;9185}9286EXPORT_SYMBOL(__blkdev_issue_discard);9387···151161 return -EOPNOTSUPP;152162153163 /* Ensure that max_write_same_sectors doesn't overflow bi_size */154154- max_write_same_sectors = UINT_MAX >> 9;164164+ max_write_same_sectors = bio_allowed_max_sectors(q);155165156166 while (nr_sects) {157167 bio = blk_next_bio(bio, 1, gfp_mask);
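The blk-lib.c loop above chunks an arbitrarily large discard into bios no bigger than the queue allows. The control flow, reduced to a standalone sketch (issue_discard() and max_sectors are invented; printing stands in for submitting a bio):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t sector_t;

    /*
     * Split one huge discard into requests no larger than the cap;
     * max_sectors plays the role of bio_allowed_max_sectors(q).
     */
    static void issue_discard(sector_t sector, sector_t nr_sects,
                              sector_t max_sectors)
    {
        while (nr_sects) {
            sector_t req = nr_sects < max_sectors ? nr_sects : max_sectors;

            printf("bio: sector %llu, %llu sectors\n",
                   (unsigned long long)sector, (unsigned long long)req);
            sector += req;
            nr_sects -= req;
        }
    }

    int main(void)
    {
        issue_discard(0, 10000000, 8388607);
        return 0;
    }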
+4-3
block/blk-merge.c
···4646 bio_get_first_bvec(prev_rq->bio, &pb);4747 else4848 bio_get_first_bvec(prev, &pb);4949- if (pb.bv_offset)4949+ if (pb.bv_offset & queue_virt_boundary(q))5050 return true;51515252 /*···9090 /* Zero-sector (unknown) and one-sector granularities are the same. */9191 granularity = max(q->limits.discard_granularity >> 9, 1U);92929393- max_discard_sectors = min(q->limits.max_discard_sectors, UINT_MAX >> 9);9393+ max_discard_sectors = min(q->limits.max_discard_sectors,9494+ bio_allowed_max_sectors(q));9495 max_discard_sectors -= max_discard_sectors % granularity;95969697 if (unlikely(!max_discard_sectors)) {···820819821820 req->__data_len += blk_rq_bytes(next);822821823823- if (req_op(req) != REQ_OP_DISCARD)822822+ if (!blk_discard_mergable(req))824823 elv_merge_requests(q, req, next);825824826825 /*
+11-1
block/blk.h
···169169static inline bool __bvec_gap_to_prev(struct request_queue *q,170170 struct bio_vec *bprv, unsigned int offset)171171{172172- return offset ||172172+ return (offset & queue_virt_boundary(q)) ||173173 ((bprv->bv_offset + bprv->bv_len) & queue_virt_boundary(q));174174}175175···393393static inline unsigned long blk_rq_deadline(struct request *rq)394394{395395 return rq->__deadline & ~0x1UL;396396+}397397+398398+/*399399+ * The max size one bio can handle is UINT_MAX because bvec_iter.bi_size400400+ * is defined as 'unsigned int'; it also has to be aligned to the logical401401+ * block size, which is the minimum unit accepted by the hardware.402402+ */403403+static inline unsigned int bio_allowed_max_sectors(struct request_queue *q)404404+{405405+ return round_down(UINT_MAX, queue_logical_block_size(q)) >> 9;396406}397407398408/*
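bio_allowed_max_sectors() packs two constraints into one expression: bi_size is a 32-bit byte count, and that count must be a whole number of logical blocks. A small program that evaluates the formula (round_down_to() is a generic stand-in for the kernel's power-of-two round_down()):

    #include <limits.h>
    #include <stdio.h>

    /* Round @x down to a multiple of @m. */
    #define round_down_to(x, m) (((x) / (m)) * (m))

    static unsigned int allowed_max_sectors(unsigned int lbs)
    {
        /*
         * Largest bi_size that is a whole number of logical blocks,
         * converted from bytes to 512-byte sectors.
         */
        return round_down_to(UINT_MAX, lbs) >> 9;
    }

    int main(void)
    {
        printf("%u\n", allowed_max_sectors(512));  /* 8388607 */
        printf("%u\n", allowed_max_sectors(4096)); /* 8388600 */
        return 0;
    }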
···512512513513config XPOWER_PMIC_OPREGION514514 bool "ACPI operation region support for XPower AXP288 PMIC"515515- depends on MFD_AXP20X_I2C && IOSF_MBI515515+ depends on MFD_AXP20X_I2C && IOSF_MBI=y516516 help517517 This config adds ACPI operation region support for XPower AXP288 PMIC.518518
···29282928 return rc;2929292929302930 if (ars_status_process_records(acpi_desc))29312931- return -ENOMEM;29312931+ dev_err(acpi_desc->dev, "Failed to process ARS records\n");2932293229332933- return 0;29332933+ return rc;29342934}2935293529362936static int ars_register(struct acpi_nfit_desc *acpi_desc,···33413341 struct nvdimm *nvdimm, unsigned int cmd)33423342{33433343 struct acpi_nfit_desc *acpi_desc = to_acpi_nfit_desc(nd_desc);33443344- struct nfit_spa *nfit_spa;33453345- int rc = 0;3346334433473345 if (nvdimm)33483346 return 0;···33533355 * just needs guarantees that any ARS it initiates are not33543356 * interrupted by any intervening start requests from userspace.33553357 */33563356- mutex_lock(&acpi_desc->init_mutex);33573357- list_for_each_entry(nfit_spa, &acpi_desc->spas, list)33583358- if (acpi_desc->scrub_spa33593359- || test_bit(ARS_REQ_SHORT, &nfit_spa->ars_state)33603360- || test_bit(ARS_REQ_LONG, &nfit_spa->ars_state)) {33613361- rc = -EBUSY;33623362- break;33633363- }33643364- mutex_unlock(&acpi_desc->init_mutex);33583358+ if (work_busy(&acpi_desc->dwork.work))33593359+ return -EBUSY;3365336033663366- return rc;33613361+ return 0;33673362}3368336333693364int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
+6-2
drivers/acpi/nfit/mce.c
···2525 struct acpi_nfit_desc *acpi_desc;2626 struct nfit_spa *nfit_spa;27272828- /* We only care about memory errors */2929- if (!mce_is_memory_error(mce))2828+ /* We only care about uncorrectable memory errors */2929+ if (!mce_is_memory_error(mce) || mce_is_correctable(mce))3030+ return NOTIFY_DONE;3131+3232+ /* Verify the address reported in the MCE is valid. */3333+ if (!mce_usable_address(mce))3034 return NOTIFY_DONE;31353236 /*
···45534553 /* These specific Samsung models/firmware-revs do not handle LPM well */45544554 { "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, },45554555 { "SAMSUNG SSD PM830 mSATA *", "CXM13D1Q", ATA_HORKAGE_NOLPM, },45564556- { "SAMSUNG MZ7TD256HAFV-000L9", "DXT02L5Q", ATA_HORKAGE_NOLPM, },45564556+ { "SAMSUNG MZ7TD256HAFV-000L9", NULL, ATA_HORKAGE_NOLPM, },4557455745584558 /* devices that don't properly handle queued TRIM commands */45594559 { "Micron_M500IT_*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
+1-5
drivers/ata/sata_rcar.c
···11+// SPDX-License-Identifier: GPL-2.0+12/*23 * Renesas R-Car SATA driver34 *45 * Author: Vladimir Barinov <source@cogentembedded.com>56 * Copyright (C) 2013-2015 Cogent Embedded, Inc.67 * Copyright (C) 2013-2015 Renesas Solutions Corp.77- *88- * This program is free software; you can redistribute it and/or modify it99- * under the terms of the GNU General Public License as published by the1010- * Free Software Foundation; either version 2 of the License, or (at your1111- * option) any later version.128 */1391410#include <linux/kernel.h>
+2-2
drivers/atm/firestream.c
···1410141014111411 func_enter ();1412141214131413- fs_dprintk (FS_DEBUG_INIT, "Inititing queue at %x: %d entries:\n", 14131413+ fs_dprintk (FS_DEBUG_INIT, "Initializing queue at %x: %d entries:\n",14141414 queue, nentries);1415141514161416 p = aligned_kmalloc (sz, GFP_KERNEL, 0x10);···14431443{14441444 func_enter ();1445144514461446- fs_dprintk (FS_DEBUG_INIT, "Inititing free pool at %x:\n", queue);14461446+ fs_dprintk (FS_DEBUG_INIT, "Initializing free pool at %x:\n", queue);1447144714481448 write_fs (dev, FP_CNF(queue), (bufsize * RBFP_RBS) | RBFP_RBSVAL | RBFP_CME);14491449 write_fs (dev, FP_SA(queue), 0);
+8-2
drivers/base/devres.c
···26262727struct devres {2828 struct devres_node node;2929- /* -- 3 pointers */3030- unsigned long long data[]; /* guarantee ull alignment */2929+ /*3030+ * Some archs want to perform DMA into kmalloc caches3131+ * and need a guaranteed alignment larger than3232+ * the alignment of a 64-bit integer.3333+ * Thus we use ARCH_KMALLOC_MINALIGN here and get exactly the same3434+ * buffer alignment as if it was allocated by plain kmalloc().3535+ */3636+ u8 __aligned(ARCH_KMALLOC_MINALIGN) data[];3137};32383339struct devres_group {
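The devres change keeps the header and payload in one allocation while forcing the payload to the same minimum alignment kmalloc() would give. The layout trick in standalone form (MIN_ALIGN is a stand-in for ARCH_KMALLOC_MINALIGN; in the kernel the underlying guarantee comes from kmalloc itself):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stddef.h>

    #define MIN_ALIGN 16    /* stand-in for ARCH_KMALLOC_MINALIGN */

    struct node { struct node *next; };

    /*
     * Header plus payload in one allocation: forcing the alignment of
     * the flexible array member guarantees the payload starts on a
     * MIN_ALIGN boundary regardless of the header's size, matching
     * what a bare allocation of the payload would have provided.
     */
    struct resource {
        struct node node;
        unsigned char data[] __attribute__((aligned(MIN_ALIGN)));
    };

    int main(void)
    {
        struct resource *r = malloc(sizeof(*r) + 64);

        printf("payload offset: %zu\n", offsetof(struct resource, data));
        free(r);
        return 0;
    }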
···325325 .ops = &clk_regmap_gate_ops,326326 .parent_names = (const char *[]){ "fclk_div2_div" },327327 .num_parents = 1,328328+ .flags = CLK_IS_CRITICAL,328329 },329330};330331···350349 .ops = &clk_regmap_gate_ops,351350 .parent_names = (const char *[]){ "fclk_div3_div" },352351 .num_parents = 1,352352+ /*353353+ * FIXME:354354+ * This clock, as fdiv2, is used by the SCPI FW and is required355355+ * by the platform to operate correctly.356356+ * Until the following conditions are met, we need this clock to357357+ * be marked as critical:358358+ * a) The SCPI generic driver claims and enables all the clocks359359+ * it needs360360+ * b) CCF has a clock hand-off mechanism to make sure the361361+ * clock stays on until the proper driver comes along362362+ */363363+ .flags = CLK_IS_CRITICAL,353364 },354365};355366
+12
drivers/clk/meson/gxbb.c
···506506 .ops = &clk_regmap_gate_ops,507507 .parent_names = (const char *[]){ "fclk_div3_div" },508508 .num_parents = 1,509509+ /*510510+ * FIXME:511511+ * This clock, as fdiv2, is used by the SCPI FW and is required512512+ * by the platform to operate correctly.513513+ * Until the following conditions are met, we need this clock to514514+ * be marked as critical:515515+ * a) The SCPI generic driver claims and enables all the clocks516516+ * it needs517517+ * b) CCF has a clock hand-off mechanism to make sure the518518+ * clock stays on until the proper driver comes along519519+ */520520+ .flags = CLK_IS_CRITICAL,509521 },510522};511523
···2020DEFINE_RAW_SPINLOCK(i8253_lock);2121EXPORT_SYMBOL(i8253_lock);22222323+/*2424+ * Handle PIT quirk in pit_shutdown() where zeroing the counter register2525+ * restarts the PIT, negating the shutdown. On platforms with the quirk,2626+ * platform-specific code can set this to false.2727+ */2828+bool i8253_clear_counter_on_shutdown __ro_after_init = true;2929+2330#ifdef CONFIG_CLKSRC_I82532431/*2532 * Since the PIT overflows every tick, it's not very useful···116109 raw_spin_lock(&i8253_lock);117110118111 outb_p(0x30, PIT_MODE);119119- outb_p(0, PIT_CH0);120120- outb_p(0, PIT_CH0);112112+113113+ if (i8253_clear_counter_on_shutdown) {114114+ outb_p(0, PIT_CH0);115115+ outb_p(0, PIT_CH0);116116+ }121117122118 raw_spin_unlock(&i8253_lock);123119 return 0;
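The i8253 fix introduces a boot-time quirk flag that platform code flips once and the shutdown path merely reads (hence __ro_after_init). A reduced sketch of the gating (outb_p() is stubbed; the port numbers are the usual PIT ones):

    #include <stdbool.h>
    #include <stdio.h>

    /* Set once during early platform setup, read-only afterwards. */
    static bool clear_counter_on_shutdown = true;

    static void outb_p(unsigned char v, unsigned short port)
    {
        printf("outb 0x%02x -> 0x%x\n", v, port);   /* stub */
    }

    static void pit_shutdown(void)
    {
        outb_p(0x30, 0x43);                 /* PIT_MODE */
        /*
         * On quirky PITs, writing a zero count restarts the timer and
         * undoes the shutdown, so such platforms clear the flag.
         */
        if (clear_counter_on_shutdown) {
            outb_p(0, 0x40);                /* PIT_CH0 */
            outb_p(0, 0x40);
        }
    }

    int main(void)
    {
        pit_shutdown();
        return 0;
    }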
+6-1
drivers/cpufreq/imx6q-cpufreq.c
···160160 /* Ensure the arm clock divider is what we expect */161161 ret = clk_set_rate(clks[ARM].clk, new_freq * 1000);162162 if (ret) {163163+ int ret1;164164+163165 dev_err(cpu_dev, "failed to set clock rate: %d\n", ret);164164- regulator_set_voltage_tol(arm_reg, volt_old, 0);166166+ ret1 = regulator_set_voltage_tol(arm_reg, volt_old, 0);167167+ if (ret1)168168+ dev_warn(cpu_dev,169169+ "failed to restore vddarm voltage: %d\n", ret1);165170 return ret;166171 }167172
+21-5
drivers/cpufreq/ti-cpufreq.c
···201201 {},202202};203203204204+static const struct of_device_id *ti_cpufreq_match_node(void)205205+{206206+ struct device_node *np;207207+ const struct of_device_id *match;208208+209209+ np = of_find_node_by_path("/");210210+ match = of_match_node(ti_cpufreq_of_match, np);211211+ of_node_put(np);212212+213213+ return match;214214+}215215+204216static int ti_cpufreq_probe(struct platform_device *pdev)205217{206218 u32 version[VERSION_COUNT];207207- struct device_node *np;208219 const struct of_device_id *match;209220 struct opp_table *ti_opp_table;210221 struct ti_cpufreq_data *opp_data;211222 const char * const reg_names[] = {"vdd", "vbb"};212223 int ret;213224214214- np = of_find_node_by_path("/");215215- match = of_match_node(ti_cpufreq_of_match, np);216216- of_node_put(np);225225+ match = dev_get_platdata(&pdev->dev);217226 if (!match)218227 return -ENODEV;219228···299290300291static int ti_cpufreq_init(void)301292{302302- platform_device_register_simple("ti-cpufreq", -1, NULL, 0);293293+ const struct of_device_id *match;294294+295295+ /* Check to ensure we are on a compatible platform */296296+ match = ti_cpufreq_match_node();297297+ if (match)298298+ platform_device_register_data(NULL, "ti-cpufreq", -1, match,299299+ sizeof(*match));300300+303301 return 0;304302}305303module_init(ti_cpufreq_init);
+7-33
drivers/cpuidle/cpuidle-arm.c
···8282{8383 int ret;8484 struct cpuidle_driver *drv;8585- struct cpuidle_device *dev;86858786 drv = kmemdup(&arm_idle_driver, sizeof(*drv), GFP_KERNEL);8887 if (!drv)···102103 goto out_kfree_drv;103104 }104105105105- ret = cpuidle_register_driver(drv);106106- if (ret) {107107- if (ret != -EBUSY)108108- pr_err("Failed to register cpuidle driver\n");109109- goto out_kfree_drv;110110- }111111-112106 /*113107 * Call arch CPU operations in order to initialize114108 * idle states suspend back-end specific data···109117 ret = arm_cpuidle_init(cpu);110118111119 /*112112- * Skip the cpuidle device initialization if the reported120120+ * Allow the initialization to continue for other CPUs, if the reported113121 * failure is a HW misconfiguration/breakage (-ENXIO).114122 */115115- if (ret == -ENXIO)116116- return 0;117117-118123 if (ret) {119124 pr_err("CPU %d failed to init idle CPU ops\n", cpu);120120- goto out_unregister_drv;125125+ ret = ret == -ENXIO ? 0 : ret;126126+ goto out_kfree_drv;121127 }122128123123- dev = kzalloc(sizeof(*dev), GFP_KERNEL);124124- if (!dev) {125125- ret = -ENOMEM;126126- goto out_unregister_drv;127127- }128128- dev->cpu = cpu;129129-130130- ret = cpuidle_register_device(dev);131131- if (ret) {132132- pr_err("Failed to register cpuidle device for CPU %d\n",133133- cpu);134134- goto out_kfree_dev;135135- }129129+ ret = cpuidle_register(drv, NULL);130130+ if (ret)131131+ goto out_kfree_drv;136132137133 return 0;138134139139-out_kfree_dev:140140- kfree(dev);141141-out_unregister_drv:142142- cpuidle_unregister_driver(drv);143135out_kfree_drv:144136 kfree(drv);145137 return ret;···154178 while (--cpu >= 0) {155179 dev = per_cpu(cpuidle_devices, cpu);156180 drv = cpuidle_get_cpu_driver(dev);157157- cpuidle_unregister_device(dev);158158- cpuidle_unregister_driver(drv);159159- kfree(dev);181181+ cpuidle_unregister(drv);160182 kfree(drv);161183 }162184
+17-14
drivers/crypto/hisilicon/sec/sec_algs.c
···732732 int *splits_in_nents;733733 int *splits_out_nents = NULL;734734 struct sec_request_el *el, *temp;735735+ bool split = skreq->src != skreq->dst;735736736737 mutex_init(&sec_req->lock);737738 sec_req->req_base = &skreq->base;···751750 if (ret)752751 goto err_free_split_sizes;753752754754- if (skreq->src != skreq->dst) {753753+ if (split) {755754 sec_req->len_out = sg_nents(skreq->dst);756755 ret = sec_map_and_split_sg(skreq->dst, split_sizes, steps,757756 &splits_out, &splits_out_nents,···786785 split_sizes[i],787786 skreq->src != skreq->dst,788787 splits_in[i], splits_in_nents[i],789789- splits_out[i],790790- splits_out_nents[i], info);788788+ split ? splits_out[i] : NULL,789789+ split ? splits_out_nents[i] : 0,790790+ info);791791 if (IS_ERR(el)) {792792 ret = PTR_ERR(el);793793 goto err_free_elements;···808806 * more refined but this is unlikely to happen so no need.809807 */810808811811- /* Cleanup - all elements in pointer arrays have been coppied */812812- kfree(splits_in_nents);813813- kfree(splits_in);814814- kfree(splits_out_nents);815815- kfree(splits_out);816816- kfree(split_sizes);817817-818809 /* Grab a big lock for a long time to avoid concurrency issues */819810 mutex_lock(&queue->queuelock);820811···822827 (!queue->havesoftqueue ||823828 kfifo_avail(&queue->softqueue) > steps)) ||824829 !list_empty(&ctx->backlog)) {830830+ ret = -EBUSY;825831 if ((skreq->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) {826832 list_add_tail(&sec_req->backlog_head, &ctx->backlog);827833 mutex_unlock(&queue->queuelock);828828- return -EBUSY;834834+ goto out;829835 }830836831831- ret = -EBUSY;832837 mutex_unlock(&queue->queuelock);833838 goto err_free_elements;834839 }···837842 if (ret)838843 goto err_free_elements;839844840840- return -EINPROGRESS;845845+ ret = -EINPROGRESS;846846+out:847847+ /* Cleanup - all elements in pointer arrays have been copied */848848+ kfree(splits_in_nents);849849+ kfree(splits_in);850850+ kfree(splits_out_nents);851851+ kfree(splits_out);852852+ kfree(split_sizes);853853+ return ret;841854842855err_free_elements:843856 list_for_each_entry_safe(el, temp, &sec_req->elements, head) {···857854 crypto_skcipher_ivsize(atfm),858855 DMA_BIDIRECTIONAL);859856err_unmap_out_sg:860860- if (skreq->src != skreq->dst)857857+ if (split)861858 sec_unmap_sg_on_err(skreq->dst, steps, splits_out,862859 splits_out_nents, sec_req->len_out,863860 info->dev);
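The sec_algs.c fix routes the success and backlog paths through a single exit label so the temporary split arrays are freed exactly once on every outcome. The general single-exit shape (names invented; the real driver frees five arrays and returns -EINPROGRESS on success):

    #include <stdlib.h>

    static int submit_request(void)
    {
        int *scratch_a = malloc(64);
        int *scratch_b = malloc(64);
        int ret;

        if (!scratch_a || !scratch_b) {
            ret = -1;
            goto out;
        }

        /* ... queue up work that no longer references the scratch ... */
        ret = 0;
    out:
        /* free(NULL) is a no-op, so partial failures are fine too. */
        free(scratch_a);
        free(scratch_b);
        return ret;
    }

    int main(void)
    {
        return submit_request();
    }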
···265265 (params.mmap & ~PAGE_MASK)));266266267267 init_screen_info();268268+269269+ /* ARM does not permit early mappings to persist across paging_init() */270270+ if (IS_ENABLED(CONFIG_ARM))271271+ efi_memmap_unmap();268272}269273270274static int __init register_gop_device(void)
+1-1
drivers/firmware/efi/arm-runtime.c
···110110{111111 u64 mapsize;112112113113- if (!efi_enabled(EFI_BOOT) || !efi_enabled(EFI_MEMMAP)) {113113+ if (!efi_enabled(EFI_BOOT)) {114114 pr_info("EFI services will not be available.\n");115115 return 0;116116 }
+41-14
drivers/firmware/efi/efi.c
···592592593593 early_memunmap(tbl, sizeof(*tbl));594594 }595595+ return 0;596596+}595597598598+int __init efi_apply_persistent_mem_reservations(void)599599+{596600 if (efi.mem_reserve != EFI_INVALID_TABLE_ADDR) {597601 unsigned long prsv = efi.mem_reserve;598602···967963}968964969965static DEFINE_SPINLOCK(efi_mem_reserve_persistent_lock);966966+static struct linux_efi_memreserve *efi_memreserve_root __ro_after_init;970967971971-int efi_mem_reserve_persistent(phys_addr_t addr, u64 size)968968+static int __init efi_memreserve_map_root(void)972969{973973- struct linux_efi_memreserve *rsv, *parent;974974-975970 if (efi.mem_reserve == EFI_INVALID_TABLE_ADDR)976971 return -ENODEV;977972978978- rsv = kmalloc(sizeof(*rsv), GFP_KERNEL);973973+ efi_memreserve_root = memremap(efi.mem_reserve,974974+ sizeof(*efi_memreserve_root),975975+ MEMREMAP_WB);976976+ if (WARN_ON_ONCE(!efi_memreserve_root))977977+ return -ENOMEM;978978+ return 0;979979+}980980+981981+int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)982982+{983983+ struct linux_efi_memreserve *rsv;984984+ int rc;985985+986986+ if (efi_memreserve_root == (void *)ULONG_MAX)987987+ return -ENODEV;988988+989989+ if (!efi_memreserve_root) {990990+ rc = efi_memreserve_map_root();991991+ if (rc)992992+ return rc;993993+ }994994+995995+ rsv = kmalloc(sizeof(*rsv), GFP_ATOMIC);979996 if (!rsv)980997 return -ENOMEM;981981-982982- parent = memremap(efi.mem_reserve, sizeof(*rsv), MEMREMAP_WB);983983- if (!parent) {984984- kfree(rsv);985985- return -ENOMEM;986986- }987998988999 rsv->base = addr;9891000 rsv->size = size;99010019911002 spin_lock(&efi_mem_reserve_persistent_lock);992992- rsv->next = parent->next;993993- parent->next = __pa(rsv);10031003+ rsv->next = efi_memreserve_root->next;10041004+ efi_memreserve_root->next = __pa(rsv);9941005 spin_unlock(&efi_mem_reserve_persistent_lock);995995-996996- memunmap(parent);99710069981007 return 0;9991008}10091009+10101010+static int __init efi_memreserve_root_init(void)10111011+{10121012+ if (efi_memreserve_root)10131013+ return 0;10141014+ if (efi_memreserve_map_root())10151015+ efi_memreserve_root = (void *)ULONG_MAX;10161016+ return 0;10171017+}10181018+early_initcall(efi_memreserve_root_init);1000101910011020#ifdef CONFIG_KEXEC10021021static int update_efi_random_seed(struct notifier_block *nb,
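The efi.c rework maps the reservation table once and caches the result, using an all-ones sentinel to remember a permanent failure so later callers fail fast instead of re-probing. The memoization pattern in isolation (map_root() is a stub for the memremap() call):

    #include <stdint.h>
    #include <stddef.h>

    #define ROOT_UNAVAILABLE ((void *)UINTPTR_MAX)

    static void *cached_root;               /* NULL: not attempted yet */

    static void *map_root(void)
    {
        return NULL;                        /* stub: pretend mapping fails */
    }

    /*
     * Map once and remember the outcome. The sentinel distinguishes
     * "never attempted" from "attempted and known to be unavailable".
     */
    static void *get_root(void)
    {
        if (!cached_root) {
            cached_root = map_root();
            if (!cached_root)
                cached_root = ROOT_UNAVAILABLE;
        }
        return cached_root == ROOT_UNAVAILABLE ? NULL : cached_root;
    }

    int main(void)
    {
        return get_root() == NULL ? 0 : 1;
    }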
+3
drivers/firmware/efi/libstub/arm-stub.c
···7575 efi_guid_t memreserve_table_guid = LINUX_EFI_MEMRESERVE_TABLE_GUID;7676 efi_status_t status;77777878+ if (IS_ENABLED(CONFIG_ARM))7979+ return;8080+7881 status = efi_call_early(allocate_pool, EFI_LOADER_DATA, sizeof(*rsv),7982 (void **)&rsv);8083 if (status != EFI_SUCCESS) {
···4646 tristate "FSI master based on Aspeed ColdFire coprocessor"4747 depends on GPIOLIB4848 depends on GPIO_ASPEED4949+ select GENERIC_ALLOCATOR4950 ---help---5051 This option enables a FSI master using the AST2400 and AST2500 GPIO5152 lines driven by the internal ColdFire coprocessor. This requires
···268268269269 if (pxa_gpio_has_pinctrl()) {270270 ret = pinctrl_gpio_direction_input(chip->base + offset);271271- if (!ret)272272- return 0;271271+ if (ret)272272+ return ret;273273 }274274275275 spin_lock_irqsave(&gpio_lock, flags);
+3-2
drivers/gpio/gpiolib.c
···12951295 gdev->descs = kcalloc(chip->ngpio, sizeof(gdev->descs[0]), GFP_KERNEL);12961296 if (!gdev->descs) {12971297 status = -ENOMEM;12981298- goto err_free_gdev;12981298+ goto err_free_ida;12991299 }1300130013011301 if (chip->ngpio == 0) {···14271427 kfree_const(gdev->label);14281428err_free_descs:14291429 kfree(gdev->descs);14301430-err_free_gdev:14301430+err_free_ida:14311431 ida_simple_remove(&gpio_ida, gdev->id);14321432+err_free_gdev:14321433 /* failures here can mean systems won't boot... */14331434 pr_err("%s: GPIOs %d..%d (%s) failed to register, %d\n", __func__,14341435 gdev->base, gdev->base + gdev->ngpio - 1,
+1
drivers/gpu/drm/amd/amdgpu/amdgpu.h
···151151extern int amdgpu_gpu_recovery;152152extern int amdgpu_emu_mode;153153extern uint amdgpu_smu_memory_pool_size;154154+extern uint amdgpu_dc_feature_mask;154155extern struct amdgpu_mgpu_info mgpu_info;155156156157#ifdef CONFIG_DRM_AMDGPU_SI
···127127int amdgpu_gpu_recovery = -1; /* auto */128128int amdgpu_emu_mode = 0;129129uint amdgpu_smu_memory_pool_size = 0;130130+/* FBC (bit 0) disabled by default */131131+uint amdgpu_dc_feature_mask = 0;132132+130133struct amdgpu_mgpu_info mgpu_info = {131134 .mutex = __MUTEX_INITIALIZER(mgpu_info.mutex),132135};···633630module_param(halt_if_hws_hang, int, 0644);634631MODULE_PARM_DESC(halt_if_hws_hang, "Halt if HWS hang is detected (0 = off (default), 1 = on)");635632#endif633633+634634+/**635635+ * DOC: dcfeaturemask (uint)636636+ * Override which display features are enabled. See enum DC_FEATURE_MASK in drivers/gpu/drm/amd/include/amd_shared.h.637637+ * The default is the current set of stable display features.638638+ */639639+MODULE_PARM_DESC(dcfeaturemask, "all stable DC features enabled (default)");640640+module_param_named(dcfeaturemask, amdgpu_dc_feature_mask, uint, 0444);636641637642static const struct pci_device_id pciidlist[] = {638643#ifdef CONFIG_DRM_AMDGPU_SI
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
···339339 struct drm_property *audio_property;340340 /* FMT dithering */341341 struct drm_property *dither_property;342342+ /* maximum number of bits per channel for monitor color */343343+ struct drm_property *max_bpc_property;342344 /* hardcoded DFP edid from BIOS */343345 struct edid *bios_hardcoded_edid;344346 int bios_hardcoded_edid_size;
+18-14
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
···181181182182 if (level == adev->vm_manager.root_level)183183 /* For the root directory */184184- return round_up(adev->vm_manager.max_pfn, 1 << shift) >> shift;184184+ return round_up(adev->vm_manager.max_pfn, 1ULL << shift) >> shift;185185 else if (level != AMDGPU_VM_PTB)186186 /* Everything in between */187187 return 512;···16321632 continue;16331633 }1634163416351635- /* First check if the entry is already handled */16361636- if (cursor.pfn < frag_start) {16371637- cursor.entry->huge = true;16381638- amdgpu_vm_pt_next(adev, &cursor);16391639- continue;16401640- }16411641-16421635 /* If it isn't already handled it can't be a huge page */16431636 if (cursor.entry->huge) {16441637 /* Add the entry to the relocated list to update it. */···16561663 if (!amdgpu_vm_pt_descendant(adev, &cursor))16571664 return -ENOENT;16581665 continue;16591659- } else if (frag >= parent_shift) {16661666+ } else if (frag >= parent_shift &&16671667+ cursor.level - 1 != adev->vm_manager.root_level) {16601668 /* If the fragment size is even larger than the parent16611661- * shift we should go up one level and check it again.16691669+ * shift we should go up one level and check it again16701670+ * unless one level up is the root level.16621671 */16631672 if (!amdgpu_vm_pt_ancestor(&cursor))16641673 return -ENOENT;···16681673 }1669167416701675 /* Looks good so far, calculate parameters for the update */16711671- incr = AMDGPU_GPU_PAGE_SIZE << shift;16761676+ incr = (uint64_t)AMDGPU_GPU_PAGE_SIZE << shift;16721677 mask = amdgpu_vm_entries_mask(adev, cursor.level);16731678 pe_start = ((cursor.pfn >> shift) & mask) * 8;16741674- entry_end = (mask + 1) << shift;16791679+ entry_end = (uint64_t)(mask + 1) << shift;16751680 entry_end += cursor.pfn & ~(entry_end - 1);16761681 entry_end = min(entry_end, end);16771682···16841689 flags | AMDGPU_PTE_FRAG(frag));1685169016861691 pe_start += nptes * 8;16871687- dst += nptes * AMDGPU_GPU_PAGE_SIZE << shift;16921692+ dst += (uint64_t)nptes * AMDGPU_GPU_PAGE_SIZE << shift;1688169316891694 frag_start = upd_end;16901695 if (frag_start >= frag_end) {···16961701 }16971702 } while (frag_start < entry_end);1698170316991699- if (frag >= shift)17041704+ if (amdgpu_vm_pt_descendant(adev, &cursor)) {17051705+ /* Mark all child entries as huge */17061706+ while (cursor.pfn < frag_start) {17071707+ cursor.entry->huge = true;17081708+ amdgpu_vm_pt_next(adev, &cursor);17091709+ }17101710+17111711+ } else if (frag >= shift) {17121712+ /* or just move on to the next on the same level. */17001713 amdgpu_vm_pt_next(adev, &cursor);17141714+ }17011715 }1702171617031717 return 0;
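Several of the amdgpu_vm.c hunks exist only to force 64-bit arithmetic ("1ULL << shift", uint64_t casts on nptes and mask) before a shift count that can exceed 31. The underlying pitfall, demonstrated standalone:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int shift = 36;    /* plausible for a deep PD level */

        /*
         * "1u << shift" is evaluated as a 32-bit shift; for shift >= 32
         * the behaviour is undefined, and on common hardware the shift
         * count simply wraps. Promoting the constant to 64 bits first
         * (1ULL) makes the whole expression 64-bit before the shift.
         */
        uint64_t bad  = 1u << (shift & 31); /* what wrapping produces */
        uint64_t good = 1ULL << shift;

        printf("bad  = 0x%llx\n", (unsigned long long)bad);
        printf("good = 0x%llx\n", (unsigned long long)good);
        return 0;
    }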
+4-3
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
···24402440#endif2441244124422442 WREG32_FIELD15(GC, 0, RLC_CNTL, RLC_ENABLE_F32, 1);24432443+ udelay(50);2443244424442445 /* carrizo do enable cp interrupt after cp inited */24452445- if (!(adev->flags & AMD_IS_APU))24462446+ if (!(adev->flags & AMD_IS_APU)) {24462447 gfx_v9_0_enable_gui_idle_interrupt(adev, true);24472447-24482448- udelay(50);24482448+ udelay(50);24492449+ }2449245024502451#ifdef AMDGPU_RLC_DEBUG_RETRY24512452 /* RLC_GPM_GENERAL_6 : RLC Ucode version */
+3-3
drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
···72727373 /* Program the system aperture low logical page number. */7474 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR,7575- min(adev->gmc.vram_start, adev->gmc.agp_start) >> 18);7575+ min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);76767777 if (adev->asic_type == CHIP_RAVEN && adev->rev_id >= 0x8)7878 /*···8282 * to get rid of the VM fault and hardware hang.8383 */8484 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,8585- max((adev->gmc.vram_end >> 18) + 0x1,8585+ max((adev->gmc.fb_end >> 18) + 0x1,8686 adev->gmc.agp_end >> 18));8787 else8888 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,8989- max(adev->gmc.vram_end, adev->gmc.agp_end) >> 18);8989+ max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18);90909191 /* Set default page address. */9292 value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start
···17361736 if (events->force_trigger)17371737 value |= 0x1;1738173817391739- value |= 0x84;17391739+ if (num_pipes) {17401740+ struct dc *dc = pipe_ctx[0]->stream->ctx->dc;17411741+17421742+ if (dc->fbc_compressor)17431743+ value |= 0x84;17441744+ }1740174517411746 for (i = 0; i < num_pipes; i++)17421747 pipe_ctx[i]->stream_res.tg->funcs->
···142142143143 lockdep_assert_held_once(&dev->master_mutex);144144145145+ WARN_ON(fpriv->is_master);145146 old_master = fpriv->master;146147 fpriv->master = drm_master_create(dev);147148 if (!fpriv->master) {···171170 /* drop references and restore old master on failure */172171 drm_master_put(&fpriv->master);173172 fpriv->master = old_master;173173+ fpriv->is_master = 0;174174175175 return ret;176176}
+3
drivers/gpu/drm/drm_dp_mst_topology.c
···12751275 mutex_lock(&mgr->lock);12761276 mstb = mgr->mst_primary;1277127712781278+ if (!mstb)12791279+ goto out;12801280+12781281 for (i = 0; i < lct - 1; i++) {12791282 int shift = (i % 2) ? 0 : 4;12801283 int port_num = (rad[i / 2] >> shift) & 0xf;
+3
drivers/gpu/drm/drm_fb_helper.c
···219219 mutex_lock(&fb_helper->lock);220220 drm_connector_list_iter_begin(dev, &conn_iter);221221 drm_for_each_connector_iter(connector, &conn_iter) {222222+ if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)223223+ continue;224224+222225 ret = __drm_fb_helper_add_one_connector(fb_helper, connector);223226 if (ret)224227 goto fail;
+1-1
drivers/gpu/drm/drm_fourcc.c
···97979898/**9999 * drm_driver_legacy_fb_format - compute drm fourcc code from legacy description100100+ * @dev: DRM device100101 * @bpp: bits per pixels101102 * @depth: bit depth per pixel102102- * @native: use host native byte order103103 *104104 * Computes a drm fourcc pixel format code for the given @bpp/@depth values.105105 * Unlike drm_mode_legacy_fb_format() this looks at the drivers mode_config,
+1-1
drivers/gpu/drm/etnaviv/etnaviv_sched.c
···9393 * If the GPU managed to complete this job's fence, the timeout is9494 * spurious. Bail out.9595 */9696- if (fence_completed(gpu, submit->out_fence->seqno))9696+ if (dma_fence_is_signaled(submit->out_fence))9797 return;98989999 /*
···11751175 return -EINVAL;11761176 }1177117711781178- dram_info->valid_dimm = true;11791179-11801178 /*11811179 * If any of the channel is single rank channel, worst case output11821180 * will be same as if single rank memory, so consider single rank···11911193 return -EINVAL;11921194 }1193119511941194- if (ch0.is_16gb_dimm || ch1.is_16gb_dimm)11951195- dram_info->is_16gb_dimm = true;11961196+ dram_info->is_16gb_dimm = ch0.is_16gb_dimm || ch1.is_16gb_dimm;1196119711971198 dev_priv->dram_info.symmetric_memory = intel_is_dram_symmetric(val_ch0,11981199 val_ch1,···13111314 return -EINVAL;13121315 }1313131613141314- dram_info->valid_dimm = true;13151317 dram_info->valid = true;13161318 return 0;13171319}···13231327 int ret;1324132813251329 dram_info->valid = false;13261326- dram_info->valid_dimm = false;13271327- dram_info->is_16gb_dimm = false;13281330 dram_info->rank = I915_DRAM_RANK_INVALID;13291331 dram_info->bandwidth_kbps = 0;13301332 dram_info->num_channels = 0;13331333+13341334+ /*13351335+ * Assume 16Gb DIMMs are present until proven otherwise.13361336+ * This is only used for the level 0 watermark latency13371337+ * w/a which does not apply to bxt/glk.13381338+ */13391339+ dram_info->is_16gb_dimm = !IS_GEN9_LP(dev_priv);1331134013321341 if (INTEL_GEN(dev_priv) < 9 || IS_GEMINILAKE(dev_priv))13331342 return;
···21382138static int intel_pixel_rate_to_cdclk(struct drm_i915_private *dev_priv,21392139 int pixel_rate)21402140{21412141- if (INTEL_GEN(dev_priv) >= 10)21412141+ if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))21422142 return DIV_ROUND_UP(pixel_rate, 2);21432143- else if (IS_GEMINILAKE(dev_priv))21442144- /*21452145- * FIXME: Avoid using a pixel clock that is more than 99% of the cdclk21462146- * as a temporary workaround. Use a higher cdclk instead. (Note that21472147- * intel_compute_max_dotclk() limits the max pixel clock to 99% of max21482148- * cdclk.)21492149- */21502150- return DIV_ROUND_UP(pixel_rate * 100, 2 * 99);21512143 else if (IS_GEN9(dev_priv) ||21522144 IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv))21532145 return pixel_rate;···25352543{25362544 int max_cdclk_freq = dev_priv->max_cdclk_freq;2537254525382538- if (INTEL_GEN(dev_priv) >= 10)25462546+ if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))25392547 return 2 * max_cdclk_freq;25402540- else if (IS_GEMINILAKE(dev_priv))25412541- /*25422542- * FIXME: Limiting to 99% as a temporary workaround. See25432543- * intel_min_cdclk() for details.25442544- */25452545- return 2 * max_cdclk_freq * 99 / 100;25462548 else if (IS_GEN9(dev_priv) ||25472549 IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv))25482550 return max_cdclk_freq;
+1-1
drivers/gpu/drm/i915/intel_device_info.c
···474474 u8 eu_disabled_mask;475475 u32 n_disabled;476476477477- if (!(sseu->subslice_mask[ss] & BIT(ss)))477477+ if (!(sseu->subslice_mask[s] & BIT(ss)))478478 /* skip disabled subslice */479479 continue;480480
+88-15
drivers/gpu/drm/i915/intel_display.c
···28902890 return;2891289128922892valid_fb:28932893+ intel_state->base.rotation = plane_config->rotation;28932894 intel_fill_fb_ggtt_view(&intel_state->view, fb,28942895 intel_state->base.rotation);28952896 intel_state->color_plane[0].stride =···48514850 * chroma samples for both of the luma samples, and thus we don't48524851 * actually get the expected MPEG2 chroma siting convention :(48534852 * The same behaviour is observed on pre-SKL platforms as well.48534853+ *48544854+ * Theory behind the formula (note that we ignore sub-pixel48554855+ * source coordinates):48564856+ * s = source sample position48574857+ * d = destination sample position48584858+ *48594859+ * Downscaling 4:1:48604860+ * -0.548614861+ * | 0.048624862+ * | | 1.5 (initial phase)48634863+ * | | |48644864+ * v v v48654865+ * | s | s | s | s |48664866+ * | d |48674867+ *48684868+ * Upscaling 1:4:48694869+ * -0.548704870+ * | -0.375 (initial phase)48714871+ * | | 0.048724872+ * | | |48734873+ * v v v48744874+ * | s |48754875+ * | d | d | d | d |48544876 */48554855-u16 skl_scaler_calc_phase(int sub, bool chroma_cosited)48774877+u16 skl_scaler_calc_phase(int sub, int scale, bool chroma_cosited)48564878{48574879 int phase = -0x8000;48584880 u16 trip = 0;4859488148604882 if (chroma_cosited)48614883 phase += (sub - 1) * 0x8000 / sub;48844884+48854885+ phase += scale / (2 * sub);48864886+48874887+ /*48884888+ * Hardware initial phase limited to [-0.5:1.5].48894889+ * Since the max hardware scale factor is 3.0, we48904890+ * should never actually exceed 1.0 here.48914891+ */48924892+ WARN_ON(phase < -0x8000 || phase > 0x18000);4862489348634894 if (phase < 0)48644895 phase = 0x10000 + phase;···5100506751015068 if (crtc->config->pch_pfit.enabled) {51025069 u16 uv_rgb_hphase, uv_rgb_vphase;50705070+ int pfit_w, pfit_h, hscale, vscale;51035071 int id;5104507251055073 if (WARN_ON(crtc->config->scaler_state.scaler_id < 0))51065074 return;5107507551085108- uv_rgb_hphase = skl_scaler_calc_phase(1, false);51095109- uv_rgb_vphase = skl_scaler_calc_phase(1, false);50765076+ pfit_w = (crtc->config->pch_pfit.size >> 16) & 0xFFFF;50775077+ pfit_h = crtc->config->pch_pfit.size & 0xFFFF;50785078+50795079+ hscale = (crtc->config->pipe_src_w << 16) / pfit_w;50805080+ vscale = (crtc->config->pipe_src_h << 16) / pfit_h;50815081+50825082+ uv_rgb_hphase = skl_scaler_calc_phase(1, hscale, false);50835083+ uv_rgb_vphase = skl_scaler_calc_phase(1, vscale, false);5110508451115085 id = scaler_state->scaler_id;51125086 I915_WRITE(SKL_PS_CTRL(pipe, id), PS_SCALER_EN |···78837843 plane_config->tiling = I915_TILING_X;78847844 fb->modifier = I915_FORMAT_MOD_X_TILED;78857845 }78467846+78477847+ if (val & DISPPLANE_ROTATE_180)78487848+ plane_config->rotation = DRM_MODE_ROTATE_180;78867849 }78507850+78517851+ if (IS_CHERRYVIEW(dev_priv) && pipe == PIPE_B &&78527852+ val & DISPPLANE_MIRROR)78537853+ plane_config->rotation |= DRM_MODE_REFLECT_X;7887785478887855 pixel_format = val & DISPPLANE_PIXFORMAT_MASK;78897856 fourcc = i9xx_format_to_fourcc(pixel_format);···89598912 MISSING_CASE(tiling);89608913 goto error;89618914 }89158915+89168916+ /*89178917+ * DRM_MODE_ROTATE_ is counter-clockwise to stay compatible with Xrandr89188918+ * while i915 HW rotation is clockwise; that's why the values are swapped.89198919+ */89208920+ switch (val & PLANE_CTL_ROTATE_MASK) {89218921+ case PLANE_CTL_ROTATE_0:89228922+ plane_config->rotation = DRM_MODE_ROTATE_0;89238923+ break;89248924+ case PLANE_CTL_ROTATE_90:89258925+ plane_config->rotation = DRM_MODE_ROTATE_270;89268926+ break;89278927+ case PLANE_CTL_ROTATE_180:89288928+ plane_config->rotation = DRM_MODE_ROTATE_180;89298929+ break;89308930+ case PLANE_CTL_ROTATE_270:89318931+ plane_config->rotation = DRM_MODE_ROTATE_90;89328932+ break;89338933+ }89348934+89358935+ if (INTEL_GEN(dev_priv) >= 10 &&89368936+ val & PLANE_CTL_FLIP_HORIZONTAL)89378937+ plane_config->rotation |= DRM_MODE_REFLECT_X;8962893889638939 base = I915_READ(PLANE_SURF(pipe, plane_id)) & 0xfffff000;89648940 plane_config->base = base;···1283812768 intel_check_cpu_fifo_underruns(dev_priv);1283912769 intel_check_pch_fifo_underruns(dev_priv);12840127701284112841- if (!new_crtc_state->active) {1284212842- /*1284312843- * Make sure we don't call initial_watermarks1284412844- * for ILK-style watermark updates.1284512845- *1284612846- * No clue what this is supposed to achieve.1284712847- */1284812848- if (INTEL_GEN(dev_priv) >= 9)1284912849- dev_priv->display.initial_watermarks(intel_state,1285012850- to_intel_crtc_state(new_crtc_state));1285112851- }1277112771+ /* FIXME unify this for all platforms */1277212772+ if (!new_crtc_state->active &&1277312773+ !HAS_GMCH_DISPLAY(dev_priv) &&1277412774+ dev_priv->display.initial_watermarks)1277512775+ dev_priv->display.initial_watermarks(intel_state,1277612776+ to_intel_crtc_state(new_crtc_state));1285212777 }1285312778 }1285412779···1471114646 fb->height < SKL_MIN_YUV_420_SRC_H ||1471214647 (fb->width % 4) != 0 || (fb->height % 4) != 0)) {1471314648 DRM_DEBUG_KMS("src dimensions not correct for NV12\n");1471414714- return -EINVAL;1464914649+ goto err;1471514650 }14716146511471714652 for (i = 0; i < fb->format->num_planes; i++) {···1529815233 ret = drm_atomic_add_affected_planes(state, crtc);1529915234 if (ret)1530015235 goto out;1523615236+1523715237+ /*1523815238+ * FIXME hack to force a LUT update to avoid the1523915239+ * plane update forcing the pipe gamma on without1524015240+ * having a proper LUT loaded. Remove once we1524115241+ * have readout for pipe gamma enable.1524215242+ */1524315243+ crtc_state->color_mgmt_changed = true;1530115244 }1530215245 }1530315246
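The initial-phase computation above is compact .16 fixed-point arithmetic (0x10000 represents 1.0) and is easy to sanity-check outside the driver. A minimal standalone sketch, not the driver function itself, reproducing the arithmetic with a few illustrative inputs:

	#include <stdio.h>

	/*
	 * .16 fixed-point initial phase: 0x10000 represents 1.0.
	 * sub = subsampling factor (1 = RGB/luma, 2 = chroma),
	 * scale = src/dst ratio in .16 fixed point,
	 * cosited = non-zero for cosited chroma.
	 */
	static int initial_phase(int sub, int scale, int cosited)
	{
		int phase = -0x8000;		/* start at -0.5 */

		if (cosited)
			phase += (sub - 1) * 0x8000 / sub;

		phase += scale / (2 * sub);	/* centre the first tap */
		return phase;
	}

	int main(void)
	{
		printf("%#x\n", initial_phase(1, 0x10000, 0)); /* 1:1 RGB -> 0.0 */
		printf("%#x\n", initial_phase(1, 0x20000, 0)); /* 2:1 downscale -> 0x8000 (+0.5) */
		printf("%#x\n", initial_phase(2, 0x10000, 1)); /* 1:1 cosited chroma -> 0.0 */
		return 0;
	}

All three results sit comfortably inside the [-0.5:1.5] window that the WARN_ON() in the driver guards.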
···228228 drm_for_each_connector_iter(connector, &conn_iter) {229229 struct intel_connector *intel_connector = to_intel_connector(connector);230230231231- if (intel_connector->encoder->hpd_pin == pin) {231231+ /* Don't check MST ports, they don't have pins */232232+ if (!intel_connector->mst_port &&233233+ intel_connector->encoder->hpd_pin == pin) {232234 if (connector->polled != intel_connector->polled)233235 DRM_DEBUG_DRIVER("Reenabling HPD on connector %s\n",234236 connector->name);···397395 struct intel_encoder *encoder;398396 bool storm_detected = false;399397 bool queue_dig = false, queue_hp = false;398398+ u32 long_hpd_pulse_mask = 0;399399+ u32 short_hpd_pulse_mask = 0;400400+ enum hpd_pin pin;400401401402 if (!pin_mask)402403 return;403404404405 spin_lock(&dev_priv->irq_lock);405405- for_each_intel_encoder(&dev_priv->drm, encoder) {406406- enum hpd_pin pin = encoder->hpd_pin;407407- bool has_hpd_pulse = intel_encoder_has_hpd_pulse(encoder);408406407407+ /*408408+ * Determine whether ->hpd_pulse() exists for each pin, and409409+ * whether we have a short or a long pulse. This is needed410410+ * as each pin may have up to two encoders (HDMI and DP) and411411+ * only one of them (DP) will have ->hpd_pulse().412412+ */413413+ for_each_intel_encoder(&dev_priv->drm, encoder) {414414+ bool has_hpd_pulse = intel_encoder_has_hpd_pulse(encoder);415415+ enum port port = encoder->port;416416+ bool long_hpd;417417+418418+ pin = encoder->hpd_pin;409419 if (!(BIT(pin) & pin_mask))410420 continue;411421412412- if (has_hpd_pulse) {413413- bool long_hpd = long_mask & BIT(pin);414414- enum port port = encoder->port;422422+ if (!has_hpd_pulse)423423+ continue;415424416416- DRM_DEBUG_DRIVER("digital hpd port %c - %s\n", port_name(port),417417- long_hpd ? "long" : "short");418418- /*419419- * For long HPD pulses we want to have the digital queue happen,420420- * but we still want HPD storm detection to function.421421- */422422- queue_dig = true;423423- if (long_hpd) {424424- dev_priv->hotplug.long_port_mask |= (1 << port);425425- } else {426426- /* for short HPD just trigger the digital queue */427427- dev_priv->hotplug.short_port_mask |= (1 << port);428428- continue;429429- }425425+ long_hpd = long_mask & BIT(pin);426426+427427+ DRM_DEBUG_DRIVER("digital hpd port %c - %s\n", port_name(port),428428+ long_hpd ? "long" : "short");429429+ queue_dig = true;430430+431431+ if (long_hpd) {432432+ long_hpd_pulse_mask |= BIT(pin);433433+ dev_priv->hotplug.long_port_mask |= BIT(port);434434+ } else {435435+ short_hpd_pulse_mask |= BIT(pin);436436+ dev_priv->hotplug.short_port_mask |= BIT(port);430437 }438438+ }439439+440440+ /* Now process each pin just once */441441+ for_each_hpd_pin(pin) {442442+ bool long_hpd;443443+444444+ if (!(BIT(pin) & pin_mask))445445+ continue;431446432447 if (dev_priv->hotplug.stats[pin].state == HPD_DISABLED) {433448 /*···461442 if (dev_priv->hotplug.stats[pin].state != HPD_ENABLED)462443 continue;463444464464- if (!has_hpd_pulse) {445445+ /*446446+ * Delegate to ->hpd_pulse() if one of the encoders for this447447+ * pin has it, otherwise let the hotplug_work deal with this448448+ * pin directly.449449+ */450450+ if (((short_hpd_pulse_mask | long_hpd_pulse_mask) & BIT(pin))) {451451+ long_hpd = long_hpd_pulse_mask & BIT(pin);452452+ } else {465453 dev_priv->hotplug.event_bits |= BIT(pin);454454+ long_hpd = true;466455 queue_hp = true;467456 }457457+458458+ if (!long_hpd)459459+ continue;468460469461 if (intel_hpd_irq_storm_detect(dev_priv, pin)) {470462 dev_priv->hotplug.event_bits &= ~BIT(pin);
···424424425425 reg_state[CTX_RING_TAIL+1] = intel_ring_set_tail(rq->ring, rq->tail);426426427427- /* True 32b PPGTT with dynamic page allocation: update PDP427427+ /*428428+ * True 32b PPGTT with dynamic page allocation: update PDP428429 * registers and point the unallocated PDPs to scratch page.429430 * PML4 is allocated during ppgtt init, so this is not needed430431 * in 48-bit mode.···433432 if (ppgtt && !i915_vm_is_48bit(&ppgtt->vm))434433 execlists_update_context_pdps(ppgtt, reg_state);435434435435+ /*436436+ * Make sure the context image is complete before we submit it to HW.437437+ *438438+ * Ostensibly, writes (including the WCB) should be flushed prior to439439+ * an uncached write such as our mmio register access, but the empirical440440+ * evidence (esp. on Braswell) suggests that the WC write into memory441441+ * may not be visible to the HW prior to the completion of the UC442442+ * register write and that we may begin execution from the context443443+ * before its image is complete, leading to invalid PD chasing.444444+ */445445+ wmb();436446 return ce->lrc_desc;437447}438448
+41-3
drivers/gpu/drm/i915/intel_pm.c
···24932493 uint32_t method1, method2;24942494 int cpp;2495249524962496+ if (mem_value == 0)24972497+ return U32_MAX;24982498+24962499 if (!intel_wm_plane_visible(cstate, pstate))24972500 return 0;24982501···25252522 uint32_t method1, method2;25262523 int cpp;2527252425252525+ if (mem_value == 0)25262526+ return U32_MAX;25272527+25282528 if (!intel_wm_plane_visible(cstate, pstate))25292529 return 0;25302530···25502544 uint32_t mem_value)25512545{25522546 int cpp;25472547+25482548+ if (mem_value == 0)25492549+ return U32_MAX;2553255025542551 if (!intel_wm_plane_visible(cstate, pstate))25552552 return 0;···28902881 * any underrun. If not able to get Dimm info assume 16GB dimm28912882 * to avoid any underrun.28922883 */28932893- if (!dev_priv->dram_info.valid_dimm ||28942894- dev_priv->dram_info.is_16gb_dimm)28842884+ if (dev_priv->dram_info.is_16gb_dimm)28952885 wm[0] += 1;2896288628972887 } else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {···30173009 intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency);30183010}3019301130123012+static void snb_wm_lp3_irq_quirk(struct drm_i915_private *dev_priv)30133013+{30143014+ /*30153015+ * On some SNB machines (Thinkpad X220 Tablet at least)30163016+ * LP3 usage can cause vblank interrupts to be lost.30173017+ * The DEIIR bit will go high but it looks like the CPU30183018+ * never gets interrupted.30193019+ *30203020+ * It's not clear whether other interrupt sources could30213021+ * be affected or if this is somehow limited to vblank30223022+ * interrupts only. To play it safe we disable LP330233023+ * watermarks entirely.30243024+ */30253025+ if (dev_priv->wm.pri_latency[3] == 0 &&30263026+ dev_priv->wm.spr_latency[3] == 0 &&30273027+ dev_priv->wm.cur_latency[3] == 0)30283028+ return;30293029+30303030+ dev_priv->wm.pri_latency[3] = 0;30313031+ dev_priv->wm.spr_latency[3] = 0;30323032+ dev_priv->wm.cur_latency[3] = 0;30333033+30343034+ DRM_DEBUG_KMS("LP3 watermarks disabled due to potential for lost interrupts\n");30353035+ intel_print_wm_latency(dev_priv, "Primary", dev_priv->wm.pri_latency);30363036+ intel_print_wm_latency(dev_priv, "Sprite", dev_priv->wm.spr_latency);30373037+ intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency);30383038+}30393039+30203040static void ilk_setup_wm_latency(struct drm_i915_private *dev_priv)30213041{30223042 intel_read_wm_latency(dev_priv, dev_priv->wm.pri_latency);···30613025 intel_print_wm_latency(dev_priv, "Sprite", dev_priv->wm.spr_latency);30623026 intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency);3063302730643064- if (IS_GEN6(dev_priv))30283028+ if (IS_GEN6(dev_priv)) {30653029 snb_wm_latency_quirk(dev_priv);30303030+ snb_wm_lp3_irq_quirk(dev_priv);30313031+ }30663032}3067303330683034static void skl_setup_wm_latency(struct drm_i915_private *dev_priv)
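The two halves of the intel_pm.c change cooperate: snb_wm_lp3_irq_quirk() zeroes the three LP3 latency values, and the new mem_value == 0 checks turn a zeroed latency into a U32_MAX watermark requirement that no FIFO configuration can satisfy, effectively disabling the level, so LP3 is never entered. In other words, a zero latency now means "level disabled" rather than "infinitely fast memory".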
+36-2
drivers/gpu/drm/i915/intel_ringbuffer.c
···9191gen4_render_ring_flush(struct i915_request *rq, u32 mode)9292{9393 u32 cmd, *cs;9494+ int i;94959596 /*9697 * read/write caches:···128127 cmd |= MI_INVALIDATE_ISP;129128 }130129131131- cs = intel_ring_begin(rq, 2);130130+ i = 2;131131+ if (mode & EMIT_INVALIDATE)132132+ i += 20;133133+134134+ cs = intel_ring_begin(rq, i);132135 if (IS_ERR(cs))133136 return PTR_ERR(cs);134137135138 *cs++ = cmd;136136- *cs++ = MI_NOOP;139139+140140+ /*141141+ * A random delay to let the CS invalidate take effect? Without this142142+ * delay, the GPU relocation path fails as the CS does not see143143+ * the updated contents. Just as important, if we apply the flushes144144+ * to the EMIT_FLUSH branch (i.e. immediately after the relocation145145+ * write and before the invalidate on the next batch), the relocations146146+ * still fail. This implies that it is a delay following the invalidation147147+ * that is required to reset the caches, as opposed to a delay to148148+ * ensure the memory is written.149149+ */150150+ if (mode & EMIT_INVALIDATE) {151151+ *cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;152152+ *cs++ = i915_ggtt_offset(rq->engine->scratch) |153153+ PIPE_CONTROL_GLOBAL_GTT;154154+ *cs++ = 0;155155+ *cs++ = 0;156156+157157+ for (i = 0; i < 12; i++)158158+ *cs++ = MI_FLUSH;159159+160160+ *cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;161161+ *cs++ = i915_ggtt_offset(rq->engine->scratch) |162162+ PIPE_CONTROL_GLOBAL_GTT;163163+ *cs++ = 0;164164+ *cs++ = 0;165165+ }166166+167167+ *cs++ = cmd;168168+137169 intel_ring_advance(rq, cs);138170139171 return 0;
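The ring-space arithmetic is worth spelling out: with EMIT_INVALIDATE set the function emits the leading cmd (1 dword), a 4-dword PIPE_CONTROL QW write, 12 MI_FLUSH dwords, another 4-dword PIPE_CONTROL and the trailing cmd (1 dword), i.e. 1 + 4 + 12 + 4 + 1 = 22 dwords, matching the i = 2 + 20 requested from intel_ring_begin(); without it only the two cmd dwords are emitted.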
···7171 */72727373/* HHI Registers */7474+#define HHI_GCLK_MPEG2 0x148 /* 0x52 offset in data sheet */7475#define HHI_VDAC_CNTL0 0x2F4 /* 0xbd offset in data sheet */7576#define HHI_VDAC_CNTL1 0x2F8 /* 0xbe offset in data sheet */7677#define HHI_HDMI_PHY_CNTL0 0x3a0 /* 0xe8 offset in data sheet */···715714 { 5, &meson_hdmi_encp_mode_1080i60 },716715 { 20, &meson_hdmi_encp_mode_1080i50 },717716 { 32, &meson_hdmi_encp_mode_1080p24 },717717+ { 33, &meson_hdmi_encp_mode_1080p50 },718718 { 34, &meson_hdmi_encp_mode_1080p30 },719719 { 31, &meson_hdmi_encp_mode_1080p50 },720720 { 16, &meson_hdmi_encp_mode_1080p60 },···856854 unsigned int sof_lines;857855 unsigned int vsync_lines;858856857857+ /* Use VENCI for 480i and 576i and double HDMI pixels */858858+ if (mode->flags & DRM_MODE_FLAG_DBLCLK) {859859+ hdmi_repeat = true;860860+ use_enci = true;861861+ venc_hdmi_latency = 1;862862+ }863863+859864 if (meson_venc_hdmi_supported_vic(vic)) {860865 vmode = meson_venc_hdmi_get_vic_vmode(vic);861866 if (!vmode) {···874865 } else {875866 meson_venc_hdmi_get_dmt_vmode(mode, &vmode_dmt);876867 vmode = &vmode_dmt;877877- }878878-879879- /* Use VENCI for 480i and 576i and double HDMI pixels */880880- if (mode->flags & DRM_MODE_FLAG_DBLCLK) {881881- hdmi_repeat = true;882882- use_enci = true;883883- venc_hdmi_latency = 1;868868+ use_enci = false;884869 }885870886871 /* Repeat VENC pixels for 480/576i/p, 720p50/60 and 1080p50/60 */···15321529void meson_venc_enable_vsync(struct meson_drm *priv)15331530{15341531 writel_relaxed(2, priv->io_base + _REG(VENC_INTCTRL));15321532+ regmap_update_bits(priv->hhi, HHI_GCLK_MPEG2, BIT(25), BIT(25));15351533}1536153415371535void meson_venc_disable_vsync(struct meson_drm *priv)15381536{15371537+ regmap_update_bits(priv->hhi, HHI_GCLK_MPEG2, BIT(25), 0);15391538 writel_relaxed(0, priv->io_base + _REG(VENC_INTCTRL));15401539}15411540
+6-6
drivers/gpu/drm/meson/meson_viu.c
···184184 if (lut_sel == VIU_LUT_OSD_OETF) {185185 writel(0, priv->io_base + _REG(addr_port));186186187187- for (i = 0; i < 20; i++)187187+ for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++)188188 writel(r_map[i * 2] | (r_map[i * 2 + 1] << 16),189189 priv->io_base + _REG(data_port));190190191191 writel(r_map[OSD_OETF_LUT_SIZE - 1] | (g_map[0] << 16),192192 priv->io_base + _REG(data_port));193193194194- for (i = 0; i < 20; i++)194194+ for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++)195195 writel(g_map[i * 2 + 1] | (g_map[i * 2 + 2] << 16),196196 priv->io_base + _REG(data_port));197197198198- for (i = 0; i < 20; i++)198198+ for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++)199199 writel(b_map[i * 2] | (b_map[i * 2 + 1] << 16),200200 priv->io_base + _REG(data_port));201201···211211 } else if (lut_sel == VIU_LUT_OSD_EOTF) {212212 writel(0, priv->io_base + _REG(addr_port));213213214214- for (i = 0; i < 20; i++)214214+ for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++)215215 writel(r_map[i * 2] | (r_map[i * 2 + 1] << 16),216216 priv->io_base + _REG(data_port));217217218218 writel(r_map[OSD_EOTF_LUT_SIZE - 1] | (g_map[0] << 16),219219 priv->io_base + _REG(data_port));220220221221- for (i = 0; i < 20; i++)221221+ for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++)222222 writel(g_map[i * 2 + 1] | (g_map[i * 2 + 2] << 16),223223 priv->io_base + _REG(data_port));224224225225- for (i = 0; i < 20; i++)225225+ for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++)226226 writel(b_map[i * 2] | (b_map[i * 2 + 1] << 16),227227 priv->io_base + _REG(data_port));228228
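Assuming the driver's usual table sizes of 41 entries for OSD_OETF_LUT_SIZE and 33 for OSD_EOTF_LUT_SIZE (the defines are not shown in this hunk), the change is behaviour-neutral for the OETF loops (41 / 2 = 20, the old hard-coded count) but trims the EOTF loops from 20 to 33 / 2 = 16 iterations, so they no longer read up to eight entries past the end of each 33-entry map.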
+11-11
drivers/gpu/drm/omapdrm/dss/dsi.c
···5409540954105410 /* DSI on OMAP3 doesn't have register DSI_GNQ, set the number54115411 * of data lanes to 3 by default */54125412- if (dsi->data->quirks & DSI_QUIRK_GNQ)54125412+ if (dsi->data->quirks & DSI_QUIRK_GNQ) {54135413+ dsi_runtime_get(dsi);54135414 /* NB_DATA_LANES */54145415 dsi->num_lanes_supported = 1 + REG_GET(dsi, DSI_GNQ, 11, 9);54155415- else54165416+ dsi_runtime_put(dsi);54175417+ } else {54165418 dsi->num_lanes_supported = 3;54195419+ }5417542054185421 r = dsi_init_output(dsi);54195422 if (r)···54295426 }5430542754315428 r = of_platform_populate(dev->of_node, NULL, NULL, dev);54325432- if (r)54295429+ if (r) {54335430 DSSERR("Failed to populate DSI child devices: %d\n", r);54315431+ goto err_uninit_output;54325432+ }5434543354355434 r = component_add(&pdev->dev, &dsi_component_ops);54365435 if (r)54375437- goto err_uninit_output;54365436+ goto err_of_depopulate;5438543754395438 return 0;5440543954405440+err_of_depopulate:54415441+ of_platform_depopulate(dev);54415442err_uninit_output:54425443 dsi_uninit_output(dsi);54435444err_pm_disable:···54775470 /* wait for current handler to finish before turning the DSI off */54785471 synchronize_irq(dsi->irq);5479547254805480- dispc_runtime_put(dsi->dss->dispc);54815481-54825473 return 0;54835474}5484547554855476static int dsi_runtime_resume(struct device *dev)54865477{54875478 struct dsi_data *dsi = dev_get_drvdata(dev);54885488- int r;54895489-54905490- r = dispc_runtime_get(dsi->dss->dispc);54915491- if (r)54925492- return r;5493547954945480 dsi->is_enabled = true;54955481 /* ensure the irq handler sees the is_enabled value */
+10-1
drivers/gpu/drm/omapdrm/dss/dss.c
···14841484 dss);1485148514861486 /* Add all the child devices as components. */14871487+ r = of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);14881488+ if (r)14891489+ goto err_uninit_debugfs;14901490+14871491 omapdss_gather_components(&pdev->dev);1488149214891493 device_for_each_child(&pdev->dev, &match, dss_add_child_component);1490149414911495 r = component_master_add_with_match(&pdev->dev, &dss_component_ops, match);14921496 if (r)14931493- goto err_uninit_debugfs;14971497+ goto err_of_depopulate;1494149814951499 return 0;15001500+15011501+err_of_depopulate:15021502+ of_platform_depopulate(&pdev->dev);1496150314971504err_uninit_debugfs:14981505 dss_debugfs_remove_file(dss->debugfs.clk);···15281521static int dss_remove(struct platform_device *pdev)15291522{15301523 struct dss_device *dss = platform_get_drvdata(pdev);15241524+15251525+ of_platform_depopulate(&pdev->dev);1531152615321527 component_master_del(&pdev->dev, &dss_component_ops);15331528
···946946 if (venc->tv_dac_clk)947947 clk_disable_unprepare(venc->tv_dac_clk);948948949949- dispc_runtime_put(venc->dss->dispc);950950-951949 return 0;952950}953951954952static int venc_runtime_resume(struct device *dev)955953{956954 struct venc_device *venc = dev_get_drvdata(dev);957957- int r;958958-959959- r = dispc_runtime_get(venc->dss->dispc);960960- if (r < 0)961961- return r;962955963956 if (venc->tv_dac_clk)964957 clk_prepare_enable(venc->tv_dac_clk);
···202202203203static void __rcar_du_group_start_stop(struct rcar_du_group *rgrp, bool start)204204{205205- struct rcar_du_crtc *rcrtc = &rgrp->dev->crtcs[rgrp->index * 2];205205+ struct rcar_du_device *rcdu = rgrp->dev;206206207207- rcar_du_crtc_dsysr_clr_set(rcrtc, DSYSR_DRES | DSYSR_DEN,208208- start ? DSYSR_DEN : DSYSR_DRES);207207+ /*208208+ * Group start/stop is controlled by the DRES and DEN bits of DSYSR0209209+ * for the first group and DSYSR2 for the second group. On most DU210210+ * instances, this maps to the first CRTC of the group, and we can just211211+ * use rcar_du_crtc_dsysr_clr_set() to access the correct DSYSR. On212212+ * M3-N, however, DU2 doesn't exist, but DSYSR2 does. We thus need to213213+ * access the register directly using group read/write.214214+ */215215+ if (rcdu->info->channels_mask & BIT(rgrp->index * 2)) {216216+ struct rcar_du_crtc *rcrtc = &rgrp->dev->crtcs[rgrp->index * 2];217217+218218+ rcar_du_crtc_dsysr_clr_set(rcrtc, DSYSR_DRES | DSYSR_DEN,219219+ start ? DSYSR_DEN : DSYSR_DRES);220220+ } else {221221+ rcar_du_group_write(rgrp, DSYSR,222222+ start ? DSYSR_DEN : DSYSR_DRES);223223+ }209224}210225211226void rcar_du_group_start_stop(struct rcar_du_group *rgrp, bool start)
+2-2
drivers/gpu/drm/sun4i/sun4i_lvds.c
···75757676 DRM_DEBUG_DRIVER("Enabling LVDS output\n");77777878- if (!IS_ERR(tcon->panel)) {7878+ if (tcon->panel) {7979 drm_panel_prepare(tcon->panel);8080 drm_panel_enable(tcon->panel);8181 }···88888989 DRM_DEBUG_DRIVER("Disabling LVDS output\n");90909191- if (!IS_ERR(tcon->panel)) {9191+ if (tcon->panel) {9292 drm_panel_disable(tcon->panel);9393 drm_panel_unprepare(tcon->panel);9494 }
+2-2
drivers/gpu/drm/sun4i/sun4i_rgb.c
···135135136136 DRM_DEBUG_DRIVER("Enabling RGB output\n");137137138138- if (!IS_ERR(tcon->panel)) {138138+ if (tcon->panel) {139139 drm_panel_prepare(tcon->panel);140140 drm_panel_enable(tcon->panel);141141 }···148148149149 DRM_DEBUG_DRIVER("Disabling RGB output\n");150150151151- if (!IS_ERR(tcon->panel)) {151151+ if (tcon->panel) {152152 drm_panel_disable(tcon->panel);153153 drm_panel_unprepare(tcon->panel);154154 }
+3-2
drivers/gpu/drm/sun4i/sun4i_tcon.c
···491491 sun4i_tcon0_mode_set_common(tcon, mode);492492493493 /* Set dithering if needed */494494- sun4i_tcon0_mode_set_dithering(tcon, tcon->panel->connector);494494+ if (tcon->panel)495495+ sun4i_tcon0_mode_set_dithering(tcon, tcon->panel->connector);495496496497 /* Adjust clock delay */497498 clk_delay = sun4i_tcon_get_clk_delay(mode, 0);···556555 * Following code is a way to avoid quirks all around TCON557556 * and DOTCLOCK drivers.558557 */559559- if (!IS_ERR(tcon->panel)) {558558+ if (tcon->panel) {560559 struct drm_panel *panel = tcon->panel;561560 struct drm_connector *connector = panel->connector;562561 struct drm_display_info display_info = connector->display_info;
+6
drivers/gpu/drm/vc4/vc4_kms.c
···214214 return 0;215215 }216216217217+ /* We know for sure we don't want an async update here. Set218218+ * state->legacy_cursor_update to false to prevent219219+ * drm_atomic_helper_setup_commit() from auto-completing220220+ * commit->flip_done.221221+ */222222+ state->legacy_cursor_update = false;217223 ret = drm_atomic_helper_setup_commit(state, nonblock);218224 if (ret)219225 return ret;
+13-2
drivers/gpu/drm/vc4/vc4_plane.c
···807807static void vc4_plane_atomic_async_update(struct drm_plane *plane,808808 struct drm_plane_state *state)809809{810810- struct vc4_plane_state *vc4_state = to_vc4_plane_state(plane->state);810810+ struct vc4_plane_state *vc4_state, *new_vc4_state;811811812812 if (plane->state->fb != state->fb) {813813 vc4_plane_async_set_fb(plane, state->fb);···828828 plane->state->src_y = state->src_y;829829830830 /* Update the display list based on the new crtc_x/y. */831831- vc4_plane_atomic_check(plane, plane->state);831831+ vc4_plane_atomic_check(plane, state);832832+833833+ new_vc4_state = to_vc4_plane_state(state);834834+ vc4_state = to_vc4_plane_state(plane->state);835835+836836+ /* Update the current vc4_state pos0, pos2 and ptr0 dlist entries. */837837+ vc4_state->dlist[vc4_state->pos0_offset] =838838+ new_vc4_state->dlist[vc4_state->pos0_offset];839839+ vc4_state->dlist[vc4_state->pos2_offset] =840840+ new_vc4_state->dlist[vc4_state->pos2_offset];841841+ vc4_state->dlist[vc4_state->ptr0_offset] =842842+ new_vc4_state->dlist[vc4_state->ptr0_offset];832843833844 /* Note that we can't just call vc4_plane_write_dlist()834845 * because that would smash the context data that the HVS is
+3
drivers/gpu/vga/vga_switcheroo.c
···380380 mutex_unlock(&vgasr_mutex);381381 return -EINVAL;382382 }383383+ /* notify if the GPU has already been bound */384384+ if (ops->gpu_bound)385385+ ops->gpu_bound(pdev, id);383386 }384387 mutex_unlock(&vgasr_mutex);385388
···325325 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM,326326 USB_DEVICE_ID_ELECOM_BM084),327327 HID_BATTERY_QUIRK_IGNORE },328328+ { HID_USB_DEVICE(USB_VENDOR_ID_SYMBOL,329329+ USB_DEVICE_ID_SYMBOL_SCANNER_3),330330+ HID_BATTERY_QUIRK_IGNORE },328331 {}329332};330333···18411838}18421839EXPORT_SYMBOL_GPL(hidinput_disconnect);1843184018441844-/**18451845- * hid_scroll_counter_handle_scroll() - Send high- and low-resolution scroll18461846- * events given a high-resolution wheel18471847- * movement.18481848- * @counter: a hid_scroll_counter struct describing the wheel.18491849- * @hi_res_value: the movement of the wheel, in the mouse's high-resolution18501850- * units.18511851- *18521852- * Given a high-resolution movement, this function converts the movement into18531853- * microns and emits high-resolution scroll events for the input device. It also18541854- * uses the multiplier from &struct hid_scroll_counter to emit low-resolution18551855- * scroll events when appropriate for backwards-compatibility with userspace18561856- * input libraries.18571857- */18581858-void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter,18591859- int hi_res_value)18601860-{18611861- int low_res_value, remainder, multiplier;18621862-18631863- input_report_rel(counter->dev, REL_WHEEL_HI_RES,18641864- hi_res_value * counter->microns_per_hi_res_unit);18651865-18661866- /*18671867- * Update the low-res remainder with the high-res value,18681868- * but reset if the direction has changed.18691869- */18701870- remainder = counter->remainder;18711871- if ((remainder ^ hi_res_value) < 0)18721872- remainder = 0;18731873- remainder += hi_res_value;18741874-18751875- /*18761876- * Then just use the resolution multiplier to see if18771877- * we should send a low-res (aka regular wheel) event.18781878- */18791879- multiplier = counter->resolution_multiplier;18801880- low_res_value = remainder / multiplier;18811881- remainder -= low_res_value * multiplier;18821882- counter->remainder = remainder;18831883-18841884- if (low_res_value)18851885- input_report_rel(counter->dev, REL_WHEEL, low_res_value);18861886-}18871887-EXPORT_SYMBOL_GPL(hid_scroll_counter_handle_scroll);
+27-282
drivers/hid/hid-logitech-hidpp.c
···6464#define HIDPP_QUIRK_NO_HIDINPUT BIT(23)6565#define HIDPP_QUIRK_FORCE_OUTPUT_REPORTS BIT(24)6666#define HIDPP_QUIRK_UNIFYING BIT(25)6767-#define HIDPP_QUIRK_HI_RES_SCROLL_1P0 BIT(26)6868-#define HIDPP_QUIRK_HI_RES_SCROLL_X2120 BIT(27)6969-#define HIDPP_QUIRK_HI_RES_SCROLL_X2121 BIT(28)7070-7171-/* Convenience constant to check for any high-res support. */7272-#define HIDPP_QUIRK_HI_RES_SCROLL (HIDPP_QUIRK_HI_RES_SCROLL_1P0 | \7373- HIDPP_QUIRK_HI_RES_SCROLL_X2120 | \7474- HIDPP_QUIRK_HI_RES_SCROLL_X2121)75677668#define HIDPP_QUIRK_DELAYED_INIT HIDPP_QUIRK_NO_HIDINPUT7769···149157 unsigned long capabilities;150158151159 struct hidpp_battery battery;152152- struct hid_scroll_counter vertical_wheel_counter;153160};154161155162/* HID++ 1.0 error codes */···400409#define HIDPP_SET_LONG_REGISTER 0x82401410#define HIDPP_GET_LONG_REGISTER 0x83402411403403-/**404404- * hidpp10_set_register_bit() - Sets a single bit in a HID++ 1.0 register.405405- * @hidpp_dev: the device to set the register on.406406- * @register_address: the address of the register to modify.407407- * @byte: the byte of the register to modify. Should be less than 3.408408- * Return: 0 if successful, otherwise a negative error code.409409- */410410-static int hidpp10_set_register_bit(struct hidpp_device *hidpp_dev,411411- u8 register_address, u8 byte, u8 bit)412412+#define HIDPP_REG_GENERAL 0x00413413+414414+static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev)412415{413416 struct hidpp_report response;414417 int ret;415418 u8 params[3] = { 0 };416419417420 ret = hidpp_send_rap_command_sync(hidpp_dev,418418- REPORT_ID_HIDPP_SHORT,419419- HIDPP_GET_REGISTER,420420- register_address,421421- NULL, 0, &response);421421+ REPORT_ID_HIDPP_SHORT,422422+ HIDPP_GET_REGISTER,423423+ HIDPP_REG_GENERAL,424424+ NULL, 0, &response);422425 if (ret)423426 return ret;424427425428 memcpy(params, response.rap.params, 3);426429427427- params[byte] |= BIT(bit);430430+ /* Set the battery bit */431431+ params[0] |= BIT(4);428432429433 return hidpp_send_rap_command_sync(hidpp_dev,430430- REPORT_ID_HIDPP_SHORT,431431- HIDPP_SET_REGISTER,432432- register_address,433433- params, 3, &response);434434-}435435-436436-437437-#define HIDPP_REG_GENERAL 0x00438438-439439-static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev)440440-{441441- return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_GENERAL, 0, 4);442442-}443443-444444-#define HIDPP_REG_FEATURES 0x01445445-446446-/* On HID++ 1.0 devices, high-res scroll was called "scrolling acceleration". */447447-static int hidpp10_enable_scrolling_acceleration(struct hidpp_device *hidpp_dev)448448-{449449- return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_FEATURES, 0, 6);434434+ REPORT_ID_HIDPP_SHORT,435435+ HIDPP_SET_REGISTER,436436+ HIDPP_REG_GENERAL,437437+ params, 3, &response);450438}451439452440#define HIDPP_REG_BATTERY_STATUS 0x07···11341164 }1135116511361166 return ret;11371137-}11381138-11391139-/* -------------------------------------------------------------------------- */11401140-/* 0x2120: Hi-resolution scrolling */11411141-/* -------------------------------------------------------------------------- */11421142-11431143-#define HIDPP_PAGE_HI_RESOLUTION_SCROLLING 0x212011441144-11451145-#define CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE 0x1011461146-11471147-static int hidpp_hrs_set_highres_scrolling_mode(struct hidpp_device *hidpp,11481148- bool enabled, u8 *multiplier)11491149-{11501150- u8 feature_index;11511151- u8 feature_type;11521152- int ret;11531153- u8 params[1];11541154- struct hidpp_report response;11551155-11561156- ret = hidpp_root_get_feature(hidpp,11571157- HIDPP_PAGE_HI_RESOLUTION_SCROLLING,11581158- &feature_index,11591159- &feature_type);11601160- if (ret)11611161- return ret;11621162-11631163- params[0] = enabled ? BIT(0) : 0;11641164- ret = hidpp_send_fap_command_sync(hidpp, feature_index,11651165- CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE,11661166- params, sizeof(params), &response);11671167- if (ret)11681168- return ret;11691169- *multiplier = response.fap.params[1];11701170- return 0;11711171-}11721172-11731173-/* -------------------------------------------------------------------------- */11741174-/* 0x2121: HiRes Wheel */11751175-/* -------------------------------------------------------------------------- */11761176-11771177-#define HIDPP_PAGE_HIRES_WHEEL 0x212111781178-11791179-#define CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY 0x0011801180-#define CMD_HIRES_WHEEL_SET_WHEEL_MODE 0x2011811181-11821182-static int hidpp_hrw_get_wheel_capability(struct hidpp_device *hidpp,11831183- u8 *multiplier)11841184-{11851185- u8 feature_index;11861186- u8 feature_type;11871187- int ret;11881188- struct hidpp_report response;11891189-11901190- ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL,11911191- &feature_index, &feature_type);11921192- if (ret)11931193- goto return_default;11941194-11951195- ret = hidpp_send_fap_command_sync(hidpp, feature_index,11961196- CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY,11971197- NULL, 0, &response);11981198- if (ret)11991199- goto return_default;12001200-12011201- *multiplier = response.fap.params[0];12021202- return 0;12031203-return_default:12041204- hid_warn(hidpp->hid_dev,12051205- "Couldn't get wheel multiplier (error %d), assuming %d.\n",12061206- ret, *multiplier);12071207- return ret;12081208-}12091209-12101210-static int hidpp_hrw_set_wheel_mode(struct hidpp_device *hidpp, bool invert,12111211- bool high_resolution, bool use_hidpp)12121212-{12131213- u8 feature_index;12141214- u8 feature_type;12151215- int ret;12161216- u8 params[1];12171217- struct hidpp_report response;12181218-12191219- ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL,12201220- &feature_index, &feature_type);12211221- if (ret)12221222- return ret;12231223-12241224- params[0] = (invert ? BIT(2) : 0) |12251225- (high_resolution ? BIT(1) : 0) |12261226- (use_hidpp ? BIT(0) : 0);12271227-12281228- return hidpp_send_fap_command_sync(hidpp, feature_index,12291229- CMD_HIRES_WHEEL_SET_WHEEL_MODE,12301230- params, sizeof(params), &response);12311167}1232116812331169/* -------------------------------------------------------------------------- */···23992523 input_report_rel(mydata->input, REL_Y, v);2400252424012525 v = hid_snto32(data[6], 8);24022402- hid_scroll_counter_handle_scroll(24032403- &hidpp->vertical_wheel_counter, v);25262526+ input_report_rel(mydata->input, REL_WHEEL, v);2404252724052528 input_sync(mydata->input);24062529 }···25282653}2529265425302655/* -------------------------------------------------------------------------- */25312531-/* High-resolution scroll wheels */25322532-/* -------------------------------------------------------------------------- */25332533-25342534-/**25352535- * struct hi_res_scroll_info - Stores info on a device's high-res scroll wheel.25362536- * @product_id: the HID product ID of the device being described.25372537- * @microns_per_hi_res_unit: the distance moved by the user's finger for each25382538- * high-resolution unit reported by the device, in25392539- * 256ths of a millimetre.25402540- */25412541-struct hi_res_scroll_info {25422542- __u32 product_id;25432543- int microns_per_hi_res_unit;25442544-};25452545-25462546-static struct hi_res_scroll_info hi_res_scroll_devices[] = {25472547- { /* Anywhere MX */25482548- .product_id = 0x1017, .microns_per_hi_res_unit = 445 },25492549- { /* Performance MX */25502550- .product_id = 0x101a, .microns_per_hi_res_unit = 406 },25512551- { /* M560 */25522552- .product_id = 0x402d, .microns_per_hi_res_unit = 435 },25532553- { /* MX Master 2S */25542554- .product_id = 0x4069, .microns_per_hi_res_unit = 406 },25552555-};25562556-25572557-static int hi_res_scroll_look_up_microns(__u32 product_id)25582558-{25592559- int i;25602560- int num_devices = sizeof(hi_res_scroll_devices)25612561- / sizeof(hi_res_scroll_devices[0]);25622562- for (i = 0; i < num_devices; i++) {25632563- if (hi_res_scroll_devices[i].product_id == product_id)25642564- return hi_res_scroll_devices[i].microns_per_hi_res_unit;25652565- }25662566- /* We don't have a value for this device, so use a sensible default. */25672567- return 406;25682568-}25692569-25702570-static int hi_res_scroll_enable(struct hidpp_device *hidpp)25712571-{25722572- int ret;25732573- u8 multiplier = 8;25742574-25752575- if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2121) {25762576- ret = hidpp_hrw_set_wheel_mode(hidpp, false, true, false);25772577- hidpp_hrw_get_wheel_capability(hidpp, &multiplier);25782578- } else if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2120) {25792579- ret = hidpp_hrs_set_highres_scrolling_mode(hidpp, true,25802580- &multiplier);25812581- } else /* if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_1P0) */25822582- ret = hidpp10_enable_scrolling_acceleration(hidpp);25832583-25842584- if (ret)25852585- return ret;25862586-25872587- hidpp->vertical_wheel_counter.resolution_multiplier = multiplier;25882588- hidpp->vertical_wheel_counter.microns_per_hi_res_unit =25892589- hi_res_scroll_look_up_microns(hidpp->hid_dev->product);25902590- hid_info(hidpp->hid_dev, "multiplier = %d, microns = %d\n",25912591- multiplier,25922592- hidpp->vertical_wheel_counter.microns_per_hi_res_unit);25932593- return 0;25942594-}25952595-25962596-/* -------------------------------------------------------------------------- */25972656/* Generic HID++ devices */25982657/* -------------------------------------------------------------------------- */25992658···25722763 wtp_populate_input(hidpp, input, origin_is_hid_core);25732764 else if (hidpp->quirks & HIDPP_QUIRK_CLASS_M560)25742765 m560_populate_input(hidpp, input, origin_is_hid_core);25752575-25762576- if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) {25772577- input_set_capability(input, EV_REL, REL_WHEEL_HI_RES);25782578- hidpp->vertical_wheel_counter.dev = input;25792579- }25802766}2581276725822768static int hidpp_input_configured(struct hid_device *hdev,···26882884 return m560_raw_event(hdev, data, size);2689288526902886 return 0;26912691-}26922692-26932693-static int hidpp_event(struct hid_device *hdev, struct hid_field *field,26942694- struct hid_usage *usage, __s32 value)26952695-{26962696- /* This function will only be called for scroll events, due to the26972697- * restriction imposed in hidpp_usages.26982698- */26992699- struct hidpp_device *hidpp = hid_get_drvdata(hdev);27002700- struct hid_scroll_counter *counter = &hidpp->vertical_wheel_counter;27012701- /* A scroll event may occur before the multiplier has been retrieved or27022702- the input device set, or high-res scroll enabling may fail. In such27032703- cases we must return early (falling back to default behaviour) to27042704- avoid a crash in hid_scroll_counter_handle_scroll.27052705- */27062706- if (!(hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) || value == 027072707- || counter->dev == NULL || counter->resolution_multiplier == 0)27082708- return 0;27092709-27102710- hid_scroll_counter_handle_scroll(counter, value);27112711- return 1;27122887}2713288827142889static int hidpp_initialize_battery(struct hidpp_device *hidpp)···29013118 if (hidpp->battery.ps)29023119 power_supply_changed(hidpp->battery.ps);2903312029042904- if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL)29052905- hi_res_scroll_enable(hidpp);29062906-29073121 if (!(hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT) || hidpp->delayed_input)29083122 /* if the input nodes are already created, we can stop now */29093123 return;···30863306 mutex_destroy(&hidpp->send_mutex);30873307}3088330830893089-#define LDJ_DEVICE(product) \30903090- HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, \30913091- USB_VENDOR_ID_LOGITECH, (product))30923092-30933309static const struct hid_device_id hidpp_devices[] = {30943310 { /* wireless touchpad */30953095- LDJ_DEVICE(0x4011),33113311+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,33123312+ USB_VENDOR_ID_LOGITECH, 0x4011),30963313 .driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT |30973314 HIDPP_QUIRK_WTP_PHYSICAL_BUTTONS },30983315 { /* wireless touchpad T650 */30993099- LDJ_DEVICE(0x4101),33163316+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,33173317+ USB_VENDOR_ID_LOGITECH, 0x4101),31003318 .driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT },31013319 { /* wireless touchpad T651 */31023320 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,31033321 USB_DEVICE_ID_LOGITECH_T651),31043322 .driver_data = HIDPP_QUIRK_CLASS_WTP },31053105- { /* Mouse Logitech Anywhere MX */31063106- LDJ_DEVICE(0x1017), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },31073107- { /* Mouse Logitech Cube */31083108- LDJ_DEVICE(0x4010), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 },31093109- { /* Mouse Logitech M335 */31103110- LDJ_DEVICE(0x4050), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31113111- { /* Mouse Logitech M515 */31123112- LDJ_DEVICE(0x4007), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 },31133323 { /* Mouse logitech M560 */31143114- LDJ_DEVICE(0x402d),31153115- .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M56031163116- | HIDPP_QUIRK_HI_RES_SCROLL_X2120 },31173117- { /* Mouse Logitech M705 (firmware RQM17) */31183118- LDJ_DEVICE(0x101b), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },31193119- { /* Mouse Logitech M705 (firmware RQM67) */31203120- LDJ_DEVICE(0x406d), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31213121- { /* Mouse Logitech M720 */31223122- LDJ_DEVICE(0x405e), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31233123- { /* Mouse Logitech MX Anywhere 2 */31243124- LDJ_DEVICE(0x404a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31253125- { LDJ_DEVICE(0xb013), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31263126- { LDJ_DEVICE(0xb018), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31273127- { LDJ_DEVICE(0xb01f), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31283128- { /* Mouse Logitech MX Anywhere 2S */31293129- LDJ_DEVICE(0x406a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31303130- { /* Mouse Logitech MX Master */31313131- LDJ_DEVICE(0x4041), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31323132- { LDJ_DEVICE(0x4060), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31333133- { LDJ_DEVICE(0x4071), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31343134- { /* Mouse Logitech MX Master 2S */31353135- LDJ_DEVICE(0x4069), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31363136- { /* Mouse Logitech Performance MX */31373137- LDJ_DEVICE(0x101a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },33243324+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,33253325+ USB_VENDOR_ID_LOGITECH, 0x402d),33263326+ .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560 },31383327 { /* Keyboard logitech K400 */31393139- LDJ_DEVICE(0x4024),33283328+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,33293329+ USB_VENDOR_ID_LOGITECH, 0x4024),31403330 .driver_data = HIDPP_QUIRK_CLASS_K400 },31413331 { /* Solar Keyboard Logitech K750 */31423142- LDJ_DEVICE(0x4002),33323332+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,33333333+ USB_VENDOR_ID_LOGITECH, 0x4002),31433334 .driver_data = HIDPP_QUIRK_CLASS_K750 },3144333531453145- { LDJ_DEVICE(HID_ANY_ID) },33363336+ { HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,33373337+ USB_VENDOR_ID_LOGITECH, HID_ANY_ID)},3146333831473339 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_G920_WHEEL),31483340 .driver_data = HIDPP_QUIRK_CLASS_G920 | HIDPP_QUIRK_FORCE_OUTPUT_REPORTS},···3123337131243372MODULE_DEVICE_TABLE(hid, hidpp_devices);3125337331263126-static const struct hid_usage_id hidpp_usages[] = {31273127- { HID_GD_WHEEL, EV_REL, REL_WHEEL },31283128- { HID_ANY_ID - 1, HID_ANY_ID - 1, HID_ANY_ID - 1}31293129-};31303130-31313374static struct hid_driver hidpp_driver = {31323375 .name = "logitech-hidpp-device",31333376 .id_table = hidpp_devices,31343377 .probe = hidpp_probe,31353378 .remove = hidpp_remove,31363379 .raw_event = hidpp_raw_event,31373137- .usage_table = hidpp_usages,31383138- .event = hidpp_event,31393380 .input_configured = hidpp_input_configured,31403381 .input_mapping = hidpp_input_mapping,31413382 .input_mapped = hidpp_input_mapped,
···2323 * In order to avoid breaking them this driver creates a layered hidraw device,2424 * so it can detect when the client is running and then:2525 * - it will not send any command to the controller.2626- * - this input device will be disabled, to avoid double input of the same2626+ * - this input device will be removed, to avoid double input of the same2727 * user action.2828+ * When the client is closed, this input device will be created again.2829 *2930 * For additional functions, such as changing the right-pad margin or switching3031 * the led, you can use the user-space tool at:···114113 spinlock_t lock;115114 struct hid_device *hdev, *client_hdev;116115 struct mutex mutex;117117- bool client_opened, input_opened;116116+ bool client_opened;118117 struct input_dev __rcu *input;119118 unsigned long quirks;120119 struct work_struct work_connect;···280279 }281280}282281283283-static void steam_update_lizard_mode(struct steam_device *steam)284284-{285285- mutex_lock(&steam->mutex);286286- if (!steam->client_opened) {287287- if (steam->input_opened)288288- steam_set_lizard_mode(steam, false);289289- else290290- steam_set_lizard_mode(steam, lizard_mode);291291- }292292- mutex_unlock(&steam->mutex);293293-}294294-295282static int steam_input_open(struct input_dev *dev)296283{297284 struct steam_device *steam = input_get_drvdata(dev);···290301 return ret;291302292303 mutex_lock(&steam->mutex);293293- steam->input_opened = true;294304 if (!steam->client_opened && lizard_mode)295305 steam_set_lizard_mode(steam, false);296306 mutex_unlock(&steam->mutex);···301313 struct steam_device *steam = input_get_drvdata(dev);302314303315 mutex_lock(&steam->mutex);304304- steam->input_opened = false;305316 if (!steam->client_opened && lizard_mode)306317 steam_set_lizard_mode(steam, true);307318 mutex_unlock(&steam->mutex);···387400 return 0;388401}389402390390-static int steam_register(struct steam_device *steam)403403+static int steam_input_register(struct steam_device *steam)391404{392405 struct hid_device *hdev = steam->hdev;393406 struct input_dev *input;···400413 dbg_hid("%s: already connected\n", __func__);401414 return 0;402415 }403403-404404- /*405405- * Unlikely, but getting the serial could fail, and it is not so406406- * important, so make up a serial number and go on.407407- */408408- if (steam_get_serial(steam) < 0)409409- strlcpy(steam->serial_no, "XXXXXXXXXX",410410- sizeof(steam->serial_no));411411-412412- hid_info(hdev, "Steam Controller '%s' connected",413413- steam->serial_no);414416415417 input = input_allocate_device();416418 if (!input)···468492 goto input_register_fail;469493470494 rcu_assign_pointer(steam->input, input);471471-472472- /* ignore battery errors, we can live without it */473473- if (steam->quirks & STEAM_QUIRK_WIRELESS)474474- steam_battery_register(steam);475475-476495 return 0;477496478497input_register_fail:···475504 return ret;476505}477506478478-static void steam_unregister(struct steam_device *steam)507507+static void steam_input_unregister(struct steam_device *steam)479508{480509 struct input_dev *input;510510+ rcu_read_lock();511511+ input = rcu_dereference(steam->input);512512+ rcu_read_unlock();513513+ if (!input)514514+ return;515515+ RCU_INIT_POINTER(steam->input, NULL);516516+ synchronize_rcu();517517+ input_unregister_device(input);518518+}519519+520520+static void steam_battery_unregister(struct steam_device *steam)521521+{481522 struct power_supply *battery;482523483524 rcu_read_lock();484484- input = rcu_dereference(steam->input);485525 battery = rcu_dereference(steam->battery);486526 rcu_read_unlock();487527488488- if (battery) {489489- RCU_INIT_POINTER(steam->battery, NULL);490490- synchronize_rcu();491491- power_supply_unregister(battery);528528+ if (!battery)529529+ return;530530+ RCU_INIT_POINTER(steam->battery, NULL);531531+ synchronize_rcu();532532+ power_supply_unregister(battery);533533+}534534+535535+static int steam_register(struct steam_device *steam)536536+{537537+ int ret;538538+539539+ /*540540+ * This function can be called several times in a row with the541541+ * wireless adaptor, without steam_unregister() between them, because542542+ * another client sends a get_connection_status command, for example.543543+ * The battery and serial number are set just once per device.544544+ */545545+ if (!steam->serial_no[0]) {546546+ /*547547+ * Unlikely, but getting the serial could fail, and it is not so548548+ * important, so make up a serial number and go on.549549+ */550550+ if (steam_get_serial(steam) < 0)551551+ strlcpy(steam->serial_no, "XXXXXXXXXX",552552+ sizeof(steam->serial_no));553553+554554+ hid_info(steam->hdev, "Steam Controller '%s' connected",555555+ steam->serial_no);556556+557557+ /* ignore battery errors, we can live without it */558558+ if (steam->quirks & STEAM_QUIRK_WIRELESS)559559+ steam_battery_register(steam);560560+561561+ mutex_lock(&steam_devices_lock);562562+ list_add(&steam->list, &steam_devices);563563+ mutex_unlock(&steam_devices_lock);492564 }493493- if (input) {494494- RCU_INIT_POINTER(steam->input, NULL);495495- synchronize_rcu();565565+566566+ mutex_lock(&steam->mutex);567567+ if (!steam->client_opened) {568568+ steam_set_lizard_mode(steam, lizard_mode);569569+ ret = steam_input_register(steam);570570+ } else {571571+ ret = 0;572572+ }573573+ mutex_unlock(&steam->mutex);574574+575575+ return ret;576576+}577577+578578+static void steam_unregister(struct steam_device *steam)579579+{580580+ steam_battery_unregister(steam);581581+ steam_input_unregister(steam);582582+ if (steam->serial_no[0]) {496583 hid_info(steam->hdev, "Steam Controller '%s' disconnected",497584 steam->serial_no);498498- input_unregister_device(input);585585+ mutex_lock(&steam_devices_lock);586586+ list_del(&steam->list);587587+ mutex_unlock(&steam_devices_lock);588588+ steam->serial_no[0] = 0;499589 }500590}501591···632600 mutex_lock(&steam->mutex);633601 steam->client_opened = true;634602 mutex_unlock(&steam->mutex);603603+604604+ steam_input_unregister(steam);605605+635606 return ret;636607}637608···644609645610 mutex_lock(&steam->mutex);646611 steam->client_opened = false;647647- if (steam->input_opened)648648- steam_set_lizard_mode(steam, false);649649- else650650- steam_set_lizard_mode(steam, lizard_mode);651612 mutex_unlock(&steam->mutex);652613653614 hid_hw_close(steam->hdev);615615+ if (steam->connected) {616616+ steam_set_lizard_mode(steam, lizard_mode);617617+ steam_input_register(steam);618618+ }654619}655620656621static int steam_client_ll_raw_request(struct hid_device *hdev,···779744 }780745 }781746782782- mutex_lock(&steam_devices_lock);783783- steam_update_lizard_mode(steam);784784- list_add(&steam->list, &steam_devices);785785- mutex_unlock(&steam_devices_lock);786786-787747 return 0;788748789749hid_hw_open_fail:···804774 return;805775 }806776807807- mutex_lock(&steam_devices_lock);808808- list_del(&steam->list);809809- mutex_unlock(&steam_devices_lock);810810-811777 hid_destroy_device(steam->client_hdev);812778 steam->client_opened = false;813779 cancel_work_sync(&steam->work_connect);···818792static void steam_do_connect_event(struct steam_device *steam, bool connected)819793{820794 unsigned long flags;795795+ bool changed;821796822797 spin_lock_irqsave(&steam->lock, flags);798798+ changed = steam->connected != connected;823799 steam->connected = connected;824800 spin_unlock_irqrestore(&steam->lock, flags);825801826826- if (schedule_work(&steam->work_connect) == 0)802802+ if (changed && schedule_work(&steam->work_connect) == 0)827803 dbg_hid("%s: connected=%d event already queued\n",828804 __func__, connected);829805}···10471019 return 0;10481020 rcu_read_lock();10491021 input = rcu_dereference(steam->input);10501050- if (likely(input)) {10221022+ if (likely(input))10511023 steam_do_input_event(steam, input, data);10521052- } else {10531053- dbg_hid("%s: input data without connect event\n",10541054- __func__);10551055- steam_do_connect_event(steam, true);10561056- }10571024 rcu_read_unlock();10581025 break;10591026 case STEAM_EV_CONNECT:···1097107410981075 mutex_lock(&steam_devices_lock);10991076 list_for_each_entry(steam, &steam_devices, list) {11001100- steam_update_lizard_mode(steam);10771077+ mutex_lock(&steam->mutex);10781078+ if (!steam->client_opened)10791079+ steam_set_lizard_mode(steam, lizard_mode);10801080+ mutex_unlock(&steam->mutex);11011081 }11021082 mutex_unlock(&steam_devices_lock);11031083 return 0;
···12121313#include <linux/atomic.h>1414#include <linux/compat.h>1515+#include <linux/cred.h>1516#include <linux/device.h>1617#include <linux/fs.h>1718#include <linux/hid.h>···497496 goto err_free;498497 }499498500500- len = min(sizeof(hid->name), sizeof(ev->u.create2.name));501501- strlcpy(hid->name, ev->u.create2.name, len);502502- len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys));503503- strlcpy(hid->phys, ev->u.create2.phys, len);504504- len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq));505505- strlcpy(hid->uniq, ev->u.create2.uniq, len);499499+ /* @hid is zero-initialized, strncpy() is correct, strlcpy() not */500500+ len = min(sizeof(hid->name), sizeof(ev->u.create2.name)) - 1;501501+ strncpy(hid->name, ev->u.create2.name, len);502502+ len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)) - 1;503503+ strncpy(hid->phys, ev->u.create2.phys, len);504504+ len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)) - 1;505505+ strncpy(hid->uniq, ev->u.create2.uniq, len);506506507507 hid->ll_driver = &uhid_hid_driver;508508 hid->bus = ev->u.create2.bus;···724722725723 switch (uhid->input_buf.type) {726724 case UHID_CREATE:725725+ /*726726+ * 'struct uhid_create_req' contains a __user pointer which is727727+ * copied from, so it's unsafe to allow this with elevated728728+ * privileges (e.g. from a setuid binary) or via kernel_write().729729+ */730730+ if (file->f_cred != current_cred() || uaccess_kernel()) {731731+ pr_err_once("UHID_CREATE from different security context by process %d (%s), this is not allowed.\n",732732+ task_tgid_vnr(current), current->comm);733733+ ret = -EACCES;734734+ goto unlock;735735+ }727736 ret = uhid_dev_create(uhid, &uhid->input_buf);728737 break;729738 case UHID_CREATE2:
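The comment's reasoning can be demonstrated in a few lines of userspace C (the buffer names here are made up, not the uhid ones): strlcpy() walks the source until it finds a NUL in order to compute its return value, so the possibly unterminated strings uhid receives from user space would be over-read, while strncpy() bounded by min(sizeof(dst), sizeof(src)) - 1 reads and writes at most that many bytes and leaves the final byte of the zeroed destination untouched:

	#include <stdio.h>
	#include <string.h>

	#define MIN(a, b) ((a) < (b) ? (a) : (b))

	int main(void)
	{
		/* fixed-size source field with no NUL terminator, like
		 * ev->u.create2.name arriving from user space */
		char src[8] = { 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h' };
		char dst[8] = { 0 };	/* zero-initialized, like @hid */
		size_t len = MIN(sizeof(dst), sizeof(src)) - 1;

		strncpy(dst, src, len);	/* reads/writes at most 7 bytes */
		printf("%s\n", dst);	/* prints "abcdefg"; dst[7] is still 0 */
		return 0;
	}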
···516516 }517517 wait_for_completion(&msginfo->waitevent);518518519519+ if (msginfo->response.gpadl_created.creation_status != 0) {520520+ pr_err("Failed to establish GPADL: err = 0x%x\n",521521+ msginfo->response.gpadl_created.creation_status);522522+523523+ ret = -EDQUOT;524524+ goto cleanup;525525+ }526526+519527 if (channel->rescind) {520528 ret = -ENODEV;521529 goto cleanup;
+22-4
drivers/hv/hv_kvp.c
···353353354354 out->body.kvp_ip_val.dhcp_enabled = in->kvp_ip_val.dhcp_enabled;355355356356+ /* fallthrough */357357+358358+ case KVP_OP_GET_IP_INFO:356359 utf16s_to_utf8s((wchar_t *)in->kvp_ip_val.adapter_id,357360 MAX_ADAPTER_ID_SIZE,358361 UTF16_LITTLE_ENDIAN,···408405 process_ib_ipinfo(in_msg, message, KVP_OP_SET_IP_INFO);409406 break;410407 case KVP_OP_GET_IP_INFO:411411- /* We only need to pass on message->kvp_hdr.operation. */408408+ /*409409+ * We only need to pass on the operation, adapter_id410410+ * and addr_family info to the userland kvp daemon.411411+ */412412+ process_ib_ipinfo(in_msg, message, KVP_OP_GET_IP_INFO);412413 break;413414 case KVP_OP_SET:414415 switch (in_msg->body.kvp_set.data.value_type) {···453446454447 }455448456456- break;457457-458458- case KVP_OP_GET:449449+ /*450450+ * The key is always a string - utf16 encoding.451451+ */459452 message->body.kvp_set.data.key_size =460453 utf16s_to_utf8s(461454 (wchar_t *)in_msg->body.kvp_set.data.key,462455 in_msg->body.kvp_set.data.key_size,463456 UTF16_LITTLE_ENDIAN,464457 message->body.kvp_set.data.key,458458+ HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1;459459+460460+ break;461461+462462+ case KVP_OP_GET:463463+ message->body.kvp_get.data.key_size =464464+ utf16s_to_utf8s(465465+ (wchar_t *)in_msg->body.kvp_get.data.key,466466+ in_msg->body.kvp_get.data.key_size,467467+ UTF16_LITTLE_ENDIAN,468468+ message->body.kvp_get.data.key,465469 HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1;466470 break;467471
+4-4
drivers/hwmon/hwmon.c
···649649 if (info[i]->config[j] & HWMON_T_INPUT) {650650 err = hwmon_thermal_add_sensor(dev,651651 hwdev, j);652652- if (err)653653- goto free_device;652652+ if (err) {653653+ device_unregister(hdev);654654+ goto ida_remove;655655+ }654656 }655657 }656658 }···660658661659 return hdev;662660663663-free_device:664664- device_unregister(hdev);665661free_hwmon:666662 kfree(hwdev);667663ida_remove:
···224224 This driver can also be built as a module. If so, the module225225 will be called i2c-nforce2-s4985.226226227227+config I2C_NVIDIA_GPU228228+ tristate "NVIDIA GPU I2C controller"229229+ depends on PCI230230+ help231231+ If you say yes to this option, support will be included for the232232+ NVIDIA GPU I2C controller which is used to communicate with the GPU's233233+ Type-C controller. This driver can also be built as a module called234234+ i2c-nvidia-gpu.235235+227236config I2C_SIS5595228237 tristate "SiS 5595"229238 depends on PCI···761752762753config I2C_OMAP763754 tristate "OMAP I2C adapter"764764- depends on ARCH_OMAP755755+ depends on ARCH_OMAP || ARCH_K3765756 default y if MACH_OMAP_H3 || MACH_OMAP_OSK766757 help767758 If you say yes to this option, support will be included for the
···11241124 IB_MR_CHECK_SIG_STATUS, &mr_status);11251125 if (ret) {11261126 pr_err("ib_check_mr_status failed, ret %d\n", ret);11271127- goto err;11271127+ /* Not a lot we can do, return ambiguous guard error */11281128+ *sector = 0;11291129+ return 0x1;11281130 }1129113111301132 if (mr_status.fail_status & IB_MR_CHECK_SIG_STATUS) {···11541152 }1155115311561154 return 0;11571157-err:11581158- /* Not alot we can do here, return ambiguous guard error */11591159- return 0x1;11601155}1161115611621157void iser_err_comp(struct ib_wc *wc, const char *type)
···595595 pr_err("%s: Page request without PASID: %08llx %08llx\n",596596 iommu->name, ((unsigned long long *)req)[0],597597 ((unsigned long long *)req)[1]);598598- goto bad_req;598598+ goto no_pasid;599599 }600600601601 if (!svm || svm->pasid != req->pasid) {
+3
drivers/iommu/ipmmu-vmsa.c
···498498499499static void ipmmu_domain_destroy_context(struct ipmmu_vmsa_domain *domain)500500{501501+ if (!domain->mmu)502502+ return;503503+501504 /*502505 * Disable the context. Flush the TLB as required when modifying the503506 * context registers.
+8-19
drivers/leds/trigger/ledtrig-pattern.c
···7575{7676 struct pattern_trig_data *data = from_timer(data, t, timer);77777878- mutex_lock(&data->lock);7979-8078 for (;;) {8179 if (!data->is_indefinite && !data->repeat)8280 break;···8587 data->curr->brightness);8688 mod_timer(&data->timer,8789 jiffies + msecs_to_jiffies(data->curr->delta_t));8888-8989- /* Skip the tuple with zero duration */9090- pattern_trig_update_patterns(data);9090+ if (!data->next->delta_t) {9191+ /* Skip the tuple with zero duration */9292+ pattern_trig_update_patterns(data);9393+ }9194 /* Select next tuple */9295 pattern_trig_update_patterns(data);9396 } else {···115116116117 break;117118 }118118-119119- mutex_unlock(&data->lock);120119}121120122121static int pattern_trig_start_pattern(struct led_classdev *led_cdev)···173176 if (res < -1 || res == 0)174177 return -EINVAL;175178176176- /*177177- * Clear previous patterns' performence firstly, and remove the timer178178- * without mutex lock to avoid dead lock.179179- */180180- del_timer_sync(&data->timer);181181-182179 mutex_lock(&data->lock);180180+181181+ del_timer_sync(&data->timer);183182184183 if (data->is_hw_pattern)185184 led_cdev->pattern_clear(led_cdev);···227234 struct pattern_trig_data *data = led_cdev->trigger_data;228235 int ccount, cr, offset = 0, err = 0;229236230230- /*231231- * Clear previous patterns' performence firstly, and remove the timer232232- * without mutex lock to avoid dead lock.233233- */234234- del_timer_sync(&data->timer);235235-236237 mutex_lock(&data->lock);238238+239239+ del_timer_sync(&data->timer);237240238241 if (data->is_hw_pattern)239242 led_cdev->pattern_clear(led_cdev);
+38-11
drivers/media/cec/cec-adap.c
···807807 }808808809809 if (adap->transmit_queue_sz >= CEC_MAX_MSG_TX_QUEUE_SZ) {810810- dprintk(1, "%s: transmit queue full\n", __func__);810810+ dprintk(2, "%s: transmit queue full\n", __func__);811811 return -EBUSY;812812 }813813···11801180{11811181 struct cec_log_addrs *las = &adap->log_addrs;11821182 struct cec_msg msg = { };11831183+ const unsigned int max_retries = 2;11841184+ unsigned int i;11831185 int err;1184118611851187 if (cec_has_log_addr(adap, log_addr))···11901188 /* Send poll message */11911189 msg.len = 1;11921190 msg.msg[0] = (log_addr << 4) | log_addr;11931193- err = cec_transmit_msg_fh(adap, &msg, NULL, true);11911191+11921192+ for (i = 0; i < max_retries; i++) {11931193+ err = cec_transmit_msg_fh(adap, &msg, NULL, true);11941194+11951195+ /*11961196+ * While trying to poll the physical address was reset11971197+ * and the adapter was unconfigured, so bail out.11981198+ */11991199+ if (!adap->is_configuring)12001200+ return -EINTR;12011201+12021202+ if (err)12031203+ return err;12041204+12051205+ /*12061206+ * The message was aborted due to a disconnect or12071207+ * unconfigure, just bail out.12081208+ */12091209+ if (msg.tx_status & CEC_TX_STATUS_ABORTED)12101210+ return -EINTR;12111211+ if (msg.tx_status & CEC_TX_STATUS_OK)12121212+ return 0;12131213+ if (msg.tx_status & CEC_TX_STATUS_NACK)12141214+ break;12151215+ /*12161216+ * Retry up to max_retries times if the message was neither12171217+ * OKed or NACKed. This can happen due to e.g. a Lost12181218+ * Arbitration condition.12191219+ */12201220+ }1194122111951222 /*11961196- * While trying to poll the physical address was reset11971197- * and the adapter was unconfigured, so bail out.12231223+ * If we are unable to get an OK or a NACK after max_retries attempts12241224+ * (and note that each attempt already consists of four polls), then12251225+ * then we assume that something is really weird and that it is not a12261226+ * good idea to try and claim this logical address.11981227 */11991199- if (!adap->is_configuring)12001200- return -EINTR;12011201-12021202- if (err)12031203- return err;12041204-12051205- if (msg.tx_status & CEC_TX_STATUS_OK)12281228+ if (i == max_retries)12061229 return 0;1207123012081231 /*
-1
drivers/media/i2c/tc358743.c
···19181918 ret = v4l2_fwnode_endpoint_alloc_parse(of_fwnode_handle(ep), &endpoint);19191919 if (ret) {19201920 dev_err(dev, "failed to parse endpoint\n");19211921- ret = ret;19221921 goto put_node;19231922 }19241923
+4-6
drivers/media/pci/intel/ipu3/ipu3-cio2.c
···18441844static void cio2_pci_remove(struct pci_dev *pci_dev)18451845{18461846 struct cio2_device *cio2 = pci_get_drvdata(pci_dev);18471847- unsigned int i;1848184718491849- cio2_notifier_exit(cio2);18501850- cio2_fbpt_exit_dummy(cio2);18511851- for (i = 0; i < CIO2_QUEUES; i++)18521852- cio2_queue_exit(cio2, &cio2->queue[i]);18531853- v4l2_device_unregister(&cio2->v4l2_dev);18541848 media_device_unregister(&cio2->media_dev);18491849+ cio2_notifier_exit(cio2);18501850+ cio2_queues_exit(cio2);18511851+ cio2_fbpt_exit_dummy(cio2);18521852+ v4l2_device_unregister(&cio2->v4l2_dev);18551853 media_device_cleanup(&cio2->media_dev);18561854 mutex_destroy(&cio2->lock);18571855}
···20322032 int ret;2033203320342034 nand_np = dev->of_node;20352035- nfc_np = of_find_compatible_node(dev->of_node, NULL,20362036- "atmel,sama5d3-nfc");20352035+ nfc_np = of_get_compatible_child(dev->of_node, "atmel,sama5d3-nfc");20372036 if (!nfc_np) {20382037 dev_err(dev, "Could not find device node for sama5d3-nfc\n");20392038 return -ENODEV;···24462447 }2447244824482449 if (caps->legacy_of_bindings) {24502450+ struct device_node *nfc_node;24492451 u32 ale_offs = 21;2450245224512453 /*24522454 * If we are parsing legacy DT props and the DT contains a24532455 * valid NFC node, forward the request to the sama5 logic.24542456 */24552455- if (of_find_compatible_node(pdev->dev.of_node, NULL,24562456- "atmel,sama5d3-nfc"))24572457+ nfc_node = of_get_compatible_child(pdev->dev.of_node,24582458+ "atmel,sama5d3-nfc");24592459+ if (nfc_node) {24572460 caps = &atmel_sama5_nand_caps;24612461+ of_node_put(nfc_node);24622462+ }2458246324592464 /*24602465 * Even if the compatible says we are dealing with an
-1
drivers/mtd/nand/raw/nand_base.c
···590590591591/**592592 * panic_nand_wait - [GENERIC] wait until the command is done593593- * @mtd: MTD device structure594593 * @chip: NAND chip structure595594 * @timeo: timeout596595 *
···644644 ndelay(cqspi->wr_delay);645645646646 while (remaining > 0) {647647+ size_t write_words, mod_bytes;648648+647649 write_bytes = remaining > page_size ? page_size : remaining;648648- iowrite32_rep(cqspi->ahb_base, txbuf,649649- DIV_ROUND_UP(write_bytes, 4));650650+ write_words = write_bytes / 4;651651+ mod_bytes = write_bytes % 4;652652+ /* Write 4 bytes at a time then single bytes. */653653+ if (write_words) {654654+ iowrite32_rep(cqspi->ahb_base, txbuf, write_words);655655+ txbuf += (write_words * 4);656656+ }657657+ if (mod_bytes) {658658+ unsigned int temp = 0xFFFFFFFF;659659+660660+ memcpy(&temp, txbuf, mod_bytes);661661+ iowrite32(temp, cqspi->ahb_base);662662+ txbuf += mod_bytes;663663+ }650664651665 if (!wait_for_completion_timeout(&cqspi->transfer_complete,652666 msecs_to_jiffies(CQSPI_TIMEOUT_MS))) {···669655 goto failwr;670656 }671657672672- txbuf += write_bytes;673658 remaining -= write_bytes;674659675660 if (remaining > 0)···1009996err_unmap:1010997 dma_unmap_single(nor->dev, dma_dst, len, DMA_FROM_DEVICE);101199810121012- return 0;999999+ return ret;10131000}1014100110151002static ssize_t cqspi_read(struct spi_nor *nor, loff_t from,
+130-35
drivers/mtd/spi-nor/spi-nor.c
···21562156 * @nor: pointer to a 'struct spi_nor'21572157 * @addr: offset in the serial flash memory21582158 * @len: number of bytes to read21592159- * @buf: buffer where the data is copied into21592159+ * @buf: buffer where the data is copied into (dma-safe memory)21602160 *21612161 * Return: 0 on success, -errno otherwise.21622162 */···25222522}2523252325242524/**25252525+ * spi_nor_sort_erase_mask() - sort erase mask25262526+ * @map: the erase map of the SPI NOR25272527+ * @erase_mask: the erase type mask to be sorted25282528+ *25292529+ * Replicate the sort done for the map's erase types in BFPT: sort the erase25302530+ * mask in ascending order with the smallest erase type size starting from25312531+ * BIT(0) in the sorted erase mask.25322532+ *25332533+ * Return: sorted erase mask.25342534+ */25352535+static u8 spi_nor_sort_erase_mask(struct spi_nor_erase_map *map, u8 erase_mask)25362536+{25372537+ struct spi_nor_erase_type *erase_type = map->erase_type;25382538+ int i;25392539+ u8 sorted_erase_mask = 0;25402540+25412541+ if (!erase_mask)25422542+ return 0;25432543+25442544+ /* Replicate the sort done for the map's erase types. */25452545+ for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++)25462546+ if (erase_type[i].size && erase_mask & BIT(erase_type[i].idx))25472547+ sorted_erase_mask |= BIT(i);25482548+25492549+ return sorted_erase_mask;25502550+}25512551+25522552+/**25252553 * spi_nor_regions_sort_erase_types() - sort erase types in each region25262554 * @map: the erase map of the SPI NOR25272555 *···25642536static void spi_nor_regions_sort_erase_types(struct spi_nor_erase_map *map)25652537{25662538 struct spi_nor_erase_region *region = map->regions;25672567- struct spi_nor_erase_type *erase_type = map->erase_type;25682568- int i;25692539 u8 region_erase_mask, sorted_erase_mask;2570254025712541 while (region) {25722542 region_erase_mask = region->offset & SNOR_ERASE_TYPE_MASK;2573254325742574- /* Replicate the sort done for the map's erase types. */25752575- sorted_erase_mask = 0;25762576- for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++)25772577- if (erase_type[i].size &&25782578- region_erase_mask & BIT(erase_type[i].idx))25792579- sorted_erase_mask |= BIT(i);25442544+ sorted_erase_mask = spi_nor_sort_erase_mask(map,25452545+ region_erase_mask);2580254625812547 /* Overwrite erase mask. */25822548 region->offset = (region->offset & ~SNOR_ERASE_TYPE_MASK) |···28772855 * spi_nor_get_map_in_use() - get the configuration map in use28782856 * @nor: pointer to a 'struct spi_nor'28792857 * @smpt: pointer to the sector map parameter table28582858+ * @smpt_len: sector map parameter table length28592859+ *28602860+ * Return: pointer to the map in use, ERR_PTR(-errno) otherwise.28802861 */28812881-static const u32 *spi_nor_get_map_in_use(struct spi_nor *nor, const u32 *smpt)28622862+static const u32 *spi_nor_get_map_in_use(struct spi_nor *nor, const u32 *smpt,28632863+ u8 smpt_len)28822864{28832883- const u32 *ret = NULL;28842884- u32 i, addr;28652865+ const u32 *ret;28662866+ u8 *buf;28672867+ u32 addr;28852868 int err;28692869+ u8 i;28862870 u8 addr_width, read_opcode, read_dummy;28872887- u8 read_data_mask, data_byte, map_id;28712871+ u8 read_data_mask, map_id;28722872+28732873+ /* Use a kmalloc'ed bounce buffer to guarantee it is DMA-able. 
*/28742874+ buf = kmalloc(sizeof(*buf), GFP_KERNEL);28752875+ if (!buf)28762876+ return ERR_PTR(-ENOMEM);2888287728892878 addr_width = nor->addr_width;28902879 read_dummy = nor->read_dummy;28912880 read_opcode = nor->read_opcode;2892288128932882 map_id = 0;28942894- i = 0;28952883 /* Determine if there are any optional Detection Command Descriptors */28962896- while (!(smpt[i] & SMPT_DESC_TYPE_MAP)) {28842884+ for (i = 0; i < smpt_len; i += 2) {28852885+ if (smpt[i] & SMPT_DESC_TYPE_MAP)28862886+ break;28872887+28972888 read_data_mask = SMPT_CMD_READ_DATA(smpt[i]);28982889 nor->addr_width = spi_nor_smpt_addr_width(nor, smpt[i]);28992890 nor->read_dummy = spi_nor_smpt_read_dummy(nor, smpt[i]);29002891 nor->read_opcode = SMPT_CMD_OPCODE(smpt[i]);29012892 addr = smpt[i + 1];2902289329032903- err = spi_nor_read_raw(nor, addr, 1, &data_byte);29042904- if (err)28942894+ err = spi_nor_read_raw(nor, addr, 1, buf);28952895+ if (err) {28962896+ ret = ERR_PTR(err);29052897 goto out;28982898+ }2906289929072900 /*29082901 * Build an index value that is used to select the Sector Map29092902 * Configuration that is currently in use.29102903 */29112911- map_id = map_id << 1 | !!(data_byte & read_data_mask);29122912- i = i + 2;29042904+ map_id = map_id << 1 | !!(*buf & read_data_mask);29132905 }2914290629152915- /* Find the matching configuration map */29162916- while (SMPT_MAP_ID(smpt[i]) != map_id) {29072907+ /*29082908+ * If command descriptors are provided, they always precede map29092909+ * descriptors in the table. There is no need to start the iteration29102910+ * over smpt array all over again.29112911+ *29122912+ * Find the matching configuration map.29132913+ */29142914+ ret = ERR_PTR(-EINVAL);29152915+ while (i < smpt_len) {29162916+ if (SMPT_MAP_ID(smpt[i]) == map_id) {29172917+ ret = smpt + i;29182918+ break;29192919+ }29202920+29212921+ /*29222922+ * If there are no more configuration map descriptors and no29232923+ * configuration ID matched the configuration identifier, the29242924+ * sector address map is unknown.29252925+ */29172926 if (smpt[i] & SMPT_DESC_END)29182918- goto out;29272927+ break;29282928+29192929 /* increment the table index to the next map */29202930 i += SMPT_MAP_REGION_COUNT(smpt[i]) + 1;29212931 }2922293229232923- ret = smpt + i;29242933 /* fall through */29252934out:29352935+ kfree(buf);29262936 nor->addr_width = addr_width;29272937 nor->read_dummy = read_dummy;29282938 nor->read_opcode = read_opcode;···29952941 const u32 *smpt)29962942{29972943 struct spi_nor_erase_map *map = &nor->erase_map;29982998- const struct spi_nor_erase_type *erase = map->erase_type;29442944+ struct spi_nor_erase_type *erase = map->erase_type;29992945 struct spi_nor_erase_region *region;30002946 u64 offset;30012947 u32 region_count;30022948 int i, j;30033003- u8 erase_type;29492949+ u8 uniform_erase_type, save_uniform_erase_type;29502950+ u8 erase_type, regions_erase_type;3004295130052952 region_count = SMPT_MAP_REGION_COUNT(*smpt);30062953 /*···30142959 return -ENOMEM;30152960 map->regions = region;3016296130173017- map->uniform_erase_type = 0xff;29622962+ uniform_erase_type = 0xff;29632963+ regions_erase_type = 0;30182964 offset = 0;30192965 /* Populate regions. 
*/30202966 for (i = 0; i < region_count; i++) {···30302974 * Save the erase types that are supported in all regions and30312975 * can erase the entire flash memory.30322976 */30333033- map->uniform_erase_type &= erase_type;29772977+ uniform_erase_type &= erase_type;29782978+29792979+ /*29802980+ * regions_erase_type mask will indicate all the erase types29812981+ * supported in this configuration map.29822982+ */29832983+ regions_erase_type |= erase_type;3034298430352985 offset = (region[i].offset & ~SNOR_ERASE_FLAGS_MASK) +30362986 region[i].size;30372987 }29882988+29892989+ save_uniform_erase_type = map->uniform_erase_type;29902990+ map->uniform_erase_type = spi_nor_sort_erase_mask(map,29912991+ uniform_erase_type);29922992+29932993+ if (!regions_erase_type) {29942994+ /*29952995+ * Roll back to the previous uniform_erase_type mask, SMPT is29962996+ * broken.29972997+ */29982998+ map->uniform_erase_type = save_uniform_erase_type;29992999+ return -EINVAL;30003000+ }30013001+30023002+ /*30033003+ * BFPT advertises all the erase types supported by all the possible30043004+ * map configurations. Mask out the erase types that are not supported30053005+ * by the current map configuration.30063006+ */30073007+ for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++)30083008+ if (!(regions_erase_type & BIT(erase[i].idx)))30093009+ spi_nor_set_erase_type(&erase[i], 0, 0xFF);3038301030393011 spi_nor_region_mark_end(®ion[i - 1]);30403012···31043020 for (i = 0; i < smpt_header->length; i++)31053021 smpt[i] = le32_to_cpu(smpt[i]);3106302231073107- sector_map = spi_nor_get_map_in_use(nor, smpt);31083108- if (!sector_map) {31093109- ret = -EINVAL;30233023+ sector_map = spi_nor_get_map_in_use(nor, smpt, smpt_header->length);30243024+ if (IS_ERR(sector_map)) {30253025+ ret = PTR_ERR(sector_map);31103026 goto out;31113027 }31123028···32093125 if (err)32103126 goto exit;3211312732123212- /* Parse other parameter headers. */31283128+ /* Parse optional parameter tables. */32133129 for (i = 0; i < header.nph; i++) {32143130 param_header = ¶m_headers[i];32153131···32223138 break;32233139 }3224314032253225- if (err)32263226- goto exit;31413141+ if (err) {31423142+ dev_warn(dev, "Failed to parse optional parameter table: %04x\n",31433143+ SFDP_PARAM_HEADER_ID(param_header));31443144+ /*31453145+ * Let's not drop all information we extracted so far31463146+ * if optional table parsers fail. In case of failing,31473147+ * each optional parser is responsible to roll back to31483148+ * the previously known spi_nor data.31493149+ */31503150+ err = 0;31513151+ }32273152 }3228315332293154exit:···33433250 memcpy(&sfdp_params, params, sizeof(sfdp_params));33443251 memcpy(&prev_map, &nor->erase_map, sizeof(prev_map));3345325233463346- if (spi_nor_parse_sfdp(nor, &sfdp_params))32533253+ if (spi_nor_parse_sfdp(nor, &sfdp_params)) {32543254+ nor->addr_width = 0;33473255 /* restore previous erase map */33483256 memcpy(&nor->erase_map, &prev_map,33493257 sizeof(nor->erase_map));33503350- else32583258+ } else {33513259 memcpy(params, &sfdp_params, sizeof(*params));32603260+ }33523261 }3353326233543263 return 0;
+2-2
drivers/net/bonding/bond_main.c
···31123112 case NETDEV_CHANGE:31133113 /* For 802.3ad mode only:31143114 * Getting invalid Speed/Duplex values here will put slave31153115- * in weird state. So mark it as link-down for the time31153115+ * in weird state. So mark it as link-fail for the time31163116 * being and let link-monitoring (miimon) set it right when31173117 * correct speeds/duplex are available.31183118 */31193119 if (bond_update_speed_duplex(slave) &&31203120 BOND_MODE(bond) == BOND_MODE_8023AD)31213121- slave->link = BOND_LINK_DOWN;31213121+ slave->link = BOND_LINK_FAIL;3122312231233123 if (BOND_MODE(bond) == BOND_MODE_8023AD)31243124 bond_3ad_adapter_speed_duplex_changed(slave);
+35-13
drivers/net/can/dev.c
···477477}478478EXPORT_SYMBOL_GPL(can_put_echo_skb);479479480480+struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr)481481+{482482+ struct can_priv *priv = netdev_priv(dev);483483+ struct sk_buff *skb = priv->echo_skb[idx];484484+ struct canfd_frame *cf;485485+486486+ if (idx >= priv->echo_skb_max) {487487+ netdev_err(dev, "%s: BUG! Trying to access can_priv::echo_skb out of bounds (%u/max %u)\n",488488+ __func__, idx, priv->echo_skb_max);489489+ return NULL;490490+ }491491+492492+ if (!skb) {493493+ netdev_err(dev, "%s: BUG! Trying to echo non existing skb: can_priv::echo_skb[%u]\n",494494+ __func__, idx);495495+ return NULL;496496+ }497497+498498+ /* Using "struct canfd_frame::len" for the frame499499+ * length is supported on both CAN and CANFD frames.500500+ */501501+ cf = (struct canfd_frame *)skb->data;502502+ *len_ptr = cf->len;503503+ priv->echo_skb[idx] = NULL;504504+505505+ return skb;506506+}507507+480508/*481509 * Get the skb from the stack and loop it back locally482510 *···514486 */515487unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx)516488{517517- struct can_priv *priv = netdev_priv(dev);489489+ struct sk_buff *skb;490490+ u8 len;518491519519- BUG_ON(idx >= priv->echo_skb_max);492492+ skb = __can_get_echo_skb(dev, idx, &len);493493+ if (!skb)494494+ return 0;520495521521- if (priv->echo_skb[idx]) {522522- struct sk_buff *skb = priv->echo_skb[idx];523523- struct can_frame *cf = (struct can_frame *)skb->data;524524- u8 dlc = cf->can_dlc;496496+ netif_rx(skb);525497526526- netif_rx(priv->echo_skb[idx]);527527- priv->echo_skb[idx] = NULL;528528-529529- return dlc;530530- }531531-532532- return 0;498498+ return len;533499}534500EXPORT_SYMBOL_GPL(can_get_echo_skb);535501
···3535#include <linux/slab.h>3636#include <linux/usb.h>37373838-#include <linux/can.h>3939-#include <linux/can/dev.h>4040-#include <linux/can/error.h>4141-4238#define UCAN_DRIVER_NAME "ucan"4339#define UCAN_MAX_RX_URBS 84440/* the CAN controller needs a while to enable/disable the bus */···15711575/* disconnect the device */15721576static void ucan_disconnect(struct usb_interface *intf)15731577{15741574- struct usb_device *udev;15751578 struct ucan_priv *up = usb_get_intfdata(intf);15761576-15771577- udev = interface_to_usbdev(intf);1578157915791580 usb_set_intfdata(intf, NULL);15801581
+5-5
drivers/net/dsa/microchip/ksz_common.c
···11171117{11181118 int i;1119111911201120- mutex_init(&dev->reg_mutex);11211121- mutex_init(&dev->stats_mutex);11221122- mutex_init(&dev->alu_mutex);11231123- mutex_init(&dev->vlan_mutex);11241124-11251120 dev->ds->ops = &ksz_switch_ops;1126112111271122 for (i = 0; i < ARRAY_SIZE(ksz_switch_chips); i++) {···1200120512011206 if (dev->pdata)12021207 dev->chip_id = dev->pdata->chip_id;12081208+12091209+ mutex_init(&dev->reg_mutex);12101210+ mutex_init(&dev->stats_mutex);12111211+ mutex_init(&dev->alu_mutex);12121212+ mutex_init(&dev->vlan_mutex);1203121312041214 if (ksz_switch_detect(dev))12051215 return -EINVAL;
+2
drivers/net/dsa/mv88e6xxx/global1.c
···567567 if (err)568568 return err;569569570570+ /* Keep the histogram mode bits */571571+ val &= MV88E6XXX_G1_STATS_OP_HIST_RX_TX;570572 val |= MV88E6XXX_G1_STATS_OP_BUSY | MV88E6XXX_G1_STATS_OP_FLUSH_ALL;571573572574 err = mv88e6xxx_g1_write(chip, MV88E6XXX_G1_STATS_OP, val);
+11-12
drivers/net/ethernet/amazon/ena/ena_netdev.c
···18481848 rc = ena_com_dev_reset(adapter->ena_dev, adapter->reset_reason);18491849 if (rc)18501850 dev_err(&adapter->pdev->dev, "Device reset failed\n");18511851+ /* stop submitting admin commands on a device that was reset */18521852+ ena_com_set_admin_running_state(adapter->ena_dev, false);18511853 }1852185418531855 ena_destroy_all_io_queues(adapter);···19151913 struct ena_adapter *adapter = netdev_priv(netdev);1916191419171915 netif_dbg(adapter, ifdown, netdev, "%s\n", __func__);19161916+19171917+ if (!test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))19181918+ return 0;1918191919191920 if (test_bit(ENA_FLAG_DEV_UP, &adapter->flags))19201921 ena_down(adapter);···26182613 ena_down(adapter);2619261426202615 /* Stop the device from sending AENQ events (in case reset flag is set26212621- * and device is up, ena_close already reset the device26222622- * In case the reset flag is set and the device is up, ena_down()26232623- * already perform the reset, so it can be skipped.26162616+ * and device is up, ena_down() already reset the device.26242617 */26252618 if (!(test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags) && dev_up))26262619 ena_com_dev_reset(adapter->ena_dev, adapter->reset_reason);···26972694 ena_com_abort_admin_commands(ena_dev);26982695 ena_com_wait_for_abort_completion(ena_dev);26992696 ena_com_admin_destroy(ena_dev);27002700- ena_com_mmio_reg_read_request_destroy(ena_dev);27012697 ena_com_dev_reset(ena_dev, ENA_REGS_RESET_DRIVER_INVALID_STATE);26982698+ ena_com_mmio_reg_read_request_destroy(ena_dev);27022699err:27032700 clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);27042701 clear_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags);···34553452 ena_com_rss_destroy(ena_dev);34563453err_free_msix:34573454 ena_com_dev_reset(ena_dev, ENA_REGS_RESET_INIT_ERR);34553455+ /* stop submitting admin commands on a device that was reset */34563456+ ena_com_set_admin_running_state(ena_dev, false);34583457 ena_free_mgmnt_irq(adapter);34593458 ena_disable_msix(adapter);34603459err_worker_destroy:···3503349835043499 cancel_work_sync(&adapter->reset_task);3505350035063506- unregister_netdev(netdev);35073507-35083508- /* If the device is running then we want to make sure the device will be35093509- * reset to make sure no more events will be issued by the device.35103510- */35113511- if (test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))35123512- set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);35133513-35143501 rtnl_lock();35153502 ena_destroy_device(adapter, true);35163503 rtnl_unlock();35043504+35053505+ unregister_netdev(netdev);3517350635183507 free_netdev(netdev);35193508
···204204205205 int (*hw_get_fw_version)(struct aq_hw_s *self, u32 *fw_version);206206207207+ int (*hw_set_offload)(struct aq_hw_s *self,208208+ struct aq_nic_cfg_s *aq_nic_cfg);209209+210210+ int (*hw_set_fc)(struct aq_hw_s *self, u32 fc, u32 tc);207211};208212209213struct aq_fw_ops {···229225 int (*update_link_status)(struct aq_hw_s *self);230226231227 int (*update_stats)(struct aq_hw_s *self);228228+229229+ u32 (*get_flow_control)(struct aq_hw_s *self, u32 *fcmode);232230233231 int (*set_flow_control)(struct aq_hw_s *self);234232
+8-2
drivers/net/ethernet/aquantia/atlantic/aq_main.c
···9999 struct aq_nic_s *aq_nic = netdev_priv(ndev);100100 struct aq_nic_cfg_s *aq_cfg = aq_nic_get_cfg(aq_nic);101101 bool is_lro = false;102102+ int err = 0;102103103103- if (aq_cfg->hw_features & NETIF_F_LRO) {104104+ aq_cfg->features = features;105105+106106+ if (aq_cfg->aq_hw_caps->hw_features & NETIF_F_LRO) {104107 is_lro = features & NETIF_F_LRO;105108106109 if (aq_cfg->is_lro != is_lro) {···115112 }116113 }117114 }115115+ if ((aq_nic->ndev->features ^ features) & NETIF_F_RXCSUM)116116+ err = aq_nic->aq_hw_ops->hw_set_offload(aq_nic->aq_hw,117117+ aq_cfg);118118119119- return 0;119119+ return err;120120}121121122122static int aq_ndev_set_mac_address(struct net_device *ndev, void *addr)
+15-3
drivers/net/ethernet/aquantia/atlantic/aq_nic.c
···118118 }119119120120 cfg->link_speed_msk &= cfg->aq_hw_caps->link_speed_msk;121121- cfg->hw_features = cfg->aq_hw_caps->hw_features;121121+ cfg->features = cfg->aq_hw_caps->hw_features;122122}123123124124static int aq_nic_update_link_status(struct aq_nic_s *self)125125{126126 int err = self->aq_fw_ops->update_link_status(self->aq_hw);127127+ u32 fc = 0;127128128129 if (err)129130 return err;···134133 AQ_CFG_DRV_NAME, self->link_status.mbps,135134 self->aq_hw->aq_link_status.mbps);136135 aq_nic_update_interrupt_moderation_settings(self);136136+137137+ /* Driver has to update flow control settings on RX block138138+ * on any link event.139139+ * We should query FW whether it negotiated FC.140140+ */141141+ if (self->aq_fw_ops->get_flow_control)142142+ self->aq_fw_ops->get_flow_control(self->aq_hw, &fc);143143+ if (self->aq_hw_ops->hw_set_fc)144144+ self->aq_hw_ops->hw_set_fc(self->aq_hw, fc, 0);137145 }138146139147 self->link_status = self->aq_hw->aq_link_status;···600590 }601591 }602592603603- if (i > 0 && i < AQ_HW_MULTICAST_ADDRESS_MAX) {593593+ if (i > 0 && i <= AQ_HW_MULTICAST_ADDRESS_MAX) {604594 packet_filter |= IFF_MULTICAST;605595 self->mc_list.count = i;606596 self->aq_hw_ops->hw_multicast_list_set(self->aq_hw,···782772 ethtool_link_ksettings_add_link_mode(cmd, advertising,783773 Pause);784774785785- if (self->aq_nic_cfg.flow_control & AQ_NIC_FC_TX)775775+ /* Asym is when either RX or TX, but not both */776776+ if (!!(self->aq_nic_cfg.flow_control & AQ_NIC_FC_TX) ^777777+ !!(self->aq_nic_cfg.flow_control & AQ_NIC_FC_RX))786778 ethtool_link_ksettings_add_link_mode(cmd, advertising,787779 Asym_Pause);788780
···58635863 if (!is_t4(adapter->params.chip))58645864 cxgb4_ptp_init(adapter);5865586558665866- if (IS_ENABLED(CONFIG_THERMAL) &&58665866+ if (IS_REACHABLE(CONFIG_THERMAL) &&58675867 !is_t4(adapter->params.chip) && (adapter->flags & FW_OK))58685868 cxgb4_thermal_init(adapter);58695869···5932593259335933 if (!is_t4(adapter->params.chip))59345934 cxgb4_ptp_stop(adapter);59355935- if (IS_ENABLED(CONFIG_THERMAL))59355935+ if (IS_REACHABLE(CONFIG_THERMAL))59365936 cxgb4_thermal_remove(adapter);5937593759385938 /* If we allocated filters, free up state associated with any
···37603760 /* Hardware table is only clear when pf resets */37613761 if (!(handle->flags & HNAE3_SUPPORT_VF)) {37623762 ret = hns3_restore_vlan(netdev);37633763- return ret;37633763+ if (ret)37643764+ return ret;37643765 }3765376637663767 ret = hns3_restore_fd_rules(netdev);
···14131413 }1414141414151415 vsi->flags |= I40E_VSI_FLAG_FILTER_CHANGED;14161416- set_bit(__I40E_MACVLAN_SYNC_PENDING, vsi->state);14161416+ set_bit(__I40E_MACVLAN_SYNC_PENDING, vsi->back->state);14171417}1418141814191419/**···1224912249 NETIF_F_GSO_GRE |1225012250 NETIF_F_GSO_GRE_CSUM |1225112251 NETIF_F_GSO_PARTIAL |1225212252+ NETIF_F_GSO_IPXIP4 |1225312253+ NETIF_F_GSO_IPXIP6 |1225212254 NETIF_F_GSO_UDP_TUNNEL |1225312255 NETIF_F_GSO_UDP_TUNNEL_CSUM |1225412256 NETIF_F_SCTP_CRC |···1226812266 /* record features VLANs can make use of */1226912267 netdev->vlan_features |= hw_enc_features | NETIF_F_TSO_MANGLEID;12270122681227112271- if (!(pf->flags & I40E_FLAG_MFP_ENABLED))1227212272- netdev->hw_features |= NETIF_F_NTUPLE | NETIF_F_HW_TC;1227312273-1227412269 hw_features = hw_enc_features |1227512270 NETIF_F_HW_VLAN_CTAG_TX |1227612271 NETIF_F_HW_VLAN_CTAG_RX;1227212272+1227312273+ if (!(pf->flags & I40E_FLAG_MFP_ENABLED))1227412274+ hw_features |= NETIF_F_NTUPLE | NETIF_F_HW_TC;12277122751227812276 netdev->hw_features |= hw_features;1227912277
+7-7
drivers/net/ethernet/intel/i40e/i40e_xsk.c
···3333}34343535/**3636- * i40e_add_xsk_umem - Store an UMEM for a certain ring/qid3636+ * i40e_add_xsk_umem - Store a UMEM for a certain ring/qid3737 * @vsi: Current VSI3838 * @umem: UMEM to store3939 * @qid: Ring/qid to associate with the UMEM···5656}57575858/**5959- * i40e_remove_xsk_umem - Remove an UMEM for a certain ring/qid5959+ * i40e_remove_xsk_umem - Remove a UMEM for a certain ring/qid6060 * @vsi: Current VSI6161 * @qid: Ring/qid associated with the UMEM6262 **/···130130}131131132132/**133133- * i40e_xsk_umem_enable - Enable/associate an UMEM to a certain ring/qid133133+ * i40e_xsk_umem_enable - Enable/associate a UMEM to a certain ring/qid134134 * @vsi: Current VSI135135 * @umem: UMEM136136 * @qid: Rx ring to associate UMEM to···189189}190190191191/**192192- * i40e_xsk_umem_disable - Diassociate an UMEM from a certain ring/qid192192+ * i40e_xsk_umem_disable - Disassociate a UMEM from a certain ring/qid193193 * @vsi: Current VSI194194 * @qid: Rx ring to associate UMEM to195195 *···255255}256256257257/**258258- * i40e_xsk_umem_query - Queries a certain ring/qid for its UMEM258258+ * i40e_xsk_umem_setup - Enable/disassociate a UMEM to/from a ring/qid259259 * @vsi: Current VSI260260 * @umem: UMEM to enable/associate to a ring, or NULL to disable261261 * @qid: Rx ring to (dis)associate UMEM (from)to262262 *263263- * This function enables or disables an UMEM to a certain ring.263263+ * This function enables or disables a UMEM to a certain ring.264264 *265265 * Returns 0 on success, <0 on failure266266 **/···276276 * @rx_ring: Rx ring277277 * @xdp: xdp_buff used as input to the XDP program278278 *279279- * This function enables or disables an UMEM to a certain ring.279279+ * This function enables or disables a UMEM to a certain ring.280280 *281281 * Returns any of I40E_XDP_{PASS, CONSUMED, TX, REDIR}282282 **/
···811811 /* Attempt to disable FW logging before shutting down control queues */812812 ice_cfg_fw_log(hw, false);813813 ice_shutdown_all_ctrlq(hw);814814+815815+ /* Clear VSI contexts if not already cleared */816816+ ice_clear_all_vsi_ctx(hw);814817}815818816819/**
+6-1
drivers/net/ethernet/intel/ice/ice_ethtool.c
···15171517 }1518151815191519 if (!test_bit(__ICE_DOWN, pf->state)) {15201520- /* Give it a little more time to try to come back */15201520+ /* Give it a little more time to try to come back. If still15211521+ * down, restart autoneg link or reinitialize the interface.15221522+ */15211523 msleep(75);15221524 if (!test_bit(__ICE_DOWN, pf->state))15231525 return ice_nway_reset(netdev);15261526+15271527+ ice_down(vsi);15281528+ ice_up(vsi);15241529 }1525153015261531 return err;
···19971997 status = ice_update_vsi(&vsi->back->hw, vsi->idx, ctxt, NULL);19981998 if (status) {19991999 netdev_err(vsi->netdev, "%sabling VLAN pruning on VSI handle: %d, VSI HW ID: %d failed, err = %d, aq_err = %d\n",20002000- ena ? "Ena" : "Dis", vsi->idx, vsi->vsi_num, status,20002000+ ena ? "En" : "Dis", vsi->idx, vsi->vsi_num, status,20012001 vsi->back->hw.adminq.sq_last_status);20022002 goto err_out;20032003 }···24582458 * on this wq24592459 */24602460 if (vsi->netdev && !ice_is_reset_in_progress(pf->state)) {24612461+ ice_napi_del(vsi);24612462 unregister_netdev(vsi->netdev);24622463 free_netdev(vsi->netdev);24632464 vsi->netdev = NULL;
+48-38
drivers/net/ethernet/intel/ice/ice_main.c
···14651465 * ice_napi_del - Remove NAPI handler for the VSI14661466 * @vsi: VSI for which NAPI handler is to be removed14671467 */14681468-static void ice_napi_del(struct ice_vsi *vsi)14681468+void ice_napi_del(struct ice_vsi *vsi)14691469{14701470 int v_idx;14711471···16221622{16231623 struct ice_netdev_priv *np = netdev_priv(netdev);16241624 struct ice_vsi *vsi = np->vsi;16251625- int ret;1626162516271626 if (vid >= VLAN_N_VID) {16281627 netdev_err(netdev, "VLAN id requested %d is out of range %d\n",···1634163516351636 /* Enable VLAN pruning when VLAN 0 is added */16361637 if (unlikely(!vid)) {16371637- ret = ice_cfg_vlan_pruning(vsi, true);16381638+ int ret = ice_cfg_vlan_pruning(vsi, true);16391639+16381640 if (ret)16391641 return ret;16401642 }···16441644 * needed to continue allowing all untagged packets since VLAN prune16451645 * list is applied to all packets by the switch16461646 */16471647- ret = ice_vsi_add_vlan(vsi, vid);16481648-16491649- if (!ret)16501650- set_bit(vid, vsi->active_vlans);16511651-16521652- return ret;16471647+ return ice_vsi_add_vlan(vsi, vid);16531648}1654164916551650/**···16711676 status = ice_vsi_kill_vlan(vsi, vid);16721677 if (status)16731678 return status;16741674-16751675- clear_bit(vid, vsi->active_vlans);1676167916771680 /* Disable VLAN pruning when VLAN 0 is removed */16781681 if (unlikely(!vid))···19952002}1996200319972004/**20052005+ * ice_verify_cacheline_size - verify driver's assumption of 64 Byte cache lines20062006+ * @pf: pointer to the PF structure20072007+ *20082008+ * There is no error returned here because the driver should be able to handle20092009+ * 128 Byte cache lines, so we only print a warning in case issues are seen,20102010+ * specifically with Tx.20112011+ */20122012+static void ice_verify_cacheline_size(struct ice_pf *pf)20132013+{20142014+ if (rd32(&pf->hw, GLPCI_CNF2) & GLPCI_CNF2_CACHELINE_SIZE_M)20152015+ dev_warn(&pf->pdev->dev,20162016+ "%d Byte cache line assumption is invalid, driver may have Tx timeouts!\n",20172017+ ICE_CACHE_LINE_BYTES);20182018+}20192019+20202020+/**19982021 * ice_probe - Device initialization routine19992022 * @pdev: PCI device information struct20002023 * @ent: entry in ice_pci_tbl···21602151 /* since everything is good, start the service timer */21612152 mod_timer(&pf->serv_tmr, round_jiffies(jiffies + pf->serv_tmr_period));2162215321542154+ ice_verify_cacheline_size(pf);21552155+21632156 return 0;2164215721652158err_alloc_sw_unroll:···2192218121932182 if (!pf)21942183 return;21842184+21852185+ for (i = 0; i < ICE_MAX_RESET_WAIT; i++) {21862186+ if (!ice_is_reset_in_progress(pf->state))21872187+ break;21882188+ msleep(100);21892189+ }2195219021962191 set_bit(__ICE_DOWN, pf->state);21972192 ice_service_task_stop(pf);···25272510}2528251125292512/**25302530- * ice_restore_vlan - Reinstate VLANs when vsi/netdev comes back up25312531- * @vsi: the VSI being brought back up25322532- */25332533-static int ice_restore_vlan(struct ice_vsi *vsi)25342534-{25352535- int err;25362536- u16 vid;25372537-25382538- if (!vsi->netdev)25392539- return -EINVAL;25402540-25412541- err = ice_vsi_vlan_setup(vsi);25422542- if (err)25432543- return err;25442544-25452545- for_each_set_bit(vid, vsi->active_vlans, VLAN_N_VID) {25462546- err = ice_vlan_rx_add_vid(vsi->netdev, htons(ETH_P_8021Q), vid);25472547- if (err)25482548- break;25492549- }25502550-25512551- return err;25522552-}25532553-25542554-/**25552513 * ice_vsi_cfg - Setup the VSI25562514 * @vsi: the VSI being configured25572515 *···2538254625392547 if 
(vsi->netdev) {25402548 ice_set_rx_mode(vsi->netdev);25412541- err = ice_restore_vlan(vsi);25492549+25502550+ err = ice_vsi_vlan_setup(vsi);25512551+25422552 if (err)25432553 return err;25442554 }···32903296 struct device *dev = &pf->pdev->dev;32913297 struct ice_hw *hw = &pf->hw;32923298 enum ice_status ret;32933293- int err;32993299+ int err, i;3294330032953301 if (test_bit(__ICE_DOWN, pf->state))32963302 goto clear_recovery;···33643370 }3365337133663372 ice_reset_all_vfs(pf, true);33733373+33743374+ for (i = 0; i < pf->num_alloc_vsi; i++) {33753375+ bool link_up;33763376+33773377+ if (!pf->vsi[i] || pf->vsi[i]->type != ICE_VSI_PF)33783378+ continue;33793379+ ice_get_link_status(pf->vsi[i]->port_info, &link_up);33803380+ if (link_up) {33813381+ netif_carrier_on(pf->vsi[i]->netdev);33823382+ netif_tx_wake_all_queues(pf->vsi[i]->netdev);33833383+ } else {33843384+ netif_carrier_off(pf->vsi[i]->netdev);33853385+ netif_tx_stop_all_queues(pf->vsi[i]->netdev);33863386+ }33873387+ }33883388+33673389 /* if we get here, reset flow is successful */33683390 clear_bit(__ICE_RESET_FAILED, pf->state);33693391 return;
+12
drivers/net/ethernet/intel/ice/ice_switch.c
···348348}349349350350/**351351+ * ice_clear_all_vsi_ctx - clear all the VSI context entries352352+ * @hw: pointer to the hw struct353353+ */354354+void ice_clear_all_vsi_ctx(struct ice_hw *hw)355355+{356356+ u16 i;357357+358358+ for (i = 0; i < ICE_MAX_VSI; i++)359359+ ice_clear_vsi_ctx(hw, i);360360+}361361+362362+/**351363 * ice_add_vsi - add VSI context to the hardware and VSI handle list352364 * @hw: pointer to the hw struct353365 * @vsi_handle: unique VSI handle provided by drivers
···1520152015211521 /* update gso_segs and bytecount */15221522 first->gso_segs = skb_shinfo(skb)->gso_segs;15231523- first->bytecount = (first->gso_segs - 1) * off->header_len;15231523+ first->bytecount += (first->gso_segs - 1) * off->header_len;1524152415251525 cd_tso_len = skb->len - off->header_len;15261526 cd_mss = skb_shinfo(skb)->gso_size;···15561556 * magnitude greater than our largest possible GSO size.15571557 *15581558 * This would then be implemented as:15591559- * return (((size >> 12) * 85) >> 8) + 1;15591559+ * return (((size >> 12) * 85) >> 8) + ICE_DESCS_FOR_SKB_DATA_PTR;15601560 *15611561 * Since multiplication and division are commutative, we can reorder15621562 * operations into:15631563- * return ((size * 85) >> 20) + 1;15631563+ * return ((size * 85) >> 20) + ICE_DESCS_FOR_SKB_DATA_PTR;15641564 */15651565static unsigned int ice_txd_use_count(unsigned int size)15661566{15671567- return ((size * 85) >> 20) + 1;15671567+ return ((size * 85) >> 20) + ICE_DESCS_FOR_SKB_DATA_PTR;15681568}1569156915701570/**···17061706 * + 1 desc for context descriptor,17071707 * otherwise try next time17081708 */17091709- if (ice_maybe_stop_tx(tx_ring, count + 4 + 1)) {17091709+ if (ice_maybe_stop_tx(tx_ring, count + ICE_DESCS_PER_CACHE_LINE +17101710+ ICE_DESCS_FOR_CTX_DESC)) {17101711 tx_ring->tx_stats.tx_busy++;17111712 return NETDEV_TX_BUSY;17121713 }
+15-2
drivers/net/ethernet/intel/ice/ice_txrx.h
···2222#define ICE_RX_BUF_WRITE 16 /* Must be power of 2 */2323#define ICE_MAX_TXQ_PER_TXQG 12824242525-/* Tx Descriptors needed, worst case */2626-#define DESC_NEEDED (MAX_SKB_FRAGS + 4)2525+/* We are assuming that the cache line is always 64 Bytes here for ice.2626+ * In order to make sure that is a correct assumption there is a check in probe2727+ * to print a warning if the read from GLPCI_CNF2 tells us that the cache line2828+ * size is 128 bytes. We do it this way because we do not want to read the2929+ * GLPCI_CNF2 register or a variable containing the value on every pass through3030+ * the Tx path.3131+ */3232+#define ICE_CACHE_LINE_BYTES 643333+#define ICE_DESCS_PER_CACHE_LINE (ICE_CACHE_LINE_BYTES / \3434+ sizeof(struct ice_tx_desc))3535+#define ICE_DESCS_FOR_CTX_DESC 13636+#define ICE_DESCS_FOR_SKB_DATA_PTR 13737+/* Tx descriptors needed, worst case */3838+#define DESC_NEEDED (MAX_SKB_FRAGS + ICE_DESCS_FOR_CTX_DESC + \3939+ ICE_DESCS_PER_CACHE_LINE + ICE_DESCS_FOR_SKB_DATA_PTR)2740#define ICE_DESC_UNUSED(R) \2841 ((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \2942 (R)->next_to_clean - (R)->next_to_use - 1)
+1-1
drivers/net/ethernet/intel/ice/ice_type.h
···9292 u64 phy_type_low;9393 u16 max_frame_size;9494 u16 link_speed;9595+ u16 req_speeds;9596 u8 lse_ena; /* Link Status Event notification */9697 u8 link_info;9798 u8 an_info;9899 u8 ext_info;99100 u8 pacing;100100- u8 req_speeds;101101 /* Refer to #define from module_type[ICE_MODULE_TYPE_TOTAL_BYTE] of102102 * ice_aqc_get_phy_caps structure103103 */
+1-3
drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
···348348 struct ice_vsi_ctx ctxt = { 0 };349349 enum ice_status status;350350351351- ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_TAGGED |351351+ ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_UNTAGGED |352352 ICE_AQ_VSI_PVLAN_INSERT_PVID |353353 ICE_AQ_VSI_VLAN_EMOD_STR;354354 ctxt.info.pvid = cpu_to_le16(vid);···2171217121722172 if (!ice_vsi_add_vlan(vsi, vid)) {21732173 vf->num_vlan++;21742174- set_bit(vid, vsi->active_vlans);2175217421762175 /* Enable VLAN pruning when VLAN 0 is added */21772176 if (unlikely(!vid))···21892190 */21902191 if (!ice_vsi_kill_vlan(vsi, vid)) {21912192 vf->num_vlan--;21922192- clear_bit(vid, vsi->active_vlans);2193219321942194 /* Disable VLAN pruning when removing VLAN 0 */21952195 if (unlikely(!vid))
+1
drivers/net/ethernet/intel/igb/e1000_i210.c
···842842 nvm_word = E1000_INVM_DEFAULT_AL;843843 tmp_nvm = nvm_word | E1000_INVM_PLL_WO_VAL;844844 igb_write_phy_reg_82580(hw, I347AT4_PAGE_SELECT, E1000_PHY_PLL_FREQ_PAGE);845845+ phy_word = E1000_PHY_PLL_UNCONF;845846 for (i = 0; i < E1000_MAX_PLL_TRIES; i++) {846847 /* check current state directly from internal PHY */847848 igb_read_phy_reg_82580(hw, E1000_PHY_PLL_FREQ_REG, &phy_word);
+7-5
drivers/net/ethernet/intel/igb/igb_ptp.c
···5353 * 2^40 * 10^-9 / 60 = 18.3 minutes.5454 *5555 * SYSTIM is converted to real time using a timecounter. As5656- * timecounter_cyc2time() allows old timestamps, the timecounter5757- * needs to be updated at least once per half of the SYSTIM interval.5858- * Scheduling of delayed work is not very accurate, so we aim for 85959- * minutes to be sure the actual interval is shorter than 9.16 minutes.5656+ * timecounter_cyc2time() allows old timestamps, the timecounter needs5757+ * to be updated at least once per half of the SYSTIM interval.5858+ * Scheduling of delayed work is not very accurate, and also the NIC5959+ * clock can be adjusted to run up to 6% faster and the system clock6060+ * up to 10% slower, so we aim for 6 minutes to be sure the actual6161+ * interval in the NIC time is shorter than 9.16 minutes.6062 */61636262-#define IGB_SYSTIM_OVERFLOW_PERIOD (HZ * 60 * 8)6464+#define IGB_SYSTIM_OVERFLOW_PERIOD (HZ * 60 * 6)6365#define IGB_PTP_TX_TIMEOUT (HZ * 15)6466#define INCPERIOD_82576 BIT(E1000_TIMINCA_16NS_SHIFT)6567#define INCVALUE_82576_MASK GENMASK(E1000_TIMINCA_16NS_SHIFT - 1, 0)
···540540struct resource_allocator {541541 spinlock_t alloc_lock; /* protect quotas */542542 union {543543- int res_reserved;544544- int res_port_rsvd[MLX4_MAX_PORTS];543543+ unsigned int res_reserved;544544+ unsigned int res_port_rsvd[MLX4_MAX_PORTS];545545 };546546 union {547547 int res_free;
···60716071 "no error",60726072 "length error",60736073 "function disabled",60746074- "VF sent command to attnetion address",60746074+ "VF sent command to attention address",60756075 "host sent prod update command",60766076 "read of during interrupt register while in MIMD mode",60776077 "access to PXP BAR reserved address",
+35-9
drivers/net/ethernet/qlogic/qed/qed_dev.c
···185185 qed_iscsi_free(p_hwfn);186186 qed_ooo_free(p_hwfn);187187 }188188+189189+ if (QED_IS_RDMA_PERSONALITY(p_hwfn))190190+ qed_rdma_info_free(p_hwfn);191191+188192 qed_iov_free(p_hwfn);189193 qed_l2_free(p_hwfn);190194 qed_dmae_info_free(p_hwfn);···485481 struct qed_qm_info *qm_info = &p_hwfn->qm_info;486482487483 /* Can't have multiple flags set here */488488- if (bitmap_weight((unsigned long *)&pq_flags, sizeof(pq_flags)) > 1)484484+ if (bitmap_weight((unsigned long *)&pq_flags,485485+ sizeof(pq_flags) * BITS_PER_BYTE) > 1) {486486+ DP_ERR(p_hwfn, "requested multiple pq flags 0x%x\n", pq_flags);489487 goto err;488488+ }489489+490490+ if (!(qed_get_pq_flags(p_hwfn) & pq_flags)) {491491+ DP_ERR(p_hwfn, "pq flag 0x%x is not set\n", pq_flags);492492+ goto err;493493+ }490494491495 switch (pq_flags) {492496 case PQ_FLAGS_RLS:···518506 }519507520508err:521521- DP_ERR(p_hwfn, "BAD pq flags %d\n", pq_flags);522522- return NULL;509509+ return &qm_info->start_pq;523510}524511525512/* save pq index in qm info */···542531{543532 u8 max_tc = qed_init_qm_get_num_tcs(p_hwfn);544533534534+ if (max_tc == 0) {535535+ DP_ERR(p_hwfn, "pq with flag 0x%lx do not exist\n",536536+ PQ_FLAGS_MCOS);537537+ return p_hwfn->qm_info.start_pq;538538+ }539539+545540 if (tc > max_tc)546541 DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc);547542548548- return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + tc;543543+ return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + (tc % max_tc);549544}550545551546u16 qed_get_cm_pq_idx_vf(struct qed_hwfn *p_hwfn, u16 vf)552547{553548 u16 max_vf = qed_init_qm_get_num_vfs(p_hwfn);554549550550+ if (max_vf == 0) {551551+ DP_ERR(p_hwfn, "pq with flag 0x%lx do not exist\n",552552+ PQ_FLAGS_VFS);553553+ return p_hwfn->qm_info.start_pq;554554+ }555555+555556 if (vf > max_vf)556557 DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf);557558558558- return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + vf;559559+ return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + (vf % max_vf);559560}560561561562u16 qed_get_cm_pq_idx_ofld_mtc(struct qed_hwfn *p_hwfn, u8 tc)···11001077 if (rc)11011078 goto alloc_err;11021079 rc = qed_ooo_alloc(p_hwfn);10801080+ if (rc)10811081+ goto alloc_err;10821082+ }10831083+10841084+ if (QED_IS_RDMA_PERSONALITY(p_hwfn)) {10851085+ rc = qed_rdma_info_alloc(p_hwfn);11031086 if (rc)11041087 goto alloc_err;11051088 }···21312102 if (!p_ptt)21322103 return -EAGAIN;2133210421342134- /* If roce info is allocated it means roce is initialized and should21352135- * be enabled in searcher.21362136- */21372105 if (p_hwfn->p_rdma_info &&21382138- p_hwfn->b_rdma_enabled_in_prs)21062106+ p_hwfn->p_rdma_info->active && p_hwfn->b_rdma_enabled_in_prs)21392107 qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0x1);2140210821412109 /* Re-open incoming traffic */
···992992 */993993 do {994994 index = p_sb_attn->sb_index;995995+ /* finish reading index before the loop condition */996996+ dma_rmb();995997 attn_bits = le32_to_cpu(p_sb_attn->atten_bits);996998 attn_acks = le32_to_cpu(p_sb_attn->atten_ack);997999 } while (index != p_sb_attn->sb_index);
···167167 enum spq_mode comp_mode;168168 struct qed_spq_comp_cb comp_cb;169169 struct qed_spq_comp_done comp_done; /* SPQ_MODE_EBLOCK */170170+171171+ /* Posted entry for unlimited list entry in EBLOCK mode */172172+ struct qed_spq_entry *post_ent;170173};171174172175struct qed_eq {···398395 enum spq_mode comp_mode;399396 struct qed_spq_comp_cb *p_comp_data;400397};398398+399399+/**400400+ * @brief Returns a SPQ entry to the pool / frees the entry if allocated.401401+ * Should be called on in error flows after initializing the SPQ entry402402+ * and before posting it.403403+ *404404+ * @param p_hwfn405405+ * @param p_ent406406+ */407407+void qed_sp_destroy_request(struct qed_hwfn *p_hwfn,408408+ struct qed_spq_entry *p_ent);401409402410int qed_sp_init_request(struct qed_hwfn *p_hwfn,403411 struct qed_spq_entry **pp_ent,
+20-2
drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
···4747#include "qed_sp.h"4848#include "qed_sriov.h"49495050+void qed_sp_destroy_request(struct qed_hwfn *p_hwfn,5151+ struct qed_spq_entry *p_ent)5252+{5353+ /* qed_spq_get_entry() can either get an entry from the free_pool,5454+ * or, if no entries are left, allocate a new entry and add it to5555+ * the unlimited_pending list.5656+ */5757+ if (p_ent->queue == &p_hwfn->p_spq->unlimited_pending)5858+ kfree(p_ent);5959+ else6060+ qed_spq_return_entry(p_hwfn, p_ent);6161+}6262+5063int qed_sp_init_request(struct qed_hwfn *p_hwfn,5164 struct qed_spq_entry **pp_ent,5265 u8 cmd, u8 protocol, struct qed_sp_init_data *p_data)···93809481 case QED_SPQ_MODE_BLOCK:9582 if (!p_data->p_comp_data)9696- return -EINVAL;8383+ goto err;97849885 p_ent->comp_cb.cookie = p_data->p_comp_data->cookie;9986 break;···10895 default:10996 DP_NOTICE(p_hwfn, "Unknown SPQE completion mode %d\n",11097 p_ent->comp_mode);111111- return -EINVAL;9898+ goto err;11299 }113100114101 DP_VERBOSE(p_hwfn, QED_MSG_SPQ,···122109 memset(&p_ent->ramrod, 0, sizeof(p_ent->ramrod));123110124111 return 0;112112+113113+err:114114+ qed_sp_destroy_request(p_hwfn, p_ent);115115+116116+ return -EINVAL;125117}126118127119static enum tunnel_clss qed_tunn_clss_to_fw_clss(u8 type)
+35-34
drivers/net/ethernet/qlogic/qed/qed_spq.c
···142142143143 DP_INFO(p_hwfn, "Ramrod is stuck, requesting MCP drain\n");144144 rc = qed_mcp_drain(p_hwfn, p_ptt);145145+ qed_ptt_release(p_hwfn, p_ptt);145146 if (rc) {146147 DP_NOTICE(p_hwfn, "MCP drain failed\n");147148 goto err;···151150 /* Retry after drain */152151 rc = __qed_spq_block(p_hwfn, p_ent, p_fw_ret, true);153152 if (!rc)154154- goto out;153153+ return 0;155154156155 comp_done = (struct qed_spq_comp_done *)p_ent->comp_cb.cookie;157157- if (comp_done->done == 1)156156+ if (comp_done->done == 1) {158157 if (p_fw_ret)159158 *p_fw_ret = comp_done->fw_return_code;160160-out:161161- qed_ptt_release(p_hwfn, p_ptt);162162- return 0;163163-159159+ return 0;160160+ }164161err:165165- qed_ptt_release(p_hwfn, p_ptt);166162 DP_NOTICE(p_hwfn,167163 "Ramrod is stuck [CID %08x cmd %02x protocol %02x echo %04x]\n",168164 le32_to_cpu(p_ent->elem.hdr.cid),···683685 /* EBLOCK responsible to free the allocated p_ent */684686 if (p_ent->comp_mode != QED_SPQ_MODE_EBLOCK)685687 kfree(p_ent);688688+ else689689+ p_ent->post_ent = p_en2;686690687691 p_ent = p_en2;688692 }···767767 SPQ_HIGH_PRI_RESERVE_DEFAULT);768768}769769770770+/* Avoid overriding of SPQ entries when getting out-of-order completions, by771771+ * marking the completions in a bitmap and increasing the chain consumer only772772+ * for the first successive completed entries.773773+ */774774+static void qed_spq_comp_bmap_update(struct qed_hwfn *p_hwfn, __le16 echo)775775+{776776+ u16 pos = le16_to_cpu(echo) % SPQ_RING_SIZE;777777+ struct qed_spq *p_spq = p_hwfn->p_spq;778778+779779+ __set_bit(pos, p_spq->p_comp_bitmap);780780+ while (test_bit(p_spq->comp_bitmap_idx,781781+ p_spq->p_comp_bitmap)) {782782+ __clear_bit(p_spq->comp_bitmap_idx,783783+ p_spq->p_comp_bitmap);784784+ p_spq->comp_bitmap_idx++;785785+ qed_chain_return_produced(&p_spq->chain);786786+ }787787+}788788+770789int qed_spq_post(struct qed_hwfn *p_hwfn,771790 struct qed_spq_entry *p_ent, u8 *fw_return_code)772791{···843824 p_ent->queue == &p_spq->unlimited_pending);844825845826 if (p_ent->queue == &p_spq->unlimited_pending) {846846- /* This is an allocated p_ent which does not need to847847- * return to pool.848848- */827827+ struct qed_spq_entry *p_post_ent = p_ent->post_ent;828828+849829 kfree(p_ent);850850- return rc;830830+831831+ /* Return the entry which was actually posted */832832+ p_ent = p_post_ent;851833 }852834853835 if (rc)···862842spq_post_fail2:863843 spin_lock_bh(&p_spq->lock);864844 list_del(&p_ent->list);865865- qed_chain_return_produced(&p_spq->chain);845845+ qed_spq_comp_bmap_update(p_hwfn, p_ent->elem.hdr.echo);866846867847spq_post_fail:868848 /* return to the free pool */···894874 spin_lock_bh(&p_spq->lock);895875 list_for_each_entry_safe(p_ent, tmp, &p_spq->completion_pending, list) {896876 if (p_ent->elem.hdr.echo == echo) {897897- u16 pos = le16_to_cpu(echo) % SPQ_RING_SIZE;898898-899877 list_del(&p_ent->list);900900-901901- /* Avoid overriding of SPQ entries when getting902902- * out-of-order completions, by marking the completions903903- * in a bitmap and increasing the chain consumer only904904- * for the first successive completed entries.905905- */906906- __set_bit(pos, p_spq->p_comp_bitmap);907907-908908- while (test_bit(p_spq->comp_bitmap_idx,909909- p_spq->p_comp_bitmap)) {910910- __clear_bit(p_spq->comp_bitmap_idx,911911- p_spq->p_comp_bitmap);912912- p_spq->comp_bitmap_idx++;913913- qed_chain_return_produced(&p_spq->chain);914914- }915915-878878+ qed_spq_comp_bmap_update(p_hwfn, echo);916879 p_spq->comp_count++;917880 found = 
p_ent;918881 break;···934931 QED_MSG_SPQ,935932 "Got a completion without a callback function\n");936933937937- if ((found->comp_mode != QED_SPQ_MODE_EBLOCK) ||938938- (found->queue == &p_spq->unlimited_pending))934934+ if (found->comp_mode != QED_SPQ_MODE_EBLOCK)939935 /* EBLOCK is responsible for returning its own entry into the940940- * free list, unless it originally added the entry into the941941- * unlimited pending list.936936+ * free list.942937 */943938 qed_spq_return_entry(p_hwfn, found);944939
···262262 int mode, int end)263263{264264 p->des0 |= cpu_to_le32(RDES0_OWN);265265- p->des1 |= cpu_to_le32((BUF_SIZE_8KiB - 1) & ERDES1_BUFFER1_SIZE_MASK);265265+ p->des1 |= cpu_to_le32(BUF_SIZE_8KiB & ERDES1_BUFFER1_SIZE_MASK);266266267267 if (mode == STMMAC_CHAIN_MODE)268268 ehn_desc_rx_set_on_chain(p);
+1-1
drivers/net/ethernet/stmicro/stmmac/ring_mode.c
···140140static int set_16kib_bfsize(int mtu)141141{142142 int ret = 0;143143- if (unlikely(mtu >= BUF_SIZE_8KiB))143143+ if (unlikely(mtu > BUF_SIZE_8KiB))144144 ret = BUF_SIZE_16KiB;145145 return ret;146146}
···11-// SPDX-License-Identifier: GPL-2.011+// SPDX-License-Identifier: GPL-2.0+22/* FDDI network adapter driver for DEC FDDIcontroller 700/700-C devices.33 *44 * Copyright (c) 2018 Maciej W. Rozycki···5656#define DRV_VERSION "v.1.1.4"5757#define DRV_RELDATE "Oct 6 2018"58585959-static char version[] =5959+static const char version[] =6060 DRV_NAME ": " DRV_VERSION " " DRV_RELDATE " Maciej W. Rozycki\n";61616262MODULE_AUTHOR("Maciej W. Rozycki <macro@linux-mips.org>");···784784static void fza_tx_smt(struct net_device *dev)785785{786786 struct fza_private *fp = netdev_priv(dev);787787- struct fza_buffer_tx __iomem *smt_tx_ptr, *skb_data_ptr;787787+ struct fza_buffer_tx __iomem *smt_tx_ptr;788788 int i, len;789789 u32 own;790790···799799800800 if (!netif_queue_stopped(dev)) {801801 if (dev_nit_active(dev)) {802802+ struct fza_buffer_tx *skb_data_ptr;802803 struct sk_buff *skb;803804804805 /* Length must be a multiple of 4 as only word
+2-1
drivers/net/fddi/defza.h
···11-/* SPDX-License-Identifier: GPL-2.0 */11+/* SPDX-License-Identifier: GPL-2.0+ */22/* FDDI network adapter driver for DEC FDDIcontroller 700/700-C devices.33 *44 * Copyright (c) 2018 Maciej W. Rozycki···235235#define FZA_RING_CMD 0x200400 /* command ring address */236236#define FZA_RING_CMD_SIZE 0x40 /* command descriptor ring237237 * size238238+ */238239/* Command constants. */239240#define FZA_RING_CMD_MASK 0x7fffffff240241#define FZA_RING_CMD_NOP 0x00000000 /* nop */
+16-2
drivers/net/phy/broadcom.c
···9292 return 0;9393}94949595-static int bcm5481x_config(struct phy_device *phydev)9595+static int bcm54xx_config_clock_delay(struct phy_device *phydev)9696{9797 int rc, val;9898···429429 ret = genphy_config_aneg(phydev);430430431431 /* Then we can set up the delay. */432432- bcm5481x_config(phydev);432432+ bcm54xx_config_clock_delay(phydev);433433434434 if (of_property_read_bool(np, "enet-phy-lane-swap")) {435435 /* Lane Swap - Undocumented register...magic! */···438438 if (ret < 0)439439 return ret;440440 }441441+442442+ return ret;443443+}444444+445445+static int bcm54616s_config_aneg(struct phy_device *phydev)446446+{447447+ int ret;448448+449449+ /* Aneg firstly. */450450+ ret = genphy_config_aneg(phydev);451451+452452+ /* Then we can set up the delay. */453453+ bcm54xx_config_clock_delay(phydev);441454442455 return ret;443456}···649636 .features = PHY_GBIT_FEATURES,650637 .flags = PHY_HAS_INTERRUPT,651638 .config_init = bcm54xx_config_init,639639+ .config_aneg = bcm54616s_config_aneg,652640 .ack_interrupt = bcm_phy_ack_intr,653641 .config_intr = bcm_phy_config_intr,654642}, {
+5-5
drivers/net/phy/mdio-gpio.c
···6363 * assume the pin serves as pull-up. If direction is6464 * output, the default value is high.6565 */6666- gpiod_set_value(bitbang->mdo, 1);6666+ gpiod_set_value_cansleep(bitbang->mdo, 1);6767 return;6868 }6969···7878 struct mdio_gpio_info *bitbang =7979 container_of(ctrl, struct mdio_gpio_info, ctrl);80808181- return gpiod_get_value(bitbang->mdio);8181+ return gpiod_get_value_cansleep(bitbang->mdio);8282}83838484static void mdio_set(struct mdiobb_ctrl *ctrl, int what)···8787 container_of(ctrl, struct mdio_gpio_info, ctrl);88888989 if (bitbang->mdo)9090- gpiod_set_value(bitbang->mdo, what);9090+ gpiod_set_value_cansleep(bitbang->mdo, what);9191 else9292- gpiod_set_value(bitbang->mdio, what);9292+ gpiod_set_value_cansleep(bitbang->mdio, what);9393}94949595static void mdc_set(struct mdiobb_ctrl *ctrl, int what)···9797 struct mdio_gpio_info *bitbang =9898 container_of(ctrl, struct mdio_gpio_info, ctrl);9999100100- gpiod_set_value(bitbang->mdc, what);100100+ gpiod_set_value_cansleep(bitbang->mdc, what);101101}102102103103static const struct mdiobb_ops mdio_gpio_ops = {
···21972197 new_driver->mdiodrv.driver.remove = phy_remove;21982198 new_driver->mdiodrv.driver.owner = owner;2199219922002200+ /* The following works around an issue where the PHY driver doesn't bind22012201+ * to the device, resulting in the genphy driver being used instead of22022202+ * the dedicated driver. The root cause of the issue isn't known yet22032203+ * and seems to be in the base driver core. Once this is fixed we may22042204+ * remove this workaround.22052205+ */22062206+ new_driver->mdiodrv.driver.probe_type = PROBE_FORCE_SYNCHRONOUS;22072207+22002208 retval = driver_register(&new_driver->mdiodrv.driver);22012209 if (retval) {22022210 pr_err("%s: Error %d in registering driver\n",
···216216 * it just report sending a packet to the target217217 * (without actual packet transfer).218218 */219219- dev_kfree_skb_any(skb);220219 ndev->stats.tx_packets++;221220 ndev->stats.tx_bytes += skb->len;221221+ dev_kfree_skb_any(skb);222222 }223223 }224224
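The reordering above is a use-after-free fix: skb->len must be read for the byte counter before dev_kfree_skb_any() hands the buffer back. The same rule in a self-contained userspace form (struct and names are illustrative):

#include <stdio.h>
#include <stdlib.h>

struct pkt {
	size_t len;
};

static size_t tx_packets, tx_bytes;

static void tx_complete(struct pkt *p)
{
	/* Account first, free last: reading p->len after free(p)
	 * would be exactly the use-after-free the hunk removes. */
	tx_packets++;
	tx_bytes += p->len;
	free(p);
}

int main(void)
{
	struct pkt *p = malloc(sizeof(*p));

	if (!p)
		return 1;
	p->len = 1500;
	tx_complete(p);
	printf("packets=%zu bytes=%zu\n", tx_packets, tx_bytes);
	return 0;
}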
···66 * GPL LICENSE SUMMARY77 *88 * Copyright(c) 2017 Intel Deutschland GmbH99+ * Copyright(c) 2018 Intel Corporation910 *1011 * This program is free software; you can redistribute it and/or modify1112 * it under the terms of version 2 of the GNU General Public License as···2726 * BSD LICENSE2827 *2928 * Copyright(c) 2017 Intel Deutschland GmbH2929+ * Copyright(c) 2018 Intel Corporation3030 * All rights reserved.3131 *3232 * Redistribution and use in source and binary forms, with or without···8381#define ACPI_WRDS_WIFI_DATA_SIZE (ACPI_SAR_TABLE_SIZE + 2)8482#define ACPI_EWRD_WIFI_DATA_SIZE ((ACPI_SAR_PROFILE_NUM - 1) * \8583 ACPI_SAR_TABLE_SIZE + 3)8686-#define ACPI_WGDS_WIFI_DATA_SIZE 188484+#define ACPI_WGDS_WIFI_DATA_SIZE 198785#define ACPI_WRDD_WIFI_DATA_SIZE 28886#define ACPI_SPLC_WIFI_DATA_SIZE 28987
···893893 IWL_DEBUG_RADIO(mvm, "Sending GEO_TX_POWER_LIMIT\n");894894895895 BUILD_BUG_ON(ACPI_NUM_GEO_PROFILES * ACPI_WGDS_NUM_BANDS *896896- ACPI_WGDS_TABLE_SIZE != ACPI_WGDS_WIFI_DATA_SIZE);896896+ ACPI_WGDS_TABLE_SIZE + 1 != ACPI_WGDS_WIFI_DATA_SIZE);897897898898 BUILD_BUG_ON(ACPI_NUM_GEO_PROFILES > IWL_NUM_GEO_PROFILES);899899···928928 return -ENOENT;929929}930930931931+static int iwl_mvm_sar_get_wgds_table(struct iwl_mvm *mvm)932932+{933933+ return -ENOENT;934934+}935935+931936static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm)932937{933938 return 0;···959954 IWL_DEBUG_RADIO(mvm,960955 "WRDS SAR BIOS table invalid or unavailable. (%d)\n",961956 ret);962962- /* if not available, don't fail and don't bother with EWRD */963963- return 0;957957+ /*958958+ * If not available, don't fail and don't bother with EWRD.959959+ * Return 1 to tell that we can't use WGDS either.960960+ */961961+ return 1;964962 }965963966964 ret = iwl_mvm_sar_get_ewrd_table(mvm);···976968 /* choose profile 1 (WRDS) as default for both chains */977969 ret = iwl_mvm_sar_select_profile(mvm, 1, 1);978970979979- /* if we don't have profile 0 from BIOS, just skip it */971971+ /*972972+ * If we don't have profile 0 from BIOS, just skip it. This973973+ * means that SAR Geo will not be enabled either, even if we974974+ * have other valid profiles.975975+ */980976 if (ret == -ENOENT)981981- return 0;977977+ return 1;982978983979 return ret;984980}···11801168 iwl_mvm_unref(mvm, IWL_MVM_REF_UCODE_DOWN);1181116911821170 ret = iwl_mvm_sar_init(mvm);11831183- if (ret)11841184- goto error;11711171+ if (ret == 0) {11721172+ ret = iwl_mvm_sar_geo_init(mvm);11731173+ } else if (ret > 0 && !iwl_mvm_sar_get_wgds_table(mvm)) {11741174+ /*11751175+ * If basic SAR is not available, we check for WGDS,11761176+ * which should *not* be available either. If it is11771177+ * available, issue an error, because we can't use SAR11781178+ * Geo without basic SAR.11791179+ */11801180+ IWL_ERR(mvm, "BIOS contains WGDS but no WRDS\n");11811181+ }1185118211861186- ret = iwl_mvm_sar_geo_init(mvm);11871187- if (ret)11831183+ if (ret < 0)11881184 goto error;1189118511901186 iwl_mvm_leds_sync(mvm);
+6-6
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
···301301 goto out;302302 }303303304304- if (changed)305305- *changed = (resp->status == MCC_RESP_NEW_CHAN_PROFILE);304304+ if (changed) {305305+ u32 status = le32_to_cpu(resp->status);306306+307307+ *changed = (status == MCC_RESP_NEW_CHAN_PROFILE ||308308+ status == MCC_RESP_ILLEGAL);309309+ }306310307311 regd = iwl_parse_nvm_mcc_info(mvm->trans->dev, mvm->cfg,308312 __le32_to_cpu(resp->n_channels),···44474443 sinfo->signal_avg = mvmsta->avg_energy;44484444 sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG);44494445 }44504450-44514451- if (!fw_has_capa(&mvm->fw->ucode_capa,44524452- IWL_UCODE_TLV_CAPA_RADIO_BEACON_STATS))44534453- return;4454444644554447 /* if beacon filtering isn't on mac80211 does it anyway */44564448 if (!(vif->driver_flags & IEEE80211_VIF_BEACON_FILTER))
···11config MT76_CORE22 tristate3344+config MT76_LEDS55+ bool66+ depends on MT76_CORE77+ depends on LEDS_CLASS=y || MT76_CORE=LEDS_CLASS88+ default y99+410config MT76_USB511 tristate612 depends on MT76_CORE
+5-3
drivers/net/wireless/mediatek/mt76/mac80211.c
···345345 mt76_check_sband(dev, NL80211_BAND_2GHZ);346346 mt76_check_sband(dev, NL80211_BAND_5GHZ);347347348348- ret = mt76_led_init(dev);349349- if (ret)350350- return ret;348348+ if (IS_ENABLED(CONFIG_MT76_LEDS)) {349349+ ret = mt76_led_init(dev);350350+ if (ret)351351+ return ret;352352+ }351353352354 return ieee80211_register_hw(hw);353355}
···272272 if (val != ~0 && val > 0xffff)273273 return -EINVAL;274274275275- mutex_lock(&dev->mutex);275275+ mutex_lock(&dev->mt76.mutex);276276 mt76x2_mac_set_tx_protection(dev, val);277277- mutex_unlock(&dev->mutex);277277+ mutex_unlock(&dev->mt76.mutex);278278279279 return 0;280280}
+11-6
drivers/net/wireless/ti/wlcore/sdio.c
···285285 struct resource res[2];286286 mmc_pm_flag_t mmcflags;287287 int ret = -ENOMEM;288288- int irq, wakeirq;288288+ int irq, wakeirq, num_irqs;289289 const char *chip_family;290290291291 /* We are only able to handle the wlan function */···353353 irqd_get_trigger_type(irq_get_irq_data(irq));354354 res[0].name = "irq";355355356356- res[1].start = wakeirq;357357- res[1].flags = IORESOURCE_IRQ |358358- irqd_get_trigger_type(irq_get_irq_data(wakeirq));359359- res[1].name = "wakeirq";360356361361- ret = platform_device_add_resources(glue->core, res, ARRAY_SIZE(res));357357+ if (wakeirq > 0) {358358+ res[1].start = wakeirq;359359+ res[1].flags = IORESOURCE_IRQ |360360+ irqd_get_trigger_type(irq_get_irq_data(wakeirq));361361+ res[1].name = "wakeirq";362362+ num_irqs = 2;363363+ } else {364364+ num_irqs = 1;365365+ }366366+ ret = platform_device_add_resources(glue->core, res, num_irqs);362367 if (ret) {363368 dev_err(glue->dev, "can't add resources\n");364369 goto out_dev_put;
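The wlcore change makes the second IRQ resource optional: the entry is filled in only when a wake IRQ actually exists, and the real count is passed to platform_device_add_resources(). The pattern, reduced to plain C (types and names are illustrative):

#include <stdio.h>

struct res {
	int start;
	const char *name;
};

/* Fill one mandatory IRQ and, only when present, an optional wake
 * IRQ; report how many entries are valid instead of a fixed size. */
static int fill_irq_resources(struct res res[2], int irq, int wakeirq)
{
	int n = 0;

	res[n].start = irq;
	res[n].name = "irq";
	n++;

	if (wakeirq > 0) {	/* absent on many boards */
		res[n].start = wakeirq;
		res[n].name = "wakeirq";
		n++;
	}
	return n;
}

int main(void)
{
	struct res r[2];

	printf("with wakeirq: %d entries\n", fill_irq_resources(r, 17, 42));
	printf("without:      %d entries\n", fill_irq_resources(r, 17, -1));
	return 0;
}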
+8-4
drivers/nvme/host/core.c
···15191519 if (ns->ndev)15201520 nvme_nvm_update_nvm_info(ns);15211521#ifdef CONFIG_NVME_MULTIPATH15221522- if (ns->head->disk)15221522+ if (ns->head->disk) {15231523 nvme_update_disk_info(ns->head->disk, ns, id);15241524+ blk_queue_stack_limits(ns->head->disk->queue, ns->queue);15251525+ }15241526#endif15251527}15261528···33143312 struct nvme_ns *ns, *next;33153313 LIST_HEAD(ns_list);3316331433153315+ /* prevent racing with ns scanning */33163316+ flush_work(&ctrl->scan_work);33173317+33173318 /*33183319 * The dead states indicates the controller was not gracefully33193320 * disconnected. In that case, we won't be able to flush any data while···34793474 nvme_mpath_stop(ctrl);34803475 nvme_stop_keep_alive(ctrl);34813476 flush_work(&ctrl->async_event_work);34823482- flush_work(&ctrl->scan_work);34833477 cancel_work_sync(&ctrl->fw_act_work);34843478 if (ctrl->ops->stop_ctrl)34853479 ctrl->ops->stop_ctrl(ctrl);···3587358335883584 return 0;35893585out_free_name:35903590- kfree_const(dev->kobj.name);35863586+ kfree_const(ctrl->device->kobj.name);35913587out_release_instance:35923588 ida_simple_remove(&nvme_instance_ida, ctrl->instance);35933589out:···36093605 down_read(&ctrl->namespaces_rwsem);3610360636113607 /* Forcibly unquiesce queues to avoid blocking dispatch */36123612- if (ctrl->admin_q)36083608+ if (ctrl->admin_q && !blk_queue_dying(ctrl->admin_q))36133609 blk_mq_unquiesce_queue(ctrl->admin_q);3614361036153611 list_for_each_entry(ns, &ctrl->namespaces, list)
+65-12
drivers/nvme/host/fc.c
···152152153153 bool ioq_live;154154 bool assoc_active;155155+ atomic_t err_work_active;155156 u64 association_id;156157157158 struct list_head ctrl_list; /* rport->ctrl_list */···161160 struct blk_mq_tag_set tag_set;162161163162 struct delayed_work connect_work;163163+ struct work_struct err_work;164164165165 struct kref ref;166166 u32 flags;···15331531 struct nvme_fc_fcp_op *aen_op = ctrl->aen_ops;15341532 int i;1535153315341534+ /* ensure we've initialized the ops once */15351535+ if (!(aen_op->flags & FCOP_FLAGS_AEN))15361536+ return;15371537+15361538 for (i = 0; i < NVME_NR_AEN_COMMANDS; i++, aen_op++)15371539 __nvme_fc_abort_op(ctrl, aen_op);15381540}···17521746 struct nvme_fc_queue *queue = &ctrl->queues[queue_idx];17531747 int res;1754174817551755- nvme_req(rq)->ctrl = &ctrl->ctrl;17561749 res = __nvme_fc_init_request(ctrl, queue, &op->op, rq, queue->rqcnt++);17571750 if (res)17581751 return res;17591752 op->op.fcp_req.first_sgl = &op->sgl[0];17601753 op->op.fcp_req.private = &op->priv[0];17541754+ nvme_req(rq)->ctrl = &ctrl->ctrl;17611755 return res;17621756}17631757···20552049static void20562050nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)20572051{20582058- /* only proceed if in LIVE state - e.g. on first error */20522052+ int active;20532053+20542054+ /*20552055+ * if an error (io timeout, etc) while (re)connecting,20562056+ * it's an error on creating the new association.20572057+ * Start the error recovery thread if it hasn't already20582058+ * been started. It is expected there could be multiple20592059+ * ios hitting this path before things are cleaned up.20602060+ */20612061+ if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) {20622062+ active = atomic_xchg(&ctrl->err_work_active, 1);20632063+ if (!active && !schedule_work(&ctrl->err_work)) {20642064+ atomic_set(&ctrl->err_work_active, 0);20652065+ WARN_ON(1);20662066+ }20672067+ return;20682068+ }20692069+20702070+ /* Otherwise, only proceed if in LIVE state - e.g. on first error */20592071 if (ctrl->ctrl.state != NVME_CTRL_LIVE)20602072 return;20612073···28382814{28392815 struct nvme_fc_ctrl *ctrl = to_fc_ctrl(nctrl);2840281628172817+ cancel_work_sync(&ctrl->err_work);28412818 cancel_delayed_work_sync(&ctrl->connect_work);28422819 /*28432820 * kill the association on the link side. 
this will block···28912866}2892286728932868static void28692869+__nvme_fc_terminate_io(struct nvme_fc_ctrl *ctrl)28702870+{28712871+ nvme_stop_keep_alive(&ctrl->ctrl);28722872+28732873+ /* will block while waiting for io to terminate */28742874+ nvme_fc_delete_association(ctrl);28752875+28762876+ if (ctrl->ctrl.state != NVME_CTRL_CONNECTING &&28772877+ !nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING))28782878+ dev_err(ctrl->ctrl.device,28792879+ "NVME-FC{%d}: error_recovery: Couldn't change state "28802880+ "to CONNECTING\n", ctrl->cnum);28812881+}28822882+28832883+static void28942884nvme_fc_reset_ctrl_work(struct work_struct *work)28952885{28962886 struct nvme_fc_ctrl *ctrl =28972887 container_of(work, struct nvme_fc_ctrl, ctrl.reset_work);28982888 int ret;2899288928902890+ __nvme_fc_terminate_io(ctrl);28912891+29002892 nvme_stop_ctrl(&ctrl->ctrl);29012901-29022902- /* will block while waiting for io to terminate */29032903- nvme_fc_delete_association(ctrl);29042904-29052905- if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {29062906- dev_err(ctrl->ctrl.device,29072907- "NVME-FC{%d}: error_recovery: Couldn't change state "29082908- "to CONNECTING\n", ctrl->cnum);29092909- return;29102910- }2911289329122894 if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE)29132895 ret = nvme_fc_create_association(ctrl);···29272895 dev_info(ctrl->ctrl.device,29282896 "NVME-FC{%d}: controller reset complete\n",29292897 ctrl->cnum);28982898+}28992899+29002900+static void29012901+nvme_fc_connect_err_work(struct work_struct *work)29022902+{29032903+ struct nvme_fc_ctrl *ctrl =29042904+ container_of(work, struct nvme_fc_ctrl, err_work);29052905+29062906+ __nvme_fc_terminate_io(ctrl);29072907+29082908+ atomic_set(&ctrl->err_work_active, 0);29092909+29102910+ /*29112911+ * Rescheduling the connection after recovering29122912+ * from the io error is left to the reconnect work29132913+ * item, which is what should have stalled waiting on29142914+ * the io that had the error that scheduled this work.29152915+ */29302916}2931291729322918static const struct nvme_ctrl_ops nvme_fc_ctrl_ops = {···30573007 ctrl->cnum = idx;30583008 ctrl->ioq_live = false;30593009 ctrl->assoc_active = false;30103010+ atomic_set(&ctrl->err_work_active, 0);30603011 init_waitqueue_head(&ctrl->ioabort_wait);3061301230623013 get_device(ctrl->dev);···3065301430663015 INIT_WORK(&ctrl->ctrl.reset_work, nvme_fc_reset_ctrl_work);30673016 INIT_DELAYED_WORK(&ctrl->connect_work, nvme_fc_connect_ctrl_work);30173017+ INIT_WORK(&ctrl->err_work, nvme_fc_connect_err_work);30683018 spin_lock_init(&ctrl->lock);3069301930703020 /* io queue count */···31553103fail_ctrl:31563104 nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_DELETING);31573105 cancel_work_sync(&ctrl->ctrl.reset_work);31063106+ cancel_work_sync(&ctrl->err_work);31583107 cancel_delayed_work_sync(&ctrl->connect_work);3159310831603109 ctrl->ctrl.opts = NULL;
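During CONNECTING many I/Os can fail back-to-back, so the new path lets only the first failure schedule err_work: atomic_xchg() on err_work_active returns the old value, and every caller that sees 1 backs off until the work clears the flag again. The same dispatch discipline in C11 atomics (a sketch of the pattern, not the kernel API):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int err_work_active;	/* 0 = idle, 1 = recovery pending */

/* Returns 1 only for the caller that actually won the flag. */
static int try_schedule_recovery(void)
{
	return atomic_exchange(&err_work_active, 1) == 0;
}

static void recovery_work_done(void)
{
	atomic_store(&err_work_active, 0);
}

int main(void)
{
	printf("first error:  %s\n", try_schedule_recovery() ? "scheduled" : "skipped");
	printf("second error: %s\n", try_schedule_recovery() ? "scheduled" : "skipped");
	recovery_work_done();	/* err_work finished, flag released */
	printf("after done:   %s\n", try_schedule_recovery() ? "scheduled" : "skipped");
	return 0;
}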
+1
drivers/nvme/host/multipath.c
···285285 blk_queue_flag_set(QUEUE_FLAG_NONROT, q);286286 /* set to a default value for 512 until disk is validated */287287 blk_queue_logical_block_size(q, 512);288288+ blk_set_stacking_limits(&q->limits);288289289290 /* we need to propagate up the VMC settings */290291 if (ctrl->vwc & NVME_CTRL_VWC_PRESENT)
+3
drivers/nvme/host/nvme.h
···531531static inline int nvme_mpath_init(struct nvme_ctrl *ctrl,532532 struct nvme_id_ctrl *id)533533{534534+ if (ctrl->subsys->cmic & (1 << 3))535535+ dev_warn(ctrl->device,536536+"Please enable CONFIG_NVME_MULTIPATH for full support of multi-port devices.\n");534537 return 0;535538}536539static inline void nvme_mpath_uninit(struct nvme_ctrl *ctrl)
···420420 struct pci_dev *p2p_dev;421421 int ret;422422423423- if (!ctrl->p2p_client)423423+ if (!ctrl->p2p_client || !ns->use_p2pmem)424424 return;425425426426 if (ns->p2p_dev) {
+4-15
drivers/nvme/target/rdma.c
···122122 int inline_page_count;123123};124124125125-static struct workqueue_struct *nvmet_rdma_delete_wq;126125static bool nvmet_rdma_use_srq;127126module_param_named(use_srq, nvmet_rdma_use_srq, bool, 0444);128127MODULE_PARM_DESC(use_srq, "Use shared receive queue.");···1273127412741275 if (queue->host_qid == 0) {12751276 /* Let inflight controller teardown complete */12761276- flush_workqueue(nvmet_rdma_delete_wq);12771277+ flush_scheduled_work();12771278 }1278127912791280 ret = nvmet_rdma_cm_accept(cm_id, queue, &event->param.conn);12801281 if (ret) {12811281- queue_work(nvmet_rdma_delete_wq, &queue->release_work);12821282+ schedule_work(&queue->release_work);12821283 /* Destroying rdma_cm id is not needed here */12831284 return 0;12841285 }···1343134413441345 if (disconnect) {13451346 rdma_disconnect(queue->cm_id);13461346- queue_work(nvmet_rdma_delete_wq, &queue->release_work);13471347+ schedule_work(&queue->release_work);13471348 }13481349}13491350···13731374 mutex_unlock(&nvmet_rdma_queue_mutex);1374137513751376 pr_err("failed to connect queue %d\n", queue->idx);13761376- queue_work(nvmet_rdma_delete_wq, &queue->release_work);13771377+ schedule_work(&queue->release_work);13771378}1378137913791380/**···16551656 if (ret)16561657 goto err_ib_client;1657165816581658- nvmet_rdma_delete_wq = alloc_workqueue("nvmet-rdma-delete-wq",16591659- WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_SYSFS, 0);16601660- if (!nvmet_rdma_delete_wq) {16611661- ret = -ENOMEM;16621662- goto err_unreg_transport;16631663- }16641664-16651659 return 0;1666166016671667-err_unreg_transport:16681668- nvmet_unregister_transport(&nvmet_rdma_ops);16691661err_ib_client:16701662 ib_unregister_client(&nvmet_rdma_ib_client);16711663 return ret;···1664167416651675static void __exit nvmet_rdma_exit(void)16661676{16671667- destroy_workqueue(nvmet_rdma_delete_wq);16681677 nvmet_unregister_transport(&nvmet_rdma_ops);16691678 ib_unregister_client(&nvmet_rdma_ib_client);16701679 WARN_ON_ONCE(!list_empty(&nvmet_rdma_queue_list));
+6-4
drivers/nvmem/core.c
···4444 int bytes;4545 int bit_offset;4646 int nbits;4747+ struct device_node *np;4748 struct nvmem_device *nvmem;4849 struct list_head node;4950};···299298 mutex_lock(&nvmem_mutex);300299 list_del(&cell->node);301300 mutex_unlock(&nvmem_mutex);301301+ of_node_put(cell->np);302302 kfree(cell->name);303303 kfree(cell);304304}···532530 return -ENOMEM;533531534532 cell->nvmem = nvmem;533533+ cell->np = of_node_get(child);535534 cell->offset = be32_to_cpup(addr++);536535 cell->bytes = be32_to_cpup(addr);537536 cell->name = kasprintf(GFP_KERNEL, "%pOFn", child);···963960964961#if IS_ENABLED(CONFIG_OF)965962static struct nvmem_cell *966966-nvmem_find_cell_by_index(struct nvmem_device *nvmem, int index)963963+nvmem_find_cell_by_node(struct nvmem_device *nvmem, struct device_node *np)967964{968965 struct nvmem_cell *cell = NULL;969969- int i = 0;970966971967 mutex_lock(&nvmem_mutex);972968 list_for_each_entry(cell, &nvmem->cells, node) {973973- if (index == i++)969969+ if (np == cell->np)974970 break;975971 }976972 mutex_unlock(&nvmem_mutex);···10131011 if (IS_ERR(nvmem))10141012 return ERR_CAST(nvmem);1015101310161016- cell = nvmem_find_cell_by_index(nvmem, index);10141014+ cell = nvmem_find_cell_by_node(nvmem, cell_np);10171015 if (!cell) {10181016 __nvmem_device_put(nvmem);10191017 return ERR_PTR(-ENOENT);
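The nvmem lookup now keys on the device_node pointer captured at parse time (of_node_get() stored in the cell, pointer match in nvmem_find_cell_by_node()) instead of a positional index, which silently broke whenever list order and DT order diverged. The lookup-by-key shape in miniature (structs are illustrative stand-ins):

#include <stdio.h>

struct node {				/* stand-in for device_node */
	const char *name;
};

struct cell {
	const struct node *np;		/* key stored when the cell is parsed */
	int offset;
};

/* Match on the node pointer itself rather than trusting position. */
static struct cell *find_cell(struct cell *cells, int n, const struct node *np)
{
	for (int i = 0; i < n; i++)
		if (cells[i].np == np)
			return &cells[i];
	return NULL;
}

int main(void)
{
	struct node a = { "calib" }, b = { "mac-address" };
	struct cell cells[] = { { &b, 0x10 }, { &a, 0x00 } };	/* out of DT order */
	struct cell *c = find_cell(cells, 2, &a);

	if (c)
		printf("found %s at offset 0x%02x\n", c->np->name, c->offset);
	return 0;
}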
+3-1
drivers/of/device.c
···149149 * set by the driver.150150 */151151 mask = DMA_BIT_MASK(ilog2(dma_addr + size - 1) + 1);152152- dev->bus_dma_mask = mask;153152 dev->coherent_dma_mask &= mask;154153 *dev->dma_mask &= mask;154154+ /* ...but only set bus mask if we found valid dma-ranges earlier */155155+ if (!ret)156156+ dev->bus_dma_mask = mask;155157156158 coherent = of_dma_is_coherent(np);157159 dev_dbg(dev, "device is%sdma coherent\n",
···8888 int i;89899090 for (i = 0; i < PCIE_IATU_NUM; i++)9191- dw_pcie_disable_atu(pcie->pci, DW_PCIE_REGION_OUTBOUND, i);9191+ dw_pcie_disable_atu(pcie->pci, i, DW_PCIE_REGION_OUTBOUND);9292}93939494static int ls1021_pcie_link_up(struct dw_pcie *pci)
···793793{794794 struct pci_dev *pci_dev = to_pci_dev(dev);795795 struct acpi_device *adev = ACPI_COMPANION(dev);796796- int node;797796798797 if (!adev)799798 return;800800-801801- node = acpi_get_node(adev->handle);802802- if (node != NUMA_NO_NODE)803803- set_dev_node(dev, node);804799805800 pci_acpi_optimize_delay(pci_dev, adev->handle);806801
+11-13
drivers/pci/pci.c
···55565556 u32 lnkcap2, lnkcap;5557555755585558 /*55595559- * PCIe r4.0 sec 7.5.3.18 recommends using the Supported Link55605560- * Speeds Vector in Link Capabilities 2 when supported, falling55615561- * back to Max Link Speed in Link Capabilities otherwise.55595559+ * Link Capabilities 2 was added in PCIe r3.0, sec 7.8.18. The55605560+ * implementation note there recommends using the Supported Link55615561+ * Speeds Vector in Link Capabilities 2 when supported.55625562+ *55635563+ * Without Link Capabilities 2, i.e., prior to PCIe r3.0, software55645564+ * should use the Supported Link Speeds field in Link Capabilities,55655565+ * where only 2.5 GT/s and 5.0 GT/s speeds were defined.55625566 */55635567 pcie_capability_read_dword(dev, PCI_EXP_LNKCAP2, &lnkcap2);55645568 if (lnkcap2) { /* PCIe r3.0-compliant */···55785574 }5579557555805576 pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap);55815581- if (lnkcap) {55825582- if (lnkcap & PCI_EXP_LNKCAP_SLS_16_0GB)55835583- return PCIE_SPEED_16_0GT;55845584- else if (lnkcap & PCI_EXP_LNKCAP_SLS_8_0GB)55855585- return PCIE_SPEED_8_0GT;55865586- else if (lnkcap & PCI_EXP_LNKCAP_SLS_5_0GB)55875587- return PCIE_SPEED_5_0GT;55885588- else if (lnkcap & PCI_EXP_LNKCAP_SLS_2_5GB)55895589- return PCIE_SPEED_2_5GT;55905590- }55775577+ if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_5_0GB)55785578+ return PCIE_SPEED_5_0GT;55795579+ else if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_2_5GB)55805580+ return PCIE_SPEED_2_5GT;5591558155925582 return PCI_SPEED_UNKNOWN;55935583}
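The pci.c hunk works because the Supported Link Speeds value in Link Capabilities is a 4-bit enumerated field, not a set of independent flag bits: the encodings defined before PCIe r3.0 are 1 for 2.5 GT/s and 2 for 5.0 GT/s, so the field must be masked and compared, not bit-tested. A decode demo using illustrative macros modeled on PCI_EXP_LNKCAP_SLS*:

#include <stdio.h>
#include <stdint.h>

#define LNKCAP_SLS		0x0000000fu	/* Supported Link Speeds field */
#define LNKCAP_SLS_2_5GB	0x00000001u	/* enumerated value, not a bit */
#define LNKCAP_SLS_5_0GB	0x00000002u

static const char *max_speed(uint32_t lnkcap)
{
	/* Mask, then compare for equality: bit-testing the old way
	 * can match the wrong speed once other field values exist. */
	switch (lnkcap & LNKCAP_SLS) {
	case LNKCAP_SLS_5_0GB:
		return "5.0 GT/s";
	case LNKCAP_SLS_2_5GB:
		return "2.5 GT/s";
	default:
		return "unknown";
	}
}

int main(void)
{
	printf("%s\n", max_speed(0x00123441u));	/* field = 1 -> 2.5 GT/s */
	printf("%s\n", max_speed(0x00000002u));	/* field = 2 -> 5.0 GT/s */
	return 0;
}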
+11-9
drivers/phy/qualcomm/phy-qcom-qusb2.c
···231231 .mask_core_ready = CORE_READY_STATUS,232232 .has_pll_override = true,233233 .autoresume_en = BIT(0),234234+ .update_tune1_with_efuse = true,234235};235236236237static const char * const qusb2_phy_vreg_names[] = {···403402404403 /*405404 * Read efuse register having TUNE2/1 parameter's high nibble.406406- * If efuse register shows value as 0x0, or if we fail to find407407- * a valid efuse register settings, then use default value408408- * as 0xB for high nibble that we have already set while409409- * configuring phy.405405+ * If efuse register shows value as 0x0 (indicating value is not406406+ * fused), or if we fail to find a valid efuse register setting,407407+ * then use default value for high nibble that we have already408408+ * set while configuring the phy.410409 */411410 val = nvmem_cell_read(qphy->cell, NULL);412411 if (IS_ERR(val) || !val[0]) {···416415417416 /* Fused TUNE1/2 value is the higher nibble only */418417 if (cfg->update_tune1_with_efuse)419419- qusb2_setbits(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE1],420420- val[0] << 0x4);418418+ qusb2_write_mask(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE1],419419+ val[0] << HSTX_TRIM_SHIFT,420420+ HSTX_TRIM_MASK);421421 else422422- qusb2_setbits(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE2],423423- val[0] << 0x4);424424-422422+ qusb2_write_mask(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE2],423423+ val[0] << HSTX_TRIM_SHIFT,424424+ HSTX_TRIM_MASK);425425}426426427427static int qusb2_phy_set_mode(struct phy *phy, enum phy_mode mode)
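qusb2_setbits() is a read-OR-write, so it could only add bits on top of the default trim nibble (0xB) already programmed; fusing in a value like 0x4 produced 0xF instead of 0x4. Replacing a field needs a masked read-modify-write, which is what the switch to qusb2_write_mask() provides. The difference in a runnable toy (register variable and macros are illustrative):

#include <stdio.h>
#include <stdint.h>

#define HSTX_TRIM_SHIFT	4
#define HSTX_TRIM_MASK	(0xfu << HSTX_TRIM_SHIFT)

static uint32_t reg;

/* Clear the masked field, then OR in the new value. */
static void write_mask(uint32_t val, uint32_t mask)
{
	reg = (reg & ~mask) | (val & mask);
}

/* The old helper: can only turn bits on, never off. */
static void setbits(uint32_t val)
{
	reg |= val;
}

int main(void)
{
	reg = 0xB0;		/* default trim 0xB already in place */
	setbits(0x4u << HSTX_TRIM_SHIFT);
	printf("setbits:    trim=0x%x\n",
	       (unsigned int)((reg & HSTX_TRIM_MASK) >> HSTX_TRIM_SHIFT));	/* 0xf */

	reg = 0xB0;
	write_mask(0x4u << HSTX_TRIM_SHIFT, HSTX_TRIM_MASK);
	printf("write_mask: trim=0x%x\n",
	       (unsigned int)((reg & HSTX_TRIM_MASK) >> HSTX_TRIM_SHIFT));	/* 0x4 */
	return 0;
}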
+2-1
drivers/phy/socionext/Kconfig
···26262727config PHY_UNIPHIER_PCIE2828 tristate "Uniphier PHY driver for PCIe controller"2929- depends on (ARCH_UNIPHIER || COMPILE_TEST) && OF2929+ depends on ARCH_UNIPHIER || COMPILE_TEST3030+ depends on OF && HAS_IOMEM3031 default PCIE_UNIPHIER3132 select GENERIC_PHY3233 help
+1-1
drivers/pinctrl/meson/pinctrl-meson-gxbb.c
···830830831831static struct meson_bank meson_gxbb_aobus_banks[] = {832832 /* name first last irq pullen pull dir out in */833833- BANK("AO", GPIOAO_0, GPIOAO_13, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0),833833+ BANK("AO", GPIOAO_0, GPIOAO_13, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0),834834};835835836836static struct meson_pinctrl_data meson_gxbb_periphs_pinctrl_data = {
+1-1
drivers/pinctrl/meson/pinctrl-meson-gxl.c
···807807808808static struct meson_bank meson_gxl_aobus_banks[] = {809809 /* name first last irq pullen pull dir out in */810810- BANK("AO", GPIOAO_0, GPIOAO_9, 0, 9, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0),810810+ BANK("AO", GPIOAO_0, GPIOAO_9, 0, 9, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0),811811};812812813813static struct meson_pinctrl_data meson_gxl_periphs_pinctrl_data = {
+1-1
drivers/pinctrl/meson/pinctrl-meson.c
···192192 dev_dbg(pc->dev, "pin %u: disable bias\n", pin);193193194194 meson_calc_reg_and_bit(bank, pin, REG_PULL, ®, &bit);195195- ret = regmap_update_bits(pc->reg_pull, reg,195195+ ret = regmap_update_bits(pc->reg_pullen, reg,196196 BIT(bit), 0);197197 if (ret)198198 return ret;
+1-1
drivers/pinctrl/meson/pinctrl-meson8.c
···1053105310541054static struct meson_bank meson8_aobus_banks[] = {10551055 /* name first last irq pullen pull dir out in */10561056- BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0),10561056+ BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0),10571057};1058105810591059static struct meson_pinctrl_data meson8_cbus_pinctrl_data = {
···257257 struct cmos_rtc *cmos = dev_get_drvdata(dev);258258 unsigned char rtc_control;259259260260+ /* This is not only an rtc_op, but is also called directly */260261 if (!is_valid_irq(cmos->irq))261262 return -EIO;262263···453452 unsigned char mon, mday, hrs, min, sec, rtc_control;454453 int ret;455454455455+ /* This is not only an rtc_op, but is also called directly */456456+ if (!is_valid_irq(cmos->irq))457457+ return -EIO;458458···518516 struct cmos_rtc *cmos = dev_get_drvdata(dev);519517 unsigned long flags;520518521521- if (!is_valid_irq(cmos->irq))522522- return -EINVAL;523523-524519 spin_lock_irqsave(&rtc_lock, flags);525520526521 if (enabled)···576577 .set_alarm = cmos_set_alarm,577578 .proc = cmos_procfs,578579 .alarm_irq_enable = cmos_alarm_irq_enable,580580+};581581+582582+static const struct rtc_class_ops cmos_rtc_ops_no_alarm = {583583+ .read_time = cmos_read_time,584584+ .set_time = cmos_set_time,585585+ .proc = cmos_procfs,579586};580587581588/*----------------------------------------------------------------*/···860855 dev_dbg(dev, "IRQ %d is already in use\n", rtc_irq);861856 goto cleanup1;862857 }858858+859859+ cmos_rtc.rtc->ops = &cmos_rtc_ops;860860+ } else {861861+ cmos_rtc.rtc->ops = &cmos_rtc_ops_no_alarm;863862 }864863865865- cmos_rtc.rtc->ops = &cmos_rtc_ops;866864 cmos_rtc.rtc->nvram_old_abi = true;867865 retval = rtc_register_device(cmos_rtc.rtc);868866 if (retval)
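Rather than testing is_valid_irq() inside each callback, the driver now publishes a second ops table with the alarm callbacks left out when no IRQ is wired up; the core can then report the feature as absent instead of returning -EIO at call time. The selection pattern in plain C (all names illustrative):

#include <stdio.h>

struct rtc_ops {
	void (*read_time)(void);
	void (*set_alarm)(void);	/* NULL when alarms are unsupported */
};

static void read_time(void) { puts("read_time"); }
static void set_alarm(void) { puts("set_alarm"); }

static const struct rtc_ops ops_full = { read_time, set_alarm };
static const struct rtc_ops ops_no_alarm = { read_time, NULL };

int main(void)
{
	int have_irq = 0;	/* pretend no alarm IRQ was wired up */
	const struct rtc_ops *ops = have_irq ? &ops_full : &ops_no_alarm;

	ops->read_time();
	if (ops->set_alarm)	/* the core skips absent callbacks */
		ops->set_alarm();
	else
		puts("alarms not supported");
	return 0;
}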
+1-1
drivers/rtc/rtc-hid-sensor-time.c
···213213 /* get a report with all values through requesting one value */214214 sensor_hub_input_attr_get_raw_value(time_state->common_attributes.hsdev,215215 HID_USAGE_SENSOR_TIME, hid_time_addresses[0],216216- time_state->info[0].report_id, SENSOR_HUB_SYNC);216216+ time_state->info[0].report_id, SENSOR_HUB_SYNC, false);217217 /* wait for all values (event) */218218 ret = wait_for_completion_killable_timeout(219219 &time_state->comp_last_time, HZ*6);
+3
drivers/rtc/rtc-pcf2127.c
···303303 memcpy(buf + 1, val, val_size);304304305305 ret = i2c_master_send(client, buf, val_size + 1);306306+307307+ kfree(buf);308308+306309 if (ret != val_size + 1)307310 return ret < 0 ? ret : -EIO;308311
+4-2
drivers/s390/cio/vfio_ccw_cp.c
···387387 * orb specified one of the unsupported formats, we defer388388 * checking for IDAWs in unsupported formats to here.389389 */390390- if ((!cp->orb.cmd.c64 || cp->orb.cmd.i2k) && ccw_is_idal(ccw))390390+ if ((!cp->orb.cmd.c64 || cp->orb.cmd.i2k) && ccw_is_idal(ccw)) {391391+ kfree(p);391392 return -EOPNOTSUPP;393393+ }392394393395 if ((!ccw_is_chain(ccw)) && (!ccw_is_tic(ccw)))394396 break;···530528531529 ret = pfn_array_alloc_pin(pat->pat_pa, cp->mdev, ccw->cda, ccw->count);532530 if (ret < 0)533533- goto out_init;531531+ goto out_unpin;534532535533 /* Translate this direct ccw to a idal ccw. */536534 idaws = kcalloc(ret, sizeof(*idaws), GFP_DMA | GFP_KERNEL);
+5-5
drivers/s390/cio/vfio_ccw_drv.c
···2222#include "vfio_ccw_private.h"23232424struct workqueue_struct *vfio_ccw_work_q;2525-struct kmem_cache *vfio_ccw_io_region;2525+static struct kmem_cache *vfio_ccw_io_region;26262727/*2828 * Helpers···134134 if (ret)135135 goto out_free;136136137137- ret = vfio_ccw_mdev_reg(sch);138138- if (ret)139139- goto out_disable;140140-141137 INIT_WORK(&private->io_work, vfio_ccw_sch_io_todo);142138 atomic_set(&private->avail, 1);143139 private->state = VFIO_CCW_STATE_STANDBY;140140+141141+ ret = vfio_ccw_mdev_reg(sch);142142+ if (ret)143143+ goto out_disable;144144145145 return 0;146146
+4-4
drivers/s390/crypto/ap_bus.c
···775775 drvres = ap_drv->flags & AP_DRIVER_FLAG_DEFAULT;776776 if (!!devres != !!drvres)777777 return -ENODEV;778778+ /* (re-)init queue's state machine */779779+ ap_queue_reinit_state(to_ap_queue(dev));778780 }779781780782 /* Add queue/card to list of active queues/cards */···809807 struct ap_device *ap_dev = to_ap_dev(dev);810808 struct ap_driver *ap_drv = ap_dev->drv;811809810810+ if (is_queue_dev(dev))811811+ ap_queue_remove(to_ap_queue(dev));812812 if (ap_drv->remove)813813 ap_drv->remove(ap_dev);814814···14481444 aq->ap_dev.device.parent = &ac->ap_dev.device;14491445 dev_set_name(&aq->ap_dev.device,14501446 "%02x.%04x", id, dom);14511451- /* Start with a device reset */14521452- spin_lock_bh(&aq->lock);14531453- ap_wait(ap_sm_event(aq, AP_EVENT_POLL));14541454- spin_unlock_bh(&aq->lock);14551447 /* Register device */14561448 rc = device_register(&aq->ap_dev.device);14571449 if (rc) {
+1
drivers/s390/crypto/ap_bus.h
···254254void ap_queue_remove(struct ap_queue *aq);255255void ap_queue_suspend(struct ap_device *ap_dev);256256void ap_queue_resume(struct ap_device *ap_dev);257257+void ap_queue_reinit_state(struct ap_queue *aq);257258258259struct ap_card *ap_card_create(int id, int queue_depth, int raw_device_type,259260 int comp_device_type, unsigned int functions);
···146146 QETH_CARD_TEXT(card, 2, "L2Wmac");147147 rc = qeth_l2_send_setdelmac(card, mac, cmd);148148 if (rc == -EEXIST)149149- QETH_DBF_MESSAGE(2, "MAC %pM already registered on %s\n",150150- mac, QETH_CARD_IFNAME(card));149149+ QETH_DBF_MESSAGE(2, "MAC already registered on device %x\n",150150+ CARD_DEVID(card));151151 else if (rc)152152- QETH_DBF_MESSAGE(2, "Failed to register MAC %pM on %s: %d\n",153153- mac, QETH_CARD_IFNAME(card), rc);152152+ QETH_DBF_MESSAGE(2, "Failed to register MAC on device %x: %d\n",153153+ CARD_DEVID(card), rc);154154 return rc;155155}156156···163163 QETH_CARD_TEXT(card, 2, "L2Rmac");164164 rc = qeth_l2_send_setdelmac(card, mac, cmd);165165 if (rc)166166- QETH_DBF_MESSAGE(2, "Failed to delete MAC %pM on %s: %d\n",167167- mac, QETH_CARD_IFNAME(card), rc);166166+ QETH_DBF_MESSAGE(2, "Failed to delete MAC on device %u: %d\n",167167+ CARD_DEVID(card), rc);168168 return rc;169169}170170···260260261261 QETH_CARD_TEXT(card, 2, "L2sdvcb");262262 if (cmd->hdr.return_code) {263263- QETH_DBF_MESSAGE(2, "Error in processing VLAN %i on %s: 0x%x.\n",263263+ QETH_DBF_MESSAGE(2, "Error in processing VLAN %u on device %x: %#x.\n",264264 cmd->data.setdelvlan.vlan_id,265265- QETH_CARD_IFNAME(card), cmd->hdr.return_code);265265+ CARD_DEVID(card), cmd->hdr.return_code);266266 QETH_CARD_TEXT_(card, 2, "L2VL%4x", cmd->hdr.command);267267 QETH_CARD_TEXT_(card, 2, "err%d", cmd->hdr.return_code);268268 }···455455 rc = qeth_vm_request_mac(card);456456 if (!rc)457457 goto out;458458- QETH_DBF_MESSAGE(2, "z/VM MAC Service failed on device %s: x%x\n",459459- CARD_BUS_ID(card), rc);458458+ QETH_DBF_MESSAGE(2, "z/VM MAC Service failed on device %x: %#x\n",459459+ CARD_DEVID(card), rc);460460 QETH_DBF_TEXT_(SETUP, 2, "err%04x", rc);461461 /* fall back to alternative mechanism: */462462 }···468468 rc = qeth_setadpparms_change_macaddr(card);469469 if (!rc)470470 goto out;471471- QETH_DBF_MESSAGE(2, "READ_MAC Assist failed on device %s: x%x\n",472472- CARD_BUS_ID(card), rc);471471+ QETH_DBF_MESSAGE(2, "READ_MAC Assist failed on device %x: %#x\n",472472+ CARD_DEVID(card), rc);473473 QETH_DBF_TEXT_(SETUP, 2, "1err%04x", rc);474474 /* fall back once more: */475475 }···826826827827 if (cgdev->state == CCWGROUP_ONLINE)828828 qeth_l2_set_offline(cgdev);829829- unregister_netdev(card->dev);829829+ if (qeth_netdev_is_registered(card->dev))830830+ unregister_netdev(card->dev);830831}831832832833static const struct ethtool_ops qeth_l2_ethtool_ops = {···863862 .ndo_set_features = qeth_set_features864863};865864866866-static int qeth_l2_setup_netdev(struct qeth_card *card)865865+static int qeth_l2_setup_netdev(struct qeth_card *card, bool carrier_ok)867866{868867 int rc;869868870870- if (card->dev->netdev_ops)869869+ if (qeth_netdev_is_registered(card->dev))871870 return 0;872871873872 card->dev->priv_flags |= IFF_UNICAST_FLT;···920919 qeth_l2_request_initial_mac(card);921920 netif_napi_add(card->dev, &card->napi, qeth_poll, QETH_NAPI_WEIGHT);922921 rc = register_netdev(card->dev);922922+ if (!rc && carrier_ok)923923+ netif_carrier_on(card->dev);924924+923925 if (rc)924926 card->dev->netdev_ops = NULL;925927 return rc;···953949 struct qeth_card *card = dev_get_drvdata(&gdev->dev);954950 int rc = 0;955951 enum qeth_card_states recover_flag;952952+ bool carrier_ok;956953957954 mutex_lock(&card->discipline_mutex);958955 mutex_lock(&card->conf_mutex);···961956 QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *));962957963958 recover_flag = card->state;964964- rc = qeth_core_hardsetup_card(card);959959+ rc = 
qeth_core_hardsetup_card(card, &carrier_ok);965960 if (rc) {966961 QETH_DBF_TEXT_(SETUP, 2, "2err%04x", rc);967962 rc = -ENODEV;···972967 dev_info(&card->gdev->dev,973968 "The device represents a Bridge Capable Port\n");974969975975- rc = qeth_l2_setup_netdev(card);970970+ rc = qeth_l2_setup_netdev(card, carrier_ok);976971 if (rc)977972 goto out_remove;978973
+70-137
drivers/s390/net/qeth_l3_main.c
···278278279279 QETH_CARD_TEXT(card, 4, "clearip");280280281281- if (recover && card->options.sniffer)282282- return;283283-284281 spin_lock_bh(&card->ip_lock);285282286283 hash_for_each_safe(card->ip_htable, i, tmp, addr, hnode) {···491494 QETH_PROT_IPV4);492495 if (rc) {493496 card->options.route4.type = NO_ROUTER;494494- QETH_DBF_MESSAGE(2, "Error (0x%04x) while setting routing type"495495- " on %s. Type set to 'no router'.\n", rc,496496- QETH_CARD_IFNAME(card));497497+ QETH_DBF_MESSAGE(2, "Error (%#06x) while setting routing type on device %x. Type set to 'no router'.\n",498498+ rc, CARD_DEVID(card));497499 }498500 return rc;499501}···514518 QETH_PROT_IPV6);515519 if (rc) {516520 card->options.route6.type = NO_ROUTER;517517- QETH_DBF_MESSAGE(2, "Error (0x%04x) while setting routing type"518518- " on %s. Type set to 'no router'.\n", rc,519519- QETH_CARD_IFNAME(card));521521+ QETH_DBF_MESSAGE(2, "Error (%#06x) while setting routing type on device %x. Type set to 'no router'.\n",522522+ rc, CARD_DEVID(card));520523 }521524 return rc;522525}···658663 int rc = 0;659664 int cnt = 3;660665666666+ if (card->options.sniffer)667667+ return 0;661668662669 if (addr->proto == QETH_PROT_IPV4) {663670 QETH_CARD_TEXT(card, 2, "setaddr4");···693696 struct qeth_ipaddr *addr)694697{695698 int rc = 0;699699+700700+ if (card->options.sniffer)701701+ return 0;696702697703 if (addr->proto == QETH_PROT_IPV4) {698704 QETH_CARD_TEXT(card, 2, "deladdr4");···10701070 }10711071 break;10721072 default:10731073- QETH_DBF_MESSAGE(2, "Unknown sniffer action (0x%04x) on %s\n",10741074- cmd->data.diagass.action, QETH_CARD_IFNAME(card));10731073+ QETH_DBF_MESSAGE(2, "Unknown sniffer action (%#06x) on device %x\n",10741074+ cmd->data.diagass.action, CARD_DEVID(card));10751075 }1076107610771077 return 0;···15171517 qeth_l3_handle_promisc_mode(card);15181518}1519151915201520-static const char *qeth_l3_arp_get_error_cause(int *rc)15201520+static int qeth_l3_arp_makerc(int rc)15211521{15221522- switch (*rc) {15231523- case QETH_IPA_ARP_RC_FAILED:15241524- *rc = -EIO;15251525- return "operation failed";15221522+ switch (rc) {15231523+ case IPA_RC_SUCCESS:15241524+ return 0;15261525 case QETH_IPA_ARP_RC_NOTSUPP:15271527- *rc = -EOPNOTSUPP;15281528- return "operation not supported";15291529- case QETH_IPA_ARP_RC_OUT_OF_RANGE:15301530- *rc = -EINVAL;15311531- return "argument out of range";15321526 case QETH_IPA_ARP_RC_Q_NOTSUPP:15331533- *rc = -EOPNOTSUPP;15341534- return "query operation not supported";15271527+ return -EOPNOTSUPP;15281528+ case QETH_IPA_ARP_RC_OUT_OF_RANGE:15291529+ return -EINVAL;15351530 case QETH_IPA_ARP_RC_Q_NO_DATA:15361536- *rc = -ENOENT;15371537- return "no query data available";15311531+ return -ENOENT;15381532 default:15391539- return "unknown error";15331533+ return -EIO;15401534 }15411535}1542153615431537static int qeth_l3_arp_set_no_entries(struct qeth_card *card, int no_entries)15441538{15451545- int tmp;15461539 int rc;1547154015481541 QETH_CARD_TEXT(card, 3, "arpstnoe");···15531560 rc = qeth_send_simple_setassparms(card, IPA_ARP_PROCESSING,15541561 IPA_CMD_ASS_ARP_SET_NO_ENTRIES,15551562 no_entries);15561556- if (rc) {15571557- tmp = rc;15581558- QETH_DBF_MESSAGE(2, "Could not set number of ARP entries on "15591559- "%s: %s (0x%x/%d)\n", QETH_CARD_IFNAME(card),15601560- qeth_l3_arp_get_error_cause(&rc), tmp, tmp);15611561- }15621562- return rc;15631563+ if (rc)15641564+ QETH_DBF_MESSAGE(2, "Could not set number of ARP entries on device %x: %#x\n",15651565+ CARD_DEVID(card), rc);15661566+ return 
qeth_l3_arp_makerc(rc);15631567}1564156815651569static __u32 get_arp_entry_size(struct qeth_card *card,···17061716{17071717 struct qeth_cmd_buffer *iob;17081718 struct qeth_ipa_cmd *cmd;17091709- int tmp;17101719 int rc;1711172017121721 QETH_CARD_TEXT_(card, 3, "qarpipv%i", prot);···17241735 rc = qeth_l3_send_ipa_arp_cmd(card, iob,17251736 QETH_SETASS_BASE_LEN+QETH_ARP_CMD_LEN,17261737 qeth_l3_arp_query_cb, (void *)qinfo);17271727- if (rc) {17281728- tmp = rc;17291729- QETH_DBF_MESSAGE(2,17301730- "Error while querying ARP cache on %s: %s "17311731- "(0x%x/%d)\n", QETH_CARD_IFNAME(card),17321732- qeth_l3_arp_get_error_cause(&rc), tmp, tmp);17331733- }17341734-17351735- return rc;17381738+ if (rc)17391739+ QETH_DBF_MESSAGE(2, "Error while querying ARP cache on device %x: %#x\n",17401740+ CARD_DEVID(card), rc);17411741+ return qeth_l3_arp_makerc(rc);17361742}1737174317381744static int qeth_l3_arp_query(struct qeth_card *card, char __user *udata)···17771793 return rc;17781794}1779179517801780-static int qeth_l3_arp_add_entry(struct qeth_card *card,17811781- struct qeth_arp_cache_entry *entry)17961796+static int qeth_l3_arp_modify_entry(struct qeth_card *card,17971797+ struct qeth_arp_cache_entry *entry,17981798+ enum qeth_arp_process_subcmds arp_cmd)17821799{18001800+ struct qeth_arp_cache_entry *cmd_entry;17831801 struct qeth_cmd_buffer *iob;17841784- char buf[16];17851785- int tmp;17861802 int rc;1787180317881788- QETH_CARD_TEXT(card, 3, "arpadent");18041804+ if (arp_cmd == IPA_CMD_ASS_ARP_ADD_ENTRY)18051805+ QETH_CARD_TEXT(card, 3, "arpadd");18061806+ else18071807+ QETH_CARD_TEXT(card, 3, "arpdel");1789180817901809 /*17911810 * currently GuestLAN only supports the ARP assist function···18011814 return -EOPNOTSUPP;18021815 }1803181618041804- iob = qeth_get_setassparms_cmd(card, IPA_ARP_PROCESSING,18051805- IPA_CMD_ASS_ARP_ADD_ENTRY,18061806- sizeof(struct qeth_arp_cache_entry),18071807- QETH_PROT_IPV4);18171817+ iob = qeth_get_setassparms_cmd(card, IPA_ARP_PROCESSING, arp_cmd,18181818+ sizeof(*cmd_entry), QETH_PROT_IPV4);18081819 if (!iob)18091820 return -ENOMEM;18101810- rc = qeth_send_setassparms(card, iob,18111811- sizeof(struct qeth_arp_cache_entry),18121812- (unsigned long) entry,18131813- qeth_setassparms_cb, NULL);18141814- if (rc) {18151815- tmp = rc;18161816- qeth_l3_ipaddr4_to_string((u8 *)entry->ipaddr, buf);18171817- QETH_DBF_MESSAGE(2, "Could not add ARP entry for address %s "18181818- "on %s: %s (0x%x/%d)\n", buf, QETH_CARD_IFNAME(card),18191819- qeth_l3_arp_get_error_cause(&rc), tmp, tmp);18201820- }18211821- return rc;18221822-}1823182118241824-static int qeth_l3_arp_remove_entry(struct qeth_card *card,18251825- struct qeth_arp_cache_entry *entry)18261826-{18271827- struct qeth_cmd_buffer *iob;18281828- char buf[16] = {0, };18291829- int tmp;18301830- int rc;18221822+ cmd_entry = &__ipa_cmd(iob)->data.setassparms.data.arp_entry;18231823+ ether_addr_copy(cmd_entry->macaddr, entry->macaddr);18241824+ memcpy(cmd_entry->ipaddr, entry->ipaddr, 4);18251825+ rc = qeth_send_ipa_cmd(card, iob, qeth_setassparms_cb, NULL);18261826+ if (rc)18271827+ QETH_DBF_MESSAGE(2, "Could not modify (cmd: %#x) ARP entry on device %x: %#x\n",18281828+ arp_cmd, CARD_DEVID(card), rc);1831182918321832- QETH_CARD_TEXT(card, 3, "arprment");18331833-18341834- /*18351835- * currently GuestLAN only supports the ARP assist function18361836- * IPA_CMD_ASS_ARP_QUERY_INFO, but not IPA_CMD_ASS_ARP_REMOVE_ENTRY;18371837- * thus we say EOPNOTSUPP for this ARP function18381838- */18391839- if 
(card->info.guestlan)18401840- return -EOPNOTSUPP;18411841- if (!qeth_is_supported(card, IPA_ARP_PROCESSING)) {18421842- return -EOPNOTSUPP;18431843- }18441844- memcpy(buf, entry, 12);18451845- iob = qeth_get_setassparms_cmd(card, IPA_ARP_PROCESSING,18461846- IPA_CMD_ASS_ARP_REMOVE_ENTRY,18471847- 12,18481848- QETH_PROT_IPV4);18491849- if (!iob)18501850- return -ENOMEM;18511851- rc = qeth_send_setassparms(card, iob,18521852- 12, (unsigned long)buf,18531853- qeth_setassparms_cb, NULL);18541854- if (rc) {18551855- tmp = rc;18561856- memset(buf, 0, 16);18571857- qeth_l3_ipaddr4_to_string((u8 *)entry->ipaddr, buf);18581858- QETH_DBF_MESSAGE(2, "Could not delete ARP entry for address %s"18591859- " on %s: %s (0x%x/%d)\n", buf, QETH_CARD_IFNAME(card),18601860- qeth_l3_arp_get_error_cause(&rc), tmp, tmp);18611861- }18621862- return rc;18301830+ return qeth_l3_arp_makerc(rc);18631831}1864183218651833static int qeth_l3_arp_flush_cache(struct qeth_card *card)18661834{18671835 int rc;18681868- int tmp;1869183618701837 QETH_CARD_TEXT(card, 3, "arpflush");18711838···18351894 }18361895 rc = qeth_send_simple_setassparms(card, IPA_ARP_PROCESSING,18371896 IPA_CMD_ASS_ARP_FLUSH_CACHE, 0);18381838- if (rc) {18391839- tmp = rc;18401840- QETH_DBF_MESSAGE(2, "Could not flush ARP cache on %s: %s "18411841- "(0x%x/%d)\n", QETH_CARD_IFNAME(card),18421842- qeth_l3_arp_get_error_cause(&rc), tmp, tmp);18431843- }18441844- return rc;18971897+ if (rc)18981898+ QETH_DBF_MESSAGE(2, "Could not flush ARP cache on device %x: %#x\n",18991899+ CARD_DEVID(card), rc);19001900+ return qeth_l3_arp_makerc(rc);18451901}1846190218471903static int qeth_l3_do_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)18481904{18491905 struct qeth_card *card = dev->ml_priv;18501906 struct qeth_arp_cache_entry arp_entry;19071907+ enum qeth_arp_process_subcmds arp_cmd;18511908 int rc = 0;1852190918531910 switch (cmd) {···18641925 rc = qeth_l3_arp_query(card, rq->ifr_ifru.ifru_data);18651926 break;18661927 case SIOC_QETH_ARP_ADD_ENTRY:18671867- if (!capable(CAP_NET_ADMIN)) {18681868- rc = -EPERM;18691869- break;18701870- }18711871- if (copy_from_user(&arp_entry, rq->ifr_ifru.ifru_data,18721872- sizeof(struct qeth_arp_cache_entry)))18731873- rc = -EFAULT;18741874- else18751875- rc = qeth_l3_arp_add_entry(card, &arp_entry);18761876- break;18771928 case SIOC_QETH_ARP_REMOVE_ENTRY:18781878- if (!capable(CAP_NET_ADMIN)) {18791879- rc = -EPERM;18801880- break;18811881- }18821882- if (copy_from_user(&arp_entry, rq->ifr_ifru.ifru_data,18831883- sizeof(struct qeth_arp_cache_entry)))18841884- rc = -EFAULT;18851885- else18861886- rc = qeth_l3_arp_remove_entry(card, &arp_entry);18871887- break;19291929+ if (!capable(CAP_NET_ADMIN))19301930+ return -EPERM;19311931+ if (copy_from_user(&arp_entry, rq->ifr_data, sizeof(arp_entry)))19321932+ return -EFAULT;19331933+19341934+ arp_cmd = (cmd == SIOC_QETH_ARP_ADD_ENTRY) ?19351935+ IPA_CMD_ASS_ARP_ADD_ENTRY :19361936+ IPA_CMD_ASS_ARP_REMOVE_ENTRY;19371937+ return qeth_l3_arp_modify_entry(card, &arp_entry, arp_cmd);18881938 case SIOC_QETH_ARP_FLUSH_CACHE:18891939 if (!capable(CAP_NET_ADMIN)) {18901940 rc = -EPERM;···23112383 .ndo_neigh_setup = qeth_l3_neigh_setup,23122384};2313238523142314-static int qeth_l3_setup_netdev(struct qeth_card *card)23862386+static int qeth_l3_setup_netdev(struct qeth_card *card, bool carrier_ok)23152387{23162388 unsigned int headroom;23172389 int rc;2318239023192319- if (card->dev->netdev_ops)23912391+ if (qeth_netdev_is_registered(card->dev))23202392 return 0;2321239323222394 if 
(card->info.type == QETH_CARD_TYPE_OSD ||···2385245723862458 netif_napi_add(card->dev, &card->napi, qeth_poll, QETH_NAPI_WEIGHT);23872459 rc = register_netdev(card->dev);24602460+ if (!rc && carrier_ok)24612461+ netif_carrier_on(card->dev);24622462+23882463out:23892464 if (rc)23902465 card->dev->netdev_ops = NULL;···24282497 if (cgdev->state == CCWGROUP_ONLINE)24292498 qeth_l3_set_offline(cgdev);2430249924312431- unregister_netdev(card->dev);25002500+ if (qeth_netdev_is_registered(card->dev))25012501+ unregister_netdev(card->dev);24322502 qeth_l3_clear_ip_htable(card, 0);24332503 qeth_l3_clear_ipato_list(card);24342504}···24392507 struct qeth_card *card = dev_get_drvdata(&gdev->dev);24402508 int rc = 0;24412509 enum qeth_card_states recover_flag;25102510+ bool carrier_ok;2442251124432512 mutex_lock(&card->discipline_mutex);24442513 mutex_lock(&card->conf_mutex);···24472514 QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *));2448251524492516 recover_flag = card->state;24502450- rc = qeth_core_hardsetup_card(card);25172517+ rc = qeth_core_hardsetup_card(card, &carrier_ok);24512518 if (rc) {24522519 QETH_DBF_TEXT_(SETUP, 2, "2err%04x", rc);24532520 rc = -ENODEV;24542521 goto out_remove;24552522 }2456252324572457- rc = qeth_l3_setup_netdev(card);25242524+ rc = qeth_l3_setup_netdev(card, carrier_ok);24582525 if (rc)24592526 goto out_remove;24602527
+1
drivers/scsi/Kconfig
···578578config SCSI_MYRS579579 tristate "Mylex DAC960/DAC1100 PCI RAID Controller (SCSI Interface)"580580 depends on PCI581581+ depends on !CPU_BIG_ENDIAN || COMPILE_TEST581582 select RAID_ATTRS582583 help583584 This driver adds support for the Mylex DAC960, AcceleRAID, and
···6767MODULE_PARM_DESC(ql2xplogiabsentdevice,6868 "Option to enable PLOGI to devices that are not present after "6969 "a Fabric scan. This is needed for several broken switches. "7070- "Default is 0 - no PLOGI. 1 - perfom PLOGI.");7070+ "Default is 0 - no PLOGI. 1 - perform PLOGI.");71717272int ql2xloginretrycount = 0;7373module_param(ql2xloginretrycount, int, S_IRUGO);···17491749static void17501750__qla2x00_abort_all_cmds(struct qla_qpair *qp, int res)17511751{17521752- int cnt;17521752+ int cnt, status;17531753 unsigned long flags;17541754 srb_t *sp;17551755 scsi_qla_host_t *vha = qp->vha;···17991799 if (!sp_get(sp)) {18001800 spin_unlock_irqrestore18011801 (qp->qp_lock_ptr, flags);18021802- qla2xxx_eh_abort(18021802+ status = qla2xxx_eh_abort(18031803 GET_CMD_SP(sp));18041804 spin_lock_irqsave18051805 (qp->qp_lock_ptr, flags);18061806+ /*18071807+ * Get rid of extra reference caused18081808+ * by early exit from qla2xxx_eh_abort18091809+ */18101810+ if (status == FAST_IO_FAIL)18111811+ atomic_dec(&sp->ref_count);18061812 }18071813 }18081814 sp->done(sp, res);
+8
drivers/scsi/scsi_lib.c
···697697 */698698 scsi_mq_uninit_cmd(cmd);699699700700+ /*701701+ * queue is still alive, so grab the ref for preventing it702702+ * from being cleaned up during running queue.703703+ */704704+ percpu_ref_get(&q->q_usage_counter);705705+700706 __blk_mq_end_request(req, error);701707702708 if (scsi_target(sdev)->single_lun ||···710704 kblockd_schedule_work(&sdev->requeue_work);711705 else712706 blk_mq_run_hw_queues(q, true);707707+708708+ percpu_ref_put(&q->q_usage_counter);713709 } else {714710 unsigned long flags;715711
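The scsi_lib hunk pins the queue with percpu_ref_get()/percpu_ref_put() so the usage counter cannot hit zero while the completion path is still re-running the hardware queues. The shape of the guard, approximated with a plain atomic counter (an analogy for the pattern, not percpu_ref semantics):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int q_usage = 1;	/* the queue's own base reference */

static void queue_cleanup(void)
{
	puts("queue torn down");
}

static void q_get(void)
{
	atomic_fetch_add(&q_usage, 1);
}

static void q_put(void)
{
	if (atomic_fetch_sub(&q_usage, 1) == 1)
		queue_cleanup();	/* last reference gone */
}

static void run_queue(void)
{
	puts("running queue");
}

int main(void)
{
	q_get();	/* pin before ending the request */
	q_put();	/* simulate teardown dropping the base reference */
	run_queue();	/* still safe: our pin keeps the queue alive */
	q_put();	/* drop the pin; cleanup happens only now */
	return 0;
}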
···307307 ipipeif_write(val, ipipeif_base_addr, IPIPEIF_CFG2);308308 break;309309 }310310+ /* fall through */310311311312 case IPIPEIF_SDRAM_YUV:312313 /* Set clock divider */
+12-12
drivers/staging/media/sunxi/cedrus/cedrus.c
···108108 unsigned int count;109109 unsigned int i;110110111111- count = vb2_request_buffer_cnt(req);112112- if (!count) {113113- v4l2_info(&ctx->dev->v4l2_dev,114114- "No buffer was provided with the request\n");115115- return -ENOENT;116116- } else if (count > 1) {117117- v4l2_info(&ctx->dev->v4l2_dev,118118- "More than one buffer was provided with the request\n");119119- return -EINVAL;120120- }121121-122111 list_for_each_entry(obj, &req->objects, list) {123112 struct vb2_buffer *vb;124113···121132122133 if (!ctx)123134 return -ENOENT;135135+136136+ count = vb2_request_buffer_cnt(req);137137+ if (!count) {138138+ v4l2_info(&ctx->dev->v4l2_dev,139139+ "No buffer was provided with the request\n");140140+ return -ENOENT;141141+ } else if (count > 1) {142142+ v4l2_info(&ctx->dev->v4l2_dev,143143+ "More than one buffer was provided with the request\n");144144+ return -EINVAL;145145+ }124146125147 parent_hdl = &ctx->hdl;126148···253253254254static const struct media_device_ops cedrus_m2m_media_ops = {255255 .req_validate = cedrus_request_validate,256256- .req_queue = vb2_m2m_request_queue,256256+ .req_queue = v4l2_m2m_request_queue,257257};258258259259static int cedrus_probe(struct platform_device *pdev)
+1-1
drivers/staging/most/core.c
···351351352352 for (i = 0; i < ARRAY_SIZE(ch_data_type); i++) {353353 if (c->cfg.data_type & ch_data_type[i].most_ch_data_type)354354- return snprintf(buf, PAGE_SIZE, ch_data_type[i].name);354354+ return snprintf(buf, PAGE_SIZE, "%s", ch_data_type[i].name);355355 }356356 return snprintf(buf, PAGE_SIZE, "unconfigured\n");357357}
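The one-liner in most/core.c is a classic format-string fix: a runtime string must be passed as the argument to a fixed "%s" format, never as the format itself, or any '%' in the channel-type name gets interpreted as a conversion. Demonstrable in a few lines of userspace C:

#include <stdio.h>

int main(void)
{
	char buf[64];
	const char *name = "100% organic";	/* data that happens to contain '%' */

	/* Fixed format: the name is copied verbatim. */
	snprintf(buf, sizeof(buf), "%s", name);
	printf("safe: %s\n", buf);

	/* snprintf(buf, sizeof(buf), name) would instead interpret the
	 * string as a format; a conversion with no matching argument is
	 * undefined behaviour -- the bug the hunk closes. */
	return 0;
}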
···17311731 struct vchiq_await_completion32 args32;17321732 struct vchiq_completion_data32 completion32;17331733 unsigned int *msgbufcount32;17341734+ unsigned int msgbufcount_native;17341735 compat_uptr_t msgbuf32;17351736 void *msgbuf;17361737 void **msgbufptr;···18431842 sizeof(completion32)))18441843 return -EFAULT;1845184418461846- args32.msgbufcount--;18451845+ if (get_user(msgbufcount_native, &args->msgbufcount))18461846+ return -EFAULT;18471847+18481848+ if (!msgbufcount_native)18491849+ args32.msgbufcount--;1847185018481851 msgbufcount32 =18491852 &((struct vchiq_await_completion32 __user *)arg)->msgbufcount;
+2-2
drivers/target/target_core_transport.c
···17781778void transport_generic_request_failure(struct se_cmd *cmd,17791779 sense_reason_t sense_reason)17801780{17811781- int ret = 0;17811781+ int ret = 0, post_ret;1782178217831783 pr_debug("-----[ Storage Engine Exception; sense_reason %d\n",17841784 sense_reason);···17901790 transport_complete_task_attr(cmd);1791179117921792 if (cmd->transport_complete_callback)17931793- cmd->transport_complete_callback(cmd, false, NULL);17931793+ cmd->transport_complete_callback(cmd, false, &post_ret);1794179417951795 if (transport_check_aborted_status(cmd, 1))17961796 return;
+38-2
drivers/thunderbolt/switch.c
···863863}864864static DEVICE_ATTR(key, 0600, key_show, key_store);865865866866+static void nvm_authenticate_start(struct tb_switch *sw)867867+{868868+ struct pci_dev *root_port;869869+870870+ /*871871+ * During host router NVM upgrade we should not allow root port to872872+ * go into D3cold because some root ports cannot trigger PME873873+ * itself. To be on the safe side keep the root port in D0 during874874+ * the whole upgrade process.875875+ */876876+ root_port = pci_find_pcie_root_port(sw->tb->nhi->pdev);877877+ if (root_port)878878+ pm_runtime_get_noresume(&root_port->dev);879879+}880880+881881+static void nvm_authenticate_complete(struct tb_switch *sw)882882+{883883+ struct pci_dev *root_port;884884+885885+ root_port = pci_find_pcie_root_port(sw->tb->nhi->pdev);886886+ if (root_port)887887+ pm_runtime_put(&root_port->dev);888888+}889889+866890static ssize_t nvm_authenticate_show(struct device *dev,867891 struct device_attribute *attr, char *buf)868892{···936912937913 sw->nvm->authenticating = true;938914939939- if (!tb_route(sw))915915+ if (!tb_route(sw)) {916916+ /*917917+ * Keep root port from suspending as long as the918918+ * NVM upgrade process is running.919919+ */920920+ nvm_authenticate_start(sw);940921 ret = nvm_authenticate_host(sw);941941- else922922+ if (ret)923923+ nvm_authenticate_complete(sw);924924+ } else {942925 ret = nvm_authenticate_device(sw);926926+ }943927 pm_runtime_mark_last_busy(&sw->dev);944928 pm_runtime_put_autosuspend(&sw->dev);945929 }···13651333 ret = dma_port_flash_update_auth_status(sw->dma_port, &status);13661334 if (ret <= 0)13671335 return ret;13361336+13371337+ /* Now we can allow root port to suspend again */13381338+ if (!tb_route(sw))13391339+ nvm_authenticate_complete(sw);1368134013691341 if (status) {13701342 tb_sw_info(sw, "switch flash authentication failed\n");
+4-4
drivers/tty/serial/sh-sci.c
···16141614 hrtimer_init(&s->rx_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);16151615 s->rx_timer.function = rx_timer_fn;1616161616171617+ s->chan_rx_saved = s->chan_rx = chan;16181618+16171619 if (port->type == PORT_SCIFA || port->type == PORT_SCIFB)16181620 sci_submit_rx(s);16191619-16201620- s->chan_rx_saved = s->chan_rx = chan;16211621 }16221622}16231623···31023102static int sci_remove(struct platform_device *dev)31033103{31043104 struct sci_port *port = platform_get_drvdata(dev);31053105+ unsigned int type = port->port.type; /* uart_remove_... clears it */3105310631063107 sci_ports_in_use &= ~BIT(port->port.line);31073108 uart_remove_one_port(&sci_uart_driver, &port->port);···31133112 sysfs_remove_file(&dev->dev.kobj,31143113 &dev_attr_rx_fifo_trigger.attr);31153114 }31163116- if (port->port.type == PORT_SCIFA || port->port.type == PORT_SCIFB ||31173117- port->port.type == PORT_HSCIF) {31153115+ if (type == PORT_SCIFA || type == PORT_SCIFB || type == PORT_HSCIF) {31183116 sysfs_remove_file(&dev->dev.kobj,31193117 &dev_attr_rx_fifo_timeout.attr);31203118 }
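The sh-sci remove path has to snapshot port.type before uart_remove_one_port(), because the core clears the field during removal and the later sysfs-cleanup decisions would all compare against zero. The idiom, minimally (names illustrative):

#include <stdio.h>

struct port {
	int type;
};

static void core_remove(struct port *p)
{
	p->type = 0;	/* the core resets the field on removal */
}

int main(void)
{
	struct port p = { .type = 42 };
	int type = p.type;	/* snapshot before teardown clears it */

	core_remove(&p);

	if (type == 42)		/* decision still sees the real type */
		puts("removing type-42 sysfs attributes");
	return 0;
}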
···15211521 usb_wakeup_notification(udev->parent, udev->portnum);15221522}1523152315241524+/*15251525+ * Quirk handler for errata seen on Cavium ThunderX2 processor XHCI15261526+ * Controller.15271527+ * As per ThunderX2 errata-129, a USB 2 device may come up as USB 115281528+ * if a connection to a USB 1 device is followed by another connection15291529+ * to a USB 2 device.15301530+ *15311531+ * Reset the PHY after the USB device is disconnected if device speed15321532+ * is less than HCD_USB3.15331533+ * Retry the reset sequence a max of 4 times, checking the PLL lock status.15341534+ *15351535+ */15361536+static void xhci_cavium_reset_phy_quirk(struct xhci_hcd *xhci)15371537+{15381538+ struct usb_hcd *hcd = xhci_to_hcd(xhci);15391539+ u32 pll_lock_check;15401540+ u32 retry_count = 4;15411541+15421542+ do {15431543+ /* Assert PHY reset */15441544+ writel(0x6F, hcd->regs + 0x1048);15451545+ udelay(10);15461546+ /* De-assert the PHY reset */15471547+ writel(0x7F, hcd->regs + 0x1048);15481548+ udelay(200);15491549+ pll_lock_check = readl(hcd->regs + 0x1070);15501550+ } while (!(pll_lock_check & 0x1) && --retry_count);15511551+}15521552+15241553static void handle_port_status(struct xhci_hcd *xhci,15251554 union xhci_trb *event)15261555{···15811552 port = &xhci->hw_ports[port_id - 1];15821553 if (!port || !port->rhub || port->hcd_portnum == DUPLICATE_ENTRY) {15831554 xhci_warn(xhci, "Event for invalid port %u\n", port_id);15551555+ bogus_port_status = true;15561556+ goto cleanup;15571557+ }15581558+15591559+ /* We might get interrupts after shared_hcd is removed */15601560+ if (port->rhub == &xhci->usb3_rhub && xhci->shared_hcd == NULL) {15611561+ xhci_dbg(xhci, "ignore port event for removed USB3 hcd\n");15841562 bogus_port_status = true;15851563 goto cleanup;15861564 }···16751639 * RExit to a disconnect state). If so, let the driver know it's16761640 * out of the RExit state.16771641 */16781678- if (!DEV_SUPERSPEED_ANY(portsc) &&16421642+ if (!DEV_SUPERSPEED_ANY(portsc) && hcd->speed < HCD_USB3 &&16791643 test_and_clear_bit(hcd_portnum,16801644 &bus_state->rexit_ports)) {16811645 complete(&bus_state->rexit_done[hcd_portnum]);···16831647 goto cleanup;16841648 }1685164916861686- if (hcd->speed < HCD_USB3)16501650+ if (hcd->speed < HCD_USB3) {16871651 xhci_test_and_clear_bit(xhci, port, PORT_PLC);16521652+ if ((xhci->quirks & XHCI_RESET_PLL_ON_DISCONNECT) &&16531653+ (portsc & PORT_CSC) && !(portsc & PORT_CONNECT))16541654+ xhci_cavium_reset_phy_quirk(xhci);16551655+ }1688165616891657cleanup:16901658 /* Update event ring dequeue pointer before dropping the lock */···23062266 goto cleanup;23072267 case COMP_RING_UNDERRUN:23082268 case COMP_RING_OVERRUN:22692269+ case COMP_STOPPED_LENGTH_INVALID:23092270 goto cleanup;23102271 default:23112272 xhci_err(xhci, "ERROR Transfer event for unknown stream ring slot %u ep %u\n",
···719719720720 /* Only halt host and free memory after both hcds are removed */721721 if (!usb_hcd_is_primary_hcd(hcd)) {722722- /* usb core will free this hcd shortly, unset pointer */723723- xhci->shared_hcd = NULL;724722 mutex_unlock(&xhci->mutex);725723 return;726724 }
+2-1
drivers/usb/host/xhci.h
···16801680 * It can take up to 20 ms to transition from RExit to U0 on the16811681 * Intel Lynx Point LP xHCI host.16821682 */16831683-#define XHCI_MAX_REXIT_TIMEOUT (20 * 1000)16831683+#define XHCI_MAX_REXIT_TIMEOUT_MS 201684168416851685static inline unsigned int hcd_index(struct usb_hcd *hcd)16861686{···18491849#define XHCI_INTEL_USB_ROLE_SW BIT_ULL(31)18501850#define XHCI_ZERO_64B_REGS BIT_ULL(32)18511851#define XHCI_DEFAULT_PM_RUNTIME_ALLOW BIT_ULL(33)18521852+#define XHCI_RESET_PLL_ON_DISCONNECT BIT_ULL(34)1852185318531854 unsigned int num_active_eps;18541855 unsigned int limit_active_eps;
···23232424if TYPEC_UCSI25252626+config UCSI_CCG2727+ tristate "UCSI Interface Driver for Cypress CCGx"2828+ depends on I2C2929+ help3030+ This driver enables UCSI support on platforms that expose a3131+ Cypress CCGx Type-C controller over I2C interface.3232+3333+ To compile the driver as a module, choose M here: the module will be3434+ called ucsi_ccg.3535+2636config UCSI_ACPI2737 tristate "UCSI ACPI Interface Driver"2838 depends on ACPI
···251251 kfree(resource);252252}253253254254-/*255255- * Host memory not allocated to dom0. We can use this range for hotplug-based256256- * ballooning.257257- *258258- * It's a type-less resource. Setting IORESOURCE_MEM will make resource259259- * management algorithms (arch_remove_reservations()) look into guest e820,260260- * which we don't want.261261- */262262-static struct resource hostmem_resource = {263263- .name = "Host RAM",264264-};265265-266266-void __attribute__((weak)) __init arch_xen_balloon_init(struct resource *res)267267-{}268268-269254static struct resource *additional_memory_resource(phys_addr_t size)270255{271271- struct resource *res, *res_hostmem;272272- int ret = -ENOMEM;256256+ struct resource *res;257257+ int ret;273258274259 res = kzalloc(sizeof(*res), GFP_KERNEL);275260 if (!res)···263278 res->name = "System RAM";264279 res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;265280266266- res_hostmem = kzalloc(sizeof(*res), GFP_KERNEL);267267- if (res_hostmem) {268268- /* Try to grab a range from hostmem */269269- res_hostmem->name = "Host memory";270270- ret = allocate_resource(&hostmem_resource, res_hostmem,271271- size, 0, -1,272272- PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);273273- }274274-275275- if (!ret) {276276- /*277277- * Insert this resource into iomem. Because hostmem_resource278278- * tracks portion of guest e820 marked as UNUSABLE noone else279279- * should try to use it.280280- */281281- res->start = res_hostmem->start;282282- res->end = res_hostmem->end;283283- ret = insert_resource(&iomem_resource, res);284284- if (ret < 0) {285285- pr_err("Can't insert iomem_resource [%llx - %llx]\n",286286- res->start, res->end);287287- release_memory_resource(res_hostmem);288288- res_hostmem = NULL;289289- res->start = res->end = 0;290290- }291291- }292292-293293- if (ret) {294294- ret = allocate_resource(&iomem_resource, res,295295- size, 0, -1,296296- PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);297297- if (ret < 0) {298298- pr_err("Cannot allocate new System RAM resource\n");299299- kfree(res);300300- return NULL;301301- }281281+ ret = allocate_resource(&iomem_resource, res,282282+ size, 0, -1,283283+ PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);284284+ if (ret < 0) {285285+ pr_err("Cannot allocate new System RAM resource\n");286286+ kfree(res);287287+ return NULL;302288 }303289304290#ifdef CONFIG_SPARSEMEM···281325 pr_err("New System RAM resource outside addressable RAM (%lu > %lu)\n",282326 pfn, limit);283327 release_memory_resource(res);284284- release_memory_resource(res_hostmem);285328 return NULL;286329 }287330 }···705750 set_online_page_callback(&xen_online_page);706751 register_memory_notifier(&xen_memory_nb);707752 register_sysctl_table(xen_root);708708-709709- arch_xen_balloon_init(&hostmem_resource);710753#endif711754712755#ifdef CONFIG_XEN_PV
+1-1
drivers/xen/grant-table.c
···914914915915 ret = xenmem_reservation_increase(args->nr_pages, args->frames);916916 if (ret != args->nr_pages) {917917- pr_debug("Failed to decrease reservation for DMA buffer\n");917917+ pr_debug("Failed to increase reservation for DMA buffer\n");918918 ret = -EFAULT;919919 } else {920920 ret = 0;
+4-18
drivers/xen/privcmd-buf.c
···21212222MODULE_LICENSE("GPL");23232424-static unsigned int limit = 64;2525-module_param(limit, uint, 0644);2626-MODULE_PARM_DESC(limit, "Maximum number of pages that may be allocated by "2727- "the privcmd-buf device per open file");2828-2924struct privcmd_buf_private {3025 struct mutex lock;3126 struct list_head list;3232- unsigned int allocated;3327};34283529struct privcmd_buf_vma_private {···5460{5561 unsigned int i;56625757- vma_priv->file_priv->allocated -= vma_priv->n_pages;5858-5963 list_del(&vma_priv->list);60646165 for (i = 0; i < vma_priv->n_pages; i++)6262- if (vma_priv->pages[i])6363- __free_page(vma_priv->pages[i]);6666+ __free_page(vma_priv->pages[i]);64676568 kfree(vma_priv);6669}···137146 unsigned int i;138147 int ret = 0;139148140140- if (!(vma->vm_flags & VM_SHARED) || count > limit ||141141- file_priv->allocated + count > limit)149149+ if (!(vma->vm_flags & VM_SHARED))142150 return -EINVAL;143151144152 vma_priv = kzalloc(sizeof(*vma_priv) + count * sizeof(void *),···145155 if (!vma_priv)146156 return -ENOMEM;147157148148- vma_priv->n_pages = count;149149- count = 0;150150- for (i = 0; i < vma_priv->n_pages; i++) {158158+ for (i = 0; i < count; i++) {151159 vma_priv->pages[i] = alloc_page(GFP_KERNEL | __GFP_ZERO);152160 if (!vma_priv->pages[i])153161 break;154154- count++;162162+ vma_priv->n_pages++;155163 }156164157165 mutex_lock(&file_priv->lock);158158-159159- file_priv->allocated += count;160166161167 vma_priv->file_priv = file_priv;162168 vma_priv->users = 1;
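With the per-file limit gone, the allocation loop above counts successful pages in n_pages as it goes, so the teardown path can free exactly that many without per-slot NULL checks. A userspace sketch of the same bookkeeping (names hypothetical):

#include <stdlib.h>

struct page_vec {
    unsigned int n_pages;
    void *pages[];
};

/* Allocate up to count pages; n_pages records only what succeeded, so
 * the free path below never has to test individual slots for NULL. */
static struct page_vec *page_vec_alloc(unsigned int count)
{
    struct page_vec *v = calloc(1, sizeof(*v) + count * sizeof(void *));
    unsigned int i;

    if (!v)
        return NULL;
    for (i = 0; i < count; i++) {
        v->pages[i] = calloc(1, 4096);
        if (!v->pages[i])
            break;
        v->n_pages++;
    }
    return v;
}

static void page_vec_free(struct page_vec *v)
{
    unsigned int i;

    for (i = 0; i < v->n_pages; i++)
        free(v->pages[i]);   /* no NULL checks needed */
    free(v);
}

int main(void)
{
    struct page_vec *v = page_vec_alloc(8);
    if (v)
        page_vec_free(v);
    return 0;
}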
···696696};697697698698/*699699+ * Error prioritisation and accumulation.700700+ */701701+struct afs_error {702702+ short error; /* Accumulated error */703703+ bool responded; /* T if server responded */704704+};705705+706706+/*699707 * Cursor for iterating over a server's address list.700708 */701709struct afs_addr_cursor {···10231015 * misc.c10241016 */10251017extern int afs_abort_to_error(u32);10181018+extern void afs_prioritise_error(struct afs_error *, int, u32);1026101910271020/*10281021 * mntpt.c
+52
fs/afs/misc.c
···118118 default: return -EREMOTEIO;119119 }120120}121121+122122+/*123123+ * Select the error to report from a set of errors.124124+ */125125+void afs_prioritise_error(struct afs_error *e, int error, u32 abort_code)126126+{127127+ switch (error) {128128+ case 0:129129+ return;130130+ default:131131+ if (e->error == -ETIMEDOUT ||132132+ e->error == -ETIME)133133+ return;134134+ case -ETIMEDOUT:135135+ case -ETIME:136136+ if (e->error == -ENOMEM ||137137+ e->error == -ENONET)138138+ return;139139+ case -ENOMEM:140140+ case -ENONET:141141+ if (e->error == -ERFKILL)142142+ return;143143+ case -ERFKILL:144144+ if (e->error == -EADDRNOTAVAIL)145145+ return;146146+ case -EADDRNOTAVAIL:147147+ if (e->error == -ENETUNREACH)148148+ return;149149+ case -ENETUNREACH:150150+ if (e->error == -EHOSTUNREACH)151151+ return;152152+ case -EHOSTUNREACH:153153+ if (e->error == -EHOSTDOWN)154154+ return;155155+ case -EHOSTDOWN:156156+ if (e->error == -ECONNREFUSED)157157+ return;158158+ case -ECONNREFUSED:159159+ if (e->error == -ECONNRESET)160160+ return;161161+ case -ECONNRESET: /* Responded, but call expired. */162162+ if (e->responded)163163+ return;164164+ e->error = error;165165+ return;166166+167167+ case -ECONNABORTED:168168+ e->responded = true;169169+ e->error = afs_abort_to_error(abort_code);170170+ return;171171+ }172172+}
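The fall-through ladder above effectively ranks errors by how useful they are to report: a genuine server abort beats a connection reset, which beats refusals and reachability errors, which beat resource errors and timeouts, with anything else ranked lowest. A standalone sketch under that reading (Linux errno values, positive for brevity; err_rank() is a hypothetical reformulation, and the real code additionally translates the abort code via afs_abort_to_error()):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct afs_error { int error; bool responded; };

/* Keep-priority of an errno: the stored error survives only if it ranks
 * at least as high as the incoming one, which is what the fall-through
 * ladder above computes one comparison at a time. */
static int err_rank(int error)
{
    switch (error) {
    case ETIMEDOUT: case ETIME:  return 1;
    case ENOMEM: case ENONET:    return 2;
    case ERFKILL:                return 3;
    case EADDRNOTAVAIL:          return 4;
    case ENETUNREACH:            return 5;
    case EHOSTUNREACH:           return 6;
    case EHOSTDOWN:              return 7;
    case ECONNREFUSED:           return 8;
    case ECONNRESET:             return 9;   /* responded, call expired */
    default:                     return 0;   /* e.g. EDESTADDRREQ */
    }
}

static void prioritise_error(struct afs_error *e, int error)
{
    if (error == 0)
        return;
    if (error == ECONNABORTED) {     /* a real abort from the server: */
        e->responded = true;         /* always the most useful report */
        e->error = ECONNABORTED;     /* kernel maps the abort code here */
        return;
    }
    if (!e->responded && err_rank(error) >= err_rank(e->error))
        e->error = error;
}

int main(void)
{
    struct afs_error e = { .error = EDESTADDRREQ, .responded = false };

    prioritise_error(&e, ETIMEDOUT);     /* beats the placeholder */
    prioritise_error(&e, ECONNREFUSED);  /* more specific, wins */
    prioritise_error(&e, ENOMEM);        /* weaker, ignored */
    printf("report errno %d\n", e.error);  /* ECONNREFUSED */
    return 0;
}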
+13-40
fs/afs/rotate.c
···136136 struct afs_addr_list *alist;137137 struct afs_server *server;138138 struct afs_vnode *vnode = fc->vnode;139139- u32 rtt, abort_code;139139+ struct afs_error e;140140+ u32 rtt;140141 int error = fc->ac.error, i;141142142143 _enter("%lx[%d],%lx[%d],%d,%d",···307306 if (fc->error != -EDESTADDRREQ)308307 goto iterate_address;309308 /* Fall through */309309+ case -ERFKILL:310310+ case -EADDRNOTAVAIL:310311 case -ENETUNREACH:311312 case -EHOSTUNREACH:313313+ case -EHOSTDOWN:312314 case -ECONNREFUSED:313315 _debug("no conn");314316 fc->error = error;···450446 if (fc->flags & AFS_FS_CURSOR_VBUSY)451447 goto restart_from_beginning;452448453453- abort_code = 0;454454- error = -EDESTADDRREQ;449449+ e.error = -EDESTADDRREQ;450450+ e.responded = false;455451 for (i = 0; i < fc->server_list->nr_servers; i++) {456452 struct afs_server *s = fc->server_list->servers[i].server;457457- int probe_error = READ_ONCE(s->probe.error);458453459459- switch (probe_error) {460460- case 0:461461- continue;462462- default:463463- if (error == -ETIMEDOUT ||464464- error == -ETIME)465465- continue;466466- case -ETIMEDOUT:467467- case -ETIME:468468- if (error == -ENOMEM ||469469- error == -ENONET)470470- continue;471471- case -ENOMEM:472472- case -ENONET:473473- if (error == -ENETUNREACH)474474- continue;475475- case -ENETUNREACH:476476- if (error == -EHOSTUNREACH)477477- continue;478478- case -EHOSTUNREACH:479479- if (error == -ECONNREFUSED)480480- continue;481481- case -ECONNREFUSED:482482- if (error == -ECONNRESET)483483- continue;484484- case -ECONNRESET: /* Responded, but call expired. */485485- if (error == -ECONNABORTED)486486- continue;487487- case -ECONNABORTED:488488- abort_code = s->probe.abort_code;489489- error = probe_error;490490- continue;491491- }454454+ afs_prioritise_error(&e, READ_ONCE(s->probe.error),455455+ s->probe.abort_code);492456 }493493-494494- if (error == -ECONNABORTED)495495- error = afs_abort_to_error(abort_code);496457497458failed_set_error:498459 fc->error = error;···522553 _leave(" = f [abort]");523554 return false;524555556556+ case -ERFKILL:557557+ case -EADDRNOTAVAIL:525558 case -ENETUNREACH:526559 case -EHOSTUNREACH:560560+ case -EHOSTDOWN:527561 case -ECONNREFUSED:528562 case -ETIMEDOUT:529563 case -ETIME:···605633 struct afs_net *net = afs_v2net(fc->vnode);606634607635 if (fc->error == -EDESTADDRREQ ||636636+ fc->error == -EADDRNOTAVAIL ||608637 fc->error == -ENETUNREACH ||609638 fc->error == -EHOSTUNREACH)610639 afs_dump_edestaddrreq(fc);
+10-1
fs/afs/rxrpc.c
···576576{577577 signed long rtt2, timeout;578578 long ret;579579+ bool stalled = false;579580 u64 rtt;580581 u32 life, last_life;581582···610609611610 life = rxrpc_kernel_check_life(call->net->socket, call->rxcall);612611 if (timeout == 0 &&613613- life == last_life && signal_pending(current))612612+ life == last_life && signal_pending(current)) {613613+ if (stalled)614614 break;615615+ __set_current_state(TASK_RUNNING);616616+ rxrpc_kernel_probe_life(call->net->socket, call->rxcall);617617+ timeout = rtt2;618618+ stalled = true;619619+ continue;620620+ }615621616622 if (life != last_life) {617623 timeout = rtt2;618624 last_life = life;625625+ stalled = false;619626 }620627621628 timeout = schedule_timeout(timeout);
+27-18
fs/afs/vl_probe.c
···6161 afs_io_error(call, afs_io_error_vl_probe_fail);6262 goto out;6363 case -ECONNRESET: /* Responded, but call expired. */6464+ case -ERFKILL:6565+ case -EADDRNOTAVAIL:6466 case -ENETUNREACH:6567 case -EHOSTUNREACH:6868+ case -EHOSTDOWN:6669 case -ECONNREFUSED:6770 case -ETIMEDOUT:6871 case -ETIME:···132129 * Probe all of a vlserver's addresses to find out the best route and to133130 * query its capabilities.134131 */135135-static int afs_do_probe_vlserver(struct afs_net *net,136136- struct afs_vlserver *server,137137- struct key *key,138138- unsigned int server_index)132132+static bool afs_do_probe_vlserver(struct afs_net *net,133133+ struct afs_vlserver *server,134134+ struct key *key,135135+ unsigned int server_index,136136+ struct afs_error *_e)139137{140138 struct afs_addr_cursor ac = {141139 .index = 0,142140 };143143- int ret;141141+ bool in_progress = false;142142+ int err;144143145144 _enter("%s", server->name);146145···156151 server->probe.rtt = UINT_MAX;157152158153 for (ac.index = 0; ac.index < ac.alist->nr_addrs; ac.index++) {159159- ret = afs_vl_get_capabilities(net, &ac, key, server,154154+ err = afs_vl_get_capabilities(net, &ac, key, server,160155 server_index, true);161161- if (ret != -EINPROGRESS) {162162- afs_vl_probe_done(server);163163- return ret;164164- }156156+ if (err == -EINPROGRESS)157157+ in_progress = true;158158+ else159159+ afs_prioritise_error(_e, err, ac.abort_code);165160 }166161167167- return 0;162162+ if (!in_progress)163163+ afs_vl_probe_done(server);164164+ return in_progress;168165}169166170167/*···176169 struct afs_vlserver_list *vllist)177170{178171 struct afs_vlserver *server;179179- int i, ret;172172+ struct afs_error e;173173+ bool in_progress = false;174174+ int i;180175176176+ e.error = 0;177177+ e.responded = false;181178 for (i = 0; i < vllist->nr_servers; i++) {182179 server = vllist->servers[i].server;183180 if (test_bit(AFS_VLSERVER_FL_PROBED, &server->flags))184181 continue;185182186186- if (!test_and_set_bit_lock(AFS_VLSERVER_FL_PROBING, &server->flags)) {187187- ret = afs_do_probe_vlserver(net, server, key, i);188188- if (ret)189189- return ret;190190- }183183+ if (!test_and_set_bit_lock(AFS_VLSERVER_FL_PROBING, &server->flags) &&184184+ afs_do_probe_vlserver(net, server, key, i, &e))185185+ in_progress = true;191186 }192187193193- return 0;188188+ return in_progress ? 0 : e.error;194189}195190196191/*
+10-40
fs/afs/vl_rotate.c
···7171{7272 struct afs_addr_list *alist;7373 struct afs_vlserver *vlserver;7474+ struct afs_error e;7475 u32 rtt;7575- int error = vc->ac.error, abort_code, i;7676+ int error = vc->ac.error, i;76777778 _enter("%lx[%d],%lx[%d],%d,%d",7879 vc->untried, vc->index,···120119 goto failed;121120 }122121122122+ case -ERFKILL:123123+ case -EADDRNOTAVAIL:123124 case -ENETUNREACH:124125 case -EHOSTUNREACH:126126+ case -EHOSTDOWN:125127 case -ECONNREFUSED:126128 case -ETIMEDOUT:127129 case -ETIME:···239235 if (vc->flags & AFS_VL_CURSOR_RETRY)240236 goto restart_from_beginning;241237242242- abort_code = 0;243243- error = -EDESTADDRREQ;238238+ e.error = -EDESTADDRREQ;239239+ e.responded = false;244240 for (i = 0; i < vc->server_list->nr_servers; i++) {245241 struct afs_vlserver *s = vc->server_list->servers[i].server;246246- int probe_error = READ_ONCE(s->probe.error);247242248248- switch (probe_error) {249249- case 0:250250- continue;251251- default:252252- if (error == -ETIMEDOUT ||253253- error == -ETIME)254254- continue;255255- case -ETIMEDOUT:256256- case -ETIME:257257- if (error == -ENOMEM ||258258- error == -ENONET)259259- continue;260260- case -ENOMEM:261261- case -ENONET:262262- if (error == -ENETUNREACH)263263- continue;264264- case -ENETUNREACH:265265- if (error == -EHOSTUNREACH)266266- continue;267267- case -EHOSTUNREACH:268268- if (error == -ECONNREFUSED)269269- continue;270270- case -ECONNREFUSED:271271- if (error == -ECONNRESET)272272- continue;273273- case -ECONNRESET: /* Responded, but call expired. */274274- if (error == -ECONNABORTED)275275- continue;276276- case -ECONNABORTED:277277- abort_code = s->probe.abort_code;278278- error = probe_error;279279- continue;280280- }243243+ afs_prioritise_error(&e, READ_ONCE(s->probe.error),244244+ s->probe.abort_code);281245 }282282-283283- if (error == -ECONNABORTED)284284- error = afs_abort_to_error(abort_code);285246286247failed_set_error:287248 vc->error = error;···310341 struct afs_net *net = vc->cell->net;311342312343 if (vc->error == -EDESTADDRREQ ||344344+ vc->error == -EADDRNOTAVAIL ||313345 vc->error == -ENETUNREACH ||314346 vc->error == -EHOSTUNREACH)315347 afs_vl_dump_edestaddrreq(vc);
···477477 int mirror_num = 0;478478 int failed_mirror = 0;479479480480- clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);481480 io_tree = &BTRFS_I(fs_info->btree_inode)->io_tree;482481 while (1) {482482+ clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);483483 ret = read_extent_buffer_pages(io_tree, eb, WAIT_COMPLETE,484484 mirror_num);485485 if (!ret) {···492492 else493493 break;494494 }495495-496496- /*497497- * This buffer's crc is fine, but its contents are corrupted, so498498- * there is no reason to read the other copies, they won't be499499- * any less wrong.500500- */501501- if (test_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags) ||502502- ret == -EUCLEAN)503503- break;504495505496 num_copies = btrfs_num_copies(fs_info,506497 eb->start, eb->len);···16551664 struct btrfs_root *root = arg;16561665 struct btrfs_fs_info *fs_info = root->fs_info;16571666 int again;16581658- struct btrfs_trans_handle *trans;1659166716601660- do {16681668+ while (1) {16611669 again = 0;1662167016631671 /* Make the cleaner go to sleep early. */···17051715 */17061716 btrfs_delete_unused_bgs(fs_info);17071717sleep:17181718+ if (kthread_should_park())17191719+ kthread_parkme();17201720+ if (kthread_should_stop())17211721+ return 0;17081722 if (!again) {17091723 set_current_state(TASK_INTERRUPTIBLE);17101710- if (!kthread_should_stop())17111711- schedule();17241724+ schedule();17121725 __set_current_state(TASK_RUNNING);17131726 }17141714- } while (!kthread_should_stop());17151715-17161716- /*17171717- * Transaction kthread is stopped before us and wakes us up.17181718- * However we might have started a new transaction and COWed some17191719- * tree blocks when deleting unused block groups for example. So17201720- * make sure we commit the transaction we started to have a clean17211721- * shutdown when evicting the btree inode - if it has dirty pages17221722- * when we do the final iput() on it, eviction will trigger a17231723- * writeback for it which will fail with null pointer dereferences17241724- * since work queues and other resources were already released and17251725- * destroyed by the time the iput/eviction/writeback is made.17261726- */17271727- trans = btrfs_attach_transaction(root);17281728- if (IS_ERR(trans)) {17291729- if (PTR_ERR(trans) != -ENOENT)17301730- btrfs_err(fs_info,17311731- "cleaner transaction attach returned %ld",17321732- PTR_ERR(trans));17331733- } else {17341734- int ret;17351735-17361736- ret = btrfs_commit_transaction(trans);17371737- if (ret)17381738- btrfs_err(fs_info,17391739- "cleaner open transaction commit returned %d",17401740- ret);17411727 }17421742-17431743- return 0;17441728}1745172917461730static int transaction_kthread(void *arg)···38953931 int ret;3896393238973933 set_bit(BTRFS_FS_CLOSING_START, &fs_info->flags);39343934+ /*39353935+ * We don't want the cleaner to start new transactions, add more delayed39363936+ * iputs, etc. while we're closing. We can't use kthread_stop() yet39373937+ * because that frees the task_struct, and the transaction kthread might39383938+ * still try to wake up the cleaner.39393939+ */39403940+ kthread_park(fs_info->cleaner_kthread);3898394138993942 /* wait for the qgroup rescan worker to stop */39003943 btrfs_qgroup_wait_for_completion(fs_info, false);···3929395839303959 if (!sb_rdonly(fs_info->sb)) {39313960 /*39323932- * If the cleaner thread is stopped and there are39333933- * block groups queued for removal, the deletion will be39343934- * skipped when we quit the cleaner thread.39613961+ * The cleaner kthread is stopped, so do one final pass over39623962+ * unused block groups.39353963 */39363964 btrfs_delete_unused_bgs(fs_info);39373965···43294359 unpin = pinned_extents;43304360again:43314361 while (1) {43624362+ /*43634363+ * btrfs_finish_extent_commit() may get the same range as43644364+ * ours between find_first_extent_bit and clear_extent_dirty.43654365+ * Hence, hold the unused_bg_unpin_mutex to avoid double-unpinning43664366+ * the same extent range.43674367+ */43684368+ mutex_lock(&fs_info->unused_bg_unpin_mutex);43324369 ret = find_first_extent_bit(unpin, 0, &start, &end,43334370 EXTENT_DIRTY, NULL);43344334- if (ret)43714371+ if (ret) {43724372+ mutex_unlock(&fs_info->unused_bg_unpin_mutex);43354373 break;43744374+ }4336437543374376 clear_extent_dirty(unpin, start, end);43384377 btrfs_error_unpin_extent_range(fs_info, start, end);43784378+ mutex_unlock(&fs_info->unused_bg_unpin_mutex);43394379 cond_resched();43404380 }43414381
+24
fs/btrfs/file.c
···20892089 atomic_inc(&root->log_batch);2090209020912091 /*20922092+ * Before we acquired the inode's lock, someone may have dirtied more20932093+ * pages in the target range. We need to make sure that writeback for20942094+ * any such pages does not start while we are logging the inode, because20952095+ * if it does, any of the following might happen when we are not doing a20962096+ * full inode sync:20972097+ *20982098+ * 1) We log an extent after its writeback finishes but before its20992099+ * checksums are added to the csum tree, leading to -EIO errors21002100+ * when attempting to read the extent after a log replay.21012101+ *21022102+ * 2) We can end up logging an extent before its writeback finishes.21032103+ * Therefore after the log replay we will have a file extent item21042104+ * pointing to an unwritten extent (and no data checksums as well).21052105+ *21062106+ * So trigger writeback for any eventual new dirty pages and then wait21072107+ * for all ordered extents to complete below.21082108+ */21092109+ ret = start_ordered_ops(inode, start, end);21102110+ if (ret) {21112111+ inode_unlock(inode);21122112+ goto out;21132113+ }21142114+21152115+ /*20922116 * We have to do this here to avoid the priority inversion of waiting on20932117 * IO of a lower priority task while holding a transaction open.20942118 */
+21-1
fs/btrfs/free-space-cache.c
···7575 * sure NOFS is set to keep us from deadlocking.7676 */7777 nofs_flag = memalloc_nofs_save();7878- inode = btrfs_iget(fs_info->sb, &location, root, NULL);7878+ inode = btrfs_iget_path(fs_info->sb, &location, root, NULL, path);7979+ btrfs_release_path(path);7980 memalloc_nofs_restore(nofs_flag);8081 if (IS_ERR(inode))8182 return inode;···839838 path->search_commit_root = 1;840839 path->skip_locking = 1;841840841841+ /*842842+ * We must pass a path with search_commit_root set to btrfs_iget in843843+ * order to avoid a deadlock when allocating extents for the tree root.844844+ *845845+ * When we are COWing an extent buffer from the tree root, when looking846846+ * for a free extent, at extent-tree.c:find_free_extent(), we can find847847+ * a block group without its free space cache loaded. When we find one848848+ * we must load its space cache, which requires reading its free space849849+ * cache's inode item from the root tree. If this inode item is located850850+ * in the same leaf that we started COWing before, then we end up in851851+ * a deadlock on the extent buffer (trying to read lock it when we852852+ * previously write locked it).853853+ *854854+ * It's safe to read the inode item using the commit root because855855+ * block groups, once loaded, stay in memory forever (until they are856856+ * removed) as well as their space caches once loaded. New block groups857857+ * once created get their ->cached field set to BTRFS_CACHE_FINISHED so858858+ * we will never try to read their inode item while the fs is mounted.859859+ */842860 inode = lookup_free_space_inode(fs_info, block_group, path);843861 if (IS_ERR(inode)) {844862 btrfs_free_path(path);
+24-13
fs/btrfs/inode.c
···15311531 }15321532 btrfs_release_path(path);1533153315341534- if (cur_offset <= end && cow_start == (u64)-1) {15341534+ if (cur_offset <= end && cow_start == (u64)-1)15351535 cow_start = cur_offset;15361536- cur_offset = end;15371537- }1538153615391537 if (cow_start != (u64)-1) {15381538+ cur_offset = end;15401539 ret = cow_file_range(inode, locked_page, cow_start, end, end,15411540 page_started, nr_written, 1, NULL);15421541 if (ret)···35693570/*35703571 * read an inode from the btree into the in-memory inode35713572 */35723572-static int btrfs_read_locked_inode(struct inode *inode)35733573+static int btrfs_read_locked_inode(struct inode *inode,35743574+ struct btrfs_path *in_path)35733575{35743576 struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);35753575- struct btrfs_path *path;35773577+ struct btrfs_path *path = in_path;35763578 struct extent_buffer *leaf;35773579 struct btrfs_inode_item *inode_item;35783580 struct btrfs_root *root = BTRFS_I(inode)->root;···35893589 if (!ret)35903590 filled = true;3591359135923592- path = btrfs_alloc_path();35933593- if (!path)35943594- return -ENOMEM;35923592+ if (!path) {35933593+ path = btrfs_alloc_path();35943594+ if (!path)35953595+ return -ENOMEM;35963596+ }3595359735963598 memcpy(&location, &BTRFS_I(inode)->location, sizeof(location));3597359935983600 ret = btrfs_lookup_inode(NULL, root, path, &location, 0);35993601 if (ret) {36003600- btrfs_free_path(path);36023602+ if (path != in_path)36033603+ btrfs_free_path(path);36013604 return ret;36023605 }36033606···37253722 btrfs_ino(BTRFS_I(inode)),37263723 root->root_key.objectid, ret);37273724 }37283728- btrfs_free_path(path);37253725+ if (path != in_path)37263726+ btrfs_free_path(path);3729372737303728 if (!maybe_acls)37313729 cache_no_acl(inode);···56485644/* Get an inode object given its location and corresponding root.56495645 * Returns in *is_new if the inode was read from disk56505646 */56515651-struct inode *btrfs_iget(struct super_block *s, struct btrfs_key *location,56525652- struct btrfs_root *root, int *new)56475647+struct inode *btrfs_iget_path(struct super_block *s, struct btrfs_key *location,56485648+ struct btrfs_root *root, int *new,56495649+ struct btrfs_path *path)56535650{56545651 struct inode *inode;56555652···56615656 if (inode->i_state & I_NEW) {56625657 int ret;5663565856645664- ret = btrfs_read_locked_inode(inode);56595659+ ret = btrfs_read_locked_inode(inode, path);56655660 if (!ret) {56665661 inode_tree_add(inode);56675662 unlock_new_inode(inode);···56815676 }5682567756835678 return inode;56795679+}56805680+56815681+struct inode *btrfs_iget(struct super_block *s, struct btrfs_key *location,56825682+ struct btrfs_root *root, int *new)56835683+{56845684+ return btrfs_iget_path(s, location, root, new, NULL);56845685}5685568656865687static struct inode *new_simple_dir(struct super_block *s,
+12-2
fs/btrfs/ioctl.c
···34883488 const u64 sz = BTRFS_I(src)->root->fs_info->sectorsize;3489348934903490 len = round_down(i_size_read(src), sz) - loff;34913491+ if (len == 0)34923492+ return 0;34913493 olen = len;34923494 }34933495 }···42594257 goto out_unlock;42604258 if (len == 0)42614259 olen = len = src->i_size - off;42624262- /* if we extend to eof, continue to block boundary */42634263- if (off + len == src->i_size)42604260+ /*42614261+ * If we extend to eof, continue to block boundary if and only if the42624262+ * destination end offset matches the destination file's size, otherwise42634263+ * we would be corrupting data by placing the eof block into the middle42644264+ * of a file.42654265+ */42664266+ if (off + len == src->i_size) {42674267+ if (!IS_ALIGNED(len, bs) && destoff + len < inode->i_size)42684268+ goto out_unlock;42644269 len = ALIGN(src->i_size, bs) - off;42704270+ }4265427142664272 if (len == 0) {42674273 ret = 0;
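The new EOF rule above is pure offset arithmetic: a clone that ends exactly at the source's EOF may be rounded up to the block boundary only when its tail would land at or past the destination's EOF. A sketch of just that check (macros and helper name hypothetical):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define IS_ALIGNED(x, a)  (((x) & ((a) - 1)) == 0)
#define ALIGN_UP(x, a)    (((x) + (a) - 1) & ~((uint64_t)(a) - 1))

/* Decide the effective clone length for a clone that runs to src EOF.
 * Returns false if the request would put a partial EOF block in the
 * middle of the destination file (data corruption), mirroring the
 * rejection path above. */
static bool clone_tail_len(uint64_t off, uint64_t len, uint64_t src_size,
                           uint64_t destoff, uint64_t dest_size,
                           uint64_t bs, uint64_t *out_len)
{
    *out_len = len;
    if (off + len != src_size)
        return true;          /* not an EOF-extending clone */
    if (!IS_ALIGNED(len, bs) && destoff + len < dest_size)
        return false;         /* EOF block would land mid-file */
    *out_len = ALIGN_UP(src_size, bs) - off;
    return true;
}

int main(void)
{
    uint64_t n;
    /* 10000-byte source, 4K blocks, cloning its tail into a larger file */
    printf("%s\n", clone_tail_len(8192, 1808, 10000, 0, 1 << 20, 4096, &n)
                   ? "allowed" : "rejected");   /* rejected */
    return 0;
}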
···19161916}1917191719181918/* Used to sort the devices by max_avail(descending sort) */19191919-static int btrfs_cmp_device_free_bytes(const void *dev_info1,19191919+static inline int btrfs_cmp_device_free_bytes(const void *dev_info1,19201920 const void *dev_info2)19211921{19221922 if (((struct btrfs_device_info *)dev_info1)->max_avail >···19451945 * The helper to calc the free space on the devices that can be used to store19461946 * file data.19471947 */19481948-static int btrfs_calc_avail_data_space(struct btrfs_fs_info *fs_info,19491949- u64 *free_bytes)19481948+static inline int btrfs_calc_avail_data_space(struct btrfs_fs_info *fs_info,19491949+ u64 *free_bytes)19501950{19511951 struct btrfs_device_info *devices_info;19521952 struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;···22372237 vol = memdup_user((void __user *)arg, sizeof(*vol));22382238 if (IS_ERR(vol))22392239 return PTR_ERR(vol);22402240+ vol->name[BTRFS_PATH_NAME_MAX] = '\0';2240224122412242 switch (cmd) {22422243 case BTRFS_IOC_SCAN_DEV:
+1-1
fs/btrfs/tree-checker.c
···440440 type != (BTRFS_BLOCK_GROUP_METADATA |441441 BTRFS_BLOCK_GROUP_DATA)) {442442 block_group_err(fs_info, leaf, slot,443443-"invalid type, have 0x%llx (%lu bits set) expect either 0x%llx, 0x%llx, 0x%llu or 0x%llx",443443+"invalid type, have 0x%llx (%lu bits set) expect either 0x%llx, 0x%llx, 0x%llx or 0x%llx",444444 type, hweight64(type),445445 BTRFS_BLOCK_GROUP_DATA, BTRFS_BLOCK_GROUP_METADATA,446446 BTRFS_BLOCK_GROUP_SYSTEM,
+17
fs/btrfs/tree-log.c
···43964396 logged_end = end;4397439743984398 list_for_each_entry_safe(em, n, &tree->modified_extents, list) {43994399+ /*44004400+ * Skip extents outside our logging range. It's important to do44014401+ * it for correctness because if we don't ignore them, we may44024402+ * log them before their ordered extent completes, and therefore44034403+ * we could log them without logging their respective checksums44044404+ * (the checksum items are added to the csum tree at the very44054405+ * end of btrfs_finish_ordered_io()). Also leave such extents44064406+ * outside of our range in the list, since we may have another44074407+ * ranged fsync in the near future that needs them. If an extent44084408+ * outside our range corresponds to a hole, log it to avoid44094409+ * leaving gaps between extents (fsck will complain when we are44104410+ * not using the NO_HOLES feature).44114411+ */44124412+ if ((em->start > end || em->start + em->len <= start) &&44134413+ em->block_start != EXTENT_MAP_HOLE)44144414+ continue;44154415+43994416 list_del_init(&em->list);44004417 /*44014418 * Just an arbitrary number, this can be really CPU intensive
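The skip test above is a standard half-open interval check: the extent [em->start, em->start + em->len) misses the inclusive log range [start, end] exactly when it starts past end or ends at or before start. In isolation:

#include <stdbool.h>
#include <stdint.h>
#include <assert.h>

/* True if [estart, estart + elen) misses the inclusive range [start, end],
 * the same predicate the hunk above uses to skip extents (holes excepted). */
static bool extent_outside_range(uint64_t estart, uint64_t elen,
                                 uint64_t start, uint64_t end)
{
    return estart > end || estart + elen <= start;
}

int main(void)
{
    assert(extent_outside_range(0, 4096, 4096, 8191));   /* ends at start */
    assert(!extent_outside_range(0, 4097, 4096, 8191));  /* 1-byte overlap */
    assert(extent_outside_range(8192, 4096, 0, 8191));   /* starts past end */
    return 0;
}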
+5-3
fs/cachefiles/namei.c
···244244245245 ASSERT(!test_bit(CACHEFILES_OBJECT_ACTIVE, &xobject->flags));246246247247- cache->cache.ops->put_object(&xobject->fscache, cachefiles_obj_put_wait_retry);247247+ cache->cache.ops->put_object(&xobject->fscache,248248+ (enum fscache_obj_ref_trace)cachefiles_obj_put_wait_retry);248249 goto try_again;249250250251requeue:251251- cache->cache.ops->put_object(&xobject->fscache, cachefiles_obj_put_wait_timeo);252252+ cache->cache.ops->put_object(&xobject->fscache,253253+ (enum fscache_obj_ref_trace)cachefiles_obj_put_wait_timeo);252254 _leave(" = -ETIMEDOUT");253255 return -ETIMEDOUT;254256}···338336try_again:339337 /* first step is to make up a grave dentry in the graveyard */340338 sprintf(nbuffer, "%08x%08x",341341- (uint32_t) get_seconds(),339339+ (uint32_t) ktime_get_real_seconds(),342340 (uint32_t) atomic_inc_return(&cache->gravecounter));343341344342 /* do the multiway lock magic */
···135135 struct dentry *dentry = object->dentry;136136 int ret;137137138138- ASSERT(dentry);138138+ if (!dentry)139139+ return -ESTALE;139140140141 _enter("%p,#%d", object, auxdata->len);141142
+9-2
fs/ceph/file.c
···19311931 if (!prealloc_cf)19321932 return -ENOMEM;1933193319341934- /* Start by sync'ing the source file */19341934+ /* Start by sync'ing the source and destination files */19351935 ret = file_write_and_wait_range(src_file, src_off, (src_off + len));19361936- if (ret < 0)19361936+ if (ret < 0) {19371937+ dout("failed to write src file (%zd)\n", ret);19371938 goto out;19391939+ }19401940+ ret = file_write_and_wait_range(dst_file, dst_off, (dst_off + len));19411941+ if (ret < 0) {19421942+ dout("failed to write dst file (%zd)\n", ret);19431943+ goto out;19441944+ }1938194519391946 /*19401947 * We need FILE_WR caps for dst_ci and FILE_RD for src_ci as other
···892892 if (sb->s_magic != EXT2_SUPER_MAGIC)893893 goto cantfind_ext2;894894895895+ opts.s_mount_opt = 0;895896 /* Set defaults before we parse the mount options */896897 def_mount_opts = le32_to_cpu(es->s_default_mount_opts);897898 if (def_mount_opts & EXT2_DEFM_DEBUG)
···58355835{58365836 int err = 0;5837583758385838- if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb))))58385838+ if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb)))) {58395839+ put_bh(iloc->bh);58395840 return -EIO;58405840-58415841+ }58415842 if (IS_I_VERSION(inode))58425843 inode_inc_iversion(inode);58435844
+4-1
fs/ext4/namei.c
···126126 if (!is_dx_block && type == INDEX) {127127 ext4_error_inode(inode, func, line, block,128128 "directory leaf block found instead of index block");129129+ brelse(bh);129130 return ERR_PTR(-EFSCORRUPTED);130131 }131132 if (!ext4_has_metadata_csum(inode->i_sb) ||···28122811 list_del_init(&EXT4_I(inode)->i_orphan);28132812 mutex_unlock(&sbi->s_orphan_lock);28142813 }28152815- }28142814+ } else28152815+ brelse(iloc.bh);28162816+28162817 jbd_debug(4, "superblock will point to %lu\n", inode->i_ino);28172818 jbd_debug(4, "orphan inode %lu will point to %d\n",28182819 inode->i_ino, NEXT_ORPHAN(inode));
···40754075 sbi->s_groups_count = blocks_count;40764076 sbi->s_blockfile_groups = min_t(ext4_group_t, sbi->s_groups_count,40774077 (EXT4_MAX_BLOCK_FILE_PHYS / EXT4_BLOCKS_PER_GROUP(sb)));40784078+ if (((u64)sbi->s_groups_count * sbi->s_inodes_per_group) !=40794079+ le32_to_cpu(es->s_inodes_count)) {40804080+ ext4_msg(sb, KERN_ERR, "inodes count not valid: %u vs %llu",40814081+ le32_to_cpu(es->s_inodes_count),40824082+ ((u64)sbi->s_groups_count * sbi->s_inodes_per_group));40834083+ ret = -EINVAL;40844084+ goto failed_mount;40854085+ }40784086 db_count = (sbi->s_groups_count + EXT4_DESC_PER_BLOCK(sb) - 1) /40794087 EXT4_DESC_PER_BLOCK(sb);40804088 if (ext4_has_feature_meta_bg(sb)) {···41004092 if (sbi->s_group_desc == NULL) {41014093 ext4_msg(sb, KERN_ERR, "not enough memory");41024094 ret = -ENOMEM;41034103- goto failed_mount;41044104- }41054105- if (((u64)sbi->s_groups_count * sbi->s_inodes_per_group) !=41064106- le32_to_cpu(es->s_inodes_count)) {41074107- ext4_msg(sb, KERN_ERR, "inodes count not valid: %u vs %llu",41084108- le32_to_cpu(es->s_inodes_count),41094109- ((u64)sbi->s_groups_count * sbi->s_inodes_per_group));41104110- ret = -EINVAL;41114095 goto failed_mount;41124096 }41134097···45104510 percpu_counter_destroy(&sbi->s_freeinodes_counter);45114511 percpu_counter_destroy(&sbi->s_dirs_counter);45124512 percpu_counter_destroy(&sbi->s_dirtyclusters_counter);45134513+ percpu_free_rwsem(&sbi->s_journal_flag_rwsem);45134514failed_mount5:45144515 ext4_ext_release(sb);45154516 ext4_release_system_zone(sb);
+19-8
fs/ext4/xattr.c
···10311031 inode_lock(ea_inode);1032103210331033 ret = ext4_reserve_inode_write(handle, ea_inode, &iloc);10341034- if (ret) {10351035- iloc.bh = NULL;10341034+ if (ret)10361035 goto out;10371037- }1038103610391037 ref_count = ext4_xattr_inode_get_ref(ea_inode);10401038 ref_count += ref_change;···10781080 }1079108110801082 ret = ext4_mark_iloc_dirty(handle, ea_inode, &iloc);10811081- iloc.bh = NULL;10821083 if (ret)10831084 ext4_warning_inode(ea_inode,10841085 "ext4_mark_iloc_dirty() failed ret=%d", ret);10851086out:10861086- brelse(iloc.bh);10871087 inode_unlock(ea_inode);10881088 return ret;10891089}···13841388 bh = ext4_getblk(handle, ea_inode, block, 0);13851389 if (IS_ERR(bh))13861390 return PTR_ERR(bh);13911391+ if (!bh) {13921392+ WARN_ON_ONCE(1);13931393+ EXT4_ERROR_INODE(ea_inode,13941394+ "ext4_getblk() return bh = NULL");13951395+ return -EFSCORRUPTED;13961396+ }13871397 ret = ext4_journal_get_write_access(handle, bh);13881398 if (ret)13891399 goto out;···22782276 if (!bh)22792277 return ERR_PTR(-EIO);22802278 error = ext4_xattr_check_block(inode, bh);22812281- if (error)22792279+ if (error) {22802280+ brelse(bh);22822281 return ERR_PTR(error);22822282+ }22832283 return bh;22842284}22852285···24012397 error = ext4_xattr_block_set(handle, inode, &i, &bs);24022398 } else if (error == -ENOSPC) {24032399 if (EXT4_I(inode)->i_file_acl && !bs.s.base) {24002400+ brelse(bs.bh);24012401+ bs.bh = NULL;24042402 error = ext4_xattr_block_find(inode, &i, &bs);24052403 if (error)24062404 goto cleanup;···26232617 kfree(buffer);26242618 if (is)26252619 brelse(is->iloc.bh);26202620+ if (bs)26212621+ brelse(bs->bh);26262622 kfree(is);26272623 kfree(bs);26282624···27042696 struct ext4_inode *raw_inode, handle_t *handle)27052697{27062698 struct ext4_xattr_ibody_header *header;27072707- struct buffer_head *bh;27082699 struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);27092700 static unsigned int mnt_count;27102701 size_t min_offs;···27442737 * EA block can hold new_extra_isize bytes.27452738 */27462739 if (EXT4_I(inode)->i_file_acl) {27402740+ struct buffer_head *bh;27412741+27472742 bh = sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl);27482743 error = -EIO;27492744 if (!bh)27502745 goto cleanup;27512746 error = ext4_xattr_check_block(inode, bh);27522752- if (error)27472747+ if (error) {27482748+ brelse(bh);27532749 goto cleanup;27502750+ }27542751 base = BHDR(bh);27552752 end = bh->b_data + bh->b_size;27562753 min_offs = end - base;
+3
fs/fscache/object.c
···730730731731 if (awaken)732732 wake_up_bit(&cookie->flags, FSCACHE_COOKIE_INVALIDATING);733733+ if (test_and_clear_bit(FSCACHE_COOKIE_LOOKING_UP, &cookie->flags))734734+ wake_up_bit(&cookie->flags, FSCACHE_COOKIE_LOOKING_UP);735735+733736734737 /* Prevent a race with our last child, which has to signal EV_CLEARED735738 * before dropping our spinlock.
+12-4
fs/fuse/dev.c
···165165166166static void fuse_drop_waiting(struct fuse_conn *fc)167167{168168- if (fc->connected) {169169- atomic_dec(&fc->num_waiting);170170- } else if (atomic_dec_and_test(&fc->num_waiting)) {168168+ /*169169+ * A lockless check of fc->connected is okay, because atomic_dec_and_test()170170+ * provides a memory barrier matched with the one in fuse_wait_aborted()171171+ * to ensure no wake-up is missed.172172+ */173173+ if (atomic_dec_and_test(&fc->num_waiting) &&174174+ !READ_ONCE(fc->connected)) {171175 /* wake up aborters */172176 wake_up_all(&fc->blocked_waitq);173177 }···17721768 req->in.args[1].size = total_len;1773176917741770 err = fuse_request_send_notify_reply(fc, req, outarg->notify_unique);17751775- if (err)17711771+ if (err) {17761772 fuse_retrieve_end(fc, req);17731773+ fuse_put_request(fc, req);17741774+ }1777177517781776 return err;17791777}···2225221922262220void fuse_wait_aborted(struct fuse_conn *fc)22272221{22222222+ /* matches implicit memory barrier in fuse_drop_waiting() */22232223+ smp_mb();22282224 wait_event(fc->blocked_waitq, atomic_read(&fc->num_waiting) == 0);22292225}22302226
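The pairing the comment describes is the classic counter/flag handshake: the decrementing side does an atomic RMW (a full barrier in the kernel) and then reads the flag, while the aborting side sets the flag, issues a barrier, and then reads the counter, so at least one side must observe the other. A userspace analogue with C11 seq_cst atomics (names hypothetical, wake/wait reduced to stubs):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int num_waiting;
static atomic_bool connected = true;

static void wake_up_aborters(void) { /* stand-in for wake_up_all() */ }

/* Decrement side: the seq_cst RMW orders the decrement before the
 * connected load, pairing with the fence in wait_aborted_would_block(). */
static void drop_waiting(void)
{
    if (atomic_fetch_sub(&num_waiting, 1) == 1 &&
        !atomic_load(&connected))
        wake_up_aborters();
}

/* Abort side: clear connected, fence, then re-read the counter, so at
 * least one of the two threads is guaranteed to see the other's store. */
static bool wait_aborted_would_block(void)
{
    atomic_store(&connected, false);
    atomic_thread_fence(memory_order_seq_cst);
    return atomic_load(&num_waiting) != 0;   /* true: caller must sleep */
}

int main(void)
{
    atomic_store(&num_waiting, 1);
    drop_waiting();                      /* counter hits 0, still connected */
    return wait_aborted_would_block();   /* 0: nothing left to wait for */
}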
···826826 ret = gfs2_meta_inode_buffer(ip, &dibh);827827 if (ret)828828 goto unlock;829829- iomap->private = dibh;829829+ mp->mp_bh[0] = dibh;830830831831 if (gfs2_is_stuffed(ip)) {832832 if (flags & IOMAP_WRITE) {···863863 len = lblock_stop - lblock + 1;864864 iomap->length = len << inode->i_blkbits;865865866866- get_bh(dibh);867867- mp->mp_bh[0] = dibh;868868-869866 height = ip->i_height;870867 while ((lblock + 1) * sdp->sd_sb.sb_bsize > sdp->sd_heightsize[height])871868 height++;···895898 iomap->bdev = inode->i_sb->s_bdev;896899unlock:897900 up_read(&ip->i_rw_mutex);898898- if (ret && dibh)899899- brelse(dibh);900901 return ret;901902902903do_alloc:···975980976981static int gfs2_iomap_begin_write(struct inode *inode, loff_t pos,977982 loff_t length, unsigned flags,978978- struct iomap *iomap)983983+ struct iomap *iomap,984984+ struct metapath *mp)979985{980980- struct metapath mp = { .mp_aheight = 1, };981986 struct gfs2_inode *ip = GFS2_I(inode);982987 struct gfs2_sbd *sdp = GFS2_SB(inode);983988 unsigned int data_blocks = 0, ind_blocks = 0, rblocks;···991996 unstuff = gfs2_is_stuffed(ip) &&992997 pos + length > gfs2_max_stuffed_size(ip);993998994994- ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp);999999+ ret = gfs2_iomap_get(inode, pos, length, flags, iomap, mp);9951000 if (ret)996996- goto out_release;10011001+ goto out_unlock;99710029981003 alloc_required = unstuff || iomap->type == IOMAP_HOLE;9991004···1008101310091014 ret = gfs2_quota_lock_check(ip, &ap);10101015 if (ret)10111011- goto out_release;10161016+ goto out_unlock;1012101710131018 ret = gfs2_inplace_reserve(ip, &ap);10141019 if (ret)···10331038 ret = gfs2_unstuff_dinode(ip, NULL);10341039 if (ret)10351040 goto out_trans_end;10361036- release_metapath(&mp);10371037- brelse(iomap->private);10381038- iomap->private = NULL;10411041+ release_metapath(mp);10391042 ret = gfs2_iomap_get(inode, iomap->offset, iomap->length,10401040- flags, iomap, &mp);10431043+ flags, iomap, mp);10411044 if (ret)10421045 goto out_trans_end;10431046 }1044104710451048 if (iomap->type == IOMAP_HOLE) {10461046- ret = gfs2_iomap_alloc(inode, iomap, flags, &mp);10491049+ ret = gfs2_iomap_alloc(inode, iomap, flags, mp);10471050 if (ret) {10481051 gfs2_trans_end(sdp);10491052 gfs2_inplace_release(ip);···10491056 goto out_qunlock;10501057 }10511058 }10521052- release_metapath(&mp);10531059 if (!gfs2_is_stuffed(ip) && gfs2_is_jdata(ip))10541060 iomap->page_done = gfs2_iomap_journaled_page_done;10551061 return 0;···10611069out_qunlock:10621070 if (alloc_required)10631071 gfs2_quota_unlock(ip);10641064-out_release:10651065- if (iomap->private)10661066- brelse(iomap->private);10671067- release_metapath(&mp);10721072+out_unlock:10681073 gfs2_write_unlock(inode);10691074 return ret;10701075}···1077108810781089 trace_gfs2_iomap_start(ip, pos, length, flags);10791090 if ((flags & IOMAP_WRITE) && !(flags & IOMAP_DIRECT)) {10801080- ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap);10911091+ ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap, &mp);10811092 } else {10821093 ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp);10831083- release_metapath(&mp);10941094+10841095 /*10851096 * Silently fall back to buffered I/O for stuffed files or if10861097 * we've hit a hole (see gfs2_file_direct_write).···10891100 iomap->type != IOMAP_MAPPED)10901101 ret = -ENOTBLK;10911102 }11031103+ if (!ret) {11041104+ get_bh(mp.mp_bh[0]);11051105+ iomap->private = mp.mp_bh[0];11061106+ }11071107+ release_metapath(&mp);10921108 trace_gfs2_iomap_end(ip, iomap, ret);10931109 return ret;10941110}···19021908 if (ret < 0)19031909 goto out;1904191019051905- /* issue read-ahead on metadata */19061906- if (mp.mp_aheight > 1) {19071907- for (; ret > 1; ret--) {19081908- metapointer_range(&mp, mp.mp_aheight - ret,19111911+ /* On the first pass, issue read-ahead on metadata. */19121912+ if (mp.mp_aheight > 1 && strip_h == ip->i_height - 1) {19131913+ unsigned int height = mp.mp_aheight - 1;19141914+19151915+ /* No read-ahead for data blocks. */19161916+ if (mp.mp_aheight - 1 == strip_h)19171917+ height--;19181918+19191919+ for (; height >= mp.mp_aheight - ret; height--) {19201920+ metapointer_range(&mp, height,19091921 start_list, start_aligned,19101922 end_list, end_aligned,19111923 &start, &end);
+2-1
fs/gfs2/rgrp.c
···733733734734 if (gl) {735735 glock_clear_object(gl, rgd);736736+ gfs2_rgrp_brelse(rgd);736737 gfs2_glock_put(gl);737738 }738739···11751174 * @rgd: the struct gfs2_rgrpd describing the RG to read in11761175 *11771176 * Read in all of a Resource Group's header and bitmap blocks.11781178- * Caller must eventually call gfs2_rgrp_relse() to free the bitmaps.11771177+ * Caller must eventually call gfs2_rgrp_brelse() to free the bitmaps.11791178 *11801179 * Returns: errno11811180 */
+2-1
fs/hfs/btree.c
···338338339339 nidx -= len * 8;340340 i = node->next;341341- hfs_bnode_put(node);342341 if (!i) {343342 /* panic */;344343 pr_crit("unable to free bnode %u. bmap not found!\n",345344 node->this);345345+ hfs_bnode_put(node);346346 return;347347 }348348+ hfs_bnode_put(node);348349 node = hfs_bnode_find(tree, i);349350 if (IS_ERR(node))350351 return;
+2-1
fs/hfsplus/btree.c
···466466467467 nidx -= len * 8;468468 i = node->next;469469- hfs_bnode_put(node);470469 if (!i) {471470 /* panic */;472471 pr_crit("unable to free bnode %u. "473472 "bmap not found!\n",474473 node->this);474474+ hfs_bnode_put(node);475475 return;476476 }477477+ hfs_bnode_put(node);477478 node = hfs_bnode_find(tree, i);478479 if (IS_ERR(node))479480 return;
+5-2
fs/inode.c
···730730 return LRU_REMOVED;731731 }732732733733- /* recently referenced inodes get one more pass */734734- if (inode->i_state & I_REFERENCED) {733733+ /*734734+ * Recently referenced inodes and inodes with many attached pages735735+ * get one more pass.736736+ */737737+ if (inode->i_state & I_REFERENCED || inode->i_data.nrpages > 1) {735738 inode->i_state &= ~I_REFERENCED;736739 spin_unlock(&inode->i_lock);737740 return LRU_ROTATE;
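The isolate callback above now gives two kinds of warm inodes one extra trip around the LRU: those with the referenced bit set and those with more than one attached page. A toy sketch of the rotate-once policy (the real callback also skips busy inodes and tries to invalidate pages first):

#include <stdbool.h>
#include <stdio.h>

enum lru_status { LRU_REMOVED, LRU_ROTATE };

struct item { bool referenced; unsigned long nrpages; };

/* One more trip around the list for items that look warm, clearing
 * the referenced bit so a second encounter can reclaim them. */
static enum lru_status isolate(struct item *it)
{
    if (it->referenced || it->nrpages > 1) {
        it->referenced = false;
        return LRU_ROTATE;
    }
    return LRU_REMOVED;
}

int main(void)
{
    struct item it = { .referenced = true, .nrpages = 0 };
    printf("%d %d\n", isolate(&it), isolate(&it)); /* 1 (rotate) 0 (removed) */
    return 0;
}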
+41-12
fs/iomap.c
···142142iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,143143 loff_t *pos, loff_t length, unsigned *offp, unsigned *lenp)144144{145145+ loff_t orig_pos = *pos;146146+ loff_t isize = i_size_read(inode);145147 unsigned block_bits = inode->i_blkbits;146148 unsigned block_size = (1 << block_bits);147149 unsigned poff = offset_in_page(*pos);148150 unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length);149151 unsigned first = poff >> block_bits;150152 unsigned last = (poff + plen - 1) >> block_bits;151151- unsigned end = offset_in_page(i_size_read(inode)) >> block_bits;152153153154 /*154155 * If the block size is smaller than the page size we need to check the···184183 * handle both halves separately so that we properly zero data in the185184 * page cache for blocks that are entirely outside of i_size.186185 */187187- if (first <= end && last > end)188188- plen -= (last - end) * block_size;186186+ if (orig_pos <= isize && orig_pos + length > isize) {187187+ unsigned end = offset_in_page(isize - 1) >> block_bits;188188+189189+ if (first <= end && last > end)190190+ plen -= (last - end) * block_size;191191+ }189192190193 *offp = poff;191194 *lenp = plen;···15851580 struct bio *bio;15861581 bool need_zeroout = false;15871582 bool use_fua = false;15881588- int nr_pages, ret;15831583+ int nr_pages, ret = 0;15891584 size_t copied = 0;1590158515911586 if ((pos | length | align) & ((1 << blkbits) - 1))···1601159616021597 if (iomap->flags & IOMAP_F_NEW) {16031598 need_zeroout = true;16041604- } else {15991599+ } else if (iomap->type == IOMAP_MAPPED) {16051600 /*16061606- * Use a FUA write if we need datasync semantics, this16071607- * is a pure data IO that doesn't require any metadata16081608- * updates and the underlying device supports FUA. This16091609- * allows us to avoid cache flushes on IO completion.16011601+ * Use a FUA write if we need datasync semantics, this is a pure16021602+ * data IO that doesn't require any metadata updates (including16031603+ * after IO completion such as unwritten extent conversion) and16041604+ * the underlying device supports FUA. This allows us to avoid16051605+ * cache flushes on IO completion.16101606 */16111607 if (!(iomap->flags & (IOMAP_F_SHARED|IOMAP_F_DIRTY)) &&16121608 (dio->flags & IOMAP_DIO_WRITE_FUA) &&···1650164416511645 ret = bio_iov_iter_get_pages(bio, &iter);16521646 if (unlikely(ret)) {16471647+ /*16481648+ * We have to stop part way through an IO. We must fall16491649+ * through to the sub-block tail zeroing here, otherwise16501650+ * this short IO may expose stale data in the tail of16511651+ * the block we haven't written data to.16521652+ */16531653 bio_put(bio);16541654- return copied ? copied : ret;16541654+ goto zero_tail;16551655 }1656165616571657 n = bio->bi_iter.bi_size;···16881676 dio->submit.cookie = submit_bio(bio);16891677 } while (nr_pages);1690167816911691- if (need_zeroout) {16791679+ /*16801680+ * We need to zero out the tail of a sub-block write if the extent type16811681+ * requires zeroing or the write extends beyond EOF. If we don't zero16821682+ * the block tail in the latter case, we can expose stale data via mmap16831683+ * reads of the EOF block.16841684+ */16851685+zero_tail:16861686+ if (need_zeroout ||16871687+ ((dio->flags & IOMAP_DIO_WRITE) && pos >= i_size_read(inode))) {16921688 /* zero out from the end of the write to the end of the block */16931689 pad = pos & (fs_block_size - 1);16941690 if (pad)16951691 iomap_dio_zero(dio, iomap, pos, fs_block_size - pad);16961692 }16971697- return copied;16931693+ return copied ? copied : ret;16981694}1699169517001696static loff_t···18771857 dio->wait_for_completion = true;18781858 ret = 0;18791859 }18601860+18611861+ /*18621862+ * Splicing to pipes can fail on a full pipe. We have to18631863+ * swallow this to make it look like a short IO,18641864+ * otherwise the higher splice layers will completely18651865+ * mishandle the error and stop moving data.18661866+ */18671867+ if (ret == -EFAULT)18681868+ ret = 0;18801869 break;18811870 }18821871 pos += ret;
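The tail zeroing itself is one line of modular arithmetic: a write ending at pos leaves pos & (fs_block_size - 1) bytes in the final block, so fs_block_size minus that many zeroes completes it. In isolation (helper name hypothetical):

#include <stdint.h>
#include <stdio.h>

/* Bytes of zeroes needed to pad the write ending at `pos` out to the
 * next fs block boundary; 0 when the write is already block aligned. */
static uint64_t dio_zero_tail_len(uint64_t pos, uint64_t fs_block_size)
{
    uint64_t pad = pos & (fs_block_size - 1);

    return pad ? fs_block_size - pad : 0;
}

int main(void)
{
    printf("%llu\n", (unsigned long long)dio_zero_tail_len(10000, 4096));
    /* 10000 % 4096 = 1808, so 2288 bytes of zeroes complete the block */
    return 0;
}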
+20-8
fs/namespace.c
···695695696696 hlist_for_each_entry(mp, chain, m_hash) {697697 if (mp->m_dentry == dentry) {698698- /* might be worth a WARN_ON() */699699- if (d_unlinked(dentry))700700- return ERR_PTR(-ENOENT);701698 mp->m_count++;702699 return mp;703700 }···708711 int ret;709712710713 if (d_mountpoint(dentry)) {714714+ /* might be worth a WARN_ON() */715715+ if (d_unlinked(dentry))716716+ return ERR_PTR(-ENOENT);711717mountpoint:712718 read_seqlock_excl(&mount_lock);713719 mp = lookup_mountpoint(dentry);···1540154015411541 namespace_lock();15421542 lock_mount_hash();15431543- event++;1544154315441544+ /* Recheck MNT_LOCKED with the locks held */15451545+ retval = -EINVAL;15461546+ if (mnt->mnt.mnt_flags & MNT_LOCKED)15471547+ goto out;15481548+15491549+ event++;15451550 if (flags & MNT_DETACH) {15461551 if (!list_empty(&mnt->mnt_list))15471552 umount_tree(mnt, UMOUNT_PROPAGATE);···15601555 retval = 0;15611556 }15621557 }15581558+out:15631559 unlock_mount_hash();15641560 namespace_unlock();15651561 return retval;···16511645 goto dput_and_out;16521646 if (!check_mnt(mnt))16531647 goto dput_and_out;16541654- if (mnt->mnt.mnt_flags & MNT_LOCKED)16481648+ if (mnt->mnt.mnt_flags & MNT_LOCKED) /* Check optimistically */16551649 goto dput_and_out;16561650 retval = -EPERM;16571651 if (flags & MNT_FORCE && !capable(CAP_SYS_ADMIN))···17341728 for (s = r; s; s = next_mnt(s, r)) {17351729 if (!(flag & CL_COPY_UNBINDABLE) &&17361730 IS_MNT_UNBINDABLE(s)) {17371737- s = skip_mnt_tree(s);17381738- continue;17311731+ if (s->mnt.mnt_flags & MNT_LOCKED) {17321732+ /* Both unbindable and locked. */17331733+ q = ERR_PTR(-EPERM);17341734+ goto out;17351735+ } else {17361736+ s = skip_mnt_tree(s);17371737+ continue;17381738+ }17391739 }17401740 if (!(flag & CL_COPY_MNT_NS_FILE) &&17411741 is_mnt_ns_file(s->mnt.mnt_root)) {···17941782{17951783 namespace_lock();17961784 lock_mount_hash();17971797- umount_tree(real_mount(mnt), UMOUNT_SYNC);17851785+ umount_tree(real_mount(mnt), 0);17981786 unlock_mount_hash();17991787 namespace_unlock();18001788}
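The MNT_LOCKED handling above is the usual optimistic-check/locked-recheck shape: test the flag cheaply without the namespace locks to fail fast, then test it again under lock_mount_hash() before acting, since it may have changed in between. The same shape with a pthread mutex (illustrative only):

#include <pthread.h>
#include <errno.h>
#include <stdbool.h>

static pthread_mutex_t hash_lock = PTHREAD_MUTEX_INITIALIZER;
static bool locked_flag;   /* stands in for MNT_LOCKED */

static void do_umount_tree(void) { /* ... */ }

static int do_umount(void)
{
    if (locked_flag)          /* optimistic, unlocked: cheap early out */
        return -EINVAL;

    pthread_mutex_lock(&hash_lock);
    if (locked_flag) {        /* recheck: may have changed under us */
        pthread_mutex_unlock(&hash_lock);
        return -EINVAL;
    }
    do_umount_tree();         /* flag is stable while we hold the lock */
    pthread_mutex_unlock(&hash_lock);
    return 0;
}

int main(void)
{
    return do_umount() ? 1 : 0;
}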
···13611361 task))13621362 return;1363136313641364- if (ff_layout_read_prepare_common(task, hdr))13651365- return;13661366-13671367- if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context,13681368- hdr->args.lock_context, FMODE_READ) == -EIO)13691369- rpc_exit(task, -EIO); /* lost lock, terminate I/O */13641364+ ff_layout_read_prepare_common(task, hdr);13701365}1371136613721367static void ff_layout_read_call_done(struct rpc_task *task, void *data)···15371542 task))15381543 return;1539154415401540- if (ff_layout_write_prepare_common(task, hdr))15411541- return;15421542-15431543- if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context,15441544- hdr->args.lock_context, FMODE_WRITE) == -EIO)15451545- rpc_exit(task, -EIO); /* lost lock, terminate I/O */15451545+ ff_layout_write_prepare_common(task, hdr);15461546}1547154715481548static void ff_layout_write_call_done(struct rpc_task *task, void *data)···17321742 fh = nfs4_ff_layout_select_ds_fh(lseg, idx);17331743 if (fh)17341744 hdr->args.fh = fh;17451745+17461746+ if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid))17471747+ goto out_failed;17481748+17351749 /*17361750 * Note that if we ever decide to split across DSes,17371751 * then we may need to handle dense-like offsets.···17971803 fh = nfs4_ff_layout_select_ds_fh(lseg, idx);17981804 if (fh)17991805 hdr->args.fh = fh;18061806+18071807+ if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid))18081808+ goto out_failed;1800180918011810 /*18021811 * Note that if we ever decide to split across DSes,
···115115 continue;116116 mark = iter_info->marks[type];117117 /*118118- * if the event is for a child and this inode doesn't care about119119- * events on the child, don't send it!118118+ * If the event is for a child and this mark doesn't care about119119+ * events on a child, don't send it!120120 */121121- if (type == FSNOTIFY_OBJ_TYPE_INODE &&122122- (event_mask & FS_EVENT_ON_CHILD) &&123123- !(mark->mask & FS_EVENT_ON_CHILD))121121+ if (event_mask & FS_EVENT_ON_CHILD &&122122+ (type != FSNOTIFY_OBJ_TYPE_INODE ||123123+ !(mark->mask & FS_EVENT_ON_CHILD)))124124 continue;125125126126 marks_mask |= mark->mask;
+5-2
fs/notify/fsnotify.c
···167167 parent = dget_parent(dentry);168168 p_inode = parent->d_inode;169169170170- if (unlikely(!fsnotify_inode_watches_children(p_inode)))170170+ if (unlikely(!fsnotify_inode_watches_children(p_inode))) {171171 __fsnotify_update_child_dentry_flags(p_inode);172172- else if (p_inode->i_fsnotify_mask & mask) {172172+ } else if (p_inode->i_fsnotify_mask & mask & ALL_FSNOTIFY_EVENTS) {173173 struct name_snapshot name;174174175175 /* we are notifying a parent so come up with the new mask which···339339 sb = mnt->mnt.mnt_sb;340340 mnt_or_sb_mask = mnt->mnt_fsnotify_mask | sb->s_fsnotify_mask;341341 }342342+ /* An event "on child" is not intended for a mount/sb mark */343343+ if (mask & FS_EVENT_ON_CHILD)344344+ mnt_or_sb_mask = 0;342345343346 /*344347 * Optimization: srcu_read_lock() has a memory barrier which can
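Taken together, the two fsnotify hunks pin down one rule: an event carrying FS_EVENT_ON_CHILD is meant for a watching parent, so it may only match an inode mark that itself requested child events, and never a mount or superblock mark. A bitmask sketch of that filter (flag values illustrative; the real ones live in fsnotify_backend.h):

#include <stdbool.h>
#include <stdio.h>

#define FS_OPEN            0x00000001u
#define FS_EVENT_ON_CHILD  0x08000000u

enum mark_type { MARK_INODE, MARK_MOUNT, MARK_SB };

/* Should this mark see the event at all?  Mirrors the filtering above:
 * "on child" events are for watching parents, i.e. inode marks that
 * asked for child events; mount and sb marks never get them. */
static bool mark_interested(enum mark_type type, unsigned int mark_mask,
                            unsigned int event_mask)
{
    if (event_mask & FS_EVENT_ON_CHILD)
        return type == MARK_INODE && (mark_mask & FS_EVENT_ON_CHILD);
    return true;   /* further masking against mark_mask happens later */
}

int main(void)
{
    unsigned int ev = FS_OPEN | FS_EVENT_ON_CHILD;

    printf("%d %d %d\n",
           mark_interested(MARK_INODE, FS_OPEN | FS_EVENT_ON_CHILD, ev), /* 1 */
           mark_interested(MARK_INODE, FS_OPEN, ev),                     /* 0 */
           mark_interested(MARK_MOUNT, FS_OPEN | FS_EVENT_ON_CHILD, ev));/* 0 */
    return 0;
}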
+10-2
fs/ocfs2/aops.c
···24112411 /* this io's submitter should not have unlocked this before we could */24122412 BUG_ON(!ocfs2_iocb_is_rw_locked(iocb));2413241324142414- if (bytes > 0 && private)24152415- ret = ocfs2_dio_end_io_write(inode, private, offset, bytes);24142414+ if (bytes <= 0)24152415+ mlog_ratelimited(ML_ERROR, "Direct IO failed, bytes = %lld",24162416+ (long long)bytes);24172417+ if (private) {24182418+ if (bytes > 0)24192419+ ret = ocfs2_dio_end_io_write(inode, private, offset,24202420+ bytes);24212421+ else24222422+ ocfs2_dio_free_write_ctx(inode, private);24232423+ }2416242424172425 ocfs2_iocb_clear_rw_locked(iocb);24182426
+9
fs/ocfs2/cluster/masklog.h
···178178 ##__VA_ARGS__); \179179} while (0)180180181181+#define mlog_ratelimited(mask, fmt, ...) \182182+do { \183183+ static DEFINE_RATELIMIT_STATE(_rs, \184184+ DEFAULT_RATELIMIT_INTERVAL, \185185+ DEFAULT_RATELIMIT_BURST); \186186+ if (__ratelimit(&_rs)) \187187+ mlog(mask, fmt, ##__VA_ARGS__); \188188+} while (0)189189+181190#define mlog_errno(st) ({ \182191 int _st = (st); \183192 if (_st != -ERESTARTSYS && _st != -EINTR && \
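The macro above relies on a C idiom: the static DEFINE_RATELIMIT_STATE inside the do/while(0) block gives every call site its own persistent rate-limit state. A userspace approximation with a simple interval/burst window (single-threaded; the kernel's ___ratelimit() is more careful):

#include <stdio.h>
#include <time.h>

struct ratelimit_state { time_t begin; int printed; int interval; int burst; };

static int ratelimit_ok(struct ratelimit_state *rs)
{
    time_t now = time(NULL);

    if (now - rs->begin >= rs->interval) {  /* new window: reset budget */
        rs->begin = now;
        rs->printed = 0;
    }
    return rs->printed++ < rs->burst;
}

/* One static state per expansion site, as in the kernel macro. */
#define log_ratelimited(fmt, ...)                                  \
do {                                                               \
    static struct ratelimit_state _rs = { 0, 0, 5, 10 };           \
    if (ratelimit_ok(&_rs))                                        \
        fprintf(stderr, fmt, ##__VA_ARGS__);                       \
} while (0)

int main(void)
{
    for (int i = 0; i < 100; i++)
        log_ratelimited("event %d\n", i);   /* only ~10 make it out */
    return 0;
}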
+1-1
fs/ocfs2/export.c
···125125126126check_gen:127127 if (handle->ih_generation != inode->i_generation) {128128- iput(inode);129128 trace_ocfs2_get_dentry_generation((unsigned long long)blkno,130129 handle->ih_generation,131130 inode->i_generation);131131+ iput(inode);132132 result = ERR_PTR(-ESTALE);133133 goto bail;134134 }
+26-21
fs/ocfs2/move_extents.c
···157157}158158159159/*160160- * lock allocators, and reserving appropriate number of bits for161161- * meta blocks and data clusters.162162- *163163- * in some cases, we don't need to reserve clusters, just let data_ac164164- * be NULL.160160+ * lock the allocator, and reserve an appropriate number of bits for161161+ * meta blocks.165162 */166166-static int ocfs2_lock_allocators_move_extents(struct inode *inode,163163+static int ocfs2_lock_meta_allocator_move_extents(struct inode *inode,167164 struct ocfs2_extent_tree *et,168165 u32 clusters_to_move,169166 u32 extents_to_split,170167 struct ocfs2_alloc_context **meta_ac,171171- struct ocfs2_alloc_context **data_ac,172168 int extra_blocks,173169 int *credits)174170{···189193 goto out;190194 }191195192192- if (data_ac) {193193- ret = ocfs2_reserve_clusters(osb, clusters_to_move, data_ac);194194- if (ret) {195195- mlog_errno(ret);196196- goto out;197197- }198198- }199196200197 *credits += ocfs2_calc_extend_credits(osb->sb, et->et_root_el);201198···248259 }249260 }250261251251- ret = ocfs2_lock_allocators_move_extents(inode, &context->et, *len, 1,252252- &context->meta_ac,253253- &context->data_ac,254254- extra_blocks, &credits);262262+ ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et,263263+ *len, 1,264264+ &context->meta_ac,265265+ extra_blocks, &credits);255266 if (ret) {256267 mlog_errno(ret);257268 goto out;···272283 mlog_errno(ret);273284 goto out_unlock_mutex;274285 }286286+ }287287+288288+ /*289289+ * Make sure ocfs2_reserve_clusters() is called after290290+ * __ocfs2_flush_truncate_log(), otherwise a deadlock on the291291+ * global bitmap may happen.292292+ */297297+ ret = ocfs2_reserve_clusters(osb, *len, &context->data_ac);298298+ if (ret) {299299+ mlog_errno(ret);300300+ goto out_unlock_mutex;275301 }276302277303 handle = ocfs2_start_trans(osb, credits);···621617 }622618 }623619624624- ret = ocfs2_lock_allocators_move_extents(inode, &context->et, len, 1,625625- &context->meta_ac,626626- NULL, extra_blocks, &credits);620620+ ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et,621621+ len, 1,622622+ &context->meta_ac,623623+ extra_blocks, &credits);627624 if (ret) {628625 mlog_errno(ret);629626 goto out;
+6-9
fs/pstore/ram.c
···816816817817 cxt->pstore.data = cxt;818818 /*819819- * Console can handle any buffer size, so prefer LOG_LINE_MAX. If we820820- * have to handle dumps, we must have at least record_size buffer. And821821- * for ftrace, bufsize is irrelevant (if bufsize is 0, buf will be822822- * ZERO_SIZE_PTR).819819+ * Since bufsize is only used for dmesg crash dumps, it820820+ * must match the size of the dprz record (after PRZ header821821+ * and ECC bytes have been accounted for).823822 */824824- if (cxt->console_size)825825- cxt->pstore.bufsize = 1024; /* LOG_LINE_MAX */826826- cxt->pstore.bufsize = max(cxt->record_size, cxt->pstore.bufsize);827827- cxt->pstore.buf = kmalloc(cxt->pstore.bufsize, GFP_KERNEL);823823+ cxt->pstore.bufsize = cxt->dprzs[0]->buffer_size;824824+ cxt->pstore.buf = kzalloc(cxt->pstore.bufsize, GFP_KERNEL);828825 if (!cxt->pstore.buf) {829829- pr_err("cannot allocate pstore buffer\n");826826+ pr_err("cannot allocate pstore crash dump buffer\n");830827 err = -ENOMEM;831828 goto fail_clear;832829 }
···827827828828829829 ret = udf_dstrCS0toChar(sb, outstr, 31, pvoldesc->volIdent, 32);830830- if (ret < 0)831831- goto out_bh;832832-833833- strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret);830830+ if (ret < 0) {831831+ strcpy(UDF_SB(sb)->s_volume_ident, "InvalidName");832832+ pr_warn("incorrect volume identification, setting to "833833+ "'InvalidName'\n");834834+ } else {835835+ strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret);836836+ }834837 udf_debug("volIdent[] = '%s'\n", UDF_SB(sb)->s_volume_ident);835838836839 ret = udf_dstrCS0toChar(sb, outstr, 127, pvoldesc->volSetIdent, 128);837837- if (ret < 0)840840+ if (ret < 0) {841841+ ret = 0;838842 goto out_bh;839839-843843+ }840844 outstr[ret] = 0;841845 udf_debug("volSetIdent[] = '%s'\n", outstr);842846
+11-3
fs/udf/unicode.c
···351351 return u_len;352352}353353354354+/*355355+ * Convert CS0 dstring to output charset. Warning: This function may truncate356356+ * the input string if it is too long, since it is used for informational357357+ * strings only and it is better to truncate than to refuse mounting the media.358358+ */354359int udf_dstrCS0toChar(struct super_block *sb, uint8_t *utf_o, int o_len,355360 const uint8_t *ocu_i, int i_len)356361{···364359 if (i_len > 0) {365360 s_len = ocu_i[i_len - 1];366361 if (s_len >= i_len) {367367- pr_err("incorrect dstring lengths (%d/%d)\n",368368- s_len, i_len);369369- return -EINVAL;362362+ pr_warn("incorrect dstring lengths (%d/%d),"363363+ " truncating\n", s_len, i_len);364364+ s_len = i_len - 1;365365+ /* 2-byte encoding? Need to round properly... */366366+ if (ocu_i[0] == 16)367367+ s_len -= (s_len - 1) & 2;370368 }371369 }372370
+15
fs/userfaultfd.c
···13611361 ret = -EINVAL;13621362 if (!vma_can_userfault(cur))13631363 goto out_unlock;13641364+13651365+ /*13661366+ * UFFDIO_COPY will fill file holes even without13671367+ * PROT_WRITE. This check enforces that if this is a13681368+ * MAP_SHARED, the process has write permission to the backing13691369+ * file. If VM_MAYWRITE is set it also enforces that on a13701370+ * MAP_SHARED vma: there is no F_WRITE_SEAL and no further13711371+ * F_WRITE_SEAL can be taken until the vma is destroyed.13721372+ */13731373+ ret = -EPERM;13741374+ if (unlikely(!(cur->vm_flags & VM_MAYWRITE)))13751375+ goto out_unlock;13761376+13641377 /*13651378 * If this vma contains ending address, and huge pages13661379 * check alignment.···14191406 BUG_ON(!vma_can_userfault(vma));14201407 BUG_ON(vma->vm_userfaultfd_ctx.ctx &&14211408 vma->vm_userfaultfd_ctx.ctx != ctx);14091409+ WARN_ON(!(vma->vm_flags & VM_MAYWRITE));1422141014231411 /*14241412 * Nothing to do: this vma is already registered into this···15661552 cond_resched();1567155315681554 BUG_ON(!vma_can_userfault(vma));15551555+ WARN_ON(!(vma->vm_flags & VM_MAYWRITE));1569155615701557 /*15711558 * Nothing to do: this vma is already registered into this
+9-2
fs/xfs/libxfs/xfs_attr_leaf.c
···243243 struct xfs_mount *mp = bp->b_target->bt_mount;244244 struct xfs_attr_leafblock *leaf = bp->b_addr;245245 struct xfs_attr_leaf_entry *entries;246246- uint16_t end;246246+ uint32_t end; /* must be 32 bits - see below */247247 int i;248248249249 xfs_attr3_leaf_hdr_from_disk(mp->m_attr_geo, &ichdr, leaf);···293293 /*294294 * Quickly check the freemap information. Attribute data has to be295295 * aligned to 4-byte boundaries, and likewise for the free space.296296+ *297297+ * Note that for 64k block size filesystems, the freemap entries cannot298298+ * overflow as they are only be16 fields. However, when checking the end299299+ * pointer of the freemap, we have to be careful to detect overflows and300300+ * so use uint32_t for those checks.296301 */297302 for (i = 0; i < XFS_ATTR_LEAF_MAPSIZE; i++) {298303 if (ichdr.freemap[i].base > mp->m_attr_geo->blksize)···308303 return __this_address;309304 if (ichdr.freemap[i].size & 0x3)310305 return __this_address;311311- end = ichdr.freemap[i].base + ichdr.freemap[i].size;306306+307307+ /* beware of 16-bit overflows here */308308+ end = (uint32_t)ichdr.freemap[i].base + ichdr.freemap[i].size;312309 if (end < ichdr.freemap[i].base)313310 return __this_address;314311 if (end > mp->m_attr_geo->blksize)
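The failure mode the widened type avoids is easy to reproduce in isolation. As I read the hunk, a legal freemap entry on a 64k-block filesystem can run exactly to the end of the block, and the truncated 16-bit sum then looks like a wrap, so a valid block failed the verifier. A freestanding sketch (plain C, not kernel code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* Free space running exactly to the end of a 64k block. */
            uint16_t base = 0xffc0, size = 0x0040;
            uint32_t blksize = 0x10000;             /* 64k */

            uint16_t end16 = base + size;           /* truncates to 0x0000 */
            uint32_t end32 = (uint32_t)base + size; /* 0x10000, as intended */

            /* old check: the truncated value satisfies end < base, so a
             * perfectly valid block was rejected */
            printf("end16=0x%04x rejected=%d\n", end16, end16 < base);
            /* new check: no truncation; 0x10000 <= blksize, block accepted */
            printf("end32=0x%05x rejected=%d\n", end32,
                   end32 < base || end32 > blksize);
            return 0;
    }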
+4-1
fs/xfs/libxfs/xfs_bmap.c
···16941694 case BMAP_LEFT_FILLING | BMAP_RIGHT_FILLING | BMAP_RIGHT_CONTIG:16951695 /*16961696 * Filling in all of a previously delayed allocation extent.16971697- * The right neighbor is contiguous, the left is not.16971697+ * The right neighbor is contiguous, the left is not. Take care16981698+ * with delay -> unwritten extent allocation here because the16991699+ * delalloc record we are overwriting is always written.16981700 */16991701 PREV.br_startblock = new->br_startblock;17001702 PREV.br_blockcount += RIGHT.br_blockcount;17031703+ PREV.br_state = new->br_state;1701170417021705 xfs_iext_next(ifp, &bma->icur);17031706 xfs_iext_remove(bma->ip, &bma->icur, state);
+7-4
fs/xfs/libxfs/xfs_ialloc_btree.c
···538538539539static xfs_extlen_t540540xfs_inobt_max_size(541541- struct xfs_mount *mp)541541+ struct xfs_mount *mp,542542+ xfs_agnumber_t agno)542543{544544+ xfs_agblock_t agblocks = xfs_ag_block_count(mp, agno);545545+543546 /* Bail out if we're uninitialized, which can happen in mkfs. */544547 if (mp->m_inobt_mxr[0] == 0)545548 return 0;546549547550 return xfs_btree_calc_size(mp->m_inobt_mnr,548548- (uint64_t)mp->m_sb.sb_agblocks * mp->m_sb.sb_inopblock /549549- XFS_INODES_PER_CHUNK);551551+ (uint64_t)agblocks * mp->m_sb.sb_inopblock /552552+ XFS_INODES_PER_CHUNK);550553}551554552555static int···597594 if (error)598595 return error;599596600600- *ask += xfs_inobt_max_size(mp);597597+ *ask += xfs_inobt_max_size(mp, agno);601598 *used += tree_len;602599 return 0;603600}
+2-8
fs/xfs/xfs_bmap_util.c
···10421042 goto out_unlock;10431043}1044104410451045-static int10451045+int10461046xfs_flush_unmap_range(10471047 struct xfs_inode *ip,10481048 xfs_off_t offset,···11951195 * Writeback and invalidate cache for the remainder of the file as we're11961196 * about to shift down every extent from offset to EOF.11971197 */11981198- error = filemap_write_and_wait_range(VFS_I(ip)->i_mapping, offset, -1);11991199- if (error)12001200- return error;12011201- error = invalidate_inode_pages2_range(VFS_I(ip)->i_mapping,12021202- offset >> PAGE_SHIFT, -1);12031203- if (error)12041204- return error;11981198+ error = xfs_flush_unmap_range(ip, offset, XFS_ISIZE(ip));1205119912061200 /*12071201 * Clean out anything hanging around in the cow fork now that
···12331233}1234123412351235/*12361236- * Requeue a failed buffer for writeback12361236+ * Requeue a failed buffer for writeback.12371237 *12381238- * Return true if the buffer has been re-queued properly, false otherwise12381238+ * We clear the log item failed state here as well, but we have to be careful12391239+ * about reference counts because the only active reference counts on the buffer12401240+ * may be the failed log items. Hence if we clear the log item failed state12411241+ * before queuing the buffer for IO we can release all active references to12421242+ * the buffer and free it, leading to use after free problems in12431243+ * xfs_buf_delwri_queue. It makes no difference to the buffer or log items which12441244+ * order we process them in - the buffer is locked, and we own the buffer list12451245+ * so nothing on them is going to change while we are performing this action.12461246+ *12471247+ * Hence we can safely queue the buffer for IO before we clear the failed log12481248+ * item state, therefore always having an active reference to the buffer and12491249+ * avoiding the transient zero-reference state that leads to use-after-free.12501250+ *12511251+ * Return true if the buffer was added to the buffer list, false if it was12521252+ * already on the buffer list.12391253 */12401254bool12411255xfs_buf_resubmit_failed_buffers(···12571243 struct list_head *buffer_list)12581244{12591245 struct xfs_log_item *lip;12461246+ bool ret;12471247+12481248+ ret = xfs_buf_delwri_queue(bp, buffer_list);1260124912611250 /*12621262- * Clear XFS_LI_FAILED flag from all items before resubmit12631263- *12641264- * XFS_LI_FAILED set/clear is protected by ail_lock, caller this12511251+ * XFS_LI_FAILED set/clear is protected by ail_lock, caller of this12651252 * function already have it acquired12661253 */12671254 list_for_each_entry(lip, &bp->b_li_list, li_bio_list)12681255 xfs_clear_li_failed(lip);1269125612701270- /* Add this buffer back to the delayed write list */12711271- return xfs_buf_delwri_queue(bp, buffer_list);12571257+ return ret;12721258}
···296296 if (error)297297 return error;298298299299+ xfs_trim_extent(imap, got.br_startoff, got.br_blockcount);299300 trace_xfs_reflink_cow_alloc(ip, &got);300301 return 0;301302}···13521351 if (ret)13531352 goto out_unlock;1354135313551355- /* Zap any page cache for the destination file's range. */13561356- truncate_inode_pages_range(&inode_out->i_data,13571357- round_down(pos_out, PAGE_SIZE),13581358- round_up(pos_out + *len, PAGE_SIZE) - 1);13541354+ /*13551355+ * If pos_out > EOF, we may have dirtied blocks between EOF and13561356+ * pos_out. In that case, we need to extend the flush and unmap to cover13571357+ * from EOF to the end of the copy length.13581358+ */13591359+ if (pos_out > XFS_ISIZE(dest)) {13601360+ loff_t flen = *len + (pos_out - XFS_ISIZE(dest));13611361+ ret = xfs_flush_unmap_range(dest, XFS_ISIZE(dest), flen);13621362+ } else {13631363+ ret = xfs_flush_unmap_range(dest, pos_out, *len);13641364+ }13651365+ if (ret)13661366+ goto out_unlock;1359136713601368 return 1;13611369out_unlock:
···55#ifndef __ASSEMBLY__66#include <asm-generic/5level-fixup.h>7788-#define __PAGETABLE_PUD_FOLDED88+#define __PAGETABLE_PUD_FOLDED 1991010/*1111 * Having the pud type consist of a pgd gets the size right, and allows
···8899struct mm_struct;10101111-#define __PAGETABLE_PMD_FOLDED1111+#define __PAGETABLE_PMD_FOLDED 112121313/*1414 * Having the pmd type consist of a pud gets the size right, and allows
+1-1
include/asm-generic/pgtable-nopud.h
···99#else1010#include <asm-generic/pgtable-nop4d.h>11111212-#define __PAGETABLE_PUD_FOLDED1212+#define __PAGETABLE_PUD_FOLDED 113131414/*1515 * Having the pud type consist of a p4d gets the size right, and allows
+16
include/asm-generic/pgtable.h
···11271127#endif11281128#endif1129112911301130+/*11311131+ * On some architectures it depends on the mm if the p4d/pud or pmd11321132+ * layer of the page table hierarchy is folded or not.11331133+ */11341134+#ifndef mm_p4d_folded11351135+#define mm_p4d_folded(mm) __is_defined(__PAGETABLE_P4D_FOLDED)11361136+#endif11371137+11381138+#ifndef mm_pud_folded11391139+#define mm_pud_folded(mm) __is_defined(__PAGETABLE_PUD_FOLDED)11401140+#endif11411141+11421142+#ifndef mm_pmd_folded11431143+#define mm_pmd_folded(mm) __is_defined(__PAGETABLE_PMD_FOLDED)11441144+#endif11451145+11301146#endif /* _ASM_GENERIC_PGTABLE_H */
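The mm_*_folded() defaults rely on __is_defined(), the same token-pasting trick behind IS_ENABLED(), which can only detect macros defined to 1 - which is why the nop*.h hunks above change the folded defines from empty to 1. A standalone sketch of the mechanism, reproducing the helpers from include/linux/kconfig.h:

    #include <stdio.h>

    /* Reproduction of the helpers in include/linux/kconfig.h */
    #define __ARG_PLACEHOLDER_1 0,
    #define __take_second_arg(__ignored, val, ...) val
    #define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
    #define ___is_defined(val)  ____is_defined(__ARG_PLACEHOLDER_##val)
    #define __is_defined(x)     ___is_defined(x)

    #define FOLDED 1  /* like the new "#define __PAGETABLE_PUD_FOLDED 1" */
    #define EMPTY     /* like the old, empty style */

    int main(void)
    {
            printf("%d %d %d\n",
                   __is_defined(FOLDED),     /* 1 */
                   __is_defined(EMPTY),      /* 0 - empty define invisible! */
                   __is_defined(UNDEFINED)); /* 0 */
            return 0;
    }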
+1
include/linux/can/dev.h
···169169170170void can_put_echo_skb(struct sk_buff *skb, struct net_device *dev,171171 unsigned int idx);172172+struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr);172173unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx);173174void can_free_echo_skb(struct net_device *dev, unsigned int idx);174175
···143143#define KASAN_ABI_VERSION 3144144#endif145145146146-/*147147- * Because __no_sanitize_address conflicts with inlining:148148- * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67368149149- * we do one or the other. 150150- */151151-#ifdef CONFIG_KASAN152152-#define __no_sanitize_address_or_inline \153153- __no_sanitize_address __maybe_unused notrace154154-#else155155-#define __no_sanitize_address_or_inline inline156156-#endif157157-158146#if GCC_VERSION >= 50100159147#define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1160148#endif
···4455/*66 * The attributes in this file are unconditionally defined and they directly77- * map to compiler attribute(s) -- except those that are optional.77+ * map to compiler attribute(s), unless one of the compilers does not support88+ * the attribute. In that case, __has_attribute is used to check for support99+ * and the reason is stated in its comment ("Optional: ...").810 *911 * Any other "attributes" (i.e. those that depend on a configuration option,1012 * on a compiler, on an architecture, on plugins, on other attributes...)1113 * should be defined elsewhere (e.g. compiler_types.h or compiler-*.h).1414+ * The intention is to keep this file as simple as possible, as well as1515+ * compiler- and version-agnostic (e.g. avoiding GCC_VERSION checks).1216 *1317 * This file is meant to be sorted (by actual attribute name,1418 * not by #define identifier). Use the __attribute__((__name__)) syntax1519 * (i.e. with underscores) to avoid future collisions with other macros.1616- * If an attribute is optional, state the reason in the comment.2020+ * Provide links to the documentation of each supported compiler, if it exists.1721 */18221923/*2020- * To check for optional attributes, we use __has_attribute, which is supported2121- * on gcc >= 5, clang >= 2.9 and icc >= 17. In the meantime, to support2222- * 4.6 <= gcc < 5, we implement __has_attribute by hand.2424+ * __has_attribute is supported on gcc >= 5, clang >= 2.9 and icc >= 17.2525+ * In the meantime, to support 4.6 <= gcc < 5, we implement __has_attribute2626+ * by hand.2327 *2428 * sparse does not support __has_attribute (yet) and defines __GNUC_MINOR__2529 * depending on the compiler used to build it; however, these attributes have
+4
include/linux/compiler_types.h
···130130# define randomized_struct_fields_end131131#endif132132133133+#ifndef asm_volatile_goto134134+#define asm_volatile_goto(x...) asm goto(x)135135+#endif136136+133137/* Are two types/vars the same type (ignoring qualifiers)? */134138#define __same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b))135139
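asm_volatile_goto() exists because some GCC versions need a workaround around asm goto; the fallback added here simply forwards to plain asm goto for compilers that never needed it, so code written against the macro name keeps building. For reference, a minimal use of the construct (x86 assembly assumed, GCC/clang only, illustration rather than kernel code):

    /* Branch to a C label straight from inline asm. */
    static int is_zero(int x)
    {
            asm_volatile_goto("test %0, %0\n\t"
                              "jz %l[zero]"
                              : /* asm goto allows no outputs */
                              : "r" (x)
                              : "cc"
                              : zero);
            return 0;
    zero:
            return 1;
    }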
···196196static inline void fscache_retrieval_complete(struct fscache_retrieval *op,197197 int n_pages)198198{199199- atomic_sub(n_pages, &op->n_pages);200200- if (atomic_read(&op->n_pages) <= 0)199199+ if (atomic_sub_return_relaxed(n_pages, &op->n_pages) <= 0)201200 fscache_op_complete(&op->op, false);202201}203202
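The old sequence (atomic_sub() followed by a separate atomic_read()) lets two CPUs finishing retrievals concurrently both observe n_pages <= 0 and complete the operation twice; folding the update and the test into one atomic_sub_return_relaxed() guarantees exactly one caller sees the transition. A standalone sketch of the same pattern with C11 atomics (not the kernel API, just the idea):

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int n_pages = 4;
    static atomic_int completions;

    static void retrieval_complete(int n)
    {
            /* One atomic RMW: exactly one caller sees the drop to <= 0. */
            if (atomic_fetch_sub_explicit(&n_pages, n,
                                          memory_order_relaxed) - n <= 0)
                    atomic_fetch_add(&completions, 1); /* "op complete" */
    }

    int main(void)
    {
            retrieval_complete(2);
            retrieval_complete(2);  /* only this call completes the op */
            printf("completions = %d\n", atomic_load(&completions));
            return 0;
    }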
+2-2
include/linux/ftrace.h
···777777extern void return_to_handler(void);778778779779extern int780780-ftrace_push_return_trace(unsigned long ret, unsigned long func, int *depth,781781- unsigned long frame_pointer, unsigned long *retp);780780+function_graph_enter(unsigned long ret, unsigned long func,781781+ unsigned long frame_pointer, unsigned long *retp);782782783783unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx,784784 unsigned long ret, unsigned long *retp);
+3-1
include/linux/hid-sensor-hub.h
···177177* @attr_usage_id: Attribute usage id as per spec178178* @report_id: Report id to look for179179* @flag: Synchronous or asynchronous read180180+* @is_signed: If true, then fields < 32 bits will be sign-extended180181*181182* Issues a synchronous or asynchronous read request for an input attribute.182183* Returns data up to 32 bits.···191190int sensor_hub_input_attr_get_raw_value(struct hid_sensor_hub_device *hsdev,192191 u32 usage_id,193192 u32 attr_usage_id, u32 report_id,194194- enum sensor_hub_read_flags flag193193+ enum sensor_hub_read_flags flag,194194+ bool is_signed195195);196196197197/**
+2-30
include/linux/hid.h
···722722 * input will not be passed to raw_event unless hid_device_io_start is723723 * called.724724 *725725- * raw_event and event should return 0 on no action performed, 1 when no726726- * further processing should be done and negative on error725725+ * raw_event and event should return negative on error, any other value will726726+ * pass the event on to .event() typically return 0 for success.727727 *728728 * input_mapping shall return a negative value to completely ignore this usage729729 * (e.g. doubled or invalid usage), zero to continue with parsing of this···1138113811391139int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,11401140 int interrupt);11411141-11421142-11431143-/**11441144- * struct hid_scroll_counter - Utility class for processing high-resolution11451145- * scroll events.11461146- * @dev: the input device for which events should be reported.11471147- * @microns_per_hi_res_unit: the amount moved by the user's finger for each11481148- * high-resolution unit reported by the mouse, in11491149- * microns.11501150- * @resolution_multiplier: the wheel's resolution in high-resolution mode as a11511151- * multiple of its lower resolution. For example, if11521152- * moving the wheel by one "notch" would result in a11531153- * value of 1 in low-resolution mode but 8 in11541154- * high-resolution, the multiplier is 8.11551155- * @remainder: counts the number of high-resolution units moved since the last11561156- * low-resolution event (REL_WHEEL or REL_HWHEEL) was sent. Should11571157- * only be used by class methods.11581158- */11591159-struct hid_scroll_counter {11601160- struct input_dev *dev;11611161- int microns_per_hi_res_unit;11621162- int resolution_multiplier;11631163-11641164- int remainder;11651165-};11661166-11671167-void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter,11681168- int hi_res_value);1169114111701142/* HID quirks API */11711143unsigned long hid_lookup_quirk(const struct hid_device *hdev);
···324324 */325325static inline unsigned int nanddev_neraseblocks(const struct nand_device *nand)326326{327327- return (u64)nand->memorg.luns_per_target *328328- nand->memorg.eraseblocks_per_lun *329329- nand->memorg.pages_per_eraseblock;327327+ return nand->memorg.ntargets * nand->memorg.luns_per_target *328328+ nand->memorg.eraseblocks_per_lun;330329}331330332331/**···568569}569570570571/**571571- * nanddev_pos_next_eraseblock() - Move a position to the next page572572+ * nanddev_pos_next_page() - Move a position to the next page572573 * @nand: NAND device573574 * @pos: the position to update574575 *
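The old expression multiplied luns_per_target * eraseblocks_per_lun * pages_per_eraseblock, which counts pages within one target rather than eraseblocks in the device; the corrected formula is simply targets times LUNs times blocks. A worked example with a hypothetical geometry (numbers invented for illustration):

    ntargets = 2, luns_per_target = 1,
    eraseblocks_per_lun = 1024, pages_per_eraseblock = 64

    old: 1 * 1024 * 64 = 65536   -> pages in one target (wrong quantity)
    new: 2 * 1 * 1024  =  2048   -> eraseblocks in the whole device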
+2
include/linux/net_dim.h
···406406 }407407 /* fall through */408408 case NET_DIM_START_MEASURE:409409+ net_dim_sample(end_sample.event_ctr, end_sample.pkt_ctr, end_sample.byte_ctr,410410+ &dim->start_sample);409411 dim->state = NET_DIM_MEASURE_IN_PROGRESS;410412 break;411413 case NET_DIM_APPLY_NEW_PROFILE:
+20
include/linux/netdevice.h
···31903190#endif31913191}3192319231933193+/* Variant of netdev_tx_sent_queue() for drivers that are aware31943194+ * that they should not test BQL status themselves.31953195+ * We do want to change __QUEUE_STATE_STACK_XOFF only for the last31963196+ * skb of a batch.31973197+ * Returns true if the doorbell must be used to kick the NIC.31983198+ */31993199+static inline bool __netdev_tx_sent_queue(struct netdev_queue *dev_queue,32003200+ unsigned int bytes,32013201+ bool xmit_more)32023202+{32033203+ if (xmit_more) {32043204+#ifdef CONFIG_BQL32053205+ dql_queued(&dev_queue->dql, bytes);32063206+#endif32073207+ return netif_tx_queue_stopped(dev_queue);32083208+ }32093209+ netdev_tx_sent_queue(dev_queue, bytes);32103210+ return true;32113211+}32123212+31933213/**31943214 * netdev_sent_queue - report the number of bytes queued to hardware31953215 * @dev: network device
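The intended calling convention, per the comment: the driver passes its xmit_more hint and rings the doorbell only when the helper says so, so __QUEUE_STATE_STACK_XOFF is only touched for the last skb of a batch. A hypothetical driver TX tail (the hypothetical_* ring and doorbell helpers are made up):

    static netdev_tx_t hypothetical_start_xmit(struct sk_buff *skb,
                                               struct net_device *dev)
    {
            struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

            hypothetical_post_descriptor(skb);  /* place skb on the ring */

            /* Only the last skb of an xmit_more batch flips BQL state,
             * and only then is the (expensive) doorbell write issued. */
            if (__netdev_tx_sent_queue(txq, skb->len, skb->xmit_more))
                    hypothetical_ring_doorbell(dev);

            return NETDEV_TX_OK;
    }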
+1-1
include/linux/netfilter/ipset/ip_set.h
···314314extern ip_set_id_t ip_set_get_byname(struct net *net,315315 const char *name, struct ip_set **set);316316extern void ip_set_put_byindex(struct net *net, ip_set_id_t index);317317-extern const char *ip_set_name_byindex(struct net *net, ip_set_id_t index);317317+extern void ip_set_name_byindex(struct net *net, ip_set_id_t index, char *name);318318extern ip_set_id_t ip_set_nfnl_get_byindex(struct net *net, ip_set_id_t index);319319extern void ip_set_nfnl_put(struct net *net, ip_set_id_t index);320320
+2-2
include/linux/netfilter/ipset/ip_set_comment.h
···4343 rcu_assign_pointer(comment->c, c);4444}45454646-/* Used only when dumping a set, protected by rcu_read_lock_bh() */4646+/* Used only when dumping a set, protected by rcu_read_lock() */4747static inline int4848ip_set_put_comment(struct sk_buff *skb, const struct ip_set_comment *comment)4949{5050- struct ip_set_comment_rcu *c = rcu_dereference_bh(comment->c);5050+ struct ip_set_comment_rcu *c = rcu_dereference(comment->c);51515252 if (!c)5353 return 0;
···9090 *9191 * @buf_lock: spinlock to serialize access to @buf9292 * @buf: preallocated crash dump buffer9393- * @bufsize: size of @buf available for crash dump writes9393+ * @bufsize: size of @buf available for crash dump bytes (must match9494+ * the smallest number of bytes available for writing to a9595+ * backend entry, since compressed bytes don't take kindly9696+ * to being truncated)9497 *9598 * @read_mutex: serializes @open, @read, @close, and @erase callbacks9699 * @flags: bitfield of frontends the backend can accept writes for
-17
include/linux/ptrace.h
···6464#define PTRACE_MODE_NOAUDIT 0x046565#define PTRACE_MODE_FSCREDS 0x086666#define PTRACE_MODE_REALCREDS 0x106767-#define PTRACE_MODE_SCHED 0x206868-#define PTRACE_MODE_IBPB 0x4069677068/* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */7169#define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS)7270#define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS)7371#define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS)7472#define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_REALCREDS)7575-#define PTRACE_MODE_SPEC_IBPB (PTRACE_MODE_ATTACH_REALCREDS | PTRACE_MODE_IBPB)76737774/**7875 * ptrace_may_access - check whether the caller is permitted to access···8689 * process_vm_writev or ptrace (and should use the real credentials).8790 */8891extern bool ptrace_may_access(struct task_struct *task, unsigned int mode);8989-9090-/**9191- * ptrace_may_access - check whether the caller is permitted to access9292- * a target task.9393- * @task: target task9494- * @mode: selects type of access and caller credentials9595- *9696- * Returns true on success, false on denial.9797- *9898- * Similar to ptrace_may_access(). Only to be called from context switch9999- * code. Does not call into audit and the regular LSM hooks due to locking100100- * constraints.101101- */102102-extern bool ptrace_may_access_sched(struct task_struct *task, unsigned int mode);1039210493static inline int ptrace_reparented(struct task_struct *child)10594{
+10
include/linux/sched.h
···11161116#ifdef CONFIG_FUNCTION_GRAPH_TRACER11171117 /* Index of current stored address in ret_stack: */11181118 int curr_ret_stack;11191119+ int curr_ret_depth;1119112011201121 /* Stack of return addresses for return function tracing: */11211122 struct ftrace_ret_stack *ret_stack;···14541453#define PFA_SPREAD_SLAB 2 /* Spread some slab caches over cpuset */14551454#define PFA_SPEC_SSB_DISABLE 3 /* Speculative Store Bypass disabled */14561455#define PFA_SPEC_SSB_FORCE_DISABLE 4 /* Speculative Store Bypass force disabled*/14561456+#define PFA_SPEC_IB_DISABLE 5 /* Indirect branch speculation restricted */14571457+#define PFA_SPEC_IB_FORCE_DISABLE 6 /* Indirect branch speculation permanently restricted */1457145814581459#define TASK_PFA_TEST(name, func) \14591460 static inline bool task_##func(struct task_struct *p) \···1486148314871484TASK_PFA_TEST(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)14881485TASK_PFA_SET(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)14861486+14871487+TASK_PFA_TEST(SPEC_IB_DISABLE, spec_ib_disable)14881488+TASK_PFA_SET(SPEC_IB_DISABLE, spec_ib_disable)14891489+TASK_PFA_CLEAR(SPEC_IB_DISABLE, spec_ib_disable)14901490+14911491+TASK_PFA_TEST(SPEC_IB_FORCE_DISABLE, spec_ib_force_disable)14921492+TASK_PFA_SET(SPEC_IB_FORCE_DISABLE, spec_ib_force_disable)1489149314901494static inline void14911495current_restore_flags(unsigned long orig_flags, unsigned long flags)
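These per-task flags back the indirect-branch variant of the speculation-control prctl(). Under a prctl-style mode, a task opts into the mitigation roughly like this from userspace (error handling trimmed; needs a 4.20+ linux/prctl.h for the constants):

    #include <sys/prctl.h>
    #include <linux/prctl.h>
    #include <stdio.h>

    int main(void)
    {
            /* Restrict indirect branch speculation for this task (sets
             * PFA_SPEC_IB_DISABLE; the state is inherited over fork). */
            if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                      PR_SPEC_DISABLE, 0, 0))
                    perror("PR_SET_SPECULATION_CTRL");

            /* Returns a bitmask, e.g. PR_SPEC_PRCTL | PR_SPEC_DISABLE. */
            printf("state = 0x%lx\n",
                   (long)prctl(PR_GET_SPECULATION_CTRL,
                               PR_SPEC_INDIRECT_BRANCH, 0, 0, 0));
            return 0;
    }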
···196196 u32 rcv_tstamp; /* timestamp of last received ACK (for keepalives) */197197 u32 lsndtime; /* timestamp of last sent data packet (for restart window) */198198 u32 last_oow_ack_time; /* timestamp of last out-of-window ACK */199199+ u32 compressed_ack_rcv_nxt;199200200201 u32 tsoffset; /* timestamp offset */201202
+2-2
include/linux/tracehook.h
···8383 * tracehook_report_syscall_entry - task is about to attempt a system call8484 * @regs: user register state of current task8585 *8686- * This will be called if %TIF_SYSCALL_TRACE has been set, when the8787- * current task has just entered the kernel for a system call.8686+ * This will be called if %TIF_SYSCALL_TRACE or %TIF_SYSCALL_EMU have been set,8787+ * when the current task has just entered the kernel for a system call.8888 * Full user register state is available here. Changing the values8989 * in @regs can affect the system call number and arguments to be tried.9090 * It is safe to block here, preventing the system call from beginning.
+3-3
include/linux/tracepoint.h
···166166 struct tracepoint_func *it_func_ptr; \167167 void *it_func; \168168 void *__data; \169169- int __maybe_unused idx = 0; \169169+ int __maybe_unused __idx = 0; \170170 \171171 if (!(cond)) \172172 return; \···182182 * doesn't work from the idle path. \183183 */ \184184 if (rcuidle) { \185185- idx = srcu_read_lock_notrace(&tracepoint_srcu); \185185+ __idx = srcu_read_lock_notrace(&tracepoint_srcu);\186186 rcu_irq_enter_irqson(); \187187 } \188188 \···198198 \199199 if (rcuidle) { \200200 rcu_irq_exit_irqson(); \201201- srcu_read_unlock_notrace(&tracepoint_srcu, idx);\201201+ srcu_read_unlock_notrace(&tracepoint_srcu, __idx);\202202 } \203203 \204204 preempt_enable_notrace(); \
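The rename guards against a classic macro-hygiene bug: the __DO_TRACE body declared a local named idx, so any tracepoint argument expression that itself mentioned a variable called idx silently bound to the macro's local instead of the caller's. A freestanding illustration of the hazard:

    #include <stdio.h>

    /* A macro that (badly) declares a local with a common name. */
    #define LOG_BAD(expr) \
            do { int idx = 0; printf("%d\n", (expr) + idx); } while (0)
    /* The fixed style: a name no caller will plausibly use. */
    #define LOG_GOOD(expr) \
            do { int __idx = 0; printf("%d\n", (expr) + __idx); } while (0)

    int main(void)
    {
            int idx = 42;
            LOG_BAD(idx);   /* prints 0 - caller's idx is shadowed */
            LOG_GOOD(idx);  /* prints 42 */
            return 0;
    }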
+3
include/linux/usb/quirks.h
···6666/* Device needs a pause after every control message. */6767#define USB_QUIRK_DELAY_CTRL_MSG BIT(13)68686969+/* Hub needs extra delay after resetting its port. */7070+#define USB_QUIRK_HUB_SLOW_RESET BIT(14)7171+6972#endif /* __LINUX_USB_QUIRKS_H */
+203-64
include/linux/xarray.h
···289289void xa_init_flags(struct xarray *, gfp_t flags);290290void *xa_load(struct xarray *, unsigned long index);291291void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);292292-void *xa_cmpxchg(struct xarray *, unsigned long index,293293- void *old, void *entry, gfp_t);294294-int xa_reserve(struct xarray *, unsigned long index, gfp_t);292292+void *xa_erase(struct xarray *, unsigned long index);295293void *xa_store_range(struct xarray *, unsigned long first, unsigned long last,296294 void *entry, gfp_t);297295bool xa_get_mark(struct xarray *, unsigned long index, xa_mark_t);···339341static inline bool xa_marked(const struct xarray *xa, xa_mark_t mark)340342{341343 return xa->xa_flags & XA_FLAGS_MARK(mark);342342-}343343-344344-/**345345- * xa_erase() - Erase this entry from the XArray.346346- * @xa: XArray.347347- * @index: Index of entry.348348- *349349- * This function is the equivalent of calling xa_store() with %NULL as350350- * the third argument. The XArray does not need to allocate memory, so351351- * the user does not need to provide GFP flags.352352- *353353- * Context: Process context. Takes and releases the xa_lock.354354- * Return: The entry which used to be at this index.355355- */356356-static inline void *xa_erase(struct xarray *xa, unsigned long index)357357-{358358- return xa_store(xa, index, NULL, 0);359359-}360360-361361-/**362362- * xa_insert() - Store this entry in the XArray unless another entry is363363- * already present.364364- * @xa: XArray.365365- * @index: Index into array.366366- * @entry: New entry.367367- * @gfp: Memory allocation flags.368368- *369369- * If you would rather see the existing entry in the array, use xa_cmpxchg().370370- * This function is for users who don't care what the entry is, only that371371- * one is present.372372- *373373- * Context: Process context. Takes and releases the xa_lock.374374- * May sleep if the @gfp flags permit.375375- * Return: 0 if the store succeeded. -EEXIST if another entry was present.376376- * -ENOMEM if memory could not be allocated.377377- */378378-static inline int xa_insert(struct xarray *xa, unsigned long index,379379- void *entry, gfp_t gfp)380380-{381381- void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);382382- if (!curr)383383- return 0;384384- if (xa_is_err(curr))385385- return xa_err(curr);386386- return -EEXIST;387387-}388388-389389-/**390390- * xa_release() - Release a reserved entry.391391- * @xa: XArray.392392- * @index: Index of entry.393393- *394394- * After calling xa_reserve(), you can call this function to release the395395- * reservation. 
If the entry at @index has been stored to, this function396396- * will do nothing.397397- */398398-static inline void xa_release(struct xarray *xa, unsigned long index)399399-{400400- xa_cmpxchg(xa, index, NULL, NULL, 0);401344}402345403346/**···394455void *__xa_cmpxchg(struct xarray *, unsigned long index, void *old,395456 void *entry, gfp_t);396457int __xa_alloc(struct xarray *, u32 *id, u32 max, void *entry, gfp_t);458458+int __xa_reserve(struct xarray *, unsigned long index, gfp_t);397459void __xa_set_mark(struct xarray *, unsigned long index, xa_mark_t);398460void __xa_clear_mark(struct xarray *, unsigned long index, xa_mark_t);399461···427487}428488429489/**490490+ * xa_store_bh() - Store this entry in the XArray.491491+ * @xa: XArray.492492+ * @index: Index into array.493493+ * @entry: New entry.494494+ * @gfp: Memory allocation flags.495495+ *496496+ * This function is like calling xa_store() except it disables softirqs497497+ * while holding the array lock.498498+ *499499+ * Context: Any context. Takes and releases the xa_lock while500500+ * disabling softirqs.501501+ * Return: The entry which used to be at this index.502502+ */503503+static inline void *xa_store_bh(struct xarray *xa, unsigned long index,504504+ void *entry, gfp_t gfp)505505+{506506+ void *curr;507507+508508+ xa_lock_bh(xa);509509+ curr = __xa_store(xa, index, entry, gfp);510510+ xa_unlock_bh(xa);511511+512512+ return curr;513513+}514514+515515+/**516516+ * xa_store_irq() - Erase this entry from the XArray.517517+ * @xa: XArray.518518+ * @index: Index into array.519519+ * @entry: New entry.520520+ * @gfp: Memory allocation flags.521521+ *522522+ * This function is like calling xa_store() except it disables interrupts523523+ * while holding the array lock.524524+ *525525+ * Context: Process context. Takes and releases the xa_lock while526526+ * disabling interrupts.527527+ * Return: The entry which used to be at this index.528528+ */529529+static inline void *xa_store_irq(struct xarray *xa, unsigned long index,530530+ void *entry, gfp_t gfp)531531+{532532+ void *curr;533533+534534+ xa_lock_irq(xa);535535+ curr = __xa_store(xa, index, entry, gfp);536536+ xa_unlock_irq(xa);537537+538538+ return curr;539539+}540540+541541+/**430542 * xa_erase_bh() - Erase this entry from the XArray.431543 * @xa: XArray.432544 * @index: Index of entry.···487495 * the third argument. The XArray does not need to allocate memory, so488496 * the user does not need to provide GFP flags.489497 *490490- * Context: Process context. Takes and releases the xa_lock while498498+ * Context: Any context. Takes and releases the xa_lock while491499 * disabling softirqs.492500 * Return: The entry which used to be at this index.493501 */···524532 xa_unlock_irq(xa);525533526534 return entry;535535+}536536+537537+/**538538+ * xa_cmpxchg() - Conditionally replace an entry in the XArray.539539+ * @xa: XArray.540540+ * @index: Index into array.541541+ * @old: Old value to test against.542542+ * @entry: New value to place in array.543543+ * @gfp: Memory allocation flags.544544+ *545545+ * If the entry at @index is the same as @old, replace it with @entry.546546+ * If the return value is equal to @old, then the exchange was successful.547547+ *548548+ * Context: Any context. Takes and releases the xa_lock. 
May sleep549549+ * if the @gfp flags permit.550550+ * Return: The old value at this index or xa_err() if an error happened.551551+ */552552+static inline void *xa_cmpxchg(struct xarray *xa, unsigned long index,553553+ void *old, void *entry, gfp_t gfp)554554+{555555+ void *curr;556556+557557+ xa_lock(xa);558558+ curr = __xa_cmpxchg(xa, index, old, entry, gfp);559559+ xa_unlock(xa);560560+561561+ return curr;562562+}563563+564564+/**565565+ * xa_insert() - Store this entry in the XArray unless another entry is566566+ * already present.567567+ * @xa: XArray.568568+ * @index: Index into array.569569+ * @entry: New entry.570570+ * @gfp: Memory allocation flags.571571+ *572572+ * If you would rather see the existing entry in the array, use xa_cmpxchg().573573+ * This function is for users who don't care what the entry is, only that574574+ * one is present.575575+ *576576+ * Context: Process context. Takes and releases the xa_lock.577577+ * May sleep if the @gfp flags permit.578578+ * Return: 0 if the store succeeded. -EEXIST if another entry was present.579579+ * -ENOMEM if memory could not be allocated.580580+ */581581+static inline int xa_insert(struct xarray *xa, unsigned long index,582582+ void *entry, gfp_t gfp)583583+{584584+ void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);585585+ if (!curr)586586+ return 0;587587+ if (xa_is_err(curr))588588+ return xa_err(curr);589589+ return -EEXIST;527590}528591529592/**···622575 * Updates the @id pointer with the index, then stores the entry at that623576 * index. A concurrent lookup will not see an uninitialised @id.624577 *625625- * Context: Process context. Takes and releases the xa_lock while578578+ * Context: Any context. Takes and releases the xa_lock while626579 * disabling softirqs. May sleep if the @gfp flags permit.627580 * Return: 0 on success, -ENOMEM if memory allocation fails or -ENOSPC if628581 * there is no more space in the XArray.···666619 xa_unlock_irq(xa);667620668621 return err;622622+}623623+624624+/**625625+ * xa_reserve() - Reserve this index in the XArray.626626+ * @xa: XArray.627627+ * @index: Index into array.628628+ * @gfp: Memory allocation flags.629629+ *630630+ * Ensures there is somewhere to store an entry at @index in the array.631631+ * If there is already something stored at @index, this function does632632+ * nothing. If there was nothing there, the entry is marked as reserved.633633+ * Loading from a reserved entry returns a %NULL pointer.634634+ *635635+ * If you do not use the entry that you have reserved, call xa_release()636636+ * or xa_erase() to free any unnecessary memory.637637+ *638638+ * Context: Any context. Takes and releases the xa_lock.639639+ * May sleep if the @gfp flags permit.640640+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.641641+ */642642+static inline643643+int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)644644+{645645+ int ret;646646+647647+ xa_lock(xa);648648+ ret = __xa_reserve(xa, index, gfp);649649+ xa_unlock(xa);650650+651651+ return ret;652652+}653653+654654+/**655655+ * xa_reserve_bh() - Reserve this index in the XArray.656656+ * @xa: XArray.657657+ * @index: Index into array.658658+ * @gfp: Memory allocation flags.659659+ *660660+ * A softirq-disabling version of xa_reserve().661661+ *662662+ * Context: Any context. 
Takes and releases the xa_lock while663663+ * disabling softirqs.664664+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.665665+ */666666+static inline667667+int xa_reserve_bh(struct xarray *xa, unsigned long index, gfp_t gfp)668668+{669669+ int ret;670670+671671+ xa_lock_bh(xa);672672+ ret = __xa_reserve(xa, index, gfp);673673+ xa_unlock_bh(xa);674674+675675+ return ret;676676+}677677+678678+/**679679+ * xa_reserve_irq() - Reserve this index in the XArray.680680+ * @xa: XArray.681681+ * @index: Index into array.682682+ * @gfp: Memory allocation flags.683683+ *684684+ * An interrupt-disabling version of xa_reserve().685685+ *686686+ * Context: Process context. Takes and releases the xa_lock while687687+ * disabling interrupts.688688+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.689689+ */690690+static inline691691+int xa_reserve_irq(struct xarray *xa, unsigned long index, gfp_t gfp)692692+{693693+ int ret;694694+695695+ xa_lock_irq(xa);696696+ ret = __xa_reserve(xa, index, gfp);697697+ xa_unlock_irq(xa);698698+699699+ return ret;700700+}701701+702702+/**703703+ * xa_release() - Release a reserved entry.704704+ * @xa: XArray.705705+ * @index: Index of entry.706706+ *707707+ * After calling xa_reserve(), you can call this function to release the708708+ * reservation. If the entry at @index has been stored to, this function709709+ * will do nothing.710710+ */711711+static inline void xa_release(struct xarray *xa, unsigned long index)712712+{713713+ xa_cmpxchg(xa, index, NULL, NULL, 0);669714}670715671716/* Everything below here is the Advanced API. Proceed with caution. */
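Taken together, the reshuffle turns xa_store(), xa_cmpxchg() and xa_reserve() into inline lock-cycling wrappers around the __xa_* primitives and adds _bh/_irq variants of each. Basic usage is unchanged; a short in-kernel sketch (hypothetical code, process context assumed):

    static DEFINE_XARRAY(example_array);        /* hypothetical array */

    static int example(void *item)
    {
            void *old;
            int err;

            err = xa_reserve(&example_array, 1, GFP_KERNEL); /* hold slot 1 */
            if (err)
                    return err;

            old = xa_store(&example_array, 0, item, GFP_KERNEL);
            if (xa_is_err(old))
                    return xa_err(old);

            WARN_ON(xa_load(&example_array, 1)); /* reserved loads as NULL */
            xa_release(&example_array, 1);       /* give the slot back */
            xa_erase(&example_array, 0);
            return 0;
    }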
···107107#ifdef CREATE_TRACE_POINTS108108static inline long __trace_sched_switch_state(bool preempt, struct task_struct *p)109109{110110+ unsigned int state;111111+110112#ifdef CONFIG_SCHED_DEBUG111113 BUG_ON(p != current);112114#endif /* CONFIG_SCHED_DEBUG */···120118 if (preempt)121119 return TASK_REPORT_MAX;122120123123- return 1 << task_state_index(p);121121+ /*122122+ * task_state_index() uses fls() and returns a value in the 0-8 range.123123+ * Decrement it by 1 (except for TASK_RUNNING, i.e. 0) before using124124+ * it in the left-shift operation to get the correct task->state125125+ * mapping.126126+ */127127+ state = task_state_index(p);128128+129129+ return state ? (1 << (state - 1)) : state;124130}125131#endif /* CREATE_TRACE_POINTS */126132
-10
include/uapi/linux/input-event-codes.h
···716716 * the situation described above.717717 */718718#define REL_RESERVED 0x0a719719-#define REL_WHEEL_HI_RES 0x0b720719#define REL_MAX 0x0f721720#define REL_CNT (REL_MAX+1)722721···751752#define ABS_VOLUME 0x20752753753754#define ABS_MISC 0x28754754-755755-/*756756- * 0x2e is reserved and should not be used in input drivers.757757- * It was used by HID as ABS_MISC+6 and userspace needs to detect if758758- * the next ABS_* event is correct or is just ABS_MISC + n.759759- * We define here ABS_RESERVED so userspace can rely on it and detect760760- * the situation described above.761761- */762762-#define ABS_RESERVED 0x2e763755764756#define ABS_MT_SLOT 0x2f /* MT slot being modified */765757#define ABS_MT_TOUCH_MAJOR 0x30 /* Major axis of touching ellipse */
+9-9
include/uapi/linux/kfd_ioctl.h
···8383};84848585struct kfd_ioctl_get_queue_wave_state_args {8686- uint64_t ctl_stack_address; /* to KFD */8787- uint32_t ctl_stack_used_size; /* from KFD */8888- uint32_t save_area_used_size; /* from KFD */8989- uint32_t queue_id; /* to KFD */9090- uint32_t pad;8686+ __u64 ctl_stack_address; /* to KFD */8787+ __u32 ctl_stack_used_size; /* from KFD */8888+ __u32 save_area_used_size; /* from KFD */8989+ __u32 queue_id; /* to KFD */9090+ __u32 pad;9191};92929393/* For kfd_ioctl_set_memory_policy_args.default_policy and alternate_policy */···255255256256/* hw exception data */257257struct kfd_hsa_hw_exception_data {258258- uint32_t reset_type;259259- uint32_t reset_cause;260260- uint32_t memory_lost;261261- uint32_t gpu_id;258258+ __u32 reset_type;259259+ __u32 reset_cause;260260+ __u32 memory_lost;261261+ __u32 gpu_id;262262};263263264264/* Event data */
···42424343extern unsigned long *xen_contiguous_bitmap;44444545-#ifdef CONFIG_XEN_PV4545+#if defined(CONFIG_XEN_PV) || defined(CONFIG_ARM) || defined(CONFIG_ARM64)4646int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,4747 unsigned int address_bits,4848 dma_addr_t *dma_handle);49495050void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order);5151-5252-int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr,5353- xen_pfn_t *pfn, int nr, int *err_ptr, pgprot_t prot,5454- unsigned int domid, bool no_translate, struct page **pages);5551#else5652static inline int xen_create_contiguous_region(phys_addr_t pstart,5753 unsigned int order,···59636064static inline void xen_destroy_contiguous_region(phys_addr_t pstart,6165 unsigned int order) { }6666+#endif62676868+#if defined(CONFIG_XEN_PV)6969+int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr,7070+ xen_pfn_t *pfn, int nr, int *err_ptr, pgprot_t prot,7171+ unsigned int domid, bool no_translate, struct page **pages);7272+#else6373static inline int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr,6474 xen_pfn_t *pfn, int nr, int *err_ptr,6575 pgprot_t prot, unsigned int domid,
+9
init/Kconfig
···509509510510 Say N if unsure.511511512512+config PSI_DEFAULT_DISABLED513513+ bool "Require boot parameter to enable pressure stall information tracking"514514+ default n515515+ depends on PSI516516+ help517517+ If set, pressure stall information tracking will be disabled518518+ by default but can be enabled by passing psi=1 on the519519+ kernel command line during boot.520520+512521endmenu # "CPU/Task time and stats accounting"513522514523config CPU_ISOLATION
···553553int bpf_get_kallsym(unsigned int symnum, unsigned long *value, char *type,554554 char *sym)555555{556556- unsigned long symbol_start, symbol_end;557556 struct bpf_prog_aux *aux;558557 unsigned int it = 0;559558 int ret = -ERANGE;···565566 if (it++ != symnum)566567 continue;567568568568- bpf_get_prog_addr_region(aux->prog, &symbol_start, &symbol_end);569569 bpf_get_prog_name(aux->prog, sym);570570571571- *value = symbol_start;571571+ *value = (unsigned long)aux->prog->bpf_func;572572 *type = BPF_SYM_ELF_TYPE;573573574574 ret = 0;···670672 }671673672674 bpf_prog_unlock_free(fp);675675+}676676+677677+int bpf_jit_get_func_addr(const struct bpf_prog *prog,678678+ const struct bpf_insn *insn, bool extra_pass,679679+ u64 *func_addr, bool *func_addr_fixed)680680+{681681+ s16 off = insn->off;682682+ s32 imm = insn->imm;683683+ u8 *addr;684684+685685+ *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;686686+ if (!*func_addr_fixed) {687687+ /* Place-holder address till the last pass has collected688688+ * all addresses for JITed subprograms in which case we689689+ * can pick them up from prog->aux.690690+ */691691+ if (!extra_pass)692692+ addr = NULL;693693+ else if (prog->aux->func &&694694+ off >= 0 && off < prog->aux->func_cnt)695695+ addr = (u8 *)prog->aux->func[off]->bpf_func;696696+ else697697+ return -EINVAL;698698+ } else {699699+ /* Address of a BPF helper call. Since part of the core700700+ * kernel, it's always at a fixed location. __bpf_call_base701701+ * and the helper with imm relative to it are both in core702702+ * kernel.703703+ */704704+ addr = (u8 *)__bpf_call_base + imm;705705+ }706706+707707+ *func_addr = (unsigned long)addr;708708+ return 0;673709}674710675711static int bpf_jit_blind_insn(const struct bpf_insn *from,
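The new bpf_jit_get_func_addr() centralizes address resolution that each arch JIT previously open-coded. A sketch of how a JIT's call-emission path would consume it (struct jit_ctx and emit_call() are stand-ins for the arch-specific pieces, not real interfaces):

    /* Hypothetical arch-JIT fragment. */
    static int emit_bpf_call(struct jit_ctx *ctx, const struct bpf_prog *prog,
                             const struct bpf_insn *insn, bool extra_pass)
    {
            bool fixed;     /* true: kernel helper; false: BPF subprog */
            u64 addr;       /* 0 until the extra pass for subprog calls */
            int err;

            err = bpf_jit_get_func_addr(prog, insn, extra_pass,
                                        &addr, &fixed);
            if (err)
                    return err;

            /* A fixed helper address can be emitted directly; a not yet
             * known subprog address needs a patchable call sequence. */
            return emit_call(ctx, addr, fixed);
    }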
···20782078 info.jited_prog_len = 0;20792079 info.xlated_prog_len = 0;20802080 info.nr_jited_ksyms = 0;20812081+ info.nr_jited_func_lens = 0;20812082 goto done;20822083 }20832084···21592158 }2160215921612160 ulen = info.nr_jited_ksyms;21622162- info.nr_jited_ksyms = prog->aux->func_cnt;21612161+ info.nr_jited_ksyms = prog->aux->func_cnt ? : 1;21632162 if (info.nr_jited_ksyms && ulen) {21642163 if (bpf_dump_raw_ok()) {21642164+ unsigned long ksym_addr;21652165 u64 __user *user_ksyms;21662166- ulong ksym_addr;21672166 u32 i;2168216721692168 /* copy the address of the kernel symbol···21712170 */21722171 ulen = min_t(u32, info.nr_jited_ksyms, ulen);21732172 user_ksyms = u64_to_user_ptr(info.jited_ksyms);21742174- for (i = 0; i < ulen; i++) {21752175- ksym_addr = (ulong) prog->aux->func[i]->bpf_func;21762176- ksym_addr &= PAGE_MASK;21772177- if (put_user((u64) ksym_addr, &user_ksyms[i]))21732173+ if (prog->aux->func_cnt) {21742174+ for (i = 0; i < ulen; i++) {21752175+ ksym_addr = (unsigned long)21762176+ prog->aux->func[i]->bpf_func;21772177+ if (put_user((u64) ksym_addr,21782178+ &user_ksyms[i]))21792179+ return -EFAULT;21802180+ }21812181+ } else {21822182+ ksym_addr = (unsigned long) prog->bpf_func;21832183+ if (put_user((u64) ksym_addr, &user_ksyms[0]))21782184 return -EFAULT;21792185 }21802186 } else {···21902182 }2191218321922184 ulen = info.nr_jited_func_lens;21932193- info.nr_jited_func_lens = prog->aux->func_cnt;21852185+ info.nr_jited_func_lens = prog->aux->func_cnt ? : 1;21942186 if (info.nr_jited_func_lens && ulen) {21952187 if (bpf_dump_raw_ok()) {21962188 u32 __user *user_lens;···21992191 /* copy the JITed image lengths for each function */22002192 ulen = min_t(u32, info.nr_jited_func_lens, ulen);22012193 user_lens = u64_to_user_ptr(info.jited_func_lens);22022202- for (i = 0; i < ulen; i++) {22032203- func_len = prog->aux->func[i]->jited_len;22042204- if (put_user(func_len, &user_lens[i]))21942194+ if (prog->aux->func_cnt) {21952195+ for (i = 0; i < ulen; i++) {21962196+ func_len =21972197+ prog->aux->func[i]->jited_len;21982198+ if (put_user(func_len, &user_lens[i]))21992199+ return -EFAULT;22002200+ }22012201+ } else {22022202+ func_len = prog->jited_len;22032203+ if (put_user(func_len, &user_lens[0]))22052204 return -EFAULT;22062205 }22072206 } else {
+1-1
kernel/bpf/verifier.c
···56505650 return;56515651 /* NOTE: fake 'exit' subprog should be updated as well. */56525652 for (i = 0; i <= env->subprog_cnt; i++) {56535653- if (env->subprog_info[i].start < off)56535653+ if (env->subprog_info[i].start <= off)56545654 continue;56555655 env->subprog_info[i].start += len - 1;56565656 }
+9-6
kernel/cpu.c
···1010#include <linux/sched/signal.h>1111#include <linux/sched/hotplug.h>1212#include <linux/sched/task.h>1313+#include <linux/sched/smt.h>1314#include <linux/unistd.h>1415#include <linux/cpu.h>1516#include <linux/oom.h>···367366}368367369368#endif /* CONFIG_HOTPLUG_CPU */369369+370370+/*371371+ * Architectures that need SMT-specific errata handling during SMT hotplug372372+ * should override this.373373+ */374374+void __weak arch_smt_update(void) { }370375371376#ifdef CONFIG_HOTPLUG_SMT372377enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED;···10181011 * concurrent CPU hotplug via cpu_add_remove_lock.10191012 */10201013 lockup_detector_cleanup();10141014+ arch_smt_update();10211015 return ret;10221016}10231017···11471139 ret = cpuhp_up_callbacks(cpu, st, target);11481140out:11491141 cpus_write_unlock();11421142+ arch_smt_update();11501143 return ret;11511144}11521145···20632054 /* Tell user space about the state change */20642055 kobject_uevent(&dev->kobj, KOBJ_ONLINE);20652056}20662066-20672067-/*20682068- * Architectures that need SMT-specific errata handling during SMT hotplug20692069- * should override this.20702070- */20712071-void __weak arch_smt_update(void) { };2072205720732058static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)20742059{
+2-2
kernel/debug/kdb/kdb_bt.c
···179179 kdb_printf("no process for cpu %ld\n", cpu);180180 return 0;181181 }182182- sprintf(buf, "btt 0x%p\n", KDB_TSK(cpu));182182+ sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu));183183 kdb_parse(buf);184184 return 0;185185 }186186 kdb_printf("btc: cpu status: ");187187 kdb_parse("cpu\n");188188 for_each_online_cpu(cpu) {189189- sprintf(buf, "btt 0x%p\n", KDB_TSK(cpu));189189+ sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu));190190 kdb_parse(buf);191191 touch_nmi_watchdog();192192 }
+9-6
kernel/debug/kdb/kdb_io.c
···216216 int count;217217 int i;218218 int diag, dtab_count;219219- int key;219219+ int key, buf_size, ret;220220221221222222 diag = kdbgetintenv("DTABCOUNT", &dtab_count);···336336 else337337 p_tmp = tmpbuffer;338338 len = strlen(p_tmp);339339- count = kallsyms_symbol_complete(p_tmp,340340- sizeof(tmpbuffer) -341341- (p_tmp - tmpbuffer));339339+ buf_size = sizeof(tmpbuffer) - (p_tmp - tmpbuffer);340340+ count = kallsyms_symbol_complete(p_tmp, buf_size);342341 if (tab == 2 && count > 0) {343342 kdb_printf("\n%d symbols are found.", count);344343 if (count > dtab_count) {···349350 }350351 kdb_printf("\n");351352 for (i = 0; i < count; i++) {352352- if (WARN_ON(!kallsyms_symbol_next(p_tmp, i)))353353+ ret = kallsyms_symbol_next(p_tmp, i, buf_size);354354+ if (WARN_ON(!ret))353355 break;354354- kdb_printf("%s ", p_tmp);356356+ if (ret != -E2BIG)357357+ kdb_printf("%s ", p_tmp);358358+ else359359+ kdb_printf("%s... ", p_tmp);355360 *(p_tmp + len) = '\0';356361 }357362 if (i >= dtab_count)
+2-2
kernel/debug/kdb/kdb_keyboard.c
···173173 case KT_LATIN:174174 if (isprint(keychar))175175 break; /* printable characters */176176- /* drop through */176176+ /* fall through */177177 case KT_SPEC:178178 if (keychar == K_ENTER)179179 break;180180- /* drop through */180180+ /* fall through */181181 default:182182 return -1; /* ignore unprintables */183183 }
···8383 unsigned long sym_start;8484 unsigned long sym_end;8585 } kdb_symtab_t;8686-extern int kallsyms_symbol_next(char *prefix_name, int flag);8686+extern int kallsyms_symbol_next(char *prefix_name, int flag, int buf_size);8787extern int kallsyms_symbol_complete(char *prefix_name, int max_len);88888989/* Exported Symbols for kernel loadable modules to use. */
+14-14
kernel/debug/kdb/kdb_support.c
···4040int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)4141{4242 if (KDB_DEBUG(AR))4343- kdb_printf("kdbgetsymval: symname=%s, symtab=%p\n", symname,4343+ kdb_printf("kdbgetsymval: symname=%s, symtab=%px\n", symname,4444 symtab);4545 memset(symtab, 0, sizeof(*symtab));4646 symtab->sym_start = kallsyms_lookup_name(symname);···8888 char *knt1 = NULL;89899090 if (KDB_DEBUG(AR))9191- kdb_printf("kdbnearsym: addr=0x%lx, symtab=%p\n", addr, symtab);9191+ kdb_printf("kdbnearsym: addr=0x%lx, symtab=%px\n", addr, symtab);9292 memset(symtab, 0, sizeof(*symtab));93939494 if (addr < 4096)···149149 symtab->mod_name = "kernel";150150 if (KDB_DEBUG(AR))151151 kdb_printf("kdbnearsym: returns %d symtab->sym_start=0x%lx, "152152- "symtab->mod_name=%p, symtab->sym_name=%p (%s)\n", ret,152152+ "symtab->mod_name=%px, symtab->sym_name=%px (%s)\n", ret,153153 symtab->sym_start, symtab->mod_name, symtab->sym_name,154154 symtab->sym_name);155155···221221 * Parameters:222222 * prefix_name prefix of a symbol name to lookup223223 * flag 0 means search from the head, 1 means continue search.224224+ * buf_size maximum length that can be written to prefix_name225225+ * buffer224226 * Returns:225227 * 1 if a symbol matches the given prefix.226228 * 0 if no string found227229 */228228-int kallsyms_symbol_next(char *prefix_name, int flag)230230+int kallsyms_symbol_next(char *prefix_name, int flag, int buf_size)229231{230232 int prefix_len = strlen(prefix_name);231233 static loff_t pos;···237235 pos = 0;238236239237 while ((name = kdb_walk_kallsyms(&pos))) {240240- if (strncmp(name, prefix_name, prefix_len) == 0) {241241- strncpy(prefix_name, name, strlen(name)+1);242242- return 1;243243- }238238+ if (!strncmp(name, prefix_name, prefix_len))239239+ return strscpy(prefix_name, name, buf_size);244240 }245241 return 0;246242}···432432 *word = w8;433433 break;434434 }435435- /* drop through */435435+ /* fall through */436436 default:437437 diag = KDB_BADWIDTH;438438 kdb_printf("kdb_getphysword: bad width %ld\n", (long) size);···481481 *word = w8;482482 break;483483 }484484- /* drop through */484484+ /* fall through */485485 default:486486 diag = KDB_BADWIDTH;487487 kdb_printf("kdb_getword: bad width %ld\n", (long) size);···525525 diag = kdb_putarea(addr, w8);526526 break;527527 }528528- /* drop through */528528+ /* fall through */529529 default:530530 diag = KDB_BADWIDTH;531531 kdb_printf("kdb_putword: bad width %ld\n", (long) size);···887887 __func__, dah_first);888888 if (dah_first) {889889 h_used = (struct debug_alloc_header *)debug_alloc_pool;890890- kdb_printf("%s: h_used %p size %d\n", __func__, h_used,890890+ kdb_printf("%s: h_used %px size %d\n", __func__, h_used,891891 h_used->size);892892 }893893 do {894894 h_used = (struct debug_alloc_header *)895895 ((char *)h_free + dah_overhead + h_free->size);896896- kdb_printf("%s: h_used %p size %d caller %p\n",896896+ kdb_printf("%s: h_used %px size %d caller %px\n",897897 __func__, h_used, h_used->size, h_used->caller);898898 h_free = (struct debug_alloc_header *)899899 (debug_alloc_pool + h_free->next);···902902 ((char *)h_free + dah_overhead + h_free->size);903903 if ((char *)h_used - debug_alloc_pool !=904904 sizeof(debug_alloc_pool_aligned))905905- kdb_printf("%s: h_used %p size %d caller %p\n",905905+ kdb_printf("%s: h_used %px size %d caller %px\n",906906 __func__, h_used, h_used->size, h_used->caller);907907out:908908 spin_unlock(&dap_lock);
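Besides guaranteeing NUL termination, strscpy()'s return value is what lets the kdb_io.c hunk above print a "..." marker: it returns the number of characters copied, or -E2BIG when the source had to be truncated to fit. A sketch of the contract (kernel API; resulting buffer contents shown in the comments):

    char buf[8];
    ssize_t ret;

    ret = strscpy(buf, "short", sizeof(buf));
    /* ret == 5, buf == "short" (always NUL-terminated) */

    ret = strscpy(buf, "much_too_long", sizeof(buf));
    /* ret == -E2BIG, buf == "much_to" - truncated but terminated,
     * unlike strncpy(), which neither terminates on overflow nor
     * reports that truncation happened */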
···829829 BUG_ON((uprobe->offset & ~PAGE_MASK) +830830 UPROBE_SWBP_INSN_SIZE > PAGE_SIZE);831831832832- smp_wmb(); /* pairs with rmb() in find_active_uprobe() */832832+ smp_wmb(); /* pairs with the smp_rmb() in handle_swbp() */833833 set_bit(UPROBE_COPY_INSN, &uprobe->flags);834834835835 out:···21782178 * After we hit the bp, _unregister + _register can install the21792179 * new and not-yet-analyzed uprobe at the same address, restart.21802180 */21812181- smp_rmb(); /* pairs with wmb() in install_breakpoint() */21822181 if (unlikely(!test_bit(UPROBE_COPY_INSN, &uprobe->flags)))21832182 goto out;21832183+21842184+ /*21852185+ * Pairs with the smp_wmb() in prepare_uprobe().21862186+ *21872187+ * Guarantees that if we see the UPROBE_COPY_INSN bit set, then21882188+ * we must also see the stores to &uprobe->arch performed by the21892189+ * prepare_uprobe() call.21902190+ */21912191+ smp_rmb();2184219221852193 /* Tracing handlers use ->utask to communicate with fetch methods */21862194 if (!get_utask())
+2-2
kernel/kcov.c
···5656 struct task_struct *t;5757};58585959-static bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)5959+static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)6060{6161 unsigned int mode;6262···7878 return mode == needed_mode;7979}80808181-static unsigned long canonicalize_ip(unsigned long ip)8181+static notrace unsigned long canonicalize_ip(unsigned long ip)8282{8383#ifdef CONFIG_RANDOMIZE_BASE8484 ip -= kaslr_offset();
-10
kernel/ptrace.c
···261261262262static int ptrace_has_cap(struct user_namespace *ns, unsigned int mode)263263{264264- if (mode & PTRACE_MODE_SCHED)265265- return false;266266-267264 if (mode & PTRACE_MODE_NOAUDIT)268265 return has_ns_capability_noaudit(current, ns, CAP_SYS_PTRACE);269266 else···328331 !ptrace_has_cap(mm->user_ns, mode)))329332 return -EPERM;330333331331- if (mode & PTRACE_MODE_SCHED)332332- return 0;333334 return security_ptrace_access_check(task, mode);334334-}335335-336336-bool ptrace_may_access_sched(struct task_struct *task, unsigned int mode)337337-{338338- return __ptrace_may_access(task, mode | PTRACE_MODE_SCHED);339335}340336341337bool ptrace_may_access(struct task_struct *task, unsigned int mode)
+14-5
kernel/resource.c
···319319EXPORT_SYMBOL(release_resource);320320321321/**322322- * Finds the lowest iomem resource that covers part of [start..end]. The323323- * caller must specify start, end, flags, and desc (which may be322322+ * Finds the lowest iomem resource that covers part of [@start..@end]. The323323+ * caller must specify @start, @end, @flags, and @desc (which may be324324 * IORES_DESC_NONE).325325 *326326- * If a resource is found, returns 0 and *res is overwritten with the part327327- * of the resource that's within [start..end]; if none is found, returns328328- * -1.326326+ * If a resource is found, returns 0 and @*res is overwritten with the part327327+ * of the resource that's within [@start..@end]; if none is found, returns328328+ * -1, or -EINVAL if the parameters are invalid.329329 *330330 * This function walks the whole tree and not just first level children331331 * unless @first_lvl is true.332332+ *333333+ * @start: start address of the resource searched for334334+ * @end: end address of the same resource335335+ * @flags: flags which the resource must have336336+ * @desc: descriptor the resource must have337337+ * @first_lvl: walk only the first-level children, if set338338+ * @res: return pointer, if a resource is found332339 */333340static int find_next_iomem_res(resource_size_t start, resource_size_t end,334341 unsigned long flags, unsigned long desc,···406399 * @flags: I/O resource flags407400 * @start: start addr408401 * @end: end addr402402+ * @arg: function argument for the callback @func403403+ * @func: callback function that is called for each qualifying resource area409404 *410405 * NOTE: For a new descriptor search, define a new IORES_DESC in411406 * <linux/ioport.h> and set it in 'desc' of a target resource entry.
+15-9
kernel/sched/core.c
···

 #ifdef CONFIG_SCHED_SMT
     /*
-     * The sched_smt_present static key needs to be evaluated on every
-     * hotplug event because at boot time SMT might be disabled when
-     * the number of booted CPUs is limited.
-     *
-     * If then later a sibling gets hotplugged, then the key would stay
-     * off and SMT scheduling would never be functional.
+     * When going up, increment the number of cores with SMT present.
      */
-    if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
-        static_branch_enable_cpuslocked(&sched_smt_present);
+    if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+        static_branch_inc_cpuslocked(&sched_smt_present);
 #endif
     set_cpu_active(cpu, true);

···
      * Do sync before park smpboot threads to take care the rcu boost case.
      */
     synchronize_rcu_mult(call_rcu, call_rcu_sched);
+
+#ifdef CONFIG_SCHED_SMT
+    /*
+     * When going down, decrement the number of cores with SMT present.
+     */
+    if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+        static_branch_dec_cpuslocked(&sched_smt_present);
+#endif

     if (!sched_smp_initialized)
         return 0;
···
     /*
      * There's no userspace yet to cause hotplug operations; hence all the
      * CPU masks are stable and all blatant races in the below code cannot
-     * happen.
+     * happen. The hotplug lock is nevertheless taken to satisfy lockdep,
+     * but there won't be any contention on it.
      */
+    cpus_read_lock();
     mutex_lock(&sched_domains_mutex);
     sched_init_domains(cpu_active_mask);
     mutex_unlock(&sched_domains_mutex);
+    cpus_read_unlock();

     /* Move init over to a non-isolated CPU */
     if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_DOMAIN)) < 0)
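static_branch_inc()/static_branch_dec() turn the key into a reference count, which is what makes the symmetric hotplug paths above work: each core bumps the count when its second SMT sibling comes online (weight == 2 fires exactly once per core in each direction) and drops it when that sibling goes away, so the key only turns off once the last SMT-capable core is gone. A rough sketch of the refcount semantics (illustrative only, not the kernel's jump-label implementation):

/* Illustrative refcount semantics behind static_branch_inc()/_dec(). */
struct key_sketch {
    int count;  /* the patched branch reads "true" while count > 0 */
};

void key_inc(struct key_sketch *k)
{
    if (k->count++ == 0) {
        /* 0 -> 1 transition: patch the branch site to the enabled code */
    }
}

void key_dec(struct key_sketch *k)
{
    if (--k->count == 0) {
        /* 1 -> 0 transition: patch the branch site back to disabled */
    }
}

A plain static_branch_enable(), by contrast, is idempotent and cannot be undone per-core, which is why the old code could leave the key stuck.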
+50-16
kernel/sched/fair.c
···
         local = 1;

     /*
-     * Retry task to preferred node migration periodically, in case it
-     * case it previously failed, or the scheduler moved us.
+     * Retry to migrate task to preferred node periodically, in case it
+     * previously failed, or the scheduler moved us.
      */
     if (time_after(jiffies, p->numa_migrate_retry)) {
         task_numa_placement(p);
···
     return target;
 }

-static unsigned long cpu_util_wake(int cpu, struct task_struct *p);
+static unsigned long cpu_util_without(int cpu, struct task_struct *p);

-static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
+static unsigned long capacity_spare_without(int cpu, struct task_struct *p)
 {
-    return max_t(long, capacity_of(cpu) - cpu_util_wake(cpu, p), 0);
+    return max_t(long, capacity_of(cpu) - cpu_util_without(cpu, p), 0);
 }

 /*
···

         avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);

-        spare_cap = capacity_spare_wake(i, p);
+        spare_cap = capacity_spare_without(i, p);

         if (spare_cap > max_spare_cap)
             max_spare_cap = spare_cap;
···
         return prev_cpu;

     /*
-     * We need task's util for capacity_spare_wake, sync it up to prev_cpu's
-     * last_update_time.
+     * We need task's util for capacity_spare_without, sync it up to
+     * prev_cpu's last_update_time.
      */
     if (!(sd_flag & SD_BALANCE_FORK))
         sync_entity_load_avg(&p->se);
···
 }

 /*
- * cpu_util_wake: Compute CPU utilization with any contributions from
- * the waking task p removed.
+ * cpu_util_without: compute cpu utilization without any contributions from *p
+ * @cpu: the CPU which utilization is requested
+ * @p: the task which utilization should be discounted
+ *
+ * The utilization of a CPU is defined by the utilization of tasks currently
+ * enqueued on that CPU as well as tasks which are currently sleeping after an
+ * execution on that CPU.
+ *
+ * This method returns the utilization of the specified CPU by discounting the
+ * utilization of the specified task, whenever the task is currently
+ * contributing to the CPU utilization.
  */
-static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
+static unsigned long cpu_util_without(int cpu, struct task_struct *p)
 {
     struct cfs_rq *cfs_rq;
     unsigned int util;
···
     cfs_rq = &cpu_rq(cpu)->cfs;
     util = READ_ONCE(cfs_rq->avg.util_avg);

-    /* Discount task's blocked util from CPU's util */
+    /* Discount task's util from CPU's util */
     util -= min_t(unsigned int, util, task_util(p));

     /*
···
      *   a) if *p is the only task sleeping on this CPU, then:
      *      cpu_util (== task_util) > util_est (== 0)
      *      and thus we return:
-     *      cpu_util_wake = (cpu_util - task_util) = 0
+     *      cpu_util_without = (cpu_util - task_util) = 0
      *
      *   b) if other tasks are SLEEPING on this CPU, which is now exiting
      *      IDLE, then:
      *      cpu_util >= task_util
      *      cpu_util > util_est (== 0)
      *      and thus we discount *p's blocked utilization to return:
-     *      cpu_util_wake = (cpu_util - task_util) >= 0
+     *      cpu_util_without = (cpu_util - task_util) >= 0
      *
      *   c) if other tasks are RUNNABLE on that CPU and
      *      util_est > cpu_util
···
      * covered by the following code when estimated utilization is
      * enabled.
      */
-    if (sched_feat(UTIL_EST))
-        util = max(util, READ_ONCE(cfs_rq->avg.util_est.enqueued));
+    if (sched_feat(UTIL_EST)) {
+        unsigned int estimated =
+            READ_ONCE(cfs_rq->avg.util_est.enqueued);
+
+        /*
+         * Despite the following checks we still have a small window
+         * for a possible race, when an execl's select_task_rq_fair()
+         * races with LB's detach_task():
+         *
+         *   detach_task()
+         *     p->on_rq = TASK_ON_RQ_MIGRATING;
+         *     ---------------------------------- A
+         *     deactivate_task()                  \
+         *       dequeue_task()                    + RaceTime
+         *         util_est_dequeue()             /
+         *     ---------------------------------- B
+         *
+         * The additional check on "current == p" it's required to
+         * properly fix the execl regression and it helps in further
+         * reducing the chances for the above race.
+         */
+        if (unlikely(task_on_rq_queued(p) || current == p)) {
+            estimated -= min_t(unsigned int, estimated,
+                       (_task_util_est(p) | UTIL_AVG_UNCHANGED));
+        }
+        util = max(util, estimated);
+    }

     /*
      * Utilization (estimated) can exceed the CPU capacity, thus let's
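For case (a) in the comment above, a toy calculation of the discount (hypothetical numbers, plain C rather than kernel code):

/* Case (a): *p is the only (sleeping) task on the CPU. */
unsigned int cpu_util_without_sketch(void)
{
    unsigned int util = 512;        /* cfs_rq->avg.util_avg */
    unsigned int task_util = 512;   /* the waking task's contribution */
    unsigned int estimated = 0;     /* util_est.enqueued: p not enqueued */

    util -= (task_util < util) ? task_util : util;  /* discount -> 0 */
    return (util > estimated) ? util : estimated;   /* max() -> 0 */
}

The rename from cpu_util_wake() to cpu_util_without() reflects that the function is now usable for any task whose contribution should be discounted, not only a waking one.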
+44-31
kernel/sched/psi.c
···

 static int psi_bug __read_mostly;

-bool psi_disabled __read_mostly;
-core_param(psi_disabled, psi_disabled, bool, 0644);
+DEFINE_STATIC_KEY_FALSE(psi_disabled);
+
+#ifdef CONFIG_PSI_DEFAULT_DISABLED
+bool psi_enable;
+#else
+bool psi_enable = true;
+#endif
+static int __init setup_psi(char *str)
+{
+    return kstrtobool(str, &psi_enable) == 0;
+}
+__setup("psi=", setup_psi);

 /* Running averages - we need to be higher-res than loadavg */
 #define PSI_FREQ    (2*HZ+1)    /* 2 sec intervals */
···

 void __init psi_init(void)
 {
-    if (psi_disabled)
+    if (!psi_enable) {
+        static_branch_enable(&psi_disabled);
         return;
+    }

     psi_period = jiffies_to_nsecs(PSI_FREQ);
     group_init(&psi_system);
···
     struct rq_flags rf;
     struct rq *rq;

-    if (psi_disabled)
+    if (static_branch_likely(&psi_disabled))
         return;

     *flags = current->flags & PF_MEMSTALL;
···
     struct rq_flags rf;
     struct rq *rq;

-    if (psi_disabled)
+    if (static_branch_likely(&psi_disabled))
         return;

     if (*flags)
···
 #ifdef CONFIG_CGROUPS
 int psi_cgroup_alloc(struct cgroup *cgroup)
 {
-    if (psi_disabled)
+    if (static_branch_likely(&psi_disabled))
         return 0;

     cgroup->psi.pcpu = alloc_percpu(struct psi_group_cpu);
···

 void psi_cgroup_free(struct cgroup *cgroup)
 {
-    if (psi_disabled)
+    if (static_branch_likely(&psi_disabled))
         return;

     cancel_delayed_work_sync(&cgroup->psi.clock_work);
···
  */
 void cgroup_move_task(struct task_struct *task, struct css_set *to)
 {
-    bool move_psi = !psi_disabled;
     unsigned int task_flags = 0;
     struct rq_flags rf;
     struct rq *rq;

-    if (move_psi) {
-        rq = task_rq_lock(task, &rf);
-
-        if (task_on_rq_queued(task))
-            task_flags = TSK_RUNNING;
-        else if (task->in_iowait)
-            task_flags = TSK_IOWAIT;
-
-        if (task->flags & PF_MEMSTALL)
-            task_flags |= TSK_MEMSTALL;
-
-        if (task_flags)
-            psi_task_change(task, task_flags, 0);
+    if (static_branch_likely(&psi_disabled)) {
+        /*
+         * Lame to do this here, but the scheduler cannot be locked
+         * from the outside, so we move cgroups from inside sched/.
+         */
+        rcu_assign_pointer(task->cgroups, to);
+        return;
     }

-    /*
-     * Lame to do this here, but the scheduler cannot be locked
-     * from the outside, so we move cgroups from inside sched/.
-     */
+    rq = task_rq_lock(task, &rf);
+
+    if (task_on_rq_queued(task))
+        task_flags = TSK_RUNNING;
+    else if (task->in_iowait)
+        task_flags = TSK_IOWAIT;
+
+    if (task->flags & PF_MEMSTALL)
+        task_flags |= TSK_MEMSTALL;
+
+    if (task_flags)
+        psi_task_change(task, task_flags, 0);
+
+    /* See comment above */
     rcu_assign_pointer(task->cgroups, to);

-    if (move_psi) {
-        if (task_flags)
-            psi_task_change(task, 0, task_flags);
+    if (task_flags)
+        psi_task_change(task, 0, task_flags);

-        task_rq_unlock(rq, task, &rf);
-    }
+    task_rq_unlock(rq, task, &rf);
 }
 #endif /* CONFIG_CGROUPS */
···
 {
     int full;

-    if (psi_disabled)
+    if (static_branch_likely(&psi_disabled))
         return -EOPNOTSUPP;

     update_stats(group);

kernel/sched/stats.h
···
 {
     int clear = 0, set = TSK_RUNNING;

-    if (psi_disabled)
+    if (static_branch_likely(&psi_disabled))
         return;

     if (!wakeup || p->sched_psi_wake_requeue) {
···
 {
     int clear = TSK_RUNNING, set = 0;

-    if (psi_disabled)
+    if (static_branch_likely(&psi_disabled))
         return;

     if (!sleep) {
···

 static inline void psi_ttwu_dequeue(struct task_struct *p)
 {
-    if (psi_disabled)
+    if (static_branch_likely(&psi_disabled))
         return;
     /*
      * Is the task being migrated during a wakeup? Make sure to
···

 static inline void psi_task_tick(struct rq *rq)
 {
-    if (psi_disabled)
+    if (static_branch_likely(&psi_disabled))
         return;

     if (unlikely(rq->curr->flags & PF_MEMSTALL))
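Taken together, these hunks replace a runtime bool read on every scheduler event with a decision made once at boot behind a static key, so the hot-path checks compile down to a patchable branch. A condensed sketch of the pattern they adopt ("feature" is a hypothetical name standing in for psi):

/* Boot-parameter + static-key gating pattern; hypothetical names. */
#include <linux/init.h>
#include <linux/jump_label.h>
#include <linux/kernel.h>

DEFINE_STATIC_KEY_FALSE(feature_disabled);

static bool feature_enable = true;              /* like psi_enable */

static int __init setup_feature(char *str)
{
    /* parsed from "feature=0|1|on|off" on the kernel command line */
    return kstrtobool(str, &feature_enable) == 0;
}
__setup("feature=", setup_feature);

void __init feature_init(void)
{
    if (!feature_enable) {
        /* decided once at boot; hot paths become a patched branch */
        static_branch_enable(&feature_disabled);
        return;
    }
    /* ... normal initialisation ... */
}

void feature_hot_path(void)
{
    if (static_branch_likely(&feature_disabled))
        return;
    /* ... per-event accounting ... */
}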
+3-1
kernel/stackleak.c
···
  */

 #include <linux/stackleak.h>
+#include <linux/kprobes.h>

 #ifdef CONFIG_STACKLEAK_RUNTIME_DISABLE
 #include <linux/jump_label.h>
···
 #define skip_erasing()  false
 #endif /* CONFIG_STACKLEAK_RUNTIME_DISABLE */

-asmlinkage void stackleak_erase(void)
+asmlinkage void notrace stackleak_erase(void)
 {
     /* It would be nice not to have 'kstack_ptr' and 'boundary' on stack */
     unsigned long kstack_ptr = current->lowest_stack;
···
     /* Reset the 'lowest_stack' value for the next syscall */
     current->lowest_stack = current_top_of_stack() - THREAD_SIZE/64;
 }
+NOKPROBE_SYMBOL(stackleak_erase);

 void __used stackleak_track_stack(void)
 {
-3
kernel/time/posix-cpu-timers.c
···
     struct task_cputime cputime;
     unsigned long soft;

-    if (dl_task(tsk))
-        check_dl_overrun(tsk);
-
     /*
      * If cputimer is not running, then there are no active
      * process wide timers (POSIX 1.b, itimers, RLIMIT_CPU).
+5-3
kernel/trace/bpf_trace.c
···
             i++;
         } else if (fmt[i] == 'p' || fmt[i] == 's') {
             mod[fmt_cnt]++;
-            i++;
-            if (!isspace(fmt[i]) && !ispunct(fmt[i]) && fmt[i] != 0)
+            /* disallow any further format extensions */
+            if (fmt[i + 1] != 0 &&
+                !isspace(fmt[i + 1]) &&
+                !ispunct(fmt[i + 1]))
                 return -EINVAL;
             fmt_cnt++;
-            if (fmt[i - 1] == 's') {
+            if (fmt[i] == 's') {
                 if (str_seen)
                     /* allow only one '%s' per fmt string */
                     return -EINVAL;
+5-2
kernel/trace/ftrace.c
···
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 static int profile_graph_entry(struct ftrace_graph_ent *trace)
 {
-    int index = trace->depth;
+    int index = current->curr_ret_stack;

     function_profile_call(trace->func, 0, NULL, NULL);

···
     if (!fgraph_graph_time) {
         int index;

-        index = trace->depth;
+        index = current->curr_ret_stack;

         /* Append this call time to the parent time to subtract */
         if (index)
···
         atomic_set(&t->tracing_graph_pause, 0);
         atomic_set(&t->trace_overrun, 0);
         t->curr_ret_stack = -1;
+        t->curr_ret_depth = -1;
         /* Make sure the tasks see the -1 first: */
         smp_wmb();
         t->ret_stack = ret_stack_list[start++];
···
 void ftrace_graph_init_idle_task(struct task_struct *t, int cpu)
 {
     t->curr_ret_stack = -1;
+    t->curr_ret_depth = -1;
     /*
      * The idle task has no parent, it either has its own
      * stack or no stack at all.
···
     /* Make sure we do not use the parent ret_stack */
     t->ret_stack = NULL;
     t->curr_ret_stack = -1;
+    t->curr_ret_depth = -1;

     if (ftrace_graph_active) {
         struct ftrace_ret_stack *ret_stack;
+54-3
kernel/trace/trace.h
···
      * can only be modified by current, we can reuse trace_recursion.
      */
     TRACE_IRQ_BIT,
+
+    /* Set if the function is in the set_graph_function file */
+    TRACE_GRAPH_BIT,
+
+    /*
+     * In the very unlikely case that an interrupt came in
+     * at a start of graph tracing, and we want to trace
+     * the function in that interrupt, the depth can be greater
+     * than zero, because of the preempted start of a previous
+     * trace. In an even more unlikely case, depth could be 2
+     * if a softirq interrupted the start of graph tracing,
+     * followed by an interrupt preempting a start of graph
+     * tracing in the softirq, and depth can even be 3
+     * if an NMI came in at the start of an interrupt function
+     * that preempted a softirq start of a function that
+     * preempted normal context!!!! Luckily, it can't be
+     * greater than 3, so the next two bits are a mask
+     * of what the depth is when we set TRACE_GRAPH_BIT
+     */
+
+    TRACE_GRAPH_DEPTH_START_BIT,
+    TRACE_GRAPH_DEPTH_END_BIT,
 };

 #define trace_recursion_set(bit)    do { (current)->trace_recursion |= (1<<(bit)); } while (0)
 #define trace_recursion_clear(bit)  do { (current)->trace_recursion &= ~(1<<(bit)); } while (0)
 #define trace_recursion_test(bit)   ((current)->trace_recursion & (1<<(bit)))
+
+#define trace_recursion_depth() \
+    (((current)->trace_recursion >> TRACE_GRAPH_DEPTH_START_BIT) & 3)
+#define trace_recursion_set_depth(depth) \
+    do {                                \
+        current->trace_recursion &=         \
+            ~(3 << TRACE_GRAPH_DEPTH_START_BIT);    \
+        current->trace_recursion |=         \
+            ((depth) & 3) << TRACE_GRAPH_DEPTH_START_BIT;   \
+    } while (0)

 #define TRACE_CONTEXT_BITS  4

···
 extern struct ftrace_hash *ftrace_graph_hash;
 extern struct ftrace_hash *ftrace_graph_notrace_hash;

-static inline int ftrace_graph_addr(unsigned long addr)
+static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
 {
+    unsigned long addr = trace->func;
     int ret = 0;

     preempt_disable_notrace();
···
     }

     if (ftrace_lookup_ip(ftrace_graph_hash, addr)) {
+
+        /*
+         * This needs to be cleared on the return functions
+         * when the depth is zero.
+         */
+        trace_recursion_set(TRACE_GRAPH_BIT);
+        trace_recursion_set_depth(trace->depth);
+
         /*
          * If no irqs are to be traced, but a set_graph_function
          * is set, and called by an interrupt handler, we still
···
     return ret;
 }

+static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
+{
+    if (trace_recursion_test(TRACE_GRAPH_BIT) &&
+        trace->depth == trace_recursion_depth())
+        trace_recursion_clear(TRACE_GRAPH_BIT);
+}
+
 static inline int ftrace_graph_notrace_addr(unsigned long addr)
 {
     int ret = 0;
···
     return ret;
 }
 #else
-static inline int ftrace_graph_addr(unsigned long addr)
+static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
 {
     return 1;
 }
···
 {
     return 0;
 }
+static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
+{ }
 #endif /* CONFIG_DYNAMIC_FTRACE */

 extern unsigned int fgraph_max_depth;
···
 static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace)
 {
     /* trace it when it is-nested-in or is a function enabled. */
-    return !(trace->depth || ftrace_graph_addr(trace->func)) ||
+    return !(trace_recursion_test(TRACE_GRAPH_BIT) ||
+         ftrace_graph_addr(trace)) ||
            (trace->depth < 0) ||
            (fgraph_max_depth && trace->depth >= fgraph_max_depth);
 }
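The saved depth lives in just two recursion bits, so it saturates at 3, matching the comment's worst case (normal context preempted by a softirq, an interrupt, and an NMI). A quick user-space illustration of the mask arithmetic (the bit position is hypothetical, standing in for TRACE_GRAPH_DEPTH_START_BIT):

#include <stdio.h>

#define DEPTH_START_BIT 9       /* hypothetical bit position */

static unsigned long recursion;

static void set_depth(int depth)
{
    recursion &= ~(3UL << DEPTH_START_BIT);
    recursion |= ((unsigned long)depth & 3) << DEPTH_START_BIT;
}

static int get_depth(void)
{
    return (recursion >> DEPTH_START_BIT) & 3;
}

int main(void)
{
    set_depth(3);                   /* NMI -> irq -> softirq worst case */
    printf("%d\n", get_depth());    /* prints 3; values are masked to 0..3 */
    return 0;
}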
+42-11
kernel/trace/trace_functions_graph.c
···
             struct trace_seq *s, u32 flags);

 /* Add a function return address to the trace stack on thread info.*/
-int
-ftrace_push_return_trace(unsigned long ret, unsigned long func, int *depth,
+static int
+ftrace_push_return_trace(unsigned long ret, unsigned long func,
              unsigned long frame_pointer, unsigned long *retp)
 {
     unsigned long long calltime;
···
 #ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
     current->ret_stack[index].retp = retp;
 #endif
-    *depth = current->curr_ret_stack;
+    return 0;
+}
+
+int function_graph_enter(unsigned long ret, unsigned long func,
+             unsigned long frame_pointer, unsigned long *retp)
+{
+    struct ftrace_graph_ent trace;
+
+    trace.func = func;
+    trace.depth = ++current->curr_ret_depth;
+
+    if (ftrace_push_return_trace(ret, func,
+                     frame_pointer, retp))
+        goto out;
+
+    /* Only trace if the calling function expects to */
+    if (!ftrace_graph_entry(&trace))
+        goto out_ret;

     return 0;
+ out_ret:
+    current->curr_ret_stack--;
+ out:
+    current->curr_ret_depth--;
+    return -EBUSY;
 }

 /* Retrieve a function return address to the trace stack on thread info.*/
···
     trace->func = current->ret_stack[index].func;
     trace->calltime = current->ret_stack[index].calltime;
     trace->overrun = atomic_read(&current->trace_overrun);
-    trace->depth = index;
+    trace->depth = current->curr_ret_depth--;
+    /*
+     * We still want to trace interrupts coming in if
+     * max_depth is set to 1. Make sure the decrement is
+     * seen before ftrace_graph_return.
+     */
+    barrier();
 }

 /*
···

     ftrace_pop_return_trace(&trace, &ret, frame_pointer);
     trace.rettime = trace_clock_local();
+    ftrace_graph_return(&trace);
+    /*
+     * The ftrace_graph_return() may still access the current
+     * ret_stack structure, we need to make sure the update of
+     * curr_ret_stack is after that.
+     */
     barrier();
     current->curr_ret_stack--;
     /*
···
         current->curr_ret_stack += FTRACE_NOTRACE_DEPTH;
         return ret;
     }
-
-    /*
-     * The trace should run after decrementing the ret counter
-     * in case an interrupt were to come in. We don't want to
-     * lose the interrupt if max_depth is set.
-     */
-    ftrace_graph_return(&trace);

     if (unlikely(!ret)) {
         ftrace_graph_stop();
···
     int cpu;
     int pc;

+    ftrace_graph_addr_finish(trace);
+
     local_irq_save(flags);
     cpu = raw_smp_processor_id();
     data = per_cpu_ptr(tr->trace_buffer.data, cpu);
···

 static void trace_graph_thresh_return(struct ftrace_graph_ret *trace)
 {
+    ftrace_graph_addr_finish(trace);
+
     if (tracing_thresh &&
         (trace->rettime - trace->calltime < tracing_thresh))
         return;
kernel/trace/trace_probe.c
···
     if (code[1].op != FETCH_OP_IMM)
         return -EINVAL;

-    tmp = strpbrk("+-", code->data);
+    tmp = strpbrk(code->data, "+-");
     if (tmp)
         c = *tmp;
     ret = traceprobe_split_symbol_offset(code->data,
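strpbrk(s, accept) scans its first argument for the first byte that appears in the second, so the swapped call searched the two-byte literal "+-" instead of the symbol string, and the returned pointer (when any) pointed into the literal rather than at the offset inside code->data. A small user-space demonstration with a hypothetical input:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *data = "vfs_read+0x10";
    const char *hit;

    /* correct: scan the symbol string for a '+' or '-' offset marker */
    hit = strpbrk(data, "+-");
    printf("correct: %s\n", hit ? hit : "(null)");  /* "+0x10" */

    /* swapped: scans the literal "+-" for bytes found in data, so any
     * match points into the literal, never into the symbol string */
    hit = strpbrk("+-", data);
    printf("swapped: %s\n", hit ? hit : "(null)");
    return 0;
}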
+2
kernel/trace/trace_sched_wakeup.c
···
     unsigned long flags;
     int pc;

+    ftrace_graph_addr_finish(trace);
+
     if (!func_prolog_preempt_disable(tr, &data, &pc))
         return;

+8-4
kernel/user_namespace.c
···
     if (!new_idmap_permitted(file, ns, cap_setid, &new_map))
         goto out;

-    ret = sort_idmaps(&new_map);
-    if (ret < 0)
-        goto out;
-
     ret = -EPERM;
     /* Map the lower ids from the parent user namespace to the
      * kernel global id space.
···

         e->lower_first = lower_first;
     }
+
+    /*
+     * If we want to use binary search for lookup, this clones the extent
+     * array and sorts both copies.
+     */
+    ret = sort_idmaps(&new_map);
+    if (ret < 0)
+        goto out;

     /* Install the map */
     if (new_map.nr_extents <= UID_GID_MAP_MAX_BASE_EXTENTS) {
lib/test_xarray.c
···
         XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_2));

         /* We should see two elements in the array */
+        rcu_read_lock();
         xas_for_each(&xas, entry, ULONG_MAX)
             seen++;
+        rcu_read_unlock();
         XA_BUG_ON(xa, seen != 2);

         /* One of which is marked */
         xas_set(&xas, 0);
         seen = 0;
+        rcu_read_lock();
         xas_for_each_marked(&xas, entry, ULONG_MAX, XA_MARK_0)
             seen++;
+        rcu_read_unlock();
         XA_BUG_ON(xa, seen != 1);
     }
     XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_0));
···
     xa_erase_index(xa, 12345678);
     XA_BUG_ON(xa, !xa_empty(xa));

+    /* And so does xa_insert */
+    xa_reserve(xa, 12345678, GFP_KERNEL);
+    XA_BUG_ON(xa, xa_insert(xa, 12345678, xa_mk_value(12345678), 0) != 0);
+    xa_erase_index(xa, 12345678);
+    XA_BUG_ON(xa, !xa_empty(xa));
+
     /* Can iterate through a reserved entry */
     xa_store_index(xa, 5, GFP_KERNEL);
     xa_reserve(xa, 6, GFP_KERNEL);
···
     XA_BUG_ON(xa, xa_load(xa, max) != NULL);
     XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL);

+    xas_lock(&xas);
     XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(min)) != xa_mk_value(index));
+    xas_unlock(&xas);
     XA_BUG_ON(xa, xa_load(xa, min) != xa_mk_value(min));
     XA_BUG_ON(xa, xa_load(xa, max - 1) != xa_mk_value(min));
     XA_BUG_ON(xa, xa_load(xa, max) != NULL);
···
     XA_STATE(xas, xa, index);
     xa_store_order(xa, index, order, xa_mk_value(0), GFP_KERNEL);

+    xas_lock(&xas);
     XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(1)) != xa_mk_value(0));
     XA_BUG_ON(xa, xas.xa_index != index);
     XA_BUG_ON(xa, xas_store(&xas, NULL) != xa_mk_value(1));
+    xas_unlock(&xas);
     XA_BUG_ON(xa, !xa_empty(xa));
 }
 #endif
···
     rcu_read_unlock();

     /* We can erase multiple values with a single store */
-    xa_store_order(xa, 0, 63, NULL, GFP_KERNEL);
+    xa_store_order(xa, 0, BITS_PER_LONG - 1, NULL, GFP_KERNEL);
     XA_BUG_ON(xa, !xa_empty(xa));

     /* Even when the first slot is empty but the others aren't */
···
     }
 }

-static noinline void check_find(struct xarray *xa)
+static noinline void check_find_1(struct xarray *xa)
 {
     unsigned long i, j, k;

···
         XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_0));
     }
     XA_BUG_ON(xa, !xa_empty(xa));
+}
+
+static noinline void check_find_2(struct xarray *xa)
+{
+    void *entry;
+    unsigned long i, j, index = 0;
+
+    xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) {
+        XA_BUG_ON(xa, true);
+    }
+
+    for (i = 0; i < 1024; i++) {
+        xa_store_index(xa, index, GFP_KERNEL);
+        j = 0;
+        index = 0;
+        xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) {
+            XA_BUG_ON(xa, xa_mk_value(index) != entry);
+            XA_BUG_ON(xa, index != j++);
+        }
+    }
+
+    xa_destroy(xa);
+}
+
+static noinline void check_find(struct xarray *xa)
+{
+    check_find_1(xa);
+    check_find_2(xa);
     check_multi_find(xa);
     check_multi_find_2(xa);
 }
···
         __check_store_range(xa, 4095 + i, 4095 + j);
         __check_store_range(xa, 4096 + i, 4096 + j);
         __check_store_range(xa, 123456 + i, 123456 + j);
-        __check_store_range(xa, UINT_MAX + i, UINT_MAX + j);
+        __check_store_range(xa, (1 << 24) + i, (1 << 24) + j);
     }
 }
···
     XA_STATE(xas, xa, 1 << order);

     xa_store_order(xa, 0, order, xa, GFP_KERNEL);
+    rcu_read_lock();
     xas_load(&xas);
     XA_BUG_ON(xa, xas.xa_node->count == 0);
     XA_BUG_ON(xa, xas.xa_node->count > (1 << order));
     XA_BUG_ON(xa, xas.xa_node->nr_values != 0);
+    rcu_read_unlock();

     xa_store_order(xa, 1 << order, order, xa_mk_value(1 << order),
             GFP_KERNEL);
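The locking fixes above all restore the same XArray rule: read-side walks (xas_load(), xas_for_each(), xas_for_each_marked()) must run under rcu_read_lock() unless the xa_lock is already held, while xas_store() always requires the xa_lock. A condensed sketch of the rule, with a hypothetical array:

#include <linux/xarray.h>

static DEFINE_XARRAY(sketch_xa);

void sketch_walk_and_store(void)
{
    XA_STATE(xas, &sketch_xa, 0);
    void *entry;

    rcu_read_lock();            /* read-side walk under RCU */
    xas_for_each(&xas, entry, ULONG_MAX) {
        /* look, don't touch */
    }
    rcu_read_unlock();

    xas_set(&xas, 0);
    xas_lock(&xas);             /* writers must hold the xa_lock */
    xas_store(&xas, xa_mk_value(1));
    xas_unlock(&xas);
}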
+1-2
lib/ubsan.c
···
 EXPORT_SYMBOL(__ubsan_handle_shift_out_of_bounds);


-void __noreturn
-__ubsan_handle_builtin_unreachable(struct unreachable_data *data)
+void __ubsan_handle_builtin_unreachable(struct unreachable_data *data)
 {
     unsigned long flags;

+60-79
lib/xarray.c
···
  * (see the xa_cmpxchg() implementation for an example).
  *
  * Return: If the slot already existed, returns the contents of this slot.
- * If the slot was newly created, returns NULL. If it failed to create the
- * slot, returns NULL and indicates the error in @xas.
+ * If the slot was newly created, returns %NULL. If it failed to create the
+ * slot, returns %NULL and indicates the error in @xas.
  */
 static void *xas_create(struct xa_state *xas)
 {
···
     XA_STATE(xas, xa, index);
     return xas_result(&xas, xas_store(&xas, NULL));
 }
-EXPORT_SYMBOL_GPL(__xa_erase);
+EXPORT_SYMBOL(__xa_erase);

 /**
- * xa_store() - Store this entry in the XArray.
+ * xa_erase() - Erase this entry from the XArray.
  * @xa: XArray.
- * @index: Index into array.
- * @entry: New entry.
- * @gfp: Memory allocation flags.
+ * @index: Index of entry.
  *
- * After this function returns, loads from this index will return @entry.
- * Storing into an existing multislot entry updates the entry of every index.
- * The marks associated with @index are unaffected unless @entry is %NULL.
+ * This function is the equivalent of calling xa_store() with %NULL as
+ * the third argument. The XArray does not need to allocate memory, so
+ * the user does not need to provide GFP flags.
  *
- * Context: Process context. Takes and releases the xa_lock. May sleep
- * if the @gfp flags permit.
- * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry
- * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation
- * failed.
+ * Context: Any context. Takes and releases the xa_lock.
+ * Return: The entry which used to be at this index.
  */
-void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
+void *xa_erase(struct xarray *xa, unsigned long index)
 {
-    XA_STATE(xas, xa, index);
-    void *curr;
+    void *entry;

-    if (WARN_ON_ONCE(xa_is_internal(entry)))
-        return XA_ERROR(-EINVAL);
+    xa_lock(xa);
+    entry = __xa_erase(xa, index);
+    xa_unlock(xa);

-    do {
-        xas_lock(&xas);
-        curr = xas_store(&xas, entry);
-        if (xa_track_free(xa) && entry)
-            xas_clear_mark(&xas, XA_FREE_MARK);
-        xas_unlock(&xas);
-    } while (xas_nomem(&xas, gfp));
-
-    return xas_result(&xas, curr);
+    return entry;
 }
-EXPORT_SYMBOL(xa_store);
+EXPORT_SYMBOL(xa_erase);

 /**
  * __xa_store() - Store this entry in the XArray.
···
     if (WARN_ON_ONCE(xa_is_internal(entry)))
         return XA_ERROR(-EINVAL);
+    if (xa_track_free(xa) && !entry)
+        entry = XA_ZERO_ENTRY;

     do {
         curr = xas_store(&xas, entry);
-        if (xa_track_free(xa) && entry)
+        if (xa_track_free(xa))
             xas_clear_mark(&xas, XA_FREE_MARK);
     } while (__xas_nomem(&xas, gfp));

···
 EXPORT_SYMBOL(__xa_store);

 /**
- * xa_cmpxchg() - Conditionally replace an entry in the XArray.
+ * xa_store() - Store this entry in the XArray.
  * @xa: XArray.
  * @index: Index into array.
- * @old: Old value to test against.
- * @entry: New value to place in array.
+ * @entry: New entry.
  * @gfp: Memory allocation flags.
  *
- * If the entry at @index is the same as @old, replace it with @entry.
- * If the return value is equal to @old, then the exchange was successful.
+ * After this function returns, loads from this index will return @entry.
+ * Storing into an existing multislot entry updates the entry of every index.
+ * The marks associated with @index are unaffected unless @entry is %NULL.
  *
- * Context: Process context. Takes and releases the xa_lock. May sleep
- * if the @gfp flags permit.
- * Return: The old value at this index or xa_err() if an error happened.
+ * Context: Any context. Takes and releases the xa_lock.
+ * May sleep if the @gfp flags permit.
+ * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry
+ * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation
+ * failed.
  */
-void *xa_cmpxchg(struct xarray *xa, unsigned long index,
-            void *old, void *entry, gfp_t gfp)
+void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
 {
-    XA_STATE(xas, xa, index);
     void *curr;

-    if (WARN_ON_ONCE(xa_is_internal(entry)))
-        return XA_ERROR(-EINVAL);
+    xa_lock(xa);
+    curr = __xa_store(xa, index, entry, gfp);
+    xa_unlock(xa);

-    do {
-        xas_lock(&xas);
-        curr = xas_load(&xas);
-        if (curr == XA_ZERO_ENTRY)
-            curr = NULL;
-        if (curr == old) {
-            xas_store(&xas, entry);
-            if (xa_track_free(xa) && entry)
-                xas_clear_mark(&xas, XA_FREE_MARK);
-        }
-        xas_unlock(&xas);
-    } while (xas_nomem(&xas, gfp));
-
-    return xas_result(&xas, curr);
+    return curr;
 }
-EXPORT_SYMBOL(xa_cmpxchg);
+EXPORT_SYMBOL(xa_store);

 /**
  * __xa_cmpxchg() - Store this entry in the XArray.
···
     if (WARN_ON_ONCE(xa_is_internal(entry)))
         return XA_ERROR(-EINVAL);
+    if (xa_track_free(xa) && !entry)
+        entry = XA_ZERO_ENTRY;

     do {
         curr = xas_load(&xas);
···
             curr = NULL;
         if (curr == old) {
             xas_store(&xas, entry);
-            if (xa_track_free(xa) && entry)
+            if (xa_track_free(xa))
                 xas_clear_mark(&xas, XA_FREE_MARK);
         }
     } while (__xas_nomem(&xas, gfp));
···
 EXPORT_SYMBOL(__xa_cmpxchg);

 /**
- * xa_reserve() - Reserve this index in the XArray.
+ * __xa_reserve() - Reserve this index in the XArray.
  * @xa: XArray.
  * @index: Index into array.
  * @gfp: Memory allocation flags.
···
  * Ensures there is somewhere to store an entry at @index in the array.
  * If there is already something stored at @index, this function does
  * nothing. If there was nothing there, the entry is marked as reserved.
- * Loads from @index will continue to see a %NULL pointer until a
- * subsequent store to @index.
+ * Loading from a reserved entry returns a %NULL pointer.
  *
  * If you do not use the entry that you have reserved, call xa_release()
  * or xa_erase() to free any unnecessary memory.
  *
- * Context: Process context. Takes and releases the xa_lock, IRQ or BH safe
- * if specified in XArray flags. May sleep if the @gfp flags permit.
+ * Context: Any context. Expects the xa_lock to be held on entry. May
+ * release the lock, sleep and reacquire the lock if the @gfp flags permit.
  * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
  */
-int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
+int __xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
 {
     XA_STATE(xas, xa, index);
-    unsigned int lock_type = xa_lock_type(xa);
     void *curr;

     do {
-        xas_lock_type(&xas, lock_type);
         curr = xas_load(&xas);
-        if (!curr)
+        if (!curr) {
             xas_store(&xas, XA_ZERO_ENTRY);
-        xas_unlock_type(&xas, lock_type);
-    } while (xas_nomem(&xas, gfp));
+            if (xa_track_free(xa))
+                xas_clear_mark(&xas, XA_FREE_MARK);
+        }
+    } while (__xas_nomem(&xas, gfp));

     return xas_error(&xas);
 }
-EXPORT_SYMBOL(xa_reserve);
+EXPORT_SYMBOL(__xa_reserve);

 #ifdef CONFIG_XARRAY_MULTI
 static void xas_set_range(struct xa_state *xas, unsigned long first,
···
     do {
         xas_lock(&xas);
         if (entry) {
-            unsigned int order = (last == ~0UL) ? 64 :
-                        ilog2(last + 1);
+            unsigned int order = BITS_PER_LONG;
+            if (last + 1)
+                order = __ffs(last + 1);
             xas_set_order(&xas, last, order);
             xas_create(&xas);
             if (xas_error(&xas))
···
  * @index: Index of entry.
  * @mark: Mark number.
  *
- * Attempting to set a mark on a NULL entry does not succeed.
+ * Attempting to set a mark on a %NULL entry does not succeed.
  *
  * Context: Any context. Expects xa_lock to be held on entry.
  */
···
     if (entry)
         xas_set_mark(&xas, mark);
 }
-EXPORT_SYMBOL_GPL(__xa_set_mark);
+EXPORT_SYMBOL(__xa_set_mark);

 /**
  * __xa_clear_mark() - Clear this mark on this entry while locked.
···
     if (entry)
         xas_clear_mark(&xas, mark);
 }
-EXPORT_SYMBOL_GPL(__xa_clear_mark);
+EXPORT_SYMBOL(__xa_clear_mark);

 /**
  * xa_get_mark() - Inquire whether this mark is set on this entry.
···
  * @index: Index of entry.
  * @mark: Mark number.
  *
- * Attempting to set a mark on a NULL entry does not succeed.
+ * Attempting to set a mark on a %NULL entry does not succeed.
  *
  * Context: Process context. Takes and releases the xa_lock.
  */
···
         entry = xas_find_marked(&xas, max, filter);
     else
         entry = xas_find(&xas, max);
+    if (xas.xa_node == XAS_BOUNDS)
+        break;
     if (xas.xa_shift) {
         if (xas.xa_index & ((1UL << xas.xa_shift) - 1))
             continue;
···
  *
  * The @filter may be an XArray mark value, in which case entries which are
  * marked with that mark will be copied. It may also be %XA_PRESENT, in
- * which case all entries which are not NULL will be copied.
+ * which case all entries which are not %NULL will be copied.
  *
  * The entries returned may not represent a snapshot of the XArray at a
  * moment in time. For example, if another thread stores to index 5, then
+9-4
mm/gup.c
···
  * @vma: vm_area_struct mapping @address
  * @address: virtual address to look up
  * @flags: flags modifying lookup behaviour
- * @page_mask: on output, *page_mask is set according to the size of the page
+ * @ctx: contains dev_pagemap for %ZONE_DEVICE memory pinning and a
+ *       pointer to output page_mask
  *
  * @flags can have FOLL_ flags set, defined in <linux/mm.h>
  *
- * Returns the mapped (struct page *), %NULL if no mapping exists, or
+ * When getting pages from ZONE_DEVICE memory, the @ctx->pgmap caches
+ * the device's dev_pagemap metadata to avoid repeating expensive lookups.
+ *
+ * On output, the @ctx->page_mask is set according to the size of the page.
+ *
+ * Return: the mapped (struct page *), %NULL if no mapping exists, or
  * an error pointer if there is a mapping to something not represented
  * by a page descriptor (see also vm_normal_page()).
  */
···
         if (!vma || start >= vma->vm_end) {
             vma = find_extend_vma(mm, start);
             if (!vma && in_gate_area(mm, start)) {
-                int ret;
                 ret = get_gate_page(mm, start & PAGE_MASK,
                         gup_flags, &vma,
                         pages ? &pages[i] : NULL);
                 if (ret)
-                    return i ? : ret;
+                    goto out;
                 ctx.page_mask = 0;
                 goto next_page;
             }
+25-18
mm/huge_memory.c
···
     }
 }

-static void freeze_page(struct page *page)
+static void unmap_page(struct page *page)
 {
     enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS |
         TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
···
     VM_BUG_ON_PAGE(!unmap_success, page);
 }

-static void unfreeze_page(struct page *page)
+static void remap_page(struct page *page)
 {
     int i;
     if (PageTransHuge(page)) {
···
              (1L << PG_unevictable) |
              (1L << PG_dirty)));

+    /* ->mapping in first tail page is compound_mapcount */
+    VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
+            page_tail);
+    page_tail->mapping = head->mapping;
+    page_tail->index = head->index + tail;
+
     /* Page flags must be visible before we make the page non-compound. */
     smp_wmb();

···
     if (page_is_idle(head))
         set_page_idle(page_tail);

-    /* ->mapping in first tail page is compound_mapcount */
-    VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
-            page_tail);
-    page_tail->mapping = head->mapping;
-
-    page_tail->index = head->index + tail;
     page_cpupid_xchg_last(page_tail, page_cpupid_last(head));

     /*
···
 }

 static void __split_huge_page(struct page *page, struct list_head *list,
-        unsigned long flags)
+        pgoff_t end, unsigned long flags)
 {
     struct page *head = compound_head(page);
     struct zone *zone = page_zone(head);
     struct lruvec *lruvec;
-    pgoff_t end = -1;
     int i;

     lruvec = mem_cgroup_page_lruvec(head, zone->zone_pgdat);

     /* complete memcg works before add pages to LRU */
     mem_cgroup_split_huge_fixup(head);
-
-    if (!PageAnon(page))
-        end = DIV_ROUND_UP(i_size_read(head->mapping->host), PAGE_SIZE);

     for (i = HPAGE_PMD_NR - 1; i >= 1; i--) {
         __split_huge_page_tail(head, i, lruvec, list);
···

     spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);

-    unfreeze_page(head);
+    remap_page(head);

     for (i = 0; i < HPAGE_PMD_NR; i++) {
         struct page *subpage = head + i;
···
     int count, mapcount, extra_pins, ret;
     bool mlocked;
     unsigned long flags;
+    pgoff_t end;

     VM_BUG_ON_PAGE(is_huge_zero_page(page), page);
     VM_BUG_ON_PAGE(!PageLocked(page), page);
···
         ret = -EBUSY;
         goto out;
     }
+    end = -1;
     mapping = NULL;
     anon_vma_lock_write(anon_vma);
 } else {
···

         anon_vma = NULL;
         i_mmap_lock_read(mapping);
+
+        /*
+         *__split_huge_page() may need to trim off pages beyond EOF:
+         * but on 32-bit, i_size_read() takes an irq-unsafe seqlock,
+         * which cannot be nested inside the page tree lock. So note
+         * end now: i_size itself may be changed at any moment, but
+         * head page lock is good enough to serialize the trimming.
+         */
+        end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE);
     }

     /*
-     * Racy check if we can split the page, before freeze_page() will
+     * Racy check if we can split the page, before unmap_page() will
      * split PMDs
      */
     if (!can_split_huge_page(head, &extra_pins)) {
···
     }

     mlocked = PageMlocked(page);
-    freeze_page(head);
+    unmap_page(head);
     VM_BUG_ON_PAGE(compound_mapcount(head), head);

     /* Make sure the page is not on per-CPU pagevec as it takes pin */
···
         if (mapping)
             __dec_node_page_state(page, NR_SHMEM_THPS);
         spin_unlock(&pgdata->split_queue_lock);
-        __split_huge_page(page, list, flags);
+        __split_huge_page(page, list, end, flags);
         if (PageSwapCache(head)) {
             swp_entry_t entry = { .val = page_private(head) };

···
 fail:    if (mapping)
             xa_unlock(&mapping->i_pages);
         spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
-        unfreeze_page(head);
+        remap_page(head);
         ret = -EBUSY;
     }

+20-5
mm/hugetlb.c
···
 int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
                 struct vm_area_struct *vma)
 {
-    pte_t *src_pte, *dst_pte, entry;
+    pte_t *src_pte, *dst_pte, entry, dst_entry;
     struct page *ptepage;
     unsigned long addr;
     int cow;
···
             break;
         }

-        /* If the pagetables are shared don't copy or take references */
-        if (dst_pte == src_pte)
+        /*
+         * If the pagetables are shared don't copy or take references.
+         * dst_pte == src_pte is the common case of src/dest sharing.
+         *
+         * However, src could have 'unshared' and dst shares with
+         * another vma. If dst_pte !none, this implies sharing.
+         * Check here before taking page table lock, and once again
+         * after taking the lock below.
+         */
+        dst_entry = huge_ptep_get(dst_pte);
+        if ((dst_pte == src_pte) || !huge_pte_none(dst_entry))
             continue;

         dst_ptl = huge_pte_lock(h, dst, dst_pte);
         src_ptl = huge_pte_lockptr(h, src, src_pte);
         spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
         entry = huge_ptep_get(src_pte);
-        if (huge_pte_none(entry)) { /* skip none entry */
+        dst_entry = huge_ptep_get(dst_pte);
+        if (huge_pte_none(entry) || !huge_pte_none(dst_entry)) {
+            /*
+             * Skip if src entry none. Also, skip in the
+             * unlikely case dst entry !none as this implies
+             * sharing with another vma.
+             */
             ;
         } else if (unlikely(is_hugetlb_entry_migration(entry) ||
                     is_hugetlb_entry_hwpoisoned(entry))) {
···

         /* fallback to copy_from_user outside mmap_sem */
         if (unlikely(ret)) {
-            ret = -EFAULT;
+            ret = -ENOENT;
             *pagep = page;
             /* don't free the page */
             goto out;
+82-60
mm/khugepaged.c
···
  * collapse_shmem - collapse small tmpfs/shmem pages into huge one.
  *
  * Basic scheme is simple, details are more complex:
- *  - allocate and freeze a new huge page;
+ *  - allocate and lock a new huge page;
  *  - scan page cache replacing old pages with the new one
  *    + swap in pages if necessary;
  *    + fill in gaps;
···
  *  - if replacing succeeds:
  *    + copy data over;
  *    + free old pages;
- *    + unfreeze huge page;
+ *    + unlock huge page;
  *  - if replacing failed;
  *    + put all pages back and unfreeze them;
  *    + restore gaps in the page cache;
- *    + free huge page;
+ *    + unlock and free huge page;
  */
 static void collapse_shmem(struct mm_struct *mm,
         struct address_space *mapping, pgoff_t start,
···
         goto out;
     }

-    new_page->index = start;
-    new_page->mapping = mapping;
-    __SetPageSwapBacked(new_page);
-    __SetPageLocked(new_page);
-    BUG_ON(!page_ref_freeze(new_page, 1));
-
-    /*
-     * At this point the new_page is 'frozen' (page_count() is zero),
-     * locked and not up-to-date. It's safe to insert it into the page
-     * cache, because nobody would be able to map it or use it in other
-     * way until we unfreeze it.
-     */
-
     /* This will be less messy when we use multi-index entries */
     do {
         xas_lock_irq(&xas);
···
         if (!xas_error(&xas))
             break;
         xas_unlock_irq(&xas);
-        if (!xas_nomem(&xas, GFP_KERNEL))
+        if (!xas_nomem(&xas, GFP_KERNEL)) {
+            mem_cgroup_cancel_charge(new_page, memcg, true);
+            result = SCAN_FAIL;
             goto out;
+        }
     } while (1);
+
+    __SetPageLocked(new_page);
+    __SetPageSwapBacked(new_page);
+    new_page->index = start;
+    new_page->mapping = mapping;
+
+    /*
+     * At this point the new_page is locked and not up-to-date.
+     * It's safe to insert it into the page cache, because nobody would
+     * be able to map it or use it in another way until we unlock it.
+     */

     xas_set(&xas, start);
     for (index = start; index < end; index++) {
···

         VM_BUG_ON(index != xas.xa_index);
         if (!page) {
+            /*
+             * Stop if extent has been truncated or hole-punched,
+             * and is now completely empty.
+             */
+            if (index == start) {
+                if (!xas_next_entry(&xas, end - 1)) {
+                    result = SCAN_TRUNCATED;
+                    goto xa_locked;
+                }
+                xas_set(&xas, index);
+            }
             if (!shmem_charge(mapping->host, 1)) {
                 result = SCAN_FAIL;
-                break;
+                goto xa_locked;
             }
             xas_store(&xas, new_page + (index % HPAGE_PMD_NR));
             nr_none++;
···
                 result = SCAN_FAIL;
                 goto xa_unlocked;
             }
-            xas_lock_irq(&xas);
-            xas_set(&xas, index);
         } else if (trylock_page(page)) {
             get_page(page);
+            xas_unlock_irq(&xas);
         } else {
             result = SCAN_PAGE_LOCK;
-            break;
+            goto xa_locked;
         }

         /*
···
          */
         VM_BUG_ON_PAGE(!PageLocked(page), page);
         VM_BUG_ON_PAGE(!PageUptodate(page), page);
-        VM_BUG_ON_PAGE(PageTransCompound(page), page);
+
+        /*
+         * If file was truncated then extended, or hole-punched, before
+         * we locked the first page, then a THP might be there already.
+         */
+        if (PageTransCompound(page)) {
+            result = SCAN_PAGE_COMPOUND;
+            goto out_unlock;
+        }

         if (page_mapping(page) != mapping) {
             result = SCAN_TRUNCATED;
             goto out_unlock;
         }
-        xas_unlock_irq(&xas);

         if (isolate_lru_page(page)) {
             result = SCAN_DEL_PAGE_LRU;
-            goto out_isolate_failed;
+            goto out_unlock;
         }

         if (page_mapped(page))
···
          */
         if (!page_ref_freeze(page, 3)) {
             result = SCAN_PAGE_COUNT;
-            goto out_lru;
+            xas_unlock_irq(&xas);
+            putback_lru_page(page);
+            goto out_unlock;
         }

         /*
···
         /* Finally, replace with the new page. */
         xas_store(&xas, new_page + (index % HPAGE_PMD_NR));
         continue;
-out_lru:
-        xas_unlock_irq(&xas);
-        putback_lru_page(page);
-out_isolate_failed:
-        unlock_page(page);
-        put_page(page);
-        goto xa_unlocked;
 out_unlock:
         unlock_page(page);
         put_page(page);
-        break;
+        goto xa_unlocked;
     }
-    xas_unlock_irq(&xas);

+    __inc_node_page_state(new_page, NR_SHMEM_THPS);
+    if (nr_none) {
+        struct zone *zone = page_zone(new_page);
+
+        __mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none);
+        __mod_node_page_state(zone->zone_pgdat, NR_SHMEM, nr_none);
+    }
+
+xa_locked:
+    xas_unlock_irq(&xas);
 xa_unlocked:
+
     if (result == SCAN_SUCCEED) {
         struct page *page, *tmp;
-        struct zone *zone = page_zone(new_page);

         /*
          * Replacing old pages with new one has succeeded, now we
          * need to copy the content and free the old pages.
          */
+        index = start;
         list_for_each_entry_safe(page, tmp, &pagelist, lru) {
+            while (index < page->index) {
+                clear_highpage(new_page + (index % HPAGE_PMD_NR));
+                index++;
+            }
             copy_highpage(new_page + (page->index % HPAGE_PMD_NR),
                     page);
             list_del(&page->lru);
-            unlock_page(page);
-            page_ref_unfreeze(page, 1);
             page->mapping = NULL;
+            page_ref_unfreeze(page, 1);
             ClearPageActive(page);
             ClearPageUnevictable(page);
+            unlock_page(page);
             put_page(page);
+            index++;
+        }
+        while (index < end) {
+            clear_highpage(new_page + (index % HPAGE_PMD_NR));
+            index++;
         }

-        local_irq_disable();
-        __inc_node_page_state(new_page, NR_SHMEM_THPS);
-        if (nr_none) {
-            __mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none);
-            __mod_node_page_state(zone->zone_pgdat, NR_SHMEM, nr_none);
-        }
-        local_irq_enable();
-
-        /*
-         * Remove pte page tables, so we can re-fault
-         * the page as huge.
-         */
-        retract_page_tables(mapping, start);
-
-        /* Everything is ready, let's unfreeze the new_page */
-        set_page_dirty(new_page);
         SetPageUptodate(new_page);
-        page_ref_unfreeze(new_page, HPAGE_PMD_NR);
+        page_ref_add(new_page, HPAGE_PMD_NR - 1);
+        set_page_dirty(new_page);
         mem_cgroup_commit_charge(new_page, memcg, false, true);
         lru_cache_add_anon(new_page);
-        unlock_page(new_page);

+        /*
+         * Remove pte page tables, so we can re-fault the page as huge.
+         */
+        retract_page_tables(mapping, start);
         *hpage = NULL;

         khugepaged_pages_collapsed++;
     } else {
         struct page *page;
+
         /* Something went wrong: roll back page cache changes */
-        shmem_uncharge(mapping->host, nr_none);
         xas_lock_irq(&xas);
+        mapping->nrpages -= nr_none;
+        shmem_uncharge(mapping->host, nr_none);
+
         xas_set(&xas, start);
         xas_for_each(&xas, page, end - 1) {
             page = list_first_entry_or_null(&pagelist,
···
             xas_store(&xas, page);
             xas_pause(&xas);
             xas_unlock_irq(&xas);
-            putback_lru_page(page);
             unlock_page(page);
+            putback_lru_page(page);
             xas_lock_irq(&xas);
         }
         VM_BUG_ON(nr_none);
         xas_unlock_irq(&xas);

-        /* Unfreeze new_page, caller would take care about freeing it */
-        page_ref_unfreeze(new_page, 1);
         mem_cgroup_cancel_charge(new_page, memcg, true);
-        unlock_page(new_page);
         new_page->mapping = NULL;
     }
+
+    unlock_page(new_page);
 out:
     VM_BUG_ON(!list_empty(&pagelist));
     /* TODO: tracepoints */
+1-1
mm/memblock.c
···

 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 /*
- * Common iterator interface used to define for_each_mem_range().
+ * Common iterator interface used to define for_each_mem_pfn_range().
  */
 void __init_memblock __next_mem_pfn_range(int *idx, int nid,
                       unsigned long *out_start_pfn,
+20-12
mm/page_alloc.c
···
     int reserve_flags;

     /*
-     * In the slowpath, we sanity check order to avoid ever trying to
-     * reclaim >= MAX_ORDER areas which will never succeed. Callers may
-     * be using allocators in order of preference for an area that is
-     * too large.
-     */
-    if (order >= MAX_ORDER) {
-        WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
-        return NULL;
-    }
-
-    /*
      * We also sanity check to catch abuse of atomic reserves being used by
      * callers that are not in atomic context.
      */
···
     unsigned int alloc_flags = ALLOC_WMARK_LOW;
     gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */
     struct alloc_context ac = { };
+
+    /*
+     * There are several places where we assume that the order value is sane
+     * so bail out early if the request is out of bound.
+     */
+    if (unlikely(order >= MAX_ORDER)) {
+        WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
+        return NULL;
+    }

     gfp_mask &= gfp_allowed_mask;
     alloc_mask = gfp_mask;
···
             unsigned long size)
 {
     struct pglist_data *pgdat = zone->zone_pgdat;
+    int zone_idx = zone_idx(zone) + 1;

-    pgdat->nr_zones = zone_idx(zone) + 1;
+    if (zone_idx > pgdat->nr_zones)
+        pgdat->nr_zones = zone_idx;

     zone->zone_start_pfn = zone_start_pfn;

···

         if (PageReserved(page))
             goto unmovable;
+
+        /*
+         * If the zone is movable and we have ruled out all reserved
+         * pages then it should be reasonably safe to assume the rest
+         * is movable.
+         */
+        if (zone_idx(zone) == ZONE_MOVABLE)
+            continue;

         /*
          * Hugepages are not in LRU lists, but they're movable.
+3-10
mm/rmap.c
···
                           address + PAGE_SIZE);
         } else {
             /*
-             * We should not need to notify here as we reach this
-             * case only from freeze_page() itself only call from
-             * split_huge_page_to_list() so everything below must
-             * be true:
-             *   - page is not anonymous
-             *   - page is locked
-             *
-             * So as it is a locked file back page thus it can not
-             * be remove from the page cache and replace by a new
-             * page before mmu_notifier_invalidate_range_end so no
+             * This is a locked file-backed page, thus it cannot
+             * be removed from the page cache and replaced by a new
+             * page before mmu_notifier_invalidate_range_end, so no
              * concurrent thread might update its page table to
              * point at new page while a device still is using this
              * page.
+38-9
mm/shmem.c
···
     if (!shmem_inode_acct_block(inode, pages))
         return false;

+    /* nrpages adjustment first, then shmem_recalc_inode() when balanced */
+    inode->i_mapping->nrpages += pages;
+
     spin_lock_irqsave(&info->lock, flags);
     info->alloced += pages;
     inode->i_blocks += pages * BLOCKS_PER_PAGE;
     shmem_recalc_inode(inode);
     spin_unlock_irqrestore(&info->lock, flags);
-    inode->i_mapping->nrpages += pages;

     return true;
 }
···
 {
     struct shmem_inode_info *info = SHMEM_I(inode);
     unsigned long flags;
+
+    /* nrpages adjustment done by __delete_from_page_cache() or caller */

     spin_lock_irqsave(&info->lock, flags);
     info->alloced -= pages;
···
 {
     struct page *oldpage, *newpage;
     struct address_space *swap_mapping;
+    swp_entry_t entry;
     pgoff_t swap_index;
     int error;

     oldpage = *pagep;
-    swap_index = page_private(oldpage);
+    entry.val = page_private(oldpage);
+    swap_index = swp_offset(entry);
     swap_mapping = page_mapping(oldpage);

     /*
···
     __SetPageLocked(newpage);
     __SetPageSwapBacked(newpage);
     SetPageUptodate(newpage);
-    set_page_private(newpage, swap_index);
+    set_page_private(newpage, entry.val);
     SetPageSwapCache(newpage);

     /*
···
     struct page *page;
     pte_t _dst_pte, *dst_pte;
     int ret;
+    pgoff_t offset, max_off;

     ret = -ENOMEM;
     if (!shmem_inode_acct_block(inode, 1))
···
             *pagep = page;
             shmem_inode_unacct_blocks(inode, 1);
             /* don't free the page */
-            return -EFAULT;
+            return -ENOENT;
         }
     } else {        /* mfill_zeropage_atomic */
         clear_highpage(page);
···
     __SetPageLocked(page);
     __SetPageSwapBacked(page);
     __SetPageUptodate(page);
+
+    ret = -EFAULT;
+    offset = linear_page_index(dst_vma, dst_addr);
+    max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+    if (unlikely(offset >= max_off))
+        goto out_release;

     ret = mem_cgroup_try_charge_delay(page, dst_mm, gfp, &memcg, false);
     if (ret)
···
     _dst_pte = mk_pte(page, dst_vma->vm_page_prot);
     if (dst_vma->vm_flags & VM_WRITE)
         _dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
+    else {
+        /*
+         * We don't set the pte dirty if the vma has no
+         * VM_WRITE permission, so mark the page dirty or it
+         * could be freed from under us. We could do it
+         * unconditionally before unlock_page(), but doing it
+         * only if VM_WRITE is not set is faster.
+         */
+        set_page_dirty(page);
+    }
+
+    dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+
+    ret = -EFAULT;
+    max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+    if (unlikely(offset >= max_off))
+        goto out_release_uncharge_unlock;

     ret = -EEXIST;
-    dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
     if (!pte_none(*dst_pte))
         goto out_release_uncharge_unlock;

···

     /* No need to invalidate - it was non-present before */
     update_mmu_cache(dst_vma, dst_addr, dst_pte);
-    unlock_page(page);
     pte_unmap_unlock(dst_pte, ptl);
+    unlock_page(page);
     ret = 0;
 out:
     return ret;
 out_release_uncharge_unlock:
     pte_unmap_unlock(dst_pte, ptl);
+    ClearPageDirty(page);
+    delete_from_page_cache(page);
 out_release_uncharge:
     mem_cgroup_cancel_charge(page, memcg, false);
 out_release:
···
     inode_lock(inode);
     /* We're holding i_mutex so we can access i_size directly */

-    if (offset < 0)
-        offset = -EINVAL;
-    else if (offset >= inode->i_size)
+    if (offset < 0 || offset >= inode->i_size)
         offset = -ENXIO;
     else {
         start = offset >> PAGE_SHIFT;
mm/truncate.c
···
 	 */
 		xa_lock_irq(&mapping->i_pages);
 		xa_unlock_irq(&mapping->i_pages);
-
-		truncate_inode_pages(mapping, 0);
 	}
+
+	/*
+	 * Cleancache needs notification even if there are no pages or shadow
+	 * entries.
+	 */
+	truncate_inode_pages(mapping, 0);
 }
 EXPORT_SYMBOL(truncate_inode_pages_final);
 
mm/userfaultfd.c | +46 -16
···
 	void *page_kaddr;
 	int ret;
 	struct page *page;
+	pgoff_t offset, max_off;
+	struct inode *inode;
 
 	if (!*pagep) {
 		ret = -ENOMEM;
···
 
 		/* fallback to copy_from_user outside mmap_sem */
 		if (unlikely(ret)) {
-			ret = -EFAULT;
+			ret = -ENOENT;
 			*pagep = page;
 			/* don't free the page */
 			goto out;
···
 	if (dst_vma->vm_flags & VM_WRITE)
 		_dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
 
-	ret = -EEXIST;
 	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+	if (dst_vma->vm_file) {
+		/* the shmem MAP_PRIVATE case requires checking the i_size */
+		inode = dst_vma->vm_file->f_inode;
+		offset = linear_page_index(dst_vma, dst_addr);
+		max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+		ret = -EFAULT;
+		if (unlikely(offset >= max_off))
+			goto out_release_uncharge_unlock;
+	}
+	ret = -EEXIST;
 	if (!pte_none(*dst_pte))
 		goto out_release_uncharge_unlock;
 
···
 	pte_t _dst_pte, *dst_pte;
 	spinlock_t *ptl;
 	int ret;
+	pgoff_t offset, max_off;
+	struct inode *inode;
 
 	_dst_pte = pte_mkspecial(pfn_pte(my_zero_pfn(dst_addr),
 					 dst_vma->vm_page_prot));
-	ret = -EEXIST;
 	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+	if (dst_vma->vm_file) {
+		/* the shmem MAP_PRIVATE case requires checking the i_size */
+		inode = dst_vma->vm_file->f_inode;
+		offset = linear_page_index(dst_vma, dst_addr);
+		max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+		ret = -EFAULT;
+		if (unlikely(offset >= max_off))
+			goto out_unlock;
+	}
+	ret = -EEXIST;
 	if (!pte_none(*dst_pte))
 		goto out_unlock;
 	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
···
 	if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
 		goto out_unlock;
 	/*
-	 * Only allow __mcopy_atomic_hugetlb on userfaultfd
-	 * registered ranges.
+	 * Check the vma is registered in uffd, this is
+	 * required to enforce the VM_MAYWRITE check done at
+	 * uffd registration time.
 	 */
 	if (!dst_vma->vm_userfaultfd_ctx.ctx)
 		goto out_unlock;
···
 
 		cond_resched();
 
-		if (unlikely(err == -EFAULT)) {
+		if (unlikely(err == -ENOENT)) {
 			up_read(&dst_mm->mmap_sem);
 			BUG_ON(!page);
 
···
 {
 	ssize_t err;
 
-	if (vma_is_anonymous(dst_vma)) {
+	/*
+	 * The normal page fault path for a shmem will invoke the
+	 * fault, fill the hole in the file and COW it right away. The
+	 * result generates plain anonymous memory. So when we are
+	 * asked to fill an hole in a MAP_PRIVATE shmem mapping, we'll
+	 * generate anonymous memory directly without actually filling
+	 * the hole. For the MAP_PRIVATE case the robustness check
+	 * only happens in the pagetable (to verify it's still none)
+	 * and not in the radix tree.
+	 */
+	if (!(dst_vma->vm_flags & VM_SHARED)) {
 		if (!zeropage)
 			err = mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
 					       dst_addr, src_addr, page);
···
 	if (!dst_vma)
 		goto out_unlock;
 	/*
-	 * Be strict and only allow __mcopy_atomic on userfaultfd
-	 * registered ranges to prevent userland errors going
-	 * unnoticed. As far as the VM consistency is concerned, it
-	 * would be perfectly safe to remove this check, but there's
-	 * no useful usage for __mcopy_atomic ouside of userfaultfd
-	 * registered ranges. This is after all why these are ioctls
-	 * belonging to the userfaultfd and not syscalls.
+	 * Check the vma is registered in uffd, this is required to
+	 * enforce the VM_MAYWRITE check done at uffd registration
+	 * time.
 	 */
 	if (!dst_vma->vm_userfaultfd_ctx.ctx)
 		goto out_unlock;
···
 	 * dst_vma.
 	 */
 	err = -ENOMEM;
-	if (vma_is_anonymous(dst_vma) && unlikely(anon_vma_prepare(dst_vma)))
+	if (!(dst_vma->vm_flags & VM_SHARED) &&
+	    unlikely(anon_vma_prepare(dst_vma)))
 		goto out_unlock;
 
 	while (src_addr < src_start + len) {
···
 				       src_addr, &page, zeropage);
 		cond_resched();
 
-		if (unlikely(err == -EFAULT)) {
+		if (unlikely(err == -ENOENT)) {
 			void *page_kaddr;
 
 			up_read(&dst_mm->mmap_sem);
mm/vmstat.c | +4 -3
···
 
 		/*
 		 * The fast way of checking if there are any vmstat diffs.
-		 * This works because the diffs are byte sized items.
 		 */
-		if (memchr_inv(p->vm_stat_diff, 0, NR_VM_ZONE_STAT_ITEMS))
+		if (memchr_inv(p->vm_stat_diff, 0, NR_VM_ZONE_STAT_ITEMS *
+			       sizeof(p->vm_stat_diff[0])))
 			return true;
 #ifdef CONFIG_NUMA
-		if (memchr_inv(p->vm_numa_stat_diff, 0, NR_VM_NUMA_STAT_ITEMS))
+		if (memchr_inv(p->vm_numa_stat_diff, 0, NR_VM_NUMA_STAT_ITEMS *
+			       sizeof(p->vm_numa_stat_diff[0])))
 			return true;
 #endif
 	}
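The vmstat fix above is the classic memchr_inv() sizing pitfall: the length argument is in bytes, not elements, so it only works unscaled while the items are byte sized. A minimal userspace sketch (with a stand-in for the kernel's memchr_inv() and a hypothetical 2-byte counter array) reproduces the miss:

#include <stdio.h>
#include <string.h>

/* Userspace stand-in for the kernel's memchr_inv(): returns a pointer
 * to the first byte that differs from 'c', or NULL if all match. */
static const void *memchr_inv(const void *start, int c, size_t bytes)
{
    const unsigned char *p = start;
    size_t i;

    for (i = 0; i < bytes; i++)
        if (p[i] != (unsigned char)c)
            return p + i;
    return NULL;
}

int main(void)
{
    short diff[4] = { 0, 0, 0, 256 };   /* nonzero only in a high byte */

    /* Buggy: scans 4 bytes instead of 4 elements (8 bytes). */
    printf("scan 4 bytes: %s\n",
           memchr_inv(diff, 0, 4) ? "dirty" : "clean");

    /* Fixed: scale the element count by the element size. */
    printf("scan 4 elems: %s\n",
           memchr_inv(diff, 0, 4 * sizeof(diff[0])) ? "dirty" : "clean");
    return 0;
}

Scanning only the item count reports the array as clean even though the last counter is nonzero; scaling by sizeof(elem) catches it, which is exactly what the patch does for both diff arrays.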
mm/z3fold.c | +63 -40
···
 #define NCHUNKS		((PAGE_SIZE - ZHDR_SIZE_ALIGNED) >> CHUNK_SHIFT)
 
 #define BUDDY_MASK	(0x3)
+#define BUDDY_SHIFT	2
 
 /**
  * struct z3fold_pool - stores metadata for each z3fold pool
···
 	MIDDLE_CHUNK_MAPPED,
 	NEEDS_COMPACTING,
 	PAGE_STALE,
-	UNDER_RECLAIM
+	PAGE_CLAIMED, /* by either reclaim or free */
 };
 
 /*****************
···
 	clear_bit(MIDDLE_CHUNK_MAPPED, &page->private);
 	clear_bit(NEEDS_COMPACTING, &page->private);
 	clear_bit(PAGE_STALE, &page->private);
-	clear_bit(UNDER_RECLAIM, &page->private);
+	clear_bit(PAGE_CLAIMED, &page->private);
 
 	spin_lock_init(&zhdr->page_lock);
 	kref_init(&zhdr->refcount);
···
 	unsigned long handle;
 
 	handle = (unsigned long)zhdr;
-	if (bud != HEADLESS)
-		handle += (bud + zhdr->first_num) & BUDDY_MASK;
+	if (bud != HEADLESS) {
+		handle |= (bud + zhdr->first_num) & BUDDY_MASK;
+		if (bud == LAST)
+			handle |= (zhdr->last_chunks << BUDDY_SHIFT);
+	}
 	return handle;
 }
 
···
 static struct z3fold_header *handle_to_z3fold_header(unsigned long handle)
 {
 	return (struct z3fold_header *)(handle & PAGE_MASK);
+}
+
+/* only for LAST bud, returns zero otherwise */
+static unsigned short handle_to_chunks(unsigned long handle)
+{
+	return (handle & ~PAGE_MASK) >> BUDDY_SHIFT;
 }
 
 /*
···
 	page = virt_to_page(zhdr);
 
 	if (test_bit(PAGE_HEADLESS, &page->private)) {
-		/* HEADLESS page stored */
-		bud = HEADLESS;
-	} else {
-		z3fold_page_lock(zhdr);
-		bud = handle_to_buddy(handle);
-
-		switch (bud) {
-		case FIRST:
-			zhdr->first_chunks = 0;
-			break;
-		case MIDDLE:
-			zhdr->middle_chunks = 0;
-			zhdr->start_middle = 0;
-			break;
-		case LAST:
-			zhdr->last_chunks = 0;
-			break;
-		default:
-			pr_err("%s: unknown bud %d\n", __func__, bud);
-			WARN_ON(1);
-			z3fold_page_unlock(zhdr);
-			return;
+		/* if a headless page is under reclaim, just leave.
+		 * NB: we use test_and_set_bit for a reason: if the bit
+		 * has not been set before, we release this page
+		 * immediately so we don't care about its value any more.
+		 */
+		if (!test_and_set_bit(PAGE_CLAIMED, &page->private)) {
+			spin_lock(&pool->lock);
+			list_del(&page->lru);
+			spin_unlock(&pool->lock);
+			free_z3fold_page(page);
+			atomic64_dec(&pool->pages_nr);
 		}
+		return;
 	}
 
-	if (bud == HEADLESS) {
-		spin_lock(&pool->lock);
-		list_del(&page->lru);
-		spin_unlock(&pool->lock);
-		free_z3fold_page(page);
-		atomic64_dec(&pool->pages_nr);
+	/* Non-headless case */
+	z3fold_page_lock(zhdr);
+	bud = handle_to_buddy(handle);
+
+	switch (bud) {
+	case FIRST:
+		zhdr->first_chunks = 0;
+		break;
+	case MIDDLE:
+		zhdr->middle_chunks = 0;
+		break;
+	case LAST:
+		zhdr->last_chunks = 0;
+		break;
+	default:
+		pr_err("%s: unknown bud %d\n", __func__, bud);
+		WARN_ON(1);
+		z3fold_page_unlock(zhdr);
 		return;
 	}
 
···
 		atomic64_dec(&pool->pages_nr);
 		return;
 	}
-	if (test_bit(UNDER_RECLAIM, &page->private)) {
+	if (test_bit(PAGE_CLAIMED, &page->private)) {
 		z3fold_page_unlock(zhdr);
 		return;
 	}
···
 		}
 		list_for_each_prev(pos, &pool->lru) {
 			page = list_entry(pos, struct page, lru);
-			if (test_bit(PAGE_HEADLESS, &page->private))
-				/* candidate found */
-				break;
+
+			/* this bit could have been set by free, in which case
+			 * we pass over to the next page in the pool.
+			 */
+			if (test_and_set_bit(PAGE_CLAIMED, &page->private))
+				continue;
 
 			zhdr = page_address(page);
-			if (!z3fold_page_trylock(zhdr))
+			if (test_bit(PAGE_HEADLESS, &page->private))
+				break;
+
+			if (!z3fold_page_trylock(zhdr)) {
+				zhdr = NULL;
 				continue; /* can't evict at this point */
+			}
 			kref_get(&zhdr->refcount);
 			list_del_init(&zhdr->buddy);
 			zhdr->cpu = -1;
-			set_bit(UNDER_RECLAIM, &page->private);
 			break;
 		}
+
+		if (!zhdr)
+			break;
 
 		list_del_init(&page->lru);
 		spin_unlock(&pool->lock);
···
 		if (test_bit(PAGE_HEADLESS, &page->private)) {
 			if (ret == 0) {
 				free_z3fold_page(page);
+				atomic64_dec(&pool->pages_nr);
 				return 0;
 			}
 			spin_lock(&pool->lock);
···
 			spin_unlock(&pool->lock);
 		} else {
 			z3fold_page_lock(zhdr);
-			clear_bit(UNDER_RECLAIM, &page->private);
+			clear_bit(PAGE_CLAIMED, &page->private);
 			if (kref_put(&zhdr->refcount,
 					release_z3fold_page_locked)) {
 				atomic64_dec(&pool->pages_nr);
···
 		set_bit(MIDDLE_CHUNK_MAPPED, &page->private);
 		break;
 	case LAST:
-		addr += PAGE_SIZE - (zhdr->last_chunks << CHUNK_SHIFT);
+		addr += PAGE_SIZE - (handle_to_chunks(handle) << CHUNK_SHIFT);
 		break;
 	default:
 		pr_err("unknown buddy id %d\n", buddy);
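The interesting detail in this patch is the handle encoding: the buddy id lives in the low two bits, and for the LAST buddy the allocation size in chunks is stashed above them, so z3fold_map() no longer depends on zhdr->last_chunks (which a racing free may already have zeroed). A minimal sketch of that encoding, with assumed constants and the first_num folding omitted for brevity:

#include <stdio.h>

#define BUDDY_MASK  0x3UL
#define BUDDY_SHIFT 2
#define PAGE_MASK   (~0xfffUL)          /* assuming 4K pages */

enum buddy { HEADLESS = 0, FIRST, MIDDLE, LAST };

/* Sketch of encode_handle(): the kernel also adds zhdr->first_num
 * into the buddy bits; that rotation is left out here. */
static unsigned long encode(unsigned long zhdr, int bud,
                            unsigned short last_chunks)
{
    unsigned long handle = zhdr;

    if (bud != HEADLESS) {
        handle |= bud & BUDDY_MASK;
        if (bud == LAST)
            handle |= (unsigned long)last_chunks << BUDDY_SHIFT;
    }
    return handle;
}

/* only meaningful for a LAST-buddy handle, mirrors handle_to_chunks() */
static unsigned short handle_to_chunks(unsigned long handle)
{
    return (handle & ~PAGE_MASK) >> BUDDY_SHIFT;
}

int main(void)
{
    unsigned long h = encode(0x12345000UL, LAST, 42);

    printf("buddy=%lu chunks=%u\n",
           h & BUDDY_MASK, (unsigned)handle_to_chunks(h));
    return 0;
}

Shifting the chunk count by BUDDY_SHIFT keeps it clear of the buddy bits while still fitting inside the sub-page part of the handle.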
net/batman-adv/fragmentation.c
···
 		kfree(entry);
 
 	packet = (struct batadv_frag_packet *)skb_out->data;
-	size = ntohs(packet->total_size);
+	size = ntohs(packet->total_size) + hdr_size;
 
 	/* Make room for the rest of the fragments. */
 	if (pskb_expand_head(skb_out, 0, size - skb_out->len, GFP_ATOMIC) < 0) {
net/bridge/br_private.h | +7
···
 	struct metadata_dst	*tunnel_dst;
 };
 
+/* private vlan flags */
+enum {
+	BR_VLFLAG_PER_PORT_STATS = BIT(0),
+};
+
 /**
  * struct net_bridge_vlan - per-vlan entry
  *
  * @vnode: rhashtable member
  * @vid: VLAN id
  * @flags: bridge vlan flags
+ * @priv_flags: private (in-kernel) bridge vlan flags
  * @stats: per-cpu VLAN statistics
  * @br: if MASTER flag set, this points to a bridge struct
  * @port: if MASTER flag unset, this points to a port struct
···
 	struct rhash_head	tnode;
 	u16			vid;
 	u16			flags;
+	u16			priv_flags;
 	struct br_vlan_stats __percpu	*stats;
 	union {
 		struct net_bridge	*br;
net/bridge/br_vlan.c | +2 -1
···
 	v = container_of(rcu, struct net_bridge_vlan, rcu);
 	WARN_ON(br_vlan_is_master(v));
 	/* if we had per-port stats configured then free them here */
-	if (v->brvlan->stats != v->stats)
+	if (v->priv_flags & BR_VLFLAG_PER_PORT_STATS)
 		free_percpu(v->stats);
 	v->stats = NULL;
 	kfree(v);
···
 				err = -ENOMEM;
 				goto out_filt;
 			}
+			v->priv_flags |= BR_VLFLAG_PER_PORT_STATS;
 		} else {
 			v->stats =  masterv->stats;
 		}
net/can/raw.c | +9 -8
···
 	} else
 		ifindex = ro->ifindex;
 
-	if (ro->fd_frames) {
-		if (unlikely(size != CANFD_MTU && size != CAN_MTU))
-			return -EINVAL;
-	} else {
-		if (unlikely(size != CAN_MTU))
-			return -EINVAL;
-	}
-
 	dev = dev_get_by_index(sock_net(sk), ifindex);
 	if (!dev)
 		return -ENXIO;
+
+	err = -EINVAL;
+	if (ro->fd_frames && dev->mtu == CANFD_MTU) {
+		if (unlikely(size != CANFD_MTU && size != CAN_MTU))
+			goto put_dev;
+	} else {
+		if (unlikely(size != CAN_MTU))
+			goto put_dev;
+	}
 
 	skb = sock_alloc_send_skb(sk, size + sizeof(struct can_skb_priv),
 				  msg->msg_flags & MSG_DONTWAIT, &err);
net/ceph/messenger.c | +9 -3
···
 	struct bio_vec bvec;
 	int ret;
 
-	/* sendpage cannot properly handle pages with page_count == 0,
-	 * we need to fallback to sendmsg if that's the case */
-	if (page_count(page) >= 1)
+	/*
+	 * sendpage cannot properly handle pages with page_count == 0,
+	 * we need to fall back to sendmsg if that's the case.
+	 *
+	 * Same goes for slab pages: skb_can_coalesce() allows
+	 * coalescing neighboring slab objects into a single frag which
+	 * triggers one of hardened usercopy checks.
+	 */
+	if (page_count(page) >= 1 && !PageSlab(page))
 		return __ceph_tcp_sendpage(sock, page, offset, size, more);
 
 	bvec.bv_page = page;
net/core/dev.c | +10 -3
···
 		}
 
 		skb = next;
-		if (netif_xmit_stopped(txq) && skb) {
+		if (netif_tx_queue_stopped(txq) && skb) {
 			rc = NETDEV_TX_BUSY;
 			break;
 		}
···
 	skb->vlan_tci = 0;
 	skb->dev = napi->dev;
 	skb->skb_iif = 0;
+
+	/* eth_type_trans() assumes pkt_type is PACKET_HOST */
+	skb->pkt_type = PACKET_HOST;
+
 	skb->encapsulation = 0;
 	skb_shinfo(skb)->gso_type = 0;
 	skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
···
 		if (work_done)
 			timeout = n->dev->gro_flush_timeout;
 
+		/* When the NAPI instance uses a timeout and keeps postponing
+		 * it, we need to bound somehow the time packets are kept in
+		 * the GRO layer
+		 */
+		napi_gro_flush(n, !!timeout);
 		if (timeout)
 			hrtimer_start(&n->timer, ns_to_ktime(timeout),
 				      HRTIMER_MODE_REL_PINNED);
-		else
-			napi_gro_flush(n, false);
 	}
 	if (unlikely(!list_empty(&n->poll_list))) {
 		/* If n->poll_list is not empty, we need to mask irqs */
net/core/skbuff.c
···
 	nf_reset(skb);
 	nf_reset_trace(skb);
 
+#ifdef CONFIG_NET_SWITCHDEV
+	skb->offload_fwd_mark = 0;
+	skb->offload_mr_fwd_mark = 0;
+#endif
+
 	if (!xnet)
 		return;
···
  *  - L2+L3+L4+payload size (e.g. sanity check before passing to driver)
  *
  * This is a helper to do that correctly considering GSO_BY_FRAGS.
+ *
+ * @skb: GSO skb
  *
  * @seg_len: The segmented length (from skb_gso_*_seglen). In the
  *           GSO_BY_FRAGS case this will be [header sizes + GSO_BY_FRAGS].
net/ipv4/netfilter/ipt_MASQUERADE.c
···
 	int ret;
 
 	ret = xt_register_target(&masquerade_tg_reg);
+	if (ret)
+		return ret;
 
-	if (ret == 0)
-		nf_nat_masquerade_ipv4_register_notifier();
+	ret = nf_nat_masquerade_ipv4_register_notifier();
+	if (ret)
+		xt_unregister_target(&masquerade_tg_reg);
 
 	return ret;
 }
net/ipv4/netfilter/nf_nat_masquerade_ipv4.c | +30 -8
···
 	.notifier_call	= masq_inet_event,
 };
 
-static atomic_t masquerade_notifier_refcount = ATOMIC_INIT(0);
+static int masq_refcnt;
+static DEFINE_MUTEX(masq_mutex);
 
-void nf_nat_masquerade_ipv4_register_notifier(void)
+int nf_nat_masquerade_ipv4_register_notifier(void)
 {
+	int ret = 0;
+
+	mutex_lock(&masq_mutex);
 	/* check if the notifier was already set */
-	if (atomic_inc_return(&masquerade_notifier_refcount) > 1)
-		return;
+	if (++masq_refcnt > 1)
+		goto out_unlock;
 
 	/* Register for device down reports */
-	register_netdevice_notifier(&masq_dev_notifier);
+	ret = register_netdevice_notifier(&masq_dev_notifier);
+	if (ret)
+		goto err_dec;
 	/* Register IP address change reports */
-	register_inetaddr_notifier(&masq_inet_notifier);
+	ret = register_inetaddr_notifier(&masq_inet_notifier);
+	if (ret)
+		goto err_unregister;
+
+	mutex_unlock(&masq_mutex);
+	return ret;
+
+err_unregister:
+	unregister_netdevice_notifier(&masq_dev_notifier);
+err_dec:
+	masq_refcnt--;
+out_unlock:
+	mutex_unlock(&masq_mutex);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4_register_notifier);
 
 void nf_nat_masquerade_ipv4_unregister_notifier(void)
 {
+	mutex_lock(&masq_mutex);
 	/* check if the notifier still has clients */
-	if (atomic_dec_return(&masquerade_notifier_refcount) > 0)
-		return;
+	if (--masq_refcnt > 0)
+		goto out_unlock;
 
 	unregister_netdevice_notifier(&masq_dev_notifier);
 	unregister_inetaddr_notifier(&masq_inet_notifier);
+out_unlock:
+	mutex_unlock(&masq_mutex);
 }
 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4_unregister_notifier);
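The shape of this change, switching from a bare atomic refcount to a mutex-protected counter, is what lets the error paths unwind safely: nothing can race in between the increment, the registrations, and a possible rollback. A userspace sketch of the same pattern (pthread mutex standing in for masq_mutex, stub register/unregister functions that are not kernel APIs):

#include <pthread.h>
#include <stdio.h>

static int masq_refcnt;
static pthread_mutex_t masq_mutex = PTHREAD_MUTEX_INITIALIZER;

static int register_notifiers(void)   { return 0; }  /* stand-ins */
static void unregister_notifiers(void) { }

int masq_register(void)
{
    int ret = 0;

    pthread_mutex_lock(&masq_mutex);
    if (++masq_refcnt > 1)      /* already registered by someone else */
        goto out_unlock;
    ret = register_notifiers();
    if (ret)
        masq_refcnt--;          /* roll back the count on failure */
out_unlock:
    pthread_mutex_unlock(&masq_mutex);
    return ret;
}

void masq_unregister(void)
{
    pthread_mutex_lock(&masq_mutex);
    if (--masq_refcnt == 0)     /* last user tears everything down */
        unregister_notifiers();
    pthread_mutex_unlock(&masq_mutex);
}

int main(void)
{
    if (masq_register() == 0) {
        printf("registered\n");
        masq_unregister();
    }
    return 0;
}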
net/ipv4/netfilter/nft_masq_ipv4.c | +3 -1
···
 	if (ret < 0)
 		return ret;
 
-	nf_nat_masquerade_ipv4_register_notifier();
+	ret = nf_nat_masquerade_ipv4_register_notifier();
+	if (ret)
+		nft_unregister_expr(&nft_masq_ipv4_type);
 
 	return ret;
 }
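The same init/unwind shape recurs in ipt_MASQUERADE above and in the IPv6 twins further down: register A, then B; if B fails, undo A before returning, so a failed module load leaves no state behind. A tiny sketch with stand-in functions (not kernel APIs; register_b deliberately fails to show the unwind):

#include <stdio.h>

static int  register_a(void) { return 0; }
static void unregister_a(void) { }
static int  register_b(void) { return -1; }   /* simulate failure */

static int module_init_demo(void)
{
    int ret = register_a();

    if (ret)
        return ret;

    ret = register_b();
    if (ret)
        unregister_a();     /* unwind in reverse registration order */
    return ret;
}

int main(void)
{
    printf("init: %d\n", module_init_demo());
    return 0;
}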
net/ipv4/tcp_input.c | +23 -8
···
 		u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr;
 		u32 delta_us;
 
-		if (!delta)
-			delta = 1;
-		delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
-		tcp_rcv_rtt_update(tp, delta_us, 0);
+		if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) {
+			if (!delta)
+				delta = 1;
+			delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
+			tcp_rcv_rtt_update(tp, delta_us, 0);
+		}
 	}
 }
···
 	if (seq_rtt_us < 0 && tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr &&
 	    flag & FLAG_ACKED) {
 		u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr;
-		u32 delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
 
-		seq_rtt_us = ca_rtt_us = delta_us;
+		if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) {
+			seq_rtt_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
+			ca_rtt_us = seq_rtt_us;
+		}
 	}
 	rs->rtt_us = ca_rtt_us; /* RTT of last (S)ACKed packet (or -1) */
 	if (seq_rtt_us < 0)
···
 	 * If the sack array is full, forget about the last one.
 	 */
 	if (this_sack >= TCP_NUM_SACKS) {
-		if (tp->compressed_ack)
+		if (tp->compressed_ack > TCP_FASTRETRANS_THRESH)
 			tcp_send_ack(sk);
 		this_sack--;
 		tp->rx_opt.num_sacks--;
···
 	if (TCP_SKB_CB(from)->has_rxtstamp) {
 		TCP_SKB_CB(to)->has_rxtstamp = true;
 		to->tstamp = from->tstamp;
+		skb_hwtstamps(to)->hwtstamp = skb_hwtstamps(from)->hwtstamp;
 	}
 
 	return true;
···
 	if (!tcp_is_sack(tp) ||
 	    tp->compressed_ack >= sock_net(sk)->ipv4.sysctl_tcp_comp_sack_nr)
 		goto send_now;
-	tp->compressed_ack++;
+
+	if (tp->compressed_ack_rcv_nxt != tp->rcv_nxt) {
+		tp->compressed_ack_rcv_nxt = tp->rcv_nxt;
+		if (tp->compressed_ack > TCP_FASTRETRANS_THRESH)
+			NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPACKCOMPRESSED,
+				      tp->compressed_ack - TCP_FASTRETRANS_THRESH);
+		tp->compressed_ack = 0;
+	}
+
+	if (++tp->compressed_ack <= TCP_FASTRETRANS_THRESH)
+		goto send_now;
 
 	if (hrtimer_is_queued(&tp->compressed_ack_timer))
 		return;
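The guard added in the first two hunks is a divide-before-multiply overflow check: a bogus timestamp echo can make delta huge, and delta * (USEC_PER_SEC / TCP_TS_HZ) would wrap in 32 bits. A minimal sketch of the same conversion, assuming the usual 1 kHz timestamp clock:

#include <stdio.h>
#include <limits.h>

#define USEC_PER_SEC 1000000
#define TCP_TS_HZ    1000        /* assumed timestamp tick rate */

/* Convert a timestamp delta (in TCP_TS_HZ ticks) to microseconds,
 * discarding deltas whose product would overflow -- the same bound
 * the patch adds around the multiplication. */
static int delta_to_us(unsigned int delta, unsigned int *us)
{
    if (delta >= INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))
        return -1;               /* bogus sample: ignore it */
    if (!delta)
        delta = 1;
    *us = delta * (USEC_PER_SEC / TCP_TS_HZ);
    return 0;
}

int main(void)
{
    unsigned int us;

    if (!delta_to_us(5, &us))
        printf("5 ticks -> %u us\n", us);
    if (delta_to_us(0x7fffffffu, &us) < 0)
        printf("huge delta rejected\n");
    return 0;
}

Comparing delta against INT_MAX / multiplier costs one compile-time division and avoids ever forming the overflowing product.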
net/ipv6/af_inet6.c
···
 	err = ip6_flowlabel_init();
 	if (err)
 		goto ip6_flowlabel_fail;
+	err = ipv6_anycast_init();
+	if (err)
+		goto ipv6_anycast_fail;
 	err = addrconf_init();
 	if (err)
 		goto addrconf_fail;
···
 ipv6_exthdrs_fail:
 	addrconf_cleanup();
 addrconf_fail:
+	ipv6_anycast_cleanup();
+ipv6_anycast_fail:
 	ip6_flowlabel_cleanup();
 ip6_flowlabel_fail:
 	ndisc_late_cleanup();
net/ipv6/anycast.c | +76 -4
···
 
 #include <net/checksum.h>
 
+#define IN6_ADDR_HSIZE_SHIFT	8
+#define IN6_ADDR_HSIZE		BIT(IN6_ADDR_HSIZE_SHIFT)
+/*	anycast address hash table
+ */
+static struct hlist_head inet6_acaddr_lst[IN6_ADDR_HSIZE];
+static DEFINE_SPINLOCK(acaddr_hash_lock);
+
 static int ipv6_dev_ac_dec(struct net_device *dev, const struct in6_addr *addr);
+
+static u32 inet6_acaddr_hash(struct net *net, const struct in6_addr *addr)
+{
+	u32 val = ipv6_addr_hash(addr) ^ net_hash_mix(net);
+
+	return hash_32(val, IN6_ADDR_HSIZE_SHIFT);
+}
 
 /*
  *	socket join an anycast group
···
 	rtnl_unlock();
 }
 
+static void ipv6_add_acaddr_hash(struct net *net, struct ifacaddr6 *aca)
+{
+	unsigned int hash = inet6_acaddr_hash(net, &aca->aca_addr);
+
+	spin_lock(&acaddr_hash_lock);
+	hlist_add_head_rcu(&aca->aca_addr_lst, &inet6_acaddr_lst[hash]);
+	spin_unlock(&acaddr_hash_lock);
+}
+
+static void ipv6_del_acaddr_hash(struct ifacaddr6 *aca)
+{
+	spin_lock(&acaddr_hash_lock);
+	hlist_del_init_rcu(&aca->aca_addr_lst);
+	spin_unlock(&acaddr_hash_lock);
+}
+
 static void aca_get(struct ifacaddr6 *aca)
 {
 	refcount_inc(&aca->aca_refcnt);
 }
 
+static void aca_free_rcu(struct rcu_head *h)
+{
+	struct ifacaddr6 *aca = container_of(h, struct ifacaddr6, rcu);
+
+	fib6_info_release(aca->aca_rt);
+	kfree(aca);
+}
+
 static void aca_put(struct ifacaddr6 *ac)
 {
 	if (refcount_dec_and_test(&ac->aca_refcnt)) {
-		fib6_info_release(ac->aca_rt);
-		kfree(ac);
+		call_rcu(&ac->rcu, aca_free_rcu);
 	}
 }
···
 	aca->aca_addr = *addr;
 	fib6_info_hold(f6i);
 	aca->aca_rt = f6i;
+	INIT_HLIST_NODE(&aca->aca_addr_lst);
 	aca->aca_users = 1;
 	/* aca_tstamp should be updated upon changes */
 	aca->aca_cstamp = aca->aca_tstamp = jiffies;
···
 	aca_get(aca);
 	write_unlock_bh(&idev->lock);
 
+	ipv6_add_acaddr_hash(net, aca);
+
 	ip6_ins_rt(net, f6i);
 
 	addrconf_join_solict(idev->dev, &aca->aca_addr);
···
 	else
 		idev->ac_list = aca->aca_next;
 	write_unlock_bh(&idev->lock);
+	ipv6_del_acaddr_hash(aca);
 	addrconf_leave_solict(idev, &aca->aca_addr);
 
 	ip6_del_rt(dev_net(idev->dev), aca->aca_rt);
···
 	while ((aca = idev->ac_list) != NULL) {
 		idev->ac_list = aca->aca_next;
 		write_unlock_bh(&idev->lock);
+
+		ipv6_del_acaddr_hash(aca);
 
 		addrconf_leave_solict(idev, &aca->aca_addr);
 
···
 bool ipv6_chk_acast_addr(struct net *net, struct net_device *dev,
 			 const struct in6_addr *addr)
 {
+	unsigned int hash = inet6_acaddr_hash(net, addr);
+	struct net_device *nh_dev;
+	struct ifacaddr6 *aca;
 	bool found = false;
 
 	rcu_read_lock();
 	if (dev)
 		found = ipv6_chk_acast_dev(dev, addr);
 	else
-		for_each_netdev_rcu(net, dev)
-			if (ipv6_chk_acast_dev(dev, addr)) {
+		hlist_for_each_entry_rcu(aca, &inet6_acaddr_lst[hash],
+					 aca_addr_lst) {
+			nh_dev = fib6_info_nh_dev(aca->aca_rt);
+			if (!nh_dev || !net_eq(dev_net(nh_dev), net))
+				continue;
+			if (ipv6_addr_equal(&aca->aca_addr, addr)) {
 				found = true;
 				break;
 			}
+		}
 	rcu_read_unlock();
 	return found;
 }
···
 	remove_proc_entry("anycast6", net->proc_net);
 }
 #endif
+
+/*	Init / cleanup code
+ */
+int __init ipv6_anycast_init(void)
+{
+	int i;
+
+	for (i = 0; i < IN6_ADDR_HSIZE; i++)
+		INIT_HLIST_HEAD(&inet6_acaddr_lst[i]);
+	return 0;
+}
+
+void ipv6_anycast_cleanup(void)
+{
+	int i;
+
+	spin_lock(&acaddr_hash_lock);
+	for (i = 0; i < IN6_ADDR_HSIZE; i++)
+		WARN_ON(!hlist_empty(&inet6_acaddr_lst[i]));
+	spin_unlock(&acaddr_hash_lock);
+}
net/ipv6/ip6_fib.c | +2 -2
···
 
 	/* fib entries are never clones */
 	if (arg.filter.flags & RTM_F_CLONED)
-		return skb->len;
+		goto out;
 
 	w = (void *)cb->args[2];
 	if (!w) {
···
 		tb = fib6_get_table(net, arg.filter.table_id);
 		if (!tb) {
 			if (arg.filter.dump_all_families)
-				return skb->len;
+				goto out;
 
 			NL_SET_ERR_MSG_MOD(cb->extack, "FIB table does not exist");
 			return -ENOENT;
net/ipv6/ip6_output.c | +2 -1
···
 			unsigned int fraglen;
 			unsigned int fraggap;
 			unsigned int alloclen;
-			unsigned int pagedlen = 0;
+			unsigned int pagedlen;
 alloc_new_skb:
 			/* There's no room in the current skb */
 			if (skb)
···
 			if (datalen > (cork->length <= mtu && !(cork->flags & IPCORK_ALLFRAG) ? mtu : maxfraglen) - fragheaderlen)
 				datalen = maxfraglen - fragheaderlen - rt->dst.trailer_len;
 			fraglen = datalen + fragheaderlen;
+			pagedlen = 0;
 
 			if ((flags & MSG_MORE) &&
 			    !(rt->dst.dev->features&NETIF_F_SG))
net/ipv6/netfilter/ip6t_MASQUERADE.c
···
 	int err;
 
 	err = xt_register_target(&masquerade_tg6_reg);
-	if (err == 0)
-		nf_nat_masquerade_ipv6_register_notifier();
+	if (err)
+		return err;
+
+	err = nf_nat_masquerade_ipv6_register_notifier();
+	if (err)
+		xt_unregister_target(&masquerade_tg6_reg);
 
 	return err;
 }
net/ipv6/netfilter/nf_conntrack_reasm.c | +9 -4
···
 	 */
 	ret = -EINPROGRESS;
 	if (fq->q.flags == (INET_FRAG_FIRST_IN | INET_FRAG_LAST_IN) &&
-	    fq->q.meat == fq->q.len &&
-	    nf_ct_frag6_reasm(fq, skb, dev))
-		ret = 0;
-	else
+	    fq->q.meat == fq->q.len) {
+		unsigned long orefdst = skb->_skb_refdst;
+
+		skb->_skb_refdst = 0UL;
+		if (nf_ct_frag6_reasm(fq, skb, dev))
+			ret = 0;
+		skb->_skb_refdst = orefdst;
+	} else {
 		skb_dst_drop(skb);
+	}
 
 out_unlock:
 	spin_unlock_bh(&fq->q.lock);
net/ipv6/netfilter/nf_nat_masquerade_ipv6.c | +37 -14
···
  * of ipv6 addresses being deleted), we also need to add an upper
  * limit to the number of queued work items.
  */
-static int masq_inet_event(struct notifier_block *this,
-			   unsigned long event, void *ptr)
+static int masq_inet6_event(struct notifier_block *this,
+			    unsigned long event, void *ptr)
 {
 	struct inet6_ifaddr *ifa = ptr;
 	const struct net_device *dev;
···
 	return NOTIFY_DONE;
 }
 
-static struct notifier_block masq_inet_notifier = {
-	.notifier_call	= masq_inet_event,
+static struct notifier_block masq_inet6_notifier = {
+	.notifier_call	= masq_inet6_event,
 };
 
-static atomic_t masquerade_notifier_refcount = ATOMIC_INIT(0);
+static int masq_refcnt;
+static DEFINE_MUTEX(masq_mutex);
 
-void nf_nat_masquerade_ipv6_register_notifier(void)
+int nf_nat_masquerade_ipv6_register_notifier(void)
 {
-	/* check if the notifier is already set */
-	if (atomic_inc_return(&masquerade_notifier_refcount) > 1)
-		return;
+	int ret = 0;
 
-	register_netdevice_notifier(&masq_dev_notifier);
-	register_inet6addr_notifier(&masq_inet_notifier);
+	mutex_lock(&masq_mutex);
+	/* check if the notifier is already set */
+	if (++masq_refcnt > 1)
+		goto out_unlock;
+
+	ret = register_netdevice_notifier(&masq_dev_notifier);
+	if (ret)
+		goto err_dec;
+
+	ret = register_inet6addr_notifier(&masq_inet6_notifier);
+	if (ret)
+		goto err_unregister;
+
+	mutex_unlock(&masq_mutex);
+	return ret;
+
+err_unregister:
+	unregister_netdevice_notifier(&masq_dev_notifier);
+err_dec:
+	masq_refcnt--;
+out_unlock:
+	mutex_unlock(&masq_mutex);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6_register_notifier);
 
 void nf_nat_masquerade_ipv6_unregister_notifier(void)
 {
+	mutex_lock(&masq_mutex);
 	/* check if the notifier still has clients */
-	if (atomic_dec_return(&masquerade_notifier_refcount) > 0)
-		return;
+	if (--masq_refcnt > 0)
+		goto out_unlock;
 
-	unregister_inet6addr_notifier(&masq_inet_notifier);
+	unregister_inet6addr_notifier(&masq_inet6_notifier);
 	unregister_netdevice_notifier(&masq_dev_notifier);
+out_unlock:
+	mutex_unlock(&masq_mutex);
 }
 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6_unregister_notifier);
net/ipv6/netfilter/nft_masq_ipv6.c | +3 -1
···
 	if (ret < 0)
 		return ret;
 
-	nf_nat_masquerade_ipv6_register_notifier();
+	ret = nf_nat_masquerade_ipv6_register_notifier();
+	if (ret)
+		nft_unregister_expr(&nft_masq_ipv6_type);
 
 	return ret;
 }
net/netfilter/ipset/ip_set_core.c
···
 MODULE_DESCRIPTION("core IP set support");
 MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_IPSET);
 
-/* When the nfnl mutex is held: */
+/* When the nfnl mutex or ip_set_ref_lock is held: */
 #define ip_set_dereference(p)		\
-	rcu_dereference_protected(p, lockdep_nfnl_is_held(NFNL_SUBSYS_IPSET))
+	rcu_dereference_protected(p,	\
+		lockdep_nfnl_is_held(NFNL_SUBSYS_IPSET) || \
+		lockdep_is_held(&ip_set_ref_lock))
 #define ip_set(inst, id)		\
 	ip_set_dereference((inst)->ip_set_list)[id]
+#define ip_set_ref_netlink(inst,id)	\
+	rcu_dereference_raw((inst)->ip_set_list)[id]
 
 /* The set types are implemented in modules and registered set types
  * can be found in ip_set_type_list. Adding/deleting types is
···
 EXPORT_SYMBOL_GPL(ip_set_put_byindex);
 
 /* Get the name of a set behind a set index.
- * We assume the set is referenced, so it does exist and
- * can't be destroyed. The set cannot be renamed due to
- * the referencing either.
- *
+ * Set itself is protected by RCU, but its name isn't: to protect against
+ * renaming, grab ip_set_ref_lock as reader (see ip_set_rename()) and copy the
+ * name.
  */
-const char *
-ip_set_name_byindex(struct net *net, ip_set_id_t index)
+void
+ip_set_name_byindex(struct net *net, ip_set_id_t index, char *name)
 {
-	const struct ip_set *set = ip_set_rcu_get(net, index);
+	struct ip_set *set = ip_set_rcu_get(net, index);
 
 	BUG_ON(!set);
-	BUG_ON(set->ref == 0);
 
-	/* Referenced, so it's safe */
-	return set->name;
+	read_lock_bh(&ip_set_ref_lock);
+	strncpy(name, set->name, IPSET_MAXNAMELEN);
+	read_unlock_bh(&ip_set_ref_lock);
 }
 EXPORT_SYMBOL_GPL(ip_set_name_byindex);
 
···
 		/* Wraparound */
 		goto cleanup;
 
-	list = kcalloc(i, sizeof(struct ip_set *), GFP_KERNEL);
+	list = kvcalloc(i, sizeof(struct ip_set *), GFP_KERNEL);
 	if (!list)
 		goto cleanup;
 	/* nfnl mutex is held, both lists are valid */
···
 		/* Use new list */
 		index = inst->ip_set_max;
 		inst->ip_set_max = i;
-		kfree(tmp);
+		kvfree(tmp);
 		ret = 0;
 	} else if (ret) {
 		goto cleanup;
···
 	if (!set)
 		return -ENOENT;
 
-	read_lock_bh(&ip_set_ref_lock);
+	write_lock_bh(&ip_set_ref_lock);
 	if (set->ref != 0) {
 		ret = -IPSET_ERR_REFERENCED;
 		goto out;
···
 	strncpy(set->name, name2, IPSET_MAXNAMELEN);
 
 out:
-	read_unlock_bh(&ip_set_ref_lock);
+	write_unlock_bh(&ip_set_ref_lock);
 	return ret;
 }
 
···
 	struct ip_set_net *inst =
 		(struct ip_set_net *)cb->args[IPSET_CB_NET];
 	ip_set_id_t index = (ip_set_id_t)cb->args[IPSET_CB_INDEX];
-	struct ip_set *set = ip_set(inst, index);
+	struct ip_set *set = ip_set_ref_netlink(inst, index);
 
 	if (set->variant->uref)
 		set->variant->uref(set, cb, false);
···
 release_refcount:
 	/* If there was an error or set is done, release set */
 	if (ret || !cb->args[IPSET_CB_ARG0]) {
-		set = ip_set(inst, index);
+		set = ip_set_ref_netlink(inst, index);
 		if (set->variant->uref)
 			set->variant->uref(set, cb, false);
 		pr_debug("release set %s\n", set->name);
···
 	if (inst->ip_set_max >= IPSET_INVALID_ID)
 		inst->ip_set_max = IPSET_INVALID_ID - 1;
 
-	list = kcalloc(inst->ip_set_max, sizeof(struct ip_set *), GFP_KERNEL);
+	list = kvcalloc(inst->ip_set_max, sizeof(struct ip_set *), GFP_KERNEL);
 	if (!list)
 		return -ENOMEM;
 	inst->is_deleted = false;
···
 		}
 	}
 	nfnl_unlock(NFNL_SUBSYS_IPSET);
-	kfree(rcu_dereference_protected(inst->ip_set_list, 1));
+	kvfree(rcu_dereference_protected(inst->ip_set_list, 1));
 }
 
 static struct pernet_operations ip_set_net_ops = {
net/netfilter/ipset/ip_set_hash_netportnet.c | +4 -4
···
 
 	if (tb[IPSET_ATTR_CIDR]) {
 		e.cidr[0] = nla_get_u8(tb[IPSET_ATTR_CIDR]);
-		if (!e.cidr[0] || e.cidr[0] > HOST_MASK)
+		if (e.cidr[0] > HOST_MASK)
 			return -IPSET_ERR_INVALID_CIDR;
 	}
 
 	if (tb[IPSET_ATTR_CIDR2]) {
 		e.cidr[1] = nla_get_u8(tb[IPSET_ATTR_CIDR2]);
-		if (!e.cidr[1] || e.cidr[1] > HOST_MASK)
+		if (e.cidr[1] > HOST_MASK)
 			return -IPSET_ERR_INVALID_CIDR;
 	}
 
···
 
 	if (tb[IPSET_ATTR_CIDR]) {
 		e.cidr[0] = nla_get_u8(tb[IPSET_ATTR_CIDR]);
-		if (!e.cidr[0] || e.cidr[0] > HOST_MASK)
+		if (e.cidr[0] > HOST_MASK)
 			return -IPSET_ERR_INVALID_CIDR;
 	}
 
 	if (tb[IPSET_ATTR_CIDR2]) {
 		e.cidr[1] = nla_get_u8(tb[IPSET_ATTR_CIDR2]);
-		if (!e.cidr[1] || e.cidr[1] > HOST_MASK)
+		if (e.cidr[1] > HOST_MASK)
 			return -IPSET_ERR_INVALID_CIDR;
 	}
 
net/netfilter/nf_conntrack_core.c
···
 	return drops;
 }
 
-static noinline int early_drop(struct net *net, unsigned int _hash)
+static noinline int early_drop(struct net *net, unsigned int hash)
 {
-	unsigned int i;
+	unsigned int i, bucket;
 
 	for (i = 0; i < NF_CT_EVICTION_RANGE; i++) {
 		struct hlist_nulls_head *ct_hash;
-		unsigned int hash, hsize, drops;
+		unsigned int hsize, drops;
 
 		rcu_read_lock();
 		nf_conntrack_get_ht(&ct_hash, &hsize);
-		hash = reciprocal_scale(_hash++, hsize);
+		if (!i)
+			bucket = reciprocal_scale(hash, hsize);
+		else
+			bucket = (bucket + 1) % hsize;
 
-		drops = early_drop_list(net, &ct_hash[hash]);
+		drops = early_drop_list(net, &ct_hash[bucket]);
 		rcu_read_unlock();
 
 		if (drops) {
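The early_drop() change swaps re-hashing on every attempt (which can land on the same bucket repeatedly) for a linear walk starting at the hashed bucket. A tiny sketch of the traversal, with a plain modulo standing in for reciprocal_scale() and made-up table parameters:

#include <stdio.h>

#define TABLE_SIZE 8
#define EVICTION_RANGE 4

static unsigned int scale(unsigned int hash, unsigned int hsize)
{
    return hash % hsize;        /* stand-in for reciprocal_scale() */
}

int main(void)
{
    unsigned int i, bucket = 0, hash = 13;

    for (i = 0; i < EVICTION_RANGE; i++) {
        if (!i)
            bucket = scale(hash, TABLE_SIZE);   /* first probe */
        else
            bucket = (bucket + 1) % TABLE_SIZE; /* then step forward */
        printf("attempt %u -> bucket %u\n", i, bucket);
    }
    return 0;
}

Stepping through adjacent buckets guarantees EVICTION_RANGE distinct buckets are inspected (as long as the table is at least that large), instead of a random sample with possible repeats.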
net/netfilter/nf_conntrack_proto_dccp.c | +4 -9
···
 	},
 };
 
-static inline struct nf_dccp_net *dccp_pernet(struct net *net)
-{
-	return &net->ct.nf_ct_proto.dccp;
-}
-
 static noinline bool
 dccp_new(struct nf_conn *ct, const struct sk_buff *skb,
 	 const struct dccp_hdr *dh)
···
 	state = dccp_state_table[CT_DCCP_ROLE_CLIENT][dh->dccph_type][CT_DCCP_NONE];
 	switch (state) {
 	default:
-		dn = dccp_pernet(net);
+		dn = nf_dccp_pernet(net);
 		if (dn->dccp_loose == 0) {
 			msg = "not picking up existing connection ";
 			goto out_invalid;
···
 
 	timeouts = nf_ct_timeout_lookup(ct);
 	if (!timeouts)
-		timeouts = dccp_pernet(nf_ct_net(ct))->dccp_timeout;
+		timeouts = nf_dccp_pernet(nf_ct_net(ct))->dccp_timeout;
 	nf_ct_refresh_acct(ct, ctinfo, skb, timeouts[new_state]);
 
 	return NF_ACCEPT;
···
 static int dccp_timeout_nlattr_to_obj(struct nlattr *tb[],
 				      struct net *net, void *data)
 {
-	struct nf_dccp_net *dn = dccp_pernet(net);
+	struct nf_dccp_net *dn = nf_dccp_pernet(net);
 	unsigned int *timeouts = data;
 	int i;
 
···
 
 static int dccp_init_net(struct net *net)
 {
-	struct nf_dccp_net *dn = dccp_pernet(net);
+	struct nf_dccp_net *dn = nf_dccp_pernet(net);
 	struct nf_proto_net *pn = &dn->pn;
 
 	if (!pn->users) {
net/netfilter/nf_conntrack_proto_generic.c | +3 -8
···
 	}
 }
 
-static inline struct nf_generic_net *generic_pernet(struct net *net)
-{
-	return &net->ct.nf_ct_proto.generic;
-}
-
 static bool generic_pkt_to_tuple(const struct sk_buff *skb,
 				 unsigned int dataoff,
 				 struct net *net, struct nf_conntrack_tuple *tuple)
···
 	}
 
 	if (!timeout)
-		timeout = &generic_pernet(nf_ct_net(ct))->timeout;
+		timeout = &nf_generic_pernet(nf_ct_net(ct))->timeout;
 
 	nf_ct_refresh_acct(ct, ctinfo, skb, *timeout);
 	return NF_ACCEPT;
···
 static int generic_timeout_nlattr_to_obj(struct nlattr *tb[],
 					 struct net *net, void *data)
 {
-	struct nf_generic_net *gn = generic_pernet(net);
+	struct nf_generic_net *gn = nf_generic_pernet(net);
 	unsigned int *timeout = data;
 
 	if (!timeout)
···
 
 static int generic_init_net(struct net *net)
 {
-	struct nf_generic_net *gn = generic_pernet(net);
+	struct nf_generic_net *gn = nf_generic_pernet(net);
 	struct nf_proto_net *pn = &gn->pn;
 
 	gn->timeout = nf_ct_generic_timeout;
net/netfilter/nf_conntrack_proto_gre.c | +2 -12
···
 #include <linux/netfilter/nf_conntrack_proto_gre.h>
 #include <linux/netfilter/nf_conntrack_pptp.h>
 
-enum grep_conntrack {
-	GRE_CT_UNREPLIED,
-	GRE_CT_REPLIED,
-	GRE_CT_MAX
-};
-
 static const unsigned int gre_timeouts[GRE_CT_MAX] = {
 	[GRE_CT_UNREPLIED]	= 30*HZ,
 	[GRE_CT_REPLIED]	= 180*HZ,
 };
 
 static unsigned int proto_gre_net_id __read_mostly;
-struct netns_proto_gre {
-	struct nf_proto_net	nf;
-	rwlock_t		keymap_lock;
-	struct list_head	keymap_list;
-	unsigned int		gre_timeouts[GRE_CT_MAX];
-};
 
 static inline struct netns_proto_gre *gre_pernet(struct net *net)
 {
···
 static int __init nf_ct_proto_gre_init(void)
 {
 	int ret;
+
+	BUILD_BUG_ON(offsetof(struct netns_proto_gre, nf) != 0);
 
 	ret = register_pernet_subsys(&proto_gre_net_ops);
 	if (ret < 0)
net/netfilter/nf_conntrack_proto_icmp.c | +3 -8
···
 
 static const unsigned int nf_ct_icmp_timeout = 30*HZ;
 
-static inline struct nf_icmp_net *icmp_pernet(struct net *net)
-{
-	return &net->ct.nf_ct_proto.icmp;
-}
-
 static bool icmp_pkt_to_tuple(const struct sk_buff *skb, unsigned int dataoff,
 			      struct net *net, struct nf_conntrack_tuple *tuple)
 {
···
 	}
 
 	if (!timeout)
-		timeout = &icmp_pernet(nf_ct_net(ct))->timeout;
+		timeout = &nf_icmp_pernet(nf_ct_net(ct))->timeout;
 
 	nf_ct_refresh_acct(ct, ctinfo, skb, *timeout);
 	return NF_ACCEPT;
···
 				      struct net *net, void *data)
 {
 	unsigned int *timeout = data;
-	struct nf_icmp_net *in = icmp_pernet(net);
+	struct nf_icmp_net *in = nf_icmp_pernet(net);
 
 	if (tb[CTA_TIMEOUT_ICMP_TIMEOUT]) {
 		if (!timeout)
···
 
 static int icmp_init_net(struct net *net)
 {
-	struct nf_icmp_net *in = icmp_pernet(net);
+	struct nf_icmp_net *in = nf_icmp_pernet(net);
 	struct nf_proto_net *pn = &in->pn;
 
 	in->timeout = nf_ct_icmp_timeout;
net/netfilter/nf_conntrack_proto_icmpv6.c | +3 -8
···
 
 static const unsigned int nf_ct_icmpv6_timeout = 30*HZ;
 
-static inline struct nf_icmp_net *icmpv6_pernet(struct net *net)
-{
-	return &net->ct.nf_ct_proto.icmpv6;
-}
-
 static bool icmpv6_pkt_to_tuple(const struct sk_buff *skb,
 				unsigned int dataoff,
 				struct net *net,
···
 
 static unsigned int *icmpv6_get_timeouts(struct net *net)
 {
-	return &icmpv6_pernet(net)->timeout;
+	return &nf_icmpv6_pernet(net)->timeout;
 }
 
 /* Returns verdict for packet, or -1 for invalid. */
···
 					struct net *net, void *data)
 {
 	unsigned int *timeout = data;
-	struct nf_icmp_net *in = icmpv6_pernet(net);
+	struct nf_icmp_net *in = nf_icmpv6_pernet(net);
 
 	if (!timeout)
 		timeout = icmpv6_get_timeouts(net);
···
 
 static int icmpv6_init_net(struct net *net)
 {
-	struct nf_icmp_net *in = icmpv6_pernet(net);
+	struct nf_icmp_net *in = nf_icmpv6_pernet(net);
 	struct nf_proto_net *pn = &in->pn;
 
 	in->timeout = nf_ct_icmpv6_timeout;
net/netfilter/nf_conntrack_proto_sctp.c | +3 -8
···
 	}
 };
 
-static inline struct nf_sctp_net *sctp_pernet(struct net *net)
-{
-	return &net->ct.nf_ct_proto.sctp;
-}
-
 #ifdef CONFIG_NF_CONNTRACK_PROCFS
 /* Print out the private part of the conntrack. */
 static void sctp_print_conntrack(struct seq_file *s, struct nf_conn *ct)
···
 
 	timeouts = nf_ct_timeout_lookup(ct);
 	if (!timeouts)
-		timeouts = sctp_pernet(nf_ct_net(ct))->timeouts;
+		timeouts = nf_sctp_pernet(nf_ct_net(ct))->timeouts;
 
 	nf_ct_refresh_acct(ct, ctinfo, skb, timeouts[new_state]);
 
···
 				      struct net *net, void *data)
 {
 	unsigned int *timeouts = data;
-	struct nf_sctp_net *sn = sctp_pernet(net);
+	struct nf_sctp_net *sn = nf_sctp_pernet(net);
 	int i;
 
 	/* set default SCTP timeouts. */
···
 
 static int sctp_init_net(struct net *net)
 {
-	struct nf_sctp_net *sn = sctp_pernet(net);
+	struct nf_sctp_net *sn = nf_sctp_pernet(net);
 	struct nf_proto_net *pn = &sn->pn;
 
 	if (!pn->users) {
···5050 int err;5151 u8 ttl;52525353- if (nla_get_u8(tb[NFTA_OSF_TTL])) {5353+ if (tb[NFTA_OSF_TTL]) {5454 ttl = nla_get_u8(tb[NFTA_OSF_TTL]);5555 if (ttl > 2)5656 return -EINVAL;
net/netfilter/xt_IDLETIMER.c | +20
···
 	schedule_work(&timer->work);
 }
 
+static int idletimer_check_sysfs_name(const char *name, unsigned int size)
+{
+	int ret;
+
+	ret = xt_check_proc_name(name, size);
+	if (ret < 0)
+		return ret;
+
+	if (!strcmp(name, "power") ||
+	    !strcmp(name, "subsystem") ||
+	    !strcmp(name, "uevent"))
+		return -EINVAL;
+
+	return 0;
+}
+
 static int idletimer_tg_create(struct idletimer_tg_info *info)
 {
 	int ret;
···
 		ret = -ENOMEM;
 		goto out;
 	}
+
+	ret = idletimer_check_sysfs_name(info->label, sizeof(info->label));
+	if (ret < 0)
+		goto out_free_timer;
 
 	sysfs_attr_init(&info->timer->attr.attr);
 	info->timer->attr.attr.name = kstrdup(info->label, GFP_KERNEL);
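The new helper layers a denylist on top of the generic proc-name validation: a user-supplied timer label becomes a sysfs attribute name, so labels colliding with the attributes every sysfs directory already carries ("power", "subsystem", "uevent") must be refused. A hedged userspace approximation (the length/character checks are a rough stand-in for xt_check_proc_name(), not its exact semantics):

#include <stdio.h>
#include <string.h>

static int check_sysfs_name(const char *name)
{
    static const char * const reserved[] = {
        "power", "subsystem", "uevent"
    };
    size_t i;

    /* crude stand-in for the generic checks: non-empty, no '/',
     * and bounded length */
    if (!*name || strchr(name, '/') || strlen(name) > 27)
        return -1;

    for (i = 0; i < sizeof(reserved) / sizeof(reserved[0]); i++)
        if (!strcmp(name, reserved[i]))
            return -1;      /* would shadow a built-in attribute */
    return 0;
}

int main(void)
{
    printf("mytimer: %s\n", check_sysfs_name("mytimer") ? "rejected" : "ok");
    printf("power:   %s\n", check_sysfs_name("power")   ? "rejected" : "ok");
    return 0;
}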
net/netfilter/xt_RATEEST.c | -10
···
 	return 0;
 }
 
-static void __net_exit xt_rateest_net_exit(struct net *net)
-{
-	struct xt_rateest_net *xn = net_generic(net, xt_rateest_id);
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(xn->hash); i++)
-		WARN_ON_ONCE(!hlist_empty(&xn->hash[i]));
-}
-
 static struct pernet_operations xt_rateest_net_ops = {
 	.init = xt_rateest_net_init,
-	.exit = xt_rateest_net_exit,
 	.id   = &xt_rateest_id,
 	.size = sizeof(struct xt_rateest_net),
 };
net/rxrpc/af_rxrpc.c
···
  * getting ACKs from the server.  Returns a number representing the life state
  * which can be compared to that returned by a previous call.
  *
- * If this is a client call, ping ACKs will be sent to the server to find out
- * whether it's still responsive and whether the call is still alive on the
- * server.
+ * If the life state stalls, rxrpc_kernel_probe_life() should be called and
+ * then 2RTT waited.
  */
-u32 rxrpc_kernel_check_life(struct socket *sock, struct rxrpc_call *call)
+u32 rxrpc_kernel_check_life(const struct socket *sock,
+			    const struct rxrpc_call *call)
 {
 	return call->acks_latest;
 }
 EXPORT_SYMBOL(rxrpc_kernel_check_life);
+
+/**
+ * rxrpc_kernel_probe_life - Poke the peer to see if it's still alive
+ * @sock: The socket the call is on
+ * @call: The call to check
+ *
+ * In conjunction with rxrpc_kernel_check_life(), allow a kernel service to
+ * find out whether a call is still alive by pinging it.  This should cause the
+ * life state to be bumped in about 2*RTT.
+ *
+ * The must be called in TASK_RUNNING state on pain of might_sleep() objecting.
+ */
+void rxrpc_kernel_probe_life(struct socket *sock, struct rxrpc_call *call)
+{
+	rxrpc_propose_ACK(call, RXRPC_ACK_PING, 0, 0, true, false,
+			  rxrpc_propose_ack_ping_for_check_life);
+	rxrpc_send_ack_packet(call, true, NULL);
+}
+EXPORT_SYMBOL(rxrpc_kernel_probe_life);
 
 /**
  * rxrpc_kernel_get_epoch - Retrieve the epoch value from a call.
net/rxrpc/ar-internal.h | +1
···
 	 * not hard-ACK'd packet follows this.
 	 */
 	rxrpc_seq_t		tx_top;		/* Highest Tx slot allocated. */
+	u16			tx_backoff;	/* Delay to insert due to Tx failure */
 
 	/* TCP-style slow-start congestion control [RFC5681].  Since the SMSS
 	 * is fixed, we keep these numbers in terms of segments (ie. DATA
net/rxrpc/call_event.c | +14 -4
···
 	else
 		ack_at = expiry;
 
+	ack_at += READ_ONCE(call->tx_backoff);
 	ack_at += now;
 	if (time_before(ack_at, call->ack_at)) {
 		WRITE_ONCE(call->ack_at, ack_at);
···
 		container_of(work, struct rxrpc_call, processor);
 	rxrpc_serial_t *send_ack;
 	unsigned long now, next, t;
+	unsigned int iterations = 0;
 
 	rxrpc_see_call(call);
 
···
 	       call->debug_id, rxrpc_call_states[call->state], call->events);
 
 recheck_state:
+	/* Limit the number of times we do this before returning to the manager */
+	iterations++;
+	if (iterations > 5)
+		goto requeue;
+
 	if (test_and_clear_bit(RXRPC_CALL_EV_ABORT, &call->events)) {
 		rxrpc_send_abort_packet(call);
 		goto recheck_state;
···
 	rxrpc_reduce_call_timer(call, next, now, rxrpc_timer_restart);
 
 	/* other events may have been raised since we started checking */
-	if (call->events && call->state < RXRPC_CALL_COMPLETE) {
-		__rxrpc_queue_call(call);
-		goto out;
-	}
+	if (call->events && call->state < RXRPC_CALL_COMPLETE)
+		goto requeue;
 
 out_put:
 	rxrpc_put_call(call, rxrpc_call_put);
 out:
 	_leave("");
+	return;
+
+requeue:
+	__rxrpc_queue_call(call);
+	goto out;
 }
net/rxrpc/output.c | +31 -4
···
 static const char rxrpc_keepalive_string[] = "";
 
 /*
+ * Increase Tx backoff on transmission failure and clear it on success.
+ */
+static void rxrpc_tx_backoff(struct rxrpc_call *call, int ret)
+{
+	if (ret < 0) {
+		u16 tx_backoff = READ_ONCE(call->tx_backoff);
+
+		if (tx_backoff < HZ)
+			WRITE_ONCE(call->tx_backoff, tx_backoff + 1);
+	} else {
+		WRITE_ONCE(call->tx_backoff, 0);
+	}
+}
+
+/*
  * Arrange for a keepalive ping a certain time after we last transmitted.  This
  * lets the far side know we're still interested in this call and helps keep
  * the route through any intervening firewall open.
···
 	else
 		trace_rxrpc_tx_packet(call->debug_id, &pkt->whdr,
 				      rxrpc_tx_point_call_ack);
+	rxrpc_tx_backoff(call, ret);
 
 	if (call->state < RXRPC_CALL_COMPLETE) {
 		if (ret < 0) {
···
 			rxrpc_propose_ACK(call, pkt->ack.reason,
 					  ntohs(pkt->ack.maxSkew),
 					  ntohl(pkt->ack.serial),
-					  true, true,
+					  false, true,
 					  rxrpc_propose_ack_retry_tx);
 		} else {
 			spin_lock_bh(&call->lock);
···
 	else
 		trace_rxrpc_tx_packet(call->debug_id, &pkt.whdr,
 				      rxrpc_tx_point_call_abort);
-
+	rxrpc_tx_backoff(call, ret);
 
 	rxrpc_put_connection(conn);
 	return ret;
···
 	else
 		trace_rxrpc_tx_packet(call->debug_id, &whdr,
 				      rxrpc_tx_point_call_data_nofrag);
+	rxrpc_tx_backoff(call, ret);
 	if (ret == -EMSGSIZE)
 		goto send_fragmentable;
 
···
 			rxrpc_reduce_call_timer(call, expect_rx_by, nowj,
 						rxrpc_timer_set_for_normal);
 		}
-	}
 
-	rxrpc_set_keepalive(call);
+		rxrpc_set_keepalive(call);
+	} else {
+		/* Cancel the call if the initial transmission fails,
+		 * particularly if that's due to network routing issues that
+		 * aren't going away anytime soon.  The layer above can arrange
+		 * the retransmission.
+		 */
+		if (!test_and_set_bit(RXRPC_CALL_BEGAN_RX_TIMER, &call->flags))
+			rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR,
+						  RX_USER_ABORT, ret);
+	}
 
 	_leave(" = %d [%u]", ret, call->peer->maxdata);
 	return ret;
···
 	else
 		trace_rxrpc_tx_packet(call->debug_id, &whdr,
 				      rxrpc_tx_point_call_data_frag);
+	rxrpc_tx_backoff(call, ret);
 
 	up_write(&conn->params.local->defrag_sem);
 	goto done;
net/sched/act_mirred.c | +2 -1
···
 	if (is_redirect) {
 		skb2->tc_redirected = 1;
 		skb2->tc_from_ingress = skb2->tc_at_ingress;
-
+		if (skb2->tc_from_ingress)
+			skb2->tstamp = 0;
 		/* let's the caller reinsert the packet, if possible */
 		if (use_reinsert) {
 			res->ingress = want_ingress;
net/sched/sch_fq.c
···
 		goto begin;
 	}
 	prefetch(&skb->end);
-	f->credit -= qdisc_pkt_len(skb);
+	plen = qdisc_pkt_len(skb);
+	f->credit -= plen;
 
-	if (ktime_to_ns(skb->tstamp) || !q->rate_enable)
+	if (!q->rate_enable)
 		goto out;
 
 	rate = q->flow_max_rate;
-	if (skb->sk)
-		rate = min(skb->sk->sk_pacing_rate, rate);
 
-	if (rate <= q->low_rate_threshold) {
-		f->credit = 0;
-		plen = qdisc_pkt_len(skb);
-	} else {
-		plen = max(qdisc_pkt_len(skb), q->quantum);
-		if (f->credit > 0)
-			goto out;
+	/* If EDT time was provided for this skb, we need to
+	 * update f->time_next_packet only if this qdisc enforces
+	 * a flow max rate.
+	 */
+	if (!skb->tstamp) {
+		if (skb->sk)
+			rate = min(skb->sk->sk_pacing_rate, rate);
+
+		if (rate <= q->low_rate_threshold) {
+			f->credit = 0;
+		} else {
+			plen = max(plen, q->quantum);
+			if (f->credit > 0)
+				goto out;
+		}
 	}
 	if (rate != ~0UL) {
 		u64 len = (u64)plen * NSEC_PER_SEC;
net/sched/sch_netem.c | -9
···
 	 */
 	skb->dev = qdisc_dev(sch);
 
-#ifdef CONFIG_NET_CLS_ACT
-	/*
-	 * If it's at ingress let's pretend the delay is
-	 * from the network (tstamp will be updated).
-	 */
-	if (skb->tc_redirected && skb->tc_from_ingress)
-		skb->tstamp = 0;
-#endif
-
 	if (q->slot.slot_next) {
 		q->slot.packets_left--;
 		q->slot.bytes_left -= qdisc_pkt_len(skb);
net/sctp/output.c | +5 -20
···
 		sctp_transport_route(tp, NULL, sp);
 		if (asoc->param_flags & SPP_PMTUD_ENABLE)
 			sctp_assoc_sync_pmtu(asoc);
+	} else if (!sctp_transport_pmtu_check(tp)) {
+		if (asoc->param_flags & SPP_PMTUD_ENABLE)
+			sctp_assoc_sync_pmtu(asoc);
 	}
 
 	if (asoc->pmtu_pending) {
···
 	return retval;
 }
 
-static void sctp_packet_release_owner(struct sk_buff *skb)
-{
-	sk_free(skb->sk);
-}
-
-static void sctp_packet_set_owner_w(struct sk_buff *skb, struct sock *sk)
-{
-	skb_orphan(skb);
-	skb->sk = sk;
-	skb->destructor = sctp_packet_release_owner;
-
-	/*
-	 * The data chunks have already been accounted for in sctp_sendmsg(),
-	 * therefore only reserve a single byte to keep socket around until
-	 * the packet has been transmitted.
-	 */
-	refcount_inc(&sk->sk_wmem_alloc);
-}
-
 static void sctp_packet_gso_append(struct sk_buff *head, struct sk_buff *skb)
 {
 	if (SCTP_OUTPUT_CB(head)->last == head)
···
 	head->truesize += skb->truesize;
 	head->data_len += skb->len;
 	head->len += skb->len;
+	refcount_add(skb->truesize, &head->sk->sk_wmem_alloc);
 
 	__skb_header_release(skb);
 }
···
 	if (!head)
 		goto out;
 	skb_reserve(head, packet->overhead + MAX_HEADER);
-	sctp_packet_set_owner_w(head, sk);
+	skb_set_owner_w(head, sk);
 
 	/* set sctp header */
 	sh = skb_push(head, sizeof(struct sctphdr));
net/sctp/outqueue.c | +1 -1
···
 	INIT_LIST_HEAD(&q->retransmit);
 	INIT_LIST_HEAD(&q->sacked);
 	INIT_LIST_HEAD(&q->abandoned);
-	sctp_sched_set_sched(asoc, SCTP_SS_FCFS);
+	sctp_sched_set_sched(asoc, SCTP_SS_DEFAULT);
 }
 
 /* Free the outqueue structure and any related pending chunks.
net/sunrpc/auth_gss/auth_gss.c
···
 	return &gss_auth->rpc_auth;
 }
 
+static struct gss_cred *
+gss_dup_cred(struct gss_auth *gss_auth, struct gss_cred *gss_cred)
+{
+	struct gss_cred *new;
+
+	/* Make a copy of the cred so that we can reference count it */
+	new = kzalloc(sizeof(*gss_cred), GFP_NOIO);
+	if (new) {
+		struct auth_cred acred = {
+			.uid = gss_cred->gc_base.cr_uid,
+		};
+		struct gss_cl_ctx *ctx =
+			rcu_dereference_protected(gss_cred->gc_ctx, 1);
+
+		rpcauth_init_cred(&new->gc_base, &acred,
+				&gss_auth->rpc_auth,
+				&gss_nullops);
+		new->gc_base.cr_flags = 1UL << RPCAUTH_CRED_UPTODATE;
+		new->gc_service = gss_cred->gc_service;
+		new->gc_principal = gss_cred->gc_principal;
+		kref_get(&gss_auth->kref);
+		rcu_assign_pointer(new->gc_ctx, ctx);
+		gss_get_ctx(ctx);
+	}
+	return new;
+}
+
 /*
- * gss_destroying_context will cause the RPCSEC_GSS to send a NULL RPC call
+ * gss_send_destroy_context will cause the RPCSEC_GSS to send a NULL RPC call
  * to the server with the GSS control procedure field set to
  * RPC_GSS_PROC_DESTROY. This should normally cause the server to release
  * all RPCSEC_GSS state associated with that context.
  */
-static int
-gss_destroying_context(struct rpc_cred *cred)
+static void
+gss_send_destroy_context(struct rpc_cred *cred)
 {
 	struct gss_cred *gss_cred = container_of(cred, struct gss_cred, gc_base);
 	struct gss_auth *gss_auth = container_of(cred->cr_auth, struct gss_auth, rpc_auth);
 	struct gss_cl_ctx *ctx = rcu_dereference_protected(gss_cred->gc_ctx, 1);
+	struct gss_cred *new;
 	struct rpc_task *task;
 
-	if (test_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags) == 0)
-		return 0;
+	new = gss_dup_cred(gss_auth, gss_cred);
+	if (new) {
+		ctx->gc_proc = RPC_GSS_PROC_DESTROY;
 
-	ctx->gc_proc = RPC_GSS_PROC_DESTROY;
-	cred->cr_ops = &gss_nullops;
+		task = rpc_call_null(gss_auth->client, &new->gc_base,
+				RPC_TASK_ASYNC|RPC_TASK_SOFT);
+		if (!IS_ERR(task))
+			rpc_put_task(task);
 
-	/* Take a reference to ensure the cred will be destroyed either
-	 * by the RPC call or by the put_rpccred() below */
-	get_rpccred(cred);
-
-	task = rpc_call_null(gss_auth->client, cred, RPC_TASK_ASYNC|RPC_TASK_SOFT);
-	if (!IS_ERR(task))
-		rpc_put_task(task);
-
-	put_rpccred(cred);
-	return 1;
+		put_rpccred(&new->gc_base);
+	}
 }
 
 /* gss_destroy_cred (and gss_free_ctx) are used to clean up after failure
···
 gss_destroy_cred(struct rpc_cred *cred)
 {
 
-	if (gss_destroying_context(cred))
-		return;
+	if (test_and_clear_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags) != 0)
+		gss_send_destroy_context(cred);
 	gss_destroy_nullcred(cred);
 }
 
···166166167167 /* Apply trial address if we just left trial period */168168 if (!trial && !self) {169169- tipc_net_finalize(net, tn->trial_addr);169169+ tipc_sched_net_finalize(net, tn->trial_addr);170170+ msg_set_prevnode(buf_msg(d->skb), tn->trial_addr);170171 msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);171172 }172173···301300 goto exit;302301 }303302304304- /* Trial period over ? */305305- if (!time_before(jiffies, tn->addr_trial_end)) {306306- /* Did we just leave it ? */307307- if (!tipc_own_addr(net))308308- tipc_net_finalize(net, tn->trial_addr);309309-310310- msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);311311- msg_set_prevnode(buf_msg(d->skb), tipc_own_addr(net));303303+ /* Did we just leave trial period ? */304304+ if (!time_before(jiffies, tn->addr_trial_end) && !tipc_own_addr(net)) {305305+ mod_timer(&d->timer, jiffies + TIPC_DISC_INIT);306306+ spin_unlock_bh(&d->lock);307307+ tipc_sched_net_finalize(net, tn->trial_addr);308308+ return;312309 }313310314311 /* Adjust timeout interval according to discovery phase */···318319 d->timer_intv = TIPC_DISC_SLOW;319320 else if (!d->num_nodes && d->timer_intv > TIPC_DISC_FAST)320321 d->timer_intv = TIPC_DISC_FAST;322322+ msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);323323+ msg_set_prevnode(buf_msg(d->skb), tn->trial_addr);321324 }322325323326 mod_timer(&d->timer, jiffies + d->timer_intv);
+7-4
net/tipc/link.c
···15941594 if (in_range(peers_prio, l->priority + 1, TIPC_MAX_LINK_PRI))15951595 l->priority = peers_prio;1596159615971597- /* ACTIVATE_MSG serves as PEER_RESET if link is already down */15981598- if (msg_peer_stopping(hdr))15971597+ /* If peer is going down we want full re-establish cycle */15981598+ if (msg_peer_stopping(hdr)) {15991599 rc = tipc_link_fsm_evt(l, LINK_FAILURE_EVT);16001600- else if ((mtyp == RESET_MSG) || !link_is_up(l))16001600+ break;16011601+ }16021602+ /* ACTIVATE_MSG serves as PEER_RESET if link is already down */16031603+ if (mtyp == RESET_MSG || !link_is_up(l))16011604 rc = tipc_link_fsm_evt(l, LINK_PEER_RESET_EVT);1602160516031606 /* ACTIVATE_MSG takes up link if it was already locally reset */16041604- if ((mtyp == ACTIVATE_MSG) && (l->state == LINK_ESTABLISHING))16071607+ if (mtyp == ACTIVATE_MSG && l->state == LINK_ESTABLISHING)16051608 rc = TIPC_LINK_UP_EVT;1606160916071610 l->peer_session = msg_session(hdr);
+37-8
net/tipc/net.c
···104104 * - A local spin_lock protecting the queue of subscriber events.105105*/106106107107+struct tipc_net_work {108108+ struct work_struct work;109109+ struct net *net;110110+ u32 addr;111111+};112112+113113+static void tipc_net_finalize(struct net *net, u32 addr);114114+107115int tipc_net_init(struct net *net, u8 *node_id, u32 addr)108116{109117 if (tipc_own_id(net)) {···127119 return 0;128120}129121130130-void tipc_net_finalize(struct net *net, u32 addr)122122+static void tipc_net_finalize(struct net *net, u32 addr)131123{132124 struct tipc_net *tn = tipc_net(net);133125134134- if (!cmpxchg(&tn->node_addr, 0, addr)) {135135- tipc_set_node_addr(net, addr);136136- tipc_named_reinit(net);137137- tipc_sk_reinit(net);138138- tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr,139139- TIPC_CLUSTER_SCOPE, 0, addr);140140- }126126+ if (cmpxchg(&tn->node_addr, 0, addr))127127+ return;128128+ tipc_set_node_addr(net, addr);129129+ tipc_named_reinit(net);130130+ tipc_sk_reinit(net);131131+ tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr,132132+ TIPC_CLUSTER_SCOPE, 0, addr);133133+}134134+135135+static void tipc_net_finalize_work(struct work_struct *work)136136+{137137+ struct tipc_net_work *fwork;138138+139139+ fwork = container_of(work, struct tipc_net_work, work);140140+ tipc_net_finalize(fwork->net, fwork->addr);141141+ kfree(fwork);142142+}143143+144144+void tipc_sched_net_finalize(struct net *net, u32 addr)145145+{146146+ struct tipc_net_work *fwork = kzalloc(sizeof(*fwork), GFP_ATOMIC);147147+148148+ if (!fwork)149149+ return;150150+ INIT_WORK(&fwork->work, tipc_net_finalize_work);151151+ fwork->net = net;152152+ fwork->addr = addr;153153+ schedule_work(&fwork->work);141154}142155143156void tipc_net_stop(struct net *net)
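tipc_net_finalize() publishes names and takes sleeping locks, so it can no longer run from the discoverer's timer (softirq) context; tipc_sched_net_finalize() instead captures the address in a one-shot work item and lets a kworker finish the job in process context. The pattern in isolation (names here are illustrative, not TIPC's):

    #include <linux/workqueue.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    struct deferred_req {
            struct work_struct work;
            u32 addr;               /* payload captured at schedule time */
    };

    static void deferred_fn(struct work_struct *work)
    {
            struct deferred_req *req = container_of(work, struct deferred_req, work);

            /* Process context: sleeping locks are fine here. */
            kfree(req);
    }

    /* Safe to call from atomic context, e.g. a timer callback. */
    static void schedule_deferred(u32 addr)
    {
            struct deferred_req *req = kzalloc(sizeof(*req), GFP_ATOMIC);

            if (!req)
                    return;         /* best effort, as in the patch */
            INIT_WORK(&req->work, deferred_fn);
            req->addr = addr;
            schedule_work(&req->work);
    }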
+1-1
net/tipc/net.h
···4242extern const struct nla_policy tipc_nl_net_policy[];43434444int tipc_net_init(struct net *net, u8 *node_id, u32 addr);4545-void tipc_net_finalize(struct net *net, u32 addr);4545+void tipc_sched_net_finalize(struct net *net, u32 addr);4646void tipc_net_stop(struct net *net);4747int tipc_nl_net_dump(struct sk_buff *skb, struct netlink_callback *cb);4848int tipc_nl_net_set(struct sk_buff *skb, struct genl_info *info);
+5-2
net/tipc/node.c
···584584/* tipc_node_cleanup - delete nodes that does not585585 * have active links for NODE_CLEANUP_AFTER time586586 */587587-static int tipc_node_cleanup(struct tipc_node *peer)587587+static bool tipc_node_cleanup(struct tipc_node *peer)588588{589589 struct tipc_net *tn = tipc_net(peer->net);590590 bool deleted = false;591591592592- spin_lock_bh(&tn->node_list_lock);592592+ /* If lock held by tipc_node_stop() the node will be deleted anyway */593593+ if (!spin_trylock_bh(&tn->node_list_lock))594594+ return false;595595+593596 tipc_node_write_lock(peer);594597595598 if (!node_is_up(peer) && time_after(jiffies, peer->delete_at)) {
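The trylock matters because tipc_node_stop() holds node_list_lock while it tears nodes down and synchronizes with their timers; if the timer callback blocked on that lock, the two paths could wait on each other forever. Reduced to the bare shape (hypothetical names):

    #include <linux/spinlock.h>

    struct peer_list {
            spinlock_t lock;        /* also held by the teardown path */
    };

    /* Timer callback: must not spin on a lock the teardown path holds
     * while it waits for this very timer, so back off instead; the
     * teardown path deletes the node itself. */
    static bool try_cleanup(struct peer_list *pl)
    {
            if (!spin_trylock_bh(&pl->lock))
                    return false;
            /* ... drop idle peers ... */
            spin_unlock_bh(&pl->lock);
            return true;
    }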
+11-4
net/tipc/socket.c
···15551555/**15561556 * tipc_sk_anc_data_recv - optionally capture ancillary data for received message15571557 * @m: descriptor for message info15581558- * @msg: received message header15581558+ * @skb: received message buffer15591559 * @tsk: TIPC port associated with message15601560 *15611561 * Note: Ancillary data is not captured if not requested by receiver.15621562 *15631563 * Returns 0 if successful, otherwise errno15641564 */15651565-static int tipc_sk_anc_data_recv(struct msghdr *m, struct tipc_msg *msg,15651565+static int tipc_sk_anc_data_recv(struct msghdr *m, struct sk_buff *skb,15661566 struct tipc_sock *tsk)15671567{15681568+ struct tipc_msg *msg;15681569 u32 anc_data[3];15691570 u32 err;15701571 u32 dest_type;···1574157315751574 if (likely(m->msg_controllen == 0))15761575 return 0;15761576+ msg = buf_msg(skb);1577157715781578 /* Optionally capture errored message object(s) */15791579 err = msg ? msg_errcode(msg) : 0;···15851583 if (res)15861584 return res;15871585 if (anc_data[1]) {15861586+ if (skb_linearize(skb))15871587+ return -ENOMEM;15881588+ msg = buf_msg(skb);15881589 res = put_cmsg(m, SOL_TIPC, TIPC_RETDATA, anc_data[1],15891590 msg_data(msg));15901591 if (res)···1749174417501745 /* Collect msg meta data, including error code and rejected data */17511746 tipc_sk_set_orig_addr(m, skb);17521752- rc = tipc_sk_anc_data_recv(m, hdr, tsk);17471747+ rc = tipc_sk_anc_data_recv(m, skb, tsk);17531748 if (unlikely(rc))17541749 goto exit;17501750+ hdr = buf_msg(skb);1755175117561752 /* Capture data if non-error msg, otherwise just set return value */17571753 if (likely(!err)) {···18621856 /* Collect msg meta data, incl. error code and rejected data */18631857 if (!copied) {18641858 tipc_sk_set_orig_addr(m, skb);18651865- rc = tipc_sk_anc_data_recv(m, hdr, tsk);18591859+ rc = tipc_sk_anc_data_recv(m, skb, tsk);18661860 if (rc)18671861 break;18621862+ hdr = buf_msg(skb);18681863 }1869186418701865 /* Copy data if msg ok, otherwise return error/partial data */
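Passing the skb instead of a pre-computed header pointer is the real fix here: skb_linearize() may reallocate skb->data, which silently invalidates any pointer previously derived from it, hence the hdr = buf_msg(skb) re-fetches after each call. The hazard, stripped down (msg_hdr() stands in for buf_msg()):

    #include <linux/skbuff.h>
    #include <linux/errno.h>

    static inline void *msg_hdr(struct sk_buff *skb)
    {
            return skb->data;       /* pointer into the data area */
    }

    static int consume(struct sk_buff *skb)
    {
            void *hdr = msg_hdr(skb);

            if (skb_linearize(skb))
                    return -ENOMEM;
            /* skb->data may have moved: the old 'hdr' is dangling. */
            hdr = msg_hdr(skb);     /* always re-derive it */
            (void)hdr;
            return 0;
    }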
scripts/decode_stacktrace.sh
···71717272# Try to figure out the source directory prefix so we can remove it from the7373# addr2line output. HACK ALERT: This assumes that start_kernel() is in7474-# kernel/init.c! This only works for vmlinux. Otherwise it falls back to7474+# init/main.c! This only works for vmlinux. Otherwise it falls back to7575# printing the absolute path.7676find_dir_prefix() {7777 local objfile=$1
+4-3
scripts/kconfig/merge_config.sh
···102102fi103103104104MERGE_LIST=$*105105-SED_CONFIG_EXP="s/^\(# \)\{0,1\}\(${CONFIG_PREFIX}[a-zA-Z0-9_]*\)[= ].*/\2/p"105105+SED_CONFIG_EXP1="s/^\(${CONFIG_PREFIX}[a-zA-Z0-9_]*\)=.*/\1/p"106106+SED_CONFIG_EXP2="s/^# \(${CONFIG_PREFIX}[a-zA-Z0-9_]*\) is not set$/\1/p"106107107108TMP_FILE=$(mktemp ./.tmp.config.XXXXXXXXXX)108109···117116 echo "The merge file '$MERGE_FILE' does not exist. Exit." >&2118117 exit 1119118 fi120120- CFG_LIST=$(sed -n "$SED_CONFIG_EXP" $MERGE_FILE)119119+ CFG_LIST=$(sed -n -e "$SED_CONFIG_EXP1" -e "$SED_CONFIG_EXP2" $MERGE_FILE)121120122121 for CFG in $CFG_LIST ; do123122 grep -q -w $CFG $TMP_FILE || continue···160159161160162161# Check all specified config values took (might have missed-dependency issues)163163-for CFG in $(sed -n "$SED_CONFIG_EXP" $TMP_FILE); do162162+for CFG in $(sed -n -e "$SED_CONFIG_EXP1" -e "$SED_CONFIG_EXP2" $TMP_FILE); do164163165164 REQUESTED_VAL=$(grep -w -e "$CFG" $TMP_FILE)166165 ACTUAL_VAL=$(grep -w -e "$CFG" "$KCONFIG_CONFIG")
+3-3
scripts/package/builddeb
···8181 cp System.map "$tmpdir/boot/System.map-$version"8282 cp $KCONFIG_CONFIG "$tmpdir/boot/config-$version"8383fi8484-cp "$($MAKE -s image_name)" "$tmpdir/$installed_image_path"8484+cp "$($MAKE -s -f $srctree/Makefile image_name)" "$tmpdir/$installed_image_path"85858686-if grep -q "^CONFIG_OF=y" $KCONFIG_CONFIG ; then8686+if grep -q "^CONFIG_OF_EARLY_FLATTREE=y" $KCONFIG_CONFIG ; then8787 # Only some architectures with OF support have this target8888- if grep -q dtbs_install "${srctree}/arch/$SRCARCH/Makefile"; then8888+ if [ -d "${srctree}/arch/$SRCARCH/boot/dts" ]; then8989 $MAKE KBUILD_SRC= INSTALL_DTBS_PATH="$tmpdir/usr/lib/$packagename" dtbs_install9090 fi9191fi
scripts/package/mkspec
···1212# how we were called determines which rpms we build and how we build them1313if [ "$1" = prebuilt ]; then1414 S=DEL1515+ MAKE="$MAKE -f $srctree/Makefile"1516else1617 S=1718fi···7978$S %setup -q8079$S8180$S %build8282-$S make %{?_smp_mflags} KBUILD_BUILD_VERSION=%{release}8181+$S $MAKE %{?_smp_mflags} KBUILD_BUILD_VERSION=%{release}8382$S8483 %install8584 mkdir -p %{buildroot}/boot8685 %ifarch ia648786 mkdir -p %{buildroot}/boot/efi8888- cp \$(make image_name) %{buildroot}/boot/efi/vmlinuz-$KERNELRELEASE8787+ cp \$($MAKE image_name) %{buildroot}/boot/efi/vmlinuz-$KERNELRELEASE8988 ln -s efi/vmlinuz-$KERNELRELEASE %{buildroot}/boot/9089 %else9191- cp \$(make image_name) %{buildroot}/boot/vmlinuz-$KERNELRELEASE9090+ cp \$($MAKE image_name) %{buildroot}/boot/vmlinuz-$KERNELRELEASE9291 %endif9393-$M make %{?_smp_mflags} INSTALL_MOD_PATH=%{buildroot} KBUILD_SRC= modules_install9494- make %{?_smp_mflags} INSTALL_HDR_PATH=%{buildroot}/usr KBUILD_SRC= headers_install9292+$M $MAKE %{?_smp_mflags} INSTALL_MOD_PATH=%{buildroot} modules_install9393+ $MAKE %{?_smp_mflags} INSTALL_HDR_PATH=%{buildroot}/usr headers_install9594 cp System.map %{buildroot}/boot/System.map-$KERNELRELEASE9695 cp .config %{buildroot}/boot/config-$KERNELRELEASE9796 bzip2 -9 --keep vmlinux
+1-1
scripts/setlocalversion
···7474 fi75757676 # Check for uncommitted changes7777- if git status -uno --porcelain | grep -qv '^.. scripts/package'; then7777+ if git diff-index --name-only HEAD | grep -qv "^scripts/package"; then7878 printf '%s' -dirty7979 fi8080
-1
scripts/spdxcheck.py
···168168 self.curline = 0169169 try:170170 for line in fd:171171- line = line.decode(locale.getpreferredencoding(False), errors='ignore')172171 self.curline += 1173172 if self.curline > maxlines:174173 break
+2-2
scripts/unifdef.c
···395395 * When we have processed a group that starts off with a known-false396396 * #if/#elif sequence (which has therefore been deleted) followed by a397397 * #elif that we don't understand and therefore must keep, we edit the398398- * latter into a #if to keep the nesting correct. We use strncpy() to398398+ * latter into a #if to keep the nesting correct. We use memcpy() to399399 * overwrite the 4 byte token "elif" with "if " without a '\0' byte.400400 *401401 * When we find a true #elif in a group, the following block will···450450static void Itrue (void) { Ftrue(); ignoreon(); }451451static void Ifalse(void) { Ffalse(); ignoreon(); }452452/* modify this line */453453-static void Mpass (void) { strncpy(keyword, "if ", 4); Pelif(); }453453+static void Mpass (void) { memcpy(keyword, "if ", 4); Pelif(); }454454static void Mtrue (void) { keywordedit("else"); state(IS_TRUE_MIDDLE); }455455static void Melif (void) { keywordedit("endif"); state(IS_FALSE_TRAILER); }456456static void Melse (void) { keywordedit("endif"); state(IS_FALSE_ELSE); }
security/selinux/hooks.c
···53185318 addr_buf = address;5319531953205320 while (walk_size < addrlen) {53215321+ if (walk_size + sizeof(sa_family_t) > addrlen)53225322+ return -EINVAL;53235323+53215324 addr = addr_buf;53225325 switch (addr->sa_family) {53235326 case AF_UNSPEC:
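The added check closes a read past the end of the caller-supplied buffer: the loop dereferenced addr->sa_family before verifying that a family field even fits in the bytes that remain. A user-space rendition of the corrected walk (illustrative, not the kernel code):

    #include <sys/socket.h>
    #include <stddef.h>

    static int walk_addrs(const char *addr_buf, size_t addrlen)
    {
            size_t walk = 0;

            while (walk < addrlen) {
                    const struct sockaddr *sa;

                    /* Reject a trailing fragment too short to hold
                     * even the sa_family field. */
                    if (walk + sizeof(sa_family_t) > addrlen)
                            return -1;
                    sa = (const struct sockaddr *)(addr_buf + walk);
                    /* ... switch on sa->sa_family and advance walk by
                     * the family-specific sockaddr size ... */
                    walk += sizeof(*sa);    /* placeholder stride */
            }
            return 0;
    }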
+12-1
security/selinux/nlmsgtab.c
···8080 { RTM_NEWSTATS, NETLINK_ROUTE_SOCKET__NLMSG_READ },8181 { RTM_GETSTATS, NETLINK_ROUTE_SOCKET__NLMSG_READ },8282 { RTM_NEWCACHEREPORT, NETLINK_ROUTE_SOCKET__NLMSG_READ },8383+ { RTM_NEWCHAIN, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },8484+ { RTM_DELCHAIN, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },8585+ { RTM_GETCHAIN, NETLINK_ROUTE_SOCKET__NLMSG_READ },8386};84878588static const struct nlmsg_perm nlmsg_tcpdiag_perms[] =···161158162159 switch (sclass) {163160 case SECCLASS_NETLINK_ROUTE_SOCKET:164164- /* RTM_MAX always point to RTM_SETxxxx, ie RTM_NEWxxx + 3 */161161+ /* RTM_MAX always points to RTM_SETxxxx, ie RTM_NEWxxx + 3.162162+ * If the BUILD_BUG_ON() below fails you must update the163163+ * structures at the top of this file with the new mappings164164+ * before updating the BUILD_BUG_ON() macro!165165+ */165166 BUILD_BUG_ON(RTM_MAX != (RTM_NEWCHAIN + 3));166167 err = nlmsg_perm(nlmsg_type, perm, nlmsg_route_perms,167168 sizeof(nlmsg_route_perms));···177170 break;178171179172 case SECCLASS_NETLINK_XFRM_SOCKET:173173+ /* If the BUILD_BUG_ON() below fails you must update the174174+ * structures at the top of this file with the new mappings175175+ * before updating the BUILD_BUG_ON() macro!176176+ */180177 BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_MAPPING);181178 err = nlmsg_perm(nlmsg_type, perm, nlmsg_xfrm_perms,182179 sizeof(nlmsg_xfrm_perms));
+7-3
security/selinux/ss/mls.c
···245245 char *rangep[2];246246247247 if (!pol->mls_enabled) {248248- if ((def_sid != SECSID_NULL && oldc) || (*scontext) == '\0')249249- return 0;250250- return -EINVAL;248248+ /*249249+ * With no MLS, only return -EINVAL if there is a MLS field250250+ * and it did not come from an xattr.251251+ */252252+ if (oldc && def_sid == SECSID_NULL)253253+ return -EINVAL;254254+ return 0;251255 }252256253257 /*
+45-35
sound/core/control.c
···348348 return 0;349349}350350351351+/* add a new kcontrol object; call with card->controls_rwsem locked */352352+static int __snd_ctl_add(struct snd_card *card, struct snd_kcontrol *kcontrol)353353+{354354+ struct snd_ctl_elem_id id;355355+ unsigned int idx;356356+ unsigned int count;357357+358358+ id = kcontrol->id;359359+ if (id.index > UINT_MAX - kcontrol->count)360360+ return -EINVAL;361361+362362+ if (snd_ctl_find_id(card, &id)) {363363+ dev_err(card->dev,364364+ "control %i:%i:%i:%s:%i is already present\n",365365+ id.iface, id.device, id.subdevice, id.name, id.index);366366+ return -EBUSY;367367+ }368368+369369+ if (snd_ctl_find_hole(card, kcontrol->count) < 0)370370+ return -ENOMEM;371371+372372+ list_add_tail(&kcontrol->list, &card->controls);373373+ card->controls_count += kcontrol->count;374374+ kcontrol->id.numid = card->last_numid + 1;375375+ card->last_numid += kcontrol->count;376376+377377+ id = kcontrol->id;378378+ count = kcontrol->count;379379+ for (idx = 0; idx < count; idx++, id.index++, id.numid++)380380+ snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_ADD, &id);381381+382382+ return 0;383383+}384384+351385/**352386 * snd_ctl_add - add the control instance to the card353387 * @card: the card instance···398364 */399365int snd_ctl_add(struct snd_card *card, struct snd_kcontrol *kcontrol)400366{401401- struct snd_ctl_elem_id id;402402- unsigned int idx;403403- unsigned int count;404367 int err = -EINVAL;405368406369 if (! kcontrol)407370 return err;408371 if (snd_BUG_ON(!card || !kcontrol->info))409372 goto error;410410- id = kcontrol->id;411411- if (id.index > UINT_MAX - kcontrol->count)412412- goto error;413373414374 down_write(&card->controls_rwsem);415415- if (snd_ctl_find_id(card, &id)) {416416- up_write(&card->controls_rwsem);417417- dev_err(card->dev, "control %i:%i:%i:%s:%i is already present\n",418418- id.iface,419419- id.device,420420- id.subdevice,421421- id.name,422422- id.index);423423- err = -EBUSY;424424- goto error;425425- }426426- if (snd_ctl_find_hole(card, kcontrol->count) < 0) {427427- up_write(&card->controls_rwsem);428428- err = -ENOMEM;429429- goto error;430430- }431431- list_add_tail(&kcontrol->list, &card->controls);432432- card->controls_count += kcontrol->count;433433- kcontrol->id.numid = card->last_numid + 1;434434- card->last_numid += kcontrol->count;435435- id = kcontrol->id;436436- count = kcontrol->count;375375+ err = __snd_ctl_add(card, kcontrol);437376 up_write(&card->controls_rwsem);438438- for (idx = 0; idx < count; idx++, id.index++, id.numid++)439439- snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_ADD, &id);377377+ if (err < 0)378378+ goto error;440379 return 0;441380442381 error:···13681361 kctl->tlv.c = snd_ctl_elem_user_tlv;1369136213701363 /* This function manage to free the instance on failure. */13711371- err = snd_ctl_add(card, kctl);13721372- if (err < 0)13731373- return err;13641364+ down_write(&card->controls_rwsem);13651365+ err = __snd_ctl_add(card, kctl);13661366+ if (err < 0) {13671367+ snd_ctl_free_one(kctl);13681368+ goto unlock;13691369+ }13741370 offset = snd_ctl_get_ioff(kctl, &info->id);13751371 snd_ctl_build_ioff(&info->id, kctl, offset);13761372 /*···13841374 * which locks the element.13851375 */1386137613871387- down_write(&card->controls_rwsem);13881377 card->user_ctl_count++;13891389- up_write(&card->controls_rwsem);1390137813791379+ unlock:13801380+ up_write(&card->controls_rwsem);13911381 return 0;13921382}13931383
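The refactor is the usual kernel split between a __-prefixed core that by convention requires the lock to be held and a public wrapper that takes it; the user-control path can then run the core inside its own write-side critical section and free the kcontrol itself on failure instead of double-adding. The shape of the convention, with generic names:

    #include <linux/rwsem.h>

    static DECLARE_RWSEM(ctl_rwsem);

    /* Core operation; by convention the caller holds ctl_rwsem for writing. */
    static int __ctl_add(int id)
    {
            /* ... duplicate check, insert, notify ... */
            return 0;
    }

    /* Public entry point: wraps the core in the lock. */
    static int ctl_add(int id)
    {
            int err;

            down_write(&ctl_rwsem);
            err = __ctl_add(id);
            up_write(&ctl_rwsem);
            return err;
    }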
sound/soc/codecs/wm_adsp.c
···765765766766static void wm_adsp2_show_fw_status(struct wm_adsp *dsp)767767{768768- u16 scratch[4];768768+ unsigned int scratch[4];769769+ unsigned int addr = dsp->base + ADSP2_SCRATCH0;770770+ unsigned int i;769771 int ret;770772771771- ret = regmap_raw_read(dsp->regmap, dsp->base + ADSP2_SCRATCH0,772772- scratch, sizeof(scratch));773773- if (ret) {774774- adsp_err(dsp, "Failed to read SCRATCH regs: %d\n", ret);775775- return;773773+ for (i = 0; i < ARRAY_SIZE(scratch); ++i) {774774+ ret = regmap_read(dsp->regmap, addr + i, &scratch[i]);775775+ if (ret) {776776+ adsp_err(dsp, "Failed to read SCRATCH%u: %d\n", i, ret);777777+ return;778778+ }776779 }777780778781 adsp_dbg(dsp, "FW SCRATCH 0:0x%x 1:0x%x 2:0x%x 3:0x%x\n",779779- be16_to_cpu(scratch[0]),780780- be16_to_cpu(scratch[1]),781781- be16_to_cpu(scratch[2]),782782- be16_to_cpu(scratch[3]));782782+ scratch[0], scratch[1], scratch[2], scratch[3]);783783}784784785785static void wm_adsp2v2_show_fw_status(struct wm_adsp *dsp)786786{787787- u32 scratch[2];787787+ unsigned int scratch[2];788788 int ret;789789790790- ret = regmap_raw_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH0_1,791791- scratch, sizeof(scratch));792792-790790+ ret = regmap_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH0_1,791791+ &scratch[0]);793792 if (ret) {794794- adsp_err(dsp, "Failed to read SCRATCH regs: %d\n", ret);793793+ adsp_err(dsp, "Failed to read SCRATCH0_1: %d\n", ret);795794 return;796795 }797796798798- scratch[0] = be32_to_cpu(scratch[0]);799799- scratch[1] = be32_to_cpu(scratch[1]);797797+ ret = regmap_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH2_3,798798+ &scratch[1]);799799+ if (ret) {800800+ adsp_err(dsp, "Failed to read SCRATCH2_3: %d\n", ret);801801+ return;802802+ }800803801804 adsp_dbg(dsp, "FW SCRATCH 0:0x%x 1:0x%x 2:0x%x 3:0x%x\n",802805 scratch[0] & 0xFFFF,
+23-3
sound/soc/intel/Kconfig
···101101 codec, then enable this option by saying Y or m. This is a102102 recommended option103103104104-config SND_SOC_INTEL_SKYLAKE_SSP_CLK105105- tristate106106-107104config SND_SOC_INTEL_SKYLAKE108105 tristate "SKL/BXT/KBL/GLK/CNL... Platforms"109106 depends on PCI && ACPI107107+ select SND_SOC_INTEL_SKYLAKE_COMMON108108+ help109109+ If you have a Intel Skylake/Broxton/ApolloLake/KabyLake/110110+ GeminiLake or CannonLake platform with the DSP enabled in the BIOS111111+ then enable this option by saying Y or m.112112+113113+if SND_SOC_INTEL_SKYLAKE114114+115115+config SND_SOC_INTEL_SKYLAKE_SSP_CLK116116+ tristate117117+118118+config SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC119119+ bool "HDAudio codec support"120120+ help121121+ If you have a Intel Skylake/Broxton/ApolloLake/KabyLake/122122+ GeminiLake or CannonLake platform with an HDaudio codec123123+ then enable this option by saying Y124124+125125+config SND_SOC_INTEL_SKYLAKE_COMMON126126+ tristate110127 select SND_HDA_EXT_CORE111128 select SND_HDA_DSP_LOADER112129 select SND_SOC_TOPOLOGY113130 select SND_SOC_INTEL_SST131131+ select SND_SOC_HDAC_HDA if SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC114132 select SND_SOC_ACPI_INTEL_MATCH115133 help116134 If you have a Intel Skylake/Broxton/ApolloLake/KabyLake/117135 GeminiLake or CannonLake platform with the DSP enabled in the BIOS118136 then enable this option by saying Y or m.137137+138138+endif ## SND_SOC_INTEL_SKYLAKE119139120140config SND_SOC_ACPI_INTEL_MATCH121141 tristate
+14-10
sound/soc/intel/boards/Kconfig
···293293 Say Y if you have such a device.294294 If unsure select "N".295295296296-config SND_SOC_INTEL_SKL_HDA_DSP_GENERIC_MACH297297- tristate "SKL/KBL/BXT/APL with HDA Codecs"298298- select SND_SOC_HDAC_HDMI299299- select SND_SOC_HDAC_HDA300300- help301301- This adds support for ASoC machine driver for Intel platforms302302- SKL/KBL/BXT/APL with iDisp, HDA audio codecs.303303- Say Y or m if you have such a device. This is a recommended option.304304- If unsure select "N".305305-306296config SND_SOC_INTEL_GLK_RT5682_MAX98357A_MACH307297 tristate "GLK with RT5682 and MAX98357A in I2S Mode"308298 depends on MFD_INTEL_LPSS && I2C && ACPI···308318 If unsure select "N".309319310320endif ## SND_SOC_INTEL_SKYLAKE321321+322322+if SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC323323+324324+config SND_SOC_INTEL_SKL_HDA_DSP_GENERIC_MACH325325+ tristate "SKL/KBL/BXT/APL with HDA Codecs"326326+ select SND_SOC_HDAC_HDMI327327+ # SND_SOC_HDAC_HDA is already selected328328+ help329329+ This adds support for ASoC machine driver for Intel platforms330330+ SKL/KBL/BXT/APL with iDisp, HDA audio codecs.331331+ Say Y or m if you have such a device. This is a recommended option.332332+ If unsure select "N".333333+334334+endif ## SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC311335312336endif ## SND_SOC_INTEL_MACH
+29-3
sound/soc/intel/boards/cht_bsw_max98090_ti.c
···1919 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~2020 */21212222+#include <linux/dmi.h>2223#include <linux/module.h>2324#include <linux/platform_device.h>2425#include <linux/slab.h>···35343635#define CHT_PLAT_CLK_3_HZ 192000003736#define CHT_CODEC_DAI "HiFi"3737+3838+#define QUIRK_PMC_PLT_CLK_0 0x0138393940struct cht_mc_private {4041 struct clk *mclk;···388385 .num_controls = ARRAY_SIZE(cht_mc_controls),389386};390387388388+static const struct dmi_system_id cht_max98090_quirk_table[] = {389389+ {390390+ /* Swanky model Chromebook (Toshiba Chromebook 2) */391391+ .matches = {392392+ DMI_MATCH(DMI_PRODUCT_NAME, "Swanky"),393393+ },394394+ .driver_data = (void *)QUIRK_PMC_PLT_CLK_0,395395+ },396396+ {}397397+};398398+391399static int snd_cht_mc_probe(struct platform_device *pdev)392400{401401+ const struct dmi_system_id *dmi_id;393402 struct device *dev = &pdev->dev;394403 int ret_val = 0;395404 struct cht_mc_private *drv;405405+ const char *mclk_name;406406+ int quirks = 0;407407+408408+ dmi_id = dmi_first_match(cht_max98090_quirk_table);409409+ if (dmi_id)410410+ quirks = (unsigned long)dmi_id->driver_data;396411397412 drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL);398413 if (!drv)···432411 snd_soc_card_cht.dev = &pdev->dev;433412 snd_soc_card_set_drvdata(&snd_soc_card_cht, drv);434413435435- drv->mclk = devm_clk_get(&pdev->dev, "pmc_plt_clk_3");414414+ if (quirks & QUIRK_PMC_PLT_CLK_0)415415+ mclk_name = "pmc_plt_clk_0";416416+ else417417+ mclk_name = "pmc_plt_clk_3";418418+419419+ drv->mclk = devm_clk_get(&pdev->dev, mclk_name);436420 if (IS_ERR(drv->mclk)) {437421 dev_err(&pdev->dev,438438- "Failed to get MCLK from pmc_plt_clk_3: %ld\n",439439- PTR_ERR(drv->mclk));422422+ "Failed to get MCLK from %s: %ld\n",423423+ mclk_name, PTR_ERR(drv->mclk));440424 return PTR_ERR(drv->mclk);441425 }442426
+24-8
sound/soc/intel/skylake/skl.c
···3737#include "skl.h"3838#include "skl-sst-dsp.h"3939#include "skl-sst-ipc.h"4040+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)4041#include "../../../soc/codecs/hdac_hda.h"4242+#endif41434244/*4345 * initialize the PCI registers···660658 platform_device_unregister(skl->clk_dev);661659}662660661661+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)662662+663663#define IDISP_INTEL_VENDOR_ID 0x80860000664664665665/*···680676#endif681677}682678679679+#endif /* CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC */680680+683681/*684682 * Probe the given codec address685683 */···691685 (AC_VERB_PARAMETERS << 8) | AC_PAR_VENDOR_ID;692686 unsigned int res = -1;693687 struct skl *skl = bus_to_skl(bus);688688+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)694689 struct hdac_hda_priv *hda_codec;695695- struct hdac_device *hdev;696690 int err;691691+#endif692692+ struct hdac_device *hdev;697693698694 mutex_lock(&bus->cmd_mutex);699695 snd_hdac_bus_send_cmd(bus, cmd);···705697 return -EIO;706698 dev_dbg(bus->dev, "codec #%d probed OK: %x\n", addr, res);707699700700+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)708701 hda_codec = devm_kzalloc(&skl->pci->dev, sizeof(*hda_codec),709702 GFP_KERNEL);710703 if (!hda_codec)···724715 load_codec_module(&hda_codec->codec);725716 }726717 return 0;718718+#else719719+ hdev = devm_kzalloc(&skl->pci->dev, sizeof(*hdev), GFP_KERNEL);720720+ if (!hdev)721721+ return -ENOMEM;722722+723723+ return snd_hdac_ext_bus_device_init(bus, addr, hdev);724724+#endif /* CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC */727725}728726729727/* Codec initialization */···831815 }832816 }833817818818+ /*819819+ * we are done probing so decrement link counts820820+ */821821+ list_for_each_entry(hlink, &bus->hlink_list, list)822822+ snd_hdac_ext_bus_link_put(bus, hlink);823823+834824 if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI)) {835825 err = snd_hdac_display_power(bus, false);836826 if (err < 0) {···845823 return;846824 }847825 }848848-849849- /*850850- * we are done probing so decrement link counts851851- */852852- list_for_each_entry(hlink, &bus->hlink_list, list)853853- snd_hdac_ext_bus_link_put(bus, hlink);854826855827 /* configure PM */856828 pm_runtime_put_noidle(bus->dev);···886870 hbus = skl_to_hbus(skl);887871 bus = skl_to_bus(skl);888872889889-#if IS_ENABLED(CONFIG_SND_SOC_HDAC_HDA)873873+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)890874 ext_ops = snd_soc_hdac_hda_get_ops();891875#endif892876 snd_hdac_ext_bus_init(bus, &pci->dev, &bus_core_ops, io_ops, ext_ops);
sound/soc/sh/rcar/ssi.c
···306306 if (rsnd_ssi_is_multi_slave(mod, io))307307 return 0;308308309309- if (ssi->rate) {309309+ if (ssi->usrcnt > 1) {310310 if (ssi->rate != rate) {311311 dev_err(dev, "SSI parent/child should use same rate\n");312312 return -EINVAL;
+8-2
sound/soc/soc-acpi.c
···1010snd_soc_acpi_find_machine(struct snd_soc_acpi_mach *machines)1111{1212 struct snd_soc_acpi_mach *mach;1313+ struct snd_soc_acpi_mach *mach_alt;13141415 for (mach = machines; mach->id[0]; mach++) {1516 if (acpi_dev_present(mach->id, NULL, -1)) {1616- if (mach->machine_quirk)1717- mach = mach->machine_quirk(mach);1717+ if (mach->machine_quirk) {1818+ mach_alt = mach->machine_quirk(mach);1919+ if (!mach_alt)2020+ continue; /* not full match, ignore */2121+ mach = mach_alt;2222+ }2323+1824 return mach;1925 }2026 }
sound/soc/stm/stm32_sai_sub.c
···390390 char *mclk_name, *p, *s = (char *)pname;391391 int ret, i = 0;392392393393- mclk = devm_kzalloc(dev, sizeof(mclk), GFP_KERNEL);393393+ mclk = devm_kzalloc(dev, sizeof(*mclk), GFP_KERNEL);394394 if (!mclk)395395 return -ENOMEM;396396
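The one-character fix above is the classic allocation-size bug: sizeof(mclk) is the size of the pointer variable (4 or 8 bytes), not of the structure it points to, so later field writes scribbled past the allocation. The safe idiom sizes by the dereferenced object and survives type changes:

    #include <stdlib.h>

    struct clk_desc { char name[32]; unsigned long rate; };

    struct clk_desc *alloc_clk(void)
    {
            /* BUG: malloc(sizeof(p))  allocates 8 bytes on LP64.     */
            /* FIX: malloc(sizeof(*p)) allocates the whole struct,    */
            /*      and stays correct if clk_desc ever grows.         */
            struct clk_desc *p = malloc(sizeof(*p));

            return p;
    }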
+1-1
sound/soc/sunxi/Kconfig
···3131config SND_SUN50I_CODEC_ANALOG3232 tristate "Allwinner sun50i Codec Analog Controls Support"3333 depends on (ARM64 && ARCH_SUNXI) || COMPILE_TEST3434- select SND_SUNXI_ADDA_PR_REGMAP3434+ select SND_SUN8I_ADDA_PR_REGMAP3535 help3636 Say Y or M if you want to add support for the analog controls for3737 the codec embedded in Allwinner A64 SoC.
+5-7
sound/soc/sunxi/sun8i-codec.c
···481481 { "Right Digital DAC Mixer", "AIF1 Slot 0 Digital DAC Playback Switch",482482 "AIF1 Slot 0 Right"},483483484484- /* ADC routes */484484+ /* ADC Routes */485485+ { "AIF1 Slot 0 Right ADC", NULL, "ADC" },486486+ { "AIF1 Slot 0 Left ADC", NULL, "ADC" },487487+488488+ /* ADC Mixer Routes */485489 { "Left Digital ADC Mixer", "AIF1 Data Digital ADC Capture Switch",486490 "AIF1 Slot 0 Left ADC" },487491 { "Right Digital ADC Mixer", "AIF1 Data Digital ADC Capture Switch",···609605610606static int sun8i_codec_remove(struct platform_device *pdev)611607{612612- struct snd_soc_card *card = platform_get_drvdata(pdev);613613- struct sun8i_codec *scodec = snd_soc_card_get_drvdata(card);614614-615608 pm_runtime_disable(&pdev->dev);616609 if (!pm_runtime_status_suspended(&pdev->dev))617610 sun8i_codec_runtime_suspend(&pdev->dev);618618-619619- clk_disable_unprepare(scodec->clk_module);620620- clk_disable_unprepare(scodec->clk_bus);621611622612 return 0;623613}
tools/build/feature/test-all.c
···3434# include "test-libelf-mmap.c"3535#undef main36363737+#define main main_test_get_current_dir_name3838+# include "test-get_current_dir_name.c"3939+#undef main4040+3741#define main main_test_glibc3842# include "test-glibc.c"3943#undef main···178174 main_test_hello();179175 main_test_libelf();180176 main_test_libelf_mmap();177177+ main_test_get_current_dir_name();181178 main_test_glibc();182179 main_test_dwarf();183180 main_test_dwarf_getlocations();
tools/include/uapi/asm-generic/ioctls.h
···7979#define TIOCGPTLCK _IOR('T', 0x39, int) /* Get Pty lock state */8080#define TIOCGEXCL _IOR('T', 0x40, int) /* Get exclusive mode state */8181#define TIOCGPTPEER _IO('T', 0x41) /* Safely open the slave */8282+#define TIOCGISO7816 _IOR('T', 0x42, struct serial_iso7816)8383+#define TIOCSISO7816 _IOWR('T', 0x43, struct serial_iso7816)82848385#define FIONCLEX 0x54508486#define FIOCLEX 0x5451
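The two new ioctls expose the ISO7816 (smart-card) configuration of a serial port through struct serial_iso7816 from <linux/serial.h>. A hedged user-space sketch of a read-modify-write, assuming the SER_ISO7816_ENABLED flag from that header:

    #include <sys/ioctl.h>
    #include <linux/serial.h>

    /* Turn on ISO7816 mode for an open tty fd (sketch; no error details). */
    static int enable_iso7816(int fd)
    {
            struct serial_iso7816 iso;

            if (ioctl(fd, TIOCGISO7816, &iso) < 0)
                    return -1;
            iso.flags |= SER_ISO7816_ENABLED;
            return ioctl(fd, TIOCSISO7816, &iso);
    }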
+22
tools/include/uapi/drm/i915_drm.h
···529529 */530530#define I915_PARAM_CS_TIMESTAMP_FREQUENCY 51531531532532+/*533533+ * Once upon a time we supposed that writes through the GGTT would be534534+ * immediately in physical memory (once flushed out of the CPU path). However,535535+ * on a few different processors and chipsets, this is not necessarily the case536536+ * as the writes appear to be buffered internally. Thus a read of the backing537537+ * storage (physical memory) via a different path (with different physical tags538538+ * to the indirect write via the GGTT) will see stale values from before539539+ * the GGTT write. Inside the kernel, we can for the most part keep track of540540+ * the different read/write domains in use (e.g. set-domain), but the assumption541541+ * of coherency is baked into the ABI, hence reporting its true state in this542542+ * parameter.543543+ *544544+ * Reports true when writes via mmap_gtt are immediately visible following an545545+ * lfence to flush the WCB.546546+ *547547+ * Reports false when writes via mmap_gtt are indeterminately delayed in an in548548+ * internal buffer and are _not_ immediately visible to third parties accessing549549+ * directly via mmap_cpu/mmap_wc. Use of mmap_gtt as part of an IPC550550+ * communications channel when reporting false is strongly disadvised.551551+ */552552+#define I915_PARAM_MMAP_GTT_COHERENT 52553553+532554typedef struct drm_i915_getparam {533555 __s32 param;534556 /*
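Like any I915_PARAM_*, the new flag is read with DRM_IOCTL_I915_GETPARAM, letting user space decide whether mmap_gtt is usable as an IPC channel. A minimal query (build against libdrm's installed headers):

    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* 1 = GGTT writes coherent, 0 = not, -1 = kernel predates the param. */
    static int mmap_gtt_coherent(int drm_fd)
    {
            int value = 0;
            struct drm_i915_getparam gp = {
                    .param = I915_PARAM_MMAP_GTT_COHERENT,
                    .value = &value,
            };

            if (ioctl(drm_fd, DRM_IOCTL_I915_GETPARAM, &gp) < 0)
                    return -1;
            return value;
    }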
tools/include/uapi/linux/prctl.h
···212212#define PR_SET_SPECULATION_CTRL 53213213/* Speculation control variants */214214# define PR_SPEC_STORE_BYPASS 0215215+# define PR_SPEC_INDIRECT_BRANCH 1215216/* Return and control values for PR_SET/GET_SPECULATION_CTRL */216217# define PR_SPEC_NOT_AFFECTED 0217218# define PR_SPEC_PRCTL (1UL << 0)
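PR_SPEC_INDIRECT_BRANCH gives each task a handle on its own indirect-branch speculation state, honoured when the kernel runs a per-thread mitigation mode. From user space (glibc wrapper; set returns 0 or -1/errno, get returns the PR_SPEC_* bits):

    #include <sys/prctl.h>

    /* Opt the calling thread into the indirect-branch mitigation
     * (i.e. disable indirect-branch speculation for this thread). */
    static int harden_this_thread(void)
    {
            return prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                         PR_SPEC_DISABLE, 0, 0);
    }

    /* Inspect the thread's current speculation-control state. */
    static long query_this_thread(void)
    {
            return prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                         0, 0, 0);
    }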
+37
tools/include/uapi/linux/tc_act/tc_bpf.h
···11+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */22+/*33+ * Copyright (c) 2015 Jiri Pirko <jiri@resnulli.us>44+ *55+ * This program is free software; you can redistribute it and/or modify66+ * it under the terms of the GNU General Public License as published by77+ * the Free Software Foundation; either version 2 of the License, or88+ * (at your option) any later version.99+ */1010+1111+#ifndef __LINUX_TC_BPF_H1212+#define __LINUX_TC_BPF_H1313+1414+#include <linux/pkt_cls.h>1515+1616+#define TCA_ACT_BPF 131717+1818+struct tc_act_bpf {1919+ tc_gen;2020+};2121+2222+enum {2323+ TCA_ACT_BPF_UNSPEC,2424+ TCA_ACT_BPF_TM,2525+ TCA_ACT_BPF_PARMS,2626+ TCA_ACT_BPF_OPS_LEN,2727+ TCA_ACT_BPF_OPS,2828+ TCA_ACT_BPF_FD,2929+ TCA_ACT_BPF_NAME,3030+ TCA_ACT_BPF_PAD,3131+ TCA_ACT_BPF_TAG,3232+ TCA_ACT_BPF_ID,3333+ __TCA_ACT_BPF_MAX,3434+};3535+#define TCA_ACT_BPF_MAX (__TCA_ACT_BPF_MAX - 1)3636+3737+#endif
tools/perf/Documentation/perf-list.txt
···5555 S - read sample value (PERF_SAMPLE_READ)5656 D - pin the event to the PMU5757 W - group is weak and will fallback to non-group if not schedulable,5858- only supported in 'perf stat' for now.59586059The 'p' modifier can be used for specifying how precise the instruction6160address should be. The 'p' modifier can be specified multiple times:
tools/perf/builtin-stat.c
···383383 return STAT_RECORD || counter->attr.read_format & PERF_FORMAT_ID;384384}385385386386-static struct perf_evsel *perf_evsel__reset_weak_group(struct perf_evsel *evsel)387387-{388388- struct perf_evsel *c2, *leader;389389- bool is_open = true;390390-391391- leader = evsel->leader;392392- pr_debug("Weak group for %s/%d failed\n",393393- leader->name, leader->nr_members);394394-395395- /*396396- * for_each_group_member doesn't work here because it doesn't397397- * include the first entry.398398- */399399- evlist__for_each_entry(evsel_list, c2) {400400- if (c2 == evsel)401401- is_open = false;402402- if (c2->leader == leader) {403403- if (is_open)404404- perf_evsel__close(c2);405405- c2->leader = c2;406406- c2->nr_members = 0;407407- }408408- }409409- return leader;410410-}411411-412386static bool is_target_alive(struct target *_target,413387 struct thread_map *threads)414388{···451477 if ((errno == EINVAL || errno == EBADF) &&452478 counter->leader != counter &&453479 counter->weak_group) {454454- counter = perf_evsel__reset_weak_group(counter);480480+ counter = perf_evlist__reset_weak_group(evsel_list, counter);455481 goto try_again;456482 }457483
+3
tools/perf/builtin-top.c
···14291429 }14301430 }1431143114321432+ if (opts->branch_stack && callchain_param.enabled)14331433+ symbol_conf.show_branchflag_count = true;14341434+14321435 sort__mode = SORT_MODE__TOP;14331436 /* display thread wants entries to be collapsed in a different tree */14341437 perf_hpp_list.need_collapse = 1;
+29-5
tools/perf/builtin-trace.c
···108108 } stats;109109 unsigned int max_stack;110110 unsigned int min_stack;111111+ bool raw_augmented_syscalls;111112 bool not_ev_qualifier;112113 bool live;113114 bool full_time;···17251724 return printed;17261725}1727172617281728-static void *syscall__augmented_args(struct syscall *sc, struct perf_sample *sample, int *augmented_args_size)17271727+static void *syscall__augmented_args(struct syscall *sc, struct perf_sample *sample, int *augmented_args_size, bool raw_augmented)17291728{17301729 void *augmented_args = NULL;17301730+ /*17311731+ * For now with BPF raw_augmented we hook into raw_syscalls:sys_enter17321732+ * and there we get all 6 syscall args plus the tracepoint common17331733+ * fields (sizeof(long)) and the syscall_nr (another long). So we check17341734+ * if that is the case and if so don't look after the sc->args_size,17351735+ * but always after the full raw_syscalls:sys_enter payload, which is17361736+ * fixed.17371737+ *17381738+ * We'll revisit this later to pass s->args_size to the BPF augmenter17391739+ * (now tools/perf/examples/bpf/augmented_raw_syscalls.c, so that it17401740+ * copies only what we need for each syscall, like what happens when we17411741+ * use syscalls:sys_enter_NAME, so that we reduce the kernel/userspace17421742+ * traffic to just what is needed for each syscall.17431743+ */17441744+ int args_size = raw_augmented ? (8 * (int)sizeof(long)) : sc->args_size;1731174517321732- *augmented_args_size = sample->raw_size - sc->args_size;17461746+ *augmented_args_size = sample->raw_size - args_size;17331747 if (*augmented_args_size > 0)17341734- augmented_args = sample->raw_data + sc->args_size;17481748+ augmented_args = sample->raw_data + args_size;1735174917361750 return augmented_args;17371751}···17961780 * here and avoid using augmented syscalls when the evsel is the raw_syscalls one.17971781 */17981782 if (evsel != trace->syscalls.events.sys_enter)17991799- augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size);17831783+ augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size, trace->raw_augmented_syscalls);18001784 ttrace->entry_time = sample->time;18011785 msg = ttrace->entry_str;18021786 printed += scnprintf(msg + printed, trace__entry_str_size - printed, "%s(", sc->name);···18491833 goto out_put;1850183418511835 args = perf_evsel__sc_tp_ptr(evsel, args, sample);18521852- augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size);18361836+ augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size, trace->raw_augmented_syscalls);18531837 syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread);18541838 fprintf(trace->output, "%s", msg);18551839 err = 0;···35173501 evsel->handler = trace__sys_enter;3518350235193503 evlist__for_each_entry(trace.evlist, evsel) {35043504+ bool raw_syscalls_sys_exit = strcmp(perf_evsel__name(evsel), "raw_syscalls:sys_exit") == 0;35053505+35063506+ if (raw_syscalls_sys_exit) {35073507+ trace.raw_augmented_syscalls = true;35083508+ goto init_augmented_syscall_tp;35093509+ }35103510+35203511 if (strstarts(perf_evsel__name(evsel), "syscalls:sys_exit_")) {35123512+init_augmented_syscall_tp:35213513 perf_evsel__init_augmented_syscall_tp(evsel);35223514 perf_evsel__init_augmented_syscall_tp_ret(evsel);35233515 evsel->handler = trace__sys_exit;
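The 8 * sizeof(long) in the comment is the fixed raw_syscalls:sys_enter layout on 64-bit: one long of common tracepoint fields, one for the syscall number, six for the arguments. With the struct from the BPF example below, that can be pinned down at compile time (LP64 assumption):

    #include <assert.h>

    struct syscall_enter_args {
            unsigned long long common_tp_fields;
            long syscall_nr;
            unsigned long args[6];
    };

    /* 1 (common fields) + 1 (nr) + 6 (args) longs = 64 bytes on LP64. */
    static_assert(sizeof(struct syscall_enter_args) == 8 * sizeof(long),
                  "raw sys_enter payload is 8 longs");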
+131
tools/perf/examples/bpf/augmented_raw_syscalls.c
···11+// SPDX-License-Identifier: GPL-2.022+/*33+ * Augment the raw_syscalls tracepoints with the contents of the pointer arguments.44+ *55+ * Test it with:66+ *77+ * perf trace -e tools/perf/examples/bpf/augmented_raw_syscalls.c cat /etc/passwd > /dev/null88+ *99+ * This exactly matches what is marshalled into the raw_syscall:sys_enter1010+ * payload expected by the 'perf trace' beautifiers.1111+ *1212+ * For now it just uses the existing tracepoint augmentation code in 'perf1313+ * trace', in the next csets we'll hook up these with the sys_enter/sys_exit1414+ * code that will combine entry/exit in a strace like way.1515+ */1616+1717+#include <stdio.h>1818+#include <linux/socket.h>1919+2020+/* bpf-output associated map */2121+struct bpf_map SEC("maps") __augmented_syscalls__ = {2222+ .type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,2323+ .key_size = sizeof(int),2424+ .value_size = sizeof(u32),2525+ .max_entries = __NR_CPUS__,2626+};2727+2828+struct syscall_enter_args {2929+ unsigned long long common_tp_fields;3030+ long syscall_nr;3131+ unsigned long args[6];3232+};3333+3434+struct syscall_exit_args {3535+ unsigned long long common_tp_fields;3636+ long syscall_nr;3737+ long ret;3838+};3939+4040+struct augmented_filename {4141+ unsigned int size;4242+ int reserved;4343+ char value[256];4444+};4545+4646+#define SYS_OPEN 24747+#define SYS_OPENAT 2574848+4949+SEC("raw_syscalls:sys_enter")5050+int sys_enter(struct syscall_enter_args *args)5151+{5252+ struct {5353+ struct syscall_enter_args args;5454+ struct augmented_filename filename;5555+ } augmented_args;5656+ unsigned int len = sizeof(augmented_args);5757+ const void *filename_arg = NULL;5858+5959+ probe_read(&augmented_args.args, sizeof(augmented_args.args), args);6060+ /*6161+ * Yonghong and Edward Cree sayz:6262+ *6363+ * https://www.spinics.net/lists/netdev/msg531645.html6464+ *6565+ * >> R0=inv(id=0) R1=inv2 R6=ctx(id=0,off=0,imm=0) R7=inv64 R10=fp0,call_-16666+ * >> 10: (bf) r1 = r66767+ * >> 11: (07) r1 += 166868+ * >> 12: (05) goto pc+26969+ * >> 15: (79) r3 = *(u64 *)(r1 +0)7070+ * >> dereference of modified ctx ptr R1 off=16 disallowed7171+ * > Aha, we at least got a different error message this time.7272+ * > And indeed llvm has done that optimisation, rather than the more obvious7373+ * > 11: r3 = *(u64 *)(r1 +16)7474+ * > because it wants to have lots of reads share a single insn. You may be able7575+ * > to defeat that optimisation by adding compiler barriers, idk. Maybe someone7676+ * > with llvm knowledge can figure out how to stop it (ideally, llvm would know7777+ * > when it's generating for bpf backend and not do that). -O0? ¯\_(ツ)_/¯7878+ *7979+ * The optimization mostly likes below:8080+ *8181+ * br1:8282+ * ...8383+ * r1 += 168484+ * goto merge8585+ * br2:8686+ * ...8787+ * r1 += 208888+ * goto merge8989+ * merge:9090+ * *(u64 *)(r1 + 0)9191+ *9292+ * The compiler tries to merge common loads. There is no easy way to9393+ * stop this compiler optimization without turning off a lot of other9494+ * optimizations. 
The easiest way is to add barriers:9595+ *9696+ * __asm__ __volatile__("": : :"memory")9797+ *9898+ * after the ctx memory access to prevent their down stream merging.9999+ */100100+ switch (augmented_args.args.syscall_nr) {101101+ case SYS_OPEN: filename_arg = (const void *)args->args[0];102102+ __asm__ __volatile__("": : :"memory");103103+ break;104104+ case SYS_OPENAT: filename_arg = (const void *)args->args[1];105105+ break;106106+ }107107+108108+ if (filename_arg != NULL) {109109+ augmented_args.filename.reserved = 0;110110+ augmented_args.filename.size = probe_read_str(&augmented_args.filename.value,111111+ sizeof(augmented_args.filename.value),112112+ filename_arg);113113+ if (augmented_args.filename.size < sizeof(augmented_args.filename.value)) {114114+ len -= sizeof(augmented_args.filename.value) - augmented_args.filename.size;115115+ len &= sizeof(augmented_args.filename.value) - 1;116116+ }117117+ } else {118118+ len = sizeof(augmented_args.args);119119+ }120120+121121+ perf_event_output(args, &__augmented_syscalls__, BPF_F_CURRENT_CPU, &augmented_args, len);122122+ return 0;123123+}124124+125125+SEC("raw_syscalls:sys_exit")126126+int sys_exit(struct syscall_exit_args *args)127127+{128128+ return 1; /* 0 as soon as we start copying data returned by the kernel, e.g. 'read' */129129+}130130+131131+license(GPL);
+38-11
tools/perf/jvmti/jvmti_agent.c
···125125}126126127127static int128128-debug_cache_init(void)128128+create_jit_cache_dir(void)129129{130130 char str[32];131131 char *base, *p;···144144145145 strftime(str, sizeof(str), JIT_LANG"-jit-%Y%m%d", &tm);146146147147- snprintf(jit_path, PATH_MAX - 1, "%s/.debug/", base);148148-147147+ ret = snprintf(jit_path, PATH_MAX, "%s/.debug/", base);148148+ if (ret >= PATH_MAX) {149149+ warnx("jvmti: cannot generate jit cache dir because %s/.debug/"150150+ " is too long, please check the cwd, JITDUMPDIR, and"151151+ " HOME variables", base);152152+ return -1;153153+ }149154 ret = mkdir(jit_path, 0755);150155 if (ret == -1) {151156 if (errno != EEXIST) {···159154 }160155 }161156162162- snprintf(jit_path, PATH_MAX - 1, "%s/.debug/jit", base);157157+ ret = snprintf(jit_path, PATH_MAX, "%s/.debug/jit", base);158158+ if (ret >= PATH_MAX) {159159+ warnx("jvmti: cannot generate jit cache dir because"160160+ " %s/.debug/jit is too long, please check the cwd,"161161+ " JITDUMPDIR, and HOME variables", base);162162+ return -1;163163+ }163164 ret = mkdir(jit_path, 0755);164165 if (ret == -1) {165166 if (errno != EEXIST) {166166- warn("cannot create jit cache dir %s", jit_path);167167+ warn("jvmti: cannot create jit cache dir %s", jit_path);167168 return -1;168169 }169170 }170171171171- snprintf(jit_path, PATH_MAX - 1, "%s/.debug/jit/%s.XXXXXXXX", base, str);172172-172172+ ret = snprintf(jit_path, PATH_MAX, "%s/.debug/jit/%s.XXXXXXXX", base, str);173173+ if (ret >= PATH_MAX) {174174+ warnx("jvmti: cannot generate jit cache dir because"175175+ " %s/.debug/jit/%s.XXXXXXXX is too long, please check"176176+ " the cwd, JITDUMPDIR, and HOME variables",177177+ base, str);178178+ return -1;179179+ }173180 p = mkdtemp(jit_path);174181 if (p != jit_path) {175175- warn("cannot create jit cache dir %s", jit_path);182182+ warn("jvmti: cannot create jit cache dir %s", jit_path);176183 return -1;177184 }178185···245228{246229 char dump_path[PATH_MAX];247230 struct jitheader header;248248- int fd;231231+ int fd, ret;249232 FILE *fp;250233251234 init_arch_timestamp();···262245263246 memset(&header, 0, sizeof(header));264247265265- debug_cache_init();248248+ /*249249+ * jitdump file dir250250+ */251251+ if (create_jit_cache_dir() < 0)252252+ return NULL;266253267254 /*268255 * jitdump file name269256 */270270- scnprintf(dump_path, PATH_MAX, "%s/jit-%i.dump", jit_path, getpid());257257+ ret = snprintf(dump_path, PATH_MAX, "%s/jit-%i.dump", jit_path, getpid());258258+ if (ret >= PATH_MAX) {259259+ warnx("jvmti: cannot generate jitdump file full path because"260260+ " %s/jit-%i.dump is too long, please check the cwd,"261261+ " JITDUMPDIR, and HOME variables", jit_path, getpid());262262+ return NULL;263263+ }271264272265 fd = open(dump_path, O_CREAT|O_TRUNC|O_RDWR, 0666);273266 if (fd == -1)
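Every path construction in the agent now tests snprintf()'s return value; since it returns the length the output would have had without the size limit, ret >= size is the portable truncation check that the old silent PATH_MAX - 1 calls lacked. The idiom on its own:

    #include <stdio.h>
    #include <limits.h>

    /* Build "<base>/.debug/jit" into buf, refusing silent truncation. */
    static int build_cache_path(char buf[PATH_MAX], const char *base)
    {
            int ret = snprintf(buf, PATH_MAX, "%s/.debug/jit", base);

            /* snprintf() reports the untruncated length, so a value of
             * PATH_MAX or more means the path did not fit. */
            if (ret < 0 || ret >= PATH_MAX)
                    return -1;
            return 0;
    }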
+490-3
tools/perf/scripts/python/exported-sql-viewer.py
···119119 return "[kernel]"120120 return name121121122122+def findnth(s, sub, n, offs=0):123123+ pos = s.find(sub)124124+ if pos < 0:125125+ return pos126126+ if n <= 1:127127+ return offs + pos128128+ return findnth(s[pos + 1:], sub, n - 1, offs + pos + 1)129129+122130# Percent to one decimal place123131124132def PercentToOneDP(n, d):···14721464 else:14731465 self.find_bar.NotFound()1474146614671467+# Dialog data item converted and validated using a SQL table14681468+14691469+class SQLTableDialogDataItem():14701470+14711471+ def __init__(self, glb, label, placeholder_text, table_name, match_column, column_name1, column_name2, parent):14721472+ self.glb = glb14731473+ self.label = label14741474+ self.placeholder_text = placeholder_text14751475+ self.table_name = table_name14761476+ self.match_column = match_column14771477+ self.column_name1 = column_name114781478+ self.column_name2 = column_name214791479+ self.parent = parent14801480+14811481+ self.value = ""14821482+14831483+ self.widget = QLineEdit()14841484+ self.widget.editingFinished.connect(self.Validate)14851485+ self.widget.textChanged.connect(self.Invalidate)14861486+ self.red = False14871487+ self.error = ""14881488+ self.validated = True14891489+14901490+ self.last_id = 014911491+ self.first_time = 014921492+ self.last_time = 2 ** 6414931493+ if self.table_name == "<timeranges>":14941494+ query = QSqlQuery(self.glb.db)14951495+ QueryExec(query, "SELECT id, time FROM samples ORDER BY id DESC LIMIT 1")14961496+ if query.next():14971497+ self.last_id = int(query.value(0))14981498+ self.last_time = int(query.value(1))14991499+ QueryExec(query, "SELECT time FROM samples WHERE time != 0 ORDER BY id LIMIT 1")15001500+ if query.next():15011501+ self.first_time = int(query.value(0))15021502+ if placeholder_text:15031503+ placeholder_text += ", between " + str(self.first_time) + " and " + str(self.last_time)15041504+15051505+ if placeholder_text:15061506+ self.widget.setPlaceholderText(placeholder_text)15071507+15081508+ def ValueToIds(self, value):15091509+ ids = []15101510+ query = QSqlQuery(self.glb.db)15111511+ stmt = "SELECT id FROM " + self.table_name + " WHERE " + self.match_column + " = '" + value + "'"15121512+ ret = query.exec_(stmt)15131513+ if ret:15141514+ while query.next():15151515+ ids.append(str(query.value(0)))15161516+ return ids15171517+15181518+ def IdBetween(self, query, lower_id, higher_id, order):15191519+ QueryExec(query, "SELECT id FROM samples WHERE id > " + str(lower_id) + " AND id < " + str(higher_id) + " ORDER BY id " + order + " LIMIT 1")15201520+ if query.next():15211521+ return True, int(query.value(0))15221522+ else:15231523+ return False, 015241524+15251525+ def BinarySearchTime(self, lower_id, higher_id, target_time, get_floor):15261526+ query = QSqlQuery(self.glb.db)15271527+ while True:15281528+ next_id = int((lower_id + higher_id) / 2)15291529+ QueryExec(query, "SELECT time FROM samples WHERE id = " + str(next_id))15301530+ if not query.next():15311531+ ok, dbid = self.IdBetween(query, lower_id, next_id, "DESC")15321532+ if not ok:15331533+ ok, dbid = self.IdBetween(query, next_id, higher_id, "")15341534+ if not ok:15351535+ return str(higher_id)15361536+ next_id = dbid15371537+ QueryExec(query, "SELECT time FROM samples WHERE id = " + str(next_id))15381538+ next_time = int(query.value(0))15391539+ if get_floor:15401540+ if target_time > next_time:15411541+ lower_id = next_id15421542+ else:15431543+ higher_id = next_id15441544+ if higher_id <= lower_id + 1:15451545+ return str(higher_id)15461546+ 
else:15471547+ if target_time >= next_time:15481548+ lower_id = next_id15491549+ else:15501550+ higher_id = next_id15511551+ if higher_id <= lower_id + 1:15521552+ return str(lower_id)15531553+15541554+ def ConvertRelativeTime(self, val):15551555+ print "val ", val15561556+ mult = 115571557+ suffix = val[-2:]15581558+ if suffix == "ms":15591559+ mult = 100000015601560+ elif suffix == "us":15611561+ mult = 100015621562+ elif suffix == "ns":15631563+ mult = 115641564+ else:15651565+ return val15661566+ val = val[:-2].strip()15671567+ if not self.IsNumber(val):15681568+ return val15691569+ val = int(val) * mult15701570+ if val >= 0:15711571+ val += self.first_time15721572+ else:15731573+ val += self.last_time15741574+ return str(val)15751575+15761576+ def ConvertTimeRange(self, vrange):15771577+ print "vrange ", vrange15781578+ if vrange[0] == "":15791579+ vrange[0] = str(self.first_time)15801580+ if vrange[1] == "":15811581+ vrange[1] = str(self.last_time)15821582+ vrange[0] = self.ConvertRelativeTime(vrange[0])15831583+ vrange[1] = self.ConvertRelativeTime(vrange[1])15841584+ print "vrange2 ", vrange15851585+ if not self.IsNumber(vrange[0]) or not self.IsNumber(vrange[1]):15861586+ return False15871587+ print "ok1"15881588+ beg_range = max(int(vrange[0]), self.first_time)15891589+ end_range = min(int(vrange[1]), self.last_time)15901590+ if beg_range > self.last_time or end_range < self.first_time:15911591+ return False15921592+ print "ok2"15931593+ vrange[0] = self.BinarySearchTime(0, self.last_id, beg_range, True)15941594+ vrange[1] = self.BinarySearchTime(1, self.last_id + 1, end_range, False)15951595+ print "vrange3 ", vrange15961596+ return True15971597+15981598+ def AddTimeRange(self, value, ranges):15991599+ print "value ", value16001600+ n = value.count("-")16011601+ if n == 1:16021602+ pass16031603+ elif n == 2:16041604+ if value.split("-")[1].strip() == "":16051605+ n = 116061606+ elif n == 3:16071607+ n = 216081608+ else:16091609+ return False16101610+ pos = findnth(value, "-", n)16111611+ vrange = [value[:pos].strip() ,value[pos+1:].strip()]16121612+ if self.ConvertTimeRange(vrange):16131613+ ranges.append(vrange)16141614+ return True16151615+ return False16161616+16171617+ def InvalidValue(self, value):16181618+ self.value = ""16191619+ palette = QPalette()16201620+ palette.setColor(QPalette.Text,Qt.red)16211621+ self.widget.setPalette(palette)16221622+ self.red = True16231623+ self.error = self.label + " invalid value '" + value + "'"16241624+ self.parent.ShowMessage(self.error)16251625+16261626+ def IsNumber(self, value):16271627+ try:16281628+ x = int(value)16291629+ except:16301630+ x = 016311631+ return str(x) == value16321632+16331633+ def Invalidate(self):16341634+ self.validated = False16351635+16361636+ def Validate(self):16371637+ input_string = self.widget.text()16381638+ self.validated = True16391639+ if self.red:16401640+ palette = QPalette()16411641+ self.widget.setPalette(palette)16421642+ self.red = False16431643+ if not len(input_string.strip()):16441644+ self.error = ""16451645+ self.value = ""16461646+ return16471647+ if self.table_name == "<timeranges>":16481648+ ranges = []16491649+ for value in [x.strip() for x in input_string.split(",")]:16501650+ if not self.AddTimeRange(value, ranges):16511651+ return self.InvalidValue(value)16521652+ ranges = [("(" + self.column_name1 + " >= " + r[0] + " AND " + self.column_name1 + " <= " + r[1] + ")") for r in ranges]16531653+ self.value = " OR ".join(ranges)16541654+ elif self.table_name == "<ranges>":16551655+ 
singles = []16561656+ ranges = []16571657+ for value in [x.strip() for x in input_string.split(",")]:16581658+ if "-" in value:16591659+ vrange = value.split("-")16601660+ if len(vrange) != 2 or not self.IsNumber(vrange[0]) or not self.IsNumber(vrange[1]):16611661+ return self.InvalidValue(value)16621662+ ranges.append(vrange)16631663+ else:16641664+ if not self.IsNumber(value):16651665+ return self.InvalidValue(value)16661666+ singles.append(value)16671667+ ranges = [("(" + self.column_name1 + " >= " + r[0] + " AND " + self.column_name1 + " <= " + r[1] + ")") for r in ranges]16681668+ if len(singles):16691669+ ranges.append(self.column_name1 + " IN (" + ",".join(singles) + ")")16701670+ self.value = " OR ".join(ranges)16711671+ elif self.table_name:16721672+ all_ids = []16731673+ for value in [x.strip() for x in input_string.split(",")]:16741674+ ids = self.ValueToIds(value)16751675+ if len(ids):16761676+ all_ids.extend(ids)16771677+ else:16781678+ return self.InvalidValue(value)16791679+ self.value = self.column_name1 + " IN (" + ",".join(all_ids) + ")"16801680+ if self.column_name2:16811681+ self.value = "( " + self.value + " OR " + self.column_name2 + " IN (" + ",".join(all_ids) + ") )"16821682+ else:16831683+ self.value = input_string.strip()16841684+ self.error = ""16851685+ self.parent.ClearMessage()16861686+16871687+ def IsValid(self):16881688+ if not self.validated:16891689+ self.Validate()16901690+ if len(self.error):16911691+ self.parent.ShowMessage(self.error)16921692+ return False16931693+ return True16941694+16951695+# Selected branch report creation dialog16961696+16971697+class SelectedBranchDialog(QDialog):16981698+16991699+ def __init__(self, glb, parent=None):17001700+ super(SelectedBranchDialog, self).__init__(parent)17011701+17021702+ self.glb = glb17031703+17041704+ self.name = ""17051705+ self.where_clause = ""17061706+17071707+ self.setWindowTitle("Selected Branches")17081708+ self.setMinimumWidth(600)17091709+17101710+ items = (17111711+ ("Report name:", "Enter a name to appear in the window title bar", "", "", "", ""),17121712+ ("Time ranges:", "Enter time ranges", "<timeranges>", "", "samples.id", ""),17131713+ ("CPUs:", "Enter CPUs or ranges e.g. 
0,5-6", "<ranges>", "", "cpu", ""),17141714+ ("Commands:", "Only branches with these commands will be included", "comms", "comm", "comm_id", ""),17151715+ ("PIDs:", "Only branches with these process IDs will be included", "threads", "pid", "thread_id", ""),17161716+ ("TIDs:", "Only branches with these thread IDs will be included", "threads", "tid", "thread_id", ""),17171717+ ("DSOs:", "Only branches with these DSOs will be included", "dsos", "short_name", "samples.dso_id", "to_dso_id"),17181718+ ("Symbols:", "Only branches with these symbols will be included", "symbols", "name", "symbol_id", "to_symbol_id"),17191719+ ("Raw SQL clause: ", "Enter a raw SQL WHERE clause", "", "", "", ""),17201720+ )17211721+ self.data_items = [SQLTableDialogDataItem(glb, *x, parent=self) for x in items]17221722+17231723+ self.grid = QGridLayout()17241724+17251725+ for row in xrange(len(self.data_items)):17261726+ self.grid.addWidget(QLabel(self.data_items[row].label), row, 0)17271727+ self.grid.addWidget(self.data_items[row].widget, row, 1)17281728+17291729+ self.status = QLabel()17301730+17311731+ self.ok_button = QPushButton("Ok", self)17321732+ self.ok_button.setDefault(True)17331733+ self.ok_button.released.connect(self.Ok)17341734+ self.ok_button.setSizePolicy(QSizePolicy.Fixed, QSizePolicy.Fixed)17351735+17361736+ self.cancel_button = QPushButton("Cancel", self)17371737+ self.cancel_button.released.connect(self.reject)17381738+ self.cancel_button.setSizePolicy(QSizePolicy.Fixed, QSizePolicy.Fixed)17391739+17401740+ self.hbox = QHBoxLayout()17411741+ #self.hbox.addStretch()17421742+ self.hbox.addWidget(self.status)17431743+ self.hbox.addWidget(self.ok_button)17441744+ self.hbox.addWidget(self.cancel_button)17451745+17461746+ self.vbox = QVBoxLayout()17471747+ self.vbox.addLayout(self.grid)17481748+ self.vbox.addLayout(self.hbox)17491749+17501750+ self.setLayout(self.vbox);17511751+17521752+ def Ok(self):17531753+ self.name = self.data_items[0].value17541754+ if not self.name:17551755+ self.ShowMessage("Report name is required")17561756+ return17571757+ for d in self.data_items:17581758+ if not d.IsValid():17591759+ return17601760+ for d in self.data_items[1:]:17611761+ if len(d.value):17621762+ if len(self.where_clause):17631763+ self.where_clause += " AND "17641764+ self.where_clause += d.value17651765+ if len(self.where_clause):17661766+ self.where_clause = " AND ( " + self.where_clause + " ) "17671767+ else:17681768+ self.ShowMessage("No selection")17691769+ return17701770+ self.accept()17711771+17721772+ def ShowMessage(self, msg):17731773+ self.status.setText("<font color=#FF0000>" + msg)17741774+17751775+ def ClearMessage(self):17761776+ self.status.setText("")17771777+14751778# Event list1476177914771780def GetEventList(db):···19751656 def FindDone(self, row):19761657 self.find_bar.Idle()19771658 if row >= 0:19781978- self.view.setCurrentIndex(self.model.index(row, 0, QModelIndex()))16591659+ self.view.setCurrentIndex(self.model.mapFromSource(self.data_model.index(row, 0, QModelIndex())))19791660 else:19801661 self.find_bar.NotFound()19811662···20841765 def setActiveSubWindow(self, nr):20851766 self.mdi_area.setActiveSubWindow(self.mdi_area.subWindowList()[nr - 1])2086176717681768+# Help text17691769+17701770+glb_help_text = """17711771+<h1>Contents</h1>17721772+<style>17731773+p.c1 {17741774+ text-indent: 40px;17751775+}17761776+p.c2 {17771777+ text-indent: 80px;17781778+}17791779+}17801780+</style>17811781+<p class=c1><a href=#reports>1. 
Reports</a></p>17821782+<p class=c2><a href=#callgraph>1.1 Context-Sensitive Call Graph</a></p>17831783+<p class=c2><a href=#allbranches>1.2 All branches</a></p>17841784+<p class=c2><a href=#selectedbranches>1.3 Selected branches</a></p>17851785+<p class=c1><a href=#tables>2. Tables</a></p>17861786+<h1 id=reports>1. Reports</h1>17871787+<h2 id=callgraph>1.1 Context-Sensitive Call Graph</h2>17881788+The result is a GUI window with a tree representing a context-sensitive17891789+call-graph. Expanding a couple of levels of the tree and adjusting column17901790+widths to suit will display something like:17911791+<pre>17921792+ Call Graph: pt_example17931793+Call Path Object Count Time(ns) Time(%) Branch Count Branch Count(%)17941794+v- ls17951795+ v- 2638:263817961796+ v- _start ld-2.19.so 1 10074071 100.0 211135 100.017971797+ |- unknown unknown 1 13198 0.1 1 0.017981798+ >- _dl_start ld-2.19.so 1 1400980 13.9 19637 9.317991799+ >- _d_linit_internal ld-2.19.so 1 448152 4.4 11094 5.318001800+ v-__libc_start_main@plt ls 1 8211741 81.5 180397 85.418011801+ >- _dl_fixup ld-2.19.so 1 7607 0.1 108 0.118021802+ >- __cxa_atexit libc-2.19.so 1 11737 0.1 10 0.018031803+ >- __libc_csu_init ls 1 10354 0.1 10 0.018041804+ |- _setjmp libc-2.19.so 1 0 0.0 4 0.018051805+ v- main ls 1 8182043 99.6 180254 99.918061806+</pre>18071807+<h3>Points to note:</h3>18081808+<ul>18091809+<li>The top level is a command name (comm)</li>18101810+<li>The next level is a thread (pid:tid)</li>18111811+<li>Subsequent levels are functions</li>18121812+<li>'Count' is the number of calls</li>18131813+<li>'Time' is the elapsed time until the function returns</li>18141814+<li>Percentages are relative to the level above</li>18151815+<li>'Branch Count' is the total number of branches for that function and all functions that it calls</li>18161816+</ul>18171817+<h3>Find</h3>18181818+Ctrl-F displays a Find bar which finds function names by either an exact match or a pattern match.18191819+The pattern matching symbols are ? for any character and * for zero or more characters.18201820+<h2 id=allbranches>1.2 All branches</h2>18211821+The All branches report displays all branches in chronological order.18221822+Not all data is fetched immediately. More records can be fetched using the Fetch bar provided.18231823+<h3>Disassembly</h3>18241824+Open a branch to display disassembly. This only works if:18251825+<ol>18261826+<li>The disassembler is available. Currently, only Intel XED is supported - see <a href=#xed>Intel XED Setup</a></li>18271827+<li>The object code is available. Currently, only the perf build ID cache is searched for object code.18281828+The default directory ~/.debug can be overridden by setting environment variable PERF_BUILDID_DIR.18291829+One exception is kcore where the DSO long name is used (refer to dsos_view on the Tables menu),18301830+or alternatively, set environment variable PERF_KCORE to the kcore file name.</li>18311831+</ol>18321832+<h4 id=xed>Intel XED Setup</h4>18331833+To use Intel XED, libxed.so must be present. 
To build and install libxed.so:18341834+<pre>18351835+git clone https://github.com/intelxed/mbuild.git mbuild18361836+git clone https://github.com/intelxed/xed18371837+cd xed18381838+./mfile.py --share18391839+sudo ./mfile.py --prefix=/usr/local install18401840+sudo ldconfig18411841+</pre>18421842+<h3>Find</h3>18431843+Ctrl-F displays a Find bar which finds substrings by either an exact match or a regular expression match.18441844+Refer to the Python documentation for the regular expression syntax.18451845+All columns are searched, but only currently fetched rows are searched.18461846+<h2 id=selectedbranches>1.3 Selected branches</h2>18471847+This is the same as the <a href=#allbranches>All branches</a> report but with the data reduced18481848+by various selection criteria. A dialog box displays available criteria which are AND'ed together.18491849+<h3>1.3.1 Time ranges</h3>18501850+The time ranges hint text shows the total time range. Relative time ranges can also be entered in18511851+ms, us or ns. Also, negative values are relative to the end of the trace. Examples:18521852+<pre>18531853+ 81073085947329-81073085958238 From 81073085947329 to 8107308595823818541854+ 100us-200us From 100us to 200us18551855+ 10ms- From 10ms to the end18561856+ -100ns The first 100ns18571857+ -10ms- The last 10ms18581858+</pre>18591859+N.B. Due to the granularity of timestamps, there could be no branches in any given time range.18601860+<h1 id=tables>2. Tables</h1>18611861+The Tables menu shows all tables and views in the database. Most tables have an associated view18621862+which displays the information in a more friendly way. Not all data for large tables is fetched18631863+immediately. More records can be fetched using the Fetch bar provided. Columns can be sorted,18641864+but that can be slow for large tables.18651865+<p>There are also tables of database meta-information.18661866+For SQLite3 databases, the sqlite_master table is included.18671867+For PostgreSQL databases, information_schema.tables/views/columns are included.18681868+<h3>Find</h3>18691869+Ctrl-F displays a Find bar which finds substrings by either an exact match or a regular expression match.18701870+Refer to the Python documentation for the regular expression syntax.18711871+All columns are searched, but only currently fetched rows are searched.18721872+<p>N.B. 
Results are found in id order, so if the table is re-ordered, find-next and find-previous18731873+will go to the next/previous result in id order, instead of display order.18741874+"""18751875+18761876+# Help window18771877+18781878+class HelpWindow(QMdiSubWindow):18791879+18801880+ def __init__(self, glb, parent=None):18811881+ super(HelpWindow, self).__init__(parent)18821882+18831883+ self.text = QTextBrowser()18841884+ self.text.setHtml(glb_help_text)18851885+ self.text.setReadOnly(True)18861886+ self.text.setOpenExternalLinks(True)18871887+18881888+ self.setWidget(self.text)18891889+18901890+ AddSubWindow(glb.mainwindow.mdi_area, self, "Exported SQL Viewer Help")18911891+18921892+# Main window that only displays the help text18931893+18941894+class HelpOnlyWindow(QMainWindow):18951895+18961896+ def __init__(self, parent=None):18971897+ super(HelpOnlyWindow, self).__init__(parent)18981898+18991899+ self.setMinimumSize(200, 100)19001900+ self.resize(800, 600)19011901+ self.setWindowTitle("Exported SQL Viewer Help")19021902+ self.setWindowIcon(self.style().standardIcon(QStyle.SP_MessageBoxInformation))19031903+19041904+ self.text = QTextBrowser()19051905+ self.text.setHtml(glb_help_text)19061906+ self.text.setReadOnly(True)19071907+ self.text.setOpenExternalLinks(True)19081908+19091909+ self.setCentralWidget(self.text)19101910+20871911# Font resize2088191220891913def ResizeFont(widget, diff):···2313185123141852 self.window_menu = WindowMenu(self.mdi_area, menu)2315185318541854+ help_menu = menu.addMenu("&Help")18551855+ help_menu.addAction(CreateAction("&Exported SQL Viewer Help", "Helpful information", self.Help, self, QKeySequence.HelpContents))18561856+23161857 def Find(self):23171858 win = self.mdi_area.activeSubWindow()23181859 if win:···23531888 if event == "branches":23541889 label = "All branches" if branches_events == 1 else "All branches " + "(id=" + dbid + ")"23551890 reports_menu.addAction(CreateAction(label, "Create a new window displaying branch events", lambda x=dbid: self.NewBranchView(x), self))18911891+ label = "Selected branches" if branches_events == 1 else "Selected branches " + "(id=" + dbid + ")"18921892+ reports_menu.addAction(CreateAction(label, "Create a new window displaying branch events", lambda x=dbid: self.NewSelectedBranchView(x), self))2356189323571894 def TableMenu(self, tables, menu):23581895 table_menu = menu.addMenu("&Tables")···23671900 def NewBranchView(self, event_id):23681901 BranchWindow(self.glb, event_id, "", "", self)2369190219031903+ def NewSelectedBranchView(self, event_id):19041904+ dialog = SelectedBranchDialog(self.glb, self)19051905+ ret = dialog.exec_()19061906+ if ret:19071907+ BranchWindow(self.glb, event_id, dialog.name, dialog.where_clause, self)19081908+23701909 def NewTableView(self, table_name):23711910 TableWindow(self.glb, table_name, self)19111911+19121912+ def Help(self):19131913+ HelpWindow(self.glb, self)2372191423731915# XED Disassembler23741916···24051929class LibXED():2406193024071931 def __init__(self):24082408- self.libxed = CDLL("libxed.so")19321932+ try:19331933+ self.libxed = CDLL("libxed.so")19341934+ except:19351935+ self.libxed = None19361936+ if not self.libxed:19371937+ self.libxed = CDLL("/usr/local/lib/libxed.so")2409193824101939 self.xed_tables_init = self.libxed.xed_tables_init24111940 self.xed_tables_init.restype = None···2578209725792098def Main():25802099 if (len(sys.argv) < 2):25812581- print >> sys.stderr, "Usage is: exported-sql-viewer.py <database name>"21002100+ print >> sys.stderr, "Usage is: 
exported-sql-viewer.py {<database name> | --help-only}"25822101 raise Exception("Too few arguments")2583210225842103 dbname = sys.argv[1]21042104+ if dbname == "--help-only":21052105+ app = QApplication(sys.argv)21062106+ mainwindow = HelpOnlyWindow()21072107+ mainwindow.show()21082108+ err = app.exec_()21092109+ sys.exit(err)2585211025862111 is_sqlite3 = False25872112 try:
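
To make the dialog's WHERE-clause construction concrete: a comma-separated entry such as "0,5-6" in the CPUs field is split into single values and ranges, each range becomes a pair of >= / <= comparisons, and the remaining singles are collected into one IN clause, as in the SQLTableDialogDataItem hunks above. A minimal standalone sketch of that translation (ranges_to_where() is a name invented for illustration; the number validation and error reporting of the real method are omitted):

def ranges_to_where(column, input_string):
	# Mirror the parsing in SQLTableDialogDataItem: split on commas,
	# treat "a-b" as an inclusive range, everything else as a single value.
	singles = []
	clauses = []
	for value in [x.strip() for x in input_string.split(",")]:
		if "-" in value:
			lo, hi = value.split("-")
			clauses.append("(" + column + " >= " + lo + " AND " + column + " <= " + hi + ")")
		else:
			singles.append(value)
	if singles:
		clauses.append(column + " IN (" + ",".join(singles) + ")")
	return " OR ".join(clauses)

print(ranges_to_where("cpu", "0,5-6"))
# (cpu >= 5 AND cpu <= 6) OR cpu IN (0)

Ok() then ANDs the non-empty criteria together and wraps the result in " AND ( ... ) " so it can be appended to the report's base query.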
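The time-range semantics documented in section 1.3.1 of the help text are also worth pinning down. The script's actual parser is not part of these hunks, so resolve_endpoint() below is a hypothetical sketch: it resolves one endpoint of an already-split "A-B" entry, assuming that values carrying a unit suffix are offsets from the start of the trace, that negative values count back from the end, and that a bare number is an absolute nanosecond timestamp (splitting the entry itself, which must not mistake the range separator for a minus sign, is left out):

# Nanoseconds per unit, per the help text ("ms, us or ns").
UNITS = {"ns": 1, "us": 1000, "ms": 1000000}

def resolve_endpoint(value, trace_start, trace_end):
	if not value:
		return None  # open end: caller substitutes trace start or end
	negative = value.startswith("-")
	if negative:
		value = value[1:]
	for suffix in UNITS:
		if value.endswith(suffix):
			offset = int(value[:-len(suffix)]) * UNITS[suffix]
			# Negative offsets are measured back from the end of the trace.
			return trace_end - offset if negative else trace_start + offset
	return int(value)  # absolute timestamp in ns

With that reading, the help text's examples come out as expected: "10ms-" runs from trace_start + 10ms to the end of the trace, and "-10ms-" covers its final 10ms.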
···28282929 snprintf(path, sizeof(path), PATH_TO_CPU "cpu%u/cpufreq/%s",3030 cpu, fname);3131- return sysfs_read_file(path, buf, buflen);3131+ return cpupower_read_sysfs(path, buf, buflen);3232}33333434/* helper function to write a new value to a /sys file */
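
The cpupower hunk above is only a rename at the call site, but the helper's job is simple to picture: read one cpufreq attribute for a given CPU out of sysfs. A rough Python equivalent, assuming the usual /sys/devices/system/cpu/ base path (the PATH_TO_CPU macro in the C code; its exact value is an assumption here):

def read_cpufreq_attr(cpu, fname):
	# Same path layout as the snprintf() in the hunk above:
	# PATH_TO_CPU "cpu%u/cpufreq/%s"
	path = "/sys/devices/system/cpu/cpu%u/cpufreq/%s" % (cpu, fname)
	try:
		with open(path) as f:
			return f.read().strip()
	except IOError:
		return None  # the C helper likewise reports failure rather than crashing

For example, read_cpufreq_attr(0, "scaling_governor") would return the name of CPU 0's current governor.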
···1313 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF1414 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.1515 */1616-/* Test readlink /proc/self/map_files/... with address 0. */1616+/* Test readlink /proc/self/map_files/... with minimum address. */1717#include <errno.h>1818#include <sys/types.h>1919#include <sys/stat.h>···4747int main(void)4848{4949 const unsigned int PAGE_SIZE = sysconf(_SC_PAGESIZE);5050+#ifdef __arm__5151+ unsigned long va = 2 * PAGE_SIZE;5252+#else5353+ unsigned long va = 0;5454+#endif5055 void *p;5156 int fd;5257 unsigned long a, b;···6055 if (fd == -1)6156 return 1;62576363- p = mmap(NULL, PAGE_SIZE, PROT_NONE, MAP_PRIVATE|MAP_FILE|MAP_FIXED, fd, 0);5858+ p = mmap((void *)va, PAGE_SIZE, PROT_NONE, MAP_PRIVATE|MAP_FILE|MAP_FIXED, fd, 0);6459 if (p == MAP_FAILED) {6560 if (errno == EPERM)6661 return 2;
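
The map_files change above makes the test probe the lowest virtual address that can actually be mapped: 0 on most architectures, but 2 * PAGE_SIZE on arm, as the #ifdef reflects. The readlink() half of the test is easy to reproduce interactively, since each entry under /proc/self/map_files/ is named "start-end" in hex and resolves to the path of the backing file. A quick sketch (on older kernels, access to map_files may require extra privileges, matching the EPERM checks in the test):

import os

d = "/proc/self/map_files"
for name in sorted(os.listdir(d)):
	# prints lines such as "7f1c2a000000-7f1c2a021000 -> /usr/lib/ld-2.19.so"
	print("%s -> %s" % (name, os.readlink(os.path.join(d, name))))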