Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.20-rc5 into staging-next

We need the staging fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+12717 -6624
+1
.mailmap
···
  Peter Oruba <peter.oruba@amd.com>
  Pratyush Anand <pratyush.anand@gmail.com> <pratyush.anand@st.com>
  Praveen BP <praveenbp@ti.com>
+ Punit Agrawal <punitagrawal@gmail.com> <punit.agrawal@arm.com>
  Qais Yousef <qsyousef@gmail.com> <qais.yousef@imgtec.com>
  Oleksij Rempel <linux@rempel-privat.de> <bug-track@fisher-privat.net>
  Oleksij Rempel <linux@rempel-privat.de> <external.Oleksij.Rempel@de.bosch.com>
+8
CREDITS
···
  D: Soundblaster driver fixes, ISAPnP quirk
  S: California, USA

+ N: Jarkko Lavinen
+ E: jarkko.lavinen@nokia.com
+ D: OMAP MMC support
+
  N: Jonathan Layes
  D: ARPD support

···
  S: Post Office Box 371
  S: North Little Rock, Arkansas 72115
  S: USA
+
+ N: Christopher Li
+ E: sparse@chrisli.org
+ D: Sparse maintainer 2009 - 2018

  N: Stephan Linz
  E: linz@mazet.de
+2 -2
Documentation/ABI/testing/sysfs-class-led-trigger-pattern
···
  0-| / \/ \/
  +---0----1----2----3----4----5----6------------> time (s)

- 2. To make the LED go instantly from one brigntess value to another,
-    we should use use zero-time lengths (the brightness must be same as
+ 2. To make the LED go instantly from one brightness value to another,
+    we should use zero-time lengths (the brightness must be same as
     the previous tuple's). So the format should be:
     "brightness_1 duration_1 brightness_1 0 brightness_2 duration_2
     brightness_2 0 ...". For example:
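The zero-time-length rule above can be mechanised. A small helper (hypothetical, not part of the kernel tree) that turns a list of (brightness, duration) steps into the instant-transition pattern string described in the hunk above:

```python
def led_pattern(steps):
    """Build a pattern-trigger string for instant transitions.

    Each (brightness, duration_ms) step is emitted as "b d b 0":
    hold brightness b for d milliseconds, then the zero-time length
    makes the LED jump straight to the next tuple's brightness.
    """
    parts = []
    for brightness, duration in steps:
        parts.extend([brightness, duration, brightness, 0])
    return " ".join(str(v) for v in parts)

# Hold full brightness for 2 s, then off for 1 s, with hard edges.
print(led_pattern([(255, 2000), (0, 1000)]))  # → "255 2000 255 0 0 1000 0 0"
```

The resulting string would be written to the LED's `pattern` sysfs attribute.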
+62 -3
Documentation/admin-guide/kernel-parameters.txt
···
  		causing system reset or hang due to sending
  		INIT from AP to BSP.

- disable_counter_freezing [HW]
+ perf_v4_pmi=	[X86,INTEL]
+ 		Format: <bool>
  		Disable Intel PMU counter freezing feature.
  		The feature only exists starting from
  		Arch Perfmon v4 (Skylake and newer).
···
  		before loading.
  		See Documentation/blockdev/ramdisk.txt.

+ psi=		[KNL] Enable or disable pressure stall information
+ 		tracking.
+ 		Format: <bool>
+
  psmouse.proto=	[HW,MOUSE] Highest PS2 mouse protocol extension to
  		probe for; one of (bare|imps|exps|lifebook|any).
  psmouse.rate=	[HW,MOUSE] Set desired mouse report rate, in reports
···

  spectre_v2=	[X86] Control mitigation of Spectre variant 2
  		(indirect branch speculation) vulnerability.
+ 		The default operation protects the kernel from
+ 		user space attacks.

- 		on   - unconditionally enable
- 		off  - unconditionally disable
+ 		on   - unconditionally enable, implies
+ 		       spectre_v2_user=on
+ 		off  - unconditionally disable, implies
+ 		       spectre_v2_user=off
  		auto - kernel detects whether your CPU model is
  		       vulnerable
···
  		CONFIG_RETPOLINE configuration option, and the
  		compiler with which the kernel was built.

+ 		Selecting 'on' will also enable the mitigation
+ 		against user space to user space task attacks.
+
+ 		Selecting 'off' will disable both the kernel and
+ 		the user space protections.
+
  		Specific mitigations can also be selected manually:

  		retpoline - replace indirect branches
···

  		Not specifying this option is equivalent to
  		spectre_v2=auto.
+
+ spectre_v2_user=
+ 		[X86] Control mitigation of Spectre variant 2
+ 		(indirect branch speculation) vulnerability between
+ 		user space tasks
+
+ 		on      - Unconditionally enable mitigations. Is
+ 		          enforced by spectre_v2=on
+
+ 		off     - Unconditionally disable mitigations. Is
+ 		          enforced by spectre_v2=off
+
+ 		prctl   - Indirect branch speculation is enabled,
+ 		          but mitigation can be enabled via prctl
+ 		          per thread. The mitigation control state
+ 		          is inherited on fork.
+
+ 		prctl,ibpb
+ 		        - Like "prctl" above, but only STIBP is
+ 		          controlled per thread. IBPB is issued
+ 		          always when switching between different user
+ 		          space processes.
+
+ 		seccomp
+ 		        - Same as "prctl" above, but all seccomp
+ 		          threads will enable the mitigation unless
+ 		          they explicitly opt out.
+
+ 		seccomp,ibpb
+ 		        - Like "seccomp" above, but only STIBP is
+ 		          controlled per thread. IBPB is issued
+ 		          always when switching between different
+ 		          user space processes.
+
+ 		auto    - Kernel selects the mitigation depending on
+ 		          the available CPU features and vulnerability.
+
+ 		Default mitigation:
+ 		If CONFIG_SECCOMP=y then "seccomp", otherwise "prctl"
+
+ 		Not specifying this option is equivalent to
+ 		spectre_v2_user=auto.

  spec_store_bypass_disable=
  		[HW] Control Speculative Store Bypass (SSB) Disable mitigation
···
  			prevent spurious wakeup);
  			n = USB_QUIRK_DELAY_CTRL_MSG (Device needs a
  			pause after every control message);
+ 			o = USB_QUIRK_HUB_SLOW_RESET (Hub needs extra
+ 			delay after resetting its port);
  			Example: quirks=0781:5580:bk,0a5c:5834:gij

  usbhid.mousepoll=
+1 -1
Documentation/admin-guide/pm/cpufreq.rst
···
  a governor ``sysfs`` interface to it.  Next, the governor is started by
  invoking its ``->start()`` callback.

- That callback it expected to register per-CPU utilization update callbacks for
+ That callback is expected to register per-CPU utilization update callbacks for
  all of the online CPUs belonging to the given policy with the CPU scheduler.
  The utilization update callbacks will be invoked by the CPU scheduler on
  important events, like task enqueue and dequeue, on every iteration of the
+10 -9
Documentation/admin-guide/security-bugs.rst
···
  The security list is not a disclosure channel.  For that, see Coordination
  below.

- Once a robust fix has been developed, our preference is to release the
- fix in a timely fashion, treating it no differently than any of the other
- thousands of changes and fixes the Linux kernel project releases every
- month.
+ Once a robust fix has been developed, the release process starts.  Fixes
+ for publicly known bugs are released immediately.

- However, at the request of the reporter, we will postpone releasing the
- fix for up to 5 business days after the date of the report or after the
- embargo has lifted; whichever comes first.  The only exception to that
- rule is if the bug is publicly known, in which case the preference is to
- release the fix as soon as it's available.
+ Although our preference is to release fixes for publicly undisclosed bugs
+ as soon as they become available, this may be postponed at the request of
+ the reporter or an affected party for up to 7 calendar days from the start
+ of the release process, with an exceptional extension to 14 calendar days
+ if it is agreed that the criticality of the bug requires more time.  The
+ only valid reason for deferring the publication of a fix is to accommodate
+ the logistics of QA and large scale rollouts which require release
+ coordination.

  Whilst embargoed information may be shared with trusted individuals in
  order to develop a fix, such information will not be published alongside
+1
Documentation/arm64/silicon-errata.txt
···
  | ARM            | Cortex-A73      | #858921         | ARM64_ERRATUM_858921    |
  | ARM            | Cortex-A55      | #1024718        | ARM64_ERRATUM_1024718   |
  | ARM            | Cortex-A76      | #1188873        | ARM64_ERRATUM_1188873   |
+ | ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_1286807   |
  | ARM            | MMU-500         | #841119,#826419 | N/A                     |
  |                |                 |                 |                         |
  | Cavium         | ThunderX ITS    | #22375, #24313  | CAVIUM_ERRATUM_22375    |
+41 -11
Documentation/core-api/xarray.rst
···
  new entry and return the previous entry stored at that index.  You can
  use :c:func:`xa_erase` instead of calling :c:func:`xa_store` with a
  ``NULL`` entry.  There is no difference between an entry that has never
- been stored to and one that has most recently had ``NULL`` stored to it.
+ been stored to, one that has been erased and one that has most recently
+ had ``NULL`` stored to it.

  You can conditionally replace an entry at an index by using
  :c:func:`xa_cmpxchg`.  Like :c:func:`cmpxchg`, it will only succeed if
···
  indices.  Storing into one index may result in the entry retrieved by
  some, but not all of the other indices changing.

+ Sometimes you need to ensure that a subsequent call to :c:func:`xa_store`
+ will not need to allocate memory.  The :c:func:`xa_reserve` function
+ will store a reserved entry at the indicated index.  Users of the normal
+ API will see this entry as containing ``NULL``.  If you do not need to
+ use the reserved entry, you can call :c:func:`xa_release` to remove the
+ unused entry.  If another user has stored to the entry in the meantime,
+ :c:func:`xa_release` will do nothing; if instead you want the entry to
+ become ``NULL``, you should use :c:func:`xa_erase`.
+
+ If all entries in the array are ``NULL``, the :c:func:`xa_empty` function
+ will return ``true``.
+
  Finally, you can remove all entries from an XArray by calling
  :c:func:`xa_destroy`.  If the XArray entries are pointers, you may wish
  to free the entries first.  You can do this by iterating over all present
  entries in the XArray using the :c:func:`xa_for_each` iterator.

- ID assignment
- -------------
+ Allocating XArrays
+ ------------------
+
+ If you use :c:func:`DEFINE_XARRAY_ALLOC` to define the XArray, or
+ initialise it by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
+ the XArray changes to track whether entries are in use or not.

  You can call :c:func:`xa_alloc` to store the entry at any unused index
  in the XArray.  If you need to modify the array from interrupt context,
  you can use :c:func:`xa_alloc_bh` or :c:func:`xa_alloc_irq` to disable
- interrupts while allocating the ID.  Unlike :c:func:`xa_store`, allocating
- a ``NULL`` pointer does not delete an entry.  Instead it reserves an
- entry like :c:func:`xa_reserve` and you can release it using either
- :c:func:`xa_erase` or :c:func:`xa_release`.  To use ID assignment, the
- XArray must be defined with :c:func:`DEFINE_XARRAY_ALLOC`, or initialised
- by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
+ interrupts while allocating the ID.
+
+ Using :c:func:`xa_store`, :c:func:`xa_cmpxchg` or :c:func:`xa_insert`
+ will mark the entry as being allocated.  Unlike a normal XArray, storing
+ ``NULL`` will mark the entry as being in use, like :c:func:`xa_reserve`.
+ To free an entry, use :c:func:`xa_erase` (or :c:func:`xa_release` if
+ you only want to free the entry if it's ``NULL``).
+
+ You cannot use ``XA_MARK_0`` with an allocating XArray as this mark
+ is used to track whether an entry is free or not.  The other marks are
+ available for your use.

  Memory allocation
  -----------------
···
  Takes xa_lock internally:
   * :c:func:`xa_store`
+  * :c:func:`xa_store_bh`
+  * :c:func:`xa_store_irq`
   * :c:func:`xa_insert`
   * :c:func:`xa_erase`
   * :c:func:`xa_erase_bh`
···
   * :c:func:`xa_alloc`
   * :c:func:`xa_alloc_bh`
   * :c:func:`xa_alloc_irq`
+  * :c:func:`xa_reserve`
+  * :c:func:`xa_reserve_bh`
+  * :c:func:`xa_reserve_irq`
   * :c:func:`xa_destroy`
   * :c:func:`xa_set_mark`
   * :c:func:`xa_clear_mark`
···
   * :c:func:`__xa_erase`
   * :c:func:`__xa_cmpxchg`
   * :c:func:`__xa_alloc`
+  * :c:func:`__xa_reserve`
   * :c:func:`__xa_set_mark`
   * :c:func:`__xa_clear_mark`
···
  using :c:func:`xa_lock_irqsave` in both the interrupt handler and process
  context, or :c:func:`xa_lock_irq` in process context and :c:func:`xa_lock`
  in the interrupt handler.  Some of the more common patterns have helper
- functions such as :c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
+ functions such as :c:func:`xa_store_bh`, :c:func:`xa_store_irq`,
+ :c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.

  Sometimes you need to protect access to the XArray with a mutex because
  that lock sits above another mutex in the locking hierarchy.  That does
···
  - :c:func:`xa_is_zero`
  - Zero entries appear as ``NULL`` through the Normal API, but occupy
    an entry in the XArray which can be used to reserve the index for
-   future use.
+   future use.  This is used by allocating XArrays for allocated entries
+   which are ``NULL``.

  Other internal entries may be added in the future.  As far as possible, they
  will be handled by :c:func:`xas_retry`.
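The normal-API semantics changed in this hunk (an erased entry, a never-stored entry and a ``NULL``-stored entry all look identical; a reserved entry reads back as ``NULL``; ``xa_release`` drops only an unused reservation) can be illustrated with a toy model. This is plain Python mimicking the documented behaviour, not the kernel's XArray API:

```python
class ToyXArray:
    """Behavioural sketch of the XArray normal API described above."""

    _RESERVED = object()  # internal marker; normal-API readers see None

    def __init__(self):
        self._slots = {}

    def store(self, index, entry):
        if entry is None:  # storing NULL is the same as erasing
            self._slots.pop(index, None)
        else:
            self._slots[index] = entry

    def load(self, index):
        entry = self._slots.get(index)
        return None if entry is self._RESERVED else entry

    def erase(self, index):
        self._slots.pop(index, None)

    def reserve(self, index):
        # Hold the slot so a later store needs no allocation.
        self._slots.setdefault(index, self._RESERVED)

    def release(self, index):
        # Drop the reservation only if nobody stored a real entry since.
        if self._slots.get(index) is self._RESERVED:
            del self._slots[index]

    def empty(self):
        # True when every entry reads back as None through the normal API.
        return all(e is self._RESERVED for e in self._slots.values())

xa = ToyXArray()
xa.store(1, "entry")
xa.erase(1)
print(xa.load(1))  # None: an erased entry looks exactly like a never-stored one
```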
+5 -3
Documentation/cpu-freq/cpufreq-stats.txt
···
  This will give a fine grained information about all the CPU frequency
  transitions. The cat output here is a two dimensional matrix, where an entry
  <i,j> (row i, column j) represents the count of number of transitions from
- Freq_i to Freq_j. Freq_i is in descending order with increasing rows and
- Freq_j is in descending order with increasing columns. The output here also
- contains the actual freq values for each row and column for better readability.
+ Freq_i to Freq_j. Freq_i rows and Freq_j columns follow the sorting order in
+ which the driver has provided the frequency table initially to the cpufreq core
+ and so can be sorted (ascending or descending) or unsorted. The output here
+ also contains the actual freq values for each row and column for better
+ readability.

  If the transition table is bigger than PAGE_SIZE, reading this will
  return an -EFBIG error.
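A sketch of consuming that matrix from a script, assuming the layout described above: a banner line, a header row of destination frequencies, then one row per source frequency, all in driver-provided (possibly unsorted) order. The sample text below is illustrative, not real trans_table output:

```python
def parse_trans_table(text):
    """Parse a cpufreq trans_table dump into {(freq_from, freq_to): count}."""
    lines = [l for l in text.strip().splitlines() if l.strip()]
    # lines[0] is the "From : To" banner; lines[1] holds destination freqs.
    cols = [int(f) for f in lines[1].split(":", 1)[1].split()]
    counts = {}
    for row in lines[2:]:
        src, vals = row.split(":", 1)
        for dst, n in zip(cols, (int(v) for v in vals.split())):
            counts[(int(src), dst)] = n
    return counts

sample = """\
   From  :    To
         :   3600000   3400000   3200000
  3600000:         0         5         0
  3400000:         4         0         2
  3200000:         0         1         0
"""
table = parse_trans_table(sample)
print(table[(3400000, 3600000)])  # transitions from 3.4 GHz up to 3.6 GHz → 4
```

Because the rows and columns are no longer guaranteed to be sorted, keying by the actual frequency values (rather than by row/column position) is the safe way to read the table.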
+1 -1
Documentation/devicetree/bindings/arm/shmobile.txt
···
      compatible = "renesas,r8a77470"
  - RZ/G2M (R8A774A1)
      compatible = "renesas,r8a774a1"
- - RZ/G2E (RA8774C0)
+ - RZ/G2E (R8A774C0)
      compatible = "renesas,r8a774c0"
  - R-Car M1A (R8A77781)
      compatible = "renesas,r8a7778"
-65
Documentation/devicetree/bindings/cpufreq/arm_big_little_dt.txt
···
- Generic ARM big LITTLE cpufreq driver's DT glue
- -----------------------------------------------
-
- This is DT specific glue layer for generic cpufreq driver for big LITTLE
- systems.
-
- Both required and optional properties listed below must be defined
- under node /cpus/cpu@x. Where x is the first cpu inside a cluster.
-
- FIXME: Cpus should boot in the order specified in DT and all cpus for a cluster
- must be present contiguously. Generic DT driver will check only node 'x' for
- cpu:x.
-
- Required properties:
- - operating-points: Refer to Documentation/devicetree/bindings/opp/opp.txt
-   for details
-
- Optional properties:
- - clock-latency: Specify the possible maximum transition latency for clock,
-   in unit of nanoseconds.
-
- Examples:
-
- cpus {
- 	#address-cells = <1>;
- 	#size-cells = <0>;
-
- 	cpu@0 {
- 		compatible = "arm,cortex-a15";
- 		reg = <0>;
- 		next-level-cache = <&L2>;
- 		operating-points = <
- 			/* kHz    uV */
- 			792000  1100000
- 			396000  950000
- 			198000  850000
- 		>;
- 		clock-latency = <61036>; /* two CLK32 periods */
- 	};
-
- 	cpu@1 {
- 		compatible = "arm,cortex-a15";
- 		reg = <1>;
- 		next-level-cache = <&L2>;
- 	};
-
- 	cpu@100 {
- 		compatible = "arm,cortex-a7";
- 		reg = <100>;
- 		next-level-cache = <&L2>;
- 		operating-points = <
- 			/* kHz    uV */
- 			792000  950000
- 			396000  750000
- 			198000  450000
- 		>;
- 		clock-latency = <61036>; /* two CLK32 periods */
- 	};
-
- 	cpu@101 {
- 		compatible = "arm,cortex-a7";
- 		reg = <101>;
- 		next-level-cache = <&L2>;
- 	};
- };
+6 -2
Documentation/devicetree/bindings/i2c/i2c-omap.txt
···
  I2C for OMAP platforms

  Required properties :
- - compatible : Must be "ti,omap2420-i2c", "ti,omap2430-i2c", "ti,omap3-i2c"
-   or "ti,omap4-i2c"
+ - compatible : Must be
+ 	"ti,omap2420-i2c" for OMAP2420 SoCs
+ 	"ti,omap2430-i2c" for OMAP2430 SoCs
+ 	"ti,omap3-i2c" for OMAP3 SoCs
+ 	"ti,omap4-i2c" for OMAP4+ SoCs
+ 	"ti,am654-i2c", "ti,omap4-i2c" for AM654 SoCs
  - ti,hwmods : Must be "i2c<n>", n being the instance number (1-based)
  - #address-cells = <1>;
  - #size-cells = <0>;
+1 -1
Documentation/devicetree/bindings/net/can/holt_hi311x.txt
···
      reg = <1>;
      clocks = <&clk32m>;
      interrupt-parent = <&gpio4>;
-     interrupts = <13 IRQ_TYPE_EDGE_RISING>;
+     interrupts = <13 IRQ_TYPE_LEVEL_HIGH>;
      vdd-supply = <&reg5v0>;
      xceiver-supply = <&reg5v0>;
  };
+18 -10
Documentation/devicetree/bindings/net/can/rcar_can.txt
···
  - compatible: "renesas,can-r8a7743" if CAN controller is a part of R8A7743 SoC.
    "renesas,can-r8a7744" if CAN controller is a part of R8A7744 SoC.
    "renesas,can-r8a7745" if CAN controller is a part of R8A7745 SoC.
+   "renesas,can-r8a774a1" if CAN controller is a part of R8A774A1 SoC.
    "renesas,can-r8a7778" if CAN controller is a part of R8A7778 SoC.
    "renesas,can-r8a7779" if CAN controller is a part of R8A7779 SoC.
    "renesas,can-r8a7790" if CAN controller is a part of R8A7790 SoC.
···
    "renesas,can-r8a7794" if CAN controller is a part of R8A7794 SoC.
    "renesas,can-r8a7795" if CAN controller is a part of R8A7795 SoC.
    "renesas,can-r8a7796" if CAN controller is a part of R8A7796 SoC.
+   "renesas,can-r8a77965" if CAN controller is a part of R8A77965 SoC.
    "renesas,rcar-gen1-can" for a generic R-Car Gen1 compatible device.
    "renesas,rcar-gen2-can" for a generic R-Car Gen2 or RZ/G1
    compatible device.
-   "renesas,rcar-gen3-can" for a generic R-Car Gen3 compatible device.
+   "renesas,rcar-gen3-can" for a generic R-Car Gen3 or RZ/G2
+   compatible device.
    When compatible with the generic version, nodes must list the
    SoC-specific version corresponding to the platform first
    followed by the generic version.

  - reg: physical base address and size of the R-Car CAN register map.
  - interrupts: interrupt specifier for the sole interrupt.
- - clocks: phandles and clock specifiers for 3 CAN clock inputs.
- - clock-names: 3 clock input name strings: "clkp1", "clkp2", "can_clk".
+ - clocks: phandles and clock specifiers for 2 CAN clock inputs for RZ/G2
+   devices.
+   phandles and clock specifiers for 3 CAN clock inputs for every other
+   SoC.
+ - clock-names: 2 clock input name strings for RZ/G2: "clkp1", "can_clk".
+   3 clock input name strings for every other SoC: "clkp1", "clkp2",
+   "can_clk".
  - pinctrl-0: pin control group to be used for this controller.
  - pinctrl-names: must be "default".

- Required properties for "renesas,can-r8a7795" and "renesas,can-r8a7796"
- compatible:
- In R8A7795 and R8A7796 SoCs, "clkp2" can be CANFD clock. This is a div6 clock
- and can be used by both CAN and CAN FD controller at the same time. It needs to
- be scaled to maximum frequency if any of these controllers use it. This is done
+ Required properties for R8A7795, R8A7796 and R8A77965:
+ For the denoted SoCs, "clkp2" can be CANFD clock. This is a div6 clock and can
+ be used by both CAN and CAN FD controller at the same time. It needs to be
+ scaled to maximum frequency if any of these controllers use it. This is done
  using the below properties:

  - assigned-clocks: phandle of clkp2(CANFD) clock.
···
  Optional properties:
  - renesas,can-clock-select: R-Car CAN Clock Source Select. Valid values are:
    <0x0> (default) : Peripheral clock (clkp1)
-   <0x1> : Peripheral clock (clkp2)
-   <0x3> : Externally input clock
+   <0x1> : Peripheral clock (clkp2) (not supported by
+           RZ/G2 devices)
+   <0x3> : External input clock

  Example
  -------
+1 -1
Documentation/devicetree/bindings/net/dsa/dsa.txt
···
  Current Binding
  ---------------

- Switches are true Linux devices and can be probes by any means. Once
+ Switches are true Linux devices and can be probed by any means. Once
  probed, they register to the DSA framework, passing a node
  pointer. This node is expected to fulfil the following binding, and
  may contain additional properties as required by the device it is
+24 -9
Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt
···
    "ref" for 19.2 MHz ref clk,
    "com_aux" for phy common block aux clock,
    "ref_aux" for phy reference aux clock,
+
+   For "qcom,ipq8074-qmp-pcie-phy": no clocks are listed.
    For "qcom,msm8996-qmp-pcie-phy" must contain:
    "aux", "cfg_ahb", "ref".
    For "qcom,msm8996-qmp-usb3-phy" must contain:
    "aux", "cfg_ahb", "ref".
-   For "qcom,qmp-v3-usb3-phy" must contain:
+   For "qcom,sdm845-qmp-usb3-phy" must contain:
    "aux", "cfg_ahb", "ref", "com_aux".
+   For "qcom,sdm845-qmp-usb3-uni-phy" must contain:
+   "aux", "cfg_ahb", "ref", "com_aux".
+   For "qcom,sdm845-qmp-ufs-phy" must contain:
+   "ref", "ref_aux".

  - resets: a list of phandles and reset controller specifier pairs,
    one for each entry in reset-names.
  - reset-names: "phy" for reset of phy block,
    "common" for phy common block reset,
-   "cfg" for phy's ahb cfg block reset (Optional).
-   For "qcom,msm8996-qmp-pcie-phy" must contain:
-   "phy", "common", "cfg".
-   For "qcom,msm8996-qmp-usb3-phy" must contain
-   "phy", "common".
+   "cfg" for phy's ahb cfg block reset.
+
    For "qcom,ipq8074-qmp-pcie-phy" must contain:
    "phy", "common".
+   For "qcom,msm8996-qmp-pcie-phy" must contain:
+   "phy", "common", "cfg".
+   For "qcom,msm8996-qmp-usb3-phy" must contain
+   "phy", "common".
+   For "qcom,sdm845-qmp-usb3-phy" must contain:
+   "phy", "common".
+   For "qcom,sdm845-qmp-usb3-uni-phy" must contain:
+   "phy", "common".
+   For "qcom,sdm845-qmp-ufs-phy": no resets are listed.

  - vdda-phy-supply: Phandle to a regulator supply to PHY core block.
  - vdda-pll-supply: Phandle to 1.8V regulator supply to PHY refclk pll block.
···

  - #phy-cells: must be 0

+ Required properties child node of pcie and usb3 qmp phys:
  - clocks: a list of phandles and clock-specifier pairs,
    one for each entry in clock-names.
- - clock-names: Must contain following for pcie and usb qmp phys:
+ - clock-names: Must contain following:
    "pipe<lane-number>" for pipe clock specific to each lane.
  - clock-output-names: Name of the PHY clock that will be the parent for
    the above pipe clock.
···
    (or)
    "pcie20_phy1_pipe_clk"

+ Required properties for child node of PHYs with lane reset, AKA:
+ "qcom,msm8996-qmp-pcie-phy"
  - resets: a list of phandles and reset controller specifier pairs,
    one for each entry in reset-names.
- - reset-names: Must contain following for pcie qmp phys:
+ - reset-names: Must contain following:
    "lane<lane-number>" for reset specific to each lane.

  Example:
+8 -6
Documentation/devicetree/bindings/spi/spi-uniphier.txt
···
  Required properties:
  - compatible: should be "socionext,uniphier-scssi"
  - reg: address and length of the spi master registers
- - #address-cells: must be <1>, see spi-bus.txt
- - #size-cells: must be <0>, see spi-bus.txt
- - clocks: A phandle to the clock for the device.
- - resets: A phandle to the reset control for the device.
+ - interrupts: a single interrupt specifier
+ - pinctrl-names: should be "default"
+ - pinctrl-0: pin control state for the default mode
+ - clocks: a phandle to the clock for the device
+ - resets: a phandle to the reset control for the device

  Example:

  spi0: spi@54006000 {
      compatible = "socionext,uniphier-scssi";
      reg = <0x54006000 0x100>;
-     #address-cells = <1>;
-     #size-cells = <0>;
+     interrupts = <0 39 4>;
+     pinctrl-names = "default";
+     pinctrl-0 = <&pinctrl_spi0>;
      clocks = <&peri_clk 11>;
      resets = <&peri_rst 11>;
  };
+18
Documentation/i2c/busses/i2c-nvidia-gpu
···
+ Kernel driver i2c-nvidia-gpu
+
+ Datasheet: not publicly available.
+
+ Authors:
+ 	Ajay Gupta <ajayg@nvidia.com>
+
+ Description
+ -----------
+
+ i2c-nvidia-gpu is a driver for I2C controller included in NVIDIA Turing
+ and later GPUs and it is used to communicate with Type-C controller on GPUs.
+
+ If your 'lspci -v' listing shows something like the following,
+
+ 01:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device 1ad9 (rev a1)
+
+ then this driver should support the I2C controller of your GPU.
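For scripted detection, a sketch that matches lspci output against the sample line given in the new document. It keys on the vendor name and the serial-bus class code `[0c80]` rather than the device name, since the device string varies by GPU; the sample line is the one from the document, not a guaranteed format for every lspci invocation:

```python
def has_gpu_i2c_candidate(lspci_output):
    """Return True if any lspci line looks like the NVIDIA GPU
    serial bus (Type-C/I2C) controller described above."""
    for line in lspci_output.splitlines():
        if "Serial bus controller [0c80]" in line and "NVIDIA" in line:
            return True
    return False

sample = ("01:00.3 Serial bus controller [0c80]: "
          "NVIDIA Corporation Device 1ad9 (rev a1)")
print(has_gpu_i2c_candidate(sample))  # → True
```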
+1 -10
Documentation/input/event-codes.rst
···
  * REL_WHEEL, REL_HWHEEL:

    - These codes are used for vertical and horizontal scroll wheels,
-     respectively. The value is the number of "notches" moved on the wheel, the
-     physical size of which varies by device. For high-resolution wheels (which
-     report multiple events for each notch of movement, or do not have notches)
-     this may be an approximation based on the high-resolution scroll events.
-
- * REL_WHEEL_HI_RES:
-
-   - If a vertical scroll wheel supports high-resolution scrolling, this code
-     will be emitted in addition to REL_WHEEL. The value is the (approximate)
-     distance travelled by the user's finger, in microns.
+     respectively.

  EV_ABS
  ------
+1 -1
Documentation/media/uapi/v4l/dev-meta.rst
···
  the desired operation. Both drivers and applications must set the remainder of
  the :c:type:`v4l2_format` structure to 0.

- .. _v4l2-meta-format:
+ .. c:type:: v4l2_meta_format

  .. tabularcolumns:: |p{1.4cm}|p{2.2cm}|p{13.9cm}|
+5
Documentation/media/uapi/v4l/vidioc-g-fmt.rst
···
      - Definition of a data format, see :ref:`pixfmt`, used by SDR
        capture and output devices.
    * -
+     - struct :c:type:`v4l2_meta_format`
+     - ``meta``
+     - Definition of a metadata format, see :ref:`meta-formats`, used by
+       metadata capture devices.
+   * -
      - __u8
      - ``raw_data``\ [200]
      - Place holder for future extensions.
+11 -6
Documentation/networking/rxrpc.txt
···

      u32 rxrpc_kernel_check_life(struct socket *sock,
                                  struct rxrpc_call *call);
+     void rxrpc_kernel_probe_life(struct socket *sock,
+                                  struct rxrpc_call *call);

-     This returns a number that is updated when ACKs are received from the peer
-     (notably including PING RESPONSE ACKs which we can elicit by sending PING
-     ACKs to see if the call still exists on the server).  The caller should
-     compare the numbers of two calls to see if the call is still alive after
-     waiting for a suitable interval.
+     The first function returns a number that is updated when ACKs are received
+     from the peer (notably including PING RESPONSE ACKs which we can elicit by
+     sending PING ACKs to see if the call still exists on the server).  The
+     caller should compare the numbers of two calls to see if the call is still
+     alive after waiting for a suitable interval.

      This allows the caller to work out if the server is still contactable and
      if the call is still alive on the server whilst waiting for the server to
      process a client operation.

-     This function may transmit a PING ACK.
+     The second function causes a ping ACK to be transmitted to try to provoke
+     the peer into responding, which would then cause the value returned by the
+     first function to change.  Note that this must be called in TASK_RUNNING
+     state.

  (*) Get reply timestamp.
+9
Documentation/userspace-api/spec_ctrl.rst
···
   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_ENABLE, 0, 0);
   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_DISABLE, 0, 0);
   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_FORCE_DISABLE, 0, 0);
+
+ - PR_SPEC_INDIR_BRANCH: Indirect Branch Speculation in User Processes
+   (Mitigate Spectre V2 style attacks against user processes)
+
+   Invocations:
+    * prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);
+    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_ENABLE, 0, 0);
+    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_DISABLE, 0, 0);
+    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);
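The PR_GET_SPECULATION_CTRL return value for either misfeature is a small bitmask. A sketch of decoding it, with the bit values copied from include/uapi/linux/prctl.h (the actual syscall would be made via libc's prctl(2) on Linux; only the decoding is shown here):

```python
# Bit values from include/uapi/linux/prctl.h.
PR_SPEC_NOT_AFFECTED = 0
PR_SPEC_PRCTL = 1 << 0          # mitigation can be controlled per task
PR_SPEC_ENABLE = 1 << 1         # speculation is enabled (unmitigated)
PR_SPEC_DISABLE = 1 << 2        # speculation is disabled (mitigated)
PR_SPEC_FORCE_DISABLE = 1 << 3  # disabled and cannot be re-enabled

def describe_spec_ctrl(status):
    """Turn a PR_GET_SPECULATION_CTRL return value into a summary string."""
    if status == PR_SPEC_NOT_AFFECTED:
        return "not affected"
    parts = []
    if status & PR_SPEC_PRCTL:
        parts.append("per-task control available")
    if status & PR_SPEC_ENABLE:
        parts.append("speculation enabled")
    if status & PR_SPEC_DISABLE:
        parts.append("speculation disabled")
    if status & PR_SPEC_FORCE_DISABLE:
        parts.append("force disabled")
    return ", ".join(parts)

print(describe_spec_ctrl(PR_SPEC_PRCTL | PR_SPEC_ENABLE))
```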
+1 -31
Documentation/x86/boot.txt
···
  		to struct boot_params for loading bzImage and ramdisk
  		above 4G in 64bit.

- Protocol 2.13:	(Kernel 3.14) Support 32- and 64-bit flags being set in
- 		xloadflags to support booting a 64-bit kernel from 32-bit
- 		EFI
-
- Protocol 2.14:	(Kernel 4.20) Added acpi_rsdp_addr holding the physical
- 		address of the ACPI RSDP table.
- 		The bootloader updates version with:
- 		0x8000 | min(kernel-version, bootloader-version)
- 		kernel-version being the protocol version supported by
- 		the kernel and bootloader-version the protocol version
- 		supported by the bootloader.
-
  **** MEMORY LAYOUT

  The traditional memory map for the kernel loader, used for Image or
···
  0258/8	2.10+	pref_address	Preferred loading address
  0260/4	2.10+	init_size	Linear memory required during initialization
  0264/4	2.11+	handover_offset	Offset of handover entry point
- 0268/8	2.14+	acpi_rsdp_addr	Physical address of RSDP table

  (1) For backwards compatibility, if the setup_sects field contains 0, the
      real value is 4.
···
  Contains the magic number "HdrS" (0x53726448).

  Field name:	version
- Type:		modify
+ Type:		read
  Offset/size:	0x206/2
  Protocol:	2.00+

  Contains the boot protocol version, in (major << 8)+minor format,
  e.g. 0x0204 for version 2.04, and 0x0a11 for a hypothetical version
  10.17.
-
- Up to protocol version 2.13 this information is only read by the
- bootloader.  From protocol version 2.14 onwards the bootloader will
- write the used protocol version or-ed with 0x8000 to the field.  The
- used protocol version will be the minimum of the supported protocol
- versions of the bootloader and the kernel.

  Field name:	realmode_swtch
  Type:		modify (optional)
···
  handover protocol to boot the kernel should jump to this offset.

  See EFI HANDOVER PROTOCOL below for more details.
-
- Field name:	acpi_rsdp_addr
- Type:		write
- Offset/size:	0x268/8
- Protocol:	2.14+
-
- This field can be set by the boot loader to tell the kernel the
- physical address of the ACPI RSDP table.
-
- A value of 0 indicates the kernel should fall back to the standard
- methods to locate the RSDP.


  **** THE IMAGE CHECKSUM
+18 -16
Documentation/x86/x86_64/mm.txt
···
  ____________________________________________________________|___________________________________________________________
                    |            |                  |         |
   ffff800000000000 |  -128   TB | ffff87ffffffffff |    8 TB | ... guard hole, also reserved for hypervisor
-  ffff880000000000 |  -120   TB | ffffc7ffffffffff |   64 TB | direct mapping of all physical memory (page_offset_base)
-  ffffc80000000000 |   -56   TB | ffffc8ffffffffff |    1 TB | ... unused hole
+  ffff880000000000 |  -120   TB | ffff887fffffffff |  0.5 TB | LDT remap for PTI
+  ffff888000000000 | -119.5  TB | ffffc87fffffffff |   64 TB | direct mapping of all physical memory (page_offset_base)
+  ffffc88000000000 |  -55.5  TB | ffffc8ffffffffff |  0.5 TB | ... unused hole
   ffffc90000000000 |   -55   TB | ffffe8ffffffffff |   32 TB | vmalloc/ioremap space (vmalloc_base)
   ffffe90000000000 |   -23   TB | ffffe9ffffffffff |    1 TB | ... unused hole
   ffffea0000000000 |   -22   TB | ffffeaffffffffff |    1 TB | virtual memory map (vmemmap_base)
   ffffeb0000000000 |   -21   TB | ffffebffffffffff |    1 TB | ... unused hole
   ffffec0000000000 |   -20   TB | fffffbffffffffff |   16 TB | KASAN shadow memory
+ __________________|____________|__________________|_________|____________________________________________________________
+                                                             |
+                                                             | Identical layout to the 56-bit one from here on:
+ ____________________________________________________________|____________________________________________________________
+                   |            |                  |         |
   fffffc0000000000 |    -4   TB | fffffdffffffffff |    2 TB | ... unused hole
                    |            |                  |         | vaddr_end for KASLR
   fffffe0000000000 |    -2   TB | fffffe7fffffffff |  0.5 TB | cpu_entry_area mapping
-  fffffe8000000000 |  -1.5   TB | fffffeffffffffff |  0.5 TB | LDT remap for PTI
+  fffffe8000000000 |  -1.5   TB | fffffeffffffffff |  0.5 TB | ... unused hole
   ffffff0000000000 |    -1   TB | ffffff7fffffffff |  0.5 TB | %esp fixup stacks
- __________________|____________|__________________|_________|____________________________________________________________
-                                                             |
-                                                             | Identical layout to the 47-bit one from here on:
- ____________________________________________________________|____________________________________________________________
-                   |            |                  |         |
   ffffff8000000000 |  -512   GB | ffffffeeffffffff |  444 GB | ... unused hole
   ffffffef00000000 |   -68   GB | fffffffeffffffff |   64 GB | EFI region mapping space
   ffffffff00000000 |    -4   GB | ffffffff7fffffff |    2 GB | ... unused hole
···
  __________________|____________|__________________|_________|___________________________________________________________
                    |            |                  |         |
   0000800000000000 |  +64    PB | ffff7fffffffffff | ~16K PB | ... huge, still almost 64 bits wide hole of non-canonical
-                   |            |                  |         |     virtual memory addresses up to the -128 TB
+                   |            |                  |         |     virtual memory addresses up to the -64 PB
                    |            |                  |         |     starting offset of kernel mappings.
···
  ____________________________________________________________|___________________________________________________________
                    |            |                  |         |
   ff00000000000000 |  -64    PB | ff0fffffffffffff |    4 PB | ... guard hole, also reserved for hypervisor
-  ff10000000000000 |  -60    PB | ff8fffffffffffff |   32 PB | direct mapping of all physical memory (page_offset_base)
-  ff90000000000000 |  -28    PB | ff9fffffffffffff |    4 PB | LDT remap for PTI
+  ff10000000000000 |  -60    PB | ff10ffffffffffff | 0.25 PB | LDT remap for PTI
+  ff11000000000000 | -59.75  PB | ff90ffffffffffff |   32 PB | direct mapping of all physical memory (page_offset_base)
+  ff91000000000000 | -27.75  PB | ff9fffffffffffff | 3.75 PB | ... unused hole
   ffa0000000000000 |  -24    PB | ffd1ffffffffffff | 12.5 PB | vmalloc/ioremap space (vmalloc_base)
   ffd2000000000000 | -11.5   PB | ffd3ffffffffffff |  0.5 PB | ... unused hole
   ffd4000000000000 |  -11    PB | ffd5ffffffffffff |  0.5 PB | virtual memory map (vmemmap_base)
   ffd6000000000000 | -10.5   PB | ffdeffffffffffff | 2.25 PB | ... unused hole
   ffdf000000000000 | -8.25   PB | fffffdffffffffff |   ~8 PB | KASAN shadow memory
-  fffffc0000000000 |    -4   TB | fffffdffffffffff |    2 TB | ... unused hole
-                   |            |                  |         | vaddr_end for KASLR
-  fffffe0000000000 |    -2   TB | fffffe7fffffffff |  0.5 TB | cpu_entry_area mapping
-  fffffe8000000000 |  -1.5   TB | fffffeffffffffff |  0.5 TB | ... unused hole
-  ffffff0000000000 |    -1   TB | ffffff7fffffffff |  0.5 TB | %esp fixup stacks
  __________________|____________|__________________|_________|____________________________________________________________
                                                              |
                                                              | Identical layout to the 47-bit one from here on:
  ____________________________________________________________|____________________________________________________________
                    |            |                  |         |
+  fffffc0000000000 |    -4   TB | fffffdffffffffff |    2 TB | ... unused hole
+                   |            |                  |         | vaddr_end for KASLR
+  fffffe0000000000 |    -2   TB | fffffe7fffffffff |  0.5 TB | cpu_entry_area mapping
+  fffffe8000000000 |  -1.5   TB | fffffeffffffffff |  0.5 TB | ... unused hole
+  ffffff0000000000 |    -1   TB | ffffff7fffffffff |  0.5 TB | %esp fixup stacks
   ffffff8000000000 |  -512   GB | ffffffeeffffffff |  444 GB | ... unused hole
   ffffffef00000000 |   -68   GB | fffffffeffffffff |   64 GB | EFI region mapping space
   ffffffff00000000 |    -4   GB | ffffffff7fffffff |    2 GB | ... unused hole
+1 -1
Documentation/x86/zero-page.txt
··· 25 25 0C8/004 ALL ext_cmd_line_ptr cmd_line_ptr high 32bits 26 26 140/080 ALL edid_info Video mode setup (struct edid_info) 27 27 1C0/020 ALL efi_info EFI 32 information (struct efi_info) 28 - 1E0/004 ALL alk_mem_k Alternative mem check, in KB 28 + 1E0/004 ALL alt_mem_k Alternative mem check, in KB 29 29 1E4/004 ALL scratch Scratch field for the kernel setup code 30 30 1E8/001 ALL e820_entries Number of entries in e820_table (below) 31 31 1E9/001 ALL eddbuf_entries Number of entries in eddbuf (below)
+151 -57
MAINTAINERS
··· 180 180 181 181 8169 10/100/1000 GIGABIT ETHERNET DRIVER 182 182 M: Realtek linux nic maintainers <nic_swsd@realtek.com> 183 + M: Heiner Kallweit <hkallweit1@gmail.com> 183 184 L: netdev@vger.kernel.org 184 185 S: Maintained 185 186 F: drivers/net/ethernet/realtek/r8169.c ··· 718 717 F: include/dt-bindings/reset/altr,rst-mgr-a10sr.h 719 718 720 719 ALTERA TRIPLE SPEED ETHERNET DRIVER 721 - M: Vince Bridgers <vbridger@opensource.altera.com> 720 + M: Thor Thayer <thor.thayer@linux.intel.com> 722 721 L: netdev@vger.kernel.org 723 722 L: nios2-dev@lists.rocketboards.org (moderated for non-subscribers) 724 723 S: Maintained ··· 1931 1930 M: Andy Gross <andy.gross@linaro.org> 1932 1931 M: David Brown <david.brown@linaro.org> 1933 1932 L: linux-arm-msm@vger.kernel.org 1934 - L: linux-soc@vger.kernel.org 1935 1933 S: Maintained 1936 1934 F: Documentation/devicetree/bindings/soc/qcom/ 1937 1935 F: arch/arm/boot/dts/qcom-*.dts ··· 2498 2498 ATHEROS ATH5K WIRELESS DRIVER 2499 2499 M: Jiri Slaby <jirislaby@gmail.com> 2500 2500 M: Nick Kossifidis <mickflemm@gmail.com> 2501 - M: "Luis R. 
Rodriguez" <mcgrof@do-not-panic.com> 2501 + M: Luis Chamberlain <mcgrof@kernel.org> 2502 2502 L: linux-wireless@vger.kernel.org 2503 2503 W: http://wireless.kernel.org/en/users/Drivers/ath5k 2504 2504 S: Maintained ··· 2808 2808 T: git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git 2809 2809 Q: https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147 2810 2810 S: Supported 2811 - F: arch/x86/net/bpf_jit* 2811 + F: arch/*/net/* 2812 2812 F: Documentation/networking/filter.txt 2813 2813 F: Documentation/bpf/ 2814 2814 F: include/linux/bpf* ··· 2827 2827 F: tools/bpf/ 2828 2828 F: tools/lib/bpf/ 2829 2829 F: tools/testing/selftests/bpf/ 2830 + 2831 + BPF JIT for ARM 2832 + M: Shubham Bansal <illusionist.neo@gmail.com> 2833 + L: netdev@vger.kernel.org 2834 + S: Maintained 2835 + F: arch/arm/net/ 2836 + 2837 + BPF JIT for ARM64 2838 + M: Daniel Borkmann <daniel@iogearbox.net> 2839 + M: Alexei Starovoitov <ast@kernel.org> 2840 + M: Zi Shen Lim <zlim.lnx@gmail.com> 2841 + L: netdev@vger.kernel.org 2842 + S: Supported 2843 + F: arch/arm64/net/ 2844 + 2845 + BPF JIT for MIPS (32-BIT AND 64-BIT) 2846 + M: Paul Burton <paul.burton@mips.com> 2847 + L: netdev@vger.kernel.org 2848 + S: Maintained 2849 + F: arch/mips/net/ 2850 + 2851 + BPF JIT for NFP NICs 2852 + M: Jakub Kicinski <jakub.kicinski@netronome.com> 2853 + L: netdev@vger.kernel.org 2854 + S: Supported 2855 + F: drivers/net/ethernet/netronome/nfp/bpf/ 2856 + 2857 + BPF JIT for POWERPC (32-BIT AND 64-BIT) 2858 + M: Naveen N. 
Rao <naveen.n.rao@linux.ibm.com> 2859 + M: Sandipan Das <sandipan@linux.ibm.com> 2860 + L: netdev@vger.kernel.org 2861 + S: Maintained 2862 + F: arch/powerpc/net/ 2863 + 2864 + BPF JIT for S390 2865 + M: Martin Schwidefsky <schwidefsky@de.ibm.com> 2866 + M: Heiko Carstens <heiko.carstens@de.ibm.com> 2867 + L: netdev@vger.kernel.org 2868 + S: Maintained 2869 + F: arch/s390/net/ 2870 + X: arch/s390/net/pnet.c 2871 + 2872 + BPF JIT for SPARC (32-BIT AND 64-BIT) 2873 + M: David S. Miller <davem@davemloft.net> 2874 + L: netdev@vger.kernel.org 2875 + S: Maintained 2876 + F: arch/sparc/net/ 2877 + 2878 + BPF JIT for X86 32-BIT 2879 + M: Wang YanQing <udknight@gmail.com> 2880 + L: netdev@vger.kernel.org 2881 + S: Maintained 2882 + F: arch/x86/net/bpf_jit_comp32.c 2883 + 2884 + BPF JIT for X86 64-BIT 2885 + M: Alexei Starovoitov <ast@kernel.org> 2886 + M: Daniel Borkmann <daniel@iogearbox.net> 2887 + L: netdev@vger.kernel.org 2888 + S: Supported 2889 + F: arch/x86/net/ 2890 + X: arch/x86/net/bpf_jit_comp32.c 2830 2891 2831 2892 BROADCOM B44 10/100 ETHERNET DRIVER 2832 2893 M: Michael Chan <michael.chan@broadcom.com> ··· 2929 2868 BROADCOM BCM47XX MIPS ARCHITECTURE 2930 2869 M: Hauke Mehrtens <hauke@hauke-m.de> 2931 2870 M: Rafał Miłecki <zajec5@gmail.com> 2932 - L: linux-mips@linux-mips.org 2871 + L: linux-mips@vger.kernel.org 2933 2872 S: Maintained 2934 2873 F: Documentation/devicetree/bindings/mips/brcm/ 2935 2874 F: arch/mips/bcm47xx/* ··· 2938 2877 BROADCOM BCM5301X ARM ARCHITECTURE 2939 2878 M: Hauke Mehrtens <hauke@hauke-m.de> 2940 2879 M: Rafał Miłecki <zajec5@gmail.com> 2941 - M: Jon Mason <jonmason@broadcom.com> 2942 2880 M: bcm-kernel-feedback-list@broadcom.com 2943 2881 L: linux-arm-kernel@lists.infradead.org 2944 2882 S: Maintained ··· 2992 2932 BROADCOM BMIPS MIPS ARCHITECTURE 2993 2933 M: Kevin Cernekee <cernekee@gmail.com> 2994 2934 M: Florian Fainelli <f.fainelli@gmail.com> 2995 - L: linux-mips@linux-mips.org 2935 + L: linux-mips@vger.kernel.org 2996 2936 
T: git git://github.com/broadcom/stblinux.git 2997 2937 S: Maintained 2998 2938 F: arch/mips/bmips/* ··· 3083 3023 BROADCOM IPROC ARM ARCHITECTURE 3084 3024 M: Ray Jui <rjui@broadcom.com> 3085 3025 M: Scott Branden <sbranden@broadcom.com> 3086 - M: Jon Mason <jonmason@broadcom.com> 3087 3026 M: bcm-kernel-feedback-list@broadcom.com 3088 3027 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 3089 3028 T: git git://github.com/broadcom/cygnus-linux.git ··· 3129 3070 3130 3071 BROADCOM NVRAM DRIVER 3131 3072 M: Rafał Miłecki <zajec5@gmail.com> 3132 - L: linux-mips@linux-mips.org 3073 + L: linux-mips@vger.kernel.org 3133 3074 S: Maintained 3134 3075 F: drivers/firmware/broadcom/* 3135 3076 ··· 3342 3283 F: include/uapi/linux/caif/ 3343 3284 F: include/net/caif/ 3344 3285 F: net/caif/ 3286 + 3287 + CAKE QDISC 3288 + M: Toke Høiland-Jørgensen <toke@toke.dk> 3289 + L: cake@lists.bufferbloat.net (moderated for non-subscribers) 3290 + S: Maintained 3291 + F: net/sched/sch_cake.c 3345 3292 3346 3293 CALGARY x86-64 IOMMU 3347 3294 M: Muli Ben-Yehuda <mulix@mulix.org> ··· 4231 4166 4232 4167 DECSTATION PLATFORM SUPPORT 4233 4168 M: "Maciej W. 
Rozycki" <macro@linux-mips.org> 4234 - L: linux-mips@linux-mips.org 4169 + L: linux-mips@vger.kernel.org 4235 4170 W: http://www.linux-mips.org/wiki/DECstation 4236 4171 S: Maintained 4237 4172 F: arch/mips/dec/ ··· 5322 5257 M: Ralf Baechle <ralf@linux-mips.org> 5323 5258 M: David Daney <david.daney@cavium.com> 5324 5259 L: linux-edac@vger.kernel.org 5325 - L: linux-mips@linux-mips.org 5260 + L: linux-mips@vger.kernel.org 5326 5261 S: Supported 5327 5262 F: drivers/edac/octeon_edac* 5328 5263 ··· 5601 5536 ETHERNET PHY LIBRARY 5602 5537 M: Andrew Lunn <andrew@lunn.ch> 5603 5538 M: Florian Fainelli <f.fainelli@gmail.com> 5539 + M: Heiner Kallweit <hkallweit1@gmail.com> 5604 5540 L: netdev@vger.kernel.org 5605 5541 S: Maintained 5606 5542 F: Documentation/ABI/testing/sysfs-bus-mdio ··· 5840 5774 F: tools/firewire/ 5841 5775 5842 5776 FIRMWARE LOADER (request_firmware) 5843 - M: Luis R. Rodriguez <mcgrof@kernel.org> 5777 + M: Luis Chamberlain <mcgrof@kernel.org> 5844 5778 L: linux-kernel@vger.kernel.org 5845 5779 S: Maintained 5846 5780 F: Documentation/firmware_class/ ··· 6373 6307 6374 6308 GPIO SUBSYSTEM 6375 6309 M: Linus Walleij <linus.walleij@linaro.org> 6310 + M: Bartosz Golaszewski <bgolaszewski@baylibre.com> 6376 6311 L: linux-gpio@vger.kernel.org 6377 6312 T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git 6378 6313 S: Maintained ··· 6682 6615 6683 6616 HID CORE LAYER 6684 6617 M: Jiri Kosina <jikos@kernel.org> 6685 - R: Benjamin Tissoires <benjamin.tissoires@redhat.com> 6618 + M: Benjamin Tissoires <benjamin.tissoires@redhat.com> 6686 6619 L: linux-input@vger.kernel.org 6687 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid.git 6620 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git 6688 6621 S: Maintained 6689 6622 F: drivers/hid/ 6690 6623 F: include/linux/hid* ··· 6935 6868 L: linux-acpi@vger.kernel.org 6936 6869 S: Maintained 6937 6870 F: drivers/i2c/i2c-core-acpi.c 6871 + 6872 + I2C CONTROLLER 
DRIVER FOR NVIDIA GPU 6873 + M: Ajay Gupta <ajayg@nvidia.com> 6874 + L: linux-i2c@vger.kernel.org 6875 + S: Maintained 6876 + F: Documentation/i2c/busses/i2c-nvidia-gpu 6877 + F: drivers/i2c/busses/i2c-nvidia-gpu.c 6938 6878 6939 6879 I2C MUXES 6940 6880 M: Peter Rosin <peda@axentia.se> ··· 7511 7437 F: Documentation/fb/intelfb.txt 7512 7438 F: drivers/video/fbdev/intelfb/ 7513 7439 7440 + INTEL GPIO DRIVERS 7441 + M: Andy Shevchenko <andriy.shevchenko@linux.intel.com> 7442 + L: linux-gpio@vger.kernel.org 7443 + S: Maintained 7444 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/andy/linux-gpio-intel.git 7445 + F: drivers/gpio/gpio-ich.c 7446 + F: drivers/gpio/gpio-intel-mid.c 7447 + F: drivers/gpio/gpio-lynxpoint.c 7448 + F: drivers/gpio/gpio-merrifield.c 7449 + F: drivers/gpio/gpio-ml-ioh.c 7450 + F: drivers/gpio/gpio-pch.c 7451 + F: drivers/gpio/gpio-sch.c 7452 + F: drivers/gpio/gpio-sodaville.c 7453 + 7514 7454 INTEL GVT-g DRIVERS (Intel GPU Virtualization) 7515 7455 M: Zhenyu Wang <zhenyuw@linux.intel.com> 7516 7456 M: Zhi Wang <zhi.a.wang@intel.com> ··· 7534 7446 T: git https://github.com/intel/gvt-linux.git 7535 7447 S: Supported 7536 7448 F: drivers/gpu/drm/i915/gvt/ 7537 - 7538 - INTEL PMIC GPIO DRIVER 7539 - R: Andy Shevchenko <andriy.shevchenko@linux.intel.com> 7540 - S: Maintained 7541 - F: drivers/gpio/gpio-*cove.c 7542 - F: drivers/gpio/gpio-msic.c 7543 7449 7544 7450 INTEL HID EVENT DRIVER 7545 7451 M: Alex Hung <alex.hung@canonical.com> ··· 7622 7540 S: Supported 7623 7541 F: drivers/platform/x86/intel_menlow.c 7624 7542 7625 - INTEL MERRIFIELD GPIO DRIVER 7626 - M: Andy Shevchenko <andriy.shevchenko@linux.intel.com> 7627 - L: linux-gpio@vger.kernel.org 7628 - S: Maintained 7629 - F: drivers/gpio/gpio-merrifield.c 7630 - 7631 7543 INTEL MIC DRIVERS (mic) 7632 7544 M: Sudeep Dutt <sudeep.dutt@intel.com> 7633 7545 M: Ashutosh Dixit <ashutosh.dixit@intel.com> ··· 7653 7577 F: drivers/platform/x86/intel_punit_ipc.c 7654 7578 F: 
arch/x86/include/asm/intel_pmc_ipc.h 7655 7579 F: arch/x86/include/asm/intel_punit_ipc.h 7580 + 7581 + INTEL PMIC GPIO DRIVERS 7582 + M: Andy Shevchenko <andriy.shevchenko@linux.intel.com> 7583 + S: Maintained 7584 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/andy/linux-gpio-intel.git 7585 + F: drivers/gpio/gpio-*cove.c 7586 + F: drivers/gpio/gpio-msic.c 7656 7587 7657 7588 INTEL MULTIFUNCTION PMIC DEVICE DRIVERS 7658 7589 R: Andy Shevchenko <andriy.shevchenko@linux.intel.com> ··· 7769 7686 7770 7687 IOC3 ETHERNET DRIVER 7771 7688 M: Ralf Baechle <ralf@linux-mips.org> 7772 - L: linux-mips@linux-mips.org 7689 + L: linux-mips@vger.kernel.org 7773 7690 S: Maintained 7774 7691 F: drivers/net/ethernet/sgi/ioc3-eth.c 7775 7692 ··· 8140 8057 F: Documentation/dev-tools/kselftest* 8141 8058 8142 8059 KERNEL USERMODE HELPER 8143 - M: "Luis R. Rodriguez" <mcgrof@kernel.org> 8060 + M: Luis Chamberlain <mcgrof@kernel.org> 8144 8061 L: linux-kernel@vger.kernel.org 8145 8062 S: Maintained 8146 8063 F: kernel/umh.c ··· 8197 8114 8198 8115 KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips) 8199 8116 M: James Hogan <jhogan@kernel.org> 8200 - L: linux-mips@linux-mips.org 8117 + L: linux-mips@vger.kernel.org 8201 8118 S: Supported 8202 8119 F: arch/mips/include/uapi/asm/kvm* 8203 8120 F: arch/mips/include/asm/kvm* ··· 8316 8233 F: mm/kmemleak-test.c 8317 8234 8318 8235 KMOD KERNEL MODULE LOADER - USERMODE HELPER 8319 - M: "Luis R. 
Rodriguez" <mcgrof@kernel.org> 8236 + M: Luis Chamberlain <mcgrof@kernel.org> 8320 8237 L: linux-kernel@vger.kernel.org 8321 8238 S: Maintained 8322 8239 F: kernel/kmod.c ··· 8370 8287 8371 8288 LANTIQ MIPS ARCHITECTURE 8372 8289 M: John Crispin <john@phrozen.org> 8373 - L: linux-mips@linux-mips.org 8290 + L: linux-mips@vger.kernel.org 8374 8291 S: Maintained 8375 8292 F: arch/mips/lantiq 8376 8293 F: drivers/soc/lantiq ··· 8458 8375 LIBATA PATA ARASAN COMPACT FLASH CONTROLLER 8459 8376 M: Viresh Kumar <vireshk@kernel.org> 8460 8377 L: linux-ide@vger.kernel.org 8461 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git 8378 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git 8462 8379 S: Maintained 8463 8380 F: include/linux/pata_arasan_cf_data.h 8464 8381 F: drivers/ata/pata_arasan_cf.c ··· 8475 8392 LIBATA PATA FARADAY FTIDE010 AND GEMINI SATA BRIDGE DRIVERS 8476 8393 M: Linus Walleij <linus.walleij@linaro.org> 8477 8394 L: linux-ide@vger.kernel.org 8478 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git 8395 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git 8479 8396 S: Maintained 8480 8397 F: drivers/ata/pata_ftide010.c 8481 8398 F: drivers/ata/sata_gemini.c ··· 8494 8411 LIBATA SATA PROMISE TX2/TX4 CONTROLLER DRIVER 8495 8412 M: Mikael Pettersson <mikpelinux@gmail.com> 8496 8413 L: linux-ide@vger.kernel.org 8497 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git 8414 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git 8498 8415 S: Maintained 8499 8416 F: drivers/ata/sata_promise.* 8500 8417 ··· 8933 8850 8934 8851 MARDUK (CREATOR CI40) DEVICE TREE SUPPORT 8935 8852 M: Rahul Bedarkar <rahulbedarkar89@gmail.com> 8936 - L: linux-mips@linux-mips.org 8853 + L: linux-mips@vger.kernel.org 8937 8854 S: Maintained 8938 8855 F: arch/mips/boot/dts/img/pistachio_marduk.dts 8939 8856 ··· 9892 9809 9893 9810 MICROSEMI MIPS SOCS 9894 9811 M: 
Alexandre Belloni <alexandre.belloni@bootlin.com> 9895 - L: linux-mips@linux-mips.org 9812 + L: linux-mips@vger.kernel.org 9896 9813 S: Maintained 9897 9814 F: arch/mips/generic/board-ocelot.c 9898 9815 F: arch/mips/configs/generic/board-ocelot.config ··· 9932 9849 M: Ralf Baechle <ralf@linux-mips.org> 9933 9850 M: Paul Burton <paul.burton@mips.com> 9934 9851 M: James Hogan <jhogan@kernel.org> 9935 - L: linux-mips@linux-mips.org 9852 + L: linux-mips@vger.kernel.org 9936 9853 W: http://www.linux-mips.org/ 9937 9854 T: git git://git.linux-mips.org/pub/scm/ralf/linux.git 9938 9855 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux.git ··· 9945 9862 9946 9863 MIPS BOSTON DEVELOPMENT BOARD 9947 9864 M: Paul Burton <paul.burton@mips.com> 9948 - L: linux-mips@linux-mips.org 9865 + L: linux-mips@vger.kernel.org 9949 9866 S: Maintained 9950 9867 F: Documentation/devicetree/bindings/clock/img,boston-clock.txt 9951 9868 F: arch/mips/boot/dts/img/boston.dts ··· 9955 9872 9956 9873 MIPS GENERIC PLATFORM 9957 9874 M: Paul Burton <paul.burton@mips.com> 9958 - L: linux-mips@linux-mips.org 9875 + L: linux-mips@vger.kernel.org 9959 9876 S: Supported 9960 9877 F: Documentation/devicetree/bindings/power/mti,mips-cpc.txt 9961 9878 F: arch/mips/generic/ ··· 9963 9880 9964 9881 MIPS/LOONGSON1 ARCHITECTURE 9965 9882 M: Keguang Zhang <keguang.zhang@gmail.com> 9966 - L: linux-mips@linux-mips.org 9883 + L: linux-mips@vger.kernel.org 9967 9884 S: Maintained 9968 9885 F: arch/mips/loongson32/ 9969 9886 F: arch/mips/include/asm/mach-loongson32/ ··· 9972 9889 9973 9890 MIPS/LOONGSON2 ARCHITECTURE 9974 9891 M: Jiaxun Yang <jiaxun.yang@flygoat.com> 9975 - L: linux-mips@linux-mips.org 9892 + L: linux-mips@vger.kernel.org 9976 9893 S: Maintained 9977 9894 F: arch/mips/loongson64/fuloong-2e/ 9978 9895 F: arch/mips/loongson64/lemote-2f/ ··· 9982 9899 9983 9900 MIPS/LOONGSON3 ARCHITECTURE 9984 9901 M: Huacai Chen <chenhc@lemote.com> 9985 - L: linux-mips@linux-mips.org 9902 + L: 
linux-mips@vger.kernel.org 9986 9903 S: Maintained 9987 9904 F: arch/mips/loongson64/ 9988 9905 F: arch/mips/include/asm/mach-loongson64/ ··· 9992 9909 9993 9910 MIPS RINT INSTRUCTION EMULATION 9994 9911 M: Aleksandar Markovic <aleksandar.markovic@mips.com> 9995 - L: linux-mips@linux-mips.org 9912 + L: linux-mips@vger.kernel.org 9996 9913 S: Supported 9997 9914 F: arch/mips/math-emu/sp_rint.c 9998 9915 F: arch/mips/math-emu/dp_rint.c ··· 10875 10792 S: Maintained 10876 10793 F: arch/arm/mach-omap2/omap_hwmod.* 10877 10794 10795 + OMAP I2C DRIVER 10796 + M: Vignesh R <vigneshr@ti.com> 10797 + L: linux-omap@vger.kernel.org 10798 + L: linux-i2c@vger.kernel.org 10799 + S: Maintained 10800 + F: Documentation/devicetree/bindings/i2c/i2c-omap.txt 10801 + F: drivers/i2c/busses/i2c-omap.c 10802 + 10878 10803 OMAP IMAGING SUBSYSTEM (OMAP3 ISP and OMAP4 ISS) 10879 10804 M: Laurent Pinchart <laurent.pinchart@ideasonboard.com> 10880 10805 L: linux-media@vger.kernel.org ··· 10892 10801 F: drivers/staging/media/omap4iss/ 10893 10802 10894 10803 OMAP MMC SUPPORT 10895 - M: Jarkko Lavinen <jarkko.lavinen@nokia.com> 10804 + M: Aaro Koskinen <aaro.koskinen@iki.fi> 10896 10805 L: linux-omap@vger.kernel.org 10897 - S: Maintained 10806 + S: Odd Fixes 10898 10807 F: drivers/mmc/host/omap.c 10899 10808 10900 10809 OMAP POWER MANAGEMENT SUPPORT ··· 10977 10886 10978 10887 ONION OMEGA2+ BOARD 10979 10888 M: Harvey Hunt <harveyhuntnexus@gmail.com> 10980 - L: linux-mips@linux-mips.org 10889 + L: linux-mips@vger.kernel.org 10981 10890 S: Maintained 10982 10891 F: arch/mips/boot/dts/ralink/omega2p.dts 10983 10892 ··· 11829 11738 PIN CONTROLLER - INTEL 11830 11739 M: Mika Westerberg <mika.westerberg@linux.intel.com> 11831 11740 M: Andy Shevchenko <andriy.shevchenko@linux.intel.com> 11741 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/pinctrl/intel.git 11832 11742 S: Maintained 11833 11743 F: drivers/pinctrl/intel/ 11834 11744 ··· 11886 11794 11887 11795 PISTACHIO SOC SUPPORT 11888 11796 
M: James Hartley <james.hartley@sondrel.com> 11889 - L: linux-mips@linux-mips.org 11797 + L: linux-mips@vger.kernel.org 11890 11798 S: Odd Fixes 11891 11799 F: arch/mips/pistachio/ 11892 11800 F: arch/mips/include/asm/mach-pistachio/ ··· 12073 11981 F: include/linux/printk.h 12074 11982 12075 11983 PRISM54 WIRELESS DRIVER 12076 - M: "Luis R. Rodriguez" <mcgrof@gmail.com> 11984 + M: Luis Chamberlain <mcgrof@kernel.org> 12077 11985 L: linux-wireless@vger.kernel.org 12078 11986 W: http://wireless.kernel.org/en/users/Drivers/p54 12079 11987 S: Obsolete ··· 12087 11995 F: fs/proc/ 12088 11996 F: include/linux/proc_fs.h 12089 11997 F: tools/testing/selftests/proc/ 11998 + F: Documentation/filesystems/proc.txt 12090 11999 12091 12000 PROC SYSCTL 12092 - M: "Luis R. Rodriguez" <mcgrof@kernel.org> 12001 + M: Luis Chamberlain <mcgrof@kernel.org> 12093 12002 M: Kees Cook <keescook@chromium.org> 12094 12003 L: linux-kernel@vger.kernel.org 12095 12004 L: linux-fsdevel@vger.kernel.org ··· 12553 12460 12554 12461 RALINK MIPS ARCHITECTURE 12555 12462 M: John Crispin <john@phrozen.org> 12556 - L: linux-mips@linux-mips.org 12463 + L: linux-mips@vger.kernel.org 12557 12464 S: Maintained 12558 12465 F: arch/mips/ralink 12559 12466 ··· 12573 12480 12574 12481 RANCHU VIRTUAL BOARD FOR MIPS 12575 12482 M: Miodrag Dinic <miodrag.dinic@mips.com> 12576 - L: linux-mips@linux-mips.org 12483 + L: linux-mips@vger.kernel.org 12577 12484 S: Supported 12578 12485 F: arch/mips/generic/board-ranchu.c 12579 12486 F: arch/mips/configs/generic/board-ranchu.config ··· 14024 13931 F: Documentation/devicetree/bindings/sound/ 14025 13932 F: Documentation/sound/soc/ 14026 13933 F: sound/soc/ 13934 + F: include/dt-bindings/sound/ 14027 13935 F: include/sound/soc* 14028 13936 14029 13937 SOUNDWIRE SUBSYSTEM ··· 14072 13978 F: drivers/tty/vcc.c 14073 13979 14074 13980 SPARSE CHECKER 14075 - M: "Christopher Li" <sparse@chrisli.org> 13981 + M: "Luc Van Oostenryck" <luc.vanoostenryck@gmail.com> 14076 13982 L: 
linux-sparse@vger.kernel.org 14077 13983 W: https://sparse.wiki.kernel.org/ 14078 13984 T: git git://git.kernel.org/pub/scm/devel/sparse/sparse.git 14079 - T: git git://git.kernel.org/pub/scm/devel/sparse/chrisl/sparse.git 14080 13985 S: Maintained 14081 13986 F: include/linux/compiler.h 14082 13987 ··· 14180 14087 14181 14088 STABLE BRANCH 14182 14089 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 14090 + M: Sasha Levin <sashal@kernel.org> 14183 14091 L: stable@vger.kernel.org 14184 14092 S: Supported 14185 14093 F: Documentation/process/stable-kernel-rules.rst ··· 15318 15224 TURBOCHANNEL SUBSYSTEM 15319 15225 M: "Maciej W. Rozycki" <macro@linux-mips.org> 15320 15226 M: Ralf Baechle <ralf@linux-mips.org> 15321 - L: linux-mips@linux-mips.org 15227 + L: linux-mips@vger.kernel.org 15322 15228 Q: http://patchwork.linux-mips.org/project/linux-mips/list/ 15323 15229 S: Maintained 15324 15230 F: drivers/tc/ ··· 15554 15460 15555 15461 USB HID/HIDBP DRIVERS (USB KEYBOARDS, MICE, REMOTE CONTROLS, ...) 15556 15462 M: Jiri Kosina <jikos@kernel.org> 15557 - R: Benjamin Tissoires <benjamin.tissoires@redhat.com> 15463 + M: Benjamin Tissoires <benjamin.tissoires@redhat.com> 15558 15464 L: linux-usb@vger.kernel.org 15559 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid.git 15465 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git 15560 15466 S: Maintained 15561 15467 F: Documentation/hid/hiddev.txt 15562 15468 F: drivers/hid/usbhid/ ··· 16139 16045 16140 16046 VOCORE VOCORE2 BOARD 16141 16047 M: Harvey Hunt <harveyhuntnexus@gmail.com> 16142 - L: linux-mips@linux-mips.org 16048 + L: linux-mips@vger.kernel.org 16143 16049 S: Maintained 16144 16050 F: arch/mips/boot/dts/ralink/vocore2.dts 16145 16051
+2 -2
Makefile
··· 2 2 VERSION = 4 3 3 PATCHLEVEL = 20 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc1 6 - NAME = "People's Front" 5 + EXTRAVERSION = -rc5 6 + NAME = Shy Crocodile 7 7 8 8 # *DOCUMENTATION* 9 9 # To see a list of typical targets execute "make help"
+7 -1
arch/alpha/include/asm/termios.h
··· 73 73 }) 74 74 75 75 #define user_termios_to_kernel_termios(k, u) \ 76 - copy_from_user(k, u, sizeof(struct termios)) 76 + copy_from_user(k, u, sizeof(struct termios2)) 77 77 78 78 #define kernel_termios_to_user_termios(u, k) \ 79 + copy_to_user(u, k, sizeof(struct termios2)) 80 + 81 + #define user_termios_to_kernel_termios_1(k, u) \ 82 + copy_from_user(k, u, sizeof(struct termios)) 83 + 84 + #define kernel_termios_to_user_termios_1(u, k) \ 79 85 copy_to_user(u, k, sizeof(struct termios)) 80 86 81 87 #endif /* _ALPHA_TERMIOS_H */
+5
arch/alpha/include/uapi/asm/ioctls.h
··· 32 32 #define TCXONC _IO('t', 30) 33 33 #define TCFLSH _IO('t', 31) 34 34 35 + #define TCGETS2 _IOR('T', 42, struct termios2) 36 + #define TCSETS2 _IOW('T', 43, struct termios2) 37 + #define TCSETSW2 _IOW('T', 44, struct termios2) 38 + #define TCSETSF2 _IOW('T', 45, struct termios2) 39 + 35 40 #define TIOCSWINSZ _IOW('t', 103, struct winsize) 36 41 #define TIOCGWINSZ _IOR('t', 104, struct winsize) 37 42 #define TIOCSTART _IO('t', 110) /* start output, like ^Q */
+17
arch/alpha/include/uapi/asm/termbits.h
··· 26 26 speed_t c_ospeed; /* output speed */ 27 27 }; 28 28 29 + /* Alpha has identical termios and termios2 */ 30 + 31 + struct termios2 { 32 + tcflag_t c_iflag; /* input mode flags */ 33 + tcflag_t c_oflag; /* output mode flags */ 34 + tcflag_t c_cflag; /* control mode flags */ 35 + tcflag_t c_lflag; /* local mode flags */ 36 + cc_t c_cc[NCCS]; /* control characters */ 37 + cc_t c_line; /* line discipline (== c_cc[19]) */ 38 + speed_t c_ispeed; /* input speed */ 39 + speed_t c_ospeed; /* output speed */ 40 + }; 41 + 29 42 /* Alpha has matching termios and ktermios */ 30 43 31 44 struct ktermios { ··· 165 152 #define B3000000 00034 166 153 #define B3500000 00035 167 154 #define B4000000 00036 155 + #define BOTHER 00037 168 156 169 157 #define CSIZE 00001400 170 158 #define CS5 00000000 ··· 182 168 #define CLOCAL 00100000 183 169 #define CMSPAR 010000000000 /* mark or space (stick) parity */ 184 170 #define CRTSCTS 020000000000 /* flow control */ 171 + 172 + #define CIBAUD 07600000 173 + #define IBSHIFT 16 185 174 186 175 /* c_lflag bits */ 187 176 #define ISIG 0x00000080
+1 -1
arch/arm/boot/dts/am3517-evm.dts
··· 228 228 vmmc-supply = <&vmmc_fixed>; 229 229 bus-width = <4>; 230 230 wp-gpios = <&gpio4 30 GPIO_ACTIVE_HIGH>; /* gpio_126 */ 231 - cd-gpios = <&gpio4 31 GPIO_ACTIVE_HIGH>; /* gpio_127 */ 231 + cd-gpios = <&gpio4 31 GPIO_ACTIVE_LOW>; /* gpio_127 */ 232 232 }; 233 233 234 234 &mmc3 {
+1 -1
arch/arm/boot/dts/am3517-som.dtsi
··· 163 163 compatible = "ti,wl1271"; 164 164 reg = <2>; 165 165 interrupt-parent = <&gpio6>; 166 - interrupts = <10 IRQ_TYPE_LEVEL_HIGH>; /* gpio_170 */ 166 + interrupts = <10 IRQ_TYPE_EDGE_RISING>; /* gpio_170 */ 167 167 ref-clock-frequency = <26000000>; 168 168 tcxo-clock-frequency = <26000000>; 169 169 };
-6
arch/arm/boot/dts/imx51-zii-rdu1.dts
··· 492 492 pinctrl-0 = <&pinctrl_i2c2>; 493 493 status = "okay"; 494 494 495 - eeprom@50 { 496 - compatible = "atmel,24c04"; 497 - pagesize = <16>; 498 - reg = <0x50>; 499 - }; 500 - 501 495 hpa1: amp@60 { 502 496 compatible = "ti,tpa6130a2"; 503 497 reg = <0x60>;
+1 -1
arch/arm/boot/dts/imx53-ppd.dts
··· 55 55 }; 56 56 57 57 chosen { 58 - stdout-path = "&uart1:115200n8"; 58 + stdout-path = "serial0:115200n8"; 59 59 }; 60 60 61 61 memory@70000000 {
+1 -1
arch/arm/boot/dts/imx6sll.dtsi
··· 740 740 i2c1: i2c@21a0000 { 741 741 #address-cells = <1>; 742 742 #size-cells = <0>; 743 - compatible = "fs,imx6sll-i2c", "fsl,imx21-i2c"; 743 + compatible = "fsl,imx6sll-i2c", "fsl,imx21-i2c"; 744 744 reg = <0x021a0000 0x4000>; 745 745 interrupts = <GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>; 746 746 clocks = <&clks IMX6SLL_CLK_I2C1>;
+6 -1
arch/arm/boot/dts/imx6sx-sdb.dtsi
··· 117 117 regulator-name = "enet_3v3"; 118 118 regulator-min-microvolt = <3300000>; 119 119 regulator-max-microvolt = <3300000>; 120 - gpios = <&gpio2 6 GPIO_ACTIVE_LOW>; 120 + gpio = <&gpio2 6 GPIO_ACTIVE_LOW>; 121 + regulator-boot-on; 122 + regulator-always-on; 121 123 }; 122 124 123 125 reg_pcie_gpio: regulator-pcie-gpio { ··· 182 180 phy-supply = <&reg_enet_3v3>; 183 181 phy-mode = "rgmii"; 184 182 phy-handle = <&ethphy1>; 183 + phy-reset-gpios = <&gpio2 7 GPIO_ACTIVE_LOW>; 185 184 status = "okay"; 186 185 187 186 mdio { ··· 376 373 MX6SX_PAD_RGMII1_RD3__ENET1_RX_DATA_3 0x3081 377 374 MX6SX_PAD_RGMII1_RX_CTL__ENET1_RX_EN 0x3081 378 375 MX6SX_PAD_ENET2_RX_CLK__ENET2_REF_CLK_25M 0x91 376 + /* phy reset */ 377 + MX6SX_PAD_ENET2_CRS__GPIO2_IO_7 0x10b0 379 378 >; 380 379 }; 381 380
+1 -1
arch/arm/boot/dts/logicpd-som-lv.dtsi
··· 129 129 }; 130 130 131 131 &mmc3 { 132 - interrupts-extended = <&intc 94 &omap3_pmx_core2 0x46>; 132 + interrupts-extended = <&intc 94 &omap3_pmx_core 0x136>; 133 133 pinctrl-0 = <&mmc3_pins &wl127x_gpio>; 134 134 pinctrl-names = "default"; 135 135 vmmc-supply = <&wl12xx_vmmc>;
+1 -1
arch/arm/boot/dts/logicpd-torpedo-37xx-devkit.dts
··· 35 35 * jumpering combinations for the long run. 36 36 */ 37 37 &mmc3 { 38 - interrupts-extended = <&intc 94 &omap3_pmx_core2 0x46>; 38 + interrupts-extended = <&intc 94 &omap3_pmx_core 0x136>; 39 39 pinctrl-0 = <&mmc3_pins &mmc3_core2_pins>; 40 40 pinctrl-names = "default"; 41 41 vmmc-supply = <&wl12xx_vmmc>;
+5 -1
arch/arm/boot/dts/rk3288-veyron.dtsi
··· 10 10 #include "rk3288.dtsi" 11 11 12 12 / { 13 - memory@0 { 13 + /* 14 + * The default coreboot on veyron devices ignores memory@0 nodes 15 + * and would instead create another memory node. 16 + */ 17 + memory { 14 18 device_type = "memory"; 15 19 reg = <0x0 0x0 0x0 0x80000000>; 16 20 };
+1 -1
arch/arm/boot/dts/sama5d2.dtsi
··· 314 314 0x1 0x0 0x60000000 0x10000000 315 315 0x2 0x0 0x70000000 0x10000000 316 316 0x3 0x0 0x80000000 0x10000000>; 317 - clocks = <&mck>; 317 + clocks = <&h32ck>; 318 318 status = "disabled"; 319 319 320 320 nand_controller: nand-controller {
+2 -2
arch/arm/boot/dts/vf610m4-colibri.dts
··· 50 50 compatible = "fsl,vf610m4"; 51 51 52 52 chosen { 53 - bootargs = "console=ttyLP2,115200 clk_ignore_unused init=/linuxrc rw"; 54 - stdout-path = "&uart2"; 53 + bootargs = "clk_ignore_unused init=/linuxrc rw"; 54 + stdout-path = "serial2:115200"; 55 55 }; 56 56 57 57 memory@8c000000 {
-1
arch/arm/configs/multi_v7_defconfig
··· 1 1 CONFIG_SYSVIPC=y 2 2 CONFIG_NO_HZ=y 3 3 CONFIG_HIGH_RES_TIMERS=y 4 - CONFIG_PREEMPT=y 5 4 CONFIG_CGROUPS=y 6 5 CONFIG_BLK_DEV_INITRD=y 7 6 CONFIG_EMBEDDED=y
+1
arch/arm/include/asm/cputype.h
··· 111 111 #include <linux/kernel.h> 112 112 113 113 extern unsigned int processor_id; 114 + struct proc_info_list *lookup_processor(u32 midr); 114 115 115 116 #ifdef CONFIG_CPU_CP15 116 117 #define read_cpuid(reg) \
+1 -1
arch/arm/include/asm/pgtable-2level.h
··· 10 10 #ifndef _ASM_PGTABLE_2LEVEL_H 11 11 #define _ASM_PGTABLE_2LEVEL_H 12 12 13 - #define __PAGETABLE_PMD_FOLDED 13 + #define __PAGETABLE_PMD_FOLDED 1 14 14 15 15 /* 16 16 * Hardware-wise, we have a two level page table structure, where the first
+49 -12
arch/arm/include/asm/proc-fns.h
··· 23 23 /* 24 24 * Don't change this structure - ASM code relies on it. 25 25 */ 26 - extern struct processor { 26 + struct processor { 27 27 /* MISC 28 28 * get data abort address/flags 29 29 */ ··· 79 79 unsigned int suspend_size; 80 80 void (*do_suspend)(void *); 81 81 void (*do_resume)(void *); 82 - } processor; 82 + }; 83 83 84 84 #ifndef MULTI_CPU 85 + static inline void init_proc_vtable(const struct processor *p) 86 + { 87 + } 88 + 85 89 extern void cpu_proc_init(void); 86 90 extern void cpu_proc_fin(void); 87 91 extern int cpu_do_idle(void); ··· 102 98 extern void cpu_do_suspend(void *); 103 99 extern void cpu_do_resume(void *); 104 100 #else 105 - #define cpu_proc_init processor._proc_init 106 - #define cpu_proc_fin processor._proc_fin 107 - #define cpu_reset processor.reset 108 - #define cpu_do_idle processor._do_idle 109 - #define cpu_dcache_clean_area processor.dcache_clean_area 110 - #define cpu_set_pte_ext processor.set_pte_ext 111 - #define cpu_do_switch_mm processor.switch_mm 112 101 113 - /* These three are private to arch/arm/kernel/suspend.c */ 114 - #define cpu_do_suspend processor.do_suspend 115 - #define cpu_do_resume processor.do_resume 102 + extern struct processor processor; 103 + #if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR) 104 + #include <linux/smp.h> 105 + /* 106 + * This can't be a per-cpu variable because we need to access it before 107 + * per-cpu has been initialised. We have a couple of functions that are 108 + * called in a pre-emptible context, and so can't use smp_processor_id() 109 + * there, hence PROC_TABLE(). We insist in init_proc_vtable() that the 110 + * function pointers for these are identical across all CPUs. 
111 + */ 112 + extern struct processor *cpu_vtable[]; 113 + #define PROC_VTABLE(f) cpu_vtable[smp_processor_id()]->f 114 + #define PROC_TABLE(f) cpu_vtable[0]->f 115 + static inline void init_proc_vtable(const struct processor *p) 116 + { 117 + unsigned int cpu = smp_processor_id(); 118 + *cpu_vtable[cpu] = *p; 119 + WARN_ON_ONCE(cpu_vtable[cpu]->dcache_clean_area != 120 + cpu_vtable[0]->dcache_clean_area); 121 + WARN_ON_ONCE(cpu_vtable[cpu]->set_pte_ext != 122 + cpu_vtable[0]->set_pte_ext); 123 + } 124 + #else 125 + #define PROC_VTABLE(f) processor.f 126 + #define PROC_TABLE(f) processor.f 127 + static inline void init_proc_vtable(const struct processor *p) 128 + { 129 + processor = *p; 130 + } 131 + #endif 132 + 133 + #define cpu_proc_init PROC_VTABLE(_proc_init) 134 + #define cpu_check_bugs PROC_VTABLE(check_bugs) 135 + #define cpu_proc_fin PROC_VTABLE(_proc_fin) 136 + #define cpu_reset PROC_VTABLE(reset) 137 + #define cpu_do_idle PROC_VTABLE(_do_idle) 138 + #define cpu_dcache_clean_area PROC_TABLE(dcache_clean_area) 139 + #define cpu_set_pte_ext PROC_TABLE(set_pte_ext) 140 + #define cpu_do_switch_mm PROC_VTABLE(switch_mm) 141 + 142 + /* These two are private to arch/arm/kernel/suspend.c */ 143 + #define cpu_do_suspend PROC_VTABLE(do_suspend) 144 + #define cpu_do_resume PROC_VTABLE(do_resume) 116 145 #endif 117 146 118 147 extern void cpu_resume(void);
+2 -2
arch/arm/kernel/bugs.c
··· 6 6 void check_other_bugs(void) 7 7 { 8 8 #ifdef MULTI_CPU 9 - if (processor.check_bugs) 10 - processor.check_bugs(); 9 + if (cpu_check_bugs) 10 + cpu_check_bugs(); 11 11 #endif 12 12 } 13 13
+1 -16
arch/arm/kernel/ftrace.c
··· 183 183 unsigned long frame_pointer) 184 184 { 185 185 unsigned long return_hooker = (unsigned long) &return_to_handler; 186 - struct ftrace_graph_ent trace; 187 186 unsigned long old; 188 - int err; 189 187 190 188 if (unlikely(atomic_read(&current->tracing_graph_pause))) 191 189 return; ··· 191 193 old = *parent; 192 194 *parent = return_hooker; 193 195 194 - trace.func = self_addr; 195 - trace.depth = current->curr_ret_stack + 1; 196 - 197 - /* Only trace if the calling function expects to */ 198 - if (!ftrace_graph_entry(&trace)) { 196 + if (function_graph_enter(old, self_addr, frame_pointer, NULL)) 199 197 *parent = old; 200 - return; 201 - } 202 - 203 - err = ftrace_push_return_trace(old, self_addr, &trace.depth, 204 - frame_pointer, NULL); 205 - if (err == -EBUSY) { 206 - *parent = old; 207 - return; 208 - } 209 198 } 210 199 211 200 #ifdef CONFIG_DYNAMIC_FTRACE
+3 -3
arch/arm/kernel/head-common.S
··· 145 145 #endif 146 146 .size __mmap_switched_data, . - __mmap_switched_data 147 147 148 + __FINIT 149 + .text 150 + 148 151 /* 149 152 * This provides a C-API version of __lookup_processor_type 150 153 */ ··· 158 155 mov r0, r5 159 156 ldmfd sp!, {r4 - r6, r9, pc} 160 157 ENDPROC(lookup_processor_type) 161 - 162 - __FINIT 163 - .text 164 158 165 159 /* 166 160 * Read processor ID register (CP#15, CR0), and look up in the linker-built
+27 -17
arch/arm/kernel/setup.c
··· 114 114 115 115 #ifdef MULTI_CPU 116 116 struct processor processor __ro_after_init; 117 + #if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR) 118 + struct processor *cpu_vtable[NR_CPUS] = { 119 + [0] = &processor, 120 + }; 121 + #endif 117 122 #endif 118 123 #ifdef MULTI_TLB 119 124 struct cpu_tlb_fns cpu_tlb __ro_after_init; ··· 671 666 } 672 667 #endif 673 668 669 + /* 670 + * locate processor in the list of supported processor types. The linker 671 + * builds this table for us from the entries in arch/arm/mm/proc-*.S 672 + */ 673 + struct proc_info_list *lookup_processor(u32 midr) 674 + { 675 + struct proc_info_list *list = lookup_processor_type(midr); 676 + 677 + if (!list) { 678 + pr_err("CPU%u: configuration botched (ID %08x), CPU halted\n", 679 + smp_processor_id(), midr); 680 + while (1) 681 + /* can't use cpu_relax() here as it may require MMU setup */; 682 + } 683 + 684 + return list; 685 + } 686 + 674 687 static void __init setup_processor(void) 675 688 { 676 - struct proc_info_list *list; 677 - 678 - /* 679 - * locate processor in the list of supported processor 680 - * types. 
The linker builds this table for us from the 681 - * entries in arch/arm/mm/proc-*.S 682 - */ 683 - list = lookup_processor_type(read_cpuid_id()); 684 - if (!list) { 685 - pr_err("CPU configuration botched (ID %08x), unable to continue.\n", 686 - read_cpuid_id()); 687 - while (1); 688 - } 689 + unsigned int midr = read_cpuid_id(); 690 + struct proc_info_list *list = lookup_processor(midr); 689 691 690 692 cpu_name = list->cpu_name; 691 693 __cpu_architecture = __get_cpu_architecture(); 692 694 693 - #ifdef MULTI_CPU 694 - processor = *list->proc; 695 - #endif 695 + init_proc_vtable(list->proc); 696 696 #ifdef MULTI_TLB 697 697 cpu_tlb = *list->tlb; 698 698 #endif ··· 709 699 #endif 710 700 711 701 pr_info("CPU: %s [%08x] revision %d (ARMv%s), cr=%08lx\n", 712 - cpu_name, read_cpuid_id(), read_cpuid_id() & 15, 702 + list->cpu_name, midr, midr & 15, 713 703 proc_arch[cpu_architecture()], get_cr()); 714 704 715 705 snprintf(init_utsname()->machine, __NEW_UTS_LEN + 1, "%s%c",
+31
arch/arm/kernel/smp.c
··· 42 42 #include <asm/mmu_context.h> 43 43 #include <asm/pgtable.h> 44 44 #include <asm/pgalloc.h> 45 + #include <asm/procinfo.h> 45 46 #include <asm/processor.h> 46 47 #include <asm/sections.h> 47 48 #include <asm/tlbflush.h> ··· 103 102 #endif 104 103 } 105 104 105 + #if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR) 106 + static int secondary_biglittle_prepare(unsigned int cpu) 107 + { 108 + if (!cpu_vtable[cpu]) 109 + cpu_vtable[cpu] = kzalloc(sizeof(*cpu_vtable[cpu]), GFP_KERNEL); 110 + 111 + return cpu_vtable[cpu] ? 0 : -ENOMEM; 112 + } 113 + 114 + static void secondary_biglittle_init(void) 115 + { 116 + init_proc_vtable(lookup_processor(read_cpuid_id())->proc); 117 + } 118 + #else 119 + static int secondary_biglittle_prepare(unsigned int cpu) 120 + { 121 + return 0; 122 + } 123 + 124 + static void secondary_biglittle_init(void) 125 + { 126 + } 127 + #endif 128 + 106 129 int __cpu_up(unsigned int cpu, struct task_struct *idle) 107 130 { 108 131 int ret; 109 132 110 133 if (!smp_ops.smp_boot_secondary) 111 134 return -ENOSYS; 135 + 136 + ret = secondary_biglittle_prepare(cpu); 137 + if (ret) 138 + return ret; 112 139 113 140 /* 114 141 * We need to tell the secondary core where to find ··· 387 358 { 388 359 struct mm_struct *mm = &init_mm; 389 360 unsigned int cpu; 361 + 362 + secondary_biglittle_init(); 390 363 391 364 /* 392 365 * The identity mapping is uncached (strongly ordered), so
+3 -1
arch/arm/mach-davinci/da830.c
··· 759 759 }; 760 760 761 761 static struct davinci_gpio_platform_data da830_gpio_platform_data = { 762 - .ngpio = 128, 762 + .no_auto_base = true, 763 + .base = 0, 764 + .ngpio = 128, 763 765 }; 764 766 765 767 int __init da830_register_gpio(void)
+3 -1
arch/arm/mach-davinci/da850.c
··· 719 719 } 720 720 721 721 static struct davinci_gpio_platform_data da850_gpio_platform_data = { 722 - .ngpio = 144, 722 + .no_auto_base = true, 723 + .base = 0, 724 + .ngpio = 144, 723 725 }; 724 726 725 727 int __init da850_register_gpio(void)
+40
arch/arm/mach-davinci/devices-da8xx.c
··· 701 701 }, 702 702 { /* interrupt */ 703 703 .start = IRQ_DA8XX_GPIO0, 704 + .end = IRQ_DA8XX_GPIO0, 705 + .flags = IORESOURCE_IRQ, 706 + }, 707 + { 708 + .start = IRQ_DA8XX_GPIO1, 709 + .end = IRQ_DA8XX_GPIO1, 710 + .flags = IORESOURCE_IRQ, 711 + }, 712 + { 713 + .start = IRQ_DA8XX_GPIO2, 714 + .end = IRQ_DA8XX_GPIO2, 715 + .flags = IORESOURCE_IRQ, 716 + }, 717 + { 718 + .start = IRQ_DA8XX_GPIO3, 719 + .end = IRQ_DA8XX_GPIO3, 720 + .flags = IORESOURCE_IRQ, 721 + }, 722 + { 723 + .start = IRQ_DA8XX_GPIO4, 724 + .end = IRQ_DA8XX_GPIO4, 725 + .flags = IORESOURCE_IRQ, 726 + }, 727 + { 728 + .start = IRQ_DA8XX_GPIO5, 729 + .end = IRQ_DA8XX_GPIO5, 730 + .flags = IORESOURCE_IRQ, 731 + }, 732 + { 733 + .start = IRQ_DA8XX_GPIO6, 734 + .end = IRQ_DA8XX_GPIO6, 735 + .flags = IORESOURCE_IRQ, 736 + }, 737 + { 738 + .start = IRQ_DA8XX_GPIO7, 739 + .end = IRQ_DA8XX_GPIO7, 740 + .flags = IORESOURCE_IRQ, 741 + }, 742 + { 743 + .start = IRQ_DA8XX_GPIO8, 704 744 .end = IRQ_DA8XX_GPIO8, 705 745 .flags = IORESOURCE_IRQ, 706 746 },
+32
arch/arm/mach-davinci/dm355.c
··· 548 548 }, 549 549 { /* interrupt */ 550 550 .start = IRQ_DM355_GPIOBNK0, 551 + .end = IRQ_DM355_GPIOBNK0, 552 + .flags = IORESOURCE_IRQ, 553 + }, 554 + { 555 + .start = IRQ_DM355_GPIOBNK1, 556 + .end = IRQ_DM355_GPIOBNK1, 557 + .flags = IORESOURCE_IRQ, 558 + }, 559 + { 560 + .start = IRQ_DM355_GPIOBNK2, 561 + .end = IRQ_DM355_GPIOBNK2, 562 + .flags = IORESOURCE_IRQ, 563 + }, 564 + { 565 + .start = IRQ_DM355_GPIOBNK3, 566 + .end = IRQ_DM355_GPIOBNK3, 567 + .flags = IORESOURCE_IRQ, 568 + }, 569 + { 570 + .start = IRQ_DM355_GPIOBNK4, 571 + .end = IRQ_DM355_GPIOBNK4, 572 + .flags = IORESOURCE_IRQ, 573 + }, 574 + { 575 + .start = IRQ_DM355_GPIOBNK5, 576 + .end = IRQ_DM355_GPIOBNK5, 577 + .flags = IORESOURCE_IRQ, 578 + }, 579 + { 580 + .start = IRQ_DM355_GPIOBNK6, 551 581 .end = IRQ_DM355_GPIOBNK6, 552 582 .flags = IORESOURCE_IRQ, 553 583 }, 554 584 }; 555 585 556 586 static struct davinci_gpio_platform_data dm355_gpio_platform_data = { 587 + .no_auto_base = true, 588 + .base = 0, 557 589 .ngpio = 104, 558 590 }; 559 591
+37
arch/arm/mach-davinci/dm365.c
··· 267 267 }, 268 268 { /* interrupt */ 269 269 .start = IRQ_DM365_GPIO0, 270 + .end = IRQ_DM365_GPIO0, 271 + .flags = IORESOURCE_IRQ, 272 + }, 273 + { 274 + .start = IRQ_DM365_GPIO1, 275 + .end = IRQ_DM365_GPIO1, 276 + .flags = IORESOURCE_IRQ, 277 + }, 278 + { 279 + .start = IRQ_DM365_GPIO2, 280 + .end = IRQ_DM365_GPIO2, 281 + .flags = IORESOURCE_IRQ, 282 + }, 283 + { 284 + .start = IRQ_DM365_GPIO3, 285 + .end = IRQ_DM365_GPIO3, 286 + .flags = IORESOURCE_IRQ, 287 + }, 288 + { 289 + .start = IRQ_DM365_GPIO4, 290 + .end = IRQ_DM365_GPIO4, 291 + .flags = IORESOURCE_IRQ, 292 + }, 293 + { 294 + .start = IRQ_DM365_GPIO5, 295 + .end = IRQ_DM365_GPIO5, 296 + .flags = IORESOURCE_IRQ, 297 + }, 298 + { 299 + .start = IRQ_DM365_GPIO6, 300 + .end = IRQ_DM365_GPIO6, 301 + .flags = IORESOURCE_IRQ, 302 + }, 303 + { 304 + .start = IRQ_DM365_GPIO7, 270 305 .end = IRQ_DM365_GPIO7, 271 306 .flags = IORESOURCE_IRQ, 272 307 }, 273 308 }; 274 309 275 310 static struct davinci_gpio_platform_data dm365_gpio_platform_data = { 311 + .no_auto_base = true, 312 + .base = 0, 276 313 .ngpio = 104, 277 314 .gpio_unbanked = 8, 278 315 };
+22
arch/arm/mach-davinci/dm644x.c
··· 492 492 }, 493 493 { /* interrupt */ 494 494 .start = IRQ_GPIOBNK0, 495 + .end = IRQ_GPIOBNK0, 496 + .flags = IORESOURCE_IRQ, 497 + }, 498 + { 499 + .start = IRQ_GPIOBNK1, 500 + .end = IRQ_GPIOBNK1, 501 + .flags = IORESOURCE_IRQ, 502 + }, 503 + { 504 + .start = IRQ_GPIOBNK2, 505 + .end = IRQ_GPIOBNK2, 506 + .flags = IORESOURCE_IRQ, 507 + }, 508 + { 509 + .start = IRQ_GPIOBNK3, 510 + .end = IRQ_GPIOBNK3, 511 + .flags = IORESOURCE_IRQ, 512 + }, 513 + { 514 + .start = IRQ_GPIOBNK4, 495 515 .end = IRQ_GPIOBNK4, 496 516 .flags = IORESOURCE_IRQ, 497 517 }, 498 518 }; 499 519 500 520 static struct davinci_gpio_platform_data dm644_gpio_platform_data = { 521 + .no_auto_base = true, 522 + .base = 0, 501 523 .ngpio = 71, 502 524 }; 503 525
+12
arch/arm/mach-davinci/dm646x.c
··· 442 442 }, 443 443 { /* interrupt */ 444 444 .start = IRQ_DM646X_GPIOBNK0, 445 + .end = IRQ_DM646X_GPIOBNK0, 446 + .flags = IORESOURCE_IRQ, 447 + }, 448 + { 449 + .start = IRQ_DM646X_GPIOBNK1, 450 + .end = IRQ_DM646X_GPIOBNK1, 451 + .flags = IORESOURCE_IRQ, 452 + }, 453 + { 454 + .start = IRQ_DM646X_GPIOBNK2, 445 455 .end = IRQ_DM646X_GPIOBNK2, 446 456 .flags = IORESOURCE_IRQ, 447 457 }, 448 458 }; 449 459 450 460 static struct davinci_gpio_platform_data dm646x_gpio_platform_data = { 461 + .no_auto_base = true, 462 + .base = 0, 451 463 .ngpio = 43, 452 464 }; 453 465
+3
arch/arm/mach-omap1/board-ams-delta.c
··· 750 750 struct modem_private_data *priv = port->private_data; 751 751 int ret; 752 752 753 + if (!priv) 754 + return; 755 + 753 756 if (IS_ERR(priv->regulator)) 754 757 return; 755 758
+53 -58
arch/arm/mach-omap2/display.c
··· 209 209 210 210 return 0; 211 211 } 212 - #else 213 - static inline int omapdss_init_fbdev(void) 212 + 213 + static const char * const omapdss_compat_names[] __initconst = { 214 + "ti,omap2-dss", 215 + "ti,omap3-dss", 216 + "ti,omap4-dss", 217 + "ti,omap5-dss", 218 + "ti,dra7-dss", 219 + }; 220 + 221 + static struct device_node * __init omapdss_find_dss_of_node(void) 214 222 { 215 - return 0; 223 + struct device_node *node; 224 + int i; 225 + 226 + for (i = 0; i < ARRAY_SIZE(omapdss_compat_names); ++i) { 227 + node = of_find_compatible_node(NULL, NULL, 228 + omapdss_compat_names[i]); 229 + if (node) 230 + return node; 231 + } 232 + 233 + return NULL; 216 234 } 235 + 236 + static int __init omapdss_init_of(void) 237 + { 238 + int r; 239 + struct device_node *node; 240 + struct platform_device *pdev; 241 + 242 + /* only create dss helper devices if dss is enabled in the .dts */ 243 + 244 + node = omapdss_find_dss_of_node(); 245 + if (!node) 246 + return 0; 247 + 248 + if (!of_device_is_available(node)) 249 + return 0; 250 + 251 + pdev = of_find_device_by_node(node); 252 + 253 + if (!pdev) { 254 + pr_err("Unable to find DSS platform device\n"); 255 + return -ENODEV; 256 + } 257 + 258 + r = of_platform_populate(node, NULL, NULL, &pdev->dev); 259 + if (r) { 260 + pr_err("Unable to populate DSS submodule devices\n"); 261 + return r; 262 + } 263 + 264 + return omapdss_init_fbdev(); 265 + } 266 + omap_device_initcall(omapdss_init_of); 217 267 #endif /* CONFIG_FB_OMAP2 */ 218 268 219 269 static void dispc_disable_outputs(void) ··· 411 361 412 362 return r; 413 363 } 414 - 415 - static const char * const omapdss_compat_names[] __initconst = { 416 - "ti,omap2-dss", 417 - "ti,omap3-dss", 418 - "ti,omap4-dss", 419 - "ti,omap5-dss", 420 - "ti,dra7-dss", 421 - }; 422 - 423 - static struct device_node * __init omapdss_find_dss_of_node(void) 424 - { 425 - struct device_node *node; 426 - int i; 427 - 428 - for (i = 0; i < ARRAY_SIZE(omapdss_compat_names); ++i) { 429 - node = 
of_find_compatible_node(NULL, NULL, 430 - omapdss_compat_names[i]); 431 - if (node) 432 - return node; 433 - } 434 - 435 - return NULL; 436 - } 437 - 438 - static int __init omapdss_init_of(void) 439 - { 440 - int r; 441 - struct device_node *node; 442 - struct platform_device *pdev; 443 - 444 - /* only create dss helper devices if dss is enabled in the .dts */ 445 - 446 - node = omapdss_find_dss_of_node(); 447 - if (!node) 448 - return 0; 449 - 450 - if (!of_device_is_available(node)) 451 - return 0; 452 - 453 - pdev = of_find_device_by_node(node); 454 - 455 - if (!pdev) { 456 - pr_err("Unable to find DSS platform device\n"); 457 - return -ENODEV; 458 - } 459 - 460 - r = of_platform_populate(node, NULL, NULL, &pdev->dev); 461 - if (r) { 462 - pr_err("Unable to populate DSS submodule devices\n"); 463 - return r; 464 - } 465 - 466 - return omapdss_init_fbdev(); 467 - } 468 - omap_device_initcall(omapdss_init_of);
+1 -1
arch/arm/mach-omap2/prm44xx.c
··· 351 351 * to occur, WAKEUPENABLE bits must be set in the pad mux registers, and 352 352 * omap44xx_prm_reconfigure_io_chain() must be called. No return value. 353 353 */ 354 - static void __init omap44xx_prm_enable_io_wakeup(void) 354 + static void omap44xx_prm_enable_io_wakeup(void) 355 355 { 356 356 s32 inst = omap4_prmst_get_prm_dev_inst(); 357 357
+2 -15
arch/arm/mm/proc-v7-bugs.c
··· 52 52 case ARM_CPU_PART_CORTEX_A17: 53 53 case ARM_CPU_PART_CORTEX_A73: 54 54 case ARM_CPU_PART_CORTEX_A75: 55 - if (processor.switch_mm != cpu_v7_bpiall_switch_mm) 56 - goto bl_error; 57 55 per_cpu(harden_branch_predictor_fn, cpu) = 58 56 harden_branch_predictor_bpiall; 59 57 spectre_v2_method = "BPIALL"; ··· 59 61 60 62 case ARM_CPU_PART_CORTEX_A15: 61 63 case ARM_CPU_PART_BRAHMA_B15: 62 - if (processor.switch_mm != cpu_v7_iciallu_switch_mm) 63 - goto bl_error; 64 64 per_cpu(harden_branch_predictor_fn, cpu) = 65 65 harden_branch_predictor_iciallu; 66 66 spectre_v2_method = "ICIALLU"; ··· 84 88 ARM_SMCCC_ARCH_WORKAROUND_1, &res); 85 89 if ((int)res.a0 != 0) 86 90 break; 87 - if (processor.switch_mm != cpu_v7_hvc_switch_mm && cpu) 88 - goto bl_error; 89 91 per_cpu(harden_branch_predictor_fn, cpu) = 90 92 call_hvc_arch_workaround_1; 91 - processor.switch_mm = cpu_v7_hvc_switch_mm; 93 + cpu_do_switch_mm = cpu_v7_hvc_switch_mm; 92 94 spectre_v2_method = "hypervisor"; 93 95 break; 94 96 ··· 95 101 ARM_SMCCC_ARCH_WORKAROUND_1, &res); 96 102 if ((int)res.a0 != 0) 97 103 break; 98 - if (processor.switch_mm != cpu_v7_smc_switch_mm && cpu) 99 - goto bl_error; 100 104 per_cpu(harden_branch_predictor_fn, cpu) = 101 105 call_smc_arch_workaround_1; 102 - processor.switch_mm = cpu_v7_smc_switch_mm; 106 + cpu_do_switch_mm = cpu_v7_smc_switch_mm; 103 107 spectre_v2_method = "firmware"; 104 108 break; 105 109 ··· 111 119 if (spectre_v2_method) 112 120 pr_info("CPU%u: Spectre v2: using %s workaround\n", 113 121 smp_processor_id(), spectre_v2_method); 114 - return; 115 - 116 - bl_error: 117 - pr_err("CPU%u: Spectre v2: incorrect context switching function, system vulnerable\n", 118 - cpu); 119 122 } 120 123 #else 121 124 static void cpu_v7_spectre_init(void)
+1 -1
arch/arm/mm/proc-v7.S
··· 112 112 hvc #0 113 113 ldmfd sp!, {r0 - r3} 114 114 b cpu_v7_switch_mm 115 - ENDPROC(cpu_v7_smc_switch_mm) 115 + ENDPROC(cpu_v7_hvc_switch_mm) 116 116 #endif 117 117 ENTRY(cpu_v7_iciallu_switch_mm) 118 118 mov r3, #0
+1 -1
arch/arm/vfp/vfpmodule.c
··· 573 573 */ 574 574 ufp_exc->fpexc = hwstate->fpexc; 575 575 ufp_exc->fpinst = hwstate->fpinst; 576 - ufp_exc->fpinst2 = ufp_exc->fpinst2; 576 + ufp_exc->fpinst2 = hwstate->fpinst2; 577 577 578 578 /* Ensure that VFP is disabled. */ 579 579 vfp_flush_hwstate(thread);
+25
arch/arm64/Kconfig
··· 497 497 498 498 If unsure, say Y. 499 499 500 + config ARM64_ERRATUM_1286807 501 + bool "Cortex-A76: Modification of the translation table for a virtual address might lead to read-after-read ordering violation" 502 + default y 503 + select ARM64_WORKAROUND_REPEAT_TLBI 504 + help 505 + This option adds workaround for ARM Cortex-A76 erratum 1286807 506 + 507 + On the affected Cortex-A76 cores (r0p0 to r3p0), if a virtual 508 + address for a cacheable mapping of a location is being 509 + accessed by a core while another core is remapping the virtual 510 + address to a new physical page using the recommended 511 + break-before-make sequence, then under very rare circumstances 512 + TLBI+DSB completes before a read using the translation being 513 + invalidated has been observed by other observers. The 514 + workaround repeats the TLBI+DSB operation. 515 + 516 + If unsure, say Y. 517 + 500 518 config CAVIUM_ERRATUM_22375 501 519 bool "Cavium erratum 22375, 24313" 502 520 default y ··· 584 566 is unchanged. Work around the erratum by invalidating the walk cache 585 567 entries for the trampoline before entering the kernel proper. 586 568 569 + config ARM64_WORKAROUND_REPEAT_TLBI 570 + bool 571 + help 572 + Enable the repeat TLBI workaround for Falkor erratum 1009 and 573 + Cortex-A76 erratum 1286807. 574 + 587 575 config QCOM_FALKOR_ERRATUM_1009 588 576 bool "Falkor E1009: Prematurely complete a DSB after a TLBI" 589 577 default y 578 + select ARM64_WORKAROUND_REPEAT_TLBI 590 579 help 591 580 On Falkor v1, the CPU may prematurely complete a DSB following a 592 581 TLBI xxIS invalidate maintenance operation. Repeat the TLBI operation
+3
arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
··· 139 139 clock-names = "stmmaceth"; 140 140 tx-fifo-depth = <16384>; 141 141 rx-fifo-depth = <16384>; 142 + snps,multicast-filter-bins = <256>; 142 143 status = "disabled"; 143 144 }; 144 145 ··· 155 154 clock-names = "stmmaceth"; 156 155 tx-fifo-depth = <16384>; 157 156 rx-fifo-depth = <16384>; 157 + snps,multicast-filter-bins = <256>; 158 158 status = "disabled"; 159 159 }; 160 160 ··· 171 169 clock-names = "stmmaceth"; 172 170 tx-fifo-depth = <16384>; 173 171 rx-fifo-depth = <16384>; 172 + snps,multicast-filter-bins = <256>; 174 173 status = "disabled"; 175 174 }; 176 175
+4
arch/arm64/boot/dts/qcom/msm8998-mtp.dtsi
··· 241 241 }; 242 242 }; 243 243 }; 244 + 245 + &tlmm { 246 + gpio-reserved-ranges = <0 4>, <81 4>; 247 + };
+4
arch/arm64/boot/dts/qcom/sdm845-mtp.dts
··· 352 352 status = "okay"; 353 353 }; 354 354 355 + &tlmm { 356 + gpio-reserved-ranges = <0 4>, <81 4>; 357 + }; 358 + 355 359 &uart9 { 356 360 status = "okay"; 357 361 };
+1 -1
arch/arm64/boot/dts/renesas/r8a7795.dtsi
··· 652 652 clock-names = "fck", "brg_int", "scif_clk"; 653 653 dmas = <&dmac1 0x35>, <&dmac1 0x34>, 654 654 <&dmac2 0x35>, <&dmac2 0x34>; 655 - dma-names = "tx", "rx"; 655 + dma-names = "tx", "rx", "tx", "rx"; 656 656 power-domains = <&sysc R8A7795_PD_ALWAYS_ON>; 657 657 resets = <&cpg 518>; 658 658 status = "disabled";
+24 -23
arch/arm64/boot/dts/renesas/r8a77980-condor.dts
··· 15 15 16 16 aliases { 17 17 serial0 = &scif0; 18 - ethernet0 = &avb; 18 + ethernet0 = &gether; 19 19 }; 20 20 21 21 chosen { ··· 97 97 }; 98 98 }; 99 99 100 - &avb { 101 - pinctrl-0 = <&avb_pins>; 102 - pinctrl-names = "default"; 103 - 104 - phy-mode = "rgmii-id"; 105 - phy-handle = <&phy0>; 106 - renesas,no-ether-link; 107 - status = "okay"; 108 - 109 - phy0: ethernet-phy@0 { 110 - rxc-skew-ps = <1500>; 111 - reg = <0>; 112 - interrupt-parent = <&gpio1>; 113 - interrupts = <17 IRQ_TYPE_LEVEL_LOW>; 114 - }; 115 - }; 116 - 117 100 &canfd { 118 101 pinctrl-0 = <&canfd0_pins>; 119 102 pinctrl-names = "default"; ··· 120 137 121 138 &extalr_clk { 122 139 clock-frequency = <32768>; 140 + }; 141 + 142 + &gether { 143 + pinctrl-0 = <&gether_pins>; 144 + pinctrl-names = "default"; 145 + 146 + phy-mode = "rgmii-id"; 147 + phy-handle = <&phy0>; 148 + renesas,no-ether-link; 149 + status = "okay"; 150 + 151 + phy0: ethernet-phy@0 { 152 + rxc-skew-ps = <1500>; 153 + reg = <0>; 154 + interrupt-parent = <&gpio4>; 155 + interrupts = <23 IRQ_TYPE_LEVEL_LOW>; 156 + }; 123 157 }; 124 158 125 159 &i2c0 { ··· 236 236 }; 237 237 238 238 &pfc { 239 - avb_pins: avb { 240 - groups = "avb_mdio", "avb_rgmii"; 241 - function = "avb"; 242 - }; 243 - 244 239 canfd0_pins: canfd0 { 245 240 groups = "canfd0_data_a"; 246 241 function = "canfd0"; 242 + }; 243 + 244 + gether_pins: gether { 245 + groups = "gether_mdio_a", "gether_rgmii", 246 + "gether_txcrefclk", "gether_txcrefclk_mega"; 247 + function = "gether"; 247 248 }; 248 249 249 250 i2c0_pins: i2c0 {
+1 -1
arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
··· 153 153 }; 154 154 155 155 &pcie0 { 156 - ep-gpios = <&gpio4 RK_PC6 GPIO_ACTIVE_LOW>; 156 + ep-gpios = <&gpio4 RK_PC6 GPIO_ACTIVE_HIGH>; 157 157 num-lanes = <4>; 158 158 pinctrl-names = "default"; 159 159 pinctrl-0 = <&pcie_clkreqn_cpm>;
-12
arch/arm64/boot/dts/rockchip/rk3399-rock960.dtsi
··· 57 57 regulator-always-on; 58 58 vin-supply = <&vcc_sys>; 59 59 }; 60 - 61 - vdd_log: vdd-log { 62 - compatible = "pwm-regulator"; 63 - pwms = <&pwm2 0 25000 0>; 64 - regulator-name = "vdd_log"; 65 - regulator-min-microvolt = <800000>; 66 - regulator-max-microvolt = <1400000>; 67 - regulator-always-on; 68 - regulator-boot-on; 69 - vin-supply = <&vcc_sys>; 70 - }; 71 - 72 60 }; 73 61 74 62 &cpu_l0 {
+1 -1
arch/arm64/boot/dts/ti/k3-am65-wakeup.dtsi
··· 36 36 37 37 wkup_uart0: serial@42300000 { 38 38 compatible = "ti,am654-uart"; 39 - reg = <0x00 0x42300000 0x00 0x100>; 39 + reg = <0x42300000 0x100>; 40 40 reg-shift = <2>; 41 41 reg-io-width = <4>; 42 42 interrupts = <GIC_SPI 697 IRQ_TYPE_LEVEL_HIGH>;
+13
arch/arm64/include/asm/ftrace.h
··· 56 56 { 57 57 return is_compat_task(); 58 58 } 59 + 60 + #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME 61 + 62 + static inline bool arch_syscall_match_sym_name(const char *sym, 63 + const char *name) 64 + { 65 + /* 66 + * Since all syscall functions have __arm64_ prefix, we must skip it. 67 + * However, as we described above, we decided to ignore compat 68 + * syscalls, so we don't care about __arm64_compat_ prefix here. 69 + */ 70 + return !strcmp(sym + 8, name); 71 + } 59 72 #endif /* ifndef __ASSEMBLY__ */ 60 73 61 74 #endif /* __ASM_FTRACE_H */
+8
arch/arm64/include/asm/processor.h
··· 24 24 #define KERNEL_DS UL(-1) 25 25 #define USER_DS (TASK_SIZE_64 - 1) 26 26 27 + /* 28 + * On arm64 systems, unaligned accesses by the CPU are cheap, and so there is 29 + * no point in shifting all network buffers by 2 bytes just to make some IP 30 + * header fields appear aligned in memory, potentially sacrificing some DMA 31 + * performance on some platforms. 32 + */ 33 + #define NET_IP_ALIGN 0 34 + 27 35 #ifndef __ASSEMBLY__ 28 36 #ifdef __KERNEL__ 29 37
+2 -2
arch/arm64/include/asm/sysreg.h
··· 468 468 SCTLR_ELx_SA | SCTLR_ELx_I | SCTLR_ELx_WXN | \ 469 469 SCTLR_ELx_DSSBS | ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0) 470 470 471 - #if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffffffffffff 471 + #if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffffffffffffUL 472 472 #error "Inconsistent SCTLR_EL2 set/clear bits" 473 473 #endif 474 474 ··· 509 509 SCTLR_EL1_UMA | SCTLR_ELx_WXN | ENDIAN_CLEAR_EL1 |\ 510 510 SCTLR_ELx_DSSBS | SCTLR_EL1_NTWI | SCTLR_EL1_RES0) 511 511 512 - #if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffff 512 + #if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffffUL 513 513 #error "Inconsistent SCTLR_EL1 set/clear bits" 514 514 #endif 515 515
+2 -2
arch/arm64/include/asm/tlbflush.h
··· 41 41 ALTERNATIVE("nop\n nop", \ 42 42 "dsb ish\n tlbi " #op, \ 43 43 ARM64_WORKAROUND_REPEAT_TLBI, \ 44 - CONFIG_QCOM_FALKOR_ERRATUM_1009) \ 44 + CONFIG_ARM64_WORKAROUND_REPEAT_TLBI) \ 45 45 : : ) 46 46 47 47 #define __TLBI_1(op, arg) asm ("tlbi " #op ", %0\n" \ 48 48 ALTERNATIVE("nop\n nop", \ 49 49 "dsb ish\n tlbi " #op ", %0", \ 50 50 ARM64_WORKAROUND_REPEAT_TLBI, \ 51 - CONFIG_QCOM_FALKOR_ERRATUM_1009) \ 51 + CONFIG_ARM64_WORKAROUND_REPEAT_TLBI) \ 52 52 : : "r" (arg)) 53 53 54 54 #define __TLBI_N(op, arg, n, ...) __TLBI_##n(op, arg)
+17 -3
arch/arm64/kernel/cpu_errata.c
··· 570 570 571 571 #endif 572 572 573 + #ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI 574 + 575 + static const struct midr_range arm64_repeat_tlbi_cpus[] = { 576 + #ifdef CONFIG_QCOM_FALKOR_ERRATUM_1009 577 + MIDR_RANGE(MIDR_QCOM_FALKOR_V1, 0, 0, 0, 0), 578 + #endif 579 + #ifdef CONFIG_ARM64_ERRATUM_1286807 580 + MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 0), 581 + #endif 582 + {}, 583 + }; 584 + 585 + #endif 586 + 573 587 const struct arm64_cpu_capabilities arm64_errata[] = { 574 588 #if defined(CONFIG_ARM64_ERRATUM_826319) || \ 575 589 defined(CONFIG_ARM64_ERRATUM_827319) || \ ··· 709 695 .matches = is_kryo_midr, 710 696 }, 711 697 #endif 712 - #ifdef CONFIG_QCOM_FALKOR_ERRATUM_1009 698 + #ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI 713 699 { 714 - .desc = "Qualcomm Technologies Falkor erratum 1009", 700 + .desc = "Qualcomm erratum 1009, ARM erratum 1286807", 715 701 .capability = ARM64_WORKAROUND_REPEAT_TLBI, 716 - ERRATA_MIDR_REV(MIDR_QCOM_FALKOR_V1, 0, 0), 702 + ERRATA_MIDR_RANGE_LIST(arm64_repeat_tlbi_cpus), 717 703 }, 718 704 #endif 719 705 #ifdef CONFIG_ARM64_ERRATUM_858921
+1 -1
arch/arm64/kernel/cpufeature.c
··· 1333 1333 .cpu_enable = cpu_enable_hw_dbm, 1334 1334 }, 1335 1335 #endif 1336 - #ifdef CONFIG_ARM64_SSBD 1337 1336 { 1338 1337 .desc = "CRC32 instructions", 1339 1338 .capability = ARM64_HAS_CRC32, ··· 1342 1343 .field_pos = ID_AA64ISAR0_CRC32_SHIFT, 1343 1344 .min_field_value = 1, 1344 1345 }, 1346 + #ifdef CONFIG_ARM64_SSBD 1345 1347 { 1346 1348 .desc = "Speculative Store Bypassing Safe (SSBS)", 1347 1349 .capability = ARM64_SSBS,
+1 -14
arch/arm64/kernel/ftrace.c
··· 216 216 { 217 217 unsigned long return_hooker = (unsigned long)&return_to_handler; 218 218 unsigned long old; 219 - struct ftrace_graph_ent trace; 220 - int err; 221 219 222 220 if (unlikely(atomic_read(&current->tracing_graph_pause))) 223 221 return; ··· 227 229 */ 228 230 old = *parent; 229 231 230 - trace.func = self_addr; 231 - trace.depth = current->curr_ret_stack + 1; 232 - 233 - /* Only trace if the calling function expects to */ 234 - if (!ftrace_graph_entry(&trace)) 235 - return; 236 - 237 - err = ftrace_push_return_trace(old, self_addr, &trace.depth, 238 - frame_pointer, NULL); 239 - if (err == -EBUSY) 240 - return; 241 - else 232 + if (!function_graph_enter(old, self_addr, frame_pointer, NULL)) 242 233 *parent = return_hooker; 243 234 } 244 235
+1
arch/arm64/kernel/setup.c
··· 313 313 arm64_memblock_init(); 314 314 315 315 paging_init(); 316 + efi_apply_persistent_mem_reservations(); 316 317 317 318 acpi_table_upgrade(); 318 319
-2
arch/arm64/mm/init.c
··· 483 483 high_memory = __va(memblock_end_of_DRAM() - 1) + 1; 484 484 485 485 dma_contiguous_reserve(arm64_dma_phys_limit); 486 - 487 - memblock_allow_resize(); 488 486 } 489 487 490 488 void __init bootmem_init(void)
+2
arch/arm64/mm/mmu.c
··· 659 659 660 660 memblock_free(__pa_symbol(init_pg_dir), 661 661 __pa_symbol(init_pg_end) - __pa_symbol(init_pg_dir)); 662 + 663 + memblock_allow_resize(); 662 664 } 663 665 664 666 /*
+17 -9
arch/arm64/net/bpf_jit_comp.c
··· 351 351 * >0 - successfully JITed a 16-byte eBPF instruction. 352 352 * <0 - failed to JIT. 353 353 */ 354 - static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx) 354 + static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, 355 + bool extra_pass) 355 356 { 356 357 const u8 code = insn->code; 357 358 const u8 dst = bpf2a64[insn->dst_reg]; ··· 626 625 case BPF_JMP | BPF_CALL: 627 626 { 628 627 const u8 r0 = bpf2a64[BPF_REG_0]; 629 - const u64 func = (u64)__bpf_call_base + imm; 628 + bool func_addr_fixed; 629 + u64 func_addr; 630 + int ret; 630 631 631 - if (ctx->prog->is_func) 632 - emit_addr_mov_i64(tmp, func, ctx); 632 + ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass, 633 + &func_addr, &func_addr_fixed); 634 + if (ret < 0) 635 + return ret; 636 + if (func_addr_fixed) 637 + /* We can use optimized emission here. */ 638 + emit_a64_mov_i64(tmp, func_addr, ctx); 633 639 else 634 - emit_a64_mov_i64(tmp, func, ctx); 640 + emit_addr_mov_i64(tmp, func_addr, ctx); 635 641 emit(A64_BLR(tmp), ctx); 636 642 emit(A64_MOV(1, r0, A64_R(0)), ctx); 637 643 break; ··· 761 753 return 0; 762 754 } 763 755 764 - static int build_body(struct jit_ctx *ctx) 756 + static int build_body(struct jit_ctx *ctx, bool extra_pass) 765 757 { 766 758 const struct bpf_prog *prog = ctx->prog; 767 759 int i; ··· 770 762 const struct bpf_insn *insn = &prog->insnsi[i]; 771 763 int ret; 772 764 773 - ret = build_insn(insn, ctx); 765 + ret = build_insn(insn, ctx, extra_pass); 774 766 if (ret > 0) { 775 767 i++; 776 768 if (ctx->image == NULL) ··· 866 858 /* 1. Initial fake pass to compute ctx->idx. */ 867 859 868 860 /* Fake pass to fill in ctx->offset. 
*/ 869 - if (build_body(&ctx)) { 861 + if (build_body(&ctx, extra_pass)) { 870 862 prog = orig_prog; 871 863 goto out_off; 872 864 } ··· 896 888 897 889 build_prologue(&ctx, was_classic); 898 890 899 - if (build_body(&ctx)) { 891 + if (build_body(&ctx, extra_pass)) { 900 892 bpf_jit_binary_free(header); 901 893 prog = orig_prog; 902 894 goto out_off;
+3 -1
arch/ia64/include/asm/numa.h
··· 59 59 */ 60 60 61 61 extern u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES]; 62 - #define node_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)]) 62 + #define slit_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)]) 63 + extern int __node_distance(int from, int to); 64 + #define node_distance(from,to) __node_distance(from, to) 63 65 64 66 extern int paddr_to_nid(unsigned long paddr); 65 67
+3 -3
arch/ia64/kernel/acpi.c
··· 578 578 if (!slit_table) { 579 579 for (i = 0; i < MAX_NUMNODES; i++) 580 580 for (j = 0; j < MAX_NUMNODES; j++) 581 - node_distance(i, j) = i == j ? LOCAL_DISTANCE : 582 - REMOTE_DISTANCE; 581 + slit_distance(i, j) = i == j ? 582 + LOCAL_DISTANCE : REMOTE_DISTANCE; 583 583 return; 584 584 } 585 585 ··· 592 592 if (!pxm_bit_test(j)) 593 593 continue; 594 594 node_to = pxm_to_node(j); 595 - node_distance(node_from, node_to) = 595 + slit_distance(node_from, node_to) = 596 596 slit_table->entry[i * slit_table->locality_count + j]; 597 597 } 598 598 }
+6
arch/ia64/mm/numa.c
··· 36 36 */ 37 37 u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES]; 38 38 39 + int __node_distance(int from, int to) 40 + { 41 + return slit_distance(from, to); 42 + } 43 + EXPORT_SYMBOL(__node_distance); 44 + 39 45 /* Identify which cnode a physical address resides on */ 40 46 int 41 47 paddr_to_nid(unsigned long paddr)
+2 -2
arch/m68k/include/asm/pgtable_mm.h
··· 55 55 */ 56 56 #ifdef CONFIG_SUN3 57 57 #define PTRS_PER_PTE 16 58 - #define __PAGETABLE_PMD_FOLDED 58 + #define __PAGETABLE_PMD_FOLDED 1 59 59 #define PTRS_PER_PMD 1 60 60 #define PTRS_PER_PGD 2048 61 61 #elif defined(CONFIG_COLDFIRE) 62 62 #define PTRS_PER_PTE 512 63 - #define __PAGETABLE_PMD_FOLDED 63 + #define __PAGETABLE_PMD_FOLDED 1 64 64 #define PTRS_PER_PMD 1 65 65 #define PTRS_PER_PGD 1024 66 66 #else
+1 -1
arch/microblaze/include/asm/pgtable.h
··· 63 63 64 64 #include <asm-generic/4level-fixup.h> 65 65 66 - #define __PAGETABLE_PMD_FOLDED 66 + #define __PAGETABLE_PMD_FOLDED 1 67 67 68 68 #ifdef __KERNEL__ 69 69 #ifndef __ASSEMBLY__
+2 -13
arch/microblaze/kernel/ftrace.c
··· 22 22 void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr) 23 23 { 24 24 unsigned long old; 25 - int faulted, err; 26 - struct ftrace_graph_ent trace; 25 + int faulted; 27 26 unsigned long return_hooker = (unsigned long) 28 27 &return_to_handler; 29 28 ··· 62 63 return; 63 64 } 64 65 65 - err = ftrace_push_return_trace(old, self_addr, &trace.depth, 0, NULL); 66 - if (err == -EBUSY) { 66 + if (function_graph_enter(old, self_addr, 0, NULL)) 67 67 *parent = old; 68 - return; 69 - } 70 - 71 - trace.func = self_addr; 72 - /* Only trace if the calling function expects to */ 73 - if (!ftrace_graph_entry(&trace)) { 74 - current->curr_ret_stack--; 75 - *parent = old; 76 - } 77 68 } 78 69 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ 79 70
+1 -1
arch/mips/cavium-octeon/executive/cvmx-helper.c
··· 67 67 void (*cvmx_override_ipd_port_setup) (int ipd_port); 68 68 69 69 /* Port count per interface */ 70 - static int interface_port_count[5]; 70 + static int interface_port_count[9]; 71 71 72 72 /** 73 73 * Return the number of interfaces the chip has. Each interface
+1
arch/mips/configs/cavium_octeon_defconfig
··· 140 140 CONFIG_RTC_DRV_DS1307=y 141 141 CONFIG_STAGING=y 142 142 CONFIG_OCTEON_ETHERNET=y 143 + CONFIG_OCTEON_USB=y 143 144 # CONFIG_IOMMU_SUPPORT is not set 144 145 CONFIG_RAS=y 145 146 CONFIG_EXT4_FS=y
+1 -1
arch/mips/include/asm/syscall.h
··· 73 73 #ifdef CONFIG_64BIT 74 74 case 4: case 5: case 6: case 7: 75 75 #ifdef CONFIG_MIPS32_O32 76 - if (test_thread_flag(TIF_32BIT_REGS)) 76 + if (test_tsk_thread_flag(task, TIF_32BIT_REGS)) 77 77 return get_user(*arg, (int *)usp + n); 78 78 else 79 79 #endif
+2 -12
arch/mips/kernel/ftrace.c
··· 322 322 unsigned long fp) 323 323 { 324 324 unsigned long old_parent_ra; 325 - struct ftrace_graph_ent trace; 326 325 unsigned long return_hooker = (unsigned long) 327 326 &return_to_handler; 328 327 int faulted, insns; ··· 368 369 if (unlikely(faulted)) 369 370 goto out; 370 371 371 - if (ftrace_push_return_trace(old_parent_ra, self_ra, &trace.depth, fp, 372 - NULL) == -EBUSY) { 373 - *parent_ra_addr = old_parent_ra; 374 - return; 375 - } 376 - 377 372 /* 378 373 * Get the recorded ip of the current mcount calling site in the 379 374 * __mcount_loc section, which will be used to filter the function ··· 375 382 */ 376 383 377 384 insns = core_kernel_text(self_ra) ? 2 : MCOUNT_OFFSET_INSNS + 1; 378 - trace.func = self_ra - (MCOUNT_INSN_SIZE * insns); 385 + self_ra -= (MCOUNT_INSN_SIZE * insns); 379 386 380 - /* Only trace if the calling function expects to */ 381 - if (!ftrace_graph_entry(&trace)) { 382 - current->curr_ret_stack--; 387 + if (function_graph_enter(old_parent_ra, self_ra, fp, NULL)) 383 388 *parent_ra_addr = old_parent_ra; 384 - } 385 389 return; 386 390 out: 387 391 ftrace_graph_stop();
+1
arch/mips/kernel/setup.c
··· 794 794 795 795 /* call board setup routine */ 796 796 plat_mem_setup(); 797 + memblock_set_bottom_up(true); 797 798 798 799 /* 799 800 * Make sure all kernel memory is in the maps. The "UP" and
+1 -2
arch/mips/kernel/traps.c
··· 2260 2260 unsigned long size = 0x200 + VECTORSPACING*64; 2261 2261 phys_addr_t ebase_pa; 2262 2262 2263 - memblock_set_bottom_up(true); 2264 2263 ebase = (unsigned long) 2265 2264 memblock_alloc_from(size, 1 << fls(size), 0); 2266 - memblock_set_bottom_up(false); 2267 2265 2268 2266 /* 2269 2267 * Try to ensure ebase resides in KSeg0 if possible. ··· 2305 2307 if (board_ebase_setup) 2306 2308 board_ebase_setup(); 2307 2309 per_cpu_trap_init(true); 2310 + memblock_set_bottom_up(false); 2308 2311 2309 2312 /* 2310 2313 * Copy the generic exception handlers to their final destination.
+2 -10
arch/mips/loongson64/loongson-3/numa.c
··· 231 231 cpumask_clear(&__node_data[(node)]->cpumask); 232 232 } 233 233 } 234 + max_low_pfn = PHYS_PFN(memblock_end_of_DRAM()); 235 + 234 236 for (cpu = 0; cpu < loongson_sysconf.nr_cpus; cpu++) { 235 237 node = cpu / loongson_sysconf.cores_per_node; 236 238 if (node >= num_online_nodes()) ··· 250 248 251 249 void __init paging_init(void) 252 250 { 253 - unsigned node; 254 251 unsigned long zones_size[MAX_NR_ZONES] = {0, }; 255 252 256 253 pagetable_init(); 257 - 258 - for_each_online_node(node) { 259 - unsigned long start_pfn, end_pfn; 260 - 261 - get_pfn_range_for_nid(node, &start_pfn, &end_pfn); 262 - 263 - if (end_pfn > max_low_pfn) 264 - max_low_pfn = end_pfn; 265 - } 266 254 #ifdef CONFIG_ZONE_DMA32 267 255 zones_size[ZONE_DMA32] = MAX_DMA32_PFN; 268 256 #endif
+1 -1
arch/mips/mm/dma-noncoherent.c
··· 50 50 void *ret; 51 51 52 52 ret = dma_direct_alloc_pages(dev, size, dma_handle, gfp, attrs); 53 - if (!ret && !(attrs & DMA_ATTR_NON_CONSISTENT)) { 53 + if (ret && !(attrs & DMA_ATTR_NON_CONSISTENT)) { 54 54 dma_cache_wback_inv((unsigned long) ret, size); 55 55 ret = (void *)UNCAC_ADDR(ret); 56 56 }
+1 -1
arch/mips/ralink/mt7620.c
··· 84 84 }; 85 85 static struct rt2880_pmx_func nd_sd_grp[] = { 86 86 FUNC("nand", MT7620_GPIO_MODE_NAND, 45, 15), 87 - FUNC("sd", MT7620_GPIO_MODE_SD, 45, 15) 87 + FUNC("sd", MT7620_GPIO_MODE_SD, 47, 13) 88 88 }; 89 89 90 90 static struct rt2880_pmx_group mt7620a_pinmux_data[] = {
+1 -10
arch/mips/sgi-ip27/ip27-memory.c
··· 435 435 436 436 mlreset(); 437 437 szmem(); 438 + max_low_pfn = PHYS_PFN(memblock_end_of_DRAM()); 438 439 439 440 for (node = 0; node < MAX_COMPACT_NODES; node++) { 440 441 if (node_online(node)) { ··· 456 455 void __init paging_init(void) 457 456 { 458 457 unsigned long zones_size[MAX_NR_ZONES] = {0, }; 459 - unsigned node; 460 458 461 459 pagetable_init(); 462 - 463 - for_each_online_node(node) { 464 - unsigned long start_pfn, end_pfn; 465 - 466 - get_pfn_range_for_nid(node, &start_pfn, &end_pfn); 467 - 468 - if (end_pfn > max_low_pfn) 469 - max_low_pfn = end_pfn; 470 - } 471 460 zones_size[ZONE_NORMAL] = max_low_pfn; 472 461 free_area_init_nodes(zones_size); 473 462 }
+1 -1
arch/nds32/include/asm/pgtable.h
··· 4 4 #ifndef _ASMNDS32_PGTABLE_H 5 5 #define _ASMNDS32_PGTABLE_H 6 6 7 - #define __PAGETABLE_PMD_FOLDED 7 + #define __PAGETABLE_PMD_FOLDED 1 8 8 #include <asm-generic/4level-fixup.h> 9 9 #include <asm-generic/sizes.h> 10 10
+2 -16
arch/nds32/kernel/ftrace.c
··· 211 211 unsigned long frame_pointer) 212 212 { 213 213 unsigned long return_hooker = (unsigned long)&return_to_handler; 214 - struct ftrace_graph_ent trace; 215 214 unsigned long old; 216 - int err; 217 215 218 216 if (unlikely(atomic_read(&current->tracing_graph_pause))) 219 217 return; 220 218 221 219 old = *parent; 222 220 223 - trace.func = self_addr; 224 - trace.depth = current->curr_ret_stack + 1; 225 - 226 - /* Only trace if the calling function expects to */ 227 - if (!ftrace_graph_entry(&trace)) 228 - return; 229 - 230 - err = ftrace_push_return_trace(old, self_addr, &trace.depth, 231 - frame_pointer, NULL); 232 - 233 - if (err == -EBUSY) 234 - return; 235 - 236 - *parent = return_hooker; 221 + if (!function_graph_enter(old, self_addr, frame_pointer, NULL)) 222 + *parent = return_hooker; 237 223 } 238 224 239 225 noinline void ftrace_graph_caller(void)
+1 -1
arch/parisc/include/asm/pgtable.h
··· 111 111 #if CONFIG_PGTABLE_LEVELS == 3 112 112 #define BITS_PER_PMD (PAGE_SHIFT + PMD_ORDER - BITS_PER_PMD_ENTRY) 113 113 #else 114 - #define __PAGETABLE_PMD_FOLDED 114 + #define __PAGETABLE_PMD_FOLDED 1 115 115 #define BITS_PER_PMD 0 116 116 #endif 117 117 #define PTRS_PER_PMD (1UL << BITS_PER_PMD)
+2 -2
arch/parisc/include/asm/spinlock.h
··· 37 37 volatile unsigned int *a; 38 38 39 39 a = __ldcw_align(x); 40 - /* Release with ordered store. */ 41 - __asm__ __volatile__("stw,ma %0,0(%1)" : : "r"(1), "r"(a) : "memory"); 40 + mb(); 41 + *a = 1; 42 42 } 43 43 44 44 static inline int arch_spin_trylock(arch_spinlock_t *x)
+3 -14
arch/parisc/kernel/ftrace.c
··· 30 30 unsigned long self_addr) 31 31 { 32 32 unsigned long old; 33 - struct ftrace_graph_ent trace; 34 33 extern int parisc_return_to_handler; 35 34 36 35 if (unlikely(ftrace_graph_is_dead())) ··· 40 41 41 42 old = *parent; 42 43 43 - trace.func = self_addr; 44 - trace.depth = current->curr_ret_stack + 1; 45 - 46 - /* Only trace if the calling function expects to */ 47 - if (!ftrace_graph_entry(&trace)) 48 - return; 49 - 50 - if (ftrace_push_return_trace(old, self_addr, &trace.depth, 51 - 0, NULL) == -EBUSY) 52 - return; 53 - 54 - /* activate parisc_return_to_handler() as return point */ 55 - *parent = (unsigned long) &parisc_return_to_handler; 44 + if (!function_graph_enter(old, self_addr, 0, NULL)) 45 + /* activate parisc_return_to_handler() as return point */ 46 + *parent = (unsigned long) &parisc_return_to_handler; 56 47 } 57 48 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ 58 49
+8 -4
arch/parisc/kernel/syscall.S
··· 640 640 sub,<> %r28, %r25, %r0 641 641 2: stw %r24, 0(%r26) 642 642 /* Free lock */ 643 - stw,ma %r20, 0(%sr2,%r20) 643 + sync 644 + stw %r20, 0(%sr2,%r20) 644 645 #if ENABLE_LWS_DEBUG 645 646 /* Clear thread register indicator */ 646 647 stw %r0, 4(%sr2,%r20) ··· 655 654 3: 656 655 /* Error occurred on load or store */ 657 656 /* Free lock */ 658 - stw,ma %r20, 0(%sr2,%r20) 657 + sync 658 + stw %r20, 0(%sr2,%r20) 659 659 #if ENABLE_LWS_DEBUG 660 660 stw %r0, 4(%sr2,%r20) 661 661 #endif ··· 857 855 858 856 cas2_end: 859 857 /* Free lock */ 860 - stw,ma %r20, 0(%sr2,%r20) 858 + sync 859 + stw %r20, 0(%sr2,%r20) 861 860 /* Enable interrupts */ 862 861 ssm PSW_SM_I, %r0 863 862 /* Return to userspace, set no error */ ··· 868 865 22: 869 866 /* Error occurred on load or store */ 870 867 /* Free lock */ 871 - stw,ma %r20, 0(%sr2,%r20) 868 + sync 869 + stw %r20, 0(%sr2,%r20) 872 870 ssm PSW_SM_I, %r0 873 871 ldo 1(%r0),%r28 874 872 b lws_exit
+7 -13
arch/powerpc/include/asm/io.h
··· 268 268 * their hooks, a bitfield is reserved for use by the platform near the 269 269 * top of MMIO addresses (not PIO, those have to cope the hard way). 270 270 * 271 - * This bit field is 12 bits and is at the top of the IO virtual 272 - * addresses PCI_IO_INDIRECT_TOKEN_MASK. 271 + * The highest address in the kernel virtual space are: 273 272 * 274 - * The kernel virtual space is thus: 273 + * d0003fffffffffff # with Hash MMU 274 + * c00fffffffffffff # with Radix MMU 275 275 * 276 - * 0xD000000000000000 : vmalloc 277 - * 0xD000080000000000 : PCI PHB IO space 278 - * 0xD000080080000000 : ioremap 279 - * 0xD0000fffffffffff : end of ioremap region 280 - * 281 - * Since the top 4 bits are reserved as the region ID, we use thus 282 - * the next 12 bits and keep 4 bits available for the future if the 283 - * virtual address space is ever to be extended. 276 + * The top 4 bits are reserved as the region ID on hash, leaving us 8 bits 277 + * that can be used for the field. 284 278 * 285 279 * The direct IO mapping operations will then mask off those bits 286 280 * before doing the actual access, though that only happen when ··· 286 292 */ 287 293 288 294 #ifdef CONFIG_PPC_INDIRECT_MMIO 289 - #define PCI_IO_IND_TOKEN_MASK 0x0fff000000000000ul 290 - #define PCI_IO_IND_TOKEN_SHIFT 48 295 + #define PCI_IO_IND_TOKEN_SHIFT 52 296 + #define PCI_IO_IND_TOKEN_MASK (0xfful << PCI_IO_IND_TOKEN_SHIFT) 291 297 #define PCI_FIX_ADDR(addr) \ 292 298 ((PCI_IO_ADDR)(((unsigned long)(addr)) & ~PCI_IO_IND_TOKEN_MASK)) 293 299 #define PCI_GET_ADDR_TOKEN(addr) \
+2
arch/powerpc/include/asm/ppc-opcode.h
··· 493 493 __PPC_RS(t) | __PPC_RA0(a) | __PPC_RB(b)) 494 494 #define PPC_SLBFEE_DOT(t, b) stringify_in_c(.long PPC_INST_SLBFEE | \ 495 495 __PPC_RT(t) | __PPC_RB(b)) 496 + #define __PPC_SLBFEE_DOT(t, b) stringify_in_c(.long PPC_INST_SLBFEE | \ 497 + ___PPC_RT(t) | ___PPC_RB(b)) 496 498 #define PPC_ICBT(c,a,b) stringify_in_c(.long PPC_INST_ICBT | \ 497 499 __PPC_CT(c) | __PPC_RA0(a) | __PPC_RB(b)) 498 500 /* PASemi instructions */
+1
arch/powerpc/include/asm/ptrace.h
··· 54 54 55 55 #ifdef CONFIG_PPC64 56 56 unsigned long ppr; 57 + unsigned long __pad; /* Maintain 16 byte interrupt stack alignment */ 57 58 #endif 58 59 }; 59 60 #endif
+2
arch/powerpc/kernel/setup_64.c
··· 636 636 { 637 637 unsigned long pa; 638 638 639 + BUILD_BUG_ON(STACK_INT_FRAME_SIZE % 16); 640 + 639 641 pa = memblock_alloc_base_nid(THREAD_SIZE, THREAD_SIZE, limit, 640 642 early_cpu_to_node(cpu), MEMBLOCK_NONE); 641 643 if (!pa) {
+2 -13
arch/powerpc/kernel/trace/ftrace.c
··· 950 950 */ 951 951 unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip) 952 952 { 953 - struct ftrace_graph_ent trace; 954 953 unsigned long return_hooker; 955 954 956 955 if (unlikely(ftrace_graph_is_dead())) ··· 960 961 961 962 return_hooker = ppc_function_entry(return_to_handler); 962 963 963 - trace.func = ip; 964 - trace.depth = current->curr_ret_stack + 1; 965 - 966 - /* Only trace if the calling function expects to */ 967 - if (!ftrace_graph_entry(&trace)) 968 - goto out; 969 - 970 - if (ftrace_push_return_trace(parent, ip, &trace.depth, 0, 971 - NULL) == -EBUSY) 972 - goto out; 973 - 974 - parent = return_hooker; 964 + if (!function_graph_enter(parent, ip, 0, NULL)) 965 + parent = return_hooker; 975 966 out: 976 967 return parent; 977 968 }
+1
arch/powerpc/kvm/book3s_hv.c
··· 983 983 ret = kvmhv_enter_nested_guest(vcpu); 984 984 if (ret == H_INTERRUPT) { 985 985 kvmppc_set_gpr(vcpu, 3, 0); 986 + vcpu->arch.hcall_needed = 0; 986 987 return -EINTR; 987 988 } 988 989 break;
+6 -2
arch/powerpc/kvm/trace.h
··· 6 6 7 7 #undef TRACE_SYSTEM 8 8 #define TRACE_SYSTEM kvm 9 - #define TRACE_INCLUDE_PATH . 10 - #define TRACE_INCLUDE_FILE trace 11 9 12 10 /* 13 11 * Tracepoint for guest mode entry. ··· 118 120 #endif /* _TRACE_KVM_H */ 119 121 120 122 /* This part must be outside protection */ 123 + #undef TRACE_INCLUDE_PATH 124 + #undef TRACE_INCLUDE_FILE 125 + 126 + #define TRACE_INCLUDE_PATH . 127 + #define TRACE_INCLUDE_FILE trace 128 + 121 129 #include <trace/define_trace.h>
+7 -2
arch/powerpc/kvm/trace_booke.h
··· 6 6 7 7 #undef TRACE_SYSTEM 8 8 #define TRACE_SYSTEM kvm_booke 9 - #define TRACE_INCLUDE_PATH . 10 - #define TRACE_INCLUDE_FILE trace_booke 11 9 12 10 #define kvm_trace_symbol_exit \ 13 11 {0, "CRITICAL"}, \ ··· 216 218 #endif 217 219 218 220 /* This part must be outside protection */ 221 + 222 + #undef TRACE_INCLUDE_PATH 223 + #undef TRACE_INCLUDE_FILE 224 + 225 + #define TRACE_INCLUDE_PATH . 226 + #define TRACE_INCLUDE_FILE trace_booke 227 + 219 228 #include <trace/define_trace.h>
+7 -2
arch/powerpc/kvm/trace_hv.h
··· 9 9 10 10 #undef TRACE_SYSTEM 11 11 #define TRACE_SYSTEM kvm_hv 12 - #define TRACE_INCLUDE_PATH . 13 - #define TRACE_INCLUDE_FILE trace_hv 14 12 15 13 #define kvm_trace_symbol_hcall \ 16 14 {H_REMOVE, "H_REMOVE"}, \ ··· 495 497 #endif /* _TRACE_KVM_HV_H */ 496 498 497 499 /* This part must be outside protection */ 500 + 501 + #undef TRACE_INCLUDE_PATH 502 + #undef TRACE_INCLUDE_FILE 503 + 504 + #define TRACE_INCLUDE_PATH . 505 + #define TRACE_INCLUDE_FILE trace_hv 506 + 498 507 #include <trace/define_trace.h>
+7 -2
arch/powerpc/kvm/trace_pr.h
··· 8 8 9 9 #undef TRACE_SYSTEM 10 10 #define TRACE_SYSTEM kvm_pr 11 - #define TRACE_INCLUDE_PATH . 12 - #define TRACE_INCLUDE_FILE trace_pr 13 11 14 12 TRACE_EVENT(kvm_book3s_reenter, 15 13 TP_PROTO(int r, struct kvm_vcpu *vcpu), ··· 255 257 #endif /* _TRACE_KVM_H */ 256 258 257 259 /* This part must be outside protection */ 260 + 261 + #undef TRACE_INCLUDE_PATH 262 + #undef TRACE_INCLUDE_FILE 263 + 264 + #define TRACE_INCLUDE_PATH . 265 + #define TRACE_INCLUDE_FILE trace_pr 266 + 258 267 #include <trace/define_trace.h>
+1 -1
arch/powerpc/mm/numa.c
··· 1178 1178 1179 1179 switch (rc) { 1180 1180 case H_FUNCTION: 1181 - printk(KERN_INFO 1181 + printk_once(KERN_INFO 1182 1182 "VPHN is not supported. Disabling polling...\n"); 1183 1183 stop_topology_update(); 1184 1184 break;
+14 -21
arch/powerpc/mm/slb.c
··· 19 19 #include <asm/mmu.h> 20 20 #include <asm/mmu_context.h> 21 21 #include <asm/paca.h> 22 + #include <asm/ppc-opcode.h> 22 23 #include <asm/cputable.h> 23 24 #include <asm/cacheflush.h> 24 25 #include <asm/smp.h> ··· 59 58 return __mk_vsid_data(get_kernel_vsid(ea, ssize), ssize, flags); 60 59 } 61 60 62 - static void assert_slb_exists(unsigned long ea) 61 + static void assert_slb_presence(bool present, unsigned long ea) 63 62 { 64 63 #ifdef CONFIG_DEBUG_VM 65 64 unsigned long tmp; 66 65 67 66 WARN_ON_ONCE(mfmsr() & MSR_EE); 68 67 69 - asm volatile("slbfee. %0, %1" : "=r"(tmp) : "r"(ea) : "cr0"); 70 - WARN_ON(tmp == 0); 71 - #endif 72 - } 68 + if (!cpu_has_feature(CPU_FTR_ARCH_206)) 69 + return; 73 70 74 - static void assert_slb_notexists(unsigned long ea) 75 - { 76 - #ifdef CONFIG_DEBUG_VM 77 - unsigned long tmp; 71 + asm volatile(__PPC_SLBFEE_DOT(%0, %1) : "=r"(tmp) : "r"(ea) : "cr0"); 78 72 79 - WARN_ON_ONCE(mfmsr() & MSR_EE); 80 - 81 - asm volatile("slbfee. %0, %1" : "=r"(tmp) : "r"(ea) : "cr0"); 82 - WARN_ON(tmp != 0); 73 + WARN_ON(present == (tmp == 0)); 83 74 #endif 84 75 } 85 76 ··· 107 114 */ 108 115 slb_shadow_update(ea, ssize, flags, index); 109 116 110 - assert_slb_notexists(ea); 117 + assert_slb_presence(false, ea); 111 118 asm volatile("slbmte %0,%1" : 112 119 : "r" (mk_vsid_data(ea, ssize, flags)), 113 120 "r" (mk_esid_data(ea, ssize, index)) ··· 130 137 "r" (be64_to_cpu(p->save_area[index].esid))); 131 138 } 132 139 133 - assert_slb_exists(local_paca->kstack); 140 + assert_slb_presence(true, local_paca->kstack); 134 141 } 135 142 136 143 /* ··· 178 185 :: "r" (be64_to_cpu(p->save_area[KSTACK_INDEX].vsid)), 179 186 "r" (be64_to_cpu(p->save_area[KSTACK_INDEX].esid)) 180 187 : "memory"); 181 - assert_slb_exists(get_paca()->kstack); 188 + assert_slb_presence(true, get_paca()->kstack); 182 189 183 190 get_paca()->slb_cache_ptr = 0; 184 191 ··· 436 443 ea = (unsigned long) 437 444 get_paca()->slb_cache[i] << SID_SHIFT; 438 445 /* 439 - * Could 
assert_slb_exists here, but hypervisor 440 - * or machine check could have come in and 441 - * removed the entry at this point. 446 + * Could assert_slb_presence(true) here, but 447 + * hypervisor or machine check could have come 448 + * in and removed the entry at this point. 442 449 */ 443 450 444 451 slbie_data = ea; ··· 669 676 * User preloads should add isync afterwards in case the kernel 670 677 * accesses user memory before it returns to userspace with rfid. 671 678 */ 672 - assert_slb_notexists(ea); 679 + assert_slb_presence(false, ea); 673 680 asm volatile("slbmte %0, %1" : : "r" (vsid_data), "r" (esid_data)); 674 681 675 682 barrier(); ··· 708 715 return -EFAULT; 709 716 710 717 if (ea < H_VMALLOC_END) 711 - flags = get_paca()->vmalloc_sllp; 718 + flags = local_paca->vmalloc_sllp; 712 719 else 713 720 flags = SLB_VSID_KERNEL | mmu_psize_defs[mmu_io_psize].sllp; 714 721 } else {
+38 -19
arch/powerpc/net/bpf_jit_comp64.c
··· 166 166 PPC_BLR(); 167 167 } 168 168 169 - static void bpf_jit_emit_func_call(u32 *image, struct codegen_context *ctx, u64 func) 169 + static void bpf_jit_emit_func_call_hlp(u32 *image, struct codegen_context *ctx, 170 + u64 func) 171 + { 172 + #ifdef PPC64_ELF_ABI_v1 173 + /* func points to the function descriptor */ 174 + PPC_LI64(b2p[TMP_REG_2], func); 175 + /* Load actual entry point from function descriptor */ 176 + PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_2], 0); 177 + /* ... and move it to LR */ 178 + PPC_MTLR(b2p[TMP_REG_1]); 179 + /* 180 + * Load TOC from function descriptor at offset 8. 181 + * We can clobber r2 since we get called through a 182 + * function pointer (so caller will save/restore r2) 183 + * and since we don't use a TOC ourself. 184 + */ 185 + PPC_BPF_LL(2, b2p[TMP_REG_2], 8); 186 + #else 187 + /* We can clobber r12 */ 188 + PPC_FUNC_ADDR(12, func); 189 + PPC_MTLR(12); 190 + #endif 191 + PPC_BLRL(); 192 + } 193 + 194 + static void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, 195 + u64 func) 170 196 { 171 197 unsigned int i, ctx_idx = ctx->idx; 172 198 ··· 299 273 { 300 274 const struct bpf_insn *insn = fp->insnsi; 301 275 int flen = fp->len; 302 - int i; 276 + int i, ret; 303 277 304 278 /* Start of epilogue code - will only be valid 2nd pass onwards */ 305 279 u32 exit_addr = addrs[flen]; ··· 310 284 u32 src_reg = b2p[insn[i].src_reg]; 311 285 s16 off = insn[i].off; 312 286 s32 imm = insn[i].imm; 287 + bool func_addr_fixed; 288 + u64 func_addr; 313 289 u64 imm64; 314 - u8 *func; 315 290 u32 true_cond; 316 291 u32 tmp_idx; 317 292 ··· 738 711 case BPF_JMP | BPF_CALL: 739 712 ctx->seen |= SEEN_FUNC; 740 713 741 - /* bpf function call */ 742 - if (insn[i].src_reg == BPF_PSEUDO_CALL) 743 - if (!extra_pass) 744 - func = NULL; 745 - else if (fp->aux->func && off < fp->aux->func_cnt) 746 - /* use the subprog id from the off 747 - * field to lookup the callee address 748 - */ 749 - func = (u8 *) 
fp->aux->func[off]->bpf_func; 750 - else 751 - return -EINVAL; 752 - /* kernel helper call */ 714 + ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass, 715 + &func_addr, &func_addr_fixed); 716 + if (ret < 0) 717 + return ret; 718 + 719 + if (func_addr_fixed) 720 + bpf_jit_emit_func_call_hlp(image, ctx, func_addr); 753 721 else 754 - func = (u8 *) __bpf_call_base + imm; 755 - 756 - bpf_jit_emit_func_call(image, ctx, (u64)func); 757 - 722 + bpf_jit_emit_func_call_rel(image, ctx, func_addr); 758 723 /* move return value from r3 to BPF_REG_0 */ 759 724 PPC_MR(b2p[BPF_REG_0], 3); 760 725 break;
+4 -60
arch/powerpc/platforms/powernv/npu-dma.c
··· 102 102 } 103 103 EXPORT_SYMBOL(pnv_pci_get_npu_dev); 104 104 105 - #define NPU_DMA_OP_UNSUPPORTED() \ 106 - dev_err_once(dev, "%s operation unsupported for NVLink devices\n", \ 107 - __func__) 108 - 109 - static void *dma_npu_alloc(struct device *dev, size_t size, 110 - dma_addr_t *dma_handle, gfp_t flag, 111 - unsigned long attrs) 112 - { 113 - NPU_DMA_OP_UNSUPPORTED(); 114 - return NULL; 115 - } 116 - 117 - static void dma_npu_free(struct device *dev, size_t size, 118 - void *vaddr, dma_addr_t dma_handle, 119 - unsigned long attrs) 120 - { 121 - NPU_DMA_OP_UNSUPPORTED(); 122 - } 123 - 124 - static dma_addr_t dma_npu_map_page(struct device *dev, struct page *page, 125 - unsigned long offset, size_t size, 126 - enum dma_data_direction direction, 127 - unsigned long attrs) 128 - { 129 - NPU_DMA_OP_UNSUPPORTED(); 130 - return 0; 131 - } 132 - 133 - static int dma_npu_map_sg(struct device *dev, struct scatterlist *sglist, 134 - int nelems, enum dma_data_direction direction, 135 - unsigned long attrs) 136 - { 137 - NPU_DMA_OP_UNSUPPORTED(); 138 - return 0; 139 - } 140 - 141 - static int dma_npu_dma_supported(struct device *dev, u64 mask) 142 - { 143 - NPU_DMA_OP_UNSUPPORTED(); 144 - return 0; 145 - } 146 - 147 - static u64 dma_npu_get_required_mask(struct device *dev) 148 - { 149 - NPU_DMA_OP_UNSUPPORTED(); 150 - return 0; 151 - } 152 - 153 - static const struct dma_map_ops dma_npu_ops = { 154 - .map_page = dma_npu_map_page, 155 - .map_sg = dma_npu_map_sg, 156 - .alloc = dma_npu_alloc, 157 - .free = dma_npu_free, 158 - .dma_supported = dma_npu_dma_supported, 159 - .get_required_mask = dma_npu_get_required_mask, 160 - }; 161 - 162 105 /* 163 106 * Returns the PE assoicated with the PCI device of the given 164 107 * NPU. Returns the linked pci device if pci_dev != NULL. ··· 213 270 rc = pnv_npu_set_window(npe, 0, gpe->table_group.tables[0]); 214 271 215 272 /* 216 - * We don't initialise npu_pe->tce32_table as we always use 217 - * dma_npu_ops which are nops. 273 + * NVLink devices use the same TCE table configuration as 274 + * their parent device so drivers shouldn't be doing DMA 275 + * operations directly on these devices. 218 276 */ 219 - set_dma_ops(&npe->pdev->dev, &dma_npu_ops); 277 + set_dma_ops(&npe->pdev->dev, NULL); 220 278 } 221 279 222 280 /*
+18 -1
arch/riscv/Makefile
··· 71 71 # arch specific predefines for sparse 72 72 CHECKFLAGS += -D__riscv -D__riscv_xlen=$(BITS) 73 73 74 + # Default target when executing plain make 75 + boot := arch/riscv/boot 76 + KBUILD_IMAGE := $(boot)/Image.gz 77 + 74 78 head-y := arch/riscv/kernel/head.o 75 79 76 80 core-y += arch/riscv/kernel/ arch/riscv/mm/ 77 81 78 82 libs-y += arch/riscv/lib/ 79 83 80 - all: vmlinux 84 + PHONY += vdso_install 85 + vdso_install: 86 + $(Q)$(MAKE) $(build)=arch/riscv/kernel/vdso $@ 87 + 88 + all: Image.gz 89 + 90 + Image: vmlinux 91 + $(Q)$(MAKE) $(build)=$(boot) $(boot)/$@ 92 + 93 + Image.%: Image 94 + $(Q)$(MAKE) $(build)=$(boot) $(boot)/$@ 95 + 96 + zinstall install: 97 + $(Q)$(MAKE) $(build)=$(boot) $@
+2
arch/riscv/boot/.gitignore
··· 1 + Image 2 + Image.gz
+33
arch/riscv/boot/Makefile
··· 1 + # 2 + # arch/riscv/boot/Makefile 3 + # 4 + # This file is included by the global makefile so that you can add your own 5 + # architecture-specific flags and dependencies. 6 + # 7 + # This file is subject to the terms and conditions of the GNU General Public 8 + # License. See the file "COPYING" in the main directory of this archive 9 + # for more details. 10 + # 11 + # Copyright (C) 2018, Anup Patel. 12 + # Author: Anup Patel <anup@brainfault.org> 13 + # 14 + # Based on the ia64 and arm64 boot/Makefile. 15 + # 16 + 17 + OBJCOPYFLAGS_Image :=-O binary -R .note -R .note.gnu.build-id -R .comment -S 18 + 19 + targets := Image 20 + 21 + $(obj)/Image: vmlinux FORCE 22 + $(call if_changed,objcopy) 23 + 24 + $(obj)/Image.gz: $(obj)/Image FORCE 25 + $(call if_changed,gzip) 26 + 27 + install: 28 + $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \ 29 + $(obj)/Image System.map "$(INSTALL_PATH)" 30 + 31 + zinstall: 32 + $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \ 33 + $(obj)/Image.gz System.map "$(INSTALL_PATH)"
+60
arch/riscv/boot/install.sh
··· 1 + #!/bin/sh 2 + # 3 + # arch/riscv/boot/install.sh 4 + # 5 + # This file is subject to the terms and conditions of the GNU General Public 6 + # License. See the file "COPYING" in the main directory of this archive 7 + # for more details. 8 + # 9 + # Copyright (C) 1995 by Linus Torvalds 10 + # 11 + # Adapted from code in arch/i386/boot/Makefile by H. Peter Anvin 12 + # Adapted from code in arch/i386/boot/install.sh by Russell King 13 + # 14 + # "make install" script for the RISC-V Linux port 15 + # 16 + # Arguments: 17 + # $1 - kernel version 18 + # $2 - kernel image file 19 + # $3 - kernel map file 20 + # $4 - default install path (blank if root directory) 21 + # 22 + 23 + verify () { 24 + if [ ! -f "$1" ]; then 25 + echo "" 1>&2 26 + echo " *** Missing file: $1" 1>&2 27 + echo ' *** You need to run "make" before "make install".' 1>&2 28 + echo "" 1>&2 29 + exit 1 30 + fi 31 + } 32 + 33 + # Make sure the files actually exist 34 + verify "$2" 35 + verify "$3" 36 + 37 + # User may have a custom install script 38 + if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi 39 + if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi 40 + 41 + if [ "$(basename $2)" = "Image.gz" ]; then 42 + # Compressed install 43 + echo "Installing compressed kernel" 44 + base=vmlinuz 45 + else 46 + # Normal install 47 + echo "Installing normal kernel" 48 + base=vmlinux 49 + fi 50 + 51 + if [ -f $4/$base-$1 ]; then 52 + mv $4/$base-$1 $4/$base-$1.old 53 + fi 54 + cat $2 > $4/$base-$1 55 + 56 + # Install system map file 57 + if [ -f $4/System.map-$1 ]; then 58 + mv $4/System.map-$1 $4/System.map-$1.old 59 + fi 60 + cp $3 $4/System.map-$1
+1
arch/riscv/configs/defconfig
··· 76 76 CONFIG_NFS_V4_2=y 77 77 CONFIG_ROOT_NFS=y 78 78 CONFIG_CRYPTO_USER_API_HASH=y 79 + CONFIG_PRINTK_TIME=y 79 80 # CONFIG_RCU_TRACE is not set
+1
arch/riscv/include/asm/module.h
··· 8 8 9 9 #define MODULE_ARCH_VERMAGIC "riscv" 10 10 11 + struct module; 11 12 u64 module_emit_got_entry(struct module *mod, u64 val); 12 13 u64 module_emit_plt_entry(struct module *mod, u64 val); 13 14
+2 -2
arch/riscv/include/asm/ptrace.h
··· 56 56 unsigned long sstatus; 57 57 unsigned long sbadaddr; 58 58 unsigned long scause; 59 - /* a0 value before the syscall */ 60 - unsigned long orig_a0; 59 + /* a0 value before the syscall */ 60 + unsigned long orig_a0; 61 61 }; 62 62 63 63 #ifdef CONFIG_64BIT
+2 -2
arch/riscv/include/asm/uaccess.h
··· 400 400 static inline unsigned long 401 401 raw_copy_from_user(void *to, const void __user *from, unsigned long n) 402 402 { 403 - return __asm_copy_to_user(to, from, n); 403 + return __asm_copy_from_user(to, from, n); 404 404 } 405 405 406 406 static inline unsigned long 407 407 raw_copy_to_user(void __user *to, const void *from, unsigned long n) 408 408 { 409 - return __asm_copy_from_user(to, from, n); 409 + return __asm_copy_to_user(to, from, n); 410 410 } 411 411 412 412 extern long strncpy_from_user(char *dest, const char __user *src, long count);
+2 -3
arch/riscv/include/asm/unistd.h
··· 13 13 14 14 /* 15 15 * There is explicitly no include guard here because this file is expected to 16 - * be included multiple times. See uapi/asm/syscalls.h for more info. 16 + * be included multiple times. 17 17 */ 18 18 19 - #define __ARCH_WANT_NEW_STAT 20 19 #define __ARCH_WANT_SYS_CLONE 20 + 21 21 #include <uapi/asm/unistd.h> 22 - #include <uapi/asm/syscalls.h>
+19 -7
arch/riscv/include/uapi/asm/syscalls.h arch/riscv/include/uapi/asm/unistd.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 2 /* 3 - * Copyright (C) 2017-2018 SiFive 3 + * Copyright (C) 2018 David Abdurachmanov <david.abdurachmanov@gmail.com> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 4 16 */ 5 17 6 - /* 7 - * There is explicitly no include guard here because this file is expected to 8 - * be included multiple times in order to define the syscall macros via 9 - * __SYSCALL. 10 - */ 18 + #ifdef __LP64__ 19 + #define __ARCH_WANT_NEW_STAT 20 + #endif /* __LP64__ */ 21 + 22 + #include <asm-generic/unistd.h> 11 23 12 24 /* 13 25 * Allows the instruction cache to be flushed from userspace. Despite RISC-V
+6 -3
arch/riscv/kernel/cpu.c
··· 64 64 65 65 static void print_isa(struct seq_file *f, const char *orig_isa) 66 66 { 67 - static const char *ext = "mafdc"; 67 + static const char *ext = "mafdcsu"; 68 68 const char *isa = orig_isa; 69 69 const char *e; 70 70 ··· 88 88 /* 89 89 * Check the rest of the ISA string for valid extensions, printing those 90 90 * we find. RISC-V ISA strings define an order, so we only print the 91 - * extension bits when they're in order. 91 + * extension bits when they're in order. Hide the supervisor (S) 92 + * extension from userspace as it's not accessible from there. 92 93 */ 93 94 for (e = ext; *e != '\0'; ++e) { 94 95 if (isa[0] == e[0]) { 95 - seq_write(f, isa, 1); 96 + if (isa[0] != 's') 97 + seq_write(f, isa, 1); 98 + 96 99 isa++; 97 100 } 98 101 }
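The loop above walks a canonical extension order and now suppresses the supervisor bit. A user-space sketch of the same filter, with `seq_write()` replaced by plain buffer writes so it can be exercised standalone:

```c
#include <string.h>

/* Sketch of print_isa()'s extension filter: echo only letters that
 * appear in the hart's ISA string, in the canonical "mafdcsu" order,
 * hiding 's' (supervisor) from user-visible output. */
static void filter_isa(const char *isa, char *out)
{
	static const char *ext = "mafdcsu";
	const char *e;

	for (e = ext; *e != '\0'; ++e) {
		if (isa[0] == e[0]) {
			if (isa[0] != 's')
				*out++ = isa[0];
			isa++;
		}
	}
	*out = '\0';
}
```

Extensions listed out of canonical order simply stop matching, which mirrors the "only print the extension bits when they're in order" comment in the hunk.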
+2 -12
arch/riscv/kernel/ftrace.c
··· 132 132 { 133 133 unsigned long return_hooker = (unsigned long)&return_to_handler; 134 134 unsigned long old; 135 - struct ftrace_graph_ent trace; 136 135 int err; 137 136 138 137 if (unlikely(atomic_read(&current->tracing_graph_pause))) ··· 143 144 */ 144 145 old = *parent; 145 146 146 - trace.func = self_addr; 147 - trace.depth = current->curr_ret_stack + 1; 148 - 149 - if (!ftrace_graph_entry(&trace)) 150 - return; 151 - 152 - err = ftrace_push_return_trace(old, self_addr, &trace.depth, 153 - frame_pointer, parent); 154 - if (err == -EBUSY) 155 - return; 156 - *parent = return_hooker; 147 + if (function_graph_enter(old, self_addr, frame_pointer, parent)) 148 + *parent = return_hooker; 157 149 } 158 150 159 151 #ifdef CONFIG_DYNAMIC_FTRACE
+10
arch/riscv/kernel/head.S
··· 44 44 amoadd.w a3, a2, (a3) 45 45 bnez a3, .Lsecondary_start 46 46 47 + /* Clear BSS for flat non-ELF images */ 48 + la a3, __bss_start 49 + la a4, __bss_stop 50 + ble a4, a3, clear_bss_done 51 + clear_bss: 52 + REG_S zero, (a3) 53 + add a3, a3, RISCV_SZPTR 54 + blt a3, a4, clear_bss 55 + clear_bss_done: 56 + 47 57 /* Save hart ID and DTB physical address */ 48 58 mv s0, a0 49 59 mv s1, a1
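Flat non-ELF boot images get no loader-side zeroing, so the assembly above must clear BSS before any static data is touched. A C rendition of the same loop (one register-sized store per iteration, empty range handled by the entry guard):

```c
#include <stdint.h>

/* Zero [start, stop) one pointer-sized word at a time, like the
 * clear_bss loop in head.S. The while-condition plays the role of
 * the "ble a4, a3, clear_bss_done" guard, so an empty or inverted
 * range is a no-op. */
static void clear_bss(uintptr_t *start, uintptr_t *stop)
{
	while (start < stop)
		*start++ = 0;
}
```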
+6 -6
arch/riscv/kernel/module.c
··· 21 21 { 22 22 if (v != (u32)v) { 23 23 pr_err("%s: value %016llx out of range for 32-bit field\n", 24 - me->name, v); 24 + me->name, (long long)v); 25 25 return -EINVAL; 26 26 } 27 27 *location = v; ··· 102 102 if (offset != (s32)offset) { 103 103 pr_err( 104 104 "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n", 105 - me->name, v, location); 105 + me->name, (long long)v, location); 106 106 return -EINVAL; 107 107 } 108 108 ··· 144 144 if (IS_ENABLED(CMODEL_MEDLOW)) { 145 145 pr_err( 146 146 "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n", 147 - me->name, v, location); 147 + me->name, (long long)v, location); 148 148 return -EINVAL; 149 149 } 150 150 ··· 188 188 } else { 189 189 pr_err( 190 190 "%s: can not generate the GOT entry for symbol = %016llx from PC = %p\n", 191 - me->name, v, location); 191 + me->name, (long long)v, location); 192 192 return -EINVAL; 193 193 } 194 194 ··· 212 212 } else { 213 213 pr_err( 214 214 "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n", 215 - me->name, v, location); 215 + me->name, (long long)v, location); 216 216 return -EINVAL; 217 217 } 218 218 } ··· 234 234 if (offset != fill_v) { 235 235 pr_err( 236 236 "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n", 237 - me->name, v, location); 237 + me->name, (long long)v, location); 238 238 return -EINVAL; 239 239 } 240 240
+1 -1
arch/riscv/kernel/vmlinux.lds.S
··· 74 74 *(.sbss*) 75 75 } 76 76 77 - BSS_SECTION(0, 0, 0) 77 + BSS_SECTION(PAGE_SIZE, PAGE_SIZE, 0) 78 78 79 79 EXCEPTION_TABLE(0x10) 80 80 NOTES
+1 -1
arch/riscv/lib/Makefile
··· 3 3 lib-y += memset.o 4 4 lib-y += uaccess.o 5 5 6 - lib-(CONFIG_64BIT) += tishift.o 6 + lib-$(CONFIG_64BIT) += tishift.o 7 7 8 8 lib-$(CONFIG_32BIT) += udivdi3.o
+1 -1
arch/s390/Makefile
··· 27 27 KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),-g) 28 28 KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO_DWARF4), $(call cc-option, -gdwarf-4,)) 29 29 UTS_MACHINE := s390x 30 - STACK_SIZE := $(if $(CONFIG_KASAN),32768,16384) 30 + STACK_SIZE := $(if $(CONFIG_KASAN),65536,16384) 31 31 CHECKFLAGS += -D__s390__ -D__s390x__ 32 32 33 33 export LD_BFD
+8 -8
arch/s390/boot/compressed/Makefile
··· 22 22 OBJECTS := $(addprefix $(obj)/,$(obj-y)) 23 23 24 24 LDFLAGS_vmlinux := --oformat $(LD_BFD) -e startup -T 25 - $(obj)/vmlinux: $(obj)/vmlinux.lds $(objtree)/arch/s390/boot/startup.a $(OBJECTS) 25 + $(obj)/vmlinux: $(obj)/vmlinux.lds $(objtree)/arch/s390/boot/startup.a $(OBJECTS) FORCE 26 26 $(call if_changed,ld) 27 27 28 - OBJCOPYFLAGS_info.bin := -O binary --only-section=.vmlinux.info 28 + OBJCOPYFLAGS_info.bin := -O binary --only-section=.vmlinux.info --set-section-flags .vmlinux.info=load 29 29 $(obj)/info.bin: vmlinux FORCE 30 30 $(call if_changed,objcopy) 31 31 ··· 46 46 suffix-$(CONFIG_KERNEL_LZO) := .lzo 47 47 suffix-$(CONFIG_KERNEL_XZ) := .xz 48 48 49 - $(obj)/vmlinux.bin.gz: $(vmlinux.bin.all-y) 49 + $(obj)/vmlinux.bin.gz: $(vmlinux.bin.all-y) FORCE 50 50 $(call if_changed,gzip) 51 - $(obj)/vmlinux.bin.bz2: $(vmlinux.bin.all-y) 51 + $(obj)/vmlinux.bin.bz2: $(vmlinux.bin.all-y) FORCE 52 52 $(call if_changed,bzip2) 53 - $(obj)/vmlinux.bin.lz4: $(vmlinux.bin.all-y) 53 + $(obj)/vmlinux.bin.lz4: $(vmlinux.bin.all-y) FORCE 54 54 $(call if_changed,lz4) 55 - $(obj)/vmlinux.bin.lzma: $(vmlinux.bin.all-y) 55 + $(obj)/vmlinux.bin.lzma: $(vmlinux.bin.all-y) FORCE 56 56 $(call if_changed,lzma) 57 - $(obj)/vmlinux.bin.lzo: $(vmlinux.bin.all-y) 57 + $(obj)/vmlinux.bin.lzo: $(vmlinux.bin.all-y) FORCE 58 58 $(call if_changed,lzo) 59 - $(obj)/vmlinux.bin.xz: $(vmlinux.bin.all-y) 59 + $(obj)/vmlinux.bin.xz: $(vmlinux.bin.all-y) FORCE 60 60 $(call if_changed,xzkern) 61 61 62 62 OBJCOPYFLAGS_piggy.o := -I binary -O elf64-s390 -B s390:64-bit --rename-section .data=.vmlinux.bin.compressed
+11 -3
arch/s390/configs/debug_defconfig
··· 64 64 CONFIG_PREEMPT=y 65 65 CONFIG_HZ_100=y 66 66 CONFIG_KEXEC_FILE=y 67 + CONFIG_EXPOLINE=y 68 + CONFIG_EXPOLINE_AUTO=y 67 69 CONFIG_MEMORY_HOTPLUG=y 68 70 CONFIG_MEMORY_HOTREMOVE=y 69 71 CONFIG_KSM=y ··· 86 84 CONFIG_HOTPLUG_PCI=y 87 85 CONFIG_HOTPLUG_PCI_S390=y 88 86 CONFIG_CHSC_SCH=y 87 + CONFIG_VFIO_AP=m 89 88 CONFIG_CRASH_DUMP=y 90 89 CONFIG_BINFMT_MISC=m 91 90 CONFIG_HIBERNATION=y 91 + CONFIG_PM_DEBUG=y 92 92 CONFIG_NET=y 93 93 CONFIG_PACKET=y 94 94 CONFIG_PACKET_DIAG=m ··· 165 161 CONFIG_NF_CT_NETLINK=m 166 162 CONFIG_NF_CT_NETLINK_TIMEOUT=m 167 163 CONFIG_NF_TABLES=m 168 - CONFIG_NFT_EXTHDR=m 169 - CONFIG_NFT_META=m 170 164 CONFIG_NFT_CT=m 171 165 CONFIG_NFT_COUNTER=m 172 166 CONFIG_NFT_LOG=m ··· 367 365 CONFIG_NET_ACT_CSUM=m 368 366 CONFIG_DNS_RESOLVER=y 369 367 CONFIG_OPENVSWITCH=m 368 + CONFIG_VSOCKETS=m 369 + CONFIG_VIRTIO_VSOCKETS=m 370 370 CONFIG_NETLINK_DIAG=m 371 371 CONFIG_CGROUP_NET_PRIO=y 372 372 CONFIG_BPF_JIT=y ··· 465 461 CONFIG_PPPOL2TP=m 466 462 CONFIG_PPP_ASYNC=m 467 463 CONFIG_PPP_SYNC_TTY=m 464 + CONFIG_ISM=m 468 465 CONFIG_INPUT_EVDEV=y 469 466 # CONFIG_INPUT_KEYBOARD is not set 470 467 # CONFIG_INPUT_MOUSE is not set ··· 491 486 CONFIG_MLX5_INFINIBAND=m 492 487 CONFIG_VFIO=m 493 488 CONFIG_VFIO_PCI=m 489 + CONFIG_VFIO_MDEV=m 490 + CONFIG_VFIO_MDEV_DEVICE=m 494 491 CONFIG_VIRTIO_PCI=m 495 492 CONFIG_VIRTIO_BALLOON=m 496 493 CONFIG_VIRTIO_INPUT=y 494 + CONFIG_S390_AP_IOMMU=y 497 495 CONFIG_EXT4_FS=y 498 496 CONFIG_EXT4_FS_POSIX_ACL=y 499 497 CONFIG_EXT4_FS_SECURITY=y ··· 623 615 CONFIG_RCU_TORTURE_TEST=m 624 616 CONFIG_RCU_CPU_STALL_TIMEOUT=300 625 617 CONFIG_NOTIFIER_ERROR_INJECTION=m 626 - CONFIG_PM_NOTIFIER_ERROR_INJECT=m 627 618 CONFIG_NETDEV_NOTIFIER_ERROR_INJECT=m 628 619 CONFIG_FAULT_INJECTION=y 629 620 CONFIG_FAILSLAB=y ··· 734 727 CONFIG_KVM=m 735 728 CONFIG_KVM_S390_UCONTROL=y 736 729 CONFIG_VHOST_NET=m 730 + CONFIG_VHOST_VSOCK=m
+11 -2
arch/s390/configs/performance_defconfig
··· 65 65 CONFIG_NUMA=y 66 66 CONFIG_HZ_100=y 67 67 CONFIG_KEXEC_FILE=y 68 + CONFIG_EXPOLINE=y 69 + CONFIG_EXPOLINE_AUTO=y 68 70 CONFIG_MEMORY_HOTPLUG=y 69 71 CONFIG_MEMORY_HOTREMOVE=y 70 72 CONFIG_KSM=y ··· 84 82 CONFIG_HOTPLUG_PCI=y 85 83 CONFIG_HOTPLUG_PCI_S390=y 86 84 CONFIG_CHSC_SCH=y 85 + CONFIG_VFIO_AP=m 87 86 CONFIG_CRASH_DUMP=y 88 87 CONFIG_BINFMT_MISC=m 89 88 CONFIG_HIBERNATION=y 89 + CONFIG_PM_DEBUG=y 90 90 CONFIG_NET=y 91 91 CONFIG_PACKET=y 92 92 CONFIG_PACKET_DIAG=m ··· 163 159 CONFIG_NF_CT_NETLINK=m 164 160 CONFIG_NF_CT_NETLINK_TIMEOUT=m 165 161 CONFIG_NF_TABLES=m 166 - CONFIG_NFT_EXTHDR=m 167 - CONFIG_NFT_META=m 168 162 CONFIG_NFT_CT=m 169 163 CONFIG_NFT_COUNTER=m 170 164 CONFIG_NFT_LOG=m ··· 364 362 CONFIG_NET_ACT_CSUM=m 365 363 CONFIG_DNS_RESOLVER=y 366 364 CONFIG_OPENVSWITCH=m 365 + CONFIG_VSOCKETS=m 366 + CONFIG_VIRTIO_VSOCKETS=m 367 367 CONFIG_NETLINK_DIAG=m 368 368 CONFIG_CGROUP_NET_PRIO=y 369 369 CONFIG_BPF_JIT=y ··· 462 458 CONFIG_PPPOL2TP=m 463 459 CONFIG_PPP_ASYNC=m 464 460 CONFIG_PPP_SYNC_TTY=m 461 + CONFIG_ISM=m 465 462 CONFIG_INPUT_EVDEV=y 466 463 # CONFIG_INPUT_KEYBOARD is not set 467 464 # CONFIG_INPUT_MOUSE is not set ··· 488 483 CONFIG_MLX5_INFINIBAND=m 489 484 CONFIG_VFIO=m 490 485 CONFIG_VFIO_PCI=m 486 + CONFIG_VFIO_MDEV=m 487 + CONFIG_VFIO_MDEV_DEVICE=m 491 488 CONFIG_VIRTIO_PCI=m 492 489 CONFIG_VIRTIO_BALLOON=m 493 490 CONFIG_VIRTIO_INPUT=y 491 + CONFIG_S390_AP_IOMMU=y 494 492 CONFIG_EXT4_FS=y 495 493 CONFIG_EXT4_FS_POSIX_ACL=y 496 494 CONFIG_EXT4_FS_SECURITY=y ··· 674 666 CONFIG_KVM=m 675 667 CONFIG_KVM_S390_UCONTROL=y 676 668 CONFIG_VHOST_NET=m 669 + CONFIG_VHOST_VSOCK=m
+41 -38
arch/s390/defconfig
··· 26 26 CONFIG_CGROUP_PERF=y 27 27 CONFIG_NAMESPACES=y 28 28 CONFIG_USER_NS=y 29 + CONFIG_CHECKPOINT_RESTORE=y 29 30 CONFIG_BLK_DEV_INITRD=y 30 31 CONFIG_EXPERT=y 31 32 # CONFIG_SYSFS_SYSCALL is not set 32 - CONFIG_CHECKPOINT_RESTORE=y 33 33 CONFIG_BPF_SYSCALL=y 34 34 CONFIG_USERFAULTFD=y 35 35 # CONFIG_COMPAT_BRK is not set 36 36 CONFIG_PROFILING=y 37 + CONFIG_LIVEPATCH=y 38 + CONFIG_NR_CPUS=256 39 + CONFIG_NUMA=y 40 + CONFIG_HZ_100=y 41 + CONFIG_KEXEC_FILE=y 42 + CONFIG_CRASH_DUMP=y 43 + CONFIG_HIBERNATION=y 44 + CONFIG_PM_DEBUG=y 45 + CONFIG_CMM=m 37 46 CONFIG_OPROFILE=y 38 47 CONFIG_KPROBES=y 39 48 CONFIG_JUMP_LABEL=y ··· 53 44 CONFIG_PARTITION_ADVANCED=y 54 45 CONFIG_IBM_PARTITION=y 55 46 CONFIG_DEFAULT_DEADLINE=y 56 - CONFIG_LIVEPATCH=y 57 - CONFIG_NR_CPUS=256 58 - CONFIG_NUMA=y 59 - CONFIG_HZ_100=y 60 - CONFIG_KEXEC_FILE=y 47 + CONFIG_BINFMT_MISC=m 61 48 CONFIG_MEMORY_HOTPLUG=y 62 49 CONFIG_MEMORY_HOTREMOVE=y 63 50 CONFIG_KSM=y ··· 65 60 CONFIG_ZSMALLOC=m 66 61 CONFIG_ZSMALLOC_STAT=y 67 62 CONFIG_IDLE_PAGE_TRACKING=y 68 - CONFIG_CRASH_DUMP=y 69 - CONFIG_BINFMT_MISC=m 70 - CONFIG_HIBERNATION=y 71 63 CONFIG_NET=y 72 64 CONFIG_PACKET=y 73 65 CONFIG_UNIX=y ··· 100 98 CONFIG_BLK_DEV_RAM=y 101 99 CONFIG_VIRTIO_BLK=y 102 100 CONFIG_SCSI=y 101 + # CONFIG_SCSI_MQ_DEFAULT is not set 103 102 CONFIG_BLK_DEV_SD=y 104 103 CONFIG_CHR_DEV_ST=y 105 104 CONFIG_BLK_DEV_SR=y ··· 134 131 CONFIG_TUN=m 135 132 CONFIG_VIRTIO_NET=y 136 133 # CONFIG_NET_VENDOR_ALACRITECH is not set 134 + # CONFIG_NET_VENDOR_AURORA is not set 137 135 # CONFIG_NET_VENDOR_CORTINA is not set 138 136 # CONFIG_NET_VENDOR_SOLARFLARE is not set 139 137 # CONFIG_NET_VENDOR_SOCIONEXT is not set ··· 161 157 CONFIG_TMPFS_POSIX_ACL=y 162 158 CONFIG_HUGETLBFS=y 163 159 # CONFIG_NETWORK_FILESYSTEMS is not set 164 - CONFIG_DEBUG_INFO=y 165 - CONFIG_DEBUG_INFO_DWARF4=y 166 - CONFIG_GDB_SCRIPTS=y 167 - CONFIG_UNUSED_SYMBOLS=y 168 - CONFIG_DEBUG_SECTION_MISMATCH=y 169 - CONFIG_DEBUG_FORCE_WEAK_PER_CPU=y 170 - 
CONFIG_MAGIC_SYSRQ=y 171 - CONFIG_DEBUG_PAGEALLOC=y 172 - CONFIG_DETECT_HUNG_TASK=y 173 - CONFIG_PANIC_ON_OOPS=y 174 - CONFIG_PROVE_LOCKING=y 175 - CONFIG_LOCK_STAT=y 176 - CONFIG_DEBUG_LOCKDEP=y 177 - CONFIG_DEBUG_ATOMIC_SLEEP=y 178 - CONFIG_DEBUG_LIST=y 179 - CONFIG_DEBUG_SG=y 180 - CONFIG_DEBUG_NOTIFIERS=y 181 - CONFIG_RCU_CPU_STALL_TIMEOUT=60 182 - CONFIG_LATENCYTOP=y 183 - CONFIG_SCHED_TRACER=y 184 - CONFIG_FTRACE_SYSCALLS=y 185 - CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y 186 - CONFIG_STACK_TRACER=y 187 - CONFIG_BLK_DEV_IO_TRACE=y 188 - CONFIG_FUNCTION_PROFILER=y 189 - # CONFIG_RUNTIME_TESTING_MENU is not set 190 - CONFIG_S390_PTDUMP=y 191 160 CONFIG_CRYPTO_CRYPTD=m 192 161 CONFIG_CRYPTO_AUTHENC=m 193 162 CONFIG_CRYPTO_TEST=m ··· 170 193 CONFIG_CRYPTO_CFB=m 171 194 CONFIG_CRYPTO_CTS=m 172 195 CONFIG_CRYPTO_LRW=m 196 + CONFIG_CRYPTO_OFB=m 173 197 CONFIG_CRYPTO_PCBC=m 174 198 CONFIG_CRYPTO_XTS=m 175 199 CONFIG_CRYPTO_CMAC=m ··· 209 231 CONFIG_CRYPTO_USER_API_SKCIPHER=m 210 232 CONFIG_CRYPTO_USER_API_RNG=m 211 233 CONFIG_ZCRYPT=m 212 - CONFIG_ZCRYPT_MULTIDEVNODES=y 213 234 CONFIG_PKEY=m 214 235 CONFIG_CRYPTO_PAES_S390=m 215 236 CONFIG_CRYPTO_SHA1_S390=m ··· 224 247 # CONFIG_XZ_DEC_ARM is not set 225 248 # CONFIG_XZ_DEC_ARMTHUMB is not set 226 249 # CONFIG_XZ_DEC_SPARC is not set 227 - CONFIG_CMM=m 250 + CONFIG_DEBUG_INFO=y 251 + CONFIG_DEBUG_INFO_DWARF4=y 252 + CONFIG_GDB_SCRIPTS=y 253 + CONFIG_UNUSED_SYMBOLS=y 254 + CONFIG_DEBUG_SECTION_MISMATCH=y 255 + CONFIG_DEBUG_FORCE_WEAK_PER_CPU=y 256 + CONFIG_MAGIC_SYSRQ=y 257 + CONFIG_DEBUG_PAGEALLOC=y 258 + CONFIG_DETECT_HUNG_TASK=y 259 + CONFIG_PANIC_ON_OOPS=y 260 + CONFIG_PROVE_LOCKING=y 261 + CONFIG_LOCK_STAT=y 262 + CONFIG_DEBUG_LOCKDEP=y 263 + CONFIG_DEBUG_ATOMIC_SLEEP=y 264 + CONFIG_DEBUG_LIST=y 265 + CONFIG_DEBUG_SG=y 266 + CONFIG_DEBUG_NOTIFIERS=y 267 + CONFIG_RCU_CPU_STALL_TIMEOUT=60 268 + CONFIG_LATENCYTOP=y 269 + CONFIG_SCHED_TRACER=y 270 + CONFIG_FTRACE_SYSCALLS=y 271 + CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y 272 
+ CONFIG_STACK_TRACER=y 273 + CONFIG_BLK_DEV_IO_TRACE=y 274 + CONFIG_FUNCTION_PROFILER=y 275 + # CONFIG_RUNTIME_TESTING_MENU is not set 276 + CONFIG_S390_PTDUMP=y
-5
arch/s390/include/asm/mmu_context.h
··· 46 46 mm->context.asce_limit = STACK_TOP_MAX; 47 47 mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH | 48 48 _ASCE_USER_BITS | _ASCE_TYPE_REGION3; 49 - /* pgd_alloc() did not account this pud */ 50 - mm_inc_nr_puds(mm); 51 49 break; 52 50 case -PAGE_SIZE: 53 51 /* forked 5-level task, set new asce with new_mm->pgd */ ··· 61 63 /* forked 2-level compat task, set new asce with new mm->pgd */ 62 64 mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH | 63 65 _ASCE_USER_BITS | _ASCE_TYPE_SEGMENT; 64 - /* pgd_alloc() did not account this pmd */ 65 - mm_inc_nr_pmds(mm); 66 - mm_inc_nr_puds(mm); 67 66 } 68 67 crst_table_init((unsigned long *) mm->pgd, pgd_entry_type(mm)); 69 68 return 0;
+3 -3
arch/s390/include/asm/pgalloc.h
··· 36 36 37 37 static inline unsigned long pgd_entry_type(struct mm_struct *mm) 38 38 { 39 - if (mm->context.asce_limit <= _REGION3_SIZE) 39 + if (mm_pmd_folded(mm)) 40 40 return _SEGMENT_ENTRY_EMPTY; 41 - if (mm->context.asce_limit <= _REGION2_SIZE) 41 + if (mm_pud_folded(mm)) 42 42 return _REGION3_ENTRY_EMPTY; 43 - if (mm->context.asce_limit <= _REGION1_SIZE) 43 + if (mm_p4d_folded(mm)) 44 44 return _REGION2_ENTRY_EMPTY; 45 45 return _REGION1_ENTRY_EMPTY; 46 46 }
+18
arch/s390/include/asm/pgtable.h
··· 493 493 _REGION_ENTRY_PROTECT | \ 494 494 _REGION_ENTRY_NOEXEC) 495 495 496 + static inline bool mm_p4d_folded(struct mm_struct *mm) 497 + { 498 + return mm->context.asce_limit <= _REGION1_SIZE; 499 + } 500 + #define mm_p4d_folded(mm) mm_p4d_folded(mm) 501 + 502 + static inline bool mm_pud_folded(struct mm_struct *mm) 503 + { 504 + return mm->context.asce_limit <= _REGION2_SIZE; 505 + } 506 + #define mm_pud_folded(mm) mm_pud_folded(mm) 507 + 508 + static inline bool mm_pmd_folded(struct mm_struct *mm) 509 + { 510 + return mm->context.asce_limit <= _REGION3_SIZE; 511 + } 512 + #define mm_pmd_folded(mm) mm_pmd_folded(mm) 513 + 496 514 static inline int mm_has_pgste(struct mm_struct *mm) 497 515 { 498 516 #ifdef CONFIG_PGSTE
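These predicates let generic mm code ask whether a page-table level is folded for a given task instead of comparing `asce_limit` against region sizes at every call site (the pgalloc.h and tlb.h hunks below switch to them). An illustrative model — the 2G/4T/8P limits used here are assumptions for the sketch; the kernel derives the real values from `_REGION*_SHIFT`:

```c
#include <stdbool.h>

/* Assumed s390 address-space limits for 2-, 3- and 4-level tables. */
#define REGION3_SIZE (1ULL << 31) /* 2G: top table is a segment table */
#define REGION2_SIZE (1ULL << 42) /* 4T: top table is region-third    */
#define REGION1_SIZE (1ULL << 53) /* 8P: top table is region-second   */

struct mm { unsigned long long asce_limit; };

/* A level is "folded" when the address space is too small for the
 * hardware to walk a table at that level. */
static bool pmd_folded(const struct mm *mm) { return mm->asce_limit <= REGION3_SIZE; }
static bool pud_folded(const struct mm *mm) { return mm->asce_limit <= REGION2_SIZE; }
static bool p4d_folded(const struct mm *mm) { return mm->asce_limit <= REGION1_SIZE; }
```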
+2 -2
arch/s390/include/asm/processor.h
··· 236 236 return sp; 237 237 } 238 238 239 - static __no_sanitize_address_or_inline unsigned short stap(void) 239 + static __no_kasan_or_inline unsigned short stap(void) 240 240 { 241 241 unsigned short cpu_address; 242 242 ··· 330 330 * Set PSW mask to specified value, while leaving the 331 331 * PSW addr pointing to the next instruction. 332 332 */ 333 - static __no_sanitize_address_or_inline void __load_psw_mask(unsigned long mask) 333 + static __no_kasan_or_inline void __load_psw_mask(unsigned long mask) 334 334 { 335 335 unsigned long addr; 336 336 psw_t psw;
+1 -1
arch/s390/include/asm/thread_info.h
··· 14 14 * General size of kernel stacks 15 15 */ 16 16 #ifdef CONFIG_KASAN 17 - #define THREAD_SIZE_ORDER 3 17 + #define THREAD_SIZE_ORDER 4 18 18 #else 19 19 #define THREAD_SIZE_ORDER 2 20 20 #endif
+3 -3
arch/s390/include/asm/tlb.h
··· 136 136 static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, 137 137 unsigned long address) 138 138 { 139 - if (tlb->mm->context.asce_limit <= _REGION3_SIZE) 139 + if (mm_pmd_folded(tlb->mm)) 140 140 return; 141 141 pgtable_pmd_page_dtor(virt_to_page(pmd)); 142 142 tlb_remove_table(tlb, pmd); ··· 152 152 static inline void p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, 153 153 unsigned long address) 154 154 { 155 - if (tlb->mm->context.asce_limit <= _REGION1_SIZE) 155 + if (mm_p4d_folded(tlb->mm)) 156 156 return; 157 157 tlb_remove_table(tlb, p4d); 158 158 } ··· 167 167 static inline void pud_free_tlb(struct mmu_gather *tlb, pud_t *pud, 168 168 unsigned long address) 169 169 { 170 - if (tlb->mm->context.asce_limit <= _REGION2_SIZE) 170 + if (mm_pud_folded(tlb->mm)) 171 171 return; 172 172 tlb_remove_table(tlb, pud); 173 173 }
+3 -3
arch/s390/kernel/entry.S
··· 236 236 stmg %r6,%r15,__SF_GPRS(%r15) # store gprs of prev task 237 237 lghi %r4,__TASK_stack 238 238 lghi %r1,__TASK_thread 239 - lg %r5,0(%r4,%r3) # start of kernel stack of next 239 + llill %r5,STACK_INIT 240 240 stg %r15,__THREAD_ksp(%r1,%r2) # store kernel stack of prev 241 - lgr %r15,%r5 242 - aghi %r15,STACK_INIT # end of kernel stack of next 241 + lg %r15,0(%r4,%r3) # start of kernel stack of next 242 + agr %r15,%r5 # end of kernel stack of next 243 243 stg %r3,__LC_CURRENT # store task struct of next 244 244 stg %r15,__LC_KERNEL_STACK # store end of kernel stack 245 245 lg %r15,__THREAD_ksp(%r1,%r3) # load kernel stack of next
+2 -11
arch/s390/kernel/ftrace.c
··· 203 203 */ 204 204 unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip) 205 205 { 206 - struct ftrace_graph_ent trace; 207 - 208 206 if (unlikely(ftrace_graph_is_dead())) 209 207 goto out; 210 208 if (unlikely(atomic_read(&current->tracing_graph_pause))) 211 209 goto out; 212 210 ip -= MCOUNT_INSN_SIZE; 213 - trace.func = ip; 214 - trace.depth = current->curr_ret_stack + 1; 215 - /* Only trace if the calling function expects to. */ 216 - if (!ftrace_graph_entry(&trace)) 217 - goto out; 218 - if (ftrace_push_return_trace(parent, ip, &trace.depth, 0, 219 - NULL) == -EBUSY) 220 - goto out; 221 - parent = (unsigned long) return_to_handler; 211 + if (!function_graph_enter(parent, ip, 0, NULL)) 212 + parent = (unsigned long) return_to_handler; 222 213 out: 223 214 return parent; 224 215 }
+3 -1
arch/s390/kernel/perf_cpum_cf.c
··· 346 346 break; 347 347 348 348 case PERF_TYPE_HARDWARE: 349 + if (is_sampling_event(event)) /* No sampling support */ 350 + return -ENOENT; 349 351 ev = attr->config; 350 352 /* Count user space (problem-state) only */ 351 353 if (!attr->exclude_user && attr->exclude_kernel) { ··· 375 373 return -ENOENT; 376 374 377 375 if (ev > PERF_CPUM_CF_MAX_CTR) 378 - return -EINVAL; 376 + return -ENOENT; 379 377 380 378 /* Obtain the counter set to which the specified counter belongs */ 381 379 set = get_counter_set(ev);
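The error-code change above matters because the perf core treats `-ENOENT` from a PMU's `event_init` as "not my event, try the next PMU", while any other error aborts event creation outright. A hypothetical mock of that dispatch loop (PMU names and the config cutoffs are invented for the sketch):

```c
#include <errno.h>
#include <stddef.h>

typedef int (*event_init_fn)(unsigned long config);

static int strict_pmu_init(unsigned long config)
{
	if (config == 7)
		return -EINVAL;              /* recognized but malformed: hard reject */
	return config < 4 ? 0 : -ENOENT;     /* out of range: let the next PMU try */
}

static int fallback_pmu_init(unsigned long config)
{
	return 0;                            /* accepts anything */
}

/* Mock of the core's PMU walk: -ENOENT falls through, anything
 * else (success or hard failure) stops the search. */
static int init_event(unsigned long config)
{
	event_init_fn pmus[] = { strict_pmu_init, fallback_pmu_init, NULL };
	int i, err = -ENOENT;

	for (i = 0; pmus[i]; i++) {
		err = pmus[i](config);
		if (err != -ENOENT)
			break;
	}
	return err;
}
```

Under this contract, returning `-EINVAL` for an out-of-range counter (as the old code did) would have blocked other PMUs from claiming the event.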
+28 -5
arch/s390/kernel/perf_cpum_sf.c
··· 1842 1842 CPUMF_EVENT_ATTR(SF, SF_CYCLES_BASIC, PERF_EVENT_CPUM_SF); 1843 1843 CPUMF_EVENT_ATTR(SF, SF_CYCLES_BASIC_DIAG, PERF_EVENT_CPUM_SF_DIAG); 1844 1844 1845 - static struct attribute *cpumsf_pmu_events_attr[] = { 1846 - CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC), 1847 - NULL, 1848 - NULL, 1845 + /* Attribute list for CPU_SF. 1846 + * 1847 + * The availablitiy depends on the CPU_MF sampling facility authorization 1848 + * for basic + diagnositic samples. This is determined at initialization 1849 + * time by the sampling facility device driver. 1850 + * If the authorization for basic samples is turned off, it should be 1851 + * also turned off for diagnostic sampling. 1852 + * 1853 + * During initialization of the device driver, check the authorization 1854 + * level for diagnostic sampling and installs the attribute 1855 + * file for diagnostic sampling if necessary. 1856 + * 1857 + * For now install a placeholder to reference all possible attributes: 1858 + * SF_CYCLES_BASIC and SF_CYCLES_BASIC_DIAG. 1859 + * Add another entry for the final NULL pointer. 1860 + */ 1861 + enum { 1862 + SF_CYCLES_BASIC_ATTR_IDX = 0, 1863 + SF_CYCLES_BASIC_DIAG_ATTR_IDX, 1864 + SF_CYCLES_ATTR_MAX 1865 + }; 1866 + 1867 + static struct attribute *cpumsf_pmu_events_attr[SF_CYCLES_ATTR_MAX + 1] = { 1868 + [SF_CYCLES_BASIC_ATTR_IDX] = CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC) 1849 1869 }; 1850 1870 1851 1871 PMU_FORMAT_ATTR(event, "config:0-63"); ··· 2060 2040 2061 2041 if (si.ad) { 2062 2042 sfb_set_limits(CPUM_SF_MIN_SDB, CPUM_SF_MAX_SDB); 2063 - cpumsf_pmu_events_attr[1] = 2043 + /* Sampling of diagnostic data authorized, 2044 + * install event into attribute list of PMU device. 2045 + */ 2046 + cpumsf_pmu_events_attr[SF_CYCLES_BASIC_DIAG_ATTR_IDX] = 2064 2047 CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC_DIAG); 2065 2048 } 2066 2049
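The attribute-array rework above is a general sysfs pattern: a NULL-terminated array sized for every possible entry, with optional slots filled at init time by named index instead of by magic position. A small sketch with illustrative names:

```c
#include <stddef.h>

/* Slots are named by enum index; the array carries one extra
 * element so the final slot is always the NULL terminator that
 * consumers stop at. */
enum { ATTR_BASIC = 0, ATTR_DIAG, ATTR_MAX };

static const char *attrs[ATTR_MAX + 1] = {
	[ATTR_BASIC] = "cycles_basic",
	/* ATTR_DIAG stays NULL unless authorization succeeds */
};

static int count_attrs(const char **a)
{
	int n = 0;

	while (a[n])
		n++;
	return n;
}

/* Init-time hook standing in for the si.ad authorization check. */
static void authorize_diag(void)
{
	attrs[ATTR_DIAG] = "cycles_basic_diag";
}
```

Because the optional entry occupies a fixed indexed slot before the terminator, leaving it NULL hides it without shifting any other entry.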
+3 -3
arch/s390/kernel/vdso32/Makefile
··· 37 37 $(obj)/vdso32_wrapper.o : $(obj)/vdso32.so 38 38 39 39 # link rule for the .so file, .lds has to be first 40 - $(obj)/vdso32.so.dbg: $(src)/vdso32.lds $(obj-vdso32) 40 + $(obj)/vdso32.so.dbg: $(src)/vdso32.lds $(obj-vdso32) FORCE 41 41 $(call if_changed,vdso32ld) 42 42 43 43 # strip rule for the .so file ··· 46 46 $(call if_changed,objcopy) 47 47 48 48 # assembly rules for the .S files 49 - $(obj-vdso32): %.o: %.S 49 + $(obj-vdso32): %.o: %.S FORCE 50 50 $(call if_changed_dep,vdso32as) 51 51 52 52 # actual build commands 53 53 quiet_cmd_vdso32ld = VDSO32L $@ 54 - cmd_vdso32ld = $(CC) $(c_flags) -Wl,-T $^ -o $@ 54 + cmd_vdso32ld = $(CC) $(c_flags) -Wl,-T $(filter %.lds %.o,$^) -o $@ 55 55 quiet_cmd_vdso32as = VDSO32A $@ 56 56 cmd_vdso32as = $(CC) $(a_flags) -c -o $@ $< 57 57
+3 -3
arch/s390/kernel/vdso64/Makefile
··· 37 37 $(obj)/vdso64_wrapper.o : $(obj)/vdso64.so 38 38 39 39 # link rule for the .so file, .lds has to be first 40 - $(obj)/vdso64.so.dbg: $(src)/vdso64.lds $(obj-vdso64) 40 + $(obj)/vdso64.so.dbg: $(src)/vdso64.lds $(obj-vdso64) FORCE 41 41 $(call if_changed,vdso64ld) 42 42 43 43 # strip rule for the .so file ··· 46 46 $(call if_changed,objcopy) 47 47 48 48 # assembly rules for the .S files 49 - $(obj-vdso64): %.o: %.S 49 + $(obj-vdso64): %.o: %.S FORCE 50 50 $(call if_changed_dep,vdso64as) 51 51 52 52 # actual build commands 53 53 quiet_cmd_vdso64ld = VDSO64L $@ 54 - cmd_vdso64ld = $(CC) $(c_flags) -Wl,-T $^ -o $@ 54 + cmd_vdso64ld = $(CC) $(c_flags) -Wl,-T $(filter %.lds %.o,$^) -o $@ 55 55 quiet_cmd_vdso64as = VDSO64A $@ 56 56 cmd_vdso64as = $(CC) $(a_flags) -c -o $@ $< 57 57
+2 -2
arch/s390/kernel/vmlinux.lds.S
··· 154 154 * uncompressed image info used by the decompressor 155 155 * it should match struct vmlinux_info 156 156 */ 157 - .vmlinux.info 0 : { 157 + .vmlinux.info 0 (INFO) : { 158 158 QUAD(_stext) /* default_lma */ 159 159 QUAD(startup_continue) /* entry */ 160 160 QUAD(__bss_start - _stext) /* image_size */ 161 161 QUAD(__bss_stop - __bss_start) /* bss_size */ 162 162 QUAD(__boot_data_start) /* bootdata_off */ 163 163 QUAD(__boot_data_end - __boot_data_start) /* bootdata_size */ 164 - } 164 + } :NONE 165 165 166 166 /* Debugging sections. */ 167 167 STABS_DEBUG
+2
arch/s390/mm/pgalloc.c
··· 101 101 mm->context.asce_limit = _REGION1_SIZE; 102 102 mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH | 103 103 _ASCE_USER_BITS | _ASCE_TYPE_REGION2; 104 + mm_inc_nr_puds(mm); 104 105 } else { 105 106 crst_table_init(table, _REGION1_ENTRY_EMPTY); 106 107 pgd_populate(mm, (pgd_t *) table, (p4d_t *) pgd); ··· 131 130 } 132 131 133 132 pgd = mm->pgd; 133 + mm_dec_nr_pmds(mm); 134 134 mm->pgd = (pgd_t *) (pgd_val(*pgd) & _REGION_ENTRY_ORIGIN); 135 135 mm->context.asce_limit = _REGION3_SIZE; 136 136 mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
+1
arch/s390/numa/numa.c
··· 53 53 { 54 54 return mode->distance ? mode->distance(a, b) : 0; 55 55 } 56 + EXPORT_SYMBOL(__node_distance); 56 57 57 58 int numa_debug_enabled; 58 59
+2 -14
arch/sh/kernel/ftrace.c
··· 321 321 void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr) 322 322 { 323 323 unsigned long old; 324 - int faulted, err; 325 - struct ftrace_graph_ent trace; 324 + int faulted; 326 325 unsigned long return_hooker = (unsigned long)&return_to_handler; 327 326 328 327 if (unlikely(ftrace_graph_is_dead())) ··· 364 365 return; 365 366 } 366 367 367 - err = ftrace_push_return_trace(old, self_addr, &trace.depth, 0, NULL); 368 - if (err == -EBUSY) { 368 + if (function_graph_enter(old, self_addr, 0, NULL)) 369 369 __raw_writel(old, parent); 370 - return; 371 - } 372 - 373 - trace.func = self_addr; 374 - 375 - /* Only trace if the calling function expects to */ 376 - if (!ftrace_graph_entry(&trace)) { 377 - current->curr_ret_stack--; 378 - __raw_writel(old, parent); 379 - } 380 370 } 381 371 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+1 -10
arch/sparc/kernel/ftrace.c
··· 126 126 unsigned long frame_pointer) 127 127 { 128 128 unsigned long return_hooker = (unsigned long) &return_to_handler; 129 - struct ftrace_graph_ent trace; 130 129 131 130 if (unlikely(atomic_read(&current->tracing_graph_pause))) 132 131 return parent + 8UL; 133 132 134 - trace.func = self_addr; 135 - trace.depth = current->curr_ret_stack + 1; 136 - 137 - /* Only trace if the calling function expects to */ 138 - if (!ftrace_graph_entry(&trace)) 139 - return parent + 8UL; 140 - 141 - if (ftrace_push_return_trace(parent, self_addr, &trace.depth, 142 - frame_pointer, NULL) == -EBUSY) 133 + if (function_graph_enter(parent, self_addr, frame_pointer, NULL)) 143 134 return parent + 8UL; 144 135 145 136 return return_hooker;
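The riscv, s390, sh and sparc hunks in this merge all collapse the same open-coded sequence (build a `ftrace_graph_ent`, call `ftrace_graph_entry()`, push the return stack) into the new core helper. A mock of the arch-side contract — `function_graph_enter()` returns 0 when the entry was recorded, and only then may the arch redirect the saved return address:

```c
/* Test knob standing in for the core's accept/reject decision. */
static int graph_enter_ok;

/* Mock of the new core helper: 0 means the entry was recorded. */
static int function_graph_enter_mock(unsigned long parent, unsigned long self)
{
	(void)parent;
	(void)self;
	return graph_enter_ok ? 0 : -1;
}

/* Shape of every converted arch hook: leave the caller's return
 * path untouched unless the core accepted the entry. */
static unsigned long prepare_return(unsigned long parent, unsigned long self,
				    unsigned long return_hooker)
{
	if (function_graph_enter_mock(parent, self))
		return parent;
	return return_hooker;
}
```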
+68 -29
arch/sparc/net/bpf_jit_comp_64.c
··· 791 791 } 792 792 793 793 /* Just skip the save instruction and the ctx register move. */ 794 - #define BPF_TAILCALL_PROLOGUE_SKIP 16 794 + #define BPF_TAILCALL_PROLOGUE_SKIP 32 795 795 #define BPF_TAILCALL_CNT_SP_OFF (STACK_BIAS + 128) 796 796 797 797 static void build_prologue(struct jit_ctx *ctx) ··· 824 824 const u8 vfp = bpf2sparc[BPF_REG_FP]; 825 825 826 826 emit(ADD | IMMED | RS1(FP) | S13(STACK_BIAS) | RD(vfp), ctx); 827 + } else { 828 + emit_nop(ctx); 827 829 } 828 830 829 831 emit_reg_move(I0, O0, ctx); 832 + emit_reg_move(I1, O1, ctx); 833 + emit_reg_move(I2, O2, ctx); 834 + emit_reg_move(I3, O3, ctx); 835 + emit_reg_move(I4, O4, ctx); 830 836 /* If you add anything here, adjust BPF_TAILCALL_PROLOGUE_SKIP above. */ 831 837 } 832 838 ··· 1276 1270 const u8 tmp2 = bpf2sparc[TMP_REG_2]; 1277 1271 u32 opcode = 0, rs2; 1278 1272 1273 + if (insn->dst_reg == BPF_REG_FP) 1274 + ctx->saw_frame_pointer = true; 1275 + 1279 1276 ctx->tmp_2_used = true; 1280 1277 emit_loadimm(imm, tmp2, ctx); 1281 1278 ··· 1317 1308 const u8 tmp = bpf2sparc[TMP_REG_1]; 1318 1309 u32 opcode = 0, rs2; 1319 1310 1311 + if (insn->dst_reg == BPF_REG_FP) 1312 + ctx->saw_frame_pointer = true; 1313 + 1320 1314 switch (BPF_SIZE(code)) { 1321 1315 case BPF_W: 1322 1316 opcode = ST32; ··· 1352 1340 const u8 tmp2 = bpf2sparc[TMP_REG_2]; 1353 1341 const u8 tmp3 = bpf2sparc[TMP_REG_3]; 1354 1342 1343 + if (insn->dst_reg == BPF_REG_FP) 1344 + ctx->saw_frame_pointer = true; 1345 + 1355 1346 ctx->tmp_1_used = true; 1356 1347 ctx->tmp_2_used = true; 1357 1348 ctx->tmp_3_used = true; ··· 1374 1359 const u8 tmp = bpf2sparc[TMP_REG_1]; 1375 1360 const u8 tmp2 = bpf2sparc[TMP_REG_2]; 1376 1361 const u8 tmp3 = bpf2sparc[TMP_REG_3]; 1362 + 1363 + if (insn->dst_reg == BPF_REG_FP) 1364 + ctx->saw_frame_pointer = true; 1377 1365 1378 1366 ctx->tmp_1_used = true; 1379 1367 ctx->tmp_2_used = true; ··· 1443 1425 struct bpf_prog *tmp, *orig_prog = prog; 1444 1426 struct sparc64_jit_data *jit_data; 1445 1427 
struct bpf_binary_header *header; 1428 + u32 prev_image_size, image_size; 1446 1429 bool tmp_blinded = false; 1447 1430 bool extra_pass = false; 1448 1431 struct jit_ctx ctx; 1449 - u32 image_size; 1450 1432 u8 *image_ptr; 1451 - int pass; 1433 + int pass, i; 1452 1434 1453 1435 if (!prog->jit_requested) 1454 1436 return orig_prog; ··· 1479 1461 header = jit_data->header; 1480 1462 extra_pass = true; 1481 1463 image_size = sizeof(u32) * ctx.idx; 1464 + prev_image_size = image_size; 1465 + pass = 1; 1482 1466 goto skip_init_ctx; 1483 1467 } 1484 1468 1485 1469 memset(&ctx, 0, sizeof(ctx)); 1486 1470 ctx.prog = prog; 1487 1471 1488 - ctx.offset = kcalloc(prog->len, sizeof(unsigned int), GFP_KERNEL); 1472 + ctx.offset = kmalloc_array(prog->len, sizeof(unsigned int), GFP_KERNEL); 1489 1473 if (ctx.offset == NULL) { 1490 1474 prog = orig_prog; 1491 1475 goto out_off; 1492 1476 } 1493 1477 1494 - /* Fake pass to detect features used, and get an accurate assessment 1495 - * of what the final image size will be. 1478 + /* Longest sequence emitted is for bswap32, 12 instructions. Pre-cook 1479 + * the offset array so that we converge faster. 1496 1480 */ 1497 - if (build_body(&ctx)) { 1498 - prog = orig_prog; 1499 - goto out_off; 1481 + for (i = 0; i < prog->len; i++) 1482 + ctx.offset[i] = i * (12 * 4); 1483 + 1484 + prev_image_size = ~0U; 1485 + for (pass = 1; pass < 40; pass++) { 1486 + ctx.idx = 0; 1487 + 1488 + build_prologue(&ctx); 1489 + if (build_body(&ctx)) { 1490 + prog = orig_prog; 1491 + goto out_off; 1492 + } 1493 + build_epilogue(&ctx); 1494 + 1495 + if (bpf_jit_enable > 1) 1496 + pr_info("Pass %d: size = %u, seen = [%c%c%c%c%c%c]\n", pass, 1497 + ctx.idx * 4, 1498 + ctx.tmp_1_used ? '1' : ' ', 1499 + ctx.tmp_2_used ? '2' : ' ', 1500 + ctx.tmp_3_used ? '3' : ' ', 1501 + ctx.saw_frame_pointer ? 'F' : ' ', 1502 + ctx.saw_call ? 'C' : ' ', 1503 + ctx.saw_tail_call ? 
'T' : ' '); 1504 + 1505 + if (ctx.idx * 4 == prev_image_size) 1506 + break; 1507 + prev_image_size = ctx.idx * 4; 1508 + cond_resched(); 1500 1509 } 1501 - build_prologue(&ctx); 1502 - build_epilogue(&ctx); 1503 1510 1504 1511 /* Now we know the actual image size. */ 1505 1512 image_size = sizeof(u32) * ctx.idx; ··· 1537 1494 1538 1495 ctx.image = (u32 *)image_ptr; 1539 1496 skip_init_ctx: 1540 - for (pass = 1; pass < 3; pass++) { 1541 - ctx.idx = 0; 1497 + ctx.idx = 0; 1542 1498 1543 - build_prologue(&ctx); 1499 + build_prologue(&ctx); 1544 1500 1545 - if (build_body(&ctx)) { 1546 - bpf_jit_binary_free(header); 1547 - prog = orig_prog; 1548 - goto out_off; 1549 - } 1501 + if (build_body(&ctx)) { 1502 + bpf_jit_binary_free(header); 1503 + prog = orig_prog; 1504 + goto out_off; 1505 + } 1550 1506 1551 - build_epilogue(&ctx); 1507 + build_epilogue(&ctx); 1552 1508 1553 - if (bpf_jit_enable > 1) 1554 - pr_info("Pass %d: shrink = %d, seen = [%c%c%c%c%c%c]\n", pass, 1555 - image_size - (ctx.idx * 4), 1556 - ctx.tmp_1_used ? '1' : ' ', 1557 - ctx.tmp_2_used ? '2' : ' ', 1558 - ctx.tmp_3_used ? '3' : ' ', 1559 - ctx.saw_frame_pointer ? 'F' : ' ', 1560 - ctx.saw_call ? 'C' : ' ', 1561 - ctx.saw_tail_call ? 'T' : ' '); 1509 + if (ctx.idx * 4 != prev_image_size) { 1510 + pr_err("bpf_jit: Failed to converge, prev_size=%u size=%d\n", 1511 + prev_image_size, ctx.idx * 4); 1512 + bpf_jit_binary_free(header); 1513 + prog = orig_prog; 1514 + goto out_off; 1562 1515 } 1563 1516 1564 1517 if (bpf_jit_enable > 1)
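The JIT rework above replaces a single sizing pass with a fixed-point iteration: instruction encodings depend on operand offsets, offsets depend on encodings, so it seeds the offset table pessimistically (12 instructions per insn) and re-emits until the image size stops changing, bailing out if 40 passes do not converge. A toy model of the same idea — sizes and the 4-entry jump table are invented for the sketch:

```c
#define N 4

/* Each toy "insn" jumps to another insn; a jump encodes in 1 unit
 * when the displacement fits in [-3, 3], else 2 units. */
static const int target[N] = { 3, 0, 3, 1 };

/* One emission pass: compute each insn's offset and the total size
 * from the previous pass's offset estimates. */
static int pass_size(const int *off, int *new_off)
{
	int i, size = 0;

	for (i = 0; i < N; i++) {
		int disp = off[target[i]] - size;

		new_off[i] = size;
		size += (disp >= -3 && disp <= 3) ? 1 : 2;
	}
	return size;
}

/* Iterate from a pessimistic seed until the size is stable. Starting
 * wide means offsets only shrink, so a stable size is a fixed point. */
static int converge(void)
{
	int off[N], next[N], i, pass, prev = -1, size = 0;

	for (i = 0; i < N; i++)
		off[i] = i * 2;	/* seed: assume every insn is widest */
	for (pass = 0; pass < 40; pass++) {
		size = pass_size(off, next);
		for (i = 0; i < N; i++)
			off[i] = next[i];
		if (size == prev)
			break;
		prev = size;
	}
	return size;
}
```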
+8 -4
arch/um/drivers/ubd_kern.c
··· 1305 1305 io_req->fds[0] = dev->cow.fd; 1306 1306 else 1307 1307 io_req->fds[0] = dev->fd; 1308 + io_req->error = 0; 1308 1309 1309 1310 if (req_op(req) == REQ_OP_FLUSH) { 1310 1311 io_req->op = UBD_FLUSH; ··· 1314 1313 io_req->cow_offset = -1; 1315 1314 io_req->offset = off; 1316 1315 io_req->length = bvec->bv_len; 1317 - io_req->error = 0; 1318 1316 io_req->sector_mask = 0; 1319 - 1320 1317 io_req->op = rq_data_dir(req) == READ ? UBD_READ : UBD_WRITE; 1321 1318 io_req->offsets[0] = 0; 1322 1319 io_req->offsets[1] = dev->cow.data_offset; ··· 1340 1341 static blk_status_t ubd_queue_rq(struct blk_mq_hw_ctx *hctx, 1341 1342 const struct blk_mq_queue_data *bd) 1342 1343 { 1344 + struct ubd *ubd_dev = hctx->queue->queuedata; 1343 1345 struct request *req = bd->rq; 1344 1346 int ret = 0; 1345 1347 1346 1348 blk_mq_start_request(req); 1349 + 1350 + spin_lock_irq(&ubd_dev->lock); 1347 1351 1348 1352 if (req_op(req) == REQ_OP_FLUSH) { 1349 1353 ret = ubd_queue_one_vec(hctx, req, 0, NULL); ··· 1363 1361 } 1364 1362 } 1365 1363 out: 1366 - if (ret < 0) { 1364 + spin_unlock_irq(&ubd_dev->lock); 1365 + 1366 + if (ret < 0) 1367 1367 blk_mq_requeue_request(req, true); 1368 - } 1368 + 1369 1369 return BLK_STS_OK; 1370 1370 } 1371 1371
+1 -12
arch/x86/Kconfig
··· 444 444 branches. Requires a compiler with -mindirect-branch=thunk-extern 445 445 support for full protection. The kernel may run slower. 446 446 447 - Without compiler support, at least indirect branches in assembler 448 - code are eliminated. Since this includes the syscall entry path, 449 - it is not entirely pointless. 450 - 451 447 config INTEL_RDT 452 448 bool "Intel Resource Director Technology support" 453 449 depends on X86 && CPU_SUP_INTEL ··· 521 525 bool "ScaleMP vSMP" 522 526 select HYPERVISOR_GUEST 523 527 select PARAVIRT 524 - select PARAVIRT_XXL 525 528 depends on X86_64 && PCI 526 529 depends on X86_EXTENDED_PLATFORM 527 530 depends on SMP ··· 1000 1005 to the kernel image. 1001 1006 1002 1007 config SCHED_SMT 1003 - bool "SMT (Hyperthreading) scheduler support" 1004 - depends on SMP 1005 - ---help--- 1006 - SMT scheduler support improves the CPU scheduler's decision making 1007 - when dealing with Intel Pentium 4 chips with HyperThreading at a 1008 - cost of slightly increased overhead in some places. If unsure say 1009 - N here. 1008 + def_bool y if SMP 1010 1009 1011 1010 config SCHED_MC 1012 1011 def_bool y
+4 -5
arch/x86/Makefile
··· 213 213 KBUILD_LDFLAGS += $(call ld-option, -z max-page-size=0x200000) 214 214 endif 215 215 216 - # Speed up the build 217 - KBUILD_CFLAGS += -pipe 218 216 # Workaround for a gcc prelease that unfortunately was shipped in a suse release 219 217 KBUILD_CFLAGS += -Wno-sign-compare 220 218 # ··· 220 222 221 223 # Avoid indirect branches in kernel to deal with Spectre 222 224 ifdef CONFIG_RETPOLINE 223 - ifneq ($(RETPOLINE_CFLAGS),) 224 - KBUILD_CFLAGS += $(RETPOLINE_CFLAGS) -DRETPOLINE 225 + ifeq ($(RETPOLINE_CFLAGS),) 226 + $(error You are building kernel with non-retpoline compiler, please update your compiler.) 225 227 endif 228 + KBUILD_CFLAGS += $(RETPOLINE_CFLAGS) 226 229 endif 227 230 228 231 archscripts: scripts_basic ··· 238 239 archmacros: 239 240 $(Q)$(MAKE) $(build)=arch/x86/kernel arch/x86/kernel/macros.s 240 241 241 - ASM_MACRO_FLAGS = -Wa,arch/x86/kernel/macros.s -Wa,- 242 + ASM_MACRO_FLAGS = -Wa,arch/x86/kernel/macros.s 242 243 export ASM_MACRO_FLAGS 243 244 KBUILD_CFLAGS += $(ASM_MACRO_FLAGS) 244 245
+1 -5
arch/x86/boot/header.S
··· 300 300 # Part 2 of the header, from the old setup.S 301 301 302 302 .ascii "HdrS" # header signature 303 - .word 0x020e # header version number (>= 0x0105) 303 + .word 0x020d # header version number (>= 0x0105) 304 304 # or else old loadlin-1.5 will fail) 305 305 .globl realmode_swtch 306 306 realmode_swtch: .word 0, 0 # default_switch, SETUPSEG ··· 557 557 558 558 init_size: .long INIT_SIZE # kernel initialization size 559 559 handover_offset: .long 0 # Filled in by build.c 560 - 561 - acpi_rsdp_addr: .quad 0 # 64-bit physical pointer to the 562 - # ACPI RSDP table, added with 563 - # version 2.14 564 560 565 561 # End of setup header ##################################################### 566 562
-20
arch/x86/events/core.c
··· 438 438 if (config == -1LL) 439 439 return -EINVAL; 440 440 441 - /* 442 - * Branch tracing: 443 - */ 444 - if (attr->config == PERF_COUNT_HW_BRANCH_INSTRUCTIONS && 445 - !attr->freq && hwc->sample_period == 1) { 446 - /* BTS is not supported by this architecture. */ 447 - if (!x86_pmu.bts_active) 448 - return -EOPNOTSUPP; 449 - 450 - /* BTS is currently only allowed for user-mode. */ 451 - if (!attr->exclude_kernel) 452 - return -EOPNOTSUPP; 453 - 454 - /* disallow bts if conflicting events are present */ 455 - if (x86_add_exclusive(x86_lbr_exclusive_lbr)) 456 - return -EBUSY; 457 - 458 - event->destroy = hw_perf_lbr_event_destroy; 459 - } 460 - 461 441 hwc->config |= config; 462 442 463 443 return 0;
+52 -16
arch/x86/events/intel/core.c
··· 2306 2306 return handled; 2307 2307 } 2308 2308 2309 - static bool disable_counter_freezing; 2309 + static bool disable_counter_freezing = true; 2310 2310 static int __init intel_perf_counter_freezing_setup(char *s) 2311 2311 { 2312 - disable_counter_freezing = true; 2313 - pr_info("Intel PMU Counter freezing feature disabled\n"); 2312 + bool res; 2313 + 2314 + if (kstrtobool(s, &res)) 2315 + return -EINVAL; 2316 + 2317 + disable_counter_freezing = !res; 2314 2318 return 1; 2315 2319 } 2316 - __setup("disable_counter_freezing", intel_perf_counter_freezing_setup); 2320 + __setup("perf_v4_pmi=", intel_perf_counter_freezing_setup); 2317 2321 2318 2322 /* 2319 2323 * Simplified handler for Arch Perfmon v4: ··· 2474 2470 static struct event_constraint * 2475 2471 intel_bts_constraints(struct perf_event *event) 2476 2472 { 2477 - struct hw_perf_event *hwc = &event->hw; 2478 - unsigned int hw_event, bts_event; 2479 - 2480 - if (event->attr.freq) 2481 - return NULL; 2482 - 2483 - hw_event = hwc->config & INTEL_ARCH_EVENT_MASK; 2484 - bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS); 2485 - 2486 - if (unlikely(hw_event == bts_event && hwc->sample_period == 1)) 2473 + if (unlikely(intel_pmu_has_bts(event))) 2487 2474 return &bts_constraint; 2488 2475 2489 2476 return NULL; ··· 3093 3098 return flags; 3094 3099 } 3095 3100 3101 + static int intel_pmu_bts_config(struct perf_event *event) 3102 + { 3103 + struct perf_event_attr *attr = &event->attr; 3104 + 3105 + if (unlikely(intel_pmu_has_bts(event))) { 3106 + /* BTS is not supported by this architecture. */ 3107 + if (!x86_pmu.bts_active) 3108 + return -EOPNOTSUPP; 3109 + 3110 + /* BTS is currently only allowed for user-mode. */ 3111 + if (!attr->exclude_kernel) 3112 + return -EOPNOTSUPP; 3113 + 3114 + /* BTS is not allowed for precise events. 
*/ 3115 + if (attr->precise_ip) 3116 + return -EOPNOTSUPP; 3117 + 3118 + /* disallow bts if conflicting events are present */ 3119 + if (x86_add_exclusive(x86_lbr_exclusive_lbr)) 3120 + return -EBUSY; 3121 + 3122 + event->destroy = hw_perf_lbr_event_destroy; 3123 + } 3124 + 3125 + return 0; 3126 + } 3127 + 3128 + static int core_pmu_hw_config(struct perf_event *event) 3129 + { 3130 + int ret = x86_pmu_hw_config(event); 3131 + 3132 + if (ret) 3133 + return ret; 3134 + 3135 + return intel_pmu_bts_config(event); 3136 + } 3137 + 3096 3138 static int intel_pmu_hw_config(struct perf_event *event) 3097 3139 { 3098 3140 int ret = x86_pmu_hw_config(event); 3099 3141 3142 + if (ret) 3143 + return ret; 3144 + 3145 + ret = intel_pmu_bts_config(event); 3100 3146 if (ret) 3101 3147 return ret; 3102 3148 ··· 3163 3127 /* 3164 3128 * BTS is set up earlier in this path, so don't account twice 3165 3129 */ 3166 - if (!intel_pmu_has_bts(event)) { 3130 + if (!unlikely(intel_pmu_has_bts(event))) { 3167 3131 /* disallow lbr if conflicting events are present */ 3168 3132 if (x86_add_exclusive(x86_lbr_exclusive_lbr)) 3169 3133 return -EBUSY; ··· 3632 3596 .enable_all = core_pmu_enable_all, 3633 3597 .enable = core_pmu_enable_event, 3634 3598 .disable = x86_pmu_disable_event, 3635 - .hw_config = x86_pmu_hw_config, 3599 + .hw_config = core_pmu_hw_config, 3636 3600 .schedule_events = x86_schedule_events, 3637 3601 .eventsel = MSR_ARCH_PERFMON_EVENTSEL0, 3638 3602 .perfctr = MSR_ARCH_PERFMON_PERFCTR0,
+25 -8
arch/x86/events/intel/uncore.h
··· 129 129 struct intel_uncore_extra_reg shared_regs[0]; 130 130 }; 131 131 132 - #define UNCORE_BOX_FLAG_INITIATED 0 133 - #define UNCORE_BOX_FLAG_CTL_OFFS8 1 /* event config registers are 8-byte apart */ 132 + /* CFL uncore 8th cbox MSRs */ 133 + #define CFL_UNC_CBO_7_PERFEVTSEL0 0xf70 134 + #define CFL_UNC_CBO_7_PER_CTR0 0xf76 135 + 136 + #define UNCORE_BOX_FLAG_INITIATED 0 137 + /* event config registers are 8-byte apart */ 138 + #define UNCORE_BOX_FLAG_CTL_OFFS8 1 139 + /* CFL 8th CBOX has different MSR space */ 140 + #define UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS 2 134 141 135 142 struct uncore_event_desc { 136 143 struct kobj_attribute attr; ··· 304 297 static inline 305 298 unsigned uncore_msr_event_ctl(struct intel_uncore_box *box, int idx) 306 299 { 307 - return box->pmu->type->event_ctl + 308 - (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) + 309 - uncore_msr_box_offset(box); 300 + if (test_bit(UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS, &box->flags)) { 301 + return CFL_UNC_CBO_7_PERFEVTSEL0 + 302 + (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx); 303 + } else { 304 + return box->pmu->type->event_ctl + 305 + (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) + 306 + uncore_msr_box_offset(box); 307 + } 310 308 } 311 309 312 310 static inline 313 311 unsigned uncore_msr_perf_ctr(struct intel_uncore_box *box, int idx) 314 312 { 315 - return box->pmu->type->perf_ctr + 316 - (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) + 317 - uncore_msr_box_offset(box); 313 + if (test_bit(UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS, &box->flags)) { 314 + return CFL_UNC_CBO_7_PER_CTR0 + 315 + (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx); 316 + } else { 317 + return box->pmu->type->perf_ctr + 318 + (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) + 319 + uncore_msr_box_offset(box); 320 + } 318 321 } 319 322 320 323 static inline
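The two helpers above now pick between MSR layouts at runtime based on the new box flag. The same address arithmetic in isolation — the box offset and `pair_ctr_ctl` scaling are folded into plain parameters for the sketch, and the base/offset values used in the test are arbitrary:

```c
#include <assert.h>

/* Constants added by the patch: the CFL 8th cbox lives in its own MSR
 * range instead of base + box offset. */
#define CFL_UNC_CBO_7_PERFEVTSEL0 0xf70
#define CFL_UNC_CBO_7_PER_CTR0    0xf76

static unsigned int event_ctl_msr(int cfl8_cbox, unsigned int base,
				  unsigned int box_offset, int idx)
{
	if (cfl8_cbox)
		return CFL_UNC_CBO_7_PERFEVTSEL0 + idx;
	return base + idx + box_offset;
}

static unsigned int perf_ctr_msr(int cfl8_cbox, unsigned int base,
				 unsigned int box_offset, int idx)
{
	if (cfl8_cbox)
		return CFL_UNC_CBO_7_PER_CTR0 + idx;
	return base + idx + box_offset;
}
```

The flag itself is set once at init time (see the `pmu_idx == 7` check in the uncore_snb.c hunk below), so the per-access cost is a single `test_bit()`.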
+119 -2
arch/x86/events/intel/uncore_snb.c
··· 15 15 #define PCI_DEVICE_ID_INTEL_SKL_HQ_IMC 0x1910 16 16 #define PCI_DEVICE_ID_INTEL_SKL_SD_IMC 0x190f 17 17 #define PCI_DEVICE_ID_INTEL_SKL_SQ_IMC 0x191f 18 + #define PCI_DEVICE_ID_INTEL_KBL_Y_IMC 0x590c 19 + #define PCI_DEVICE_ID_INTEL_KBL_U_IMC 0x5904 20 + #define PCI_DEVICE_ID_INTEL_KBL_UQ_IMC 0x5914 21 + #define PCI_DEVICE_ID_INTEL_KBL_SD_IMC 0x590f 22 + #define PCI_DEVICE_ID_INTEL_KBL_SQ_IMC 0x591f 23 + #define PCI_DEVICE_ID_INTEL_CFL_2U_IMC 0x3ecc 24 + #define PCI_DEVICE_ID_INTEL_CFL_4U_IMC 0x3ed0 25 + #define PCI_DEVICE_ID_INTEL_CFL_4H_IMC 0x3e10 26 + #define PCI_DEVICE_ID_INTEL_CFL_6H_IMC 0x3ec4 27 + #define PCI_DEVICE_ID_INTEL_CFL_2S_D_IMC 0x3e0f 28 + #define PCI_DEVICE_ID_INTEL_CFL_4S_D_IMC 0x3e1f 29 + #define PCI_DEVICE_ID_INTEL_CFL_6S_D_IMC 0x3ec2 30 + #define PCI_DEVICE_ID_INTEL_CFL_8S_D_IMC 0x3e30 31 + #define PCI_DEVICE_ID_INTEL_CFL_4S_W_IMC 0x3e18 32 + #define PCI_DEVICE_ID_INTEL_CFL_6S_W_IMC 0x3ec6 33 + #define PCI_DEVICE_ID_INTEL_CFL_8S_W_IMC 0x3e31 34 + #define PCI_DEVICE_ID_INTEL_CFL_4S_S_IMC 0x3e33 35 + #define PCI_DEVICE_ID_INTEL_CFL_6S_S_IMC 0x3eca 36 + #define PCI_DEVICE_ID_INTEL_CFL_8S_S_IMC 0x3e32 18 37 19 38 /* SNB event control */ 20 39 #define SNB_UNC_CTL_EV_SEL_MASK 0x000000ff ··· 221 202 wrmsrl(SKL_UNC_PERF_GLOBAL_CTL, 222 203 SNB_UNC_GLOBAL_CTL_EN | SKL_UNC_GLOBAL_CTL_CORE_ALL); 223 204 } 205 + 206 + /* The 8th CBOX has different MSR space */ 207 + if (box->pmu->pmu_idx == 7) 208 + __set_bit(UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS, &box->flags); 224 209 } 225 210 226 211 static void skl_uncore_msr_enable_box(struct intel_uncore_box *box) ··· 251 228 static struct intel_uncore_type skl_uncore_cbox = { 252 229 .name = "cbox", 253 230 .num_counters = 4, 254 - .num_boxes = 5, 231 + .num_boxes = 8, 255 232 .perf_ctr_bits = 44, 256 233 .fixed_ctr_bits = 48, 257 234 .perf_ctr = SNB_UNC_CBO_0_PER_CTR0, ··· 592 569 PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_SQ_IMC), 593 570 .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 
0), 594 571 }, 595 - 572 + { /* IMC */ 573 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_Y_IMC), 574 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 575 + }, 576 + { /* IMC */ 577 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_U_IMC), 578 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 579 + }, 580 + { /* IMC */ 581 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_UQ_IMC), 582 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 583 + }, 584 + { /* IMC */ 585 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_SD_IMC), 586 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 587 + }, 588 + { /* IMC */ 589 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_SQ_IMC), 590 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 591 + }, 592 + { /* IMC */ 593 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_2U_IMC), 594 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 595 + }, 596 + { /* IMC */ 597 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4U_IMC), 598 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 599 + }, 600 + { /* IMC */ 601 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4H_IMC), 602 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 603 + }, 604 + { /* IMC */ 605 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6H_IMC), 606 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 607 + }, 608 + { /* IMC */ 609 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_2S_D_IMC), 610 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 611 + }, 612 + { /* IMC */ 613 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_D_IMC), 614 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 615 + }, 616 + { /* IMC */ 617 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_D_IMC), 618 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 619 + }, 620 + { /* IMC */ 
621 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_D_IMC), 622 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 623 + }, 624 + { /* IMC */ 625 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_W_IMC), 626 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 627 + }, 628 + { /* IMC */ 629 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_W_IMC), 630 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 631 + }, 632 + { /* IMC */ 633 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_W_IMC), 634 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 635 + }, 636 + { /* IMC */ 637 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_S_IMC), 638 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 639 + }, 640 + { /* IMC */ 641 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_S_IMC), 642 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 643 + }, 644 + { /* IMC */ 645 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_S_IMC), 646 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 647 + }, 596 648 { /* end: all zeroes */ }, 597 649 }; 598 650 ··· 716 618 IMC_DEV(SKL_HQ_IMC, &skl_uncore_pci_driver), /* 6th Gen Core H Quad Core */ 717 619 IMC_DEV(SKL_SD_IMC, &skl_uncore_pci_driver), /* 6th Gen Core S Dual Core */ 718 620 IMC_DEV(SKL_SQ_IMC, &skl_uncore_pci_driver), /* 6th Gen Core S Quad Core */ 621 + IMC_DEV(KBL_Y_IMC, &skl_uncore_pci_driver), /* 7th Gen Core Y */ 622 + IMC_DEV(KBL_U_IMC, &skl_uncore_pci_driver), /* 7th Gen Core U */ 623 + IMC_DEV(KBL_UQ_IMC, &skl_uncore_pci_driver), /* 7th Gen Core U Quad Core */ 624 + IMC_DEV(KBL_SD_IMC, &skl_uncore_pci_driver), /* 7th Gen Core S Dual Core */ 625 + IMC_DEV(KBL_SQ_IMC, &skl_uncore_pci_driver), /* 7th Gen Core S Quad Core */ 626 + IMC_DEV(CFL_2U_IMC, &skl_uncore_pci_driver), /* 8th Gen Core U 2 Cores */ 627 + IMC_DEV(CFL_4U_IMC, &skl_uncore_pci_driver), /* 8th Gen Core U 4 Cores */ 628 
+ IMC_DEV(CFL_4H_IMC, &skl_uncore_pci_driver), /* 8th Gen Core H 4 Cores */ 629 + IMC_DEV(CFL_6H_IMC, &skl_uncore_pci_driver), /* 8th Gen Core H 6 Cores */ 630 + IMC_DEV(CFL_2S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 2 Cores Desktop */ 631 + IMC_DEV(CFL_4S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Desktop */ 632 + IMC_DEV(CFL_6S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Desktop */ 633 + IMC_DEV(CFL_8S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Desktop */ 634 + IMC_DEV(CFL_4S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Work Station */ 635 + IMC_DEV(CFL_6S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Work Station */ 636 + IMC_DEV(CFL_8S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Work Station */ 637 + IMC_DEV(CFL_4S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Server */ 638 + IMC_DEV(CFL_6S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Server */ 639 + IMC_DEV(CFL_8S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Server */ 719 640 { /* end marker */ } 720 641 }; 721 642
+9 -4
arch/x86/events/perf_event.h
··· 859 859 860 860 static inline bool intel_pmu_has_bts(struct perf_event *event) 861 861 { 862 - if (event->attr.config == PERF_COUNT_HW_BRANCH_INSTRUCTIONS && 863 - !event->attr.freq && event->hw.sample_period == 1) 864 - return true; 862 + struct hw_perf_event *hwc = &event->hw; 863 + unsigned int hw_event, bts_event; 865 864 866 - return false; 865 + if (event->attr.freq) 866 + return false; 867 + 868 + hw_event = hwc->config & INTEL_ARCH_EVENT_MASK; 869 + bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS); 870 + 871 + return hw_event == bts_event && hwc->sample_period == 1; 867 872 } 868 873 869 874 int intel_pmu_save_and_restart(struct perf_event *event);
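The reworked `intel_pmu_has_bts()` above now compares the masked event code against the architectural branch-instructions event rather than the raw `attr.config`. A stand-alone rendering of the predicate — 0x00c4 is the usual Intel event map value for `PERF_COUNT_HW_BRANCH_INSTRUCTIONS`, and the mask mirrors `INTEL_ARCH_EVENT_MASK` (eventsel | umask):

```c
#include <assert.h>
#include <stdbool.h>

#define ARCH_EVENT_MASK 0xffffull	/* eventsel byte | umask byte */
#define BTS_EVENT       0x00c4ull	/* branch instructions retired */

/* BTS is in use when the event is the branch-instructions event
 * programmed with a fixed sample period of 1 (sample every branch)
 * and not in frequency mode. */
static bool has_bts(unsigned long long config, bool freq,
		    unsigned long long sample_period)
{
	if (freq)
		return false;
	return (config & ARCH_EVENT_MASK) == BTS_EVENT && sample_period == 1;
}
```

Masking first is the point of the change: extra config bits (edge, inv, cmask, etc.) no longer defeat the detection, which is what lets the BTS constraint and the new `intel_pmu_bts_config()` agree on the same events.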
+1 -1
arch/x86/include/asm/fpu/internal.h
··· 226 226 "3: movl $-2,%[err]\n\t" \ 227 227 "jmp 2b\n\t" \ 228 228 ".popsection\n\t" \ 229 - _ASM_EXTABLE_UA(1b, 3b) \ 229 + _ASM_EXTABLE(1b, 3b) \ 230 230 : [err] "=r" (err) \ 231 231 : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \ 232 232 : "memory")
+2 -1
arch/x86/include/asm/kvm_host.h
··· 1094 1094 bool (*has_wbinvd_exit)(void); 1095 1095 1096 1096 u64 (*read_l1_tsc_offset)(struct kvm_vcpu *vcpu); 1097 - void (*write_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset); 1097 + /* Returns actual tsc_offset set in active VMCS */ 1098 + u64 (*write_l1_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset); 1098 1099 1099 1100 void (*get_exit_info)(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2); 1100 1101
+2
arch/x86/include/asm/mce.h
··· 221 221 222 222 int mce_available(struct cpuinfo_x86 *c); 223 223 bool mce_is_memory_error(struct mce *m); 224 + bool mce_is_correctable(struct mce *m); 225 + int mce_usable_address(struct mce *m); 224 226 225 227 DECLARE_PER_CPU(unsigned, mce_exception_count); 226 228 DECLARE_PER_CPU(unsigned, mce_poll_count);
+1 -1
arch/x86/include/asm/mshyperv.h
··· 232 232 : "cc"); 233 233 } 234 234 #endif 235 - return hv_status; 235 + return hv_status; 236 236 } 237 237 238 238 /*
+3 -2
arch/x86/include/asm/msr-index.h
··· 41 41 42 42 #define MSR_IA32_SPEC_CTRL 0x00000048 /* Speculation Control */ 43 43 #define SPEC_CTRL_IBRS (1 << 0) /* Indirect Branch Restricted Speculation */ 44 - #define SPEC_CTRL_STIBP (1 << 1) /* Single Thread Indirect Branch Predictors */ 44 + #define SPEC_CTRL_STIBP_SHIFT 1 /* Single Thread Indirect Branch Predictor (STIBP) bit */ 45 + #define SPEC_CTRL_STIBP (1 << SPEC_CTRL_STIBP_SHIFT) /* STIBP mask */ 45 46 #define SPEC_CTRL_SSBD_SHIFT 2 /* Speculative Store Bypass Disable bit */ 46 - #define SPEC_CTRL_SSBD (1 << SPEC_CTRL_SSBD_SHIFT) /* Speculative Store Bypass Disable */ 47 + #define SPEC_CTRL_SSBD (1 << SPEC_CTRL_SSBD_SHIFT) /* Speculative Store Bypass Disable */ 47 48 48 49 #define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */ 49 50 #define PRED_CMD_IBPB (1 << 0) /* Indirect Branch Prediction Barrier */
+20 -6
arch/x86/include/asm/nospec-branch.h
··· 3 3 #ifndef _ASM_X86_NOSPEC_BRANCH_H_ 4 4 #define _ASM_X86_NOSPEC_BRANCH_H_ 5 5 6 + #include <linux/static_key.h> 7 + 6 8 #include <asm/alternative.h> 7 9 #include <asm/alternative-asm.h> 8 10 #include <asm/cpufeatures.h> ··· 164 162 _ASM_PTR " 999b\n\t" \ 165 163 ".popsection\n\t" 166 164 167 - #if defined(CONFIG_X86_64) && defined(RETPOLINE) 165 + #ifdef CONFIG_RETPOLINE 166 + #ifdef CONFIG_X86_64 168 167 169 168 /* 170 - * Since the inline asm uses the %V modifier which is only in newer GCC, 171 - * the 64-bit one is dependent on RETPOLINE not CONFIG_RETPOLINE. 169 + * Inline asm uses the %V modifier which is only in newer GCC 170 + * which is ensured when CONFIG_RETPOLINE is defined. 172 171 */ 173 172 # define CALL_NOSPEC \ 174 173 ANNOTATE_NOSPEC_ALTERNATIVE \ ··· 184 181 X86_FEATURE_RETPOLINE_AMD) 185 182 # define THUNK_TARGET(addr) [thunk_target] "r" (addr) 186 183 187 - #elif defined(CONFIG_X86_32) && defined(CONFIG_RETPOLINE) 184 + #else /* CONFIG_X86_32 */ 188 185 /* 189 186 * For i386 we use the original ret-equivalent retpoline, because 190 187 * otherwise we'll run out of registers. 
We don't care about CET ··· 214 211 X86_FEATURE_RETPOLINE_AMD) 215 212 216 213 # define THUNK_TARGET(addr) [thunk_target] "rm" (addr) 214 + #endif 217 215 #else /* No retpoline for C / inline asm */ 218 216 # define CALL_NOSPEC "call *%[thunk_target]\n" 219 217 # define THUNK_TARGET(addr) [thunk_target] "rm" (addr) ··· 223 219 /* The Spectre V2 mitigation variants */ 224 220 enum spectre_v2_mitigation { 225 221 SPECTRE_V2_NONE, 226 - SPECTRE_V2_RETPOLINE_MINIMAL, 227 - SPECTRE_V2_RETPOLINE_MINIMAL_AMD, 228 222 SPECTRE_V2_RETPOLINE_GENERIC, 229 223 SPECTRE_V2_RETPOLINE_AMD, 230 224 SPECTRE_V2_IBRS_ENHANCED, 225 + }; 226 + 227 + /* The indirect branch speculation control variants */ 228 + enum spectre_v2_user_mitigation { 229 + SPECTRE_V2_USER_NONE, 230 + SPECTRE_V2_USER_STRICT, 231 + SPECTRE_V2_USER_PRCTL, 232 + SPECTRE_V2_USER_SECCOMP, 231 233 }; 232 234 233 235 /* The Speculative Store Bypass disable variants */ ··· 312 302 X86_FEATURE_USE_IBRS_FW); \ 313 303 preempt_enable(); \ 314 304 } while (0) 305 + 306 + DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp); 307 + DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb); 308 + DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb); 315 309 316 310 #endif /* __ASSEMBLY__ */ 317 311
+7 -5
arch/x86/include/asm/page_64_types.h
··· 33 33 34 34 /* 35 35 * Set __PAGE_OFFSET to the most negative possible address + 36 - * PGDIR_SIZE*16 (pgd slot 272). The gap is to allow a space for a 37 - * hypervisor to fit. Choosing 16 slots here is arbitrary, but it's 38 - * what Xen requires. 36 + * PGDIR_SIZE*17 (pgd slot 273). 37 + * 38 + * The gap is to allow a space for LDT remap for PTI (1 pgd slot) and space for 39 + * a hypervisor (16 slots). Choosing 16 slots for a hypervisor is arbitrary, 40 + * but it's what Xen requires. 39 41 */ 40 - #define __PAGE_OFFSET_BASE_L5 _AC(0xff10000000000000, UL) 41 - #define __PAGE_OFFSET_BASE_L4 _AC(0xffff880000000000, UL) 42 + #define __PAGE_OFFSET_BASE_L5 _AC(0xff11000000000000, UL) 43 + #define __PAGE_OFFSET_BASE_L4 _AC(0xffff888000000000, UL) 42 44 43 45 #ifdef CONFIG_DYNAMIC_MEMORY_LAYOUT 44 46 #define __PAGE_OFFSET page_offset_base
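The new `__PAGE_OFFSET_BASE` values follow directly from the rule in the updated comment: most negative canonical address plus 17 PGD slots (1 for the PTI LDT remap + 16 for a hypervisor). Rebuilding them from that rule, with `PGDIR_SHIFT` = 48 for 5-level paging (57-bit VA) and 39 for 4-level paging (48-bit VA):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t page_offset_base(unsigned int pgdir_shift,
				 unsigned int va_bits)
{
	/* Most negative address: all upper bits set, sign-extended. */
	uint64_t most_negative = ~0ull << (va_bits - 1);

	return most_negative + 17ull * (1ull << pgdir_shift);
}
```

Plugging in both geometries reproduces the constants in the hunk (`0xff11000000000000` and `0xffff888000000000`), i.e. the old slot-272 bases shifted up by exactly one PGD slot to make room for the LDT remap.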
+1 -3
arch/x86/include/asm/pgtable_64_types.h
··· 111 111 */ 112 112 #define MAXMEM (1UL << MAX_PHYSMEM_BITS) 113 113 114 - #define LDT_PGD_ENTRY_L4 -3UL 115 - #define LDT_PGD_ENTRY_L5 -112UL 116 - #define LDT_PGD_ENTRY (pgtable_l5_enabled() ? LDT_PGD_ENTRY_L5 : LDT_PGD_ENTRY_L4) 114 + #define LDT_PGD_ENTRY -240UL 117 115 #define LDT_BASE_ADDR (LDT_PGD_ENTRY << PGDIR_SHIFT) 118 116 #define LDT_END_ADDR (LDT_BASE_ADDR + PGDIR_SIZE) 119 117
+8 -5
arch/x86/include/asm/qspinlock.h
··· 13 13 #define queued_fetch_set_pending_acquire queued_fetch_set_pending_acquire 14 14 static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lock) 15 15 { 16 - u32 val = 0; 16 + u32 val; 17 17 18 - if (GEN_BINARY_RMWcc(LOCK_PREFIX "btsl", lock->val.counter, c, 19 - "I", _Q_PENDING_OFFSET)) 20 - val |= _Q_PENDING_VAL; 21 - 18 + /* 19 + * We can't use GEN_BINARY_RMWcc() inside an if() stmt because asm goto 20 + * and CONFIG_PROFILE_ALL_BRANCHES=y results in a label inside a 21 + * statement expression, which GCC doesn't like. 22 + */ 23 + val = GEN_BINARY_RMWcc(LOCK_PREFIX "btsl", lock->val.counter, c, 24 + "I", _Q_PENDING_OFFSET) * _Q_PENDING_VAL; 22 25 val |= atomic_read(&lock->val) & ~_Q_PENDING_MASK; 23 26 24 27 return val;
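The qspinlock change above avoids putting the asm-goto result inside an `if ()` by multiplying the boolean outcome of the test-and-set by `_Q_PENDING_VAL`. A plain-C rendering of that branchless transformation — the bit layout mirrors `_Q_PENDING_VAL = 1 << 8`, and the helper stands in for the atomic `btsl`:

```c
#include <assert.h>

#define Q_PENDING_VAL  (1u << 8)
#define Q_PENDING_MASK (0xffu << 8)

static unsigned int fetch_set_pending(unsigned int *lockval)
{
	unsigned int old = *lockval;
	/* btsl returns the previous value of the bit: 0 or 1. */
	unsigned int was_set = (old & Q_PENDING_VAL) != 0;

	*lockval |= Q_PENDING_VAL;

	/* Branchless: was_set * Q_PENDING_VAL is 0 or the pending bit,
	 * merged with the non-pending bits -- no if () needed. */
	return was_set * Q_PENDING_VAL | (old & ~Q_PENDING_MASK);
}

static unsigned int demo_first_call(void)
{
	unsigned int v = 1;	/* locked, pending clear */

	return fetch_set_pending(&v);
}

static unsigned int demo_second_call(void)
{
	unsigned int v = 1;

	fetch_set_pending(&v);		/* first caller sets pending */
	return fetch_set_pending(&v);	/* second caller observes it */
}
```

The behavior is unchanged from the old `if (…) val |= _Q_PENDING_VAL;` form; only the control flow that tripped GCC under `CONFIG_PROFILE_ALL_BRANCHES` is gone.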
+14 -6
arch/x86/include/asm/spec-ctrl.h
··· 53 53 return (tifn & _TIF_SSBD) >> (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT); 54 54 } 55 55 56 + static inline u64 stibp_tif_to_spec_ctrl(u64 tifn) 57 + { 58 + BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT); 59 + return (tifn & _TIF_SPEC_IB) >> (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT); 60 + } 61 + 56 62 static inline unsigned long ssbd_spec_ctrl_to_tif(u64 spec_ctrl) 57 63 { 58 64 BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT); 59 65 return (spec_ctrl & SPEC_CTRL_SSBD) << (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT); 66 + } 67 + 68 + static inline unsigned long stibp_spec_ctrl_to_tif(u64 spec_ctrl) 69 + { 70 + BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT); 71 + return (spec_ctrl & SPEC_CTRL_STIBP) << (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT); 60 72 } 61 73 62 74 static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn) ··· 82 70 static inline void speculative_store_bypass_ht_init(void) { } 83 71 #endif 84 72 85 - extern void speculative_store_bypass_update(unsigned long tif); 86 - 87 - static inline void speculative_store_bypass_update_current(void) 88 - { 89 - speculative_store_bypass_update(current_thread_info()->flags); 90 - } 73 + extern void speculation_ctrl_update(unsigned long tif); 74 + extern void speculation_ctrl_update_current(void); 91 75 92 76 #endif
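The new STIBP helpers above convert a single flag between two register layouts with a shift instead of a branch, which is why the `BUILD_BUG_ON` insists the TIF bit sits above the SPEC_CTRL bit. The same arithmetic as ordinary C, with the bit positions the patch uses (`TIF_SPEC_IB` = 9, `SPEC_CTRL_STIBP_SHIFT` = 1):

```c
#include <assert.h>
#include <stdint.h>

#define TIF_SPEC_IB           9
#define SPEC_CTRL_STIBP_SHIFT 1
#define SPEC_CTRL_STIBP       (1ull << SPEC_CTRL_STIBP_SHIFT)

static uint64_t stibp_tif_to_spec_ctrl(uint64_t tifn)
{
	/* Moves TIF bit 9 down to SPEC_CTRL bit 1; zero stays zero. */
	return (tifn & (1ull << TIF_SPEC_IB)) >>
	       (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
}

static uint64_t stibp_spec_ctrl_to_tif(uint64_t spec_ctrl)
{
	/* Inverse direction: SPEC_CTRL bit 1 back up to TIF bit 9. */
	return (spec_ctrl & SPEC_CTRL_STIBP) <<
	       (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
}
```

Keeping the conversion branchless matters because it runs in `__speculation_ctrl_update()` on every affected context switch.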
-3
arch/x86/include/asm/switch_to.h
··· 11 11 12 12 __visible struct task_struct *__switch_to(struct task_struct *prev, 13 13 struct task_struct *next); 14 - struct tss_struct; 15 - void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p, 16 - struct tss_struct *tss); 17 14 18 15 /* This runs runs on the previous thread's stack. */ 19 16 static inline void prepare_switch_to(struct task_struct *next)
+17 -3
arch/x86/include/asm/thread_info.h
··· 79 79 #define TIF_SIGPENDING 2 /* signal pending */ 80 80 #define TIF_NEED_RESCHED 3 /* rescheduling necessary */ 81 81 #define TIF_SINGLESTEP 4 /* reenable singlestep on user return*/ 82 - #define TIF_SSBD 5 /* Reduced data speculation */ 82 + #define TIF_SSBD 5 /* Speculative store bypass disable */ 83 83 #define TIF_SYSCALL_EMU 6 /* syscall emulation active */ 84 84 #define TIF_SYSCALL_AUDIT 7 /* syscall auditing active */ 85 85 #define TIF_SECCOMP 8 /* secure computing */ 86 + #define TIF_SPEC_IB 9 /* Indirect branch speculation mitigation */ 87 + #define TIF_SPEC_FORCE_UPDATE 10 /* Force speculation MSR update in context switch */ 86 88 #define TIF_USER_RETURN_NOTIFY 11 /* notify kernel of userspace return */ 87 89 #define TIF_UPROBE 12 /* breakpointed or singlestepping */ 88 90 #define TIF_PATCH_PENDING 13 /* pending live patching update */ ··· 112 110 #define _TIF_SYSCALL_EMU (1 << TIF_SYSCALL_EMU) 113 111 #define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT) 114 112 #define _TIF_SECCOMP (1 << TIF_SECCOMP) 113 + #define _TIF_SPEC_IB (1 << TIF_SPEC_IB) 114 + #define _TIF_SPEC_FORCE_UPDATE (1 << TIF_SPEC_FORCE_UPDATE) 115 115 #define _TIF_USER_RETURN_NOTIFY (1 << TIF_USER_RETURN_NOTIFY) 116 116 #define _TIF_UPROBE (1 << TIF_UPROBE) 117 117 #define _TIF_PATCH_PENDING (1 << TIF_PATCH_PENDING) ··· 149 145 _TIF_FSCHECK) 150 146 151 147 /* flags to check in __switch_to() */ 152 - #define _TIF_WORK_CTXSW \ 153 - (_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD) 148 + #define _TIF_WORK_CTXSW_BASE \ 149 + (_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP| \ 150 + _TIF_SSBD | _TIF_SPEC_FORCE_UPDATE) 151 + 152 + /* 153 + * Avoid calls to __switch_to_xtra() on UP as STIBP is not evaluated. 
154 + */ 155 + #ifdef CONFIG_SMP 156 + # define _TIF_WORK_CTXSW (_TIF_WORK_CTXSW_BASE | _TIF_SPEC_IB) 157 + #else 158 + # define _TIF_WORK_CTXSW (_TIF_WORK_CTXSW_BASE) 159 + #endif 154 160 155 161 #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY) 156 162 #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
+6 -2
arch/x86/include/asm/tlbflush.h
··· 169 169 170 170 #define LOADED_MM_SWITCHING ((struct mm_struct *)1) 171 171 172 + /* Last user mm for optimizing IBPB */ 173 + union { 174 + struct mm_struct *last_user_mm; 175 + unsigned long last_user_mm_ibpb; 176 + }; 177 + 172 178 u16 loaded_mm_asid; 173 179 u16 next_asid; 174 - /* last user mm's ctx id */ 175 - u64 last_ctx_id; 176 180 177 181 /* 178 182 * We can be in one of several states:
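The union added above lets the same word be read either as the last user `mm_struct *` or as that pointer with state packed into its low bit: since `mm_struct` pointers are aligned, bit 0 is free to carry "IBPB still needed". A sketch of that tagged-pointer encoding — the helper names are illustrative; the kernel's actual encode/decode lives in arch/x86/mm/tlb.c:

```c
#include <assert.h>
#include <stdint.h>

#define LAST_USER_MM_IBPB 0x1ul	/* tag bit in the pointer word */

static unsigned long mm_to_ibpb_word(void *mm, int need_ibpb)
{
	/* Pointer alignment guarantees bit 0 is clear, so OR-ing the
	 * tag in is lossless. */
	return (unsigned long)(uintptr_t)mm |
	       (need_ibpb ? LAST_USER_MM_IBPB : 0);
}

static void *ibpb_word_to_mm(unsigned long word)
{
	return (void *)(uintptr_t)(word & ~LAST_USER_MM_IBPB);
}
```

Compared to the old `last_ctx_id` scheme, one word now answers both "is this a different mm?" and "was IBPB already issued?" in the switch_mm() fast path.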
-2
arch/x86/include/asm/x86_init.h
··· 303 303 extern void x86_init_uint_noop(unsigned int unused); 304 304 extern bool x86_pnpbios_disabled(void); 305 305 306 - void x86_verify_bootdata_version(void); 307 - 308 306 #endif
+31 -4
arch/x86/include/asm/xen/page.h
··· 9 9 #include <linux/mm.h> 10 10 #include <linux/device.h> 11 11 12 - #include <linux/uaccess.h> 12 + #include <asm/extable.h> 13 13 #include <asm/page.h> 14 14 #include <asm/pgtable.h> 15 15 ··· 93 93 */ 94 94 static inline int xen_safe_write_ulong(unsigned long *addr, unsigned long val) 95 95 { 96 - return __put_user(val, (unsigned long __user *)addr); 96 + int ret = 0; 97 + 98 + asm volatile("1: mov %[val], %[ptr]\n" 99 + "2:\n" 100 + ".section .fixup, \"ax\"\n" 101 + "3: sub $1, %[ret]\n" 102 + " jmp 2b\n" 103 + ".previous\n" 104 + _ASM_EXTABLE(1b, 3b) 105 + : [ret] "+r" (ret), [ptr] "=m" (*addr) 106 + : [val] "r" (val)); 107 + 108 + return ret; 97 109 } 98 110 99 - static inline int xen_safe_read_ulong(unsigned long *addr, unsigned long *val) 111 + static inline int xen_safe_read_ulong(const unsigned long *addr, 112 + unsigned long *val) 100 113 { 101 - return __get_user(*val, (unsigned long __user *)addr); 114 + int ret = 0; 115 + unsigned long rval = ~0ul; 116 + 117 + asm volatile("1: mov %[ptr], %[rval]\n" 118 + "2:\n" 119 + ".section .fixup, \"ax\"\n" 120 + "3: sub $1, %[ret]\n" 121 + " jmp 2b\n" 122 + ".previous\n" 123 + _ASM_EXTABLE(1b, 3b) 124 + : [ret] "+r" (ret), [rval] "+r" (rval) 125 + : [ptr] "m" (*addr)); 126 + *val = rval; 127 + 128 + return ret; 102 129 } 103 130 104 131 #ifdef CONFIG_XEN_PV
+2 -5
arch/x86/include/uapi/asm/bootparam.h
··· 16 16 #define RAMDISK_PROMPT_FLAG 0x8000 17 17 #define RAMDISK_LOAD_FLAG 0x4000 18 18 19 - /* version flags */ 20 - #define VERSION_WRITTEN 0x8000 21 - 22 19 /* loadflags */ 23 20 #define LOADED_HIGH (1<<0) 24 21 #define KASLR_FLAG (1<<1) ··· 86 89 __u64 pref_address; 87 90 __u32 init_size; 88 91 __u32 handover_offset; 89 - __u64 acpi_rsdp_addr; 90 92 } __attribute__((packed)); 91 93 92 94 struct sys_desc_table { ··· 155 159 __u8 _pad2[4]; /* 0x054 */ 156 160 __u64 tboot_addr; /* 0x058 */ 157 161 struct ist_info ist_info; /* 0x060 */ 158 - __u8 _pad3[16]; /* 0x070 */ 162 + __u64 acpi_rsdp_addr; /* 0x070 */ 163 + __u8 _pad3[8]; /* 0x078 */ 159 164 __u8 hd0_info[16]; /* obsolete! */ /* 0x080 */ 160 165 __u8 hd1_info[16]; /* obsolete! */ /* 0x090 */ 161 166 struct sys_desc_table sys_desc_table; /* obsolete! */ /* 0x0a0 */
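The ABI point of the bootparam hunk: `acpi_rsdp_addr` moves out of `setup_header` into the fixed `boot_params` area at offset 0x070, consuming 8 of the 16 `_pad3` bytes, so the surrounding layout must not shift. A cut-down struct with the same offsets checks this — everything before 0x058 is collapsed into one pad, and only the fields around the change are spelled out:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct toy_boot_params {
	uint8_t  _before[0x058];	/* screen_info, apm_bios_info, ... */
	uint64_t tboot_addr;		/* 0x058 */
	uint8_t  ist_info[16];		/* 0x060 */
	uint64_t acpi_rsdp_addr;	/* 0x070, carved out of _pad3 */
	uint8_t  _pad3[8];		/* 0x078, was 16 bytes */
	uint8_t  hd0_info[16];		/* 0x080, obsolete but fixed */
} __attribute__((packed));
```

Keeping the field in `boot_params` rather than `setup_header` also avoids bumping the setup-header version, which is why the header hunk above reverts `0x020e` back to `0x020d` and drops the quad from setup.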
+1 -1
arch/x86/kernel/acpi/boot.c
··· 1776 1776 1777 1777 u64 x86_default_get_root_pointer(void) 1778 1778 { 1779 - return boot_params.hdr.acpi_rsdp_addr; 1779 + return boot_params.acpi_rsdp_addr; 1780 1780 }
+394 -143
arch/x86/kernel/cpu/bugs.c
··· 14 14 #include <linux/module.h> 15 15 #include <linux/nospec.h> 16 16 #include <linux/prctl.h> 17 + #include <linux/sched/smt.h> 17 18 18 19 #include <asm/spec-ctrl.h> 19 20 #include <asm/cmdline.h> ··· 53 52 */ 54 53 u64 __ro_after_init x86_amd_ls_cfg_base; 55 54 u64 __ro_after_init x86_amd_ls_cfg_ssbd_mask; 55 + 56 + /* Control conditional STIPB in switch_to() */ 57 + DEFINE_STATIC_KEY_FALSE(switch_to_cond_stibp); 58 + /* Control conditional IBPB in switch_mm() */ 59 + DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb); 60 + /* Control unconditional IBPB in switch_mm() */ 61 + DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb); 56 62 57 63 void __init check_bugs(void) 58 64 { ··· 131 123 #endif 132 124 } 133 125 134 - /* The kernel command line selection */ 135 - enum spectre_v2_mitigation_cmd { 136 - SPECTRE_V2_CMD_NONE, 137 - SPECTRE_V2_CMD_AUTO, 138 - SPECTRE_V2_CMD_FORCE, 139 - SPECTRE_V2_CMD_RETPOLINE, 140 - SPECTRE_V2_CMD_RETPOLINE_GENERIC, 141 - SPECTRE_V2_CMD_RETPOLINE_AMD, 142 - }; 143 - 144 - static const char *spectre_v2_strings[] = { 145 - [SPECTRE_V2_NONE] = "Vulnerable", 146 - [SPECTRE_V2_RETPOLINE_MINIMAL] = "Vulnerable: Minimal generic ASM retpoline", 147 - [SPECTRE_V2_RETPOLINE_MINIMAL_AMD] = "Vulnerable: Minimal AMD ASM retpoline", 148 - [SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline", 149 - [SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline", 150 - [SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS", 151 - }; 152 - 153 - #undef pr_fmt 154 - #define pr_fmt(fmt) "Spectre V2 : " fmt 155 - 156 - static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = 157 - SPECTRE_V2_NONE; 158 - 159 126 void 160 127 x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest) 161 128 { ··· 151 168 if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) || 152 169 static_cpu_has(X86_FEATURE_AMD_SSBD)) 153 170 hostval |= ssbd_tif_to_spec_ctrl(ti->flags); 171 + 172 + /* Conditional STIBP enabled? 
*/ 173 + if (static_branch_unlikely(&switch_to_cond_stibp)) 174 + hostval |= stibp_tif_to_spec_ctrl(ti->flags); 154 175 155 176 if (hostval != guestval) { 156 177 msrval = setguest ? guestval : hostval; ··· 189 202 tif = setguest ? ssbd_spec_ctrl_to_tif(guestval) : 190 203 ssbd_spec_ctrl_to_tif(hostval); 191 204 192 - speculative_store_bypass_update(tif); 205 + speculation_ctrl_update(tif); 193 206 } 194 207 } 195 208 EXPORT_SYMBOL_GPL(x86_virt_spec_ctrl); ··· 203 216 else if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD)) 204 217 wrmsrl(MSR_AMD64_LS_CFG, msrval); 205 218 } 219 + 220 + #undef pr_fmt 221 + #define pr_fmt(fmt) "Spectre V2 : " fmt 222 + 223 + static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = 224 + SPECTRE_V2_NONE; 225 + 226 + static enum spectre_v2_user_mitigation spectre_v2_user __ro_after_init = 227 + SPECTRE_V2_USER_NONE; 206 228 207 229 #ifdef RETPOLINE 208 230 static bool spectre_v2_bad_module; ··· 234 238 static inline const char *spectre_v2_module_string(void) { return ""; } 235 239 #endif 236 240 237 - static void __init spec2_print_if_insecure(const char *reason) 238 - { 239 - if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2)) 240 - pr_info("%s selected on command line.\n", reason); 241 - } 242 - 243 - static void __init spec2_print_if_secure(const char *reason) 244 - { 245 - if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2)) 246 - pr_info("%s selected on command line.\n", reason); 247 - } 248 - 249 - static inline bool retp_compiler(void) 250 - { 251 - return __is_defined(RETPOLINE); 252 - } 253 - 254 241 static inline bool match_option(const char *arg, int arglen, const char *opt) 255 242 { 256 243 int len = strlen(opt); ··· 241 262 return len == arglen && !strncmp(arg, opt, len); 242 263 } 243 264 265 + /* The kernel command line selection for spectre v2 */ 266 + enum spectre_v2_mitigation_cmd { 267 + SPECTRE_V2_CMD_NONE, 268 + SPECTRE_V2_CMD_AUTO, 269 + SPECTRE_V2_CMD_FORCE, 270 + SPECTRE_V2_CMD_RETPOLINE, 271 + 
SPECTRE_V2_CMD_RETPOLINE_GENERIC, 272 + SPECTRE_V2_CMD_RETPOLINE_AMD, 273 + }; 274 + 275 + enum spectre_v2_user_cmd { 276 + SPECTRE_V2_USER_CMD_NONE, 277 + SPECTRE_V2_USER_CMD_AUTO, 278 + SPECTRE_V2_USER_CMD_FORCE, 279 + SPECTRE_V2_USER_CMD_PRCTL, 280 + SPECTRE_V2_USER_CMD_PRCTL_IBPB, 281 + SPECTRE_V2_USER_CMD_SECCOMP, 282 + SPECTRE_V2_USER_CMD_SECCOMP_IBPB, 283 + }; 284 + 285 + static const char * const spectre_v2_user_strings[] = { 286 + [SPECTRE_V2_USER_NONE] = "User space: Vulnerable", 287 + [SPECTRE_V2_USER_STRICT] = "User space: Mitigation: STIBP protection", 288 + [SPECTRE_V2_USER_PRCTL] = "User space: Mitigation: STIBP via prctl", 289 + [SPECTRE_V2_USER_SECCOMP] = "User space: Mitigation: STIBP via seccomp and prctl", 290 + }; 291 + 292 + static const struct { 293 + const char *option; 294 + enum spectre_v2_user_cmd cmd; 295 + bool secure; 296 + } v2_user_options[] __initdata = { 297 + { "auto", SPECTRE_V2_USER_CMD_AUTO, false }, 298 + { "off", SPECTRE_V2_USER_CMD_NONE, false }, 299 + { "on", SPECTRE_V2_USER_CMD_FORCE, true }, 300 + { "prctl", SPECTRE_V2_USER_CMD_PRCTL, false }, 301 + { "prctl,ibpb", SPECTRE_V2_USER_CMD_PRCTL_IBPB, false }, 302 + { "seccomp", SPECTRE_V2_USER_CMD_SECCOMP, false }, 303 + { "seccomp,ibpb", SPECTRE_V2_USER_CMD_SECCOMP_IBPB, false }, 304 + }; 305 + 306 + static void __init spec_v2_user_print_cond(const char *reason, bool secure) 307 + { 308 + if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure) 309 + pr_info("spectre_v2_user=%s forced on command line.\n", reason); 310 + } 311 + 312 + static enum spectre_v2_user_cmd __init 313 + spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd) 314 + { 315 + char arg[20]; 316 + int ret, i; 317 + 318 + switch (v2_cmd) { 319 + case SPECTRE_V2_CMD_NONE: 320 + return SPECTRE_V2_USER_CMD_NONE; 321 + case SPECTRE_V2_CMD_FORCE: 322 + return SPECTRE_V2_USER_CMD_FORCE; 323 + default: 324 + break; 325 + } 326 + 327 + ret = cmdline_find_option(boot_command_line, "spectre_v2_user", 328 + 
arg, sizeof(arg)); 329 + if (ret < 0) 330 + return SPECTRE_V2_USER_CMD_AUTO; 331 + 332 + for (i = 0; i < ARRAY_SIZE(v2_user_options); i++) { 333 + if (match_option(arg, ret, v2_user_options[i].option)) { 334 + spec_v2_user_print_cond(v2_user_options[i].option, 335 + v2_user_options[i].secure); 336 + return v2_user_options[i].cmd; 337 + } 338 + } 339 + 340 + pr_err("Unknown user space protection option (%s). Switching to AUTO select\n", arg); 341 + return SPECTRE_V2_USER_CMD_AUTO; 342 + } 343 + 344 + static void __init 345 + spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd) 346 + { 347 + enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE; 348 + bool smt_possible = IS_ENABLED(CONFIG_SMP); 349 + enum spectre_v2_user_cmd cmd; 350 + 351 + if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP)) 352 + return; 353 + 354 + if (cpu_smt_control == CPU_SMT_FORCE_DISABLED || 355 + cpu_smt_control == CPU_SMT_NOT_SUPPORTED) 356 + smt_possible = false; 357 + 358 + cmd = spectre_v2_parse_user_cmdline(v2_cmd); 359 + switch (cmd) { 360 + case SPECTRE_V2_USER_CMD_NONE: 361 + goto set_mode; 362 + case SPECTRE_V2_USER_CMD_FORCE: 363 + mode = SPECTRE_V2_USER_STRICT; 364 + break; 365 + case SPECTRE_V2_USER_CMD_PRCTL: 366 + case SPECTRE_V2_USER_CMD_PRCTL_IBPB: 367 + mode = SPECTRE_V2_USER_PRCTL; 368 + break; 369 + case SPECTRE_V2_USER_CMD_AUTO: 370 + case SPECTRE_V2_USER_CMD_SECCOMP: 371 + case SPECTRE_V2_USER_CMD_SECCOMP_IBPB: 372 + if (IS_ENABLED(CONFIG_SECCOMP)) 373 + mode = SPECTRE_V2_USER_SECCOMP; 374 + else 375 + mode = SPECTRE_V2_USER_PRCTL; 376 + break; 377 + } 378 + 379 + /* Initialize Indirect Branch Prediction Barrier */ 380 + if (boot_cpu_has(X86_FEATURE_IBPB)) { 381 + setup_force_cpu_cap(X86_FEATURE_USE_IBPB); 382 + 383 + switch (cmd) { 384 + case SPECTRE_V2_USER_CMD_FORCE: 385 + case SPECTRE_V2_USER_CMD_PRCTL_IBPB: 386 + case SPECTRE_V2_USER_CMD_SECCOMP_IBPB: 387 + static_branch_enable(&switch_mm_always_ibpb); 388 + break; 389 
+ case SPECTRE_V2_USER_CMD_PRCTL: 390 + case SPECTRE_V2_USER_CMD_AUTO: 391 + case SPECTRE_V2_USER_CMD_SECCOMP: 392 + static_branch_enable(&switch_mm_cond_ibpb); 393 + break; 394 + default: 395 + break; 396 + } 397 + 398 + pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n", 399 + static_key_enabled(&switch_mm_always_ibpb) ? 400 + "always-on" : "conditional"); 401 + } 402 + 403 + /* If enhanced IBRS is enabled no STIPB required */ 404 + if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) 405 + return; 406 + 407 + /* 408 + * If SMT is not possible or STIBP is not available clear the STIPB 409 + * mode. 410 + */ 411 + if (!smt_possible || !boot_cpu_has(X86_FEATURE_STIBP)) 412 + mode = SPECTRE_V2_USER_NONE; 413 + set_mode: 414 + spectre_v2_user = mode; 415 + /* Only print the STIBP mode when SMT possible */ 416 + if (smt_possible) 417 + pr_info("%s\n", spectre_v2_user_strings[mode]); 418 + } 419 + 420 + static const char * const spectre_v2_strings[] = { 421 + [SPECTRE_V2_NONE] = "Vulnerable", 422 + [SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline", 423 + [SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline", 424 + [SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS", 425 + }; 426 + 244 427 static const struct { 245 428 const char *option; 246 429 enum spectre_v2_mitigation_cmd cmd; 247 430 bool secure; 248 - } mitigation_options[] = { 249 - { "off", SPECTRE_V2_CMD_NONE, false }, 250 - { "on", SPECTRE_V2_CMD_FORCE, true }, 251 - { "retpoline", SPECTRE_V2_CMD_RETPOLINE, false }, 252 - { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_AMD, false }, 253 - { "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false }, 254 - { "auto", SPECTRE_V2_CMD_AUTO, false }, 431 + } mitigation_options[] __initdata = { 432 + { "off", SPECTRE_V2_CMD_NONE, false }, 433 + { "on", SPECTRE_V2_CMD_FORCE, true }, 434 + { "retpoline", SPECTRE_V2_CMD_RETPOLINE, false }, 435 + { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_AMD, false }, 436 + { 
"retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false }, 437 + { "auto", SPECTRE_V2_CMD_AUTO, false }, 255 438 }; 439 + 440 + static void __init spec_v2_print_cond(const char *reason, bool secure) 441 + { 442 + if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure) 443 + pr_info("%s selected on command line.\n", reason); 444 + } 256 445 257 446 static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void) 258 447 { 448 + enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO; 259 449 char arg[20]; 260 450 int ret, i; 261 - enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO; 262 451 263 452 if (cmdline_find_option_bool(boot_command_line, "nospectre_v2")) 264 453 return SPECTRE_V2_CMD_NONE; 265 - else { 266 - ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg)); 267 - if (ret < 0) 268 - return SPECTRE_V2_CMD_AUTO; 269 454 270 - for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) { 271 - if (!match_option(arg, ret, mitigation_options[i].option)) 272 - continue; 273 - cmd = mitigation_options[i].cmd; 274 - break; 275 - } 455 + ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg)); 456 + if (ret < 0) 457 + return SPECTRE_V2_CMD_AUTO; 276 458 277 - if (i >= ARRAY_SIZE(mitigation_options)) { 278 - pr_err("unknown option (%s). Switching to AUTO select\n", arg); 279 - return SPECTRE_V2_CMD_AUTO; 280 - } 459 + for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) { 460 + if (!match_option(arg, ret, mitigation_options[i].option)) 461 + continue; 462 + cmd = mitigation_options[i].cmd; 463 + break; 464 + } 465 + 466 + if (i >= ARRAY_SIZE(mitigation_options)) { 467 + pr_err("unknown option (%s). 
Switching to AUTO select\n", arg); 468 + return SPECTRE_V2_CMD_AUTO; 281 469 } 282 470 283 471 if ((cmd == SPECTRE_V2_CMD_RETPOLINE || ··· 462 316 return SPECTRE_V2_CMD_AUTO; 463 317 } 464 318 465 - if (mitigation_options[i].secure) 466 - spec2_print_if_secure(mitigation_options[i].option); 467 - else 468 - spec2_print_if_insecure(mitigation_options[i].option); 469 - 319 + spec_v2_print_cond(mitigation_options[i].option, 320 + mitigation_options[i].secure); 470 321 return cmd; 471 - } 472 - 473 - static bool stibp_needed(void) 474 - { 475 - if (spectre_v2_enabled == SPECTRE_V2_NONE) 476 - return false; 477 - 478 - if (!boot_cpu_has(X86_FEATURE_STIBP)) 479 - return false; 480 - 481 - return true; 482 - } 483 - 484 - static void update_stibp_msr(void *info) 485 - { 486 - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); 487 - } 488 - 489 - void arch_smt_update(void) 490 - { 491 - u64 mask; 492 - 493 - if (!stibp_needed()) 494 - return; 495 - 496 - mutex_lock(&spec_ctrl_mutex); 497 - mask = x86_spec_ctrl_base; 498 - if (cpu_smt_control == CPU_SMT_ENABLED) 499 - mask |= SPEC_CTRL_STIBP; 500 - else 501 - mask &= ~SPEC_CTRL_STIBP; 502 - 503 - if (mask != x86_spec_ctrl_base) { 504 - pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n", 505 - cpu_smt_control == CPU_SMT_ENABLED ? 506 - "Enabling" : "Disabling"); 507 - x86_spec_ctrl_base = mask; 508 - on_each_cpu(update_stibp_msr, NULL, 1); 509 - } 510 - mutex_unlock(&spec_ctrl_mutex); 511 322 } 512 323 513 324 static void __init spectre_v2_select_mitigation(void) ··· 520 417 pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n"); 521 418 goto retpoline_generic; 522 419 } 523 - mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_AMD : 524 - SPECTRE_V2_RETPOLINE_MINIMAL_AMD; 420 + mode = SPECTRE_V2_RETPOLINE_AMD; 525 421 setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD); 526 422 setup_force_cpu_cap(X86_FEATURE_RETPOLINE); 527 423 } else { 528 424 retpoline_generic: 529 - mode = retp_compiler() ? 
SPECTRE_V2_RETPOLINE_GENERIC : 530 - SPECTRE_V2_RETPOLINE_MINIMAL; 425 + mode = SPECTRE_V2_RETPOLINE_GENERIC; 531 426 setup_force_cpu_cap(X86_FEATURE_RETPOLINE); 532 427 } 533 428 ··· 544 443 setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW); 545 444 pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n"); 546 445 547 - /* Initialize Indirect Branch Prediction Barrier if supported */ 548 - if (boot_cpu_has(X86_FEATURE_IBPB)) { 549 - setup_force_cpu_cap(X86_FEATURE_USE_IBPB); 550 - pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n"); 551 - } 552 - 553 446 /* 554 447 * Retpoline means the kernel is safe because it has no indirect 555 448 * branches. Enhanced IBRS protects firmware too, so, enable restricted ··· 560 465 pr_info("Enabling Restricted Speculation for firmware calls\n"); 561 466 } 562 467 468 + /* Set up IBPB and STIBP depending on the general spectre V2 command */ 469 + spectre_v2_user_select_mitigation(cmd); 470 + 563 471 /* Enable STIBP if appropriate */ 564 472 arch_smt_update(); 473 + } 474 + 475 + static void update_stibp_msr(void * __unused) 476 + { 477 + wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); 478 + } 479 + 480 + /* Update x86_spec_ctrl_base in case SMT state changed. */ 481 + static void update_stibp_strict(void) 482 + { 483 + u64 mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP; 484 + 485 + if (sched_smt_active()) 486 + mask |= SPEC_CTRL_STIBP; 487 + 488 + if (mask == x86_spec_ctrl_base) 489 + return; 490 + 491 + pr_info("Update user space SMT mitigation: STIBP %s\n", 492 + mask & SPEC_CTRL_STIBP ? 
"always-on" : "off"); 493 + x86_spec_ctrl_base = mask; 494 + on_each_cpu(update_stibp_msr, NULL, 1); 495 + } 496 + 497 + /* Update the static key controlling the evaluation of TIF_SPEC_IB */ 498 + static void update_indir_branch_cond(void) 499 + { 500 + if (sched_smt_active()) 501 + static_branch_enable(&switch_to_cond_stibp); 502 + else 503 + static_branch_disable(&switch_to_cond_stibp); 504 + } 505 + 506 + void arch_smt_update(void) 507 + { 508 + /* Enhanced IBRS implies STIBP. No update required. */ 509 + if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) 510 + return; 511 + 512 + mutex_lock(&spec_ctrl_mutex); 513 + 514 + switch (spectre_v2_user) { 515 + case SPECTRE_V2_USER_NONE: 516 + break; 517 + case SPECTRE_V2_USER_STRICT: 518 + update_stibp_strict(); 519 + break; 520 + case SPECTRE_V2_USER_PRCTL: 521 + case SPECTRE_V2_USER_SECCOMP: 522 + update_indir_branch_cond(); 523 + break; 524 + } 525 + 526 + mutex_unlock(&spec_ctrl_mutex); 565 527 } 566 528 567 529 #undef pr_fmt ··· 635 483 SPEC_STORE_BYPASS_CMD_SECCOMP, 636 484 }; 637 485 638 - static const char *ssb_strings[] = { 486 + static const char * const ssb_strings[] = { 639 487 [SPEC_STORE_BYPASS_NONE] = "Vulnerable", 640 488 [SPEC_STORE_BYPASS_DISABLE] = "Mitigation: Speculative Store Bypass disabled", 641 489 [SPEC_STORE_BYPASS_PRCTL] = "Mitigation: Speculative Store Bypass disabled via prctl", ··· 645 493 static const struct { 646 494 const char *option; 647 495 enum ssb_mitigation_cmd cmd; 648 - } ssb_mitigation_options[] = { 496 + } ssb_mitigation_options[] __initdata = { 649 497 { "auto", SPEC_STORE_BYPASS_CMD_AUTO }, /* Platform decides */ 650 498 { "on", SPEC_STORE_BYPASS_CMD_ON }, /* Disable Speculative Store Bypass */ 651 499 { "off", SPEC_STORE_BYPASS_CMD_NONE }, /* Don't touch Speculative Store Bypass */ ··· 756 604 #undef pr_fmt 757 605 #define pr_fmt(fmt) "Speculation prctl: " fmt 758 606 607 + static void task_update_spec_tif(struct task_struct *tsk) 608 + { 609 + /* Force the update of the 
real TIF bits */ 610 + set_tsk_thread_flag(tsk, TIF_SPEC_FORCE_UPDATE); 611 + 612 + /* 613 + * Immediately update the speculation control MSRs for the current 614 + * task, but for a non-current task delay setting the CPU 615 + * mitigation until it is scheduled next. 616 + * 617 + * This can only happen for SECCOMP mitigation. For PRCTL it's 618 + * always the current task. 619 + */ 620 + if (tsk == current) 621 + speculation_ctrl_update_current(); 622 + } 623 + 759 624 static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl) 760 625 { 761 - bool update; 762 - 763 626 if (ssb_mode != SPEC_STORE_BYPASS_PRCTL && 764 627 ssb_mode != SPEC_STORE_BYPASS_SECCOMP) 765 628 return -ENXIO; ··· 785 618 if (task_spec_ssb_force_disable(task)) 786 619 return -EPERM; 787 620 task_clear_spec_ssb_disable(task); 788 - update = test_and_clear_tsk_thread_flag(task, TIF_SSBD); 621 + task_update_spec_tif(task); 789 622 break; 790 623 case PR_SPEC_DISABLE: 791 624 task_set_spec_ssb_disable(task); 792 - update = !test_and_set_tsk_thread_flag(task, TIF_SSBD); 625 + task_update_spec_tif(task); 793 626 break; 794 627 case PR_SPEC_FORCE_DISABLE: 795 628 task_set_spec_ssb_disable(task); 796 629 task_set_spec_ssb_force_disable(task); 797 - update = !test_and_set_tsk_thread_flag(task, TIF_SSBD); 630 + task_update_spec_tif(task); 798 631 break; 799 632 default: 800 633 return -ERANGE; 801 634 } 635 + return 0; 636 + } 802 637 803 - /* 804 - * If being set on non-current task, delay setting the CPU 805 - * mitigation until it is next scheduled. 806 - */ 807 - if (task == current && update) 808 - speculative_store_bypass_update_current(); 809 - 638 + static int ib_prctl_set(struct task_struct *task, unsigned long ctrl) 639 + { 640 + switch (ctrl) { 641 + case PR_SPEC_ENABLE: 642 + if (spectre_v2_user == SPECTRE_V2_USER_NONE) 643 + return 0; 644 + /* 645 + * Indirect branch speculation is always disabled in strict 646 + * mode. 
647 + */ 648 + if (spectre_v2_user == SPECTRE_V2_USER_STRICT) 649 + return -EPERM; 650 + task_clear_spec_ib_disable(task); 651 + task_update_spec_tif(task); 652 + break; 653 + case PR_SPEC_DISABLE: 654 + case PR_SPEC_FORCE_DISABLE: 655 + /* 656 + * Indirect branch speculation is always allowed when 657 + * mitigation is force disabled. 658 + */ 659 + if (spectre_v2_user == SPECTRE_V2_USER_NONE) 660 + return -EPERM; 661 + if (spectre_v2_user == SPECTRE_V2_USER_STRICT) 662 + return 0; 663 + task_set_spec_ib_disable(task); 664 + if (ctrl == PR_SPEC_FORCE_DISABLE) 665 + task_set_spec_ib_force_disable(task); 666 + task_update_spec_tif(task); 667 + break; 668 + default: 669 + return -ERANGE; 670 + } 810 671 return 0; 811 672 } 812 673 ··· 844 649 switch (which) { 845 650 case PR_SPEC_STORE_BYPASS: 846 651 return ssb_prctl_set(task, ctrl); 652 + case PR_SPEC_INDIRECT_BRANCH: 653 + return ib_prctl_set(task, ctrl); 847 654 default: 848 655 return -ENODEV; 849 656 } ··· 856 659 { 857 660 if (ssb_mode == SPEC_STORE_BYPASS_SECCOMP) 858 661 ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE); 662 + if (spectre_v2_user == SPECTRE_V2_USER_SECCOMP) 663 + ib_prctl_set(task, PR_SPEC_FORCE_DISABLE); 859 664 } 860 665 #endif 861 666 ··· 880 681 } 881 682 } 882 683 684 + static int ib_prctl_get(struct task_struct *task) 685 + { 686 + if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2)) 687 + return PR_SPEC_NOT_AFFECTED; 688 + 689 + switch (spectre_v2_user) { 690 + case SPECTRE_V2_USER_NONE: 691 + return PR_SPEC_ENABLE; 692 + case SPECTRE_V2_USER_PRCTL: 693 + case SPECTRE_V2_USER_SECCOMP: 694 + if (task_spec_ib_force_disable(task)) 695 + return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE; 696 + if (task_spec_ib_disable(task)) 697 + return PR_SPEC_PRCTL | PR_SPEC_DISABLE; 698 + return PR_SPEC_PRCTL | PR_SPEC_ENABLE; 699 + case SPECTRE_V2_USER_STRICT: 700 + return PR_SPEC_DISABLE; 701 + default: 702 + return PR_SPEC_NOT_AFFECTED; 703 + } 704 + } 705 + 883 706 int arch_prctl_spec_ctrl_get(struct task_struct *task, 
unsigned long which) 884 707 { 885 708 switch (which) { 886 709 case PR_SPEC_STORE_BYPASS: 887 710 return ssb_prctl_get(task); 711 + case PR_SPEC_INDIRECT_BRANCH: 712 + return ib_prctl_get(task); 888 713 default: 889 714 return -ENODEV; 890 715 } ··· 1046 823 #define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion" 1047 824 1048 825 #if IS_ENABLED(CONFIG_KVM_INTEL) 1049 - static const char *l1tf_vmx_states[] = { 826 + static const char * const l1tf_vmx_states[] = { 1050 827 [VMENTER_L1D_FLUSH_AUTO] = "auto", 1051 828 [VMENTER_L1D_FLUSH_NEVER] = "vulnerable", 1052 829 [VMENTER_L1D_FLUSH_COND] = "conditional cache flushes", ··· 1062 839 1063 840 if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_EPT_DISABLED || 1064 841 (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER && 1065 - cpu_smt_control == CPU_SMT_ENABLED)) 842 + sched_smt_active())) { 1066 843 return sprintf(buf, "%s; VMX: %s\n", L1TF_DEFAULT_MSG, 1067 844 l1tf_vmx_states[l1tf_vmx_mitigation]); 845 + } 1068 846 1069 847 return sprintf(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG, 1070 848 l1tf_vmx_states[l1tf_vmx_mitigation], 1071 - cpu_smt_control == CPU_SMT_ENABLED ? "vulnerable" : "disabled"); 849 + sched_smt_active() ? 
"vulnerable" : "disabled"); 1072 850 } 1073 851 #else 1074 852 static ssize_t l1tf_show_state(char *buf) ··· 1078 854 } 1079 855 #endif 1080 856 857 + static char *stibp_state(void) 858 + { 859 + if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) 860 + return ""; 861 + 862 + switch (spectre_v2_user) { 863 + case SPECTRE_V2_USER_NONE: 864 + return ", STIBP: disabled"; 865 + case SPECTRE_V2_USER_STRICT: 866 + return ", STIBP: forced"; 867 + case SPECTRE_V2_USER_PRCTL: 868 + case SPECTRE_V2_USER_SECCOMP: 869 + if (static_key_enabled(&switch_to_cond_stibp)) 870 + return ", STIBP: conditional"; 871 + } 872 + return ""; 873 + } 874 + 875 + static char *ibpb_state(void) 876 + { 877 + if (boot_cpu_has(X86_FEATURE_IBPB)) { 878 + if (static_key_enabled(&switch_mm_always_ibpb)) 879 + return ", IBPB: always-on"; 880 + if (static_key_enabled(&switch_mm_cond_ibpb)) 881 + return ", IBPB: conditional"; 882 + return ", IBPB: disabled"; 883 + } 884 + return ""; 885 + } 886 + 1081 887 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr, 1082 888 char *buf, unsigned int bug) 1083 889 { 1084 - int ret; 1085 - 1086 890 if (!boot_cpu_has_bug(bug)) 1087 891 return sprintf(buf, "Not affected\n"); 1088 892 ··· 1128 876 return sprintf(buf, "Mitigation: __user pointer sanitization\n"); 1129 877 1130 878 case X86_BUG_SPECTRE_V2: 1131 - ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled], 1132 - boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "", 879 + return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled], 880 + ibpb_state(), 1133 881 boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "", 1134 - (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "", 882 + stibp_state(), 1135 883 boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "", 1136 884 spectre_v2_module_string()); 1137 - return ret; 1138 885 1139 886 case X86_BUG_SPEC_STORE_BYPASS: 1140 887 return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
+4 -2
arch/x86/kernel/cpu/mcheck/mce.c
··· 485 485 * be somewhat complicated (e.g. segment offset would require an instruction 486 486 * parser). So only support physical addresses up to page granuality for now. 487 487 */ 488 - static int mce_usable_address(struct mce *m) 488 + int mce_usable_address(struct mce *m) 489 489 { 490 490 if (!(m->status & MCI_STATUS_ADDRV)) 491 491 return 0; ··· 505 505 506 506 return 1; 507 507 } 508 + EXPORT_SYMBOL_GPL(mce_usable_address); 508 509 509 510 bool mce_is_memory_error(struct mce *m) 510 511 { ··· 535 534 } 536 535 EXPORT_SYMBOL_GPL(mce_is_memory_error); 537 536 538 - static bool mce_is_correctable(struct mce *m) 537 + bool mce_is_correctable(struct mce *m) 539 538 { 540 539 if (m->cpuvendor == X86_VENDOR_AMD && m->status & MCI_STATUS_DEFERRED) 541 540 return false; ··· 548 547 549 548 return true; 550 549 } 550 + EXPORT_SYMBOL_GPL(mce_is_correctable); 551 551 552 552 static bool cec_add_mce(struct mce *m) 553 553 {
+6 -13
arch/x86/kernel/cpu/mcheck/mce_amd.c
··· 56 56 /* Threshold LVT offset is at MSR0xC0000410[15:12] */ 57 57 #define SMCA_THR_LVT_OFF 0xF000 58 58 59 - static bool thresholding_en; 59 + static bool thresholding_irq_en; 60 60 61 61 static const char * const th_names[] = { 62 62 "load_store", ··· 534 534 535 535 set_offset: 536 536 offset = setup_APIC_mce_threshold(offset, new); 537 - 538 - if ((offset == new) && (mce_threshold_vector != amd_threshold_interrupt)) 539 - mce_threshold_vector = amd_threshold_interrupt; 537 + if (offset == new) 538 + thresholding_irq_en = true; 540 539 541 540 done: 542 541 mce_threshold_block_init(&b, offset); ··· 1356 1357 { 1357 1358 unsigned int bank; 1358 1359 1359 - if (!thresholding_en) 1360 - return 0; 1361 - 1362 1360 for (bank = 0; bank < mca_cfg.banks; ++bank) { 1363 1361 if (!(per_cpu(bank_map, cpu) & (1 << bank))) 1364 1362 continue; ··· 1372 1376 unsigned int bank; 1373 1377 struct threshold_bank **bp; 1374 1378 int err = 0; 1375 - 1376 - if (!thresholding_en) 1377 - return 0; 1378 1379 1379 1380 bp = per_cpu(threshold_banks, cpu); 1380 1381 if (bp) ··· 1401 1408 { 1402 1409 unsigned lcpu = 0; 1403 1410 1404 - if (mce_threshold_vector == amd_threshold_interrupt) 1405 - thresholding_en = true; 1406 - 1407 1411 /* to hit CPUs online before the notifier is up */ 1408 1412 for_each_online_cpu(lcpu) { 1409 1413 int err = mce_threshold_create_device(lcpu); ··· 1408 1418 if (err) 1409 1419 return err; 1410 1420 } 1421 + 1422 + if (thresholding_irq_en) 1423 + mce_threshold_vector = amd_threshold_interrupt; 1411 1424 1412 1425 return 0; 1413 1426 }
+11
arch/x86/kernel/cpu/mshyperv.c
··· 20 20 #include <linux/interrupt.h> 21 21 #include <linux/irq.h> 22 22 #include <linux/kexec.h> 23 + #include <linux/i8253.h> 23 24 #include <asm/processor.h> 24 25 #include <asm/hypervisor.h> 25 26 #include <asm/hyperv-tlfs.h> ··· 295 294 */ 296 295 if (efi_enabled(EFI_BOOT)) 297 296 x86_platform.get_nmi_reason = hv_get_nmi_reason; 297 + 298 + /* 299 + * Hyper-V VMs have a PIT emulation quirk such that zeroing the 300 + * counter register during PIT shutdown restarts the PIT. So it 301 + * continues to interrupt @18.2 HZ. Setting i8253_clear_counter 302 + * to false tells pit_shutdown() not to zero the counter so that 303 + * the PIT really is shutdown. Generation 2 VMs don't have a PIT, 304 + * and setting this value has no effect. 305 + */ 306 + i8253_clear_counter_on_shutdown = false; 298 307 299 308 #if IS_ENABLED(CONFIG_HYPERV) 300 309 /*
+1 -1
arch/x86/kernel/cpu/vmware.c
··· 77 77 } 78 78 early_param("no-vmw-sched-clock", setup_vmw_sched_clock); 79 79 80 - static unsigned long long vmware_sched_clock(void) 80 + static unsigned long long notrace vmware_sched_clock(void) 81 81 { 82 82 unsigned long long ns; 83 83
+2 -2
arch/x86/kernel/fpu/signal.c
··· 344 344 sanitize_restored_xstate(tsk, &env, xfeatures, fx_only); 345 345 } 346 346 347 + local_bh_disable(); 347 348 fpu->initialized = 1; 348 - preempt_disable(); 349 349 fpu__restore(fpu); 350 - preempt_enable(); 350 + local_bh_enable(); 351 351 352 352 return err; 353 353 } else {
+1 -14
arch/x86/kernel/ftrace.c
··· 994 994 { 995 995 unsigned long old; 996 996 int faulted; 997 - struct ftrace_graph_ent trace; 998 997 unsigned long return_hooker = (unsigned long) 999 998 &return_to_handler; 1000 999 ··· 1045 1046 return; 1046 1047 } 1047 1048 1048 - trace.func = self_addr; 1049 - trace.depth = current->curr_ret_stack + 1; 1050 - 1051 - /* Only trace if the calling function expects to */ 1052 - if (!ftrace_graph_entry(&trace)) { 1049 + if (function_graph_enter(old, self_addr, frame_pointer, parent)) 1053 1050 *parent = old; 1054 - return; 1055 - } 1056 - 1057 - if (ftrace_push_return_trace(old, self_addr, &trace.depth, 1058 - frame_pointer, parent) == -EBUSY) { 1059 - *parent = old; 1060 - return; 1061 - } 1062 1051 } 1063 1052 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
-1
arch/x86/kernel/head32.c
··· 37 37 cr4_init_shadow(); 38 38 39 39 sanitize_boot_params(&boot_params); 40 - x86_verify_bootdata_version(); 41 40 42 41 x86_early_init_platform_quirks(); 43 42
-2
arch/x86/kernel/head64.c
··· 457 457 if (!boot_params.hdr.version) 458 458 copy_bootdata(__va(real_mode_data)); 459 459 460 - x86_verify_bootdata_version(); 461 - 462 460 x86_early_init_platform_quirks(); 463 461 464 462 switch (boot_params.hdr.hardware_subarch) {
+38 -21
arch/x86/kernel/ldt.c
··· 199 199 /* 200 200 * If PTI is enabled, this maps the LDT into the kernelmode and 201 201 * usermode tables for the given mm. 202 - * 203 - * There is no corresponding unmap function. Even if the LDT is freed, we 204 - * leave the PTEs around until the slot is reused or the mm is destroyed. 205 - * This is harmless: the LDT is always in ordinary memory, and no one will 206 - * access the freed slot. 207 - * 208 - * If we wanted to unmap freed LDTs, we'd also need to do a flush to make 209 - * it useful, and the flush would slow down modify_ldt(). 210 202 */ 211 203 static int 212 204 map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot) ··· 206 214 unsigned long va; 207 215 bool is_vmalloc; 208 216 spinlock_t *ptl; 209 - pgd_t *pgd; 210 - int i; 217 + int i, nr_pages; 211 218 212 219 if (!static_cpu_has(X86_FEATURE_PTI)) 213 220 return 0; ··· 220 229 /* Check if the current mappings are sane */ 221 230 sanity_check_ldt_mapping(mm); 222 231 223 - /* 224 - * Did we already have the top level entry allocated? We can't 225 - * use pgd_none() for this because it doens't do anything on 226 - * 4-level page table kernels. 
227 - */ 228 - pgd = pgd_offset(mm, LDT_BASE_ADDR); 229 - 230 232 is_vmalloc = is_vmalloc_addr(ldt->entries); 231 233 232 - for (i = 0; i * PAGE_SIZE < ldt->nr_entries * LDT_ENTRY_SIZE; i++) { 234 + nr_pages = DIV_ROUND_UP(ldt->nr_entries * LDT_ENTRY_SIZE, PAGE_SIZE); 235 + 236 + for (i = 0; i < nr_pages; i++) { 233 237 unsigned long offset = i << PAGE_SHIFT; 234 238 const void *src = (char *)ldt->entries + offset; 235 239 unsigned long pfn; ··· 258 272 /* Propagate LDT mapping to the user page-table */ 259 273 map_ldt_struct_to_user(mm); 260 274 261 - va = (unsigned long)ldt_slot_va(slot); 262 - flush_tlb_mm_range(mm, va, va + LDT_SLOT_STRIDE, PAGE_SHIFT, false); 263 - 264 275 ldt->slot = slot; 265 276 return 0; 277 + } 278 + 279 + static void unmap_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt) 280 + { 281 + unsigned long va; 282 + int i, nr_pages; 283 + 284 + if (!ldt) 285 + return; 286 + 287 + /* LDT map/unmap is only required for PTI */ 288 + if (!static_cpu_has(X86_FEATURE_PTI)) 289 + return; 290 + 291 + nr_pages = DIV_ROUND_UP(ldt->nr_entries * LDT_ENTRY_SIZE, PAGE_SIZE); 292 + 293 + for (i = 0; i < nr_pages; i++) { 294 + unsigned long offset = i << PAGE_SHIFT; 295 + spinlock_t *ptl; 296 + pte_t *ptep; 297 + 298 + va = (unsigned long)ldt_slot_va(ldt->slot) + offset; 299 + ptep = get_locked_pte(mm, va, &ptl); 300 + pte_clear(mm, va, ptep); 301 + pte_unmap_unlock(ptep, ptl); 302 + } 303 + 304 + va = (unsigned long)ldt_slot_va(ldt->slot); 305 + flush_tlb_mm_range(mm, va, va + nr_pages * PAGE_SIZE, PAGE_SHIFT, false); 266 306 } 267 307 268 308 #else /* !CONFIG_PAGE_TABLE_ISOLATION */ ··· 297 285 map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot) 298 286 { 299 287 return 0; 288 + } 289 + 290 + static void unmap_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt) 291 + { 300 292 } 301 293 #endif /* CONFIG_PAGE_TABLE_ISOLATION */ 302 294 ··· 540 524 } 541 525 542 526 install_ldt(mm, new_ldt); 527 + unmap_ldt_struct(mm, old_ldt); 
543 528 free_ldt_struct(old_ldt); 544 529 error = 0; 545 530
+82 -19
arch/x86/kernel/process.c
··· 40 40 #include <asm/prctl.h> 41 41 #include <asm/spec-ctrl.h> 42 42 43 + #include "process.h" 44 + 43 45 /* 44 46 * per-CPU TSS segments. Threads are completely 'soft' on Linux, 45 47 * no more per-task TSS's. The TSS size is kept cacheline-aligned ··· 254 252 enable_cpuid(); 255 253 } 256 254 257 - static inline void switch_to_bitmap(struct tss_struct *tss, 258 - struct thread_struct *prev, 255 + static inline void switch_to_bitmap(struct thread_struct *prev, 259 256 struct thread_struct *next, 260 257 unsigned long tifp, unsigned long tifn) 261 258 { 259 + struct tss_struct *tss = this_cpu_ptr(&cpu_tss_rw); 260 + 262 261 if (tifn & _TIF_IO_BITMAP) { 263 262 /* 264 263 * Copy the relevant range of the IO bitmap. ··· 398 395 wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, ssbd_tif_to_spec_ctrl(tifn)); 399 396 } 400 397 401 - static __always_inline void intel_set_ssb_state(unsigned long tifn) 398 + /* 399 + * Update the MSRs managing speculation control, during context switch. 400 + * 401 + * tifp: Previous task's thread flags 402 + * tifn: Next task's thread flags 403 + */ 404 + static __always_inline void __speculation_ctrl_update(unsigned long tifp, 405 + unsigned long tifn) 402 406 { 403 - u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn); 407 + unsigned long tif_diff = tifp ^ tifn; 408 + u64 msr = x86_spec_ctrl_base; 409 + bool updmsr = false; 404 410 405 - wrmsrl(MSR_IA32_SPEC_CTRL, msr); 411 + /* 412 + * If TIF_SSBD is different, select the proper mitigation 413 + * method. Note that if SSBD mitigation is disabled or permanentely 414 + * enabled this branch can't be taken because nothing can set 415 + * TIF_SSBD. 
416 + */ 417 + if (tif_diff & _TIF_SSBD) { 418 + if (static_cpu_has(X86_FEATURE_VIRT_SSBD)) { 419 + amd_set_ssb_virt_state(tifn); 420 + } else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) { 421 + amd_set_core_ssb_state(tifn); 422 + } else if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) || 423 + static_cpu_has(X86_FEATURE_AMD_SSBD)) { 424 + msr |= ssbd_tif_to_spec_ctrl(tifn); 425 + updmsr = true; 426 + } 427 + } 428 + 429 + /* 430 + * Only evaluate TIF_SPEC_IB if conditional STIBP is enabled, 431 + * otherwise avoid the MSR write. 432 + */ 433 + if (IS_ENABLED(CONFIG_SMP) && 434 + static_branch_unlikely(&switch_to_cond_stibp)) { 435 + updmsr |= !!(tif_diff & _TIF_SPEC_IB); 436 + msr |= stibp_tif_to_spec_ctrl(tifn); 437 + } 438 + 439 + if (updmsr) 440 + wrmsrl(MSR_IA32_SPEC_CTRL, msr); 406 441 } 407 442 408 - static __always_inline void __speculative_store_bypass_update(unsigned long tifn) 443 + static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk) 409 444 { 410 - if (static_cpu_has(X86_FEATURE_VIRT_SSBD)) 411 - amd_set_ssb_virt_state(tifn); 412 - else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) 413 - amd_set_core_ssb_state(tifn); 414 - else 415 - intel_set_ssb_state(tifn); 445 + if (test_and_clear_tsk_thread_flag(tsk, TIF_SPEC_FORCE_UPDATE)) { 446 + if (task_spec_ssb_disable(tsk)) 447 + set_tsk_thread_flag(tsk, TIF_SSBD); 448 + else 449 + clear_tsk_thread_flag(tsk, TIF_SSBD); 450 + 451 + if (task_spec_ib_disable(tsk)) 452 + set_tsk_thread_flag(tsk, TIF_SPEC_IB); 453 + else 454 + clear_tsk_thread_flag(tsk, TIF_SPEC_IB); 455 + } 456 + /* Return the updated threadinfo flags*/ 457 + return task_thread_info(tsk)->flags; 416 458 } 417 459 418 - void speculative_store_bypass_update(unsigned long tif) 460 + void speculation_ctrl_update(unsigned long tif) 419 461 { 462 + /* Forced update. 
Make sure all relevant TIF flags are different */ 420 463 preempt_disable(); 421 - __speculative_store_bypass_update(tif); 464 + __speculation_ctrl_update(~tif, tif); 422 465 preempt_enable(); 423 466 } 424 467 425 - void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p, 426 - struct tss_struct *tss) 468 + /* Called from seccomp/prctl update */ 469 + void speculation_ctrl_update_current(void) 470 + { 471 + preempt_disable(); 472 + speculation_ctrl_update(speculation_ctrl_update_tif(current)); 473 + preempt_enable(); 474 + } 475 + 476 + void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p) 427 477 { 428 478 struct thread_struct *prev, *next; 429 479 unsigned long tifp, tifn; ··· 486 430 487 431 tifn = READ_ONCE(task_thread_info(next_p)->flags); 488 432 tifp = READ_ONCE(task_thread_info(prev_p)->flags); 489 - switch_to_bitmap(tss, prev, next, tifp, tifn); 433 + switch_to_bitmap(prev, next, tifp, tifn); 490 434 491 435 propagate_user_return_notify(prev_p, next_p); 492 436 ··· 507 451 if ((tifp ^ tifn) & _TIF_NOCPUID) 508 452 set_cpuid_faulting(!!(tifn & _TIF_NOCPUID)); 509 453 510 - if ((tifp ^ tifn) & _TIF_SSBD) 511 - __speculative_store_bypass_update(tifn); 454 + if (likely(!((tifp | tifn) & _TIF_SPEC_FORCE_UPDATE))) { 455 + __speculation_ctrl_update(tifp, tifn); 456 + } else { 457 + speculation_ctrl_update_tif(prev_p); 458 + tifn = speculation_ctrl_update_tif(next_p); 459 + 460 + /* Enforce MSR update to ensure consistent state */ 461 + __speculation_ctrl_update(~tifn, tifn); 462 + } 512 463 } 513 464 514 465 /*
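The speculation-control rework above keys everything off the XOR of the previous and next task's thread flags: only bits that actually changed can trigger an MSR write, and a forced update is expressed by passing `~tif` as the "previous" flags so every bit appears changed. A small sketch of that idiom, with illustrative bit positions and a hypothetical `tif_diff()` helper (the real `_TIF_*` values live in asm/thread_info.h):

```c
#include <assert.h>

/* Illustrative flag bits, not the real kernel values. */
#define _TIF_SSBD    (1UL << 0)
#define _TIF_SPEC_IB (1UL << 1)

/* XOR of the two flag words yields exactly the bits that changed across
 * the context switch; __speculation_ctrl_update() only considers a
 * mitigation bit when it appears in this diff. */
static unsigned long tif_diff(unsigned long tifp, unsigned long tifn)
{
	return tifp ^ tifn;
}
```

Calling the update path with `tif_diff(~tif, tif)` makes the diff all-ones, which is how `speculation_ctrl_update()` enforces a full re-evaluation regardless of what actually changed.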
+39
arch/x86/kernel/process.h
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // 3 + // Code shared between 32 and 64 bit 4 + 5 + #include <asm/spec-ctrl.h> 6 + 7 + void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p); 8 + 9 + /* 10 + * This needs to be inline to optimize for the common case where no extra 11 + * work needs to be done. 12 + */ 13 + static inline void switch_to_extra(struct task_struct *prev, 14 + struct task_struct *next) 15 + { 16 + unsigned long next_tif = task_thread_info(next)->flags; 17 + unsigned long prev_tif = task_thread_info(prev)->flags; 18 + 19 + if (IS_ENABLED(CONFIG_SMP)) { 20 + /* 21 + * Avoid __switch_to_xtra() invocation when conditional 22 + * STIPB is disabled and the only different bit is 23 + * TIF_SPEC_IB. For CONFIG_SMP=n TIF_SPEC_IB is not 24 + * in the TIF_WORK_CTXSW masks. 25 + */ 26 + if (!static_branch_likely(&switch_to_cond_stibp)) { 27 + prev_tif &= ~_TIF_SPEC_IB; 28 + next_tif &= ~_TIF_SPEC_IB; 29 + } 30 + } 31 + 32 + /* 33 + * __switch_to_xtra() handles debug registers, i/o bitmaps, 34 + * speculation mitigations etc. 35 + */ 36 + if (unlikely(next_tif & _TIF_WORK_CTXSW_NEXT || 37 + prev_tif & _TIF_WORK_CTXSW_PREV)) 38 + __switch_to_xtra(prev, next); 39 + }
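The `switch_to_extra()` inline above avoids the `__switch_to_xtra()` call when conditional STIBP is off and `TIF_SPEC_IB` is the only differing bit, by masking that bit out of both flag words before the test. A minimal sketch of that filtering decision, with a hypothetical `needs_switch_to_xtra()` helper and illustrative bit values (the real work masks are built in asm/thread_info.h):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative values; not the kernel's real bit assignments. */
#define _TIF_SPEC_IB   (1UL << 9)
#define _TIF_WORK_MASK (~0UL)   /* assume every bit counts as "work" */

/* When conditional STIBP is disabled, TIF_SPEC_IB cannot require any
 * context-switch work, so drop it from both sides before checking
 * whether any work bit remains set. */
static bool needs_switch_to_xtra(unsigned long prev_tif,
				 unsigned long next_tif,
				 bool cond_stibp)
{
	if (!cond_stibp) {
		prev_tif &= ~_TIF_SPEC_IB;
		next_tif &= ~_TIF_SPEC_IB;
	}
	return (prev_tif | next_tif) & _TIF_WORK_MASK;
}
```

This keeps the common fast path (no extra work) a couple of cheap bit operations, which is why the kernel version insists on being inline.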
+3 -7
arch/x86/kernel/process_32.c
··· 59 59 #include <asm/intel_rdt_sched.h> 60 60 #include <asm/proto.h> 61 61 62 + #include "process.h" 63 + 62 64 void __show_regs(struct pt_regs *regs, enum show_regs_mode mode) 63 65 { 64 66 unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L; ··· 234 232 struct fpu *prev_fpu = &prev->fpu; 235 233 struct fpu *next_fpu = &next->fpu; 236 234 int cpu = smp_processor_id(); 237 - struct tss_struct *tss = &per_cpu(cpu_tss_rw, cpu); 238 235 239 236 /* never put a printk in __switch_to... printk() calls wake_up*() indirectly */ 240 237 ··· 265 264 if (get_kernel_rpl() && unlikely(prev->iopl != next->iopl)) 266 265 set_iopl_mask(next->iopl); 267 266 268 - /* 269 - * Now maybe handle debug registers and/or IO bitmaps 270 - */ 271 - if (unlikely(task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV || 272 - task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT)) 273 - __switch_to_xtra(prev_p, next_p, tss); 267 + switch_to_extra(prev_p, next_p); 274 268 275 269 /* 276 270 * Leave lazy mode, flushing any hypercalls made here.
+3 -7
arch/x86/kernel/process_64.c
··· 60 60 #include <asm/unistd_32_ia32.h> 61 61 #endif 62 62 63 + #include "process.h" 64 + 63 65 /* Prints also some state that isn't saved in the pt_regs */ 64 66 void __show_regs(struct pt_regs *regs, enum show_regs_mode mode) 65 67 { ··· 555 553 struct fpu *prev_fpu = &prev->fpu; 556 554 struct fpu *next_fpu = &next->fpu; 557 555 int cpu = smp_processor_id(); 558 - struct tss_struct *tss = &per_cpu(cpu_tss_rw, cpu); 559 556 560 557 WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ENTRY) && 561 558 this_cpu_read(irq_count) != -1); ··· 618 617 /* Reload sp0. */ 619 618 update_task_stack(next_p); 620 619 621 - /* 622 - * Now maybe reload the debug registers and handle I/O bitmaps 623 - */ 624 - if (unlikely(task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT || 625 - task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV)) 626 - __switch_to_xtra(prev_p, next_p, tss); 620 + switch_to_extra(prev_p, next_p); 627 621 628 622 #ifdef CONFIG_XEN_PV 629 623 /*
-17
arch/x86/kernel/setup.c
··· 1280 1280 unwind_init(); 1281 1281 } 1282 1282 1283 - /* 1284 - * From boot protocol 2.14 onwards we expect the bootloader to set the 1285 - * version to "0x8000 | <used version>". In case we find a version >= 2.14 1286 - * without the 0x8000 we assume the boot loader supports 2.13 only and 1287 - * reset the version accordingly. The 0x8000 flag is removed in any case. 1288 - */ 1289 - void __init x86_verify_bootdata_version(void) 1290 - { 1291 - if (boot_params.hdr.version & VERSION_WRITTEN) 1292 - boot_params.hdr.version &= ~VERSION_WRITTEN; 1293 - else if (boot_params.hdr.version >= 0x020e) 1294 - boot_params.hdr.version = 0x020d; 1295 - 1296 - if (boot_params.hdr.version < 0x020e) 1297 - boot_params.hdr.acpi_rsdp_addr = 0; 1298 - } 1299 - 1300 1283 #ifdef CONFIG_X86_32 1301 1284 1302 1285 static struct resource video_ram_resource = {
+7 -77
arch/x86/kernel/vsmp_64.c
··· 26 26 27 27 #define TOPOLOGY_REGISTER_OFFSET 0x10 28 28 29 - #if defined CONFIG_PCI && defined CONFIG_PARAVIRT_XXL 30 - /* 31 - * Interrupt control on vSMPowered systems: 32 - * ~AC is a shadow of IF. If IF is 'on' AC should be 'off' 33 - * and vice versa. 34 - */ 35 - 36 - asmlinkage __visible unsigned long vsmp_save_fl(void) 37 - { 38 - unsigned long flags = native_save_fl(); 39 - 40 - if (!(flags & X86_EFLAGS_IF) || (flags & X86_EFLAGS_AC)) 41 - flags &= ~X86_EFLAGS_IF; 42 - return flags; 43 - } 44 - PV_CALLEE_SAVE_REGS_THUNK(vsmp_save_fl); 45 - 46 - __visible void vsmp_restore_fl(unsigned long flags) 47 - { 48 - if (flags & X86_EFLAGS_IF) 49 - flags &= ~X86_EFLAGS_AC; 50 - else 51 - flags |= X86_EFLAGS_AC; 52 - native_restore_fl(flags); 53 - } 54 - PV_CALLEE_SAVE_REGS_THUNK(vsmp_restore_fl); 55 - 56 - asmlinkage __visible void vsmp_irq_disable(void) 57 - { 58 - unsigned long flags = native_save_fl(); 59 - 60 - native_restore_fl((flags & ~X86_EFLAGS_IF) | X86_EFLAGS_AC); 61 - } 62 - PV_CALLEE_SAVE_REGS_THUNK(vsmp_irq_disable); 63 - 64 - asmlinkage __visible void vsmp_irq_enable(void) 65 - { 66 - unsigned long flags = native_save_fl(); 67 - 68 - native_restore_fl((flags | X86_EFLAGS_IF) & (~X86_EFLAGS_AC)); 69 - } 70 - PV_CALLEE_SAVE_REGS_THUNK(vsmp_irq_enable); 71 - 72 - static unsigned __init vsmp_patch(u8 type, void *ibuf, 73 - unsigned long addr, unsigned len) 74 - { 75 - switch (type) { 76 - case PARAVIRT_PATCH(irq.irq_enable): 77 - case PARAVIRT_PATCH(irq.irq_disable): 78 - case PARAVIRT_PATCH(irq.save_fl): 79 - case PARAVIRT_PATCH(irq.restore_fl): 80 - return paravirt_patch_default(type, ibuf, addr, len); 81 - default: 82 - return native_patch(type, ibuf, addr, len); 83 - } 84 - 85 - } 86 - 87 - static void __init set_vsmp_pv_ops(void) 29 + #ifdef CONFIG_PCI 30 + static void __init set_vsmp_ctl(void) 88 31 { 89 32 void __iomem *address; 90 33 unsigned int cap, ctl, cfg; ··· 52 109 } 53 110 #endif 54 111 55 - if (cap & ctl & (1 << 4)) { 56 - /* Setup 
irq ops and turn on vSMP IRQ fastpath handling */ 57 - pv_ops.irq.irq_disable = PV_CALLEE_SAVE(vsmp_irq_disable); 58 - pv_ops.irq.irq_enable = PV_CALLEE_SAVE(vsmp_irq_enable); 59 - pv_ops.irq.save_fl = PV_CALLEE_SAVE(vsmp_save_fl); 60 - pv_ops.irq.restore_fl = PV_CALLEE_SAVE(vsmp_restore_fl); 61 - pv_ops.init.patch = vsmp_patch; 62 - ctl &= ~(1 << 4); 63 - } 64 112 writel(ctl, address + 4); 65 113 ctl = readl(address + 4); 66 114 pr_info("vSMP CTL: control set to:0x%08x\n", ctl); 67 115 68 116 early_iounmap(address, 8); 69 117 } 70 - #else 71 - static void __init set_vsmp_pv_ops(void) 72 - { 73 - } 74 - #endif 75 - 76 - #ifdef CONFIG_PCI 77 118 static int is_vsmp = -1; 78 119 79 120 static void __init detect_vsmp_box(void) ··· 91 164 { 92 165 return 0; 93 166 } 167 + static void __init set_vsmp_ctl(void) 168 + { 169 + } 94 170 #endif 95 171 96 172 static void __init vsmp_cap_cpus(void) 97 173 { 98 - #if !defined(CONFIG_X86_VSMP) && defined(CONFIG_SMP) 174 + #if !defined(CONFIG_X86_VSMP) && defined(CONFIG_SMP) && defined(CONFIG_PCI) 99 175 void __iomem *address; 100 176 unsigned int cfg, topology, node_shift, maxcpus; 101 177 ··· 151 221 152 222 vsmp_cap_cpus(); 153 223 154 - set_vsmp_pv_ops(); 224 + set_vsmp_ctl(); 155 225 return; 156 226 }
+6 -1
arch/x86/kvm/lapic.c
··· 55 55 #define PRIo64 "o" 56 56 57 57 /* #define apic_debug(fmt,arg...) printk(KERN_WARNING fmt,##arg) */ 58 - #define apic_debug(fmt, arg...) 58 + #define apic_debug(fmt, arg...) do {} while (0) 59 59 60 60 /* 14 is the version for Xeon and Pentium 8.4.8*/ 61 61 #define APIC_VERSION (0x14UL | ((KVM_APIC_LVT_NUM - 1) << 16)) ··· 575 575 576 576 rcu_read_lock(); 577 577 map = rcu_dereference(kvm->arch.apic_map); 578 + 579 + if (unlikely(!map)) { 580 + count = -EOPNOTSUPP; 581 + goto out; 582 + } 578 583 579 584 if (min > map->max_apic_id) 580 585 goto out;
+9 -18
arch/x86/kvm/mmu.c
··· 5074 5074 } 5075 5075 5076 5076 static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa, 5077 - const u8 *new, int *bytes) 5077 + int *bytes) 5078 5078 { 5079 - u64 gentry; 5079 + u64 gentry = 0; 5080 5080 int r; 5081 5081 5082 5082 /* ··· 5088 5088 /* Handle a 32-bit guest writing two halves of a 64-bit gpte */ 5089 5089 *gpa &= ~(gpa_t)7; 5090 5090 *bytes = 8; 5091 - r = kvm_vcpu_read_guest(vcpu, *gpa, &gentry, 8); 5092 - if (r) 5093 - gentry = 0; 5094 - new = (const u8 *)&gentry; 5095 5091 } 5096 5092 5097 - switch (*bytes) { 5098 - case 4: 5099 - gentry = *(const u32 *)new; 5100 - break; 5101 - case 8: 5102 - gentry = *(const u64 *)new; 5103 - break; 5104 - default: 5105 - gentry = 0; 5106 - break; 5093 + if (*bytes == 4 || *bytes == 8) { 5094 + r = kvm_vcpu_read_guest_atomic(vcpu, *gpa, &gentry, *bytes); 5095 + if (r) 5096 + gentry = 0; 5107 5097 } 5108 5098 5109 5099 return gentry; ··· 5197 5207 5198 5208 pgprintk("%s: gpa %llx bytes %d\n", __func__, gpa, bytes); 5199 5209 5200 - gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, new, &bytes); 5201 - 5202 5210 /* 5203 5211 * No need to care whether allocation memory is successful 5204 5212 * or not since pte prefetch is skiped if it does not have ··· 5205 5217 mmu_topup_memory_caches(vcpu); 5206 5218 5207 5219 spin_lock(&vcpu->kvm->mmu_lock); 5220 + 5221 + gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, &bytes); 5222 + 5208 5223 ++vcpu->kvm->stat.mmu_pte_write; 5209 5224 kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE); 5210 5225
+29 -15
arch/x86/kvm/svm.c
··· 1446 1446 return vcpu->arch.tsc_offset; 1447 1447 } 1448 1448 1449 - static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) 1449 + static u64 svm_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) 1450 1450 { 1451 1451 struct vcpu_svm *svm = to_svm(vcpu); 1452 1452 u64 g_tsc_offset = 0; ··· 1464 1464 svm->vmcb->control.tsc_offset = offset + g_tsc_offset; 1465 1465 1466 1466 mark_dirty(svm->vmcb, VMCB_INTERCEPTS); 1467 + return svm->vmcb->control.tsc_offset; 1467 1468 } 1468 1469 1469 1470 static void avic_init_vmcb(struct vcpu_svm *svm) ··· 1665 1664 static int avic_init_access_page(struct kvm_vcpu *vcpu) 1666 1665 { 1667 1666 struct kvm *kvm = vcpu->kvm; 1668 - int ret; 1667 + int ret = 0; 1669 1668 1669 + mutex_lock(&kvm->slots_lock); 1670 1670 if (kvm->arch.apic_access_page_done) 1671 - return 0; 1671 + goto out; 1672 1672 1673 - ret = x86_set_memory_region(kvm, 1674 - APIC_ACCESS_PAGE_PRIVATE_MEMSLOT, 1675 - APIC_DEFAULT_PHYS_BASE, 1676 - PAGE_SIZE); 1673 + ret = __x86_set_memory_region(kvm, 1674 + APIC_ACCESS_PAGE_PRIVATE_MEMSLOT, 1675 + APIC_DEFAULT_PHYS_BASE, 1676 + PAGE_SIZE); 1677 1677 if (ret) 1678 - return ret; 1678 + goto out; 1679 1679 1680 1680 kvm->arch.apic_access_page_done = true; 1681 - return 0; 1681 + out: 1682 + mutex_unlock(&kvm->slots_lock); 1683 + return ret; 1682 1684 } 1683 1685 1684 1686 static int avic_init_backing_page(struct kvm_vcpu *vcpu) ··· 2193 2189 return ERR_PTR(err); 2194 2190 } 2195 2191 2192 + static void svm_clear_current_vmcb(struct vmcb *vmcb) 2193 + { 2194 + int i; 2195 + 2196 + for_each_online_cpu(i) 2197 + cmpxchg(&per_cpu(svm_data, i)->current_vmcb, vmcb, NULL); 2198 + } 2199 + 2196 2200 static void svm_free_vcpu(struct kvm_vcpu *vcpu) 2197 2201 { 2198 2202 struct vcpu_svm *svm = to_svm(vcpu); 2203 + 2204 + /* 2205 + * The vmcb page can be recycled, causing a false negative in 2206 + * svm_vcpu_load(). So, ensure that no logical CPU has this 2207 + * vmcb page recorded as its current vmcb. 
2208 + */ 2209 + svm_clear_current_vmcb(svm->vmcb); 2199 2210 2200 2211 __free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT)); 2201 2212 __free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER); ··· 2218 2199 __free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER); 2219 2200 kvm_vcpu_uninit(vcpu); 2220 2201 kmem_cache_free(kvm_vcpu_cache, svm); 2221 - /* 2222 - * The vmcb page can be recycled, causing a false negative in 2223 - * svm_vcpu_load(). So do a full IBPB now. 2224 - */ 2225 - indirect_branch_prediction_barrier(); 2226 2202 } 2227 2203 2228 2204 static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu) ··· 7163 7149 .has_wbinvd_exit = svm_has_wbinvd_exit, 7164 7150 7165 7151 .read_l1_tsc_offset = svm_read_l1_tsc_offset, 7166 - .write_tsc_offset = svm_write_tsc_offset, 7152 + .write_l1_tsc_offset = svm_write_l1_tsc_offset, 7167 7153 7168 7154 .set_tdp_cr3 = set_tdp_cr3, 7169 7155
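The `svm_clear_current_vmcb()` change above replaces a blanket IBPB with a targeted compare-and-swap over every CPU's cached `current_vmcb` pointer: the slot is set to NULL only if it still holds the vmcb being freed, so concurrent updates to other values are untouched. A user-space sketch of the same pattern using C11 atomics in place of the kernel's per-CPU `cmpxchg()` (array, size, and helper name are all illustrative):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

#define NR_CPUS 4

/* Stand-in for per_cpu(svm_data, i)->current_vmcb. */
static _Atomic(void *) current_vmcb[NR_CPUS];

/* Clear every per-CPU slot that still caches @vmcb. The strong CAS
 * writes NULL only when the slot really holds @vmcb; otherwise the
 * slot (and whatever another CPU stored there) is left alone. */
static void clear_current_vmcb(void *vmcb)
{
	for (int i = 0; i < NR_CPUS; i++) {
		void *expected = vmcb;
		atomic_compare_exchange_strong(&current_vmcb[i],
					       &expected, NULL);
	}
}
```

Clearing the stale cache this way means a recycled vmcb page can never be mistaken for the already-loaded one in `svm_vcpu_load()`, without flushing branch predictions on every vCPU teardown.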
+65 -33
arch/x86/kvm/vmx.c
··· 174 174 * refer SDM volume 3b section 21.6.13 & 22.1.3. 175 175 */ 176 176 static unsigned int ple_gap = KVM_DEFAULT_PLE_GAP; 177 + module_param(ple_gap, uint, 0444); 177 178 178 179 static unsigned int ple_window = KVM_VMX_DEFAULT_PLE_WINDOW; 179 180 module_param(ple_window, uint, 0444); ··· 985 984 struct shared_msr_entry *guest_msrs; 986 985 int nmsrs; 987 986 int save_nmsrs; 987 + bool guest_msrs_dirty; 988 988 unsigned long host_idt_base; 989 989 #ifdef CONFIG_X86_64 990 990 u64 msr_host_kernel_gs_base; ··· 1308 1306 static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12, 1309 1307 u16 error_code); 1310 1308 static void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu); 1311 - static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap, 1309 + static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap, 1312 1310 u32 msr, int type); 1313 1311 1314 1312 static DEFINE_PER_CPU(struct vmcs *, vmxarea); ··· 1612 1610 { 1613 1611 struct vcpu_vmx *vmx = to_vmx(vcpu); 1614 1612 1615 - /* We don't support disabling the feature for simplicity. */ 1616 - if (vmx->nested.enlightened_vmcs_enabled) 1617 - return 0; 1618 - 1619 - vmx->nested.enlightened_vmcs_enabled = true; 1620 - 1621 1613 /* 1622 1614 * vmcs_version represents the range of supported Enlightened VMCS 1623 1615 * versions: lower 8 bits is the minimal version, higher 8 bits is the ··· 1620 1624 */ 1621 1625 if (vmcs_version) 1622 1626 *vmcs_version = (KVM_EVMCS_VERSION << 8) | 1; 1627 + 1628 + /* We don't support disabling the feature for simplicity. 
*/ 1629 + if (vmx->nested.enlightened_vmcs_enabled) 1630 + return 0; 1631 + 1632 + vmx->nested.enlightened_vmcs_enabled = true; 1623 1633 1624 1634 vmx->nested.msrs.pinbased_ctls_high &= ~EVMCS1_UNSUPPORTED_PINCTRL; 1625 1635 vmx->nested.msrs.entry_ctls_high &= ~EVMCS1_UNSUPPORTED_VMENTRY_CTRL; ··· 2899 2897 2900 2898 vmx->req_immediate_exit = false; 2901 2899 2900 + /* 2901 + * Note that guest MSRs to be saved/restored can also be changed 2902 + * when guest state is loaded. This happens when guest transitions 2903 + * to/from long-mode by setting MSR_EFER.LMA. 2904 + */ 2905 + if (!vmx->loaded_cpu_state || vmx->guest_msrs_dirty) { 2906 + vmx->guest_msrs_dirty = false; 2907 + for (i = 0; i < vmx->save_nmsrs; ++i) 2908 + kvm_set_shared_msr(vmx->guest_msrs[i].index, 2909 + vmx->guest_msrs[i].data, 2910 + vmx->guest_msrs[i].mask); 2911 + 2912 + } 2913 + 2902 2914 if (vmx->loaded_cpu_state) 2903 2915 return; 2904 2916 ··· 2973 2957 vmcs_writel(HOST_GS_BASE, gs_base); 2974 2958 host_state->gs_base = gs_base; 2975 2959 } 2976 - 2977 - for (i = 0; i < vmx->save_nmsrs; ++i) 2978 - kvm_set_shared_msr(vmx->guest_msrs[i].index, 2979 - vmx->guest_msrs[i].data, 2980 - vmx->guest_msrs[i].mask); 2981 2960 } 2982 2961 2983 2962 static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx) ··· 3447 3436 move_msr_up(vmx, index, save_nmsrs++); 3448 3437 3449 3438 vmx->save_nmsrs = save_nmsrs; 3439 + vmx->guest_msrs_dirty = true; 3450 3440 3451 3441 if (cpu_has_vmx_msr_bitmap()) 3452 3442 vmx_update_msr_bitmap(&vmx->vcpu); ··· 3464 3452 return vcpu->arch.tsc_offset; 3465 3453 } 3466 3454 3467 - /* 3468 - * writes 'offset' into guest's timestamp counter offset register 3469 - */ 3470 - static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) 3455 + static u64 vmx_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) 3471 3456 { 3457 + u64 active_offset = offset; 3472 3458 if (is_guest_mode(vcpu)) { 3473 3459 /* 3474 3460 * We're here if L1 chose not to trap WRMSR to TSC. 
According ··· 3474 3464 * set for L2 remains unchanged, and still needs to be added 3475 3465 * to the newly set TSC to get L2's TSC. 3476 3466 */ 3477 - struct vmcs12 *vmcs12; 3478 - /* recalculate vmcs02.TSC_OFFSET: */ 3479 - vmcs12 = get_vmcs12(vcpu); 3480 - vmcs_write64(TSC_OFFSET, offset + 3481 - (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING) ? 3482 - vmcs12->tsc_offset : 0)); 3467 + struct vmcs12 *vmcs12 = get_vmcs12(vcpu); 3468 + if (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING)) 3469 + active_offset += vmcs12->tsc_offset; 3483 3470 } else { 3484 3471 trace_kvm_write_tsc_offset(vcpu->vcpu_id, 3485 3472 vmcs_read64(TSC_OFFSET), offset); 3486 - vmcs_write64(TSC_OFFSET, offset); 3487 3473 } 3474 + 3475 + vmcs_write64(TSC_OFFSET, active_offset); 3476 + return active_offset; 3488 3477 } 3489 3478 3490 3479 /* ··· 5953 5944 spin_unlock(&vmx_vpid_lock); 5954 5945 } 5955 5946 5956 - static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap, 5947 + static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap, 5957 5948 u32 msr, int type) 5958 5949 { 5959 5950 int f = sizeof(unsigned long); ··· 5991 5982 } 5992 5983 } 5993 5984 5994 - static void __always_inline vmx_enable_intercept_for_msr(unsigned long *msr_bitmap, 5985 + static __always_inline void vmx_enable_intercept_for_msr(unsigned long *msr_bitmap, 5995 5986 u32 msr, int type) 5996 5987 { 5997 5988 int f = sizeof(unsigned long); ··· 6029 6020 } 6030 6021 } 6031 6022 6032 - static void __always_inline vmx_set_intercept_for_msr(unsigned long *msr_bitmap, 6023 + static __always_inline void vmx_set_intercept_for_msr(unsigned long *msr_bitmap, 6033 6024 u32 msr, int type, bool value) 6034 6025 { 6035 6026 if (value) ··· 8673 8664 struct vmcs12 *vmcs12 = vmx->nested.cached_vmcs12; 8674 8665 struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs; 8675 8666 8676 - vmcs12->hdr.revision_id = evmcs->revision_id; 8677 - 8678 8667 /* 
HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE */ 8679 8668 vmcs12->tpr_threshold = evmcs->tpr_threshold; 8680 8669 vmcs12->guest_rip = evmcs->guest_rip; ··· 9376 9369 9377 9370 vmx->nested.hv_evmcs = kmap(vmx->nested.hv_evmcs_page); 9378 9371 9379 - if (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION) { 9372 + /* 9373 + * Currently, KVM only supports eVMCS version 1 9374 + * (== KVM_EVMCS_VERSION) and thus we expect guest to set this 9375 + * value to first u32 field of eVMCS which should specify eVMCS 9376 + * VersionNumber. 9377 + * 9378 + * Guest should be aware of supported eVMCS versions by host by 9379 + * examining CPUID.0x4000000A.EAX[0:15]. Host userspace VMM is 9380 + * expected to set this CPUID leaf according to the value 9381 + * returned in vmcs_version from nested_enable_evmcs(). 9382 + * 9383 + * However, it turns out that Microsoft Hyper-V fails to comply 9384 + * to their own invented interface: When Hyper-V use eVMCS, it 9385 + * just sets first u32 field of eVMCS to revision_id specified 9386 + * in MSR_IA32_VMX_BASIC. Instead of used eVMCS version number 9387 + * which is one of the supported versions specified in 9388 + * CPUID.0x4000000A.EAX[0:15]. 9389 + * 9390 + * To overcome Hyper-V bug, we accept here either a supported 9391 + * eVMCS version or VMCS12 revision_id as valid values for first 9392 + * u32 field of eVMCS. 9393 + */ 9394 + if ((vmx->nested.hv_evmcs->revision_id != KVM_EVMCS_VERSION) && 9395 + (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION)) { 9380 9396 nested_release_evmcs(vcpu); 9381 9397 return 0; 9382 9398 } ··· 9420 9390 * present in struct hv_enlightened_vmcs, ...). Make sure there 9421 9391 * are no leftovers. 
9422 9392 */ 9423 - if (from_launch) 9424 - memset(vmx->nested.cached_vmcs12, 0, 9425 - sizeof(*vmx->nested.cached_vmcs12)); 9393 + if (from_launch) { 9394 + struct vmcs12 *vmcs12 = get_vmcs12(vcpu); 9395 + memset(vmcs12, 0, sizeof(*vmcs12)); 9396 + vmcs12->hdr.revision_id = VMCS12_REVISION; 9397 + } 9426 9398 9427 9399 } 9428 9400 return 1; ··· 15094 15062 .has_wbinvd_exit = cpu_has_vmx_wbinvd_exit, 15095 15063 15096 15064 .read_l1_tsc_offset = vmx_read_l1_tsc_offset, 15097 - .write_tsc_offset = vmx_write_tsc_offset, 15065 + .write_l1_tsc_offset = vmx_write_l1_tsc_offset, 15098 15066 15099 15067 .set_tdp_cr3 = vmx_set_cr3, 15100 15068
+6 -4
arch/x86/kvm/x86.c
··· 1665 1665 1666 1666 static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) 1667 1667 { 1668 - kvm_x86_ops->write_tsc_offset(vcpu, offset); 1669 - vcpu->arch.tsc_offset = offset; 1668 + vcpu->arch.tsc_offset = kvm_x86_ops->write_l1_tsc_offset(vcpu, offset); 1670 1669 } 1671 1670 1672 1671 static inline bool kvm_check_tsc_unstable(void) ··· 1793 1794 static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu, 1794 1795 s64 adjustment) 1795 1796 { 1796 - kvm_vcpu_write_tsc_offset(vcpu, vcpu->arch.tsc_offset + adjustment); 1797 + u64 tsc_offset = kvm_x86_ops->read_l1_tsc_offset(vcpu); 1798 + kvm_vcpu_write_tsc_offset(vcpu, tsc_offset + adjustment); 1797 1799 } 1798 1800 1799 1801 static inline void adjust_tsc_offset_host(struct kvm_vcpu *vcpu, s64 adjustment) ··· 6918 6918 clock_pairing.nsec = ts.tv_nsec; 6919 6919 clock_pairing.tsc = kvm_read_l1_tsc(vcpu, cycle); 6920 6920 clock_pairing.flags = 0; 6921 + memset(&clock_pairing.pad, 0, sizeof(clock_pairing.pad)); 6921 6922 6922 6923 ret = 0; 6923 6924 if (kvm_write_guest(vcpu->kvm, paddr, &clock_pairing, ··· 7456 7455 else { 7457 7456 if (vcpu->arch.apicv_active) 7458 7457 kvm_x86_ops->sync_pir_to_irr(vcpu); 7459 - kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors); 7458 + if (ioapic_in_kernel(vcpu->kvm)) 7459 + kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors); 7460 7460 } 7461 7461 7462 7462 if (is_guest_mode(vcpu))
+86 -29
arch/x86/mm/tlb.c
··· 7 7 #include <linux/export.h> 8 8 #include <linux/cpu.h> 9 9 #include <linux/debugfs.h> 10 - #include <linux/ptrace.h> 11 10 12 11 #include <asm/tlbflush.h> 13 12 #include <asm/mmu_context.h> ··· 28 29 * 29 30 * Implement flush IPI by CALL_FUNCTION_VECTOR, Alex Shi 30 31 */ 32 + 33 + /* 34 + * Use bit 0 to mangle the TIF_SPEC_IB state into the mm pointer which is 35 + * stored in cpu_tlb_state.last_user_mm_ibpb. 36 + */ 37 + #define LAST_USER_MM_IBPB 0x1UL 31 38 32 39 /* 33 40 * We get here when we do something requiring a TLB invalidation ··· 186 181 } 187 182 } 188 183 189 - static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id) 184 + static inline unsigned long mm_mangle_tif_spec_ib(struct task_struct *next) 190 185 { 186 + unsigned long next_tif = task_thread_info(next)->flags; 187 + unsigned long ibpb = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_IBPB; 188 + 189 + return (unsigned long)next->mm | ibpb; 190 + } 191 + 192 + static void cond_ibpb(struct task_struct *next) 193 + { 194 + if (!next || !next->mm) 195 + return; 196 + 191 197 /* 192 - * Check if the current (previous) task has access to the memory 193 - * of the @tsk (next) task. If access is denied, make sure to 194 - * issue a IBPB to stop user->user Spectre-v2 attacks. 195 - * 196 - * Note: __ptrace_may_access() returns 0 or -ERRNO. 198 + * Both, the conditional and the always IBPB mode use the mm 199 + * pointer to avoid the IBPB when switching between tasks of the 200 + * same process. Using the mm pointer instead of mm->context.ctx_id 201 + * opens a hypothetical hole vs. mm_struct reuse, which is more or 202 + * less impossible to control by an attacker. Aside of that it 203 + * would only affect the first schedule so the theoretically 204 + * exposed data is not really interesting. 
197 205 */ 198 - return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id && 199 - ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB)); 206 + if (static_branch_likely(&switch_mm_cond_ibpb)) { 207 + unsigned long prev_mm, next_mm; 208 + 209 + /* 210 + * This is a bit more complex than the always mode because 211 + * it has to handle two cases: 212 + * 213 + * 1) Switch from a user space task (potential attacker) 214 + * which has TIF_SPEC_IB set to a user space task 215 + * (potential victim) which has TIF_SPEC_IB not set. 216 + * 217 + * 2) Switch from a user space task (potential attacker) 218 + * which has TIF_SPEC_IB not set to a user space task 219 + * (potential victim) which has TIF_SPEC_IB set. 220 + * 221 + * This could be done by unconditionally issuing IBPB when 222 + * a task which has TIF_SPEC_IB set is either scheduled in 223 + * or out. Though that results in two flushes when: 224 + * 225 + * - the same user space task is scheduled out and later 226 + * scheduled in again and only a kernel thread ran in 227 + * between. 228 + * 229 + * - a user space task belonging to the same process is 230 + * scheduled in after a kernel thread ran in between 231 + * 232 + * - a user space task belonging to the same process is 233 + * scheduled in immediately. 234 + * 235 + * Optimize this with reasonably small overhead for the 236 + * above cases. Mangle the TIF_SPEC_IB bit into the mm 237 + * pointer of the incoming task which is stored in 238 + * cpu_tlbstate.last_user_mm_ibpb for comparison. 239 + */ 240 + next_mm = mm_mangle_tif_spec_ib(next); 241 + prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_ibpb); 242 + 243 + /* 244 + * Issue IBPB only if the mm's are different and one or 245 + * both have the IBPB bit set. 
246 + */ 247 + if (next_mm != prev_mm && 248 + (next_mm | prev_mm) & LAST_USER_MM_IBPB) 249 + indirect_branch_prediction_barrier(); 250 + 251 + this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, next_mm); 252 + } 253 + 254 + if (static_branch_unlikely(&switch_mm_always_ibpb)) { 255 + /* 256 + * Only flush when switching to a user space task with a 257 + * different context than the user space task which ran 258 + * last on this CPU. 259 + */ 260 + if (this_cpu_read(cpu_tlbstate.last_user_mm) != next->mm) { 261 + indirect_branch_prediction_barrier(); 262 + this_cpu_write(cpu_tlbstate.last_user_mm, next->mm); 263 + } 264 + } 200 265 } 201 266 202 267 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next, ··· 367 292 new_asid = prev_asid; 368 293 need_flush = true; 369 294 } else { 370 - u64 last_ctx_id = this_cpu_read(cpu_tlbstate.last_ctx_id); 371 - 372 295 /* 373 296 * Avoid user/user BTB poisoning by flushing the branch 374 297 * predictor when switching between processes. This stops 375 298 * one process from doing Spectre-v2 attacks on another. 376 - * 377 - * As an optimization, flush indirect branches only when 378 - * switching into a processes that can't be ptrace by the 379 - * current one (as in such case, attacker has much more 380 - * convenient way how to tamper with the next process than 381 - * branch buffer poisoning). 382 299 */ 383 - if (static_cpu_has(X86_FEATURE_USE_IBPB) && 384 - ibpb_needed(tsk, last_ctx_id)) 385 - indirect_branch_prediction_barrier(); 300 + cond_ibpb(tsk); 386 301 387 302 if (IS_ENABLED(CONFIG_VMAP_STACK)) { 388 303 /* ··· 429 364 /* See above wrt _rcuidle. */ 430 365 trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 0); 431 366 } 432 - 433 - /* 434 - * Record last user mm's context id, so we can avoid 435 - * flushing branch buffer with IBPB if we switch back 436 - * to the same user. 
437 - */ 438 - if (next != &init_mm) 439 - this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id); 440 367 441 368 /* Make sure we write CR3 before loaded_mm. */ 442 369 barrier(); ··· 498 441 write_cr3(build_cr3(mm->pgd, 0)); 499 442 500 443 /* Reinitialize tlbstate. */ 501 - this_cpu_write(cpu_tlbstate.last_ctx_id, mm->context.ctx_id); 444 + this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, LAST_USER_MM_IBPB); 502 445 this_cpu_write(cpu_tlbstate.loaded_mm_asid, 0); 503 446 this_cpu_write(cpu_tlbstate.next_asid, 1); 504 447 this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, mm->context.ctx_id);
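The `cond_ibpb()` logic above relies on mm pointers being at least word-aligned, so bit 0 is guaranteed clear and can carry the incoming task's `TIF_SPEC_IB` state. A minimal sketch of the mangle-and-compare decision with plain integers standing in for real `task_struct`/`mm_struct` pointers (`mangle_mm()` and `ibpb_needed()` are hypothetical names):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define LAST_USER_MM_IBPB 0x1UL

/* Fold the incoming task's TIF_SPEC_IB state into bit 0 of its mm
 * pointer, mirroring mm_mangle_tif_spec_ib(). */
static uintptr_t mangle_mm(uintptr_t mm, bool spec_ib)
{
	return mm | (spec_ib ? LAST_USER_MM_IBPB : 0);
}

/* IBPB is issued only when the mangled values differ and at least one
 * of them carries the IBPB bit, covering both attacker->victim
 * directions while skipping same-process and IB-clear switches. */
static bool ibpb_needed(uintptr_t prev_mm, uintptr_t next_mm)
{
	return prev_mm != next_mm &&
	       ((prev_mm | next_mm) & LAST_USER_MM_IBPB);
}
```

Because the bit rides along in the comparison, a switch between two tasks of the same process with identical `TIF_SPEC_IB` state compares equal and costs nothing, which is the optimization the long comment in the hunk describes.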
-78
arch/x86/xen/enlighten.c
··· 10 10 #include <xen/xen.h> 11 11 #include <xen/features.h> 12 12 #include <xen/page.h> 13 - #include <xen/interface/memory.h> 14 13 15 14 #include <asm/xen/hypercall.h> 16 15 #include <asm/xen/hypervisor.h> ··· 345 346 } 346 347 EXPORT_SYMBOL(xen_arch_unregister_cpu); 347 348 #endif 348 - 349 - #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG 350 - void __init arch_xen_balloon_init(struct resource *hostmem_resource) 351 - { 352 - struct xen_memory_map memmap; 353 - int rc; 354 - unsigned int i, last_guest_ram; 355 - phys_addr_t max_addr = PFN_PHYS(max_pfn); 356 - struct e820_table *xen_e820_table; 357 - const struct e820_entry *entry; 358 - struct resource *res; 359 - 360 - if (!xen_initial_domain()) 361 - return; 362 - 363 - xen_e820_table = kmalloc(sizeof(*xen_e820_table), GFP_KERNEL); 364 - if (!xen_e820_table) 365 - return; 366 - 367 - memmap.nr_entries = ARRAY_SIZE(xen_e820_table->entries); 368 - set_xen_guest_handle(memmap.buffer, xen_e820_table->entries); 369 - rc = HYPERVISOR_memory_op(XENMEM_machine_memory_map, &memmap); 370 - if (rc) { 371 - pr_warn("%s: Can't read host e820 (%d)\n", __func__, rc); 372 - goto out; 373 - } 374 - 375 - last_guest_ram = 0; 376 - for (i = 0; i < memmap.nr_entries; i++) { 377 - if (xen_e820_table->entries[i].addr >= max_addr) 378 - break; 379 - if (xen_e820_table->entries[i].type == E820_TYPE_RAM) 380 - last_guest_ram = i; 381 - } 382 - 383 - entry = &xen_e820_table->entries[last_guest_ram]; 384 - if (max_addr >= entry->addr + entry->size) 385 - goto out; /* No unallocated host RAM. */ 386 - 387 - hostmem_resource->start = max_addr; 388 - hostmem_resource->end = entry->addr + entry->size; 389 - 390 - /* 391 - * Mark non-RAM regions between the end of dom0 RAM and end of host RAM 392 - * as unavailable. The rest of that region can be used for hotplug-based 393 - * ballooning. 
394 - */ 395 - for (; i < memmap.nr_entries; i++) { 396 - entry = &xen_e820_table->entries[i]; 397 - 398 - if (entry->type == E820_TYPE_RAM) 399 - continue; 400 - 401 - if (entry->addr >= hostmem_resource->end) 402 - break; 403 - 404 - res = kzalloc(sizeof(*res), GFP_KERNEL); 405 - if (!res) 406 - goto out; 407 - 408 - res->name = "Unavailable host RAM"; 409 - res->start = entry->addr; 410 - res->end = (entry->addr + entry->size < hostmem_resource->end) ? 411 - entry->addr + entry->size : hostmem_resource->end; 412 - rc = insert_resource(hostmem_resource, res); 413 - if (rc) { 414 - pr_warn("%s: Can't insert [%llx - %llx) (%d)\n", 415 - __func__, res->start, res->end, rc); 416 - kfree(res); 417 - goto out; 418 - } 419 - } 420 - 421 - out: 422 - kfree(xen_e820_table); 423 - } 424 - #endif /* CONFIG_XEN_BALLOON_MEMORY_HOTPLUG */
+3 -3
arch/x86/xen/mmu_pv.c
··· 1905 1905 init_top_pgt[0] = __pgd(0); 1906 1906 1907 1907 /* Pre-constructed entries are in pfn, so convert to mfn */ 1908 - /* L4[272] -> level3_ident_pgt */ 1908 + /* L4[273] -> level3_ident_pgt */ 1909 1909 /* L4[511] -> level3_kernel_pgt */ 1910 1910 convert_pfn_mfn(init_top_pgt); 1911 1911 ··· 1925 1925 addr[0] = (unsigned long)pgd; 1926 1926 addr[1] = (unsigned long)l3; 1927 1927 addr[2] = (unsigned long)l2; 1928 - /* Graft it onto L4[272][0]. Note that we creating an aliasing problem: 1929 - * Both L4[272][0] and L4[511][510] have entries that point to the same 1928 + /* Graft it onto L4[273][0]. Note that we are creating an aliasing problem: 1929 + * Both L4[273][0] and L4[511][510] have entries that point to the same 1930 1930 * L2 (PMD) tables. Meaning that if you modify it in __va space 1931 1931 * it will be also modified in the __ka space! (But if you just 1932 1932 * modify the PMD table to point to other PTE's or none, then you
+20 -15
arch/x86/xen/multicalls.c
··· 69 69 70 70 trace_xen_mc_flush(b->mcidx, b->argidx, b->cbidx); 71 71 72 + #if MC_DEBUG 73 + memcpy(b->debug, b->entries, 74 + b->mcidx * sizeof(struct multicall_entry)); 75 + #endif 76 + 72 77 switch (b->mcidx) { 73 78 case 0: 74 79 /* no-op */ ··· 92 87 break; 93 88 94 89 default: 95 - #if MC_DEBUG 96 - memcpy(b->debug, b->entries, 97 - b->mcidx * sizeof(struct multicall_entry)); 98 - #endif 99 - 100 90 if (HYPERVISOR_multicall(b->entries, b->mcidx) != 0) 101 91 BUG(); 102 92 for (i = 0; i < b->mcidx; i++) 103 93 if (b->entries[i].result < 0) 104 94 ret++; 95 + } 105 96 97 + if (WARN_ON(ret)) { 98 + pr_err("%d of %d multicall(s) failed: cpu %d\n", 99 + ret, b->mcidx, smp_processor_id()); 100 + for (i = 0; i < b->mcidx; i++) { 101 + if (b->entries[i].result < 0) { 106 102 #if MC_DEBUG 107 - if (ret) { 108 - printk(KERN_ERR "%d multicall(s) failed: cpu %d\n", 109 - ret, smp_processor_id()); 110 - dump_stack(); 111 - for (i = 0; i < b->mcidx; i++) { 112 - printk(KERN_DEBUG " call %2d/%d: op=%lu arg=[%lx] result=%ld\t%pF\n", 113 - i+1, b->mcidx, 103 + pr_err(" call %2d: op=%lu arg=[%lx] result=%ld\t%pF\n", 104 + i + 1, 114 105 b->debug[i].op, 115 106 b->debug[i].args[0], 116 107 b->entries[i].result, 117 108 b->caller[i]); 109 + #else 110 + pr_err(" call %2d: op=%lu arg=[%lx] result=%ld\n", 111 + i + 1, 112 + b->entries[i].op, 113 + b->entries[i].args[0], 114 + b->entries[i].result); 115 + #endif 118 116 } 119 117 } 120 - #endif 121 118 } 122 119 123 120 b->mcidx = 0; ··· 133 126 b->cbidx = 0; 134 127 135 128 local_irq_restore(flags); 136 - 137 - WARN_ON(ret); 138 129 } 139 130 140 131 struct multicall_space __xen_mc_entry(size_t args)
+1 -2
arch/x86/xen/p2m.c
··· 656 656 657 657 /* 658 658 * The interface requires atomic updates on p2m elements. 659 - * xen_safe_write_ulong() is using __put_user which does an atomic 660 - * store via asm(). 659 + * xen_safe_write_ulong() is using an atomic store via asm(). 661 660 */ 662 661 if (likely(!xen_safe_write_ulong(xen_p2m_addr + pfn, mfn))) 663 662 return true;
+4 -2
arch/x86/xen/setup.c
··· 808 808 addr = xen_e820_table.entries[0].addr; 809 809 size = xen_e820_table.entries[0].size; 810 810 while (i < xen_e820_table.nr_entries) { 811 + bool discard = false; 811 812 812 813 chunk_size = size; 813 814 type = xen_e820_table.entries[i].type; ··· 824 823 xen_add_extra_mem(pfn_s, n_pfns); 825 824 xen_max_p2m_pfn = pfn_s + n_pfns; 826 825 } else 827 - type = E820_TYPE_UNUSABLE; 826 + discard = true; 828 827 } 829 828 830 - xen_align_and_add_e820_region(addr, chunk_size, type); 829 + if (!discard) 830 + xen_align_and_add_e820_region(addr, chunk_size, type); 831 831 832 832 addr += chunk_size; 833 833 size -= chunk_size;
+9 -12
arch/x86/xen/spinlock.c
··· 3 3 * Split spinlock implementation out into its own file, so it can be 4 4 * compiled in a FTRACE-compatible way. 5 5 */ 6 - #include <linux/kernel_stat.h> 6 + #include <linux/kernel.h> 7 7 #include <linux/spinlock.h> 8 - #include <linux/debugfs.h> 9 - #include <linux/log2.h> 10 - #include <linux/gfp.h> 11 8 #include <linux/slab.h> 9 + #include <linux/atomic.h> 12 10 13 11 #include <asm/paravirt.h> 14 12 #include <asm/qspinlock.h> 15 13 16 - #include <xen/interface/xen.h> 17 14 #include <xen/events.h> 18 15 19 16 #include "xen-ops.h" 20 - #include "debugfs.h" 21 17 22 18 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1; 23 19 static DEFINE_PER_CPU(char *, irq_name); 20 + static DEFINE_PER_CPU(atomic_t, xen_qlock_wait_nest); 24 21 static bool xen_pvspin = true; 25 22 26 23 static void xen_qlock_kick(int cpu) ··· 36 39 */ 37 40 static void xen_qlock_wait(u8 *byte, u8 val) 38 41 { 39 - unsigned long flags; 40 42 int irq = __this_cpu_read(lock_kicker_irq); 43 + atomic_t *nest_cnt = this_cpu_ptr(&xen_qlock_wait_nest); 41 44 42 45 /* If kicker interrupts not initialized yet, just spin */ 43 46 if (irq == -1 || in_nmi()) 44 47 return; 45 48 46 - /* Guard against reentry. */ 47 - local_irq_save(flags); 49 + /* Detect reentry. */ 50 + atomic_inc(nest_cnt); 48 51 49 - /* If irq pending already clear it. */ 50 - if (xen_test_irq_pending(irq)) { 52 + /* If irq pending already and no nested call clear it. */ 53 + if (atomic_read(nest_cnt) == 1 && xen_test_irq_pending(irq)) { 51 54 xen_clear_irq_pending(irq); 52 55 } else if (READ_ONCE(*byte) == val) { 53 56 /* Block until irq becomes pending (or a spurious wakeup) */ 54 57 xen_poll_irq(irq); 55 58 } 56 59 57 - local_irq_restore(flags); 60 + atomic_dec(nest_cnt); 58 61 } 59 62 60 63 static irqreturn_t dummy_handler(int irq, void *dev_id)
+5 -1
arch/xtensa/include/asm/processor.h
··· 23 23 # error Linux requires the Xtensa Windowed Registers Option. 24 24 #endif 25 25 26 - #define ARCH_SLAB_MINALIGN XCHAL_DATA_WIDTH 26 + /* Xtensa ABI requires stack alignment to be at least 16 */ 27 + 28 + #define STACK_ALIGN (XCHAL_DATA_WIDTH > 16 ? XCHAL_DATA_WIDTH : 16) 29 + 30 + #define ARCH_SLAB_MINALIGN STACK_ALIGN 27 31 28 32 /* 29 33 * User space process size: 1 GB.
+8 -8
arch/xtensa/kernel/asm-offsets.c
··· 94 94 DEFINE(THREAD_SP, offsetof (struct task_struct, thread.sp)); 95 95 DEFINE(THREAD_CPENABLE, offsetof (struct thread_info, cpenable)); 96 96 #if XTENSA_HAVE_COPROCESSORS 97 - DEFINE(THREAD_XTREGS_CP0, offsetof (struct thread_info, xtregs_cp)); 98 - DEFINE(THREAD_XTREGS_CP1, offsetof (struct thread_info, xtregs_cp)); 99 - DEFINE(THREAD_XTREGS_CP2, offsetof (struct thread_info, xtregs_cp)); 100 - DEFINE(THREAD_XTREGS_CP3, offsetof (struct thread_info, xtregs_cp)); 101 - DEFINE(THREAD_XTREGS_CP4, offsetof (struct thread_info, xtregs_cp)); 102 - DEFINE(THREAD_XTREGS_CP5, offsetof (struct thread_info, xtregs_cp)); 103 - DEFINE(THREAD_XTREGS_CP6, offsetof (struct thread_info, xtregs_cp)); 104 - DEFINE(THREAD_XTREGS_CP7, offsetof (struct thread_info, xtregs_cp)); 97 + DEFINE(THREAD_XTREGS_CP0, offsetof(struct thread_info, xtregs_cp.cp0)); 98 + DEFINE(THREAD_XTREGS_CP1, offsetof(struct thread_info, xtregs_cp.cp1)); 99 + DEFINE(THREAD_XTREGS_CP2, offsetof(struct thread_info, xtregs_cp.cp2)); 100 + DEFINE(THREAD_XTREGS_CP3, offsetof(struct thread_info, xtregs_cp.cp3)); 101 + DEFINE(THREAD_XTREGS_CP4, offsetof(struct thread_info, xtregs_cp.cp4)); 102 + DEFINE(THREAD_XTREGS_CP5, offsetof(struct thread_info, xtregs_cp.cp5)); 103 + DEFINE(THREAD_XTREGS_CP6, offsetof(struct thread_info, xtregs_cp.cp6)); 104 + DEFINE(THREAD_XTREGS_CP7, offsetof(struct thread_info, xtregs_cp.cp7)); 105 105 #endif 106 106 DEFINE(THREAD_XTREGS_USER, offsetof (struct thread_info, xtregs_user)); 107 107 DEFINE(XTREGS_USER_SIZE, sizeof(xtregs_user_t));
+5 -2
arch/xtensa/kernel/head.S
··· 88 88 initialize_mmu 89 89 #if defined(CONFIG_MMU) && XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY 90 90 rsr a2, excsave1 91 - movi a3, 0x08000000 91 + movi a3, XCHAL_KSEG_PADDR 92 + bltu a2, a3, 1f 93 + sub a2, a2, a3 94 + movi a3, XCHAL_KSEG_SIZE 92 95 bgeu a2, a3, 1f 93 - movi a3, 0xd0000000 96 + movi a3, XCHAL_KSEG_CACHED_VADDR 94 97 add a2, a2, a3 95 98 wsr a2, excsave1 96 99 1:
+4 -1
arch/xtensa/kernel/process.c
··· 94 94 95 95 void coprocessor_flush_all(struct thread_info *ti) 96 96 { 97 - unsigned long cpenable; 97 + unsigned long cpenable, old_cpenable; 98 98 int i; 99 99 100 100 preempt_disable(); 101 101 102 + RSR_CPENABLE(old_cpenable); 102 103 cpenable = ti->cpenable; 104 + WSR_CPENABLE(cpenable); 103 105 104 106 for (i = 0; i < XCHAL_CP_MAX; i++) { 105 107 if ((cpenable & 1) != 0 && coprocessor_owner[i] == ti) 106 108 coprocessor_flush(ti, i); 107 109 cpenable >>= 1; 108 110 } 111 + WSR_CPENABLE(old_cpenable); 109 112 110 113 preempt_enable(); 111 114 }
+38 -4
arch/xtensa/kernel/ptrace.c
··· 127 127 } 128 128 129 129 130 + #if XTENSA_HAVE_COPROCESSORS 131 + #define CP_OFFSETS(cp) \ 132 + { \ 133 + .elf_xtregs_offset = offsetof(elf_xtregs_t, cp), \ 134 + .ti_offset = offsetof(struct thread_info, xtregs_cp.cp), \ 135 + .sz = sizeof(xtregs_ ## cp ## _t), \ 136 + } 137 + 138 + static const struct { 139 + size_t elf_xtregs_offset; 140 + size_t ti_offset; 141 + size_t sz; 142 + } cp_offsets[] = { 143 + CP_OFFSETS(cp0), 144 + CP_OFFSETS(cp1), 145 + CP_OFFSETS(cp2), 146 + CP_OFFSETS(cp3), 147 + CP_OFFSETS(cp4), 148 + CP_OFFSETS(cp5), 149 + CP_OFFSETS(cp6), 150 + CP_OFFSETS(cp7), 151 + }; 152 + #endif 153 + 130 154 static int ptrace_getxregs(struct task_struct *child, void __user *uregs) 131 155 { 132 156 struct pt_regs *regs = task_pt_regs(child); 133 157 struct thread_info *ti = task_thread_info(child); 134 158 elf_xtregs_t __user *xtregs = uregs; 135 159 int ret = 0; 160 + int i __maybe_unused; 136 161 137 162 if (!access_ok(VERIFY_WRITE, uregs, sizeof(elf_xtregs_t))) 138 163 return -EIO; ··· 165 140 #if XTENSA_HAVE_COPROCESSORS 166 141 /* Flush all coprocessor registers to memory.
*/ 167 142 coprocessor_flush_all(ti); 168 - ret |= __copy_to_user(&xtregs->cp0, &ti->xtregs_cp, 169 - sizeof(xtregs_coprocessor_t)); 143 + 144 + for (i = 0; i < ARRAY_SIZE(cp_offsets); ++i) 145 + ret |= __copy_to_user((char __user *)xtregs + 146 + cp_offsets[i].elf_xtregs_offset, 147 + (const char *)ti + 148 + cp_offsets[i].ti_offset, 149 + cp_offsets[i].sz); 170 150 #endif 171 151 ret |= __copy_to_user(&xtregs->opt, &regs->xtregs_opt, 172 152 sizeof(xtregs->opt)); ··· 187 157 struct pt_regs *regs = task_pt_regs(child); 188 158 elf_xtregs_t *xtregs = uregs; 189 159 int ret = 0; 160 + int i __maybe_unused; 190 161 191 162 if (!access_ok(VERIFY_READ, uregs, sizeof(elf_xtregs_t))) 192 163 return -EFAULT; ··· 197 166 coprocessor_flush_all(ti); 198 167 coprocessor_release_all(ti); 199 168 200 - ret |= __copy_from_user(&ti->xtregs_cp, &xtregs->cp0, 201 - sizeof(xtregs_coprocessor_t)); 169 + for (i = 0; i < ARRAY_SIZE(cp_offsets); ++i) 170 + ret |= __copy_from_user((char *)ti + cp_offsets[i].ti_offset, 171 + (const char __user *)xtregs + 172 + cp_offsets[i].elf_xtregs_offset, 173 + cp_offsets[i].sz); 202 174 #endif 203 175 ret |= __copy_from_user(&regs->xtregs_opt, &xtregs->opt, 204 176 sizeof(xtregs->opt));
+2
block/bio.c
··· 605 605 if (bio_flagged(bio_src, BIO_THROTTLED)) 606 606 bio_set_flag(bio, BIO_THROTTLED); 607 607 bio->bi_opf = bio_src->bi_opf; 608 + bio->bi_ioprio = bio_src->bi_ioprio; 608 609 bio->bi_write_hint = bio_src->bi_write_hint; 609 610 bio->bi_iter = bio_src->bi_iter; 610 611 bio->bi_io_vec = bio_src->bi_io_vec; ··· 1261 1260 if (ret) 1262 1261 goto cleanup; 1263 1262 } else { 1263 + zero_fill_bio(bio); 1264 1264 iov_iter_advance(iter, bio->bi_iter.bi_size); 1265 1265 } 1266 1266
+2 -3
block/blk-core.c
··· 798 798 * dispatch may still be in-progress since we dispatch requests 799 799 * from more than one contexts. 800 800 * 801 - * No need to quiesce queue if it isn't initialized yet since 802 - * blk_freeze_queue() should be enough for cases of passthrough 803 - * request. 801 + * We rely on driver to deal with the race in case that queue 802 + * initialization isn't done. 804 803 */ 805 804 if (q->mq_ops && blk_queue_init_done(q)) 806 805 blk_mq_quiesce_queue(q);
+8 -18
block/blk-lib.c
··· 51 51 if ((sector | nr_sects) & bs_mask) 52 52 return -EINVAL; 53 53 54 + if (!nr_sects) 55 + return -EINVAL; 56 + 54 57 while (nr_sects) { 55 - unsigned int req_sects = nr_sects; 56 - sector_t end_sect; 58 + sector_t req_sects = min_t(sector_t, nr_sects, 59 + bio_allowed_max_sectors(q)); 57 60 58 - if (!req_sects) 59 - goto fail; 60 - if (req_sects > UINT_MAX >> 9) 61 - req_sects = UINT_MAX >> 9; 62 - 63 - end_sect = sector + req_sects; 61 + WARN_ON_ONCE((req_sects << 9) > UINT_MAX); 64 62 65 63 bio = blk_next_bio(bio, 0, gfp_mask); 66 64 bio->bi_iter.bi_sector = sector; ··· 66 68 bio_set_op_attrs(bio, op, 0); 67 69 68 70 bio->bi_iter.bi_size = req_sects << 9; 71 + sector += req_sects; 69 72 nr_sects -= req_sects; 70 - sector = end_sect; 71 73 72 74 /* 73 75 * We can loop for a long time in here, if someone does ··· 80 82 81 83 *biop = bio; 82 84 return 0; 83 - 84 - fail: 85 - if (bio) { 86 - submit_bio_wait(bio); 87 - bio_put(bio); 88 - } 89 - *biop = NULL; 90 - return -EOPNOTSUPP; 91 85 } 92 86 EXPORT_SYMBOL(__blkdev_issue_discard); 93 87 ··· 151 161 return -EOPNOTSUPP; 152 162 153 163 /* Ensure that max_write_same_sectors doesn't overflow bi_size */ 154 - max_write_same_sectors = UINT_MAX >> 9; 164 + max_write_same_sectors = bio_allowed_max_sectors(q); 155 165 156 166 while (nr_sects) { 157 167 bio = blk_next_bio(bio, 1, gfp_mask);
+4 -3
block/blk-merge.c
··· 46 46 bio_get_first_bvec(prev_rq->bio, &pb); 47 47 else 48 48 bio_get_first_bvec(prev, &pb); 49 - if (pb.bv_offset) 49 + if (pb.bv_offset & queue_virt_boundary(q)) 50 50 return true; 51 51 52 52 /* ··· 90 90 /* Zero-sector (unknown) and one-sector granularities are the same. */ 91 91 granularity = max(q->limits.discard_granularity >> 9, 1U); 92 92 93 - max_discard_sectors = min(q->limits.max_discard_sectors, UINT_MAX >> 9); 93 + max_discard_sectors = min(q->limits.max_discard_sectors, 94 + bio_allowed_max_sectors(q)); 94 95 max_discard_sectors -= max_discard_sectors % granularity; 95 96 96 97 if (unlikely(!max_discard_sectors)) { ··· 820 819 821 820 req->__data_len += blk_rq_bytes(next); 822 821 823 - if (req_op(req) != REQ_OP_DISCARD) 822 + if (!blk_discard_mergable(req)) 824 823 elv_merge_requests(q, req, next); 825 824 826 825 /*
+11 -1
block/blk.h
··· 169 169 static inline bool __bvec_gap_to_prev(struct request_queue *q, 170 170 struct bio_vec *bprv, unsigned int offset) 171 171 { 172 - return offset || 172 + return (offset & queue_virt_boundary(q)) || 173 173 ((bprv->bv_offset + bprv->bv_len) & queue_virt_boundary(q)); 174 174 } 175 175 ··· 393 393 static inline unsigned long blk_rq_deadline(struct request *rq) 394 394 { 395 395 return rq->__deadline & ~0x1UL; 396 + } 397 + 398 + /* 399 + * The max size one bio can handle is UINT_MAX because bvec_iter.bi_size 400 + * is defined as 'unsigned int'; meanwhile it has to be aligned to the logical 401 + * block size, which is the minimum unit accepted by the hardware. 402 + */ 403 + static inline unsigned int bio_allowed_max_sectors(struct request_queue *q) 404 + { 405 + return round_down(UINT_MAX, queue_logical_block_size(q)) >> 9; 396 406 } 397 407 398 408 /*
+1
block/bounce.c
··· 248 248 return NULL; 249 249 bio->bi_disk = bio_src->bi_disk; 250 250 bio->bi_opf = bio_src->bi_opf; 251 + bio->bi_ioprio = bio_src->bi_ioprio; 251 252 bio->bi_write_hint = bio_src->bi_write_hint; 252 253 bio->bi_iter.bi_sector = bio_src->bi_iter.bi_sector; 253 254 bio->bi_iter.bi_size = bio_src->bi_iter.bi_size;
+9 -9
crypto/crypto_user_base.c
··· 84 84 { 85 85 struct crypto_report_cipher rcipher; 86 86 87 - strlcpy(rcipher.type, "cipher", sizeof(rcipher.type)); 87 + strncpy(rcipher.type, "cipher", sizeof(rcipher.type)); 88 88 89 89 rcipher.blocksize = alg->cra_blocksize; 90 90 rcipher.min_keysize = alg->cra_cipher.cia_min_keysize; ··· 103 103 { 104 104 struct crypto_report_comp rcomp; 105 105 106 - strlcpy(rcomp.type, "compression", sizeof(rcomp.type)); 106 + strncpy(rcomp.type, "compression", sizeof(rcomp.type)); 107 107 if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS, 108 108 sizeof(struct crypto_report_comp), &rcomp)) 109 109 goto nla_put_failure; ··· 117 117 { 118 118 struct crypto_report_acomp racomp; 119 119 120 - strlcpy(racomp.type, "acomp", sizeof(racomp.type)); 120 + strncpy(racomp.type, "acomp", sizeof(racomp.type)); 121 121 122 122 if (nla_put(skb, CRYPTOCFGA_REPORT_ACOMP, 123 123 sizeof(struct crypto_report_acomp), &racomp)) ··· 132 132 { 133 133 struct crypto_report_akcipher rakcipher; 134 134 135 - strlcpy(rakcipher.type, "akcipher", sizeof(rakcipher.type)); 135 + strncpy(rakcipher.type, "akcipher", sizeof(rakcipher.type)); 136 136 137 137 if (nla_put(skb, CRYPTOCFGA_REPORT_AKCIPHER, 138 138 sizeof(struct crypto_report_akcipher), &rakcipher)) ··· 147 147 { 148 148 struct crypto_report_kpp rkpp; 149 149 150 - strlcpy(rkpp.type, "kpp", sizeof(rkpp.type)); 150 + strncpy(rkpp.type, "kpp", sizeof(rkpp.type)); 151 151 152 152 if (nla_put(skb, CRYPTOCFGA_REPORT_KPP, 153 153 sizeof(struct crypto_report_kpp), &rkpp)) ··· 161 161 static int crypto_report_one(struct crypto_alg *alg, 162 162 struct crypto_user_alg *ualg, struct sk_buff *skb) 163 163 { 164 - strlcpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name)); 165 - strlcpy(ualg->cru_driver_name, alg->cra_driver_name, 164 + strncpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name)); 165 + strncpy(ualg->cru_driver_name, alg->cra_driver_name, 166 166 sizeof(ualg->cru_driver_name)); 167 - strlcpy(ualg->cru_module_name,
module_name(alg->cra_module), 167 + strncpy(ualg->cru_module_name, module_name(alg->cra_module), 168 168 sizeof(ualg->cru_module_name)); 169 169 170 170 ualg->cru_type = 0; ··· 177 177 if (alg->cra_flags & CRYPTO_ALG_LARVAL) { 178 178 struct crypto_report_larval rl; 179 179 180 - strlcpy(rl.type, "larval", sizeof(rl.type)); 180 + strncpy(rl.type, "larval", sizeof(rl.type)); 181 181 if (nla_put(skb, CRYPTOCFGA_REPORT_LARVAL, 182 182 sizeof(struct crypto_report_larval), &rl)) 183 183 goto nla_put_failure;
+21
crypto/crypto_user_stat.c
··· 37 37 u64 v64; 38 38 u32 v32; 39 39 40 + memset(&raead, 0, sizeof(raead)); 41 + 40 42 strncpy(raead.type, "aead", sizeof(raead.type)); 41 43 42 44 v32 = atomic_read(&alg->encrypt_cnt); ··· 66 64 struct crypto_stat rcipher; 67 65 u64 v64; 68 66 u32 v32; 67 + 68 + memset(&rcipher, 0, sizeof(rcipher)); 69 69 70 70 strlcpy(rcipher.type, "cipher", sizeof(rcipher.type)); 71 71 ··· 97 93 u64 v64; 98 94 u32 v32; 99 95 96 + memset(&rcomp, 0, sizeof(rcomp)); 97 + 100 98 strlcpy(rcomp.type, "compression", sizeof(rcomp.type)); 101 99 v32 = atomic_read(&alg->compress_cnt); 102 100 rcomp.stat_compress_cnt = v32; ··· 126 120 u64 v64; 127 121 u32 v32; 128 122 123 + memset(&racomp, 0, sizeof(racomp)); 124 + 129 125 strlcpy(racomp.type, "acomp", sizeof(racomp.type)); 130 126 v32 = atomic_read(&alg->compress_cnt); 131 127 racomp.stat_compress_cnt = v32; ··· 154 146 struct crypto_stat rakcipher; 155 147 u64 v64; 156 148 u32 v32; 149 + 150 + memset(&rakcipher, 0, sizeof(rakcipher)); 157 151 158 152 strncpy(rakcipher.type, "akcipher", sizeof(rakcipher.type)); 159 153 v32 = atomic_read(&alg->encrypt_cnt); ··· 187 177 struct crypto_stat rkpp; 188 178 u32 v; 189 179 180 + memset(&rkpp, 0, sizeof(rkpp)); 181 + 190 182 strlcpy(rkpp.type, "kpp", sizeof(rkpp.type)); 191 183 192 184 v = atomic_read(&alg->setsecret_cnt); ··· 215 203 u64 v64; 216 204 u32 v32; 217 205 206 + memset(&rhash, 0, sizeof(rhash)); 207 + 218 208 strncpy(rhash.type, "ahash", sizeof(rhash.type)); 219 209 220 210 v32 = atomic_read(&alg->hash_cnt); ··· 240 226 struct crypto_stat rhash; 241 227 u64 v64; 242 228 u32 v32; 229 + 230 + memset(&rhash, 0, sizeof(rhash)); 243 231 244 232 strncpy(rhash.type, "shash", sizeof(rhash.type)); 245 233 ··· 267 251 u64 v64; 268 252 u32 v32; 269 253 254 + memset(&rrng, 0, sizeof(rrng)); 255 + 270 256 strncpy(rrng.type, "rng", sizeof(rrng.type)); 271 257 272 258 v32 = atomic_read(&alg->generate_cnt); ··· 293 275 struct crypto_user_alg *ualg, 294 276 struct sk_buff *skb) 295 277 { 278 +
memset(ualg, 0, sizeof(*ualg)); 279 + 296 280 strlcpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name)); 297 281 strlcpy(ualg->cru_driver_name, alg->cra_driver_name, 298 282 sizeof(ualg->cru_driver_name)); ··· 311 291 if (alg->cra_flags & CRYPTO_ALG_LARVAL) { 312 292 struct crypto_stat rl; 313 293 294 + memset(&rl, 0, sizeof(rl)); 314 295 strlcpy(rl.type, "larval", sizeof(rl.type)); 315 296 if (nla_put(skb, CRYPTOCFGA_STAT_LARVAL, 316 297 sizeof(struct crypto_stat), &rl))
+3 -2
crypto/simd.c
··· 124 124 125 125 ctx->cryptd_tfm = cryptd_tfm; 126 126 127 - reqsize = sizeof(struct skcipher_request); 128 - reqsize += crypto_skcipher_reqsize(&cryptd_tfm->base); 127 + reqsize = crypto_skcipher_reqsize(cryptd_skcipher_child(cryptd_tfm)); 128 + reqsize = max(reqsize, crypto_skcipher_reqsize(&cryptd_tfm->base)); 129 + reqsize += sizeof(struct skcipher_request); 129 130 130 131 crypto_skcipher_set_reqsize(tfm, reqsize); 131 132
+1 -1
drivers/acpi/Kconfig
··· 512 512 513 513 config XPOWER_PMIC_OPREGION 514 514 bool "ACPI operation region support for XPower AXP288 PMIC" 515 - depends on MFD_AXP20X_I2C && IOSF_MBI 515 + depends on MFD_AXP20X_I2C && IOSF_MBI=y 516 516 help 517 517 This config adds ACPI operation region support for XPower AXP288 PMIC. 518 518
+1
drivers/acpi/acpi_platform.c
··· 30 30 {"PNP0200", 0}, /* AT DMA Controller */ 31 31 {"ACPI0009", 0}, /* IOxAPIC */ 32 32 {"ACPI000A", 0}, /* IOAPIC */ 33 + {"SMB0001", 0}, /* ACPI SMBUS virtual device */ 33 34 {"", 0}, 34 35 }; 35 36
+2 -19
drivers/acpi/acpica/exserial.c
··· 244 244 { 245 245 acpi_status status; 246 246 u32 buffer_length; 247 - u32 data_length; 248 247 void *buffer; 249 248 union acpi_operand_object *buffer_desc; 250 249 u32 function; ··· 281 282 case ACPI_ADR_SPACE_SMBUS: 282 283 283 284 buffer_length = ACPI_SMBUS_BUFFER_SIZE; 284 - data_length = ACPI_SMBUS_DATA_SIZE; 285 285 function = ACPI_WRITE | (obj_desc->field.attribute << 16); 286 286 break; 287 287 288 288 case ACPI_ADR_SPACE_IPMI: 289 289 290 290 buffer_length = ACPI_IPMI_BUFFER_SIZE; 291 - data_length = ACPI_IPMI_DATA_SIZE; 292 291 function = ACPI_WRITE; 293 292 break; 294 293 ··· 307 310 /* Add header length to get the full size of the buffer */ 308 311 309 312 buffer_length += ACPI_SERIAL_HEADER_SIZE; 310 - data_length = source_desc->buffer.pointer[1]; 311 313 function = ACPI_WRITE | (accessor_type << 16); 312 314 break; 313 315 314 316 default: 315 317 return_ACPI_STATUS(AE_AML_INVALID_SPACE_ID); 316 318 } 317 - 318 - #if 0 319 - OBSOLETE ? 320 - /* Check for possible buffer overflow */ 321 - if (data_length > source_desc->buffer.length) { 322 - ACPI_ERROR((AE_INFO, 323 - "Length in buffer header (%u)(%u) is greater than " 324 - "the physical buffer length (%u) and will overflow", 325 - data_length, buffer_length, 326 - source_desc->buffer.length)); 327 - 328 - return_ACPI_STATUS(AE_AML_BUFFER_LIMIT); 329 - } 330 - #endif 331 319 332 320 /* Create the transfer/bidirectional/return buffer */ 333 321 ··· 324 342 /* Copy the input buffer data to the transfer buffer */ 325 343 326 344 buffer = buffer_desc->buffer.pointer; 327 - memcpy(buffer, source_desc->buffer.pointer, data_length); 345 + memcpy(buffer, source_desc->buffer.pointer, 346 + min(buffer_length, source_desc->buffer.length)); 328 347 329 348 /* Lock entire transaction if requested */ 330 349
+1 -1
drivers/acpi/arm64/iort.c
··· 700 700 */ 701 701 static struct irq_domain *iort_get_platform_device_domain(struct device *dev) 702 702 { 703 - struct acpi_iort_node *node, *msi_parent; 703 + struct acpi_iort_node *node, *msi_parent = NULL; 704 704 struct fwnode_handle *iort_fwnode; 705 705 struct acpi_iort_its_group *its; 706 706 int i;
+5 -14
drivers/acpi/nfit/core.c
··· 2928 2928 return rc; 2929 2929 2930 2930 if (ars_status_process_records(acpi_desc)) 2931 - return -ENOMEM; 2931 + dev_err(acpi_desc->dev, "Failed to process ARS records\n"); 2932 2932 2933 - return 0; 2933 + return rc; 2934 2934 } 2935 2935 2936 2936 static int ars_register(struct acpi_nfit_desc *acpi_desc, ··· 3341 3341 struct nvdimm *nvdimm, unsigned int cmd) 3342 3342 { 3343 3343 struct acpi_nfit_desc *acpi_desc = to_acpi_nfit_desc(nd_desc); 3344 - struct nfit_spa *nfit_spa; 3345 - int rc = 0; 3346 3344 3347 3345 if (nvdimm) 3348 3346 return 0; ··· 3353 3355 * just needs guarantees that any ARS it initiates are not 3354 3356 * interrupted by any intervening start requests from userspace. 3355 3357 */ 3356 - mutex_lock(&acpi_desc->init_mutex); 3357 - list_for_each_entry(nfit_spa, &acpi_desc->spas, list) 3358 - if (acpi_desc->scrub_spa 3359 - || test_bit(ARS_REQ_SHORT, &nfit_spa->ars_state) 3360 - || test_bit(ARS_REQ_LONG, &nfit_spa->ars_state)) { 3361 - rc = -EBUSY; 3362 - break; 3363 - } 3364 - mutex_unlock(&acpi_desc->init_mutex); 3358 + if (work_busy(&acpi_desc->dwork.work)) 3359 + return -EBUSY; 3365 3360 3366 - return rc; 3361 + return 0; 3367 3362 } 3368 3363 3369 3364 int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
+6 -2
drivers/acpi/nfit/mce.c
··· 25 25 struct acpi_nfit_desc *acpi_desc; 26 26 struct nfit_spa *nfit_spa; 27 27 28 - /* We only care about memory errors */ 29 - if (!mce_is_memory_error(mce)) 28 + /* We only care about uncorrectable memory errors */ 29 + if (!mce_is_memory_error(mce) || mce_is_correctable(mce)) 30 + return NOTIFY_DONE; 31 + 32 + /* Verify the address reported in the MCE is valid. */ 33 + if (!mce_usable_address(mce)) 30 34 return NOTIFY_DONE; 31 35 32 36 /*
+12 -9
drivers/android/binder.c
··· 2974 2974 t->buffer = NULL; 2975 2975 goto err_binder_alloc_buf_failed; 2976 2976 } 2977 - t->buffer->allow_user_free = 0; 2978 2977 t->buffer->debug_id = t->debug_id; 2979 2978 t->buffer->transaction = t; 2980 2979 t->buffer->target_node = target_node; ··· 3509 3510 3510 3511 buffer = binder_alloc_prepare_to_free(&proc->alloc, 3511 3512 data_ptr); 3512 - if (buffer == NULL) { 3513 - binder_user_error("%d:%d BC_FREE_BUFFER u%016llx no match\n", 3514 - proc->pid, thread->pid, (u64)data_ptr); 3515 - break; 3516 - } 3517 - if (!buffer->allow_user_free) { 3518 - binder_user_error("%d:%d BC_FREE_BUFFER u%016llx matched unreturned buffer\n", 3519 - proc->pid, thread->pid, (u64)data_ptr); 3513 + if (IS_ERR_OR_NULL(buffer)) { 3514 + if (PTR_ERR(buffer) == -EPERM) { 3515 + binder_user_error( 3516 + "%d:%d BC_FREE_BUFFER u%016llx matched unreturned or currently freeing buffer\n", 3517 + proc->pid, thread->pid, 3518 + (u64)data_ptr); 3519 + } else { 3520 + binder_user_error( 3521 + "%d:%d BC_FREE_BUFFER u%016llx no match\n", 3522 + proc->pid, thread->pid, 3523 + (u64)data_ptr); 3524 + } 3520 3525 break; 3521 3526 } 3522 3527 binder_debug(BINDER_DEBUG_FREE_BUFFER,
+6 -10
drivers/android/binder_alloc.c
··· 151 151 else { 152 152 /* 153 153 * Guard against user threads attempting to 154 - * free the buffer twice 154 + * free the buffer when in use by kernel or 155 + * after it's already been freed. 155 156 */ 156 - if (buffer->free_in_progress) { 157 - binder_alloc_debug(BINDER_DEBUG_USER_ERROR, 158 - "%d:%d FREE_BUFFER u%016llx user freed buffer twice\n", 159 - alloc->pid, current->pid, 160 - (u64)user_ptr); 161 - return NULL; 162 - } 163 - buffer->free_in_progress = 1; 157 + if (!buffer->allow_user_free) 158 + return ERR_PTR(-EPERM); 159 + buffer->allow_user_free = 0; 164 160 return buffer; 165 161 } 166 162 } ··· 496 500 497 501 rb_erase(best_fit, &alloc->free_buffers); 498 502 buffer->free = 0; 499 - buffer->free_in_progress = 0; 503 + buffer->allow_user_free = 0; 500 504 binder_insert_allocated_buffer_locked(alloc, buffer); 501 505 binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, 502 506 "%d: binder_alloc_buf size %zd got %pK\n",
+1 -2
drivers/android/binder_alloc.h
··· 50 50 unsigned free:1; 51 51 unsigned allow_user_free:1; 52 52 unsigned async_transaction:1; 53 - unsigned free_in_progress:1; 54 - unsigned debug_id:28; 53 + unsigned debug_id:29; 55 54 56 55 struct binder_transaction *transaction; 57 56
+1 -1
drivers/ata/libata-core.c
··· 4553 4553 /* These specific Samsung models/firmware-revs do not handle LPM well */ 4554 4554 { "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, }, 4555 4555 { "SAMSUNG SSD PM830 mSATA *", "CXM13D1Q", ATA_HORKAGE_NOLPM, }, 4556 - { "SAMSUNG MZ7TD256HAFV-000L9", "DXT02L5Q", ATA_HORKAGE_NOLPM, }, 4556 + { "SAMSUNG MZ7TD256HAFV-000L9", NULL, ATA_HORKAGE_NOLPM, }, 4557 4557 4558 4558 /* devices that don't properly handle queued TRIM commands */ 4559 4559 { "Micron_M500IT_*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
+1 -5
drivers/ata/sata_rcar.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 1 2 /* 2 3 * Renesas R-Car SATA driver 3 4 * 4 5 * Author: Vladimir Barinov <source@cogentembedded.com> 5 6 * Copyright (C) 2013-2015 Cogent Embedded, Inc. 6 7 * Copyright (C) 2013-2015 Renesas Solutions Corp. 7 - * 8 - * This program is free software; you can redistribute it and/or modify it 9 - * under the terms of the GNU General Public License as published by the 10 - * Free Software Foundation; either version 2 of the License, or (at your 11 - * option) any later version. 12 8 */ 13 9 14 10 #include <linux/kernel.h>
+2 -2
drivers/atm/firestream.c
··· 1410 1410 1411 1411 func_enter (); 1412 1412 1413 - fs_dprintk (FS_DEBUG_INIT, "Inititing queue at %x: %d entries:\n", 1413 + fs_dprintk (FS_DEBUG_INIT, "Initializing queue at %x: %d entries:\n", 1414 1414 queue, nentries); 1415 1415 1416 1416 p = aligned_kmalloc (sz, GFP_KERNEL, 0x10); ··· 1443 1443 { 1444 1444 func_enter (); 1445 1445 1446 - fs_dprintk (FS_DEBUG_INIT, "Inititing free pool at %x:\n", queue); 1446 + fs_dprintk (FS_DEBUG_INIT, "Initializing free pool at %x:\n", queue); 1447 1447 1448 1448 write_fs (dev, FP_CNF(queue), (bufsize * RBFP_RBS) | RBFP_RBSVAL | RBFP_CME); 1449 1449 write_fs (dev, FP_SA(queue), 0);
+8 -2
drivers/base/devres.c
··· 26 26 27 27 struct devres { 28 28 struct devres_node node; 29 - /* -- 3 pointers */ 30 - unsigned long long data[]; /* guarantee ull alignment */ 29 + /* 30 + * Some archs want to perform DMA into kmalloc caches 31 + * and need a guaranteed alignment larger than 32 + * the alignment of a 64-bit integer. 33 + * Thus we use ARCH_KMALLOC_MINALIGN here and get exactly the same 34 + * buffer alignment as if it was allocated by plain kmalloc(). 35 + */ 36 + u8 __aligned(ARCH_KMALLOC_MINALIGN) data[]; 31 37 }; 32 38 33 39 struct devres_group {
+2 -1
drivers/block/floppy.c
··· 4148 4148 bio.bi_end_io = floppy_rb0_cb; 4149 4149 bio_set_op_attrs(&bio, REQ_OP_READ, 0); 4150 4150 4151 + init_completion(&cbdata.complete); 4152 + 4151 4153 submit_bio(&bio); 4152 4154 process_fd_request(); 4153 4155 4154 - init_completion(&cbdata.complete); 4155 4156 wait_for_completion(&cbdata.complete); 4156 4157 4157 4158 __free_page(page);
+1
drivers/block/xen-blkfront.c
··· 1919 1919 GFP_KERNEL); 1920 1920 if (!info->rinfo) { 1921 1921 xenbus_dev_fatal(info->xbdev, -ENOMEM, "allocating ring_info structure"); 1922 + info->nr_rings = 0; 1922 1923 return -ENOMEM; 1923 1924 } 1924 1925
+1
drivers/clk/clk-fixed-factor.c
··· 210 210 { 211 211 struct clk *clk = platform_get_drvdata(pdev); 212 212 213 + of_clk_del_provider(pdev->dev.of_node); 213 214 clk_unregister_fixed_factor(clk); 214 215 215 216 return 0;
+13
drivers/clk/meson/axg.c
··· 325 325 .ops = &clk_regmap_gate_ops, 326 326 .parent_names = (const char *[]){ "fclk_div2_div" }, 327 327 .num_parents = 1, 328 + .flags = CLK_IS_CRITICAL, 328 329 }, 329 330 }; 330 331 ··· 350 349 .ops = &clk_regmap_gate_ops, 351 350 .parent_names = (const char *[]){ "fclk_div3_div" }, 352 351 .num_parents = 1, 352 + /* 353 + * FIXME: 354 + * This clock, as fdiv2, is used by the SCPI FW and is required 355 + * by the platform to operate correctly. 356 + * Until the following conditions are met, we need this clock to 357 + * be marked as critical: 358 + * a) The SCPI generic driver claims and enables all the clocks 359 + * it needs 360 + * b) CCF has a clock hand-off mechanism to make sure the 361 + * clock stays on until the proper driver comes along 362 + */ 363 + .flags = CLK_IS_CRITICAL, 353 364 }, 354 365 };
+12
drivers/clk/meson/gxbb.c
··· 506 506 .ops = &clk_regmap_gate_ops, 507 507 .parent_names = (const char *[]){ "fclk_div3_div" }, 508 508 .num_parents = 1, 509 + /* 510 + * FIXME: 511 + * This clock, as fdiv2, is used by the SCPI FW and is required 512 + * by the platform to operate correctly. 513 + * Until the following conditions are met, we need this clock to 514 + * be marked as critical: 515 + * a) The SCPI generic driver claims and enables all the clocks 516 + * it needs 517 + * b) CCF has a clock hand-off mechanism to make sure the 518 + * clock stays on until the proper driver comes along 519 + */ 520 + .flags = CLK_IS_CRITICAL, 509 521 }, 510 522 }; 511 523
+1 -1
drivers/clk/qcom/gcc-qcs404.c
··· 265 265 .div = 1, 266 266 .hw.init = &(struct clk_init_data){ 267 267 .name = "cxo", 268 - .parent_names = (const char *[]){ "xo_board" }, 268 + .parent_names = (const char *[]){ "xo-board" }, 269 269 .num_parents = 1, 270 270 .ops = &clk_fixed_factor_ops, 271 271 },
+12 -2
drivers/clocksource/i8253.c
··· 20 20 DEFINE_RAW_SPINLOCK(i8253_lock); 21 21 EXPORT_SYMBOL(i8253_lock); 22 22 23 + /* 24 + * Handle PIT quirk in pit_shutdown() where zeroing the counter register 25 + * restarts the PIT, negating the shutdown. On platforms with the quirk, 26 + * platform specific code can set this to false. 27 + */ 28 + bool i8253_clear_counter_on_shutdown __ro_after_init = true; 29 + 23 30 #ifdef CONFIG_CLKSRC_I8253 24 31 /* 25 32 * Since the PIT overflows every tick, its not very useful ··· 116 109 raw_spin_lock(&i8253_lock); 117 110 118 111 outb_p(0x30, PIT_MODE); 119 - outb_p(0, PIT_CH0); 120 - outb_p(0, PIT_CH0); 112 + 113 + if (i8253_clear_counter_on_shutdown) { 114 + outb_p(0, PIT_CH0); 115 + outb_p(0, PIT_CH0); 116 + } 121 117 122 118 raw_spin_unlock(&i8253_lock); 123 119 return 0;
+6 -1
drivers/cpufreq/imx6q-cpufreq.c
··· 160 160 /* Ensure the arm clock divider is what we expect */ 161 161 ret = clk_set_rate(clks[ARM].clk, new_freq * 1000); 162 162 if (ret) { 163 + int ret1; 164 + 163 165 dev_err(cpu_dev, "failed to set clock rate: %d\n", ret); 164 - regulator_set_voltage_tol(arm_reg, volt_old, 0); 166 + ret1 = regulator_set_voltage_tol(arm_reg, volt_old, 0); 167 + if (ret1) 168 + dev_warn(cpu_dev, 169 + "failed to restore vddarm voltage: %d\n", ret1); 165 170 return ret; 166 171 } 167 172
+21 -5
drivers/cpufreq/ti-cpufreq.c
··· 201 201 {}, 202 202 }; 203 203 204 + static const struct of_device_id *ti_cpufreq_match_node(void) 205 + { 206 + struct device_node *np; 207 + const struct of_device_id *match; 208 + 209 + np = of_find_node_by_path("/"); 210 + match = of_match_node(ti_cpufreq_of_match, np); 211 + of_node_put(np); 212 + 213 + return match; 214 + } 215 + 204 216 static int ti_cpufreq_probe(struct platform_device *pdev) 205 217 { 206 218 u32 version[VERSION_COUNT]; 207 - struct device_node *np; 208 219 const struct of_device_id *match; 209 220 struct opp_table *ti_opp_table; 210 221 struct ti_cpufreq_data *opp_data; 211 222 const char * const reg_names[] = {"vdd", "vbb"}; 212 223 int ret; 213 224 214 - np = of_find_node_by_path("/"); 215 - match = of_match_node(ti_cpufreq_of_match, np); 216 - of_node_put(np); 225 + match = dev_get_platdata(&pdev->dev); 217 226 if (!match) 218 227 return -ENODEV; 219 228 ··· 299 290 300 291 static int ti_cpufreq_init(void) 301 292 { 302 - platform_device_register_simple("ti-cpufreq", -1, NULL, 0); 293 + const struct of_device_id *match; 294 + 295 + /* Check to ensure we are on a compatible platform */ 296 + match = ti_cpufreq_match_node(); 297 + if (match) 298 + platform_device_register_data(NULL, "ti-cpufreq", -1, match, 299 + sizeof(*match)); 300 + 303 301 return 0; 304 302 } 305 303 module_init(ti_cpufreq_init);
+7 -33
drivers/cpuidle/cpuidle-arm.c
··· 82 82 { 83 83 int ret; 84 84 struct cpuidle_driver *drv; 85 - struct cpuidle_device *dev; 86 85 87 86 drv = kmemdup(&arm_idle_driver, sizeof(*drv), GFP_KERNEL); 88 87 if (!drv) ··· 102 103 goto out_kfree_drv; 103 104 } 104 105 105 - ret = cpuidle_register_driver(drv); 106 - if (ret) { 107 - if (ret != -EBUSY) 108 - pr_err("Failed to register cpuidle driver\n"); 109 - goto out_kfree_drv; 110 - } 111 - 112 106 /* 113 107 * Call arch CPU operations in order to initialize 114 108 * idle states suspend back-end specific data ··· 109 117 ret = arm_cpuidle_init(cpu); 110 118 111 119 /* 112 - * Skip the cpuidle device initialization if the reported 120 + * Allow the initialization to continue for other CPUs, if the reported 113 121 * failure is a HW misconfiguration/breakage (-ENXIO). 114 122 */ 115 - if (ret == -ENXIO) 116 - return 0; 117 - 118 123 if (ret) { 119 124 pr_err("CPU %d failed to init idle CPU ops\n", cpu); 120 - goto out_unregister_drv; 125 + ret = ret == -ENXIO ? 0 : ret; 126 + goto out_kfree_drv; 121 127 } 122 128 123 - dev = kzalloc(sizeof(*dev), GFP_KERNEL); 124 - if (!dev) { 125 - ret = -ENOMEM; 126 - goto out_unregister_drv; 127 - } 128 - dev->cpu = cpu; 129 - 130 - ret = cpuidle_register_device(dev); 131 - if (ret) { 132 - pr_err("Failed to register cpuidle device for CPU %d\n", 133 - cpu); 134 - goto out_kfree_dev; 135 - } 129 + ret = cpuidle_register(drv, NULL); 130 + if (ret) 131 + goto out_kfree_drv; 136 132 137 133 return 0; 138 134 139 - out_kfree_dev: 140 - kfree(dev); 141 - out_unregister_drv: 142 - cpuidle_unregister_driver(drv); 143 135 out_kfree_drv: 144 136 kfree(drv); 145 137 return ret; ··· 154 178 while (--cpu >= 0) { 155 179 dev = per_cpu(cpuidle_devices, cpu); 156 180 drv = cpuidle_get_cpu_driver(dev); 157 - cpuidle_unregister_device(dev); 158 - cpuidle_unregister_driver(drv); 159 - kfree(dev); 181 + cpuidle_unregister(drv); 160 182 kfree(drv); 161 183 } 162 184
+17 -14
drivers/crypto/hisilicon/sec/sec_algs.c
··· 732 732 int *splits_in_nents; 733 733 int *splits_out_nents = NULL; 734 734 struct sec_request_el *el, *temp; 735 + bool split = skreq->src != skreq->dst; 735 736 736 737 mutex_init(&sec_req->lock); 737 738 sec_req->req_base = &skreq->base; ··· 751 750 if (ret) 752 751 goto err_free_split_sizes; 753 752 754 - if (skreq->src != skreq->dst) { 753 + if (split) { 755 754 sec_req->len_out = sg_nents(skreq->dst); 756 755 ret = sec_map_and_split_sg(skreq->dst, split_sizes, steps, 757 756 &splits_out, &splits_out_nents, ··· 786 785 split_sizes[i], 787 786 skreq->src != skreq->dst, 788 787 splits_in[i], splits_in_nents[i], 789 - splits_out[i], 790 - splits_out_nents[i], info); 788 + split ? splits_out[i] : NULL, 789 + split ? splits_out_nents[i] : 0, 790 + info); 791 791 if (IS_ERR(el)) { 792 792 ret = PTR_ERR(el); 793 793 goto err_free_elements; ··· 808 806 * more refined but this is unlikely to happen so no need. 809 807 */ 810 808 811 - /* Cleanup - all elements in pointer arrays have been coppied */ 812 - kfree(splits_in_nents); 813 - kfree(splits_in); 814 - kfree(splits_out_nents); 815 - kfree(splits_out); 816 - kfree(split_sizes); 817 - 818 809 /* Grab a big lock for a long time to avoid concurrency issues */ 819 810 mutex_lock(&queue->queuelock); 820 811 ··· 822 827 (!queue->havesoftqueue || 823 828 kfifo_avail(&queue->softqueue) > steps)) || 824 829 !list_empty(&ctx->backlog)) { 830 + ret = -EBUSY; 825 831 if ((skreq->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) { 826 832 list_add_tail(&sec_req->backlog_head, &ctx->backlog); 827 833 mutex_unlock(&queue->queuelock); 828 - return -EBUSY; 834 + goto out; 829 835 } 830 836 831 - ret = -EBUSY; 832 837 mutex_unlock(&queue->queuelock); 833 838 goto err_free_elements; 834 839 } ··· 837 842 if (ret) 838 843 goto err_free_elements; 839 844 840 - return -EINPROGRESS; 845 + ret = -EINPROGRESS; 846 + out: 847 + /* Cleanup - all elements in pointer arrays have been copied */ 848 + kfree(splits_in_nents); 849 + kfree(splits_in); 
850 + kfree(splits_out_nents); 851 + kfree(splits_out); 852 + kfree(split_sizes); 853 + return ret; 841 854 842 855 err_free_elements: 843 856 list_for_each_entry_safe(el, temp, &sec_req->elements, head) { ··· 857 854 crypto_skcipher_ivsize(atfm), 858 855 DMA_BIDIRECTIONAL); 859 856 err_unmap_out_sg: 860 - if (skreq->src != skreq->dst) 857 + if (split) 861 858 sec_unmap_sg_on_err(skreq->dst, steps, splits_out, 862 859 splits_out_nents, sec_req->len_out, 863 860 info->dev);
+1
drivers/dma-buf/udmabuf.c
··· 184 184 exp_info.ops = &udmabuf_ops; 185 185 exp_info.size = ubuf->pagecount << PAGE_SHIFT; 186 186 exp_info.priv = ubuf; 187 + exp_info.flags = O_RDWR; 187 188 188 189 buf = dma_buf_export(&exp_info); 189 190 if (IS_ERR(buf)) {
+9 -1
drivers/dma/at_hdmac.c
··· 1641 1641 atchan->descs_allocated = 0; 1642 1642 atchan->status = 0; 1643 1643 1644 + /* 1645 + * Free atslave allocated in at_dma_xlate() 1646 + */ 1647 + kfree(chan->private); 1648 + chan->private = NULL; 1649 + 1644 1650 dev_vdbg(chan2dev(chan), "free_chan_resources: done\n"); 1645 1651 } 1646 1652 ··· 1681 1675 dma_cap_zero(mask); 1682 1676 dma_cap_set(DMA_SLAVE, mask); 1683 1677 1684 - atslave = devm_kzalloc(&dmac_pdev->dev, sizeof(*atslave), GFP_KERNEL); 1678 + atslave = kzalloc(sizeof(*atslave), GFP_KERNEL); 1685 1679 if (!atslave) 1686 1680 return NULL; 1687 1681 ··· 2006 2000 struct resource *io; 2007 2001 2008 2002 at_dma_off(atdma); 2003 + if (pdev->dev.of_node) 2004 + of_dma_controller_free(pdev->dev.of_node); 2009 2005 dma_async_device_unregister(&atdma->dma_common); 2010 2006 2011 2007 dma_pool_destroy(atdma->memset_pool);
+4
drivers/firmware/efi/arm-init.c
··· 265 265 (params.mmap & ~PAGE_MASK))); 266 266 267 267 init_screen_info(); 268 + 269 + /* ARM does not permit early mappings to persist across paging_init() */ 270 + if (IS_ENABLED(CONFIG_ARM)) 271 + efi_memmap_unmap(); 268 272 } 269 273 270 274 static int __init register_gop_device(void)
+1 -1
drivers/firmware/efi/arm-runtime.c
··· 110 110 { 111 111 u64 mapsize; 112 112 113 - if (!efi_enabled(EFI_BOOT) || !efi_enabled(EFI_MEMMAP)) { 113 + if (!efi_enabled(EFI_BOOT)) { 114 114 pr_info("EFI services will not be available.\n"); 115 115 return 0; 116 116 }
+41 -14
drivers/firmware/efi/efi.c
··· 592 592 593 593 early_memunmap(tbl, sizeof(*tbl)); 594 594 } 595 + return 0; 596 + } 595 597 598 + int __init efi_apply_persistent_mem_reservations(void) 599 + { 596 600 if (efi.mem_reserve != EFI_INVALID_TABLE_ADDR) { 597 601 unsigned long prsv = efi.mem_reserve; 598 602 ··· 967 963 } 968 964 969 965 static DEFINE_SPINLOCK(efi_mem_reserve_persistent_lock); 966 + static struct linux_efi_memreserve *efi_memreserve_root __ro_after_init; 970 967 971 - int efi_mem_reserve_persistent(phys_addr_t addr, u64 size) 968 + static int __init efi_memreserve_map_root(void) 972 969 { 973 - struct linux_efi_memreserve *rsv, *parent; 974 - 975 970 if (efi.mem_reserve == EFI_INVALID_TABLE_ADDR) 976 971 return -ENODEV; 977 972 978 - rsv = kmalloc(sizeof(*rsv), GFP_KERNEL); 973 + efi_memreserve_root = memremap(efi.mem_reserve, 974 + sizeof(*efi_memreserve_root), 975 + MEMREMAP_WB); 976 + if (WARN_ON_ONCE(!efi_memreserve_root)) 977 + return -ENOMEM; 978 + return 0; 979 + } 980 + 981 + int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size) 982 + { 983 + struct linux_efi_memreserve *rsv; 984 + int rc; 985 + 986 + if (efi_memreserve_root == (void *)ULONG_MAX) 987 + return -ENODEV; 988 + 989 + if (!efi_memreserve_root) { 990 + rc = efi_memreserve_map_root(); 991 + if (rc) 992 + return rc; 993 + } 994 + 995 + rsv = kmalloc(sizeof(*rsv), GFP_ATOMIC); 979 996 if (!rsv) 980 997 return -ENOMEM; 981 - 982 - parent = memremap(efi.mem_reserve, sizeof(*rsv), MEMREMAP_WB); 983 - if (!parent) { 984 - kfree(rsv); 985 - return -ENOMEM; 986 - } 987 998 988 999 rsv->base = addr; 989 1000 rsv->size = size; 990 1001 991 1002 spin_lock(&efi_mem_reserve_persistent_lock); 992 - rsv->next = parent->next; 993 - parent->next = __pa(rsv); 1003 + rsv->next = efi_memreserve_root->next; 1004 + efi_memreserve_root->next = __pa(rsv); 994 1005 spin_unlock(&efi_mem_reserve_persistent_lock); 995 - 996 - memunmap(parent); 997 1006 998 1007 return 0; 999 1008 } 1009 + 1010 + static int __init 
efi_memreserve_root_init(void) 1011 + { 1012 + if (efi_memreserve_root) 1013 + return 0; 1014 + if (efi_memreserve_map_root()) 1015 + efi_memreserve_root = (void *)ULONG_MAX; 1016 + return 0; 1017 + } 1018 + early_initcall(efi_memreserve_root_init); 1000 1019 1001 1020 #ifdef CONFIG_KEXEC 1002 1021 static int update_efi_random_seed(struct notifier_block *nb,
+3
drivers/firmware/efi/libstub/arm-stub.c
··· 75 75 efi_guid_t memreserve_table_guid = LINUX_EFI_MEMRESERVE_TABLE_GUID; 76 76 efi_status_t status; 77 77 78 + if (IS_ENABLED(CONFIG_ARM)) 79 + return; 80 + 78 81 status = efi_call_early(allocate_pool, EFI_LOADER_DATA, sizeof(*rsv), 79 82 (void **)&rsv); 80 83 if (status != EFI_SUCCESS) {
+4
drivers/firmware/efi/libstub/fdt.c
··· 158 158 return efi_status; 159 159 } 160 160 } 161 + 162 + /* shrink the FDT back to its minimum size */ 163 + fdt_pack(fdt); 164 + 161 165 return EFI_SUCCESS; 162 166 163 167 fdt_set_fail:
+3
drivers/firmware/efi/memmap.c
··· 118 118 119 119 void __init efi_memmap_unmap(void) 120 120 { 121 + if (!efi_enabled(EFI_MEMMAP)) 122 + return; 123 + 121 124 if (!efi.memmap.late) { 122 125 unsigned long size; 123 126
+1 -1
drivers/firmware/efi/runtime-wrappers.c
··· 67 67 } \ 68 68 \ 69 69 init_completion(&efi_rts_work.efi_rts_comp); \ 70 - INIT_WORK_ONSTACK(&efi_rts_work.work, efi_call_rts); \ 70 + INIT_WORK(&efi_rts_work.work, efi_call_rts); \ 71 71 efi_rts_work.arg1 = _arg1; \ 72 72 efi_rts_work.arg2 = _arg2; \ 73 73 efi_rts_work.arg3 = _arg3; \
+1
drivers/fsi/Kconfig
··· 46 46 tristate "FSI master based on Aspeed ColdFire coprocessor" 47 47 depends on GPIOLIB 48 48 depends on GPIO_ASPEED 49 + select GENERIC_ALLOCATOR 49 50 ---help--- 50 51 This option enables a FSI master using the AST2400 and AST2500 GPIO 51 52 lines driven by the internal ColdFire coprocessor. This requires
-1
drivers/fsi/fsi-scom.c
··· 20 20 #include <linux/fs.h> 21 21 #include <linux/uaccess.h> 22 22 #include <linux/slab.h> 23 - #include <linux/cdev.h> 24 23 #include <linux/list.h> 25 24 26 25 #include <uapi/linux/fsi.h>
+2 -1
drivers/gnss/serial.c
··· 13 13 #include <linux/of.h> 14 14 #include <linux/pm.h> 15 15 #include <linux/pm_runtime.h> 16 + #include <linux/sched.h> 16 17 #include <linux/serdev.h> 17 18 #include <linux/slab.h> 18 19 ··· 64 63 int ret; 65 64 66 65 /* write is only buffered synchronously */ 67 - ret = serdev_device_write(serdev, buf, count, 0); 66 + ret = serdev_device_write(serdev, buf, count, MAX_SCHEDULE_TIMEOUT); 68 67 if (ret < 0) 69 68 return ret; 70 69
+2 -1
drivers/gnss/sirf.c
··· 16 16 #include <linux/pm.h> 17 17 #include <linux/pm_runtime.h> 18 18 #include <linux/regulator/consumer.h> 19 + #include <linux/sched.h> 19 20 #include <linux/serdev.h> 20 21 #include <linux/slab.h> 21 22 #include <linux/wait.h> ··· 84 83 int ret; 85 84 86 85 /* write is only buffered synchronously */ 87 - ret = serdev_device_write(serdev, buf, count, 0); 86 + ret = serdev_device_write(serdev, buf, count, MAX_SCHEDULE_TIMEOUT); 88 87 if (ret < 0) 89 88 return ret; 90 89
+1 -1
drivers/gpio/gpio-davinci.c
··· 258 258 chips->chip.set = davinci_gpio_set; 259 259 260 260 chips->chip.ngpio = ngpio; 261 - chips->chip.base = -1; 261 + chips->chip.base = pdata->no_auto_base ? pdata->base : -1; 262 262 263 263 #ifdef CONFIG_OF_GPIO 264 264 chips->chip.of_gpio_n_cells = 2;
+3 -3
drivers/gpio/gpio-mockup.c
··· 35 35 #define gpio_mockup_err(...) pr_err(GPIO_MOCKUP_NAME ": " __VA_ARGS__) 36 36 37 37 enum { 38 - GPIO_MOCKUP_DIR_OUT = 0, 39 - GPIO_MOCKUP_DIR_IN = 1, 38 + GPIO_MOCKUP_DIR_IN = 0, 39 + GPIO_MOCKUP_DIR_OUT = 1, 40 40 }; 41 41 42 42 /* ··· 131 131 { 132 132 struct gpio_mockup_chip *chip = gpiochip_get_data(gc); 133 133 134 - return chip->lines[offset].dir; 134 + return !chip->lines[offset].dir; 135 135 } 136 136 137 137 static int gpio_mockup_to_irq(struct gpio_chip *gc, unsigned int offset)
+2 -2
drivers/gpio/gpio-pxa.c
··· 268 268 269 269 if (pxa_gpio_has_pinctrl()) { 270 270 ret = pinctrl_gpio_direction_input(chip->base + offset); 271 - if (!ret) 272 - return 0; 271 + if (ret) 272 + return ret; 273 273 } 274 274 275 275 spin_lock_irqsave(&gpio_lock, flags);
+3 -2
drivers/gpio/gpiolib.c
··· 1295 1295 gdev->descs = kcalloc(chip->ngpio, sizeof(gdev->descs[0]), GFP_KERNEL); 1296 1296 if (!gdev->descs) { 1297 1297 status = -ENOMEM; 1298 - goto err_free_gdev; 1298 + goto err_free_ida; 1299 1299 } 1300 1300 1301 1301 if (chip->ngpio == 0) { ··· 1427 1427 kfree_const(gdev->label); 1428 1428 err_free_descs: 1429 1429 kfree(gdev->descs); 1430 - err_free_gdev: 1430 + err_free_ida: 1431 1431 ida_simple_remove(&gpio_ida, gdev->id); 1432 + err_free_gdev: 1432 1433 /* failures here can mean systems won't boot... */ 1433 1434 pr_err("%s: GPIOs %d..%d (%s) failed to register, %d\n", __func__, 1434 1435 gdev->base, gdev->base + gdev->ngpio - 1,
+1
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 151 151 extern int amdgpu_gpu_recovery; 152 152 extern int amdgpu_emu_mode; 153 153 extern uint amdgpu_smu_memory_pool_size; 154 + extern uint amdgpu_dc_feature_mask; 154 155 extern struct amdgpu_mgpu_info mgpu_info; 155 156 156 157 #ifdef CONFIG_DRM_AMDGPU_SI
+5 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
··· 501 501 { 502 502 struct amdgpu_device *adev = (struct amdgpu_device *)kgd; 503 503 504 - amdgpu_dpm_switch_power_profile(adev, 505 - PP_SMC_POWER_PROFILE_COMPUTE, !idle); 504 + if (adev->powerplay.pp_funcs && 505 + adev->powerplay.pp_funcs->switch_power_profile) 506 + amdgpu_dpm_switch_power_profile(adev, 507 + PP_SMC_POWER_PROFILE_COMPUTE, 508 + !idle); 506 509 } 507 510 508 511 bool amdgpu_amdkfd_is_kfd_vmid(struct amdgpu_device *adev, u32 vmid)
+7
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
··· 626 626 "dither", 627 627 amdgpu_dither_enum_list, sz); 628 628 629 + if (amdgpu_device_has_dc_support(adev)) { 630 + adev->mode_info.max_bpc_property = 631 + drm_property_create_range(adev->ddev, 0, "max bpc", 8, 16); 632 + if (!adev->mode_info.max_bpc_property) 633 + return -ENOMEM; 634 + } 635 + 629 636 return 0; 630 637 } 631 638
+11
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 127 127 int amdgpu_gpu_recovery = -1; /* auto */ 128 128 int amdgpu_emu_mode = 0; 129 129 uint amdgpu_smu_memory_pool_size = 0; 130 + /* FBC (bit 0) disabled by default*/ 131 + uint amdgpu_dc_feature_mask = 0; 132 + 130 133 struct amdgpu_mgpu_info mgpu_info = { 131 134 .mutex = __MUTEX_INITIALIZER(mgpu_info.mutex), 132 135 }; ··· 633 630 module_param(halt_if_hws_hang, int, 0644); 634 631 MODULE_PARM_DESC(halt_if_hws_hang, "Halt if HWS hang is detected (0 = off (default), 1 = on)"); 635 632 #endif 633 + 634 + /** 635 + * DOC: dcfeaturemask (uint) 636 + * Override display features enabled. See enum DC_FEATURE_MASK in drivers/gpu/drm/amd/include/amd_shared.h. 637 + * The default is the current set of stable display features. 638 + */ 639 + MODULE_PARM_DESC(dcfeaturemask, "all stable DC features enabled (default))"); 640 + module_param_named(dcfeaturemask, amdgpu_dc_feature_mask, uint, 0444); 636 641 637 642 static const struct pci_device_id pciidlist[] = { 638 643 #ifdef CONFIG_DRM_AMDGPU_SI
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
··· 339 339 struct drm_property *audio_property; 340 340 /* FMT dithering */ 341 341 struct drm_property *dither_property; 342 + /* maximum number of bits per channel for monitor color */ 343 + struct drm_property *max_bpc_property; 342 344 /* hardcoded DFP edid from BIOS */ 343 345 struct edid *bios_hardcoded_edid; 344 346 int bios_hardcoded_edid_size;
+18 -14
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 181 181 182 182 if (level == adev->vm_manager.root_level) 183 183 /* For the root directory */ 184 - return round_up(adev->vm_manager.max_pfn, 1 << shift) >> shift; 184 + return round_up(adev->vm_manager.max_pfn, 1ULL << shift) >> shift; 185 185 else if (level != AMDGPU_VM_PTB) 186 186 /* Everything in between */ 187 187 return 512; ··· 1632 1632 continue; 1633 1633 } 1634 1634 1635 - /* First check if the entry is already handled */ 1636 - if (cursor.pfn < frag_start) { 1637 - cursor.entry->huge = true; 1638 - amdgpu_vm_pt_next(adev, &cursor); 1639 - continue; 1640 - } 1641 - 1642 1635 /* If it isn't already handled it can't be a huge page */ 1643 1636 if (cursor.entry->huge) { 1644 1637 /* Add the entry to the relocated list to update it. */ ··· 1656 1663 if (!amdgpu_vm_pt_descendant(adev, &cursor)) 1657 1664 return -ENOENT; 1658 1665 continue; 1659 - } else if (frag >= parent_shift) { 1666 + } else if (frag >= parent_shift && 1667 + cursor.level - 1 != adev->vm_manager.root_level) { 1660 1668 /* If the fragment size is even larger than the parent 1661 - * shift we should go up one level and check it again. 1669 + * shift we should go up one level and check it again 1670 + * unless one level up is the root level. 
1662 1671 */ 1663 1672 if (!amdgpu_vm_pt_ancestor(&cursor)) 1664 1673 return -ENOENT; ··· 1668 1673 } 1669 1674 1670 1675 /* Looks good so far, calculate parameters for the update */ 1671 - incr = AMDGPU_GPU_PAGE_SIZE << shift; 1676 + incr = (uint64_t)AMDGPU_GPU_PAGE_SIZE << shift; 1672 1677 mask = amdgpu_vm_entries_mask(adev, cursor.level); 1673 1678 pe_start = ((cursor.pfn >> shift) & mask) * 8; 1674 - entry_end = (mask + 1) << shift; 1679 + entry_end = (uint64_t)(mask + 1) << shift; 1675 1680 entry_end += cursor.pfn & ~(entry_end - 1); 1676 1681 entry_end = min(entry_end, end); 1677 1682 ··· 1684 1689 flags | AMDGPU_PTE_FRAG(frag)); 1685 1690 1686 1691 pe_start += nptes * 8; 1687 - dst += nptes * AMDGPU_GPU_PAGE_SIZE << shift; 1692 + dst += (uint64_t)nptes * AMDGPU_GPU_PAGE_SIZE << shift; 1688 1693 1689 1694 frag_start = upd_end; 1690 1695 if (frag_start >= frag_end) { ··· 1696 1701 } 1697 1702 } while (frag_start < entry_end); 1698 1703 1699 - if (frag >= shift) 1704 + if (amdgpu_vm_pt_descendant(adev, &cursor)) { 1705 + /* Mark all child entries as huge */ 1706 + while (cursor.pfn < frag_start) { 1707 + cursor.entry->huge = true; 1708 + amdgpu_vm_pt_next(adev, &cursor); 1709 + } 1710 + 1711 + } else if (frag >= shift) { 1712 + /* or just move on to the next on the same level. */ 1700 1713 amdgpu_vm_pt_next(adev, &cursor); 1714 + } 1701 1715 } 1702 1716 1703 1717 return 0;
+4 -3
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 2440 2440 #endif 2441 2441 2442 2442 WREG32_FIELD15(GC, 0, RLC_CNTL, RLC_ENABLE_F32, 1); 2443 + udelay(50); 2443 2444 2444 2445 /* carrizo do enable cp interrupt after cp inited */ 2445 - if (!(adev->flags & AMD_IS_APU)) 2446 + if (!(adev->flags & AMD_IS_APU)) { 2446 2447 gfx_v9_0_enable_gui_idle_interrupt(adev, true); 2447 - 2448 - udelay(50); 2448 + udelay(50); 2449 + } 2449 2450 2450 2451 #ifdef AMDGPU_RLC_DEBUG_RETRY 2451 2452 /* RLC_GPM_GENERAL_6 : RLC Ucode version */
+3 -3
drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
··· 72 72 73 73 /* Program the system aperture low logical page number. */ 74 74 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR, 75 - min(adev->gmc.vram_start, adev->gmc.agp_start) >> 18); 75 + min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18); 76 76 77 77 if (adev->asic_type == CHIP_RAVEN && adev->rev_id >= 0x8) 78 78 /* ··· 82 82 * to get rid of the VM fault and hardware hang. 83 83 */ 84 84 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR, 85 - max((adev->gmc.vram_end >> 18) + 0x1, 85 + max((adev->gmc.fb_end >> 18) + 0x1, 86 86 adev->gmc.agp_end >> 18)); 87 87 else 88 88 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR, 89 - max(adev->gmc.vram_end, adev->gmc.agp_end) >> 18); 89 + max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18); 90 90 91 91 /* Set default page address. */ 92 92 value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start
+1
drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
··· 46 46 MODULE_FIRMWARE("amdgpu/pitcairn_mc.bin"); 47 47 MODULE_FIRMWARE("amdgpu/verde_mc.bin"); 48 48 MODULE_FIRMWARE("amdgpu/oland_mc.bin"); 49 + MODULE_FIRMWARE("amdgpu/hainan_mc.bin"); 49 50 MODULE_FIRMWARE("amdgpu/si58_mc.bin"); 50 51 51 52 #define MC_SEQ_MISC0__MT__MASK 0xf0000000
+3 -3
drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
··· 90 90 91 91 /* Program the system aperture low logical page number. */ 92 92 WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR, 93 - min(adev->gmc.vram_start, adev->gmc.agp_start) >> 18); 93 + min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18); 94 94 95 95 if (adev->asic_type == CHIP_RAVEN && adev->rev_id >= 0x8) 96 96 /* ··· 100 100 * to get rid of the VM fault and hardware hang. 101 101 */ 102 102 WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR, 103 - max((adev->gmc.vram_end >> 18) + 0x1, 103 + max((adev->gmc.fb_end >> 18) + 0x1, 104 104 adev->gmc.agp_end >> 18)); 105 105 else 106 106 WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR, 107 - max(adev->gmc.vram_end, adev->gmc.agp_end) >> 18); 107 + max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18); 108 108 109 109 /* Set default page address. */ 110 110 value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start +
+32 -7
drivers/gpu/drm/amd/amdgpu/soc15.c
··· 65 65 #define mmMP0_MISC_LIGHT_SLEEP_CTRL 0x01ba 66 66 #define mmMP0_MISC_LIGHT_SLEEP_CTRL_BASE_IDX 0 67 67 68 + /* for Vega20 register name change */ 69 + #define mmHDP_MEM_POWER_CTRL 0x00d4 70 + #define HDP_MEM_POWER_CTRL__IPH_MEM_POWER_CTRL_EN_MASK 0x00000001L 71 + #define HDP_MEM_POWER_CTRL__IPH_MEM_POWER_LS_EN_MASK 0x00000002L 72 + #define HDP_MEM_POWER_CTRL__RC_MEM_POWER_CTRL_EN_MASK 0x00010000L 73 + #define HDP_MEM_POWER_CTRL__RC_MEM_POWER_LS_EN_MASK 0x00020000L 74 + #define mmHDP_MEM_POWER_CTRL_BASE_IDX 0 68 75 /* 69 76 * Indirect registers accessor 70 77 */ ··· 877 870 { 878 871 uint32_t def, data; 879 872 880 - def = data = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS)); 873 + if (adev->asic_type == CHIP_VEGA20) { 874 + def = data = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_CTRL)); 881 875 882 - if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS)) 883 - data |= HDP_MEM_POWER_LS__LS_ENABLE_MASK; 884 - else 885 - data &= ~HDP_MEM_POWER_LS__LS_ENABLE_MASK; 876 + if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS)) 877 + data |= HDP_MEM_POWER_CTRL__IPH_MEM_POWER_CTRL_EN_MASK | 878 + HDP_MEM_POWER_CTRL__IPH_MEM_POWER_LS_EN_MASK | 879 + HDP_MEM_POWER_CTRL__RC_MEM_POWER_CTRL_EN_MASK | 880 + HDP_MEM_POWER_CTRL__RC_MEM_POWER_LS_EN_MASK; 881 + else 882 + data &= ~(HDP_MEM_POWER_CTRL__IPH_MEM_POWER_CTRL_EN_MASK | 883 + HDP_MEM_POWER_CTRL__IPH_MEM_POWER_LS_EN_MASK | 884 + HDP_MEM_POWER_CTRL__RC_MEM_POWER_CTRL_EN_MASK | 885 + HDP_MEM_POWER_CTRL__RC_MEM_POWER_LS_EN_MASK); 886 886 887 - if (def != data) 888 - WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS), data); 887 + if (def != data) 888 + WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_CTRL), data); 889 + } else { 890 + def = data = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS)); 891 + 892 + if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS)) 893 + data |= HDP_MEM_POWER_LS__LS_ENABLE_MASK; 894 + else 895 + data &= ~HDP_MEM_POWER_LS__LS_ENABLE_MASK; 896 + 897 + if (def != data) 898 
+ WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS), data); 899 + } 889 900 } 890 901 891 902 static void soc15_update_drm_clock_gating(struct amdgpu_device *adev, bool enable)
+1 -1
drivers/gpu/drm/amd/amdgpu/vega10_ih.c
··· 129 129 else 130 130 wptr_off = adev->wb.gpu_addr + (adev->irq.ih.wptr_offs * 4); 131 131 WREG32_SOC15(OSSSYS, 0, mmIH_RB_WPTR_ADDR_LO, lower_32_bits(wptr_off)); 132 - WREG32_SOC15(OSSSYS, 0, mmIH_RB_WPTR_ADDR_HI, upper_32_bits(wptr_off) & 0xFF); 132 + WREG32_SOC15(OSSSYS, 0, mmIH_RB_WPTR_ADDR_HI, upper_32_bits(wptr_off) & 0xFFFF); 133 133 134 134 /* set rptr, wptr to 0 */ 135 135 WREG32_SOC15(OSSSYS, 0, mmIH_RB_RPTR, 0);
+1
drivers/gpu/drm/amd/amdgpu/vega20_reg_init.c
··· 49 49 adev->reg_offset[SMUIO_HWIP][i] = (uint32_t *)(&(SMUIO_BASE.instance[i])); 50 50 adev->reg_offset[NBIF_HWIP][i] = (uint32_t *)(&(NBIO_BASE.instance[i])); 51 51 adev->reg_offset[THM_HWIP][i] = (uint32_t *)(&(THM_BASE.instance[i])); 52 + adev->reg_offset[CLK_HWIP][i] = (uint32_t *)(&(CLK_BASE.instance[i])); 52 53 } 53 54 return 0; 54 55 }
+24 -19
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 429 429 adev->asic_type < CHIP_RAVEN) 430 430 init_data.flags.gpu_vm_support = true; 431 431 432 + if (amdgpu_dc_feature_mask & DC_FBC_MASK) 433 + init_data.flags.fbc_support = true; 434 + 432 435 /* Display Core create. */ 433 436 adev->dm.dc = dc_create(&init_data); 434 437 ··· 1527 1524 { 1528 1525 struct amdgpu_display_manager *dm = bl_get_data(bd); 1529 1526 1530 - /* 1531 - * PWM interperts 0 as 100% rather than 0% because of HW 1532 - * limitation for level 0.So limiting minimum brightness level 1533 - * to 1. 1534 - */ 1535 - if (bd->props.brightness < 1) 1536 - return 1; 1537 1527 if (dc_link_set_backlight_level(dm->backlight_link, 1538 1528 bd->props.brightness, 0, 0)) 1539 1529 return 0; ··· 2358 2362 static enum dc_color_depth 2359 2363 convert_color_depth_from_display_info(const struct drm_connector *connector) 2360 2364 { 2365 + struct dm_connector_state *dm_conn_state = 2366 + to_dm_connector_state(connector->state); 2361 2367 uint32_t bpc = connector->display_info.bpc; 2368 + 2369 + /* TODO: Remove this when there's support for max_bpc in drm */ 2370 + if (dm_conn_state && bpc > dm_conn_state->max_bpc) 2371 + /* Round down to nearest even number. 
*/ 2372 + bpc = dm_conn_state->max_bpc - (dm_conn_state->max_bpc & 1); 2362 2373 2363 2374 switch (bpc) { 2364 2375 case 0: ··· 2710 2707 drm_connector = &aconnector->base; 2711 2708 2712 2709 if (!aconnector->dc_sink) { 2713 - /* 2714 - * Create dc_sink when necessary to MST 2715 - * Don't apply fake_sink to MST 2716 - */ 2717 - if (aconnector->mst_port) { 2718 - dm_dp_mst_dc_sink_create(drm_connector); 2719 - return stream; 2710 + if (!aconnector->mst_port) { 2711 + sink = create_fake_sink(aconnector); 2712 + if (!sink) 2713 + return stream; 2720 2714 } 2721 - 2722 - sink = create_fake_sink(aconnector); 2723 - if (!sink) 2724 - return stream; 2725 2715 } else { 2726 2716 sink = aconnector->dc_sink; 2727 2717 } ··· 2950 2954 } else if (property == adev->mode_info.underscan_property) { 2951 2955 dm_new_state->underscan_enable = val; 2952 2956 ret = 0; 2957 + } else if (property == adev->mode_info.max_bpc_property) { 2958 + dm_new_state->max_bpc = val; 2959 + ret = 0; 2953 2960 } 2954 2961 2955 2962 return ret; ··· 2994 2995 ret = 0; 2995 2996 } else if (property == adev->mode_info.underscan_property) { 2996 2997 *val = dm_state->underscan_enable; 2998 + ret = 0; 2999 + } else if (property == adev->mode_info.max_bpc_property) { 3000 + *val = dm_state->max_bpc; 2997 3001 ret = 0; 2998 3002 } 2999 3003 return ret; ··· 3310 3308 static const struct drm_plane_funcs dm_plane_funcs = { 3311 3309 .update_plane = drm_atomic_helper_update_plane, 3312 3310 .disable_plane = drm_atomic_helper_disable_plane, 3313 - .destroy = drm_plane_cleanup, 3311 + .destroy = drm_primary_helper_destroy, 3314 3312 .reset = dm_drm_plane_reset, 3315 3313 .atomic_duplicate_state = dm_drm_plane_duplicate_state, 3316 3314 .atomic_destroy_state = dm_drm_plane_destroy_state, ··· 3807 3805 0); 3808 3806 drm_object_attach_property(&aconnector->base.base, 3809 3807 adev->mode_info.underscan_vborder_property, 3808 + 0); 3809 + drm_object_attach_property(&aconnector->base.base, 3810 + 
adev->mode_info.max_bpc_property, 3810 3811 0); 3811 3812 3812 3813 }
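The `max_bpc` clamp added to `convert_color_depth_from_display_info()` above rounds the cap down to the nearest even value by clearing its low bit. A minimal standalone sketch of that arithmetic (the `clamp_bpc` helper name is invented for illustration; it is not a driver function):

```c
#include <stdint.h>

/* Clamp bpc to max_bpc; if the cap applies, round it down to the
 * nearest even value by clearing the low bit, mirroring the diff's
 * `max_bpc - (max_bpc & 1)` expression. */
static uint32_t clamp_bpc(uint32_t bpc, uint32_t max_bpc)
{
	if (bpc > max_bpc)
		bpc = max_bpc - (max_bpc & 1);
	return bpc;
}
```

Rounding down to even matters because the later `switch (bpc)` only handles even depths (6, 8, 10, ...).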
+1 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
··· 160 160 struct mutex hpd_lock; 161 161 162 162 bool fake_enable; 163 - 164 - bool mst_connected; 165 163 }; 166 164 167 165 #define to_amdgpu_dm_connector(x) container_of(x, struct amdgpu_dm_connector, base) ··· 204 206 enum amdgpu_rmx_type scaling; 205 207 uint8_t underscan_vborder; 206 208 uint8_t underscan_hborder; 209 + uint8_t max_bpc; 207 210 bool underscan_enable; 208 211 bool freesync_enable; 209 212 bool freesync_capable;
+9 -75
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
··· 205 205 .atomic_get_property = amdgpu_dm_connector_atomic_get_property 206 206 }; 207 207 208 - void dm_dp_mst_dc_sink_create(struct drm_connector *connector) 209 - { 210 - struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector); 211 - struct dc_sink *dc_sink; 212 - struct dc_sink_init_data init_params = { 213 - .link = aconnector->dc_link, 214 - .sink_signal = SIGNAL_TYPE_DISPLAY_PORT_MST }; 215 - 216 - /* FIXME none of this is safe. we shouldn't touch aconnector here in 217 - * atomic_check 218 - */ 219 - 220 - /* 221 - * TODO: Need to further figure out why ddc.algo is NULL while MST port exists 222 - */ 223 - if (!aconnector->port || !aconnector->port->aux.ddc.algo) 224 - return; 225 - 226 - ASSERT(aconnector->edid); 227 - 228 - dc_sink = dc_link_add_remote_sink( 229 - aconnector->dc_link, 230 - (uint8_t *)aconnector->edid, 231 - (aconnector->edid->extensions + 1) * EDID_LENGTH, 232 - &init_params); 233 - 234 - dc_sink->priv = aconnector; 235 - aconnector->dc_sink = dc_sink; 236 - 237 - if (aconnector->dc_sink) 238 - amdgpu_dm_update_freesync_caps( 239 - connector, aconnector->edid); 240 - } 241 - 242 208 static int dm_dp_mst_get_modes(struct drm_connector *connector) 243 209 { 244 210 struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector); ··· 285 319 struct amdgpu_device *adev = dev->dev_private; 286 320 struct amdgpu_encoder *amdgpu_encoder; 287 321 struct drm_encoder *encoder; 288 - const struct drm_connector_helper_funcs *connector_funcs = 289 - connector->base.helper_private; 290 - struct drm_encoder *enc_master = 291 - connector_funcs->best_encoder(&connector->base); 292 322 293 - DRM_DEBUG_KMS("enc master is %p\n", enc_master); 294 323 amdgpu_encoder = kzalloc(sizeof(*amdgpu_encoder), GFP_KERNEL); 295 324 if (!amdgpu_encoder) 296 325 return NULL; ··· 315 354 struct amdgpu_device *adev = dev->dev_private; 316 355 struct amdgpu_dm_connector *aconnector; 317 356 struct drm_connector *connector; 318 - struct 
drm_connector_list_iter conn_iter; 319 - 320 - drm_connector_list_iter_begin(dev, &conn_iter); 321 - drm_for_each_connector_iter(connector, &conn_iter) { 322 - aconnector = to_amdgpu_dm_connector(connector); 323 - if (aconnector->mst_port == master 324 - && !aconnector->port) { 325 - DRM_INFO("DM_MST: reusing connector: %p [id: %d] [master: %p]\n", 326 - aconnector, connector->base.id, aconnector->mst_port); 327 - 328 - aconnector->port = port; 329 - drm_connector_set_path_property(connector, pathprop); 330 - 331 - drm_connector_list_iter_end(&conn_iter); 332 - aconnector->mst_connected = true; 333 - return &aconnector->base; 334 - } 335 - } 336 - drm_connector_list_iter_end(&conn_iter); 337 357 338 358 aconnector = kzalloc(sizeof(*aconnector), GFP_KERNEL); 339 359 if (!aconnector) ··· 342 400 master->connector_id); 343 401 344 402 aconnector->mst_encoder = dm_dp_create_fake_mst_encoder(master); 403 + drm_connector_attach_encoder(&aconnector->base, 404 + &aconnector->mst_encoder->base); 345 405 346 - /* 347 - * TODO: understand why this one is needed 348 - */ 349 406 drm_object_attach_property( 350 407 &connector->base, 351 408 dev->mode_config.path_property, ··· 362 421 */ 363 422 amdgpu_dm_connector_funcs_reset(connector); 364 423 365 - aconnector->mst_connected = true; 366 - 367 424 DRM_INFO("DM_MST: added connector: %p [id: %d] [master: %p]\n", 368 425 aconnector, connector->base.id, aconnector->mst_port); 369 426 ··· 373 434 static void dm_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr, 374 435 struct drm_connector *connector) 375 436 { 437 + struct amdgpu_dm_connector *master = container_of(mgr, struct amdgpu_dm_connector, mst_mgr); 438 + struct drm_device *dev = master->base.dev; 439 + struct amdgpu_device *adev = dev->dev_private; 376 440 struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector); 377 441 378 442 DRM_INFO("DM_MST: Disabling connector: %p [id: %d] [master: %p]\n", ··· 389 447 aconnector->dc_sink = NULL; 390 448 
} 391 449 392 - aconnector->mst_connected = false; 450 + drm_connector_unregister(connector); 451 + if (adev->mode_info.rfbdev) 452 + drm_fb_helper_remove_one_connector(&adev->mode_info.rfbdev->helper, connector); 453 + drm_connector_put(connector); 393 454 } 394 455 395 456 static void dm_dp_mst_hotplug(struct drm_dp_mst_topology_mgr *mgr) ··· 403 458 drm_kms_helper_hotplug_event(dev); 404 459 } 405 460 406 - static void dm_dp_mst_link_status_reset(struct drm_connector *connector) 407 - { 408 - mutex_lock(&connector->dev->mode_config.mutex); 409 - drm_connector_set_link_status_property(connector, DRM_MODE_LINK_STATUS_BAD); 410 - mutex_unlock(&connector->dev->mode_config.mutex); 411 - } 412 - 413 461 static void dm_dp_mst_register_connector(struct drm_connector *connector) 414 462 { 415 463 struct drm_device *dev = connector->dev; 416 464 struct amdgpu_device *adev = dev->dev_private; 417 - struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector); 418 465 419 466 if (adev->mode_info.rfbdev) 420 467 drm_fb_helper_add_one_connector(&adev->mode_info.rfbdev->helper, connector); ··· 414 477 DRM_ERROR("adev->mode_info.rfbdev is NULL\n"); 415 478 416 479 drm_connector_register(connector); 417 - 418 - if (aconnector->mst_connected) 419 - dm_dp_mst_link_status_reset(connector); 420 480 } 421 481 422 482 static const struct drm_dp_mst_topology_cbs dm_mst_cbs = {
-1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
··· 31 31 32 32 void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm, 33 33 struct amdgpu_dm_connector *aconnector); 34 - void dm_dp_mst_dc_sink_create(struct drm_connector *connector); 35 34 36 35 #endif
+2 -2
drivers/gpu/drm/amd/display/dc/core/dc_link.c
··· 1722 1722 i2c_success = i2c_write(pipe_ctx, slave_address, 1723 1723 buffer, sizeof(buffer)); 1724 1724 RETIMER_REDRIVER_INFO("retimer write to slave_address = 0x%x,\ 1725 - offset = 0x%d, reg_val = 0x%d, i2c_success = %d\n", 1725 + offset = 0x%x, reg_val = 0x%x, i2c_success = %d\n", 1726 1726 slave_address, buffer[0], buffer[1], i2c_success?1:0); 1727 1727 if (!i2c_success) 1728 1728 /* Write failure */ ··· 1734 1734 i2c_success = i2c_write(pipe_ctx, slave_address, 1735 1735 buffer, sizeof(buffer)); 1736 1736 RETIMER_REDRIVER_INFO("retimer write to slave_address = 0x%x,\ 1737 - offset = 0x%d, reg_val = 0x%d, i2c_success = %d\n", 1737 + offset = 0x%x, reg_val = 0x%x, i2c_success = %d\n", 1738 1738 slave_address, buffer[0], buffer[1], i2c_success?1:0); 1739 1739 if (!i2c_success) 1740 1740 /* Write failure */
+1
drivers/gpu/drm/amd/display/dc/dc.h
··· 169 169 struct dc_config { 170 170 bool gpu_vm_support; 171 171 bool disable_disp_pll_sharing; 172 + bool fbc_support; 172 173 }; 173 174 174 175 enum visual_confirm {
+6 -1
drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
··· 1736 1736 if (events->force_trigger) 1737 1737 value |= 0x1; 1738 1738 1739 - value |= 0x84; 1739 + if (num_pipes) { 1740 + struct dc *dc = pipe_ctx[0]->stream->ctx->dc; 1741 + 1742 + if (dc->fbc_compressor) 1743 + value |= 0x84; 1744 + } 1740 1745 1741 1746 for (i = 0; i < num_pipes; i++) 1742 1747 pipe_ctx[i]->stream_res.tg->funcs->
+2 -1
drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
··· 1362 1362 pool->base.sw_i2cs[i] = NULL; 1363 1363 } 1364 1364 1365 - dc->fbc_compressor = dce110_compressor_create(ctx); 1365 + if (dc->config.fbc_support) 1366 + dc->fbc_compressor = dce110_compressor_create(ctx); 1366 1367 1367 1368 if (!underlay_create(ctx, &pool->base)) 1368 1369 goto res_create_fail;
+4
drivers/gpu/drm/amd/include/amd_shared.h
··· 133 133 PP_AVFS_MASK = 0x40000, 134 134 }; 135 135 136 + enum DC_FEATURE_MASK { 137 + DC_FBC_MASK = 0x1, 138 + }; 139 + 136 140 /** 137 141 * struct amd_ip_funcs - general hooks for managing amdgpu IP Blocks 138 142 */
+5 -2
drivers/gpu/drm/amd/include/atomfirmware.h
··· 1325 1325 struct atom_common_table_header table_header; 1326 1326 uint8_t smuip_min_ver; 1327 1327 uint8_t smuip_max_ver; 1328 - uint8_t smu_rsd1; 1328 + uint8_t waflclk_ss_mode; 1329 1329 uint8_t gpuclk_ss_mode; 1330 1330 uint16_t sclk_ss_percentage; 1331 1331 uint16_t sclk_ss_rate_10hz; ··· 1355 1355 uint32_t syspll3_1_vco_freq_10khz; 1356 1356 uint32_t bootup_fclk_10khz; 1357 1357 uint32_t bootup_waflclk_10khz; 1358 - uint32_t reserved[3]; 1358 + uint32_t smu_info_caps; 1359 + uint16_t waflclk_ss_percentage; // in unit of 0.001% 1360 + uint16_t smuinitoffset; 1361 + uint32_t reserved; 1359 1362 }; 1360 1363 1361 1364 /*
+10 -10
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
··· 4525 4525 struct smu7_single_dpm_table *sclk_table = &(data->dpm_table.sclk_table); 4526 4526 struct smu7_single_dpm_table *golden_sclk_table = 4527 4527 &(data->golden_dpm_table.sclk_table); 4528 - int value; 4528 + int value = sclk_table->dpm_levels[sclk_table->count - 1].value; 4529 + int golden_value = golden_sclk_table->dpm_levels 4530 + [golden_sclk_table->count - 1].value; 4529 4531 4530 - value = (sclk_table->dpm_levels[sclk_table->count - 1].value - 4531 - golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value) * 4532 - 100 / 4533 - golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value; 4532 + value -= golden_value; 4533 + value = DIV_ROUND_UP(value * 100, golden_value); 4534 4534 4535 4535 return value; 4536 4536 } ··· 4567 4567 struct smu7_single_dpm_table *mclk_table = &(data->dpm_table.mclk_table); 4568 4568 struct smu7_single_dpm_table *golden_mclk_table = 4569 4569 &(data->golden_dpm_table.mclk_table); 4570 - int value; 4570 + int value = mclk_table->dpm_levels[mclk_table->count - 1].value; 4571 + int golden_value = golden_mclk_table->dpm_levels 4572 + [golden_mclk_table->count - 1].value; 4571 4573 4572 - value = (mclk_table->dpm_levels[mclk_table->count - 1].value - 4573 - golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value) * 4574 - 100 / 4575 - golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value; 4574 + value -= golden_value; 4575 + value = DIV_ROUND_UP(value * 100, golden_value); 4576 4576 4577 4577 return value; 4578 4578 }
+16 -16
drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c
··· 713 713 for (i = 0; i < wm_with_clock_ranges->num_wm_dmif_sets; i++) { 714 714 table->WatermarkRow[1][i].MinClock = 715 715 cpu_to_le16((uint16_t) 716 - (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_dcfclk_clk_in_khz) / 717 - 1000); 716 + (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_dcfclk_clk_in_khz / 717 + 1000)); 718 718 table->WatermarkRow[1][i].MaxClock = 719 719 cpu_to_le16((uint16_t) 720 - (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_dcfclk_clk_in_khz) / 721 - 1000); 720 + (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_dcfclk_clk_in_khz / 721 + 1000)); 722 722 table->WatermarkRow[1][i].MinUclk = 723 723 cpu_to_le16((uint16_t) 724 - (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_mem_clk_in_khz) / 725 - 1000); 724 + (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_mem_clk_in_khz / 725 + 1000)); 726 726 table->WatermarkRow[1][i].MaxUclk = 727 727 cpu_to_le16((uint16_t) 728 - (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_mem_clk_in_khz) / 729 - 1000); 728 + (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_mem_clk_in_khz / 729 + 1000)); 730 730 table->WatermarkRow[1][i].WmSetting = (uint8_t) 731 731 wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_set_id; 732 732 } ··· 734 734 for (i = 0; i < wm_with_clock_ranges->num_wm_mcif_sets; i++) { 735 735 table->WatermarkRow[0][i].MinClock = 736 736 cpu_to_le16((uint16_t) 737 - (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_socclk_clk_in_khz) / 738 - 1000); 737 + (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_socclk_clk_in_khz / 738 + 1000)); 739 739 table->WatermarkRow[0][i].MaxClock = 740 740 cpu_to_le16((uint16_t) 741 - (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_socclk_clk_in_khz) / 742 - 1000); 741 + (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_socclk_clk_in_khz / 742 + 1000)); 743 743 table->WatermarkRow[0][i].MinUclk = 744 744 cpu_to_le16((uint16_t) 745 - 
(wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_mem_clk_in_khz) / 746 - 1000); 745 + (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_mem_clk_in_khz / 746 + 1000)); 747 747 table->WatermarkRow[0][i].MaxUclk = 748 748 cpu_to_le16((uint16_t) 749 - (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_mem_clk_in_khz) / 750 - 1000); 749 + (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_mem_clk_in_khz / 750 + 1000)); 751 751 table->WatermarkRow[0][i].WmSetting = (uint8_t) 752 752 wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_set_id; 753 753 }
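The smu_helper.c hunks above only move a parenthesis, but the effect is significant: the old code applied `(uint16_t)` to the kHz value *before* dividing by 1000, truncating any clock above 65535 kHz. A sketch of the two orderings (helper names invented; values are illustrative):

```c
#include <stdint.h>

/* Broken, matching the old code: the kHz value is narrowed to
 * 16 bits first, so e.g. 300000 kHz wraps to 37856 before the
 * divide. */
static uint16_t khz_to_mhz_broken(uint32_t khz)
{
	return (uint16_t)(khz) / 1000;
}

/* Fixed, matching the diff: divide first, then narrow. */
static uint16_t khz_to_mhz(uint32_t khz)
{
	return (uint16_t)(khz / 1000);
}
```

For a 300 MHz clock (300000 kHz), the broken version yields 37 MHz while the fixed version yields 300 MHz.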
+10 -15
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
··· 4522 4522 struct vega10_single_dpm_table *sclk_table = &(data->dpm_table.gfx_table); 4523 4523 struct vega10_single_dpm_table *golden_sclk_table = 4524 4524 &(data->golden_dpm_table.gfx_table); 4525 - int value; 4526 - 4527 - value = (sclk_table->dpm_levels[sclk_table->count - 1].value - 4528 - golden_sclk_table->dpm_levels 4529 - [golden_sclk_table->count - 1].value) * 4530 - 100 / 4531 - golden_sclk_table->dpm_levels 4525 + int value = sclk_table->dpm_levels[sclk_table->count - 1].value; 4526 + int golden_value = golden_sclk_table->dpm_levels 4532 4527 [golden_sclk_table->count - 1].value; 4528 + 4529 + value -= golden_value; 4530 + value = DIV_ROUND_UP(value * 100, golden_value); 4533 4531 4534 4532 return value; 4535 4533 } ··· 4573 4575 struct vega10_single_dpm_table *mclk_table = &(data->dpm_table.mem_table); 4574 4576 struct vega10_single_dpm_table *golden_mclk_table = 4575 4577 &(data->golden_dpm_table.mem_table); 4576 - int value; 4577 - 4578 - value = (mclk_table->dpm_levels 4579 - [mclk_table->count - 1].value - 4580 - golden_mclk_table->dpm_levels 4581 - [golden_mclk_table->count - 1].value) * 4582 - 100 / 4583 - golden_mclk_table->dpm_levels 4578 + int value = mclk_table->dpm_levels[mclk_table->count - 1].value; 4579 + int golden_value = golden_mclk_table->dpm_levels 4584 4580 [golden_mclk_table->count - 1].value; 4581 + 4582 + value -= golden_value; 4583 + value = DIV_ROUND_UP(value * 100, golden_value); 4585 4584 4586 4585 return value; 4587 4586 }
+10 -13
drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
··· 2243 2243 struct vega12_single_dpm_table *sclk_table = &(data->dpm_table.gfx_table); 2244 2244 struct vega12_single_dpm_table *golden_sclk_table = 2245 2245 &(data->golden_dpm_table.gfx_table); 2246 - int value; 2246 + int value = sclk_table->dpm_levels[sclk_table->count - 1].value; 2247 + int golden_value = golden_sclk_table->dpm_levels 2248 + [golden_sclk_table->count - 1].value; 2247 2249 2248 - value = (sclk_table->dpm_levels[sclk_table->count - 1].value - 2249 - golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value) * 2250 - 100 / 2251 - golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value; 2250 + value -= golden_value; 2251 + value = DIV_ROUND_UP(value * 100, golden_value); 2252 2252 2253 2253 return value; 2254 2254 } ··· 2264 2264 struct vega12_single_dpm_table *mclk_table = &(data->dpm_table.mem_table); 2265 2265 struct vega12_single_dpm_table *golden_mclk_table = 2266 2266 &(data->golden_dpm_table.mem_table); 2267 - int value; 2268 - 2269 - value = (mclk_table->dpm_levels 2270 - [mclk_table->count - 1].value - 2271 - golden_mclk_table->dpm_levels 2272 - [golden_mclk_table->count - 1].value) * 2273 - 100 / 2274 - golden_mclk_table->dpm_levels 2267 + int value = mclk_table->dpm_levels[mclk_table->count - 1].value; 2268 + int golden_value = golden_mclk_table->dpm_levels 2275 2269 [golden_mclk_table->count - 1].value; 2270 + 2271 + value -= golden_value; 2272 + value = DIV_ROUND_UP(value * 100, golden_value); 2276 2273 2277 2274 return value; 2278 2275 }
+54 -29
drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
··· 75 75 data->phy_clk_quad_eqn_b = PPREGKEY_VEGA20QUADRATICEQUATION_DFLT; 76 76 data->phy_clk_quad_eqn_c = PPREGKEY_VEGA20QUADRATICEQUATION_DFLT; 77 77 78 - data->registry_data.disallowed_features = 0x0; 78 + /* 79 + * Disable the following features for now: 80 + * GFXCLK DS 81 + * SOCLK DS 82 + * LCLK DS 83 + * DCEFCLK DS 84 + * FCLK DS 85 + * MP1CLK DS 86 + * MP0CLK DS 87 + */ 88 + data->registry_data.disallowed_features = 0xE0041C00; 79 89 data->registry_data.od_state_in_dc_support = 0; 80 90 data->registry_data.thermal_support = 1; 81 91 data->registry_data.skip_baco_hardware = 0; ··· 130 120 data->registry_data.disable_auto_wattman = 1; 131 121 data->registry_data.auto_wattman_debug = 0; 132 122 data->registry_data.auto_wattman_sample_period = 100; 123 + data->registry_data.fclk_gfxclk_ratio = 0x3F6CCCCD; 133 124 data->registry_data.auto_wattman_threshold = 50; 134 125 data->registry_data.gfxoff_controlled_by_driver = 1; 135 126 data->gfxoff_allowed = false; ··· 840 829 return 0; 841 830 } 842 831 832 + static int vega20_notify_smc_display_change(struct pp_hwmgr *hwmgr) 833 + { 834 + struct vega20_hwmgr *data = (struct vega20_hwmgr *)(hwmgr->backend); 835 + 836 + if (data->smu_features[GNLD_DPM_UCLK].enabled) 837 + return smum_send_msg_to_smc_with_parameter(hwmgr, 838 + PPSMC_MSG_SetUclkFastSwitch, 839 + 1); 840 + 841 + return 0; 842 + } 843 + 844 + static int vega20_send_clock_ratio(struct pp_hwmgr *hwmgr) 845 + { 846 + struct vega20_hwmgr *data = 847 + (struct vega20_hwmgr *)(hwmgr->backend); 848 + 849 + return smum_send_msg_to_smc_with_parameter(hwmgr, 850 + PPSMC_MSG_SetFclkGfxClkRatio, 851 + data->registry_data.fclk_gfxclk_ratio); 852 + } 853 + 843 854 static int vega20_disable_all_smu_features(struct pp_hwmgr *hwmgr) 844 855 { 845 856 struct vega20_hwmgr *data = ··· 1323 1290 &(data->dpm_table.gfx_table); 1324 1291 struct vega20_single_dpm_table *golden_sclk_table = 1325 1292 &(data->golden_dpm_table.gfx_table); 1326 - int value; 1293 + int value = 
sclk_table->dpm_levels[sclk_table->count - 1].value; 1294 + int golden_value = golden_sclk_table->dpm_levels 1295 + [golden_sclk_table->count - 1].value; 1327 1296 1328 1297 /* od percentage */ 1329 - value = DIV_ROUND_UP((sclk_table->dpm_levels[sclk_table->count - 1].value - 1330 - golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value) * 100, 1331 - golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value); 1298 + value -= golden_value; 1299 + value = DIV_ROUND_UP(value * 100, golden_value); 1332 1300 1333 1301 return value; 1334 1302 } ··· 1369 1335 &(data->dpm_table.mem_table); 1370 1336 struct vega20_single_dpm_table *golden_mclk_table = 1371 1337 &(data->golden_dpm_table.mem_table); 1372 - int value; 1338 + int value = mclk_table->dpm_levels[mclk_table->count - 1].value; 1339 + int golden_value = golden_mclk_table->dpm_levels 1340 + [golden_mclk_table->count - 1].value; 1373 1341 1374 1342 /* od percentage */ 1375 - value = DIV_ROUND_UP((mclk_table->dpm_levels[mclk_table->count - 1].value - 1376 - golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value) * 100, 1377 - golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value); 1343 + value -= golden_value; 1344 + value = DIV_ROUND_UP(value * 100, golden_value); 1378 1345 1379 1346 return value; 1380 1347 } ··· 1565 1530 result = vega20_enable_all_smu_features(hwmgr); 1566 1531 PP_ASSERT_WITH_CODE(!result, 1567 1532 "[EnableDPMTasks] Failed to enable all smu features!", 1533 + return result); 1534 + 1535 + result = vega20_notify_smc_display_change(hwmgr); 1536 + PP_ASSERT_WITH_CODE(!result, 1537 + "[EnableDPMTasks] Failed to notify smc display change!", 1538 + return result); 1539 + 1540 + result = vega20_send_clock_ratio(hwmgr); 1541 + PP_ASSERT_WITH_CODE(!result, 1542 + "[EnableDPMTasks] Failed to send clock ratio!", 1568 1543 return result); 1569 1544 1570 1545 /* Initialize UVD/VCE powergating state */ ··· 2017 1972 return ret; 2018 1973 } 2019 1974 2020 - static int 
vega20_notify_smc_display_change(struct pp_hwmgr *hwmgr, 2021 - bool has_disp) 2022 - { 2023 - struct vega20_hwmgr *data = (struct vega20_hwmgr *)(hwmgr->backend); 2024 - 2025 - if (data->smu_features[GNLD_DPM_UCLK].enabled) 2026 - return smum_send_msg_to_smc_with_parameter(hwmgr, 2027 - PPSMC_MSG_SetUclkFastSwitch, 2028 - has_disp ? 1 : 0); 2029 - 2030 - return 0; 2031 - } 2032 - 2033 1975 int vega20_display_clock_voltage_request(struct pp_hwmgr *hwmgr, 2034 1976 struct pp_display_clock_request *clock_req) 2035 1977 { ··· 2075 2043 struct PP_Clocks min_clocks = {0}; 2076 2044 struct pp_display_clock_request clock_req; 2077 2045 int ret = 0; 2078 - 2079 - if ((hwmgr->display_config->num_display > 1) && 2080 - !hwmgr->display_config->multi_monitor_in_sync && 2081 - !hwmgr->display_config->nb_pstate_switch_disable) 2082 - vega20_notify_smc_display_change(hwmgr, false); 2083 - else 2084 - vega20_notify_smc_display_change(hwmgr, true); 2085 2046 2086 2047 min_clocks.dcefClock = hwmgr->display_config->min_dcef_set_clk; 2087 2048 min_clocks.dcefClockInSR = hwmgr->display_config->min_dcef_deep_sleep_set_clk;
+1
drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.h
··· 328 328 uint8_t disable_auto_wattman; 329 329 uint32_t auto_wattman_debug; 330 330 uint32_t auto_wattman_sample_period; 331 + uint32_t fclk_gfxclk_ratio; 331 332 uint8_t auto_wattman_threshold; 332 333 uint8_t log_avfs_param; 333 334 uint8_t enable_enginess;
+2 -1
drivers/gpu/drm/amd/powerplay/inc/vega20_ppsmc.h
··· 105 105 #define PPSMC_MSG_SetSystemVirtualDramAddrHigh 0x4B 106 106 #define PPSMC_MSG_SetSystemVirtualDramAddrLow 0x4C 107 107 #define PPSMC_MSG_WaflTest 0x4D 108 - // Unused ID 0x4E to 0x50 108 + #define PPSMC_MSG_SetFclkGfxClkRatio 0x4E 109 + // Unused ID 0x4F to 0x50 109 110 #define PPSMC_MSG_AllowGfxOff 0x51 110 111 #define PPSMC_MSG_DisallowGfxOff 0x52 111 112 #define PPSMC_MSG_GetPptLimit 0x53
+21
drivers/gpu/drm/ast/ast_drv.c
··· 60 60 61 61 MODULE_DEVICE_TABLE(pci, pciidlist); 62 62 63 + static void ast_kick_out_firmware_fb(struct pci_dev *pdev) 64 + { 65 + struct apertures_struct *ap; 66 + bool primary = false; 67 + 68 + ap = alloc_apertures(1); 69 + if (!ap) 70 + return; 71 + 72 + ap->ranges[0].base = pci_resource_start(pdev, 0); 73 + ap->ranges[0].size = pci_resource_len(pdev, 0); 74 + 75 + #ifdef CONFIG_X86 76 + primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW; 77 + #endif 78 + drm_fb_helper_remove_conflicting_framebuffers(ap, "astdrmfb", primary); 79 + kfree(ap); 80 + } 81 + 63 82 static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent) 64 83 { 84 + ast_kick_out_firmware_fb(pdev); 85 + 65 86 return drm_get_pci_dev(pdev, ent, &driver); 66 87 } 67 88
+2 -1
drivers/gpu/drm/ast/ast_main.c
··· 583 583 drm_mode_config_cleanup(dev); 584 584 585 585 ast_mm_fini(ast); 586 - pci_iounmap(dev->pdev, ast->ioregs); 586 + if (ast->ioregs != ast->regs + AST_IO_MM_OFFSET) 587 + pci_iounmap(dev->pdev, ast->ioregs); 587 588 pci_iounmap(dev->pdev, ast->regs); 588 589 kfree(ast); 589 590 }
+32 -7
drivers/gpu/drm/ast/ast_mode.c
··· 568 568 } 569 569 ast_bo_unreserve(bo); 570 570 571 + ast_set_offset_reg(crtc); 571 572 ast_set_start_address_crt1(crtc, (u32)gpu_addr); 572 573 573 574 return 0; ··· 973 972 { 974 973 struct ast_i2c_chan *i2c = i2c_priv; 975 974 struct ast_private *ast = i2c->dev->dev_private; 976 - uint32_t val; 975 + uint32_t val, val2, count, pass; 977 976 978 - val = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4; 977 + count = 0; 978 + pass = 0; 979 + val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01; 980 + do { 981 + val2 = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01; 982 + if (val == val2) { 983 + pass++; 984 + } else { 985 + pass = 0; 986 + val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01; 987 + } 988 + } while ((pass < 5) && (count++ < 0x10000)); 989 + 979 990 return val & 1 ? 1 : 0; 980 991 } 981 992 ··· 995 982 { 996 983 struct ast_i2c_chan *i2c = i2c_priv; 997 984 struct ast_private *ast = i2c->dev->dev_private; 998 - uint32_t val; 985 + uint32_t val, val2, count, pass; 999 986 1000 - val = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5; 987 + count = 0; 988 + pass = 0; 989 + val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01; 990 + do { 991 + val2 = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01; 992 + if (val == val2) { 993 + pass++; 994 + } else { 995 + pass = 0; 996 + val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01; 997 + } 998 + } while ((pass < 5) && (count++ < 0x10000)); 999 + 1001 1000 return val & 1 ? 1 : 0; 1002 1001 } 1003 1002 ··· 1022 997 1023 998 for (i = 0; i < 0x10000; i++) { 1024 999 ujcrb7 = ((clock & 0x01) ? 
0 : 1); 1025 - ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xfe, ujcrb7); 1000 + ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xf4, ujcrb7); 1026 1001 jtemp = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x01); 1027 1002 if (ujcrb7 == jtemp) 1028 1003 break; ··· 1038 1013 1039 1014 for (i = 0; i < 0x10000; i++) { 1040 1015 ujcrb7 = ((data & 0x01) ? 0 : 1) << 2; 1041 - ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xfb, ujcrb7); 1016 + ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xf1, ujcrb7); 1042 1017 jtemp = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x04); 1043 1018 if (ujcrb7 == jtemp) 1044 1019 break; ··· 1279 1254 ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc7, ((y >> 8) & 0x07)); 1280 1255 1281 1256 /* dummy write to fire HWC */ 1282 - ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xCB, 0xFF, 0x00); 1257 + ast_show_cursor(crtc); 1283 1258 1284 1259 return 0; 1285 1260 }
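The ast_mode.c `get_clock()`/`get_data()` changes above replace a single GPIO-style register read with a debounce loop: keep sampling until five consecutive reads agree, capped at 0x10000 iterations. The pattern, abstracted over a caller-supplied sampler (the function-pointer interface is an illustration, not the driver's API):

```c
#include <stdint.h>

/* Return a line level only after `need` consecutive identical
 * samples, giving up after roughly `max_tries` reads -- the shape
 * of the new get_clock()/get_data() loops in ast_mode.c. */
static uint32_t debounce_read(uint32_t (*sample)(void *), void *ctx,
			      unsigned int need, unsigned int max_tries)
{
	uint32_t val = sample(ctx) & 0x01;
	unsigned int pass = 0, count = 0;

	do {
		uint32_t val2 = sample(ctx) & 0x01;

		if (val == val2) {
			pass++;
		} else {
			/* Disagreement: restart the run from a fresh sample. */
			pass = 0;
			val = sample(ctx) & 0x01;
		}
	} while (pass < need && count++ < max_tries);

	return val;
}

/* Toy sampler for demonstration: noisy for the first few reads,
 * then stable high. */
static uint32_t noisy_sample(void *ctx)
{
	unsigned int *n = ctx;

	return (*n)++ < 3 ? (*n & 1) : 1;
}
```

The iteration cap keeps a line stuck in oscillation from hanging the i2c transfer indefinitely.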
+2
drivers/gpu/drm/drm_auth.c
··· 142 142 143 143 lockdep_assert_held_once(&dev->master_mutex); 144 144 145 + WARN_ON(fpriv->is_master); 145 146 old_master = fpriv->master; 146 147 fpriv->master = drm_master_create(dev); 147 148 if (!fpriv->master) { ··· 171 170 /* drop references and restore old master on failure */ 172 171 drm_master_put(&fpriv->master); 173 172 fpriv->master = old_master; 173 + fpriv->is_master = 0; 174 174 175 175 return ret; 176 176 }
+3
drivers/gpu/drm/drm_dp_mst_topology.c
··· 1275 1275 mutex_lock(&mgr->lock); 1276 1276 mstb = mgr->mst_primary; 1277 1277 1278 + if (!mstb) 1279 + goto out; 1280 + 1278 1281 for (i = 0; i < lct - 1; i++) { 1279 1282 int shift = (i % 2) ? 0 : 4; 1280 1283 int port_num = (rad[i / 2] >> shift) & 0xf;
+3
drivers/gpu/drm/drm_fb_helper.c
··· 219 219 mutex_lock(&fb_helper->lock); 220 220 drm_connector_list_iter_begin(dev, &conn_iter); 221 221 drm_for_each_connector_iter(connector, &conn_iter) { 222 + if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK) 223 + continue; 224 + 222 225 ret = __drm_fb_helper_add_one_connector(fb_helper, connector); 223 226 if (ret) 224 227 goto fail;
+1 -1
drivers/gpu/drm/drm_fourcc.c
··· 97 97 98 98 /** 99 99 * drm_driver_legacy_fb_format - compute drm fourcc code from legacy description 100 + * @dev: DRM device 100 101 * @bpp: bits per pixels 101 102 * @depth: bit depth per pixel 102 - * @native: use host native byte order 103 103 * 104 104 * Computes a drm fourcc pixel format code for the given @bpp/@depth values. 105 105 * Unlike drm_mode_legacy_fb_format() this looks at the drivers mode_config,
+1 -1
drivers/gpu/drm/etnaviv/etnaviv_sched.c
··· 93 93 * If the GPU managed to complete this jobs fence, the timout is 94 94 * spurious. Bail out. 95 95 */ 96 - if (fence_completed(gpu, submit->out_fence->seqno)) 96 + if (dma_fence_is_signaled(submit->out_fence)) 97 97 return; 98 98 99 99 /*
-9
drivers/gpu/drm/exynos/exynos5433_drm_decon.c
··· 164 164 return frm; 165 165 } 166 166 167 - static u32 decon_get_vblank_counter(struct exynos_drm_crtc *crtc) 168 - { 169 - struct decon_context *ctx = crtc->ctx; 170 - 171 - return decon_get_frame_count(ctx, false); 172 - } 173 - 174 167 static void decon_setup_trigger(struct decon_context *ctx) 175 168 { 176 169 if (!ctx->crtc->i80_mode && !(ctx->out_type & I80_HW_TRG)) ··· 529 536 .disable = decon_disable, 530 537 .enable_vblank = decon_enable_vblank, 531 538 .disable_vblank = decon_disable_vblank, 532 - .get_vblank_counter = decon_get_vblank_counter, 533 539 .atomic_begin = decon_atomic_begin, 534 540 .update_plane = decon_update_plane, 535 541 .disable_plane = decon_disable_plane, ··· 546 554 int ret; 547 555 548 556 ctx->drm_dev = drm_dev; 549 - drm_dev->max_vblank_count = 0xffffffff; 550 557 551 558 for (win = ctx->first_win; win < WINDOWS_NR; win++) { 552 559 ctx->configs[win].pixel_formats = decon_formats;
-11
drivers/gpu/drm/exynos/exynos_drm_crtc.c
··· 162 162 exynos_crtc->ops->disable_vblank(exynos_crtc); 163 163 } 164 164 165 - static u32 exynos_drm_crtc_get_vblank_counter(struct drm_crtc *crtc) 166 - { 167 - struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc); 168 - 169 - if (exynos_crtc->ops->get_vblank_counter) 170 - return exynos_crtc->ops->get_vblank_counter(exynos_crtc); 171 - 172 - return 0; 173 - } 174 - 175 165 static const struct drm_crtc_funcs exynos_crtc_funcs = { 176 166 .set_config = drm_atomic_helper_set_config, 177 167 .page_flip = drm_atomic_helper_page_flip, ··· 171 181 .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, 172 182 .enable_vblank = exynos_drm_crtc_enable_vblank, 173 183 .disable_vblank = exynos_drm_crtc_disable_vblank, 174 - .get_vblank_counter = exynos_drm_crtc_get_vblank_counter, 175 184 }; 176 185 177 186 struct exynos_drm_crtc *exynos_drm_crtc_create(struct drm_device *drm_dev,
-1
drivers/gpu/drm/exynos/exynos_drm_drv.h
··· 135 135 void (*disable)(struct exynos_drm_crtc *crtc); 136 136 int (*enable_vblank)(struct exynos_drm_crtc *crtc); 137 137 void (*disable_vblank)(struct exynos_drm_crtc *crtc); 138 - u32 (*get_vblank_counter)(struct exynos_drm_crtc *crtc); 139 138 enum drm_mode_status (*mode_valid)(struct exynos_drm_crtc *crtc, 140 139 const struct drm_display_mode *mode); 141 140 bool (*mode_fixup)(struct exynos_drm_crtc *crtc,
+11 -3
drivers/gpu/drm/exynos/exynos_drm_dsi.c
··· 14 14 15 15 #include <drm/drmP.h> 16 16 #include <drm/drm_crtc_helper.h> 17 + #include <drm/drm_fb_helper.h> 17 18 #include <drm/drm_mipi_dsi.h> 18 19 #include <drm/drm_panel.h> 19 20 #include <drm/drm_atomic_helper.h> ··· 1475 1474 { 1476 1475 struct exynos_dsi *dsi = encoder_to_dsi(encoder); 1477 1476 struct drm_connector *connector = &dsi->connector; 1477 + struct drm_device *drm = encoder->dev; 1478 1478 int ret; 1479 1479 1480 1480 connector->polled = DRM_CONNECTOR_POLL_HPD; 1481 1481 1482 - ret = drm_connector_init(encoder->dev, connector, 1483 - &exynos_dsi_connector_funcs, 1482 + ret = drm_connector_init(drm, connector, &exynos_dsi_connector_funcs, 1484 1483 DRM_MODE_CONNECTOR_DSI); 1485 1484 if (ret) { 1486 1485 DRM_ERROR("Failed to initialize connector with drm\n"); ··· 1490 1489 connector->status = connector_status_disconnected; 1491 1490 drm_connector_helper_add(connector, &exynos_dsi_connector_helper_funcs); 1492 1491 drm_connector_attach_encoder(connector, encoder); 1492 + if (!drm->registered) 1493 + return 0; 1493 1494 1495 + connector->funcs->reset(connector); 1496 + drm_fb_helper_add_one_connector(drm->fb_helper, connector); 1497 + drm_connector_register(connector); 1494 1498 return 0; 1495 1499 } 1496 1500 ··· 1533 1527 } 1534 1528 1535 1529 dsi->panel = of_drm_find_panel(device->dev.of_node); 1536 - if (dsi->panel) { 1530 + if (IS_ERR(dsi->panel)) { 1531 + dsi->panel = NULL; 1532 + } else { 1537 1533 drm_panel_attach(dsi->panel, &dsi->connector); 1538 1534 dsi->connector.status = connector_status_connected; 1539 1535 }
+1 -1
drivers/gpu/drm/exynos/exynos_drm_fbdev.c
··· 192 192 struct drm_fb_helper *helper; 193 193 int ret; 194 194 195 - if (!dev->mode_config.num_crtc || !dev->mode_config.num_connector) 195 + if (!dev->mode_config.num_crtc) 196 196 return 0; 197 197 198 198 fbdev = kzalloc(sizeof(*fbdev), GFP_KERNEL);
+2
drivers/gpu/drm/i915/gvt/aperture_gm.c
··· 61 61 } 62 62 63 63 mutex_lock(&dev_priv->drm.struct_mutex); 64 + mmio_hw_access_pre(dev_priv); 64 65 ret = i915_gem_gtt_insert(&dev_priv->ggtt.vm, node, 65 66 size, I915_GTT_PAGE_SIZE, 66 67 I915_COLOR_UNEVICTABLE, 67 68 start, end, flags); 69 + mmio_hw_access_post(dev_priv); 68 70 mutex_unlock(&dev_priv->drm.struct_mutex); 69 71 if (ret) 70 72 gvt_err("fail to alloc %s gm space from host\n",
+55 -53
drivers/gpu/drm/i915/gvt/gtt.c
··· 1905 1905 vgpu_free_mm(mm); 1906 1906 return ERR_PTR(-ENOMEM); 1907 1907 } 1908 - mm->ggtt_mm.last_partial_off = -1UL; 1909 1908 1910 1909 return mm; 1911 1910 } ··· 1929 1930 invalidate_ppgtt_mm(mm); 1930 1931 } else { 1931 1932 vfree(mm->ggtt_mm.virtual_ggtt); 1932 - mm->ggtt_mm.last_partial_off = -1UL; 1933 1933 } 1934 1934 1935 1935 vgpu_free_mm(mm); ··· 2166 2168 struct intel_gvt_gtt_entry e, m; 2167 2169 dma_addr_t dma_addr; 2168 2170 int ret; 2171 + struct intel_gvt_partial_pte *partial_pte, *pos, *n; 2172 + bool partial_update = false; 2169 2173 2170 2174 if (bytes != 4 && bytes != 8) 2171 2175 return -EINVAL; ··· 2178 2178 if (!vgpu_gmadr_is_valid(vgpu, gma)) 2179 2179 return 0; 2180 2180 2181 - ggtt_get_guest_entry(ggtt_mm, &e, g_gtt_index); 2182 - 2181 + e.type = GTT_TYPE_GGTT_PTE; 2183 2182 memcpy((void *)&e.val64 + (off & (info->gtt_entry_size - 1)), p_data, 2184 2183 bytes); 2185 2184 2186 2185 /* If ggtt entry size is 8 bytes, and it's split into two 4 bytes 2187 - * write, we assume the two 4 bytes writes are consecutive. 2188 - * Otherwise, we abort and report error 2186 + * write, save the first 4 bytes in a list and update virtual 2187 + * PTE. Only update shadow PTE when the second 4 bytes comes. 
2189 2188 */ 2190 2189 if (bytes < info->gtt_entry_size) { 2191 - if (ggtt_mm->ggtt_mm.last_partial_off == -1UL) { 2192 - /* the first partial part*/ 2193 - ggtt_mm->ggtt_mm.last_partial_off = off; 2194 - ggtt_mm->ggtt_mm.last_partial_data = e.val64; 2195 - return 0; 2196 - } else if ((g_gtt_index == 2197 - (ggtt_mm->ggtt_mm.last_partial_off >> 2198 - info->gtt_entry_size_shift)) && 2199 - (off != ggtt_mm->ggtt_mm.last_partial_off)) { 2200 - /* the second partial part */ 2190 + bool found = false; 2201 2191 2202 - int last_off = ggtt_mm->ggtt_mm.last_partial_off & 2203 - (info->gtt_entry_size - 1); 2192 + list_for_each_entry_safe(pos, n, 2193 + &ggtt_mm->ggtt_mm.partial_pte_list, list) { 2194 + if (g_gtt_index == pos->offset >> 2195 + info->gtt_entry_size_shift) { 2196 + if (off != pos->offset) { 2197 + /* the second partial part*/ 2198 + int last_off = pos->offset & 2199 + (info->gtt_entry_size - 1); 2204 2200 2205 - memcpy((void *)&e.val64 + last_off, 2206 - (void *)&ggtt_mm->ggtt_mm.last_partial_data + 2207 - last_off, bytes); 2201 + memcpy((void *)&e.val64 + last_off, 2202 + (void *)&pos->data + last_off, 2203 + bytes); 2208 2204 2209 - ggtt_mm->ggtt_mm.last_partial_off = -1UL; 2210 - } else { 2211 - int last_offset; 2205 + list_del(&pos->list); 2206 + kfree(pos); 2207 + found = true; 2208 + break; 2209 + } 2212 2210 2213 - gvt_vgpu_err("failed to populate guest ggtt entry: abnormal ggtt entry write sequence, last_partial_off=%lx, offset=%x, bytes=%d, ggtt entry size=%d\n", 2214 - ggtt_mm->ggtt_mm.last_partial_off, off, 2215 - bytes, info->gtt_entry_size); 2211 + /* update of the first partial part */ 2212 + pos->data = e.val64; 2213 + ggtt_set_guest_entry(ggtt_mm, &e, g_gtt_index); 2214 + return 0; 2215 + } 2216 + } 2216 2217 2217 - /* set host ggtt entry to scratch page and clear 2218 - * virtual ggtt entry as not present for last 2219 - * partially write offset 2220 - */ 2221 - last_offset = ggtt_mm->ggtt_mm.last_partial_off & 2222 - (~(info->gtt_entry_size 
- 1)); 2223 - 2224 - ggtt_get_host_entry(ggtt_mm, &m, last_offset); 2225 - ggtt_invalidate_pte(vgpu, &m); 2226 - ops->set_pfn(&m, gvt->gtt.scratch_mfn); 2227 - ops->clear_present(&m); 2228 - ggtt_set_host_entry(ggtt_mm, &m, last_offset); 2229 - ggtt_invalidate(gvt->dev_priv); 2230 - 2231 - ggtt_get_guest_entry(ggtt_mm, &e, last_offset); 2232 - ops->clear_present(&e); 2233 - ggtt_set_guest_entry(ggtt_mm, &e, last_offset); 2234 - 2235 - ggtt_mm->ggtt_mm.last_partial_off = off; 2236 - ggtt_mm->ggtt_mm.last_partial_data = e.val64; 2237 - 2238 - return 0; 2218 + if (!found) { 2219 + /* the first partial part */ 2220 + partial_pte = kzalloc(sizeof(*partial_pte), GFP_KERNEL); 2221 + if (!partial_pte) 2222 + return -ENOMEM; 2223 + partial_pte->offset = off; 2224 + partial_pte->data = e.val64; 2225 + list_add_tail(&partial_pte->list, 2226 + &ggtt_mm->ggtt_mm.partial_pte_list); 2227 + partial_update = true; 2239 2228 } 2240 2229 } 2241 2230 2242 - if (ops->test_present(&e)) { 2231 + if (!partial_update && (ops->test_present(&e))) { 2243 2232 gfn = ops->get_pfn(&e); 2244 2233 m = e; 2245 2234 ··· 2252 2263 } else 2253 2264 ops->set_pfn(&m, dma_addr >> PAGE_SHIFT); 2254 2265 } else { 2255 - ggtt_get_host_entry(ggtt_mm, &m, g_gtt_index); 2256 - ggtt_invalidate_pte(vgpu, &m); 2257 2266 ops->set_pfn(&m, gvt->gtt.scratch_mfn); 2258 2267 ops->clear_present(&m); 2259 2268 } 2260 2269 2261 2270 out: 2271 + ggtt_set_guest_entry(ggtt_mm, &e, g_gtt_index); 2272 + 2273 + ggtt_get_host_entry(ggtt_mm, &e, g_gtt_index); 2274 + ggtt_invalidate_pte(vgpu, &e); 2275 + 2262 2276 ggtt_set_host_entry(ggtt_mm, &m, g_gtt_index); 2263 2277 ggtt_invalidate(gvt->dev_priv); 2264 - ggtt_set_guest_entry(ggtt_mm, &e, g_gtt_index); 2265 2278 return 0; 2266 2279 } 2267 2280 ··· 2421 2430 2422 2431 intel_vgpu_reset_ggtt(vgpu, false); 2423 2432 2433 + INIT_LIST_HEAD(&gtt->ggtt_mm->ggtt_mm.partial_pte_list); 2434 + 2424 2435 return create_scratch_page_tree(vgpu); 2425 2436 } 2426 2437 ··· 2447 2454 2448 2455 
static void intel_vgpu_destroy_ggtt_mm(struct intel_vgpu *vgpu) 2449 2456 { 2457 + struct intel_gvt_partial_pte *pos, *next; 2458 + 2459 + list_for_each_entry_safe(pos, next, 2460 + &vgpu->gtt.ggtt_mm->ggtt_mm.partial_pte_list, 2461 + list) { 2462 + gvt_dbg_mm("partial PTE update on hold 0x%lx : 0x%llx\n", 2463 + pos->offset, pos->data); 2464 + kfree(pos); 2465 + } 2450 2466 intel_vgpu_destroy_mm(vgpu->gtt.ggtt_mm); 2451 2467 vgpu->gtt.ggtt_mm = NULL; 2452 2468 }
+7 -3
drivers/gpu/drm/i915/gvt/gtt.h
··· 35 35 #define _GVT_GTT_H_ 36 36 37 37 #define I915_GTT_PAGE_SHIFT 12 38 - #define I915_GTT_PAGE_MASK (~(I915_GTT_PAGE_SIZE - 1)) 39 38 40 39 struct intel_vgpu_mm; 41 40 ··· 132 133 133 134 #define GVT_RING_CTX_NR_PDPS GEN8_3LVL_PDPES 134 135 136 + struct intel_gvt_partial_pte { 137 + unsigned long offset; 138 + u64 data; 139 + struct list_head list; 140 + }; 141 + 135 142 struct intel_vgpu_mm { 136 143 enum intel_gvt_mm_type type; 137 144 struct intel_vgpu *vgpu; ··· 162 157 } ppgtt_mm; 163 158 struct { 164 159 void *virtual_ggtt; 165 - unsigned long last_partial_off; 166 - u64 last_partial_data; 160 + struct list_head partial_pte_list; 167 161 } ggtt_mm; 168 162 }; 169 163 };
+4 -4
drivers/gpu/drm/i915/gvt/handlers.c
··· 1609 1609 return 0; 1610 1610 } 1611 1611 1612 - static int bxt_edp_psr_imr_iir_write(struct intel_vgpu *vgpu, 1612 + static int edp_psr_imr_iir_write(struct intel_vgpu *vgpu, 1613 1613 unsigned int offset, void *p_data, unsigned int bytes) 1614 1614 { 1615 1615 vgpu_vreg(vgpu, offset) = 0; ··· 2607 2607 MMIO_DFH(_MMIO(0x1a178), D_BDW_PLUS, F_CMD_ACCESS, NULL, NULL); 2608 2608 MMIO_DFH(_MMIO(0x1a17c), D_BDW_PLUS, F_CMD_ACCESS, NULL, NULL); 2609 2609 MMIO_DFH(_MMIO(0x2217c), D_BDW_PLUS, F_CMD_ACCESS, NULL, NULL); 2610 + 2611 + MMIO_DH(EDP_PSR_IMR, D_BDW_PLUS, NULL, edp_psr_imr_iir_write); 2612 + MMIO_DH(EDP_PSR_IIR, D_BDW_PLUS, NULL, edp_psr_imr_iir_write); 2610 2613 return 0; 2611 2614 } 2612 2615 ··· 3207 3204 MMIO_D(HSW_TVIDEO_DIP_GCP(TRANSCODER_A), D_BXT); 3208 3205 MMIO_D(HSW_TVIDEO_DIP_GCP(TRANSCODER_B), D_BXT); 3209 3206 MMIO_D(HSW_TVIDEO_DIP_GCP(TRANSCODER_C), D_BXT); 3210 - 3211 - MMIO_DH(EDP_PSR_IMR, D_BXT, NULL, bxt_edp_psr_imr_iir_write); 3212 - MMIO_DH(EDP_PSR_IIR, D_BXT, NULL, bxt_edp_psr_imr_iir_write); 3213 3207 3214 3208 MMIO_D(RC6_CTX_BASE, D_BXT); 3215 3209
+3 -1
drivers/gpu/drm/i915/gvt/mmio_context.c
··· 131 131 {RCS, GAMT_CHKN_BIT_REG, 0x0, false}, /* 0x4ab8 */ 132 132 133 133 {RCS, GEN9_GAMT_ECO_REG_RW_IA, 0x0, false}, /* 0x4ab0 */ 134 - {RCS, GEN9_CSFE_CHICKEN1_RCS, 0x0, false}, /* 0x20d4 */ 134 + {RCS, GEN9_CSFE_CHICKEN1_RCS, 0xffff, false}, /* 0x20d4 */ 135 135 136 136 {RCS, GEN8_GARBCNTL, 0x0, false}, /* 0xb004 */ 137 137 {RCS, GEN7_FF_THREAD_MODE, 0x0, false}, /* 0x20a0 */ ··· 158 158 int ring_id, i; 159 159 160 160 for (ring_id = 0; ring_id < ARRAY_SIZE(regs); ring_id++) { 161 + if (!HAS_ENGINE(dev_priv, ring_id)) 162 + continue; 161 163 offset.reg = regs[ring_id]; 162 164 for (i = 0; i < GEN9_MOCS_SIZE; i++) { 163 165 gen9_render_mocs.control_table[ring_id][i] =
+8 -7
drivers/gpu/drm/i915/i915_drv.c
··· 1175 1175 return -EINVAL; 1176 1176 } 1177 1177 1178 - dram_info->valid_dimm = true; 1179 - 1180 1178 /* 1181 1179 * If any of the channel is single rank channel, worst case output 1182 1180 * will be same as if single rank memory, so consider single rank ··· 1191 1193 return -EINVAL; 1192 1194 } 1193 1195 1194 - if (ch0.is_16gb_dimm || ch1.is_16gb_dimm) 1195 - dram_info->is_16gb_dimm = true; 1196 + dram_info->is_16gb_dimm = ch0.is_16gb_dimm || ch1.is_16gb_dimm; 1196 1197 1197 1198 dev_priv->dram_info.symmetric_memory = intel_is_dram_symmetric(val_ch0, 1198 1199 val_ch1, ··· 1311 1314 return -EINVAL; 1312 1315 } 1313 1316 1314 - dram_info->valid_dimm = true; 1315 1317 dram_info->valid = true; 1316 1318 return 0; 1317 1319 } ··· 1323 1327 int ret; 1324 1328 1325 1329 dram_info->valid = false; 1326 - dram_info->valid_dimm = false; 1327 - dram_info->is_16gb_dimm = false; 1328 1330 dram_info->rank = I915_DRAM_RANK_INVALID; 1329 1331 dram_info->bandwidth_kbps = 0; 1330 1332 dram_info->num_channels = 0; 1333 + 1334 + /* 1335 + * Assume 16Gb DIMMs are present until proven otherwise. 1336 + * This is only used for the level 0 watermark latency 1337 + * w/a which does not apply to bxt/glk. 1338 + */ 1339 + dram_info->is_16gb_dimm = !IS_GEN9_LP(dev_priv); 1331 1340 1332 1341 if (INTEL_GEN(dev_priv) < 9 || IS_GEMINILAKE(dev_priv)) 1333 1342 return;
-1
drivers/gpu/drm/i915/i915_drv.h
··· 1948 1948 1949 1949 struct dram_info { 1950 1950 bool valid; 1951 - bool valid_dimm; 1952 1951 bool is_16gb_dimm; 1953 1952 u8 num_channels; 1954 1953 enum dram_rank {
+7 -2
drivers/gpu/drm/i915/i915_gem_execbuffer.c
··· 460 460 * any non-page-aligned or non-canonical addresses. 461 461 */ 462 462 if (unlikely(entry->flags & EXEC_OBJECT_PINNED && 463 - entry->offset != gen8_canonical_addr(entry->offset & PAGE_MASK))) 463 + entry->offset != gen8_canonical_addr(entry->offset & I915_GTT_PAGE_MASK))) 464 464 return -EINVAL; 465 465 466 466 /* pad_to_size was once a reserved field, so sanitize it */ ··· 1268 1268 else if (gen >= 4) 1269 1269 len = 4; 1270 1270 else 1271 - len = 3; 1271 + len = 6; 1272 1272 1273 1273 batch = reloc_gpu(eb, vma, len); 1274 1274 if (IS_ERR(batch)) ··· 1306 1306 *batch++ = addr; 1307 1307 *batch++ = target_offset; 1308 1308 } else { 1309 + *batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL; 1310 + *batch++ = addr; 1311 + *batch++ = target_offset; 1312 + 1313 + /* And again for good measure (blb/pnv) */ 1309 1314 *batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL; 1310 1315 *batch++ = addr; 1311 1316 *batch++ = target_offset;
+6 -1
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 1757 1757 if (i == 4) 1758 1758 continue; 1759 1759 1760 - seq_printf(m, "\t\t(%03d, %04d) %08lx: ", 1760 + seq_printf(m, "\t\t(%03d, %04d) %08llx: ", 1761 1761 pde, pte, 1762 1762 (pde * GEN6_PTES + pte) * I915_GTT_PAGE_SIZE); 1763 1763 for (i = 0; i < 4; i++) { ··· 3413 3413 ggtt->vm.insert_page = bxt_vtd_ggtt_insert_page__BKL; 3414 3414 if (ggtt->vm.clear_range != nop_clear_range) 3415 3415 ggtt->vm.clear_range = bxt_vtd_ggtt_clear_range__BKL; 3416 + 3417 + /* Prevent recursively calling stop_machine() and deadlocks. */ 3418 + dev_info(dev_priv->drm.dev, 3419 + "Disabling error capture for VT-d workaround\n"); 3420 + i915_disable_error_state(dev_priv, -ENODEV); 3416 3421 } 3417 3422 3418 3423 ggtt->invalidate = gen6_ggtt_invalidate;
+17 -15
drivers/gpu/drm/i915/i915_gem_gtt.h
··· 42 42 #include "i915_selftest.h" 43 43 #include "i915_timeline.h" 44 44 45 - #define I915_GTT_PAGE_SIZE_4K BIT(12) 46 - #define I915_GTT_PAGE_SIZE_64K BIT(16) 47 - #define I915_GTT_PAGE_SIZE_2M BIT(21) 45 + #define I915_GTT_PAGE_SIZE_4K BIT_ULL(12) 46 + #define I915_GTT_PAGE_SIZE_64K BIT_ULL(16) 47 + #define I915_GTT_PAGE_SIZE_2M BIT_ULL(21) 48 48 49 49 #define I915_GTT_PAGE_SIZE I915_GTT_PAGE_SIZE_4K 50 50 #define I915_GTT_MAX_PAGE_SIZE I915_GTT_PAGE_SIZE_2M 51 + 52 + #define I915_GTT_PAGE_MASK -I915_GTT_PAGE_SIZE 51 53 52 54 #define I915_GTT_MIN_ALIGNMENT I915_GTT_PAGE_SIZE 53 55 ··· 661 659 u64 start, u64 end, unsigned int flags); 662 660 663 661 /* Flags used by pin/bind&friends. */ 664 - #define PIN_NONBLOCK BIT(0) 665 - #define PIN_MAPPABLE BIT(1) 666 - #define PIN_ZONE_4G BIT(2) 667 - #define PIN_NONFAULT BIT(3) 668 - #define PIN_NOEVICT BIT(4) 662 + #define PIN_NONBLOCK BIT_ULL(0) 663 + #define PIN_MAPPABLE BIT_ULL(1) 664 + #define PIN_ZONE_4G BIT_ULL(2) 665 + #define PIN_NONFAULT BIT_ULL(3) 666 + #define PIN_NOEVICT BIT_ULL(4) 669 667 670 - #define PIN_MBZ BIT(5) /* I915_VMA_PIN_OVERFLOW */ 671 - #define PIN_GLOBAL BIT(6) /* I915_VMA_GLOBAL_BIND */ 672 - #define PIN_USER BIT(7) /* I915_VMA_LOCAL_BIND */ 673 - #define PIN_UPDATE BIT(8) 668 + #define PIN_MBZ BIT_ULL(5) /* I915_VMA_PIN_OVERFLOW */ 669 + #define PIN_GLOBAL BIT_ULL(6) /* I915_VMA_GLOBAL_BIND */ 670 + #define PIN_USER BIT_ULL(7) /* I915_VMA_LOCAL_BIND */ 671 + #define PIN_UPDATE BIT_ULL(8) 674 672 675 - #define PIN_HIGH BIT(9) 676 - #define PIN_OFFSET_BIAS BIT(10) 677 - #define PIN_OFFSET_FIXED BIT(11) 673 + #define PIN_HIGH BIT_ULL(9) 674 + #define PIN_OFFSET_BIAS BIT_ULL(10) 675 + #define PIN_OFFSET_FIXED BIT_ULL(11) 678 676 #define PIN_OFFSET_MASK (-I915_GTT_PAGE_SIZE) 679 677 680 678 #endif
+14 -1
drivers/gpu/drm/i915/i915_gpu_error.c
··· 648 648 return 0; 649 649 } 650 650 651 + if (IS_ERR(error)) 652 + return PTR_ERR(error); 653 + 651 654 if (*error->error_msg) 652 655 err_printf(m, "%s\n", error->error_msg); 653 656 err_printf(m, "Kernel: " UTS_RELEASE "\n"); ··· 1862 1859 error = i915_capture_gpu_state(i915); 1863 1860 if (!error) { 1864 1861 DRM_DEBUG_DRIVER("out of memory, not capturing error state\n"); 1862 + i915_disable_error_state(i915, -ENOMEM); 1865 1863 return; 1866 1864 } 1867 1865 ··· 1918 1914 i915->gpu_error.first_error = NULL; 1919 1915 spin_unlock_irq(&i915->gpu_error.lock); 1920 1916 1921 - i915_gpu_state_put(error); 1917 + if (!IS_ERR(error)) 1918 + i915_gpu_state_put(error); 1919 + } 1920 + 1921 + void i915_disable_error_state(struct drm_i915_private *i915, int err) 1922 + { 1923 + spin_lock_irq(&i915->gpu_error.lock); 1924 + if (!i915->gpu_error.first_error) 1925 + i915->gpu_error.first_error = ERR_PTR(err); 1926 + spin_unlock_irq(&i915->gpu_error.lock); 1922 1927 }
+7 -1
drivers/gpu/drm/i915/i915_gpu_error.h
··· 343 343 344 344 struct i915_gpu_state *i915_first_error_state(struct drm_i915_private *i915); 345 345 void i915_reset_error_state(struct drm_i915_private *i915); 346 + void i915_disable_error_state(struct drm_i915_private *i915, int err); 346 347 347 348 #else 348 349 ··· 356 355 static inline struct i915_gpu_state * 357 356 i915_first_error_state(struct drm_i915_private *i915) 358 357 { 359 - return NULL; 358 + return ERR_PTR(-ENODEV); 360 359 } 361 360 362 361 static inline void i915_reset_error_state(struct drm_i915_private *i915) 362 + { 363 + } 364 + 365 + static inline void i915_disable_error_state(struct drm_i915_private *i915, 366 + int err) 363 367 { 364 368 } 365 369
+12 -8
drivers/gpu/drm/i915/i915_reg.h
··· 2095 2095 2096 2096 /* ICL PHY DFLEX registers */ 2097 2097 #define PORT_TX_DFLEXDPMLE1 _MMIO(0x1638C0) 2098 - #define DFLEXDPMLE1_DPMLETC_MASK(n) (0xf << (4 * (n))) 2099 - #define DFLEXDPMLE1_DPMLETC(n, x) ((x) << (4 * (n))) 2098 + #define DFLEXDPMLE1_DPMLETC_MASK(tc_port) (0xf << (4 * (tc_port))) 2099 + #define DFLEXDPMLE1_DPMLETC_ML0(tc_port) (1 << (4 * (tc_port))) 2100 + #define DFLEXDPMLE1_DPMLETC_ML1_0(tc_port) (3 << (4 * (tc_port))) 2101 + #define DFLEXDPMLE1_DPMLETC_ML3(tc_port) (8 << (4 * (tc_port))) 2102 + #define DFLEXDPMLE1_DPMLETC_ML3_2(tc_port) (12 << (4 * (tc_port))) 2103 + #define DFLEXDPMLE1_DPMLETC_ML3_0(tc_port) (15 << (4 * (tc_port))) 2100 2104 2101 2105 /* BXT PHY Ref registers */ 2102 2106 #define _PORT_REF_DW3_A 0x16218C ··· 4597 4593 4598 4594 #define DRM_DIP_ENABLE (1 << 28) 4599 4595 #define PSR_VSC_BIT_7_SET (1 << 27) 4600 - #define VSC_SELECT_MASK (0x3 << 26) 4601 - #define VSC_SELECT_SHIFT 26 4602 - #define VSC_DIP_HW_HEA_DATA (0 << 26) 4603 - #define VSC_DIP_HW_HEA_SW_DATA (1 << 26) 4604 - #define VSC_DIP_HW_DATA_SW_HEA (2 << 26) 4605 - #define VSC_DIP_SW_HEA_DATA (3 << 26) 4596 + #define VSC_SELECT_MASK (0x3 << 25) 4597 + #define VSC_SELECT_SHIFT 25 4598 + #define VSC_DIP_HW_HEA_DATA (0 << 25) 4599 + #define VSC_DIP_HW_HEA_SW_DATA (1 << 25) 4600 + #define VSC_DIP_HW_DATA_SW_HEA (2 << 25) 4601 + #define VSC_DIP_SW_HEA_DATA (3 << 25) 4606 4602 #define VDIP_ENABLE_PPS (1 << 24) 4607 4603 4608 4604 /* Panel power sequencing */
+17
drivers/gpu/drm/i915/intel_audio.c
··· 144 144 /* HDMI N/CTS table */ 145 145 #define TMDS_297M 297000 146 146 #define TMDS_296M 296703 147 + #define TMDS_594M 594000 148 + #define TMDS_593M 593407 149 + 147 150 static const struct { 148 151 int sample_rate; 149 152 int clock; ··· 167 164 { 176400, TMDS_297M, 18816, 247500 }, 168 165 { 192000, TMDS_296M, 23296, 281250 }, 169 166 { 192000, TMDS_297M, 20480, 247500 }, 167 + { 44100, TMDS_593M, 8918, 937500 }, 168 + { 44100, TMDS_594M, 9408, 990000 }, 169 + { 48000, TMDS_593M, 5824, 562500 }, 170 + { 48000, TMDS_594M, 6144, 594000 }, 171 + { 32000, TMDS_593M, 5824, 843750 }, 172 + { 32000, TMDS_594M, 3072, 445500 }, 173 + { 88200, TMDS_593M, 17836, 937500 }, 174 + { 88200, TMDS_594M, 18816, 990000 }, 175 + { 96000, TMDS_593M, 11648, 562500 }, 176 + { 96000, TMDS_594M, 12288, 594000 }, 177 + { 176400, TMDS_593M, 35672, 937500 }, 178 + { 176400, TMDS_594M, 37632, 990000 }, 179 + { 192000, TMDS_593M, 23296, 562500 }, 180 + { 192000, TMDS_594M, 24576, 594000 }, 170 181 }; 171 182 172 183 /* get AUD_CONFIG_PIXEL_CLOCK_HDMI_* value for mode */
+2 -16
drivers/gpu/drm/i915/intel_cdclk.c
··· 2138 2138 static int intel_pixel_rate_to_cdclk(struct drm_i915_private *dev_priv, 2139 2139 int pixel_rate) 2140 2140 { 2141 - if (INTEL_GEN(dev_priv) >= 10) 2141 + if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) 2142 2142 return DIV_ROUND_UP(pixel_rate, 2); 2143 - else if (IS_GEMINILAKE(dev_priv)) 2144 - /* 2145 - * FIXME: Avoid using a pixel clock that is more than 99% of the cdclk 2146 - * as a temporary workaround. Use a higher cdclk instead. (Note that 2147 - * intel_compute_max_dotclk() limits the max pixel clock to 99% of max 2148 - * cdclk.) 2149 - */ 2150 - return DIV_ROUND_UP(pixel_rate * 100, 2 * 99); 2151 2143 else if (IS_GEN9(dev_priv) || 2152 2144 IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv)) 2153 2145 return pixel_rate; ··· 2535 2543 { 2536 2544 int max_cdclk_freq = dev_priv->max_cdclk_freq; 2537 2545 2538 - if (INTEL_GEN(dev_priv) >= 10) 2546 + if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) 2539 2547 return 2 * max_cdclk_freq; 2540 - else if (IS_GEMINILAKE(dev_priv)) 2541 - /* 2542 - * FIXME: Limiting to 99% as a temporary workaround. See 2543 - * intel_min_cdclk() for details. 2544 - */ 2545 - return 2 * max_cdclk_freq * 99 / 100; 2546 2548 else if (IS_GEN9(dev_priv) || 2547 2549 IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv)) 2548 2550 return max_cdclk_freq;
+1 -1
drivers/gpu/drm/i915/intel_device_info.c
··· 474 474 u8 eu_disabled_mask; 475 475 u32 n_disabled; 476 476 477 - if (!(sseu->subslice_mask[ss] & BIT(ss))) 477 + if (!(sseu->subslice_mask[s] & BIT(ss))) 478 478 /* skip disabled subslice */ 479 479 continue; 480 480
+88 -15
drivers/gpu/drm/i915/intel_display.c
··· 2890 2890 return; 2891 2891 2892 2892 valid_fb: 2893 + intel_state->base.rotation = plane_config->rotation; 2893 2894 intel_fill_fb_ggtt_view(&intel_state->view, fb, 2894 2895 intel_state->base.rotation); 2895 2896 intel_state->color_plane[0].stride = ··· 4851 4850 * chroma samples for both of the luma samples, and thus we don't 4852 4851 * actually get the expected MPEG2 chroma siting convention :( 4853 4852 * The same behaviour is observed on pre-SKL platforms as well. 4853 + * 4854 + * Theory behind the formula (note that we ignore sub-pixel 4855 + * source coordinates): 4856 + * s = source sample position 4857 + * d = destination sample position 4858 + * 4859 + * Downscaling 4:1: 4860 + * -0.5 4861 + * | 0.0 4862 + * | | 1.5 (initial phase) 4863 + * | | | 4864 + * v v v 4865 + * | s | s | s | s | 4866 + * | d | 4867 + * 4868 + * Upscaling 1:4: 4869 + * -0.5 4870 + * | -0.375 (initial phase) 4871 + * | | 0.0 4872 + * | | | 4873 + * v v v 4874 + * | s | 4875 + * | d | d | d | d | 4854 4876 */ 4855 - u16 skl_scaler_calc_phase(int sub, bool chroma_cosited) 4877 + u16 skl_scaler_calc_phase(int sub, int scale, bool chroma_cosited) 4856 4878 { 4857 4879 int phase = -0x8000; 4858 4880 u16 trip = 0; 4859 4881 4860 4882 if (chroma_cosited) 4861 4883 phase += (sub - 1) * 0x8000 / sub; 4884 + 4885 + phase += scale / (2 * sub); 4886 + 4887 + /* 4888 + * Hardware initial phase limited to [-0.5:1.5]. 4889 + * Since the max hardware scale factor is 3.0, we 4890 + * should never actually excdeed 1.0 here. 
4891 + */ 4892 + WARN_ON(phase < -0x8000 || phase > 0x18000); 4862 4893 4863 4894 if (phase < 0) 4864 4895 phase = 0x10000 + phase; ··· 5100 5067 5101 5068 if (crtc->config->pch_pfit.enabled) { 5102 5069 u16 uv_rgb_hphase, uv_rgb_vphase; 5070 + int pfit_w, pfit_h, hscale, vscale; 5103 5071 int id; 5104 5072 5105 5073 if (WARN_ON(crtc->config->scaler_state.scaler_id < 0)) 5106 5074 return; 5107 5075 5108 - uv_rgb_hphase = skl_scaler_calc_phase(1, false); 5109 - uv_rgb_vphase = skl_scaler_calc_phase(1, false); 5076 + pfit_w = (crtc->config->pch_pfit.size >> 16) & 0xFFFF; 5077 + pfit_h = crtc->config->pch_pfit.size & 0xFFFF; 5078 + 5079 + hscale = (crtc->config->pipe_src_w << 16) / pfit_w; 5080 + vscale = (crtc->config->pipe_src_h << 16) / pfit_h; 5081 + 5082 + uv_rgb_hphase = skl_scaler_calc_phase(1, hscale, false); 5083 + uv_rgb_vphase = skl_scaler_calc_phase(1, vscale, false); 5110 5084 5111 5085 id = scaler_state->scaler_id; 5112 5086 I915_WRITE(SKL_PS_CTRL(pipe, id), PS_SCALER_EN | ··· 7883 7843 plane_config->tiling = I915_TILING_X; 7884 7844 fb->modifier = I915_FORMAT_MOD_X_TILED; 7885 7845 } 7846 + 7847 + if (val & DISPPLANE_ROTATE_180) 7848 + plane_config->rotation = DRM_MODE_ROTATE_180; 7886 7849 } 7850 + 7851 + if (IS_CHERRYVIEW(dev_priv) && pipe == PIPE_B && 7852 + val & DISPPLANE_MIRROR) 7853 + plane_config->rotation |= DRM_MODE_REFLECT_X; 7887 7854 7888 7855 pixel_format = val & DISPPLANE_PIXFORMAT_MASK; 7889 7856 fourcc = i9xx_format_to_fourcc(pixel_format); ··· 8959 8912 MISSING_CASE(tiling); 8960 8913 goto error; 8961 8914 } 8915 + 8916 + /* 8917 + * DRM_MODE_ROTATE_ is counter clockwise to stay compatible with Xrandr 8918 + * while i915 HW rotation is clockwise, thats why this swapping. 
8919 + */ 8920 + switch (val & PLANE_CTL_ROTATE_MASK) { 8921 + case PLANE_CTL_ROTATE_0: 8922 + plane_config->rotation = DRM_MODE_ROTATE_0; 8923 + break; 8924 + case PLANE_CTL_ROTATE_90: 8925 + plane_config->rotation = DRM_MODE_ROTATE_270; 8926 + break; 8927 + case PLANE_CTL_ROTATE_180: 8928 + plane_config->rotation = DRM_MODE_ROTATE_180; 8929 + break; 8930 + case PLANE_CTL_ROTATE_270: 8931 + plane_config->rotation = DRM_MODE_ROTATE_90; 8932 + break; 8933 + } 8934 + 8935 + if (INTEL_GEN(dev_priv) >= 10 && 8936 + val & PLANE_CTL_FLIP_HORIZONTAL) 8937 + plane_config->rotation |= DRM_MODE_REFLECT_X; 8962 8938 8963 8939 base = I915_READ(PLANE_SURF(pipe, plane_id)) & 0xfffff000; 8964 8940 plane_config->base = base; ··· 12838 12768 intel_check_cpu_fifo_underruns(dev_priv); 12839 12769 intel_check_pch_fifo_underruns(dev_priv); 12840 12770 12841 - if (!new_crtc_state->active) { 12842 - /* 12843 - * Make sure we don't call initial_watermarks 12844 - * for ILK-style watermark updates. 12845 - * 12846 - * No clue what this is supposed to achieve. 
12847 - */ 12848 - if (INTEL_GEN(dev_priv) >= 9) 12849 - dev_priv->display.initial_watermarks(intel_state, 12850 - to_intel_crtc_state(new_crtc_state)); 12851 - } 12771 + /* FIXME unify this for all platforms */ 12772 + if (!new_crtc_state->active && 12773 + !HAS_GMCH_DISPLAY(dev_priv) && 12774 + dev_priv->display.initial_watermarks) 12775 + dev_priv->display.initial_watermarks(intel_state, 12776 + to_intel_crtc_state(new_crtc_state)); 12852 12777 } 12853 12778 } 12854 12779 ··· 14711 14646 fb->height < SKL_MIN_YUV_420_SRC_H || 14712 14647 (fb->width % 4) != 0 || (fb->height % 4) != 0)) { 14713 14648 DRM_DEBUG_KMS("src dimensions not correct for NV12\n"); 14714 - return -EINVAL; 14649 + goto err; 14715 14650 } 14716 14651 14717 14652 for (i = 0; i < fb->format->num_planes; i++) { ··· 15298 15233 ret = drm_atomic_add_affected_planes(state, crtc); 15299 15234 if (ret) 15300 15235 goto out; 15236 + 15237 + /* 15238 + * FIXME hack to force a LUT update to avoid the 15239 + * plane update forcing the pipe gamma on without 15240 + * having a proper LUT loaded. Remove once we 15241 + * have readout for pipe gamma enable. 15242 + */ 15243 + crtc_state->color_mgmt_changed = true; 15301 15244 } 15302 15245 } 15303 15246
+4 -4
drivers/gpu/drm/i915/intel_dp_mst.c
··· 452 452 if (!intel_connector) 453 453 return NULL; 454 454 455 + intel_connector->get_hw_state = intel_dp_mst_get_hw_state; 456 + intel_connector->mst_port = intel_dp; 457 + intel_connector->port = port; 458 + 455 459 connector = &intel_connector->base; 456 460 ret = drm_connector_init(dev, connector, &intel_dp_mst_connector_funcs, 457 461 DRM_MODE_CONNECTOR_DisplayPort); ··· 465 461 } 466 462 467 463 drm_connector_helper_add(connector, &intel_dp_mst_connector_helper_funcs); 468 - 469 - intel_connector->get_hw_state = intel_dp_mst_get_hw_state; 470 - intel_connector->mst_port = intel_dp; 471 - intel_connector->port = port; 472 464 473 465 for_each_pipe(dev_priv, pipe) { 474 466 struct drm_encoder *enc =
+2 -1
drivers/gpu/drm/i915/intel_drv.h
··· 547 547 unsigned int tiling; 548 548 int size; 549 549 u32 base; 550 + u8 rotation; 550 551 }; 551 552 552 553 #define SKL_MIN_SRC_W 8 ··· 1647 1646 void intel_crtc_arm_fifo_underrun(struct intel_crtc *crtc, 1648 1647 struct intel_crtc_state *crtc_state); 1649 1648 1650 - u16 skl_scaler_calc_phase(int sub, bool chroma_center); 1649 + u16 skl_scaler_calc_phase(int sub, int scale, bool chroma_center); 1651 1650 int skl_update_scaler_crtc(struct intel_crtc_state *crtc_state); 1652 1651 int skl_max_scale(const struct intel_crtc_state *crtc_state, 1653 1652 u32 pixel_format);
+52 -22
drivers/gpu/drm/i915/intel_hotplug.c
··· 228 228 drm_for_each_connector_iter(connector, &conn_iter) { 229 229 struct intel_connector *intel_connector = to_intel_connector(connector); 230 230 231 - if (intel_connector->encoder->hpd_pin == pin) { 231 + /* Don't check MST ports, they don't have pins */ 232 + if (!intel_connector->mst_port && 233 + intel_connector->encoder->hpd_pin == pin) { 232 234 if (connector->polled != intel_connector->polled) 233 235 DRM_DEBUG_DRIVER("Reenabling HPD on connector %s\n", 234 236 connector->name); ··· 397 395 struct intel_encoder *encoder; 398 396 bool storm_detected = false; 399 397 bool queue_dig = false, queue_hp = false; 398 + u32 long_hpd_pulse_mask = 0; 399 + u32 short_hpd_pulse_mask = 0; 400 + enum hpd_pin pin; 400 401 401 402 if (!pin_mask) 402 403 return; 403 404 404 405 spin_lock(&dev_priv->irq_lock); 405 - for_each_intel_encoder(&dev_priv->drm, encoder) { 406 - enum hpd_pin pin = encoder->hpd_pin; 407 - bool has_hpd_pulse = intel_encoder_has_hpd_pulse(encoder); 408 406 407 + /* 408 + * Determine whether ->hpd_pulse() exists for each pin, and 409 + * whether we have a short or a long pulse. This is needed 410 + * as each pin may have up to two encoders (HDMI and DP) and 411 + * only the one of them (DP) will have ->hpd_pulse(). 412 + */ 413 + for_each_intel_encoder(&dev_priv->drm, encoder) { 414 + bool has_hpd_pulse = intel_encoder_has_hpd_pulse(encoder); 415 + enum port port = encoder->port; 416 + bool long_hpd; 417 + 418 + pin = encoder->hpd_pin; 409 419 if (!(BIT(pin) & pin_mask)) 410 420 continue; 411 421 412 - if (has_hpd_pulse) { 413 - bool long_hpd = long_mask & BIT(pin); 414 - enum port port = encoder->port; 422 + if (!has_hpd_pulse) 423 + continue; 415 424 416 - DRM_DEBUG_DRIVER("digital hpd port %c - %s\n", port_name(port), 417 - long_hpd ? "long" : "short"); 418 - /* 419 - * For long HPD pulses we want to have the digital queue happen, 420 - * but we still want HPD storm detection to function. 
421 - */ 422 - queue_dig = true; 423 - if (long_hpd) { 424 - dev_priv->hotplug.long_port_mask |= (1 << port); 425 - } else { 426 - /* for short HPD just trigger the digital queue */ 427 - dev_priv->hotplug.short_port_mask |= (1 << port); 428 - continue; 429 - } 425 + long_hpd = long_mask & BIT(pin); 426 + 427 + DRM_DEBUG_DRIVER("digital hpd port %c - %s\n", port_name(port), 428 + long_hpd ? "long" : "short"); 429 + queue_dig = true; 430 + 431 + if (long_hpd) { 432 + long_hpd_pulse_mask |= BIT(pin); 433 + dev_priv->hotplug.long_port_mask |= BIT(port); 434 + } else { 435 + short_hpd_pulse_mask |= BIT(pin); 436 + dev_priv->hotplug.short_port_mask |= BIT(port); 430 437 } 438 + } 439 + 440 + /* Now process each pin just once */ 441 + for_each_hpd_pin(pin) { 442 + bool long_hpd; 443 + 444 + if (!(BIT(pin) & pin_mask)) 445 + continue; 431 446 432 447 if (dev_priv->hotplug.stats[pin].state == HPD_DISABLED) { 433 448 /* ··· 461 442 if (dev_priv->hotplug.stats[pin].state != HPD_ENABLED) 462 443 continue; 463 444 464 - if (!has_hpd_pulse) { 445 + /* 446 + * Delegate to ->hpd_pulse() if one of the encoders for this 447 + * pin has it, otherwise let the hotplug_work deal with this 448 + * pin directly. 449 + */ 450 + if (((short_hpd_pulse_mask | long_hpd_pulse_mask) & BIT(pin))) { 451 + long_hpd = long_hpd_pulse_mask & BIT(pin); 452 + } else { 465 453 dev_priv->hotplug.event_bits |= BIT(pin); 454 + long_hpd = true; 466 455 queue_hp = true; 467 456 } 457 + 458 + if (!long_hpd) 459 + continue; 468 460 469 461 if (intel_hpd_irq_storm_detect(dev_priv, pin)) { 470 462 dev_priv->hotplug.event_bits &= ~BIT(pin);
+3 -1
drivers/gpu/drm/i915/intel_lpe_audio.c
··· 297 297 lpe_audio_platdev_destroy(dev_priv); 298 298 299 299 irq_free_desc(dev_priv->lpe_audio.irq); 300 - } 301 300 301 + dev_priv->lpe_audio.irq = -1; 302 + dev_priv->lpe_audio.platdev = NULL; 303 + } 302 304 303 305 /** 304 306 * intel_lpe_audio_notify() - notify lpe audio event
+13 -1
drivers/gpu/drm/i915/intel_lrc.c
··· 424 424 425 425 reg_state[CTX_RING_TAIL+1] = intel_ring_set_tail(rq->ring, rq->tail); 426 426 427 - /* True 32b PPGTT with dynamic page allocation: update PDP 427 + /* 428 + * True 32b PPGTT with dynamic page allocation: update PDP 428 429 * registers and point the unallocated PDPs to scratch page. 429 430 * PML4 is allocated during ppgtt init, so this is not needed 430 431 * in 48-bit mode. ··· 433 432 if (ppgtt && !i915_vm_is_48bit(&ppgtt->vm)) 434 433 execlists_update_context_pdps(ppgtt, reg_state); 435 434 435 + /* 436 + * Make sure the context image is complete before we submit it to HW. 437 + * 438 + * Ostensibly, writes (including the WCB) should be flushed prior to 439 + * an uncached write such as our mmio register access, the empirical 440 + * evidence (esp. on Braswell) suggests that the WC write into memory 441 + * may not be visible to the HW prior to the completion of the UC 442 + * register write and that we may begin execution from the context 443 + * before its image is complete leading to invalid PD chasing. 444 + */ 445 + wmb(); 436 446 return ce->lrc_desc; 437 447 } 438 448
+41 -3
drivers/gpu/drm/i915/intel_pm.c
··· 2493 2493 uint32_t method1, method2; 2494 2494 int cpp; 2495 2495 2496 + if (mem_value == 0) 2497 + return U32_MAX; 2498 + 2496 2499 if (!intel_wm_plane_visible(cstate, pstate)) 2497 2500 return 0; 2498 2501 ··· 2525 2522 uint32_t method1, method2; 2526 2523 int cpp; 2527 2524 2525 + if (mem_value == 0) 2526 + return U32_MAX; 2527 + 2528 2528 if (!intel_wm_plane_visible(cstate, pstate)) 2529 2529 return 0; 2530 2530 ··· 2550 2544 uint32_t mem_value) 2551 2545 { 2552 2546 int cpp; 2547 + 2548 + if (mem_value == 0) 2549 + return U32_MAX; 2553 2550 2554 2551 if (!intel_wm_plane_visible(cstate, pstate)) 2555 2552 return 0; ··· 2890 2881 * any underrun. If not able to get Dimm info assume 16GB dimm 2891 2882 * to avoid any underrun. 2892 2883 */ 2893 - if (!dev_priv->dram_info.valid_dimm || 2894 - dev_priv->dram_info.is_16gb_dimm) 2884 + if (dev_priv->dram_info.is_16gb_dimm) 2895 2885 wm[0] += 1; 2896 2886 2897 2887 } else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) { ··· 3017 3009 intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency); 3018 3010 } 3019 3011 3012 + static void snb_wm_lp3_irq_quirk(struct drm_i915_private *dev_priv) 3013 + { 3014 + /* 3015 + * On some SNB machines (Thinkpad X220 Tablet at least) 3016 + * LP3 usage can cause vblank interrupts to be lost. 3017 + * The DEIIR bit will go high but it looks like the CPU 3018 + * never gets interrupted. 3019 + * 3020 + * It's not clear whether other interrupt source could 3021 + * be affected or if this is somehow limited to vblank 3022 + * interrupts only. To play it safe we disable LP3 3023 + * watermarks entirely. 
3024 + */ 3025 + if (dev_priv->wm.pri_latency[3] == 0 && 3026 + dev_priv->wm.spr_latency[3] == 0 && 3027 + dev_priv->wm.cur_latency[3] == 0) 3028 + return; 3029 + 3030 + dev_priv->wm.pri_latency[3] = 0; 3031 + dev_priv->wm.spr_latency[3] = 0; 3032 + dev_priv->wm.cur_latency[3] = 0; 3033 + 3034 + DRM_DEBUG_KMS("LP3 watermarks disabled due to potential for lost interrupts\n"); 3035 + intel_print_wm_latency(dev_priv, "Primary", dev_priv->wm.pri_latency); 3036 + intel_print_wm_latency(dev_priv, "Sprite", dev_priv->wm.spr_latency); 3037 + intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency); 3038 + } 3039 + 3020 3040 static void ilk_setup_wm_latency(struct drm_i915_private *dev_priv) 3021 3041 { 3022 3042 intel_read_wm_latency(dev_priv, dev_priv->wm.pri_latency); ··· 3061 3025 intel_print_wm_latency(dev_priv, "Sprite", dev_priv->wm.spr_latency); 3062 3026 intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency); 3063 3027 3064 - if (IS_GEN6(dev_priv)) 3028 + if (IS_GEN6(dev_priv)) { 3065 3029 snb_wm_latency_quirk(dev_priv); 3030 + snb_wm_lp3_irq_quirk(dev_priv); 3031 + } 3066 3032 } 3067 3033 3068 3034 static void skl_setup_wm_latency(struct drm_i915_private *dev_priv)
+36 -2
drivers/gpu/drm/i915/intel_ringbuffer.c
··· 91 91 gen4_render_ring_flush(struct i915_request *rq, u32 mode) 92 92 { 93 93 u32 cmd, *cs; 94 + int i; 94 95 95 96 /* 96 97 * read/write caches: ··· 128 127 cmd |= MI_INVALIDATE_ISP; 129 128 } 130 129 131 - cs = intel_ring_begin(rq, 2); 130 + i = 2; 131 + if (mode & EMIT_INVALIDATE) 132 + i += 20; 133 + 134 + cs = intel_ring_begin(rq, i); 132 135 if (IS_ERR(cs)) 133 136 return PTR_ERR(cs); 134 137 135 138 *cs++ = cmd; 136 - *cs++ = MI_NOOP; 139 + 140 + /* 141 + * A random delay to let the CS invalidate take effect? Without this 142 + * delay, the GPU relocation path fails as the CS does not see 143 + * the updated contents. Just as important, if we apply the flushes 144 + * to the EMIT_FLUSH branch (i.e. immediately after the relocation 145 + * write and before the invalidate on the next batch), the relocations 146 + * still fail. This implies that is a delay following invalidation 147 + * that is required to reset the caches as opposed to a delay to 148 + * ensure the memory is written. 149 + */ 150 + if (mode & EMIT_INVALIDATE) { 151 + *cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE; 152 + *cs++ = i915_ggtt_offset(rq->engine->scratch) | 153 + PIPE_CONTROL_GLOBAL_GTT; 154 + *cs++ = 0; 155 + *cs++ = 0; 156 + 157 + for (i = 0; i < 12; i++) 158 + *cs++ = MI_FLUSH; 159 + 160 + *cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE; 161 + *cs++ = i915_ggtt_offset(rq->engine->scratch) | 162 + PIPE_CONTROL_GLOBAL_GTT; 163 + *cs++ = 0; 164 + *cs++ = 0; 165 + } 166 + 167 + *cs++ = cmd; 168 + 137 169 intel_ring_advance(rq, cs); 138 170 139 171 return 0;
+7 -9
drivers/gpu/drm/i915/intel_runtime_pm.c
··· 2749 2749 }, 2750 2750 }, 2751 2751 { 2752 + .name = "DC off", 2753 + .domains = ICL_DISPLAY_DC_OFF_POWER_DOMAINS, 2754 + .ops = &gen9_dc_off_power_well_ops, 2755 + .id = DISP_PW_ID_NONE, 2756 + }, 2757 + { 2752 2758 .name = "power well 2", 2753 2759 .domains = ICL_PW_2_POWER_DOMAINS, 2754 2760 .ops = &hsw_power_well_ops, ··· 2764 2758 .hsw.idx = ICL_PW_CTL_IDX_PW_2, 2765 2759 .hsw.has_fuses = true, 2766 2760 }, 2767 - }, 2768 - { 2769 - .name = "DC off", 2770 - .domains = ICL_DISPLAY_DC_OFF_POWER_DOMAINS, 2771 - .ops = &gen9_dc_off_power_well_ops, 2772 - .id = DISP_PW_ID_NONE, 2773 2761 }, 2774 2762 { 2775 2763 .name = "power well 3", ··· 3176 3176 void icl_dbuf_slices_update(struct drm_i915_private *dev_priv, 3177 3177 u8 req_slices) 3178 3178 { 3179 - u8 hw_enabled_slices = dev_priv->wm.skl_hw.ddb.enabled_slices; 3180 - u32 val; 3179 + const u8 hw_enabled_slices = dev_priv->wm.skl_hw.ddb.enabled_slices; 3181 3180 bool ret; 3182 3181 3183 3182 if (req_slices > intel_dbuf_max_slices(dev_priv)) { ··· 3187 3188 if (req_slices == hw_enabled_slices || req_slices == 0) 3188 3189 return; 3189 3190 3190 - val = I915_READ(DBUF_CTL_S2); 3191 3191 if (req_slices > hw_enabled_slices) 3192 3192 ret = intel_dbuf_slice_set(dev_priv, DBUF_CTL_S2, true); 3193 3193 else
+54 -39
drivers/gpu/drm/i915/intel_sprite.c
··· 302 302 return min(8192 * cpp, 32768); 303 303 } 304 304 305 + static void 306 + skl_program_scaler(struct intel_plane *plane, 307 + const struct intel_crtc_state *crtc_state, 308 + const struct intel_plane_state *plane_state) 309 + { 310 + struct drm_i915_private *dev_priv = to_i915(plane->base.dev); 311 + enum pipe pipe = plane->pipe; 312 + int scaler_id = plane_state->scaler_id; 313 + const struct intel_scaler *scaler = 314 + &crtc_state->scaler_state.scalers[scaler_id]; 315 + int crtc_x = plane_state->base.dst.x1; 316 + int crtc_y = plane_state->base.dst.y1; 317 + uint32_t crtc_w = drm_rect_width(&plane_state->base.dst); 318 + uint32_t crtc_h = drm_rect_height(&plane_state->base.dst); 319 + u16 y_hphase, uv_rgb_hphase; 320 + u16 y_vphase, uv_rgb_vphase; 321 + int hscale, vscale; 322 + 323 + hscale = drm_rect_calc_hscale(&plane_state->base.src, 324 + &plane_state->base.dst, 325 + 0, INT_MAX); 326 + vscale = drm_rect_calc_vscale(&plane_state->base.src, 327 + &plane_state->base.dst, 328 + 0, INT_MAX); 329 + 330 + /* TODO: handle sub-pixel coordinates */ 331 + if (plane_state->base.fb->format->format == DRM_FORMAT_NV12) { 332 + y_hphase = skl_scaler_calc_phase(1, hscale, false); 333 + y_vphase = skl_scaler_calc_phase(1, vscale, false); 334 + 335 + /* MPEG2 chroma siting convention */ 336 + uv_rgb_hphase = skl_scaler_calc_phase(2, hscale, true); 337 + uv_rgb_vphase = skl_scaler_calc_phase(2, vscale, false); 338 + } else { 339 + /* not used */ 340 + y_hphase = 0; 341 + y_vphase = 0; 342 + 343 + uv_rgb_hphase = skl_scaler_calc_phase(1, hscale, false); 344 + uv_rgb_vphase = skl_scaler_calc_phase(1, vscale, false); 345 + } 346 + 347 + I915_WRITE_FW(SKL_PS_CTRL(pipe, scaler_id), 348 + PS_SCALER_EN | PS_PLANE_SEL(plane->id) | scaler->mode); 349 + I915_WRITE_FW(SKL_PS_PWR_GATE(pipe, scaler_id), 0); 350 + I915_WRITE_FW(SKL_PS_VPHASE(pipe, scaler_id), 351 + PS_Y_PHASE(y_vphase) | PS_UV_RGB_PHASE(uv_rgb_vphase)); 352 + I915_WRITE_FW(SKL_PS_HPHASE(pipe, scaler_id), 353 + 
PS_Y_PHASE(y_hphase) | PS_UV_RGB_PHASE(uv_rgb_hphase)); 354 + I915_WRITE_FW(SKL_PS_WIN_POS(pipe, scaler_id), (crtc_x << 16) | crtc_y); 355 + I915_WRITE_FW(SKL_PS_WIN_SZ(pipe, scaler_id), (crtc_w << 16) | crtc_h); 356 + } 357 + 305 358 void 306 359 skl_update_plane(struct intel_plane *plane, 307 360 const struct intel_crtc_state *crtc_state, 308 361 const struct intel_plane_state *plane_state) 309 362 { 310 363 struct drm_i915_private *dev_priv = to_i915(plane->base.dev); 311 - const struct drm_framebuffer *fb = plane_state->base.fb; 312 364 enum plane_id plane_id = plane->id; 313 365 enum pipe pipe = plane->pipe; 314 366 u32 plane_ctl = plane_state->ctl; ··· 370 318 u32 aux_stride = skl_plane_stride(plane_state, 1); 371 319 int crtc_x = plane_state->base.dst.x1; 372 320 int crtc_y = plane_state->base.dst.y1; 373 - uint32_t crtc_w = drm_rect_width(&plane_state->base.dst); 374 - uint32_t crtc_h = drm_rect_height(&plane_state->base.dst); 375 321 uint32_t x = plane_state->color_plane[0].x; 376 322 uint32_t y = plane_state->color_plane[0].y; 377 323 uint32_t src_w = drm_rect_width(&plane_state->base.src) >> 16; ··· 379 329 /* Sizes are 0 based */ 380 330 src_w--; 381 331 src_h--; 382 - crtc_w--; 383 - crtc_h--; 384 332 385 333 spin_lock_irqsave(&dev_priv->uncore.lock, irqflags); 386 334 ··· 401 353 (plane_state->color_plane[1].y << 16) | 402 354 plane_state->color_plane[1].x); 403 355 404 - /* program plane scaler */ 405 356 if (plane_state->scaler_id >= 0) { 406 - int scaler_id = plane_state->scaler_id; 407 - const struct intel_scaler *scaler = 408 - &crtc_state->scaler_state.scalers[scaler_id]; 409 - u16 y_hphase, uv_rgb_hphase; 410 - u16 y_vphase, uv_rgb_vphase; 411 - 412 - /* TODO: handle sub-pixel coordinates */ 413 - if (fb->format->format == DRM_FORMAT_NV12) { 414 - y_hphase = skl_scaler_calc_phase(1, false); 415 - y_vphase = skl_scaler_calc_phase(1, false); 416 - 417 - /* MPEG2 chroma siting convention */ 418 - uv_rgb_hphase = skl_scaler_calc_phase(2, true); 419 
- uv_rgb_vphase = skl_scaler_calc_phase(2, false); 420 - } else { 421 - /* not used */ 422 - y_hphase = 0; 423 - y_vphase = 0; 424 - 425 - uv_rgb_hphase = skl_scaler_calc_phase(1, false); 426 - uv_rgb_vphase = skl_scaler_calc_phase(1, false); 427 - } 428 - 429 - I915_WRITE_FW(SKL_PS_CTRL(pipe, scaler_id), 430 - PS_SCALER_EN | PS_PLANE_SEL(plane_id) | scaler->mode); 431 - I915_WRITE_FW(SKL_PS_PWR_GATE(pipe, scaler_id), 0); 432 - I915_WRITE_FW(SKL_PS_VPHASE(pipe, scaler_id), 433 - PS_Y_PHASE(y_vphase) | PS_UV_RGB_PHASE(uv_rgb_vphase)); 434 - I915_WRITE_FW(SKL_PS_HPHASE(pipe, scaler_id), 435 - PS_Y_PHASE(y_hphase) | PS_UV_RGB_PHASE(uv_rgb_hphase)); 436 - I915_WRITE_FW(SKL_PS_WIN_POS(pipe, scaler_id), (crtc_x << 16) | crtc_y); 437 - I915_WRITE_FW(SKL_PS_WIN_SZ(pipe, scaler_id), 438 - ((crtc_w + 1) << 16)|(crtc_h + 1)); 357 + skl_program_scaler(plane, crtc_state, plane_state); 439 358 440 359 I915_WRITE_FW(PLANE_POS(pipe, plane_id), 0); 441 360 } else {
+1 -1
drivers/gpu/drm/i915/selftests/huge_pages.c
··· 551 551 err = igt_check_page_sizes(vma); 552 552 553 553 if (vma->page_sizes.gtt != I915_GTT_PAGE_SIZE_4K) { 554 - pr_err("page_sizes.gtt=%u, expected %lu\n", 554 + pr_err("page_sizes.gtt=%u, expected %llu\n", 555 555 vma->page_sizes.gtt, I915_GTT_PAGE_SIZE_4K); 556 556 err = -EINVAL; 557 557 }
+3 -3
drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
··· 1337 1337 GEM_BUG_ON(!drm_mm_node_allocated(&vma->node)); 1338 1338 if (vma->node.start != total || 1339 1339 vma->node.size != 2*I915_GTT_PAGE_SIZE) { 1340 - pr_err("i915_gem_gtt_reserve (pass 1) placement failed, found (%llx + %llx), expected (%llx + %lx)\n", 1340 + pr_err("i915_gem_gtt_reserve (pass 1) placement failed, found (%llx + %llx), expected (%llx + %llx)\n", 1341 1341 vma->node.start, vma->node.size, 1342 1342 total, 2*I915_GTT_PAGE_SIZE); 1343 1343 err = -EINVAL; ··· 1386 1386 GEM_BUG_ON(!drm_mm_node_allocated(&vma->node)); 1387 1387 if (vma->node.start != total || 1388 1388 vma->node.size != 2*I915_GTT_PAGE_SIZE) { 1389 - pr_err("i915_gem_gtt_reserve (pass 2) placement failed, found (%llx + %llx), expected (%llx + %lx)\n", 1389 + pr_err("i915_gem_gtt_reserve (pass 2) placement failed, found (%llx + %llx), expected (%llx + %llx)\n", 1390 1390 vma->node.start, vma->node.size, 1391 1391 total, 2*I915_GTT_PAGE_SIZE); 1392 1392 err = -EINVAL; ··· 1430 1430 GEM_BUG_ON(!drm_mm_node_allocated(&vma->node)); 1431 1431 if (vma->node.start != offset || 1432 1432 vma->node.size != 2*I915_GTT_PAGE_SIZE) { 1433 - pr_err("i915_gem_gtt_reserve (pass 3) placement failed, found (%llx + %llx), expected (%llx + %lx)\n", 1433 + pr_err("i915_gem_gtt_reserve (pass 3) placement failed, found (%llx + %llx), expected (%llx + %llx)\n", 1434 1434 vma->node.start, vma->node.size, 1435 1435 offset, 2*I915_GTT_PAGE_SIZE); 1436 1436 err = -EINVAL;
+25 -2
drivers/gpu/drm/meson/meson_crtc.c
··· 45 45 struct drm_crtc base; 46 46 struct drm_pending_vblank_event *event; 47 47 struct meson_drm *priv; 48 + bool enabled; 48 49 }; 49 50 #define to_meson_crtc(x) container_of(x, struct meson_crtc, base) 50 51 ··· 81 80 82 81 }; 83 82 84 - static void meson_crtc_atomic_enable(struct drm_crtc *crtc, 85 - struct drm_crtc_state *old_state) 83 + static void meson_crtc_enable(struct drm_crtc *crtc) 86 84 { 87 85 struct meson_crtc *meson_crtc = to_meson_crtc(crtc); 88 86 struct drm_crtc_state *crtc_state = crtc->state; ··· 101 101 writel_bits_relaxed(VPP_POSTBLEND_ENABLE, VPP_POSTBLEND_ENABLE, 102 102 priv->io_base + _REG(VPP_MISC)); 103 103 104 + drm_crtc_vblank_on(crtc); 105 + 106 + meson_crtc->enabled = true; 107 + } 108 + 109 + static void meson_crtc_atomic_enable(struct drm_crtc *crtc, 110 + struct drm_crtc_state *old_state) 111 + { 112 + struct meson_crtc *meson_crtc = to_meson_crtc(crtc); 113 + struct meson_drm *priv = meson_crtc->priv; 114 + 115 + DRM_DEBUG_DRIVER("\n"); 116 + 117 + if (!meson_crtc->enabled) 118 + meson_crtc_enable(crtc); 119 + 104 120 priv->viu.osd1_enabled = true; 105 121 } 106 122 ··· 125 109 { 126 110 struct meson_crtc *meson_crtc = to_meson_crtc(crtc); 127 111 struct meson_drm *priv = meson_crtc->priv; 112 + 113 + drm_crtc_vblank_off(crtc); 128 114 129 115 priv->viu.osd1_enabled = false; 130 116 priv->viu.osd1_commit = false; ··· 142 124 143 125 crtc->state->event = NULL; 144 126 } 127 + 128 + meson_crtc->enabled = false; 145 129 } 146 130 147 131 static void meson_crtc_atomic_begin(struct drm_crtc *crtc, ··· 151 131 { 152 132 struct meson_crtc *meson_crtc = to_meson_crtc(crtc); 153 133 unsigned long flags; 134 + 135 + if (crtc->state->enable && !meson_crtc->enabled) 136 + meson_crtc_enable(crtc); 154 137 155 138 if (crtc->state->event) { 156 139 WARN_ON(drm_crtc_vblank_get(crtc) != 0);
+1
drivers/gpu/drm/meson/meson_dw_hdmi.c
··· 706 706 .reg_read = meson_dw_hdmi_reg_read, 707 707 .reg_write = meson_dw_hdmi_reg_write, 708 708 .max_register = 0x10000, 709 + .fast_io = true, 709 710 }; 710 711 711 712 static bool meson_hdmi_connector_is_available(struct device *dev)
+12 -7
drivers/gpu/drm/meson/meson_venc.c
··· 71 71 */ 72 72 73 73 /* HHI Registers */ 74 + #define HHI_GCLK_MPEG2 0x148 /* 0x52 offset in data sheet */ 74 75 #define HHI_VDAC_CNTL0 0x2F4 /* 0xbd offset in data sheet */ 75 76 #define HHI_VDAC_CNTL1 0x2F8 /* 0xbe offset in data sheet */ 76 77 #define HHI_HDMI_PHY_CNTL0 0x3a0 /* 0xe8 offset in data sheet */ ··· 715 714 { 5, &meson_hdmi_encp_mode_1080i60 }, 716 715 { 20, &meson_hdmi_encp_mode_1080i50 }, 717 716 { 32, &meson_hdmi_encp_mode_1080p24 }, 717 + { 33, &meson_hdmi_encp_mode_1080p50 }, 718 718 { 34, &meson_hdmi_encp_mode_1080p30 }, 719 719 { 31, &meson_hdmi_encp_mode_1080p50 }, 720 720 { 16, &meson_hdmi_encp_mode_1080p60 }, ··· 856 854 unsigned int sof_lines; 857 855 unsigned int vsync_lines; 858 856 857 + /* Use VENCI for 480i and 576i and double HDMI pixels */ 858 + if (mode->flags & DRM_MODE_FLAG_DBLCLK) { 859 + hdmi_repeat = true; 860 + use_enci = true; 861 + venc_hdmi_latency = 1; 862 + } 863 + 859 864 if (meson_venc_hdmi_supported_vic(vic)) { 860 865 vmode = meson_venc_hdmi_get_vic_vmode(vic); 861 866 if (!vmode) { ··· 874 865 } else { 875 866 meson_venc_hdmi_get_dmt_vmode(mode, &vmode_dmt); 876 867 vmode = &vmode_dmt; 877 - } 878 - 879 - /* Use VENCI for 480i and 576i and double HDMI pixels */ 880 - if (mode->flags & DRM_MODE_FLAG_DBLCLK) { 881 - hdmi_repeat = true; 882 - use_enci = true; 883 - venc_hdmi_latency = 1; 868 + use_enci = false; 884 869 } 885 870 886 871 /* Repeat VENC pixels for 480/576i/p, 720p50/60 and 1080p50/60 */ ··· 1532 1529 void meson_venc_enable_vsync(struct meson_drm *priv) 1533 1530 { 1534 1531 writel_relaxed(2, priv->io_base + _REG(VENC_INTCTRL)); 1532 + regmap_update_bits(priv->hhi, HHI_GCLK_MPEG2, BIT(25), BIT(25)); 1535 1533 } 1536 1534 1537 1535 void meson_venc_disable_vsync(struct meson_drm *priv) 1538 1536 { 1537 + regmap_update_bits(priv->hhi, HHI_GCLK_MPEG2, BIT(25), 0); 1539 1538 writel_relaxed(0, priv->io_base + _REG(VENC_INTCTRL)); 1540 1539 } 1541 1540
+6 -6
drivers/gpu/drm/meson/meson_viu.c
··· 184 184 if (lut_sel == VIU_LUT_OSD_OETF) { 185 185 writel(0, priv->io_base + _REG(addr_port)); 186 186 187 - for (i = 0; i < 20; i++) 187 + for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++) 188 188 writel(r_map[i * 2] | (r_map[i * 2 + 1] << 16), 189 189 priv->io_base + _REG(data_port)); 190 190 191 191 writel(r_map[OSD_OETF_LUT_SIZE - 1] | (g_map[0] << 16), 192 192 priv->io_base + _REG(data_port)); 193 193 194 - for (i = 0; i < 20; i++) 194 + for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++) 195 195 writel(g_map[i * 2 + 1] | (g_map[i * 2 + 2] << 16), 196 196 priv->io_base + _REG(data_port)); 197 197 198 - for (i = 0; i < 20; i++) 198 + for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++) 199 199 writel(b_map[i * 2] | (b_map[i * 2 + 1] << 16), 200 200 priv->io_base + _REG(data_port)); 201 201 ··· 211 211 } else if (lut_sel == VIU_LUT_OSD_EOTF) { 212 212 writel(0, priv->io_base + _REG(addr_port)); 213 213 214 - for (i = 0; i < 20; i++) 214 + for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++) 215 215 writel(r_map[i * 2] | (r_map[i * 2 + 1] << 16), 216 216 priv->io_base + _REG(data_port)); 217 217 218 218 writel(r_map[OSD_EOTF_LUT_SIZE - 1] | (g_map[0] << 16), 219 219 priv->io_base + _REG(data_port)); 220 220 221 - for (i = 0; i < 20; i++) 221 + for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++) 222 222 writel(g_map[i * 2 + 1] | (g_map[i * 2 + 2] << 16), 223 223 priv->io_base + _REG(data_port)); 224 224 225 - for (i = 0; i < 20; i++) 225 + for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++) 226 226 writel(b_map[i * 2] | (b_map[i * 2 + 1] << 16), 227 227 priv->io_base + _REG(data_port)); 228 228
+11 -11
drivers/gpu/drm/omapdrm/dss/dsi.c
··· 5409 5409 5410 5410 /* DSI on OMAP3 doesn't have register DSI_GNQ, set number 5411 5411 * of data to 3 by default */ 5412 - if (dsi->data->quirks & DSI_QUIRK_GNQ) 5412 + if (dsi->data->quirks & DSI_QUIRK_GNQ) { 5413 + dsi_runtime_get(dsi); 5413 5414 /* NB_DATA_LANES */ 5414 5415 dsi->num_lanes_supported = 1 + REG_GET(dsi, DSI_GNQ, 11, 9); 5415 - else 5416 + dsi_runtime_put(dsi); 5417 + } else { 5416 5418 dsi->num_lanes_supported = 3; 5419 + } 5417 5420 5418 5421 r = dsi_init_output(dsi); 5419 5422 if (r) ··· 5429 5426 } 5430 5427 5431 5428 r = of_platform_populate(dev->of_node, NULL, NULL, dev); 5432 - if (r) 5429 + if (r) { 5433 5430 DSSERR("Failed to populate DSI child devices: %d\n", r); 5431 + goto err_uninit_output; 5432 + } 5434 5433 5435 5434 r = component_add(&pdev->dev, &dsi_component_ops); 5436 5435 if (r) 5437 - goto err_uninit_output; 5436 + goto err_of_depopulate; 5438 5437 5439 5438 return 0; 5440 5439 5440 + err_of_depopulate: 5441 + of_platform_depopulate(dev); 5441 5442 err_uninit_output: 5442 5443 dsi_uninit_output(dsi); 5443 5444 err_pm_disable: ··· 5477 5470 /* wait for current handler to finish before turning the DSI off */ 5478 5471 synchronize_irq(dsi->irq); 5479 5472 5480 - dispc_runtime_put(dsi->dss->dispc); 5481 - 5482 5473 return 0; 5483 5474 } 5484 5475 5485 5476 static int dsi_runtime_resume(struct device *dev) 5486 5477 { 5487 5478 struct dsi_data *dsi = dev_get_drvdata(dev); 5488 - int r; 5489 - 5490 - r = dispc_runtime_get(dsi->dss->dispc); 5491 - if (r) 5492 - return r; 5493 5479 5494 5480 dsi->is_enabled = true; 5495 5481 /* ensure the irq handler sees the is_enabled value */
+10 -1
drivers/gpu/drm/omapdrm/dss/dss.c
··· 1484 1484 dss); 1485 1485 1486 1486 /* Add all the child devices as components. */ 1487 + r = of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev); 1488 + if (r) 1489 + goto err_uninit_debugfs; 1490 + 1487 1491 omapdss_gather_components(&pdev->dev); 1488 1492 1489 1493 device_for_each_child(&pdev->dev, &match, dss_add_child_component); 1490 1494 1491 1495 r = component_master_add_with_match(&pdev->dev, &dss_component_ops, match); 1492 1496 if (r) 1493 - goto err_uninit_debugfs; 1497 + goto err_of_depopulate; 1494 1498 1495 1499 return 0; 1500 + 1501 + err_of_depopulate: 1502 + of_platform_depopulate(&pdev->dev); 1496 1503 1497 1504 err_uninit_debugfs: 1498 1505 dss_debugfs_remove_file(dss->debugfs.clk); ··· 1528 1521 static int dss_remove(struct platform_device *pdev) 1529 1522 { 1530 1523 struct dss_device *dss = platform_get_drvdata(pdev); 1524 + 1525 + of_platform_depopulate(&pdev->dev); 1531 1526 1532 1527 component_master_del(&pdev->dev, &dss_component_ops); 1533 1528
+9 -28
drivers/gpu/drm/omapdrm/dss/hdmi4.c
··· 635 635 636 636 hdmi->dss = dss; 637 637 638 - r = hdmi_pll_init(dss, hdmi->pdev, &hdmi->pll, &hdmi->wp); 638 + r = hdmi_runtime_get(hdmi); 639 639 if (r) 640 640 return r; 641 + 642 + r = hdmi_pll_init(dss, hdmi->pdev, &hdmi->pll, &hdmi->wp); 643 + if (r) 644 + goto err_runtime_put; 641 645 642 646 r = hdmi4_cec_init(hdmi->pdev, &hdmi->core, &hdmi->wp); 643 647 if (r) ··· 656 652 hdmi->debugfs = dss_debugfs_create_file(dss, "hdmi", hdmi_dump_regs, 657 653 hdmi); 658 654 655 + hdmi_runtime_put(hdmi); 656 + 659 657 return 0; 660 658 661 659 err_cec_uninit: 662 660 hdmi4_cec_uninit(&hdmi->core); 663 661 err_pll_uninit: 664 662 hdmi_pll_uninit(&hdmi->pll); 663 + err_runtime_put: 664 + hdmi_runtime_put(hdmi); 665 665 return r; 666 666 } 667 667 ··· 841 833 return 0; 842 834 } 843 835 844 - static int hdmi_runtime_suspend(struct device *dev) 845 - { 846 - struct omap_hdmi *hdmi = dev_get_drvdata(dev); 847 - 848 - dispc_runtime_put(hdmi->dss->dispc); 849 - 850 - return 0; 851 - } 852 - 853 - static int hdmi_runtime_resume(struct device *dev) 854 - { 855 - struct omap_hdmi *hdmi = dev_get_drvdata(dev); 856 - int r; 857 - 858 - r = dispc_runtime_get(hdmi->dss->dispc); 859 - if (r < 0) 860 - return r; 861 - 862 - return 0; 863 - } 864 - 865 - static const struct dev_pm_ops hdmi_pm_ops = { 866 - .runtime_suspend = hdmi_runtime_suspend, 867 - .runtime_resume = hdmi_runtime_resume, 868 - }; 869 - 870 836 static const struct of_device_id hdmi_of_match[] = { 871 837 { .compatible = "ti,omap4-hdmi", }, 872 838 {}, ··· 851 869 .remove = hdmi4_remove, 852 870 .driver = { 853 871 .name = "omapdss_hdmi", 854 - .pm = &hdmi_pm_ops, 855 872 .of_match_table = hdmi_of_match, 856 873 .suppress_bind_attrs = true, 857 874 },
-27
drivers/gpu/drm/omapdrm/dss/hdmi5.c
··· 825 825 return 0; 826 826 } 827 827 828 - static int hdmi_runtime_suspend(struct device *dev) 829 - { 830 - struct omap_hdmi *hdmi = dev_get_drvdata(dev); 831 - 832 - dispc_runtime_put(hdmi->dss->dispc); 833 - 834 - return 0; 835 - } 836 - 837 - static int hdmi_runtime_resume(struct device *dev) 838 - { 839 - struct omap_hdmi *hdmi = dev_get_drvdata(dev); 840 - int r; 841 - 842 - r = dispc_runtime_get(hdmi->dss->dispc); 843 - if (r < 0) 844 - return r; 845 - 846 - return 0; 847 - } 848 - 849 - static const struct dev_pm_ops hdmi_pm_ops = { 850 - .runtime_suspend = hdmi_runtime_suspend, 851 - .runtime_resume = hdmi_runtime_resume, 852 - }; 853 - 854 828 static const struct of_device_id hdmi_of_match[] = { 855 829 { .compatible = "ti,omap5-hdmi", }, 856 830 { .compatible = "ti,dra7-hdmi", }, ··· 836 862 .remove = hdmi5_remove, 837 863 .driver = { 838 864 .name = "omapdss_hdmi5", 839 - .pm = &hdmi_pm_ops, 840 865 .of_match_table = hdmi_of_match, 841 866 .suppress_bind_attrs = true, 842 867 },
-7
drivers/gpu/drm/omapdrm/dss/venc.c
··· 946 946 if (venc->tv_dac_clk) 947 947 clk_disable_unprepare(venc->tv_dac_clk); 948 948 949 - dispc_runtime_put(venc->dss->dispc); 950 - 951 949 return 0; 952 950 } 953 951 954 952 static int venc_runtime_resume(struct device *dev) 955 953 { 956 954 struct venc_device *venc = dev_get_drvdata(dev); 957 - int r; 958 - 959 - r = dispc_runtime_get(venc->dss->dispc); 960 - if (r < 0) 961 - return r; 962 955 963 956 if (venc->tv_dac_clk) 964 957 clk_prepare_enable(venc->tv_dac_clk);
+6
drivers/gpu/drm/omapdrm/omap_crtc.c
··· 350 350 static void omap_crtc_atomic_enable(struct drm_crtc *crtc, 351 351 struct drm_crtc_state *old_state) 352 352 { 353 + struct omap_drm_private *priv = crtc->dev->dev_private; 353 354 struct omap_crtc *omap_crtc = to_omap_crtc(crtc); 354 355 int ret; 355 356 356 357 DBG("%s", omap_crtc->name); 358 + 359 + priv->dispc_ops->runtime_get(priv->dispc); 357 360 358 361 spin_lock_irq(&crtc->dev->event_lock); 359 362 drm_crtc_vblank_on(crtc); ··· 370 367 static void omap_crtc_atomic_disable(struct drm_crtc *crtc, 371 368 struct drm_crtc_state *old_state) 372 369 { 370 + struct omap_drm_private *priv = crtc->dev->dev_private; 373 371 struct omap_crtc *omap_crtc = to_omap_crtc(crtc); 374 372 375 373 DBG("%s", omap_crtc->name); ··· 383 379 spin_unlock_irq(&crtc->dev->event_lock); 384 380 385 381 drm_crtc_vblank_off(crtc); 382 + 383 + priv->dispc_ops->runtime_put(priv->dispc); 386 384 } 387 385 388 386 static enum drm_mode_status omap_crtc_mode_valid(struct drm_crtc *crtc,
+18 -3
drivers/gpu/drm/rcar-du/rcar_du_group.c
··· 202 202 203 203 static void __rcar_du_group_start_stop(struct rcar_du_group *rgrp, bool start) 204 204 { 205 - struct rcar_du_crtc *rcrtc = &rgrp->dev->crtcs[rgrp->index * 2]; 205 + struct rcar_du_device *rcdu = rgrp->dev; 206 206 207 - rcar_du_crtc_dsysr_clr_set(rcrtc, DSYSR_DRES | DSYSR_DEN, 208 - start ? DSYSR_DEN : DSYSR_DRES); 207 + /* 208 + * Group start/stop is controlled by the DRES and DEN bits of DSYSR0 209 + * for the first group and DSYSR2 for the second group. On most DU 210 + * instances, this maps to the first CRTC of the group, and we can just 211 + * use rcar_du_crtc_dsysr_clr_set() to access the correct DSYSR. On 212 + * M3-N, however, DU2 doesn't exist, but DSYSR2 does. We thus need to 213 + * access the register directly using group read/write. 214 + */ 215 + if (rcdu->info->channels_mask & BIT(rgrp->index * 2)) { 216 + struct rcar_du_crtc *rcrtc = &rgrp->dev->crtcs[rgrp->index * 2]; 217 + 218 + rcar_du_crtc_dsysr_clr_set(rcrtc, DSYSR_DRES | DSYSR_DEN, 219 + start ? DSYSR_DEN : DSYSR_DRES); 220 + } else { 221 + rcar_du_group_write(rgrp, DSYSR, 222 + start ? DSYSR_DEN : DSYSR_DRES); 223 + } 209 224 } 210 225 211 226 void rcar_du_group_start_stop(struct rcar_du_group *rgrp, bool start)
+2 -2
drivers/gpu/drm/sun4i/sun4i_lvds.c
··· 75 75 76 76 DRM_DEBUG_DRIVER("Enabling LVDS output\n"); 77 77 78 - if (!IS_ERR(tcon->panel)) { 78 + if (tcon->panel) { 79 79 drm_panel_prepare(tcon->panel); 80 80 drm_panel_enable(tcon->panel); 81 81 } ··· 88 88 89 89 DRM_DEBUG_DRIVER("Disabling LVDS output\n"); 90 90 91 - if (!IS_ERR(tcon->panel)) { 91 + if (tcon->panel) { 92 92 drm_panel_disable(tcon->panel); 93 93 drm_panel_unprepare(tcon->panel); 94 94 }
+2 -2
drivers/gpu/drm/sun4i/sun4i_rgb.c
··· 135 135 136 136 DRM_DEBUG_DRIVER("Enabling RGB output\n"); 137 137 138 - if (!IS_ERR(tcon->panel)) { 138 + if (tcon->panel) { 139 139 drm_panel_prepare(tcon->panel); 140 140 drm_panel_enable(tcon->panel); 141 141 } ··· 148 148 149 149 DRM_DEBUG_DRIVER("Disabling RGB output\n"); 150 150 151 - if (!IS_ERR(tcon->panel)) { 151 + if (tcon->panel) { 152 152 drm_panel_disable(tcon->panel); 153 153 drm_panel_unprepare(tcon->panel); 154 154 }
+3 -2
drivers/gpu/drm/sun4i/sun4i_tcon.c
··· 491 491 sun4i_tcon0_mode_set_common(tcon, mode); 492 492 493 493 /* Set dithering if needed */ 494 - sun4i_tcon0_mode_set_dithering(tcon, tcon->panel->connector); 494 + if (tcon->panel) 495 + sun4i_tcon0_mode_set_dithering(tcon, tcon->panel->connector); 495 496 496 497 /* Adjust clock delay */ 497 498 clk_delay = sun4i_tcon_get_clk_delay(mode, 0); ··· 556 555 * Following code is a way to avoid quirks all around TCON 557 556 * and DOTCLOCK drivers. 558 557 */ 559 - if (!IS_ERR(tcon->panel)) { 558 + if (tcon->panel) { 560 559 struct drm_panel *panel = tcon->panel; 561 560 struct drm_connector *connector = panel->connector; 562 561 struct drm_display_info display_info = connector->display_info;
+6
drivers/gpu/drm/vc4/vc4_kms.c
··· 214 214 return 0; 215 215 } 216 216 217 + /* We know for sure we don't want an async update here. Set 218 + * state->legacy_cursor_update to false to prevent 219 + * drm_atomic_helper_setup_commit() from auto-completing 220 + * commit->flip_done. 221 + */ 222 + state->legacy_cursor_update = false; 217 223 ret = drm_atomic_helper_setup_commit(state, nonblock); 218 224 if (ret) 219 225 return ret;
+13 -2
drivers/gpu/drm/vc4/vc4_plane.c
··· 807 807 static void vc4_plane_atomic_async_update(struct drm_plane *plane, 808 808 struct drm_plane_state *state) 809 809 { 810 - struct vc4_plane_state *vc4_state = to_vc4_plane_state(plane->state); 810 + struct vc4_plane_state *vc4_state, *new_vc4_state; 811 811 812 812 if (plane->state->fb != state->fb) { 813 813 vc4_plane_async_set_fb(plane, state->fb); ··· 828 828 plane->state->src_y = state->src_y; 829 829 830 830 /* Update the display list based on the new crtc_x/y. */ 831 - vc4_plane_atomic_check(plane, plane->state); 831 + vc4_plane_atomic_check(plane, state); 832 + 833 + new_vc4_state = to_vc4_plane_state(state); 834 + vc4_state = to_vc4_plane_state(plane->state); 835 + 836 + /* Update the current vc4_state pos0, pos2 and ptr0 dlist entries. */ 837 + vc4_state->dlist[vc4_state->pos0_offset] = 838 + new_vc4_state->dlist[vc4_state->pos0_offset]; 839 + vc4_state->dlist[vc4_state->pos2_offset] = 840 + new_vc4_state->dlist[vc4_state->pos2_offset]; 841 + vc4_state->dlist[vc4_state->ptr0_offset] = 842 + new_vc4_state->dlist[vc4_state->ptr0_offset]; 832 843 833 844 /* Note that we can't just call vc4_plane_write_dlist() 834 845 * because that would smash the context data that the HVS is
+3
drivers/gpu/vga/vga_switcheroo.c
··· 380 380 mutex_unlock(&vgasr_mutex); 381 381 return -EINVAL; 382 382 } 383 + /* notify if GPU has been already bound */ 384 + if (ops->gpu_bound) 385 + ops->gpu_bound(pdev, id); 383 386 } 384 387 mutex_unlock(&vgasr_mutex); 385 388
+18
drivers/hid/hid-alps.c
··· 660 660 return ret; 661 661 } 662 662 663 + static int alps_sp_open(struct input_dev *dev) 664 + { 665 + struct hid_device *hid = input_get_drvdata(dev); 666 + 667 + return hid_hw_open(hid); 668 + } 669 + 670 + static void alps_sp_close(struct input_dev *dev) 671 + { 672 + struct hid_device *hid = input_get_drvdata(dev); 673 + 674 + hid_hw_close(hid); 675 + } 676 + 663 677 static int alps_input_configured(struct hid_device *hdev, struct hid_input *hi) 664 678 { 665 679 struct alps_dev *data = hid_get_drvdata(hdev); ··· 746 732 input2->id.product = input->id.product; 747 733 input2->id.version = input->id.version; 748 734 input2->dev.parent = input->dev.parent; 735 + 736 + input_set_drvdata(input2, hdev); 737 + input2->open = alps_sp_open; 738 + input2->close = alps_sp_close; 749 739 750 740 __set_bit(EV_KEY, input2->evbit); 751 741 data->sp_btn_cnt = (data->sp_btn_info & 0x0F);
+3
drivers/hid/hid-asus.c
··· 359 359 u32 value; 360 360 int ret; 361 361 362 + if (!IS_ENABLED(CONFIG_ASUS_WMI)) 363 + return false; 364 + 362 365 ret = asus_wmi_evaluate_method(ASUS_WMI_METHODID_DSTS2, 363 366 ASUS_WMI_DEVID_KBD_BACKLIGHT, 0, &value); 364 367 hid_dbg(hdev, "WMI backlight check: rc %d value %x", ret, value);
+11
drivers/hid/hid-ids.h
··· 275 275 276 276 #define USB_VENDOR_ID_CIDC 0x1677 277 277 278 + #define I2C_VENDOR_ID_CIRQUE 0x0488 279 + #define I2C_PRODUCT_ID_CIRQUE_121F 0x121F 280 + 278 281 #define USB_VENDOR_ID_CJTOUCH 0x24b8 279 282 #define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0020 0x0020 280 283 #define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0040 0x0040 ··· 710 707 #define USB_VENDOR_ID_LG 0x1fd2 711 708 #define USB_DEVICE_ID_LG_MULTITOUCH 0x0064 712 709 #define USB_DEVICE_ID_LG_MELFAS_MT 0x6007 710 + #define I2C_DEVICE_ID_LG_8001 0x8001 713 711 714 712 #define USB_VENDOR_ID_LOGITECH 0x046d 715 713 #define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e ··· 809 805 #define USB_DEVICE_ID_MS_TYPE_COVER_2 0x07a9 810 806 #define USB_DEVICE_ID_MS_POWER_COVER 0x07da 811 807 #define USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER 0x02fd 808 + #define USB_DEVICE_ID_MS_PIXART_MOUSE 0x00cb 812 809 813 810 #define USB_VENDOR_ID_MOJO 0x8282 814 811 #define USB_DEVICE_ID_RETRO_ADAPTER 0x3201 ··· 932 927 #define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3003 0x3003 933 928 #define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3008 0x3008 934 929 930 + #define I2C_VENDOR_ID_RAYDIUM 0x2386 931 + #define I2C_PRODUCT_ID_RAYDIUM_4B33 0x4b33 932 + 935 933 #define USB_VENDOR_ID_RAZER 0x1532 936 934 #define USB_DEVICE_ID_RAZER_BLADE_14 0x011D 937 935 ··· 1048 1040 #define USB_VENDOR_ID_SYMBOL 0x05e0 1049 1041 #define USB_DEVICE_ID_SYMBOL_SCANNER_1 0x0800 1050 1042 #define USB_DEVICE_ID_SYMBOL_SCANNER_2 0x1300 1043 + #define USB_DEVICE_ID_SYMBOL_SCANNER_3 0x1200 1051 1044 1052 1045 #define USB_VENDOR_ID_SYNAPTICS 0x06cb 1053 1046 #define USB_DEVICE_ID_SYNAPTICS_TP 0x0001 ··· 1210 1201 #define USB_DEVICE_ID_PRIMAX_MOUSE_4D22 0x4d22 1211 1202 #define USB_DEVICE_ID_PRIMAX_KEYBOARD 0x4e05 1212 1203 #define USB_DEVICE_ID_PRIMAX_REZEL 0x4e72 1204 + #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F 0x4d0f 1205 + #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22 0x4e22 1213 1206 1214 1207 1215 1208 #define USB_VENDOR_ID_RISO_KAGAKU 0x1294 /* Riso Kagaku Corp. */
+3 -44
drivers/hid/hid-input.c
··· 325 325 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, 326 326 USB_DEVICE_ID_ELECOM_BM084), 327 327 HID_BATTERY_QUIRK_IGNORE }, 328 + { HID_USB_DEVICE(USB_VENDOR_ID_SYMBOL, 329 + USB_DEVICE_ID_SYMBOL_SCANNER_3), 330 + HID_BATTERY_QUIRK_IGNORE }, 328 331 {} 329 332 }; 330 333 ··· 1841 1838 } 1842 1839 EXPORT_SYMBOL_GPL(hidinput_disconnect); 1843 1840 1844 - /** 1845 - * hid_scroll_counter_handle_scroll() - Send high- and low-resolution scroll 1846 - * events given a high-resolution wheel 1847 - * movement. 1848 - * @counter: a hid_scroll_counter struct describing the wheel. 1849 - * @hi_res_value: the movement of the wheel, in the mouse's high-resolution 1850 - * units. 1851 - * 1852 - * Given a high-resolution movement, this function converts the movement into 1853 - * microns and emits high-resolution scroll events for the input device. It also 1854 - * uses the multiplier from &struct hid_scroll_counter to emit low-resolution 1855 - * scroll events when appropriate for backwards-compatibility with userspace 1856 - * input libraries. 1857 - */ 1858 - void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter, 1859 - int hi_res_value) 1860 - { 1861 - int low_res_value, remainder, multiplier; 1862 - 1863 - input_report_rel(counter->dev, REL_WHEEL_HI_RES, 1864 - hi_res_value * counter->microns_per_hi_res_unit); 1865 - 1866 - /* 1867 - * Update the low-res remainder with the high-res value, 1868 - * but reset if the direction has changed. 1869 - */ 1870 - remainder = counter->remainder; 1871 - if ((remainder ^ hi_res_value) < 0) 1872 - remainder = 0; 1873 - remainder += hi_res_value; 1874 - 1875 - /* 1876 - * Then just use the resolution multiplier to see if 1877 - * we should send a low-res (aka regular wheel) event. 
1878 - */ 1879 - multiplier = counter->resolution_multiplier; 1880 - low_res_value = remainder / multiplier; 1881 - remainder -= low_res_value * multiplier; 1882 - counter->remainder = remainder; 1883 - 1884 - if (low_res_value) 1885 - input_report_rel(counter->dev, REL_WHEEL, low_res_value); 1886 - } 1887 - EXPORT_SYMBOL_GPL(hid_scroll_counter_handle_scroll);
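The `hid_scroll_counter_handle_scroll()` helper removed above carried a small but real algorithm: accumulate high-resolution wheel movement, reset the carry when the scroll direction flips, and emit one legacy `REL_WHEEL` notch per resolution-multiplier's worth of units. A minimal userspace sketch of that remainder logic (names are illustrative, not the kernel's; the input-event reporting is replaced by a return value):

```c
#include <assert.h>

/* Remainder-based conversion from high-res wheel units to low-res
 * notches, mirroring the logic of the removed kernel helper. */
struct scroll_state {
	int remainder;   /* carried high-res units */
	int multiplier;  /* high-res units per low-res notch */
};

static int consume_hi_res(struct scroll_state *s, int hi_res_value)
{
	int low_res;

	/* Reset the carried remainder when the sign (direction) flips. */
	if ((s->remainder ^ hi_res_value) < 0)
		s->remainder = 0;
	s->remainder += hi_res_value;

	low_res = s->remainder / s->multiplier;
	s->remainder -= low_res * s->multiplier;
	return low_res;  /* number of REL_WHEEL notches to report */
}
```

With a multiplier of 8, three movements of 3 units yield a single low-res notch on the third call, with 1 unit carried over.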
+27 -282
drivers/hid/hid-logitech-hidpp.c
··· 64 64 #define HIDPP_QUIRK_NO_HIDINPUT BIT(23) 65 65 #define HIDPP_QUIRK_FORCE_OUTPUT_REPORTS BIT(24) 66 66 #define HIDPP_QUIRK_UNIFYING BIT(25) 67 - #define HIDPP_QUIRK_HI_RES_SCROLL_1P0 BIT(26) 68 - #define HIDPP_QUIRK_HI_RES_SCROLL_X2120 BIT(27) 69 - #define HIDPP_QUIRK_HI_RES_SCROLL_X2121 BIT(28) 70 - 71 - /* Convenience constant to check for any high-res support. */ 72 - #define HIDPP_QUIRK_HI_RES_SCROLL (HIDPP_QUIRK_HI_RES_SCROLL_1P0 | \ 73 - HIDPP_QUIRK_HI_RES_SCROLL_X2120 | \ 74 - HIDPP_QUIRK_HI_RES_SCROLL_X2121) 75 67 76 68 #define HIDPP_QUIRK_DELAYED_INIT HIDPP_QUIRK_NO_HIDINPUT 77 69 ··· 149 157 unsigned long capabilities; 150 158 151 159 struct hidpp_battery battery; 152 - struct hid_scroll_counter vertical_wheel_counter; 153 160 }; 154 161 155 162 /* HID++ 1.0 error codes */ ··· 400 409 #define HIDPP_SET_LONG_REGISTER 0x82 401 410 #define HIDPP_GET_LONG_REGISTER 0x83 402 411 403 - /** 404 - * hidpp10_set_register_bit() - Sets a single bit in a HID++ 1.0 register. 405 - * @hidpp_dev: the device to set the register on. 406 - * @register_address: the address of the register to modify. 407 - * @byte: the byte of the register to modify. Should be less than 3. 408 - * Return: 0 if successful, otherwise a negative error code. 
409 - */ 410 - static int hidpp10_set_register_bit(struct hidpp_device *hidpp_dev, 411 - u8 register_address, u8 byte, u8 bit) 412 + #define HIDPP_REG_GENERAL 0x00 413 + 414 + static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev) 412 415 { 413 416 struct hidpp_report response; 414 417 int ret; 415 418 u8 params[3] = { 0 }; 416 419 417 420 ret = hidpp_send_rap_command_sync(hidpp_dev, 418 - REPORT_ID_HIDPP_SHORT, 419 - HIDPP_GET_REGISTER, 420 - register_address, 421 - NULL, 0, &response); 421 + REPORT_ID_HIDPP_SHORT, 422 + HIDPP_GET_REGISTER, 423 + HIDPP_REG_GENERAL, 424 + NULL, 0, &response); 422 425 if (ret) 423 426 return ret; 424 427 425 428 memcpy(params, response.rap.params, 3); 426 429 427 - params[byte] |= BIT(bit); 430 + /* Set the battery bit */ 431 + params[0] |= BIT(4); 428 432 429 433 return hidpp_send_rap_command_sync(hidpp_dev, 430 - REPORT_ID_HIDPP_SHORT, 431 - HIDPP_SET_REGISTER, 432 - register_address, 433 - params, 3, &response); 434 - } 435 - 436 - 437 - #define HIDPP_REG_GENERAL 0x00 438 - 439 - static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev) 440 - { 441 - return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_GENERAL, 0, 4); 442 - } 443 - 444 - #define HIDPP_REG_FEATURES 0x01 445 - 446 - /* On HID++ 1.0 devices, high-res scroll was called "scrolling acceleration". 
*/ 447 - static int hidpp10_enable_scrolling_acceleration(struct hidpp_device *hidpp_dev) 448 - { 449 - return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_FEATURES, 0, 6); 434 + REPORT_ID_HIDPP_SHORT, 435 + HIDPP_SET_REGISTER, 436 + HIDPP_REG_GENERAL, 437 + params, 3, &response); 450 438 } 451 439 452 440 #define HIDPP_REG_BATTERY_STATUS 0x07 ··· 1134 1164 } 1135 1165 1136 1166 return ret; 1137 - } 1138 - 1139 - /* -------------------------------------------------------------------------- */ 1140 - /* 0x2120: Hi-resolution scrolling */ 1141 - /* -------------------------------------------------------------------------- */ 1142 - 1143 - #define HIDPP_PAGE_HI_RESOLUTION_SCROLLING 0x2120 1144 - 1145 - #define CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE 0x10 1146 - 1147 - static int hidpp_hrs_set_highres_scrolling_mode(struct hidpp_device *hidpp, 1148 - bool enabled, u8 *multiplier) 1149 - { 1150 - u8 feature_index; 1151 - u8 feature_type; 1152 - int ret; 1153 - u8 params[1]; 1154 - struct hidpp_report response; 1155 - 1156 - ret = hidpp_root_get_feature(hidpp, 1157 - HIDPP_PAGE_HI_RESOLUTION_SCROLLING, 1158 - &feature_index, 1159 - &feature_type); 1160 - if (ret) 1161 - return ret; 1162 - 1163 - params[0] = enabled ? 
BIT(0) : 0; 1164 - ret = hidpp_send_fap_command_sync(hidpp, feature_index, 1165 - CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE, 1166 - params, sizeof(params), &response); 1167 - if (ret) 1168 - return ret; 1169 - *multiplier = response.fap.params[1]; 1170 - return 0; 1171 - } 1172 - 1173 - /* -------------------------------------------------------------------------- */ 1174 - /* 0x2121: HiRes Wheel */ 1175 - /* -------------------------------------------------------------------------- */ 1176 - 1177 - #define HIDPP_PAGE_HIRES_WHEEL 0x2121 1178 - 1179 - #define CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY 0x00 1180 - #define CMD_HIRES_WHEEL_SET_WHEEL_MODE 0x20 1181 - 1182 - static int hidpp_hrw_get_wheel_capability(struct hidpp_device *hidpp, 1183 - u8 *multiplier) 1184 - { 1185 - u8 feature_index; 1186 - u8 feature_type; 1187 - int ret; 1188 - struct hidpp_report response; 1189 - 1190 - ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL, 1191 - &feature_index, &feature_type); 1192 - if (ret) 1193 - goto return_default; 1194 - 1195 - ret = hidpp_send_fap_command_sync(hidpp, feature_index, 1196 - CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY, 1197 - NULL, 0, &response); 1198 - if (ret) 1199 - goto return_default; 1200 - 1201 - *multiplier = response.fap.params[0]; 1202 - return 0; 1203 - return_default: 1204 - hid_warn(hidpp->hid_dev, 1205 - "Couldn't get wheel multiplier (error %d), assuming %d.\n", 1206 - ret, *multiplier); 1207 - return ret; 1208 - } 1209 - 1210 - static int hidpp_hrw_set_wheel_mode(struct hidpp_device *hidpp, bool invert, 1211 - bool high_resolution, bool use_hidpp) 1212 - { 1213 - u8 feature_index; 1214 - u8 feature_type; 1215 - int ret; 1216 - u8 params[1]; 1217 - struct hidpp_report response; 1218 - 1219 - ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL, 1220 - &feature_index, &feature_type); 1221 - if (ret) 1222 - return ret; 1223 - 1224 - params[0] = (invert ? BIT(2) : 0) | 1225 - (high_resolution ? 
BIT(1) : 0) | 1226 - (use_hidpp ? BIT(0) : 0); 1227 - 1228 - return hidpp_send_fap_command_sync(hidpp, feature_index, 1229 - CMD_HIRES_WHEEL_SET_WHEEL_MODE, 1230 - params, sizeof(params), &response); 1231 1167 } 1232 1168 1233 1169 /* -------------------------------------------------------------------------- */ ··· 2399 2523 input_report_rel(mydata->input, REL_Y, v); 2400 2524 2401 2525 v = hid_snto32(data[6], 8); 2402 - hid_scroll_counter_handle_scroll( 2403 - &hidpp->vertical_wheel_counter, v); 2526 + input_report_rel(mydata->input, REL_WHEEL, v); 2404 2527 2405 2528 input_sync(mydata->input); 2406 2529 } ··· 2528 2653 } 2529 2654 2530 2655 /* -------------------------------------------------------------------------- */ 2531 - /* High-resolution scroll wheels */ 2532 - /* -------------------------------------------------------------------------- */ 2533 - 2534 - /** 2535 - * struct hi_res_scroll_info - Stores info on a device's high-res scroll wheel. 2536 - * @product_id: the HID product ID of the device being described. 2537 - * @microns_per_hi_res_unit: the distance moved by the user's finger for each 2538 - * high-resolution unit reported by the device, in 2539 - * 256ths of a millimetre. 
2540 - */ 2541 - struct hi_res_scroll_info { 2542 - __u32 product_id; 2543 - int microns_per_hi_res_unit; 2544 - }; 2545 - 2546 - static struct hi_res_scroll_info hi_res_scroll_devices[] = { 2547 - { /* Anywhere MX */ 2548 - .product_id = 0x1017, .microns_per_hi_res_unit = 445 }, 2549 - { /* Performance MX */ 2550 - .product_id = 0x101a, .microns_per_hi_res_unit = 406 }, 2551 - { /* M560 */ 2552 - .product_id = 0x402d, .microns_per_hi_res_unit = 435 }, 2553 - { /* MX Master 2S */ 2554 - .product_id = 0x4069, .microns_per_hi_res_unit = 406 }, 2555 - }; 2556 - 2557 - static int hi_res_scroll_look_up_microns(__u32 product_id) 2558 - { 2559 - int i; 2560 - int num_devices = sizeof(hi_res_scroll_devices) 2561 - / sizeof(hi_res_scroll_devices[0]); 2562 - for (i = 0; i < num_devices; i++) { 2563 - if (hi_res_scroll_devices[i].product_id == product_id) 2564 - return hi_res_scroll_devices[i].microns_per_hi_res_unit; 2565 - } 2566 - /* We don't have a value for this device, so use a sensible default. 
*/ 2567 - return 406; 2568 - } 2569 - 2570 - static int hi_res_scroll_enable(struct hidpp_device *hidpp) 2571 - { 2572 - int ret; 2573 - u8 multiplier = 8; 2574 - 2575 - if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2121) { 2576 - ret = hidpp_hrw_set_wheel_mode(hidpp, false, true, false); 2577 - hidpp_hrw_get_wheel_capability(hidpp, &multiplier); 2578 - } else if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2120) { 2579 - ret = hidpp_hrs_set_highres_scrolling_mode(hidpp, true, 2580 - &multiplier); 2581 - } else /* if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_1P0) */ 2582 - ret = hidpp10_enable_scrolling_acceleration(hidpp); 2583 - 2584 - if (ret) 2585 - return ret; 2586 - 2587 - hidpp->vertical_wheel_counter.resolution_multiplier = multiplier; 2588 - hidpp->vertical_wheel_counter.microns_per_hi_res_unit = 2589 - hi_res_scroll_look_up_microns(hidpp->hid_dev->product); 2590 - hid_info(hidpp->hid_dev, "multiplier = %d, microns = %d\n", 2591 - multiplier, 2592 - hidpp->vertical_wheel_counter.microns_per_hi_res_unit); 2593 - return 0; 2594 - } 2595 - 2596 - /* -------------------------------------------------------------------------- */ 2597 2656 /* Generic HID++ devices */ 2598 2657 /* -------------------------------------------------------------------------- */ 2599 2658 ··· 2572 2763 wtp_populate_input(hidpp, input, origin_is_hid_core); 2573 2764 else if (hidpp->quirks & HIDPP_QUIRK_CLASS_M560) 2574 2765 m560_populate_input(hidpp, input, origin_is_hid_core); 2575 - 2576 - if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) { 2577 - input_set_capability(input, EV_REL, REL_WHEEL_HI_RES); 2578 - hidpp->vertical_wheel_counter.dev = input; 2579 - } 2580 2766 } 2581 2767 2582 2768 static int hidpp_input_configured(struct hid_device *hdev, ··· 2688 2884 return m560_raw_event(hdev, data, size); 2689 2885 2690 2886 return 0; 2691 - } 2692 - 2693 - static int hidpp_event(struct hid_device *hdev, struct hid_field *field, 2694 - struct hid_usage *usage, __s32 value) 2695 - { 2696 - 
/* This function will only be called for scroll events, due to the 2697 - * restriction imposed in hidpp_usages. 2698 - */ 2699 - struct hidpp_device *hidpp = hid_get_drvdata(hdev); 2700 - struct hid_scroll_counter *counter = &hidpp->vertical_wheel_counter; 2701 - /* A scroll event may occur before the multiplier has been retrieved or 2702 - * the input device set, or high-res scroll enabling may fail. In such 2703 - * cases we must return early (falling back to default behaviour) to 2704 - * avoid a crash in hid_scroll_counter_handle_scroll. 2705 - */ 2706 - if (!(hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) || value == 0 2707 - || counter->dev == NULL || counter->resolution_multiplier == 0) 2708 - return 0; 2709 - 2710 - hid_scroll_counter_handle_scroll(counter, value); 2711 - return 1; 2712 2887 } 2713 2888 2714 2889 static int hidpp_initialize_battery(struct hidpp_device *hidpp) ··· 2901 3118 if (hidpp->battery.ps) 2902 3119 power_supply_changed(hidpp->battery.ps); 2903 3120 2904 - if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) 2905 - hi_res_scroll_enable(hidpp); 2906 - 2907 3121 if (!(hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT) || hidpp->delayed_input) 2908 3122 /* if the input nodes are already created, we can stop now */ 2909 3123 return; ··· 3086 3306 mutex_destroy(&hidpp->send_mutex); 3087 3307 } 3088 3308 3089 - #define LDJ_DEVICE(product) \ 3090 - HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, \ 3091 - USB_VENDOR_ID_LOGITECH, (product)) 3092 - 3093 3309 static const struct hid_device_id hidpp_devices[] = { 3094 3310 { /* wireless touchpad */ 3095 - LDJ_DEVICE(0x4011), 3311 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3312 + USB_VENDOR_ID_LOGITECH, 0x4011), 3096 3313 .driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT | 3097 3314 HIDPP_QUIRK_WTP_PHYSICAL_BUTTONS }, 3098 3315 { /* wireless touchpad T650 */ 3099 - LDJ_DEVICE(0x4101), 3316 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3317 + USB_VENDOR_ID_LOGITECH, 0x4101), 3100 3318 
.driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT }, 3101 3319 { /* wireless touchpad T651 */ 3102 3320 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 3103 3321 USB_DEVICE_ID_LOGITECH_T651), 3104 3322 .driver_data = HIDPP_QUIRK_CLASS_WTP }, 3105 - { /* Mouse Logitech Anywhere MX */ 3106 - LDJ_DEVICE(0x1017), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 }, 3107 - { /* Mouse Logitech Cube */ 3108 - LDJ_DEVICE(0x4010), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 }, 3109 - { /* Mouse Logitech M335 */ 3110 - LDJ_DEVICE(0x4050), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3111 - { /* Mouse Logitech M515 */ 3112 - LDJ_DEVICE(0x4007), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 }, 3113 3323 { /* Mouse logitech M560 */ 3114 - LDJ_DEVICE(0x402d), 3115 - .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560 3116 - | HIDPP_QUIRK_HI_RES_SCROLL_X2120 }, 3117 - { /* Mouse Logitech M705 (firmware RQM17) */ 3118 - LDJ_DEVICE(0x101b), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 }, 3119 - { /* Mouse Logitech M705 (firmware RQM67) */ 3120 - LDJ_DEVICE(0x406d), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3121 - { /* Mouse Logitech M720 */ 3122 - LDJ_DEVICE(0x405e), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3123 - { /* Mouse Logitech MX Anywhere 2 */ 3124 - LDJ_DEVICE(0x404a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3125 - { LDJ_DEVICE(0xb013), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3126 - { LDJ_DEVICE(0xb018), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3127 - { LDJ_DEVICE(0xb01f), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3128 - { /* Mouse Logitech MX Anywhere 2S */ 3129 - LDJ_DEVICE(0x406a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3130 - { /* Mouse Logitech MX Master */ 3131 - LDJ_DEVICE(0x4041), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3132 - { LDJ_DEVICE(0x4060), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3133 - { LDJ_DEVICE(0x4071), .driver_data = 
HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3134 - { /* Mouse Logitech MX Master 2S */ 3135 - LDJ_DEVICE(0x4069), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3136 - { /* Mouse Logitech Performance MX */ 3137 - LDJ_DEVICE(0x101a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 }, 3324 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3325 + USB_VENDOR_ID_LOGITECH, 0x402d), 3326 + .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560 }, 3138 3327 { /* Keyboard logitech K400 */ 3139 - LDJ_DEVICE(0x4024), 3328 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3329 + USB_VENDOR_ID_LOGITECH, 0x4024), 3140 3330 .driver_data = HIDPP_QUIRK_CLASS_K400 }, 3141 3331 { /* Solar Keyboard Logitech K750 */ 3142 - LDJ_DEVICE(0x4002), 3332 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3333 + USB_VENDOR_ID_LOGITECH, 0x4002), 3143 3334 .driver_data = HIDPP_QUIRK_CLASS_K750 }, 3144 3335 3145 - { LDJ_DEVICE(HID_ANY_ID) }, 3336 + { HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3337 + USB_VENDOR_ID_LOGITECH, HID_ANY_ID)}, 3146 3338 3147 3339 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_G920_WHEEL), 3148 3340 .driver_data = HIDPP_QUIRK_CLASS_G920 | HIDPP_QUIRK_FORCE_OUTPUT_REPORTS}, ··· 3123 3371 3124 3372 MODULE_DEVICE_TABLE(hid, hidpp_devices); 3125 3373 3126 - static const struct hid_usage_id hidpp_usages[] = { 3127 - { HID_GD_WHEEL, EV_REL, REL_WHEEL }, 3128 - { HID_ANY_ID - 1, HID_ANY_ID - 1, HID_ANY_ID - 1} 3129 - }; 3130 - 3131 3374 static struct hid_driver hidpp_driver = { 3132 3375 .name = "logitech-hidpp-device", 3133 3376 .id_table = hidpp_devices, 3134 3377 .probe = hidpp_probe, 3135 3378 .remove = hidpp_remove, 3136 3379 .raw_event = hidpp_raw_event, 3137 - .usage_table = hidpp_usages, 3138 - .event = hidpp_event, 3139 3380 .input_configured = hidpp_input_configured, 3140 3381 .input_mapping = hidpp_input_mapping, 3141 3382 .input_mapped = hidpp_input_mapped,
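The inlined `hidpp10_enable_battery_reporting()` above is a classic HID++ 1.0 read-modify-write: fetch the 3-byte GENERAL register over the RAP transport, OR in the battery-reporting bit (bit 4 of byte 0), and write the whole register back. A sketch of that pattern with the transport stubbed out by a local array (all names here are illustrative, not the driver's API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

static uint8_t fake_general_reg[3];  /* stands in for the device register */

static int read_general(uint8_t params[3])
{
	memcpy(params, fake_general_reg, 3);  /* HIDPP_GET_REGISTER */
	return 0;
}

static int write_general(const uint8_t params[3])
{
	memcpy(fake_general_reg, params, 3);  /* HIDPP_SET_REGISTER */
	return 0;
}

/* Read-modify-write: preserve all other bits in the register. */
static int enable_battery_reporting(void)
{
	uint8_t params[3] = { 0 };
	int ret = read_general(params);

	if (ret)
		return ret;
	params[0] |= 1u << 4;  /* battery-reporting bit */
	return write_general(params);
}
```

The read step matters: writing a fixed value would clobber whatever other feature bits the device already has set in the same register.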
+6
drivers/hid/hid-multitouch.c
··· 1814 1814 MT_USB_DEVICE(USB_VENDOR_ID_CHUNGHWAT, 1815 1815 USB_DEVICE_ID_CHUNGHWAT_MULTITOUCH) }, 1816 1816 1817 + /* Cirque devices */ 1818 + { .driver_data = MT_CLS_WIN_8_DUAL, 1819 + HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, 1820 + I2C_VENDOR_ID_CIRQUE, 1821 + I2C_PRODUCT_ID_CIRQUE_121F) }, 1822 + 1817 1823 /* CJTouch panels */ 1818 1824 { .driver_data = MT_CLS_NSMU, 1819 1825 MT_USB_DEVICE(USB_VENDOR_ID_CJTOUCH,
+3 -1
drivers/hid/hid-quirks.c
··· 107 107 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C05A), HID_QUIRK_ALWAYS_POLL }, 108 108 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C06A), HID_QUIRK_ALWAYS_POLL }, 109 109 { HID_USB_DEVICE(USB_VENDOR_ID_MCS, USB_DEVICE_ID_MCS_GAMEPADBLOCK), HID_QUIRK_MULTI_INPUT }, 110 - { HID_USB_DEVICE(USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS), HID_QUIRK_NOGET }, 110 + { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PIXART_MOUSE), HID_QUIRK_ALWAYS_POLL }, 111 111 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_POWER_COVER), HID_QUIRK_NO_INIT_REPORTS }, 112 112 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE_PRO_2), HID_QUIRK_NO_INIT_REPORTS }, 113 113 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TOUCH_COVER_2), HID_QUIRK_NO_INIT_REPORTS }, ··· 130 130 { HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN), HID_QUIRK_NO_INIT_REPORTS }, 131 131 { HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL }, 132 132 { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_MOUSE_4D22), HID_QUIRK_ALWAYS_POLL }, 133 + { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F), HID_QUIRK_ALWAYS_POLL }, 134 + { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22), HID_QUIRK_ALWAYS_POLL }, 133 135 { HID_USB_DEVICE(USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS), HID_QUIRK_NOGET }, 134 136 { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001), HID_QUIRK_NOGET }, 135 137 { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3003), HID_QUIRK_NOGET },
+1 -1
drivers/hid/hid-sensor-custom.c
··· 358 358 sensor_inst->hsdev, 359 359 sensor_inst->hsdev->usage, 360 360 usage, report_id, 361 - SENSOR_HUB_SYNC); 361 + SENSOR_HUB_SYNC, false); 362 362 } else if (!strncmp(name, "units", strlen("units"))) 363 363 value = sensor_inst->fields[field_index].attribute.units; 364 364 else if (!strncmp(name, "unit-expo", strlen("unit-expo")))
+10 -3
drivers/hid/hid-sensor-hub.c
··· 299 299 int sensor_hub_input_attr_get_raw_value(struct hid_sensor_hub_device *hsdev, 300 300 u32 usage_id, 301 301 u32 attr_usage_id, u32 report_id, 302 - enum sensor_hub_read_flags flag) 302 + enum sensor_hub_read_flags flag, 303 + bool is_signed) 303 304 { 304 305 struct sensor_hub_data *data = hid_get_drvdata(hsdev->hdev); 305 306 unsigned long flags; ··· 332 331 &hsdev->pending.ready, HZ*5); 333 332 switch (hsdev->pending.raw_size) { 334 333 case 1: 335 - ret_val = *(u8 *)hsdev->pending.raw_data; 334 + if (is_signed) 335 + ret_val = *(s8 *)hsdev->pending.raw_data; 336 + else 337 + ret_val = *(u8 *)hsdev->pending.raw_data; 336 338 break; 337 339 case 2: 338 - ret_val = *(u16 *)hsdev->pending.raw_data; 340 + if (is_signed) 341 + ret_val = *(s16 *)hsdev->pending.raw_data; 342 + else 343 + ret_val = *(u16 *)hsdev->pending.raw_data; 339 344 break; 340 345 case 4: 341 346 ret_val = *(u32 *)hsdev->pending.raw_data;
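The `is_signed` casts added to `sensor_hub_input_attr_get_raw_value()` exist because a raw byte of `0xFF` means 255 when read through a `u8` pointer but -1 when read through an `s8` pointer; without the signed variant, negative sensor readings are mangled. A minimal sketch of the size/signedness dispatch (a standalone model, not the kernel function itself):

```c
#include <assert.h>
#include <stdint.h>

/* Decode a raw little-endian sensor value of 1, 2, or 4 bytes,
 * sign-extending only when the report descriptor says it is signed. */
static int decode_raw(const void *raw, int size, int is_signed)
{
	switch (size) {
	case 1:
		return is_signed ? *(const int8_t *)raw
				 : *(const uint8_t *)raw;
	case 2:
		return is_signed ? *(const int16_t *)raw
				 : *(const uint16_t *)raw;
	default:
		return *(const int32_t *)raw;
	}
}
```

The 4-byte case needs no branch since the return type is already a 32-bit `int`.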
+90 -64
drivers/hid/hid-steam.c
··· 23 23 * In order to avoid breaking them this driver creates a layered hidraw device, 24 24 * so it can detect when the client is running and then: 25 25 * - it will not send any command to the controller. 26 - * - this input device will be disabled, to avoid double input of the same 26 + * - this input device will be removed, to avoid double input of the same 27 27 * user action. 28 + * When the client is closed, this input device will be created again. 28 29 * 29 30 * For additional functions, such as changing the right-pad margin or switching 30 31 * the led, you can use the user-space tool at: ··· 114 113 spinlock_t lock; 115 114 struct hid_device *hdev, *client_hdev; 116 115 struct mutex mutex; 117 - bool client_opened, input_opened; 116 + bool client_opened; 118 117 struct input_dev __rcu *input; 119 118 unsigned long quirks; 120 119 struct work_struct work_connect; ··· 280 279 } 281 280 } 282 281 283 - static void steam_update_lizard_mode(struct steam_device *steam) 284 - { 285 - mutex_lock(&steam->mutex); 286 - if (!steam->client_opened) { 287 - if (steam->input_opened) 288 - steam_set_lizard_mode(steam, false); 289 - else 290 - steam_set_lizard_mode(steam, lizard_mode); 291 - } 292 - mutex_unlock(&steam->mutex); 293 - } 294 - 295 282 static int steam_input_open(struct input_dev *dev) 296 283 { 297 284 struct steam_device *steam = input_get_drvdata(dev); ··· 290 301 return ret; 291 302 292 303 mutex_lock(&steam->mutex); 293 - steam->input_opened = true; 294 304 if (!steam->client_opened && lizard_mode) 295 305 steam_set_lizard_mode(steam, false); 296 306 mutex_unlock(&steam->mutex); ··· 301 313 struct steam_device *steam = input_get_drvdata(dev); 302 314 303 315 mutex_lock(&steam->mutex); 304 - steam->input_opened = false; 305 316 if (!steam->client_opened && lizard_mode) 306 317 steam_set_lizard_mode(steam, true); 307 318 mutex_unlock(&steam->mutex); ··· 387 400 return 0; 388 401 } 389 402 390 - static int steam_register(struct steam_device *steam) 403 
+ static int steam_input_register(struct steam_device *steam) 391 404 { 392 405 struct hid_device *hdev = steam->hdev; 393 406 struct input_dev *input; ··· 400 413 dbg_hid("%s: already connected\n", __func__); 401 414 return 0; 402 415 } 403 - 404 - /* 405 - * Unlikely, but getting the serial could fail, and it is not so 406 - * important, so make up a serial number and go on. 407 - */ 408 - if (steam_get_serial(steam) < 0) 409 - strlcpy(steam->serial_no, "XXXXXXXXXX", 410 - sizeof(steam->serial_no)); 411 - 412 - hid_info(hdev, "Steam Controller '%s' connected", 413 - steam->serial_no); 414 416 415 417 input = input_allocate_device(); 416 418 if (!input) ··· 468 492 goto input_register_fail; 469 493 470 494 rcu_assign_pointer(steam->input, input); 471 - 472 - /* ignore battery errors, we can live without it */ 473 - if (steam->quirks & STEAM_QUIRK_WIRELESS) 474 - steam_battery_register(steam); 475 - 476 495 return 0; 477 496 478 497 input_register_fail: ··· 475 504 return ret; 476 505 } 477 506 478 - static void steam_unregister(struct steam_device *steam) 507 + static void steam_input_unregister(struct steam_device *steam) 479 508 { 480 509 struct input_dev *input; 510 + rcu_read_lock(); 511 + input = rcu_dereference(steam->input); 512 + rcu_read_unlock(); 513 + if (!input) 514 + return; 515 + RCU_INIT_POINTER(steam->input, NULL); 516 + synchronize_rcu(); 517 + input_unregister_device(input); 518 + } 519 + 520 + static void steam_battery_unregister(struct steam_device *steam) 521 + { 481 522 struct power_supply *battery; 482 523 483 524 rcu_read_lock(); 484 - input = rcu_dereference(steam->input); 485 525 battery = rcu_dereference(steam->battery); 486 526 rcu_read_unlock(); 487 527 488 - if (battery) { 489 - RCU_INIT_POINTER(steam->battery, NULL); 490 - synchronize_rcu(); 491 - power_supply_unregister(battery); 528 + if (!battery) 529 + return; 530 + RCU_INIT_POINTER(steam->battery, NULL); 531 + synchronize_rcu(); 532 + power_supply_unregister(battery); 533 + } 
534 + 535 + static int steam_register(struct steam_device *steam) 536 + { 537 + int ret; 538 + 539 + /* 540 + * This function can be called several times in a row with the 541 + * wireless adaptor, without steam_unregister() between them, because 542 + * another client send a get_connection_status command, for example. 543 + * The battery and serial number are set just once per device. 544 + */ 545 + if (!steam->serial_no[0]) { 546 + /* 547 + * Unlikely, but getting the serial could fail, and it is not so 548 + * important, so make up a serial number and go on. 549 + */ 550 + if (steam_get_serial(steam) < 0) 551 + strlcpy(steam->serial_no, "XXXXXXXXXX", 552 + sizeof(steam->serial_no)); 553 + 554 + hid_info(steam->hdev, "Steam Controller '%s' connected", 555 + steam->serial_no); 556 + 557 + /* ignore battery errors, we can live without it */ 558 + if (steam->quirks & STEAM_QUIRK_WIRELESS) 559 + steam_battery_register(steam); 560 + 561 + mutex_lock(&steam_devices_lock); 562 + list_add(&steam->list, &steam_devices); 563 + mutex_unlock(&steam_devices_lock); 492 564 } 493 - if (input) { 494 - RCU_INIT_POINTER(steam->input, NULL); 495 - synchronize_rcu(); 565 + 566 + mutex_lock(&steam->mutex); 567 + if (!steam->client_opened) { 568 + steam_set_lizard_mode(steam, lizard_mode); 569 + ret = steam_input_register(steam); 570 + } else { 571 + ret = 0; 572 + } 573 + mutex_unlock(&steam->mutex); 574 + 575 + return ret; 576 + } 577 + 578 + static void steam_unregister(struct steam_device *steam) 579 + { 580 + steam_battery_unregister(steam); 581 + steam_input_unregister(steam); 582 + if (steam->serial_no[0]) { 496 583 hid_info(steam->hdev, "Steam Controller '%s' disconnected", 497 584 steam->serial_no); 498 - input_unregister_device(input); 585 + mutex_lock(&steam_devices_lock); 586 + list_del(&steam->list); 587 + mutex_unlock(&steam_devices_lock); 588 + steam->serial_no[0] = 0; 499 589 } 500 590 } 501 591 ··· 632 600 mutex_lock(&steam->mutex); 633 601 steam->client_opened = 
true; 634 602 mutex_unlock(&steam->mutex); 603 + 604 + steam_input_unregister(steam); 605 + 635 606 return ret; 636 607 } 637 608 ··· 644 609 645 610 mutex_lock(&steam->mutex); 646 611 steam->client_opened = false; 647 - if (steam->input_opened) 648 - steam_set_lizard_mode(steam, false); 649 - else 650 - steam_set_lizard_mode(steam, lizard_mode); 651 612 mutex_unlock(&steam->mutex); 652 613 653 614 hid_hw_close(steam->hdev); 615 + if (steam->connected) { 616 + steam_set_lizard_mode(steam, lizard_mode); 617 + steam_input_register(steam); 618 + } 654 619 } 655 620 656 621 static int steam_client_ll_raw_request(struct hid_device *hdev, ··· 779 744 } 780 745 } 781 746 782 - mutex_lock(&steam_devices_lock); 783 - steam_update_lizard_mode(steam); 784 - list_add(&steam->list, &steam_devices); 785 - mutex_unlock(&steam_devices_lock); 786 - 787 747 return 0; 788 748 789 749 hid_hw_open_fail: ··· 804 774 return; 805 775 } 806 776 807 - mutex_lock(&steam_devices_lock); 808 - list_del(&steam->list); 809 - mutex_unlock(&steam_devices_lock); 810 - 811 777 hid_destroy_device(steam->client_hdev); 812 778 steam->client_opened = false; 813 779 cancel_work_sync(&steam->work_connect); ··· 818 792 static void steam_do_connect_event(struct steam_device *steam, bool connected) 819 793 { 820 794 unsigned long flags; 795 + bool changed; 821 796 822 797 spin_lock_irqsave(&steam->lock, flags); 798 + changed = steam->connected != connected; 823 799 steam->connected = connected; 824 800 spin_unlock_irqrestore(&steam->lock, flags); 825 801 826 - if (schedule_work(&steam->work_connect) == 0) 802 + if (changed && schedule_work(&steam->work_connect) == 0) 827 803 dbg_hid("%s: connected=%d event already queued\n", 828 804 __func__, connected); 829 805 } ··· 1047 1019 return 0; 1048 1020 rcu_read_lock(); 1049 1021 input = rcu_dereference(steam->input); 1050 - if (likely(input)) { 1022 + if (likely(input)) 1051 1023 steam_do_input_event(steam, input, data); 1052 - } else { 1053 - dbg_hid("%s: input 
data without connect event\n", 1054 - __func__); 1055 - steam_do_connect_event(steam, true); 1056 - } 1057 1024 rcu_read_unlock(); 1058 1025 break; 1059 1026 case STEAM_EV_CONNECT: ··· 1097 1074 1098 1075 mutex_lock(&steam_devices_lock); 1099 1076 list_for_each_entry(steam, &steam_devices, list) { 1100 - steam_update_lizard_mode(steam); 1077 + mutex_lock(&steam->mutex); 1078 + if (!steam->client_opened) 1079 + steam_set_lizard_mode(steam, lizard_mode); 1080 + mutex_unlock(&steam->mutex); 1101 1081 } 1102 1082 mutex_unlock(&steam_devices_lock); 1103 1083 return 0;
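One of the quieter fixes in the hid-steam hunk is in `steam_do_connect_event()`: the work item is now queued only when the connection state actually *changed*, so repeated status packets from the wireless adaptor no longer re-trigger (de)registration. A sketch of that edge-triggered filtering, with the spinlock and workqueue replaced by plain fields for brevity:

```c
#include <assert.h>
#include <stdbool.h>

struct steam_state {
	bool connected;
	int work_scheduled;  /* counts calls that would schedule_work() */
};

/* Only act on transitions, not on every repeated status report. */
static void do_connect_event(struct steam_state *s, bool connected)
{
	bool changed = s->connected != connected;

	s->connected = connected;
	if (changed)
		s->work_scheduled++;
}
```

Duplicate "connected" reports become no-ops; only the connect and the later disconnect schedule work.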
+21
drivers/hid/i2c-hid/i2c-hid-core.c
··· 49 49 #define I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV BIT(0) 50 50 #define I2C_HID_QUIRK_NO_IRQ_AFTER_RESET BIT(1) 51 51 #define I2C_HID_QUIRK_NO_RUNTIME_PM BIT(2) 52 + #define I2C_HID_QUIRK_DELAY_AFTER_SLEEP BIT(3) 52 53 53 54 /* flags */ 54 55 #define I2C_HID_STARTED 0 ··· 159 158 160 159 bool irq_wake_enabled; 161 160 struct mutex reset_lock; 161 + 162 + unsigned long sleep_delay; 162 163 }; 163 164 164 165 static const struct i2c_hid_quirks { ··· 174 171 I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV }, 175 172 { I2C_VENDOR_ID_HANTICK, I2C_PRODUCT_ID_HANTICK_5288, 176 173 I2C_HID_QUIRK_NO_IRQ_AFTER_RESET | 174 + I2C_HID_QUIRK_NO_RUNTIME_PM }, 175 + { I2C_VENDOR_ID_RAYDIUM, I2C_PRODUCT_ID_RAYDIUM_4B33, 176 + I2C_HID_QUIRK_DELAY_AFTER_SLEEP }, 177 + { USB_VENDOR_ID_LG, I2C_DEVICE_ID_LG_8001, 177 178 I2C_HID_QUIRK_NO_RUNTIME_PM }, 178 179 { 0, 0 } 179 180 }; ··· 394 387 { 395 388 struct i2c_hid *ihid = i2c_get_clientdata(client); 396 389 int ret; 390 + unsigned long now, delay; 397 391 398 392 i2c_hid_dbg(ihid, "%s\n", __func__); 399 393 ··· 412 404 goto set_pwr_exit; 413 405 } 414 406 407 + if (ihid->quirks & I2C_HID_QUIRK_DELAY_AFTER_SLEEP && 408 + power_state == I2C_HID_PWR_ON) { 409 + now = jiffies; 410 + if (time_after(ihid->sleep_delay, now)) { 411 + delay = jiffies_to_usecs(ihid->sleep_delay - now); 412 + usleep_range(delay, delay + 1); 413 + } 414 + } 415 + 415 416 ret = __i2c_hid_command(client, &hid_set_power_cmd, power_state, 416 417 0, NULL, 0, NULL, 0); 418 + 419 + if (ihid->quirks & I2C_HID_QUIRK_DELAY_AFTER_SLEEP && 420 + power_state == I2C_HID_PWR_SLEEP) 421 + ihid->sleep_delay = jiffies + msecs_to_jiffies(20); 417 422 418 423 if (ret) 419 424 dev_err(&client->dev, "failed to change power setting.\n");
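The `I2C_HID_QUIRK_DELAY_AFTER_SLEEP` logic above records a "not before" deadline (`jiffies + msecs_to_jiffies(20)`) when the device enters sleep, then on the next power-on waits out whatever is left of that window. A sketch of the deadline arithmetic with plain integers standing in for jiffies (names are illustrative):

```c
#include <assert.h>

struct quirk_state {
	long sleep_deadline;  /* earliest tick at which power-on is safe */
};

static void note_sleep(struct quirk_state *q, long now, long min_gap)
{
	q->sleep_deadline = now + min_gap;  /* kernel: msecs_to_jiffies(20) */
}

/* Returns how many ticks still need to pass; 0 means no wait needed.
 * The kernel uses time_after() here to stay safe across jiffies
 * wraparound; a plain compare suffices for this model. */
static long wakeup_delay(const struct quirk_state *q, long now)
{
	return (q->sleep_deadline > now) ? q->sleep_deadline - now : 0;
}
```

In the driver the remaining ticks are converted with `jiffies_to_usecs()` and slept via `usleep_range()` before the power-on command is issued.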
+8
drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
··· 331 331 .driver_data = (void *)&sipodev_desc 332 332 }, 333 333 { 334 + .ident = "Direkt-Tek DTLAPY133-1", 335 + .matches = { 336 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Direkt-Tek"), 337 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "DTLAPY133-1"), 338 + }, 339 + .driver_data = (void *)&sipodev_desc 340 + }, 341 + { 334 342 .ident = "Mediacom Flexbook Edge 11", 335 343 .matches = { 336 344 DMI_EXACT_MATCH(DMI_SYS_VENDOR, "MEDIACOM"),
+19 -6
drivers/hid/uhid.c
··· 12 12 13 13 #include <linux/atomic.h> 14 14 #include <linux/compat.h> 15 + #include <linux/cred.h> 15 16 #include <linux/device.h> 16 17 #include <linux/fs.h> 17 18 #include <linux/hid.h> ··· 497 496 goto err_free; 498 497 } 499 498 500 - len = min(sizeof(hid->name), sizeof(ev->u.create2.name)); 501 - strlcpy(hid->name, ev->u.create2.name, len); 502 - len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)); 503 - strlcpy(hid->phys, ev->u.create2.phys, len); 504 - len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)); 505 - strlcpy(hid->uniq, ev->u.create2.uniq, len); 499 + /* @hid is zero-initialized, strncpy() is correct, strlcpy() not */ 500 + len = min(sizeof(hid->name), sizeof(ev->u.create2.name)) - 1; 501 + strncpy(hid->name, ev->u.create2.name, len); 502 + len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)) - 1; 503 + strncpy(hid->phys, ev->u.create2.phys, len); 504 + len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)) - 1; 505 + strncpy(hid->uniq, ev->u.create2.uniq, len); 506 506 507 507 hid->ll_driver = &uhid_hid_driver; 508 508 hid->bus = ev->u.create2.bus; ··· 724 722 725 723 switch (uhid->input_buf.type) { 726 724 case UHID_CREATE: 725 + /* 726 + * 'struct uhid_create_req' contains a __user pointer which is 727 + * copied from, so it's unsafe to allow this with elevated 728 + * privileges (e.g. from a setuid binary) or via kernel_write(). 729 + */ 730 + if (file->f_cred != current_cred() || uaccess_kernel()) { 731 + pr_err_once("UHID_CREATE from different security context by process %d (%s), this is not allowed.\n", 732 + task_tgid_vnr(current), current->comm); 733 + ret = -EACCES; 734 + goto unlock; 735 + } 727 736 ret = uhid_dev_create(uhid, &uhid->input_buf); 728 737 break; 729 738 case UHID_CREATE2:
+14 -4
drivers/hid/usbhid/hiddev.c
··· 512 512 if (cmd == HIDIOCGCOLLECTIONINDEX) { 513 513 if (uref->usage_index >= field->maxusage) 514 514 goto inval; 515 + uref->usage_index = 516 + array_index_nospec(uref->usage_index, 517 + field->maxusage); 515 518 } else if (uref->usage_index >= field->report_count) 516 519 goto inval; 517 520 } 518 521 519 - if ((cmd == HIDIOCGUSAGES || cmd == HIDIOCSUSAGES) && 520 - (uref_multi->num_values > HID_MAX_MULTI_USAGES || 521 - uref->usage_index + uref_multi->num_values > field->report_count)) 522 - goto inval; 522 + if (cmd == HIDIOCGUSAGES || cmd == HIDIOCSUSAGES) { 523 + if (uref_multi->num_values > HID_MAX_MULTI_USAGES || 524 + uref->usage_index + uref_multi->num_values > 525 + field->report_count) 526 + goto inval; 527 + 528 + uref->usage_index = 529 + array_index_nospec(uref->usage_index, 530 + field->report_count - 531 + uref_multi->num_values); 532 + } 523 533 524 534 switch (cmd) { 525 535 case HIDIOCGUSAGE:
+8
drivers/hv/channel.c
··· 516 516 } 517 517 wait_for_completion(&msginfo->waitevent); 518 518 519 + if (msginfo->response.gpadl_created.creation_status != 0) { 520 + pr_err("Failed to establish GPADL: err = 0x%x\n", 521 + msginfo->response.gpadl_created.creation_status); 522 + 523 + ret = -EDQUOT; 524 + goto cleanup; 525 + } 526 + 519 527 if (channel->rescind) { 520 528 ret = -ENODEV; 521 529 goto cleanup;
+22 -4
drivers/hv/hv_kvp.c
··· 353 353 354 354 out->body.kvp_ip_val.dhcp_enabled = in->kvp_ip_val.dhcp_enabled; 355 355 356 + /* fallthrough */ 357 + 358 + case KVP_OP_GET_IP_INFO: 356 359 utf16s_to_utf8s((wchar_t *)in->kvp_ip_val.adapter_id, 357 360 MAX_ADAPTER_ID_SIZE, 358 361 UTF16_LITTLE_ENDIAN, ··· 408 405 process_ib_ipinfo(in_msg, message, KVP_OP_SET_IP_INFO); 409 406 break; 410 407 case KVP_OP_GET_IP_INFO: 411 - /* We only need to pass on message->kvp_hdr.operation. */ 408 + /* 409 + * We only need to pass on the info of operation, adapter_id 410 + * and addr_family to the userland kvp daemon. 411 + */ 412 + process_ib_ipinfo(in_msg, message, KVP_OP_GET_IP_INFO); 412 413 break; 413 414 case KVP_OP_SET: 414 415 switch (in_msg->body.kvp_set.data.value_type) { ··· 453 446 454 447 } 455 448 456 - break; 457 - 458 - case KVP_OP_GET: 449 + /* 450 + * The key is always a string - utf16 encoding. 451 + */ 459 452 message->body.kvp_set.data.key_size = 460 453 utf16s_to_utf8s( 461 454 (wchar_t *)in_msg->body.kvp_set.data.key, 462 455 in_msg->body.kvp_set.data.key_size, 463 456 UTF16_LITTLE_ENDIAN, 464 457 message->body.kvp_set.data.key, 458 + HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1; 459 + 460 + break; 461 + 462 + case KVP_OP_GET: 463 + message->body.kvp_get.data.key_size = 464 + utf16s_to_utf8s( 465 + (wchar_t *)in_msg->body.kvp_get.data.key, 466 + in_msg->body.kvp_get.data.key_size, 467 + UTF16_LITTLE_ENDIAN, 468 + message->body.kvp_get.data.key, 465 469 HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1; 466 470 break; 467 471
+4 -4
drivers/hwmon/hwmon.c
··· 649 649 if (info[i]->config[j] & HWMON_T_INPUT) { 650 650 err = hwmon_thermal_add_sensor(dev, 651 651 hwdev, j); 652 - if (err) 653 - goto free_device; 652 + if (err) { 653 + device_unregister(hdev); 654 + goto ida_remove; 655 + } 654 656 } 655 657 } 656 658 } ··· 660 658 661 659 return hdev; 662 660 663 - free_device: 664 - device_unregister(hdev); 665 661 free_hwmon: 666 662 kfree(hwdev); 667 663 ida_remove:
+3 -4
drivers/hwmon/ibmpowernv.c
··· 181 181 return sprintf(buf, "%s\n", sdata->label); 182 182 } 183 183 184 - static int __init get_logical_cpu(int hwcpu) 184 + static int get_logical_cpu(int hwcpu) 185 185 { 186 186 int cpu; 187 187 ··· 192 192 return -ENOENT; 193 193 } 194 194 195 - static void __init make_sensor_label(struct device_node *np, 196 - struct sensor_data *sdata, 197 - const char *label) 195 + static void make_sensor_label(struct device_node *np, 196 + struct sensor_data *sdata, const char *label) 198 197 { 199 198 u32 id; 200 199 size_t n;
+3 -3
drivers/hwmon/ina2xx.c
··· 274 274 break; 275 275 case INA2XX_CURRENT: 276 276 /* signed register, result in mA */ 277 - val = regval * data->current_lsb_uA; 277 + val = (s16)regval * data->current_lsb_uA; 278 278 val = DIV_ROUND_CLOSEST(val, 1000); 279 279 break; 280 280 case INA2XX_CALIBRATION: ··· 491 491 } 492 492 493 493 data->groups[group++] = &ina2xx_group; 494 - if (id->driver_data == ina226) 494 + if (chip == ina226) 495 495 data->groups[group++] = &ina226_group; 496 496 497 497 hwmon_dev = devm_hwmon_device_register_with_groups(dev, client->name, ··· 500 500 return PTR_ERR(hwmon_dev); 501 501 502 502 dev_info(dev, "power monitor %s (Rshunt = %li uOhm)\n", 503 - id->name, data->rshunt); 503 + client->name, data->rshunt); 504 504 505 505 return 0; 506 506 }
+1 -1
drivers/hwmon/mlxreg-fan.c
··· 51 51 */ 52 52 #define MLXREG_FAN_GET_RPM(rval, d, s) (DIV_ROUND_CLOSEST(15000000 * 100, \ 53 53 ((rval) + (s)) * (d))) 54 - #define MLXREG_FAN_GET_FAULT(val, mask) (!!((val) ^ (mask))) 54 + #define MLXREG_FAN_GET_FAULT(val, mask) (!((val) ^ (mask))) 55 55 #define MLXREG_FAN_PWM_DUTY2STATE(duty) (DIV_ROUND_CLOSEST((duty) * \ 56 56 MLXREG_FAN_MAX_STATE, \ 57 57 MLXREG_FAN_MAX_DUTY))
-6
drivers/hwmon/raspberrypi-hwmon.c
··· 115 115 { 116 116 struct device *dev = &pdev->dev; 117 117 struct rpi_hwmon_data *data; 118 - int ret; 119 118 120 119 data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 121 120 if (!data) ··· 122 123 123 124 /* Parent driver assure that firmware is correct */ 124 125 data->fw = dev_get_drvdata(dev->parent); 125 - 126 - /* Init throttled */ 127 - ret = rpi_firmware_property(data->fw, RPI_FIRMWARE_GET_THROTTLED, 128 - &data->last_throttled, 129 - sizeof(data->last_throttled)); 130 126 131 127 data->hwmon_dev = devm_hwmon_device_register_with_info(dev, "rpi_volt", 132 128 data,
+1 -1
drivers/hwmon/w83795.c
··· 1691 1691 * somewhere else in the code 1692 1692 */ 1693 1693 #define SENSOR_ATTR_TEMP(index) { \ 1694 - SENSOR_ATTR_2(temp##index##_type, S_IRUGO | (index < 4 ? S_IWUSR : 0), \ 1694 + SENSOR_ATTR_2(temp##index##_type, S_IRUGO | (index < 5 ? S_IWUSR : 0), \ 1695 1695 show_temp_mode, store_temp_mode, NOT_USED, index - 1), \ 1696 1696 SENSOR_ATTR_2(temp##index##_input, S_IRUGO, show_temp, \ 1697 1697 NULL, TEMP_READ, index - 1), \
+10 -1
drivers/i2c/busses/Kconfig
··· 224 224 This driver can also be built as a module. If so, the module 225 225 will be called i2c-nforce2-s4985. 226 226 227 + config I2C_NVIDIA_GPU 228 + tristate "NVIDIA GPU I2C controller" 229 + depends on PCI 230 + help 231 + If you say yes to this option, support will be included for the 232 + NVIDIA GPU I2C controller which is used to communicate with the GPU's 233 + Type-C controller. This driver can also be built as a module called 234 + i2c-nvidia-gpu. 235 + 227 236 config I2C_SIS5595 228 237 tristate "SiS 5595" 229 238 depends on PCI ··· 761 752 762 753 config I2C_OMAP 763 754 tristate "OMAP I2C adapter" 764 - depends on ARCH_OMAP 755 + depends on ARCH_OMAP || ARCH_K3 765 756 default y if MACH_OMAP_H3 || MACH_OMAP_OSK 766 757 help 767 758 If you say yes to this option, support will be included for the
+1
drivers/i2c/busses/Makefile
··· 19 19 obj-$(CONFIG_I2C_ISMT) += i2c-ismt.o 20 20 obj-$(CONFIG_I2C_NFORCE2) += i2c-nforce2.o 21 21 obj-$(CONFIG_I2C_NFORCE2_S4985) += i2c-nforce2-s4985.o 22 + obj-$(CONFIG_I2C_NVIDIA_GPU) += i2c-nvidia-gpu.o 22 23 obj-$(CONFIG_I2C_PIIX4) += i2c-piix4.o 23 24 obj-$(CONFIG_I2C_SIS5595) += i2c-sis5595.o 24 25 obj-$(CONFIG_I2C_SIS630) += i2c-sis630.o
+368
drivers/i2c/busses/i2c-nvidia-gpu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Nvidia GPU I2C controller Driver 4 + * 5 + * Copyright (C) 2018 NVIDIA Corporation. All rights reserved. 6 + * Author: Ajay Gupta <ajayg@nvidia.com> 7 + */ 8 + #include <linux/delay.h> 9 + #include <linux/i2c.h> 10 + #include <linux/interrupt.h> 11 + #include <linux/module.h> 12 + #include <linux/pci.h> 13 + #include <linux/platform_device.h> 14 + #include <linux/pm.h> 15 + #include <linux/pm_runtime.h> 16 + 17 + #include <asm/unaligned.h> 18 + 19 + /* I2C definitions */ 20 + #define I2C_MST_CNTL 0x00 21 + #define I2C_MST_CNTL_GEN_START BIT(0) 22 + #define I2C_MST_CNTL_GEN_STOP BIT(1) 23 + #define I2C_MST_CNTL_CMD_READ (1 << 2) 24 + #define I2C_MST_CNTL_CMD_WRITE (2 << 2) 25 + #define I2C_MST_CNTL_BURST_SIZE_SHIFT 6 26 + #define I2C_MST_CNTL_GEN_NACK BIT(28) 27 + #define I2C_MST_CNTL_STATUS GENMASK(30, 29) 28 + #define I2C_MST_CNTL_STATUS_OKAY (0 << 29) 29 + #define I2C_MST_CNTL_STATUS_NO_ACK (1 << 29) 30 + #define I2C_MST_CNTL_STATUS_TIMEOUT (2 << 29) 31 + #define I2C_MST_CNTL_STATUS_BUS_BUSY (3 << 29) 32 + #define I2C_MST_CNTL_CYCLE_TRIGGER BIT(31) 33 + 34 + #define I2C_MST_ADDR 0x04 35 + 36 + #define I2C_MST_I2C0_TIMING 0x08 37 + #define I2C_MST_I2C0_TIMING_SCL_PERIOD_100KHZ 0x10e 38 + #define I2C_MST_I2C0_TIMING_TIMEOUT_CLK_CNT 16 39 + #define I2C_MST_I2C0_TIMING_TIMEOUT_CLK_CNT_MAX 255 40 + #define I2C_MST_I2C0_TIMING_TIMEOUT_CHECK BIT(24) 41 + 42 + #define I2C_MST_DATA 0x0c 43 + 44 + #define I2C_MST_HYBRID_PADCTL 0x20 45 + #define I2C_MST_HYBRID_PADCTL_MODE_I2C BIT(0) 46 + #define I2C_MST_HYBRID_PADCTL_I2C_SCL_INPUT_RCV BIT(14) 47 + #define I2C_MST_HYBRID_PADCTL_I2C_SDA_INPUT_RCV BIT(15) 48 + 49 + struct gpu_i2c_dev { 50 + struct device *dev; 51 + void __iomem *regs; 52 + struct i2c_adapter adapter; 53 + struct i2c_board_info *gpu_ccgx_ucsi; 54 + }; 55 + 56 + static void gpu_enable_i2c_bus(struct gpu_i2c_dev *i2cd) 57 + { 58 + u32 val; 59 + 60 + /* enable I2C */ 61 + val = readl(i2cd->regs + I2C_MST_HYBRID_PADCTL);
62 + val |= I2C_MST_HYBRID_PADCTL_MODE_I2C | 63 + I2C_MST_HYBRID_PADCTL_I2C_SCL_INPUT_RCV | 64 + I2C_MST_HYBRID_PADCTL_I2C_SDA_INPUT_RCV; 65 + writel(val, i2cd->regs + I2C_MST_HYBRID_PADCTL); 66 + 67 + /* enable 100KHZ mode */ 68 + val = I2C_MST_I2C0_TIMING_SCL_PERIOD_100KHZ; 69 + val |= (I2C_MST_I2C0_TIMING_TIMEOUT_CLK_CNT_MAX 70 + << I2C_MST_I2C0_TIMING_TIMEOUT_CLK_CNT); 71 + val |= I2C_MST_I2C0_TIMING_TIMEOUT_CHECK; 72 + writel(val, i2cd->regs + I2C_MST_I2C0_TIMING); 73 + } 74 + 75 + static int gpu_i2c_check_status(struct gpu_i2c_dev *i2cd) 76 + { 77 + unsigned long target = jiffies + msecs_to_jiffies(1000); 78 + u32 val; 79 + 80 + do { 81 + val = readl(i2cd->regs + I2C_MST_CNTL); 82 + if (!(val & I2C_MST_CNTL_CYCLE_TRIGGER)) 83 + break; 84 + if ((val & I2C_MST_CNTL_STATUS) != 85 + I2C_MST_CNTL_STATUS_BUS_BUSY) 86 + break; 87 + usleep_range(500, 600); 88 + } while (time_is_after_jiffies(target)); 89 + 90 + if (time_is_before_jiffies(target)) { 91 + dev_err(i2cd->dev, "i2c timeout error %x\n", val); 92 + return -ETIME; 93 + } 94 + 95 + val = readl(i2cd->regs + I2C_MST_CNTL); 96 + switch (val & I2C_MST_CNTL_STATUS) { 97 + case I2C_MST_CNTL_STATUS_OKAY: 98 + return 0; 99 + case I2C_MST_CNTL_STATUS_NO_ACK: 100 + return -EIO; 101 + case I2C_MST_CNTL_STATUS_TIMEOUT: 102 + return -ETIME; 103 + default: 104 + return 0; 105 + } 106 + } 107 + 108 + static int gpu_i2c_read(struct gpu_i2c_dev *i2cd, u8 *data, u16 len) 109 + { 110 + int status; 111 + u32 val; 112 + 113 + val = I2C_MST_CNTL_GEN_START | I2C_MST_CNTL_CMD_READ | 114 + (len << I2C_MST_CNTL_BURST_SIZE_SHIFT) | 115 + I2C_MST_CNTL_CYCLE_TRIGGER | I2C_MST_CNTL_GEN_NACK; 116 + writel(val, i2cd->regs + I2C_MST_CNTL); 117 + 118 + status = gpu_i2c_check_status(i2cd); 119 + if (status < 0) 120 + return status; 121 + 122 + val = readl(i2cd->regs + I2C_MST_DATA); 123 + switch (len) { 124 + case 1: 125 + data[0] = val; 126 + break; 127 + case 2: 128 + put_unaligned_be16(val, data); 129 + break;
130 + case 3: 131 + put_unaligned_be16(val >> 8, data); 132 + data[2] = val; 133 + break; 134 + case 4: 135 + put_unaligned_be32(val, data); 136 + break; 137 + default: 138 + break; 139 + } 140 + return status; 141 + } 142 + 143 + static int gpu_i2c_start(struct gpu_i2c_dev *i2cd) 144 + { 145 + writel(I2C_MST_CNTL_GEN_START, i2cd->regs + I2C_MST_CNTL); 146 + return gpu_i2c_check_status(i2cd); 147 + } 148 + 149 + static int gpu_i2c_stop(struct gpu_i2c_dev *i2cd) 150 + { 151 + writel(I2C_MST_CNTL_GEN_STOP, i2cd->regs + I2C_MST_CNTL); 152 + return gpu_i2c_check_status(i2cd); 153 + } 154 + 155 + static int gpu_i2c_write(struct gpu_i2c_dev *i2cd, u8 data) 156 + { 157 + u32 val; 158 + 159 + writel(data, i2cd->regs + I2C_MST_DATA); 160 + 161 + val = I2C_MST_CNTL_CMD_WRITE | (1 << I2C_MST_CNTL_BURST_SIZE_SHIFT); 162 + writel(val, i2cd->regs + I2C_MST_CNTL); 163 + 164 + return gpu_i2c_check_status(i2cd); 165 + } 166 + 167 + static int gpu_i2c_master_xfer(struct i2c_adapter *adap, 168 + struct i2c_msg *msgs, int num) 169 + { 170 + struct gpu_i2c_dev *i2cd = i2c_get_adapdata(adap); 171 + int status, status2; 172 + int i, j; 173 + 174 + /* 175 + * The controller supports maximum 4 byte read due to known 176 + * limitation of sending STOP after every read.
177 + */ 178 + for (i = 0; i < num; i++) { 179 + if (msgs[i].flags & I2C_M_RD) { 180 + /* program client address before starting read */ 181 + writel(msgs[i].addr, i2cd->regs + I2C_MST_ADDR); 182 + /* gpu_i2c_read has implicit start */ 183 + status = gpu_i2c_read(i2cd, msgs[i].buf, msgs[i].len); 184 + if (status < 0) 185 + goto stop; 186 + } else { 187 + u8 addr = i2c_8bit_addr_from_msg(msgs + i); 188 + 189 + status = gpu_i2c_start(i2cd); 190 + if (status < 0) { 191 + if (i == 0) 192 + return status; 193 + goto stop; 194 + } 195 + 196 + status = gpu_i2c_write(i2cd, addr); 197 + if (status < 0) 198 + goto stop; 199 + 200 + for (j = 0; j < msgs[i].len; j++) { 201 + status = gpu_i2c_write(i2cd, msgs[i].buf[j]); 202 + if (status < 0) 203 + goto stop; 204 + } 205 + } 206 + } 207 + status = gpu_i2c_stop(i2cd); 208 + if (status < 0) 209 + return status; 210 + 211 + return i; 212 + stop: 213 + status2 = gpu_i2c_stop(i2cd); 214 + if (status2 < 0) 215 + dev_err(i2cd->dev, "i2c stop failed %d\n", status2); 216 + return status; 217 + } 218 + 219 + static const struct i2c_adapter_quirks gpu_i2c_quirks = { 220 + .max_read_len = 4, 221 + .flags = I2C_AQ_COMB_WRITE_THEN_READ, 222 + }; 223 + 224 + static u32 gpu_i2c_functionality(struct i2c_adapter *adap) 225 + { 226 + return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL; 227 + } 228 + 229 + static const struct i2c_algorithm gpu_i2c_algorithm = { 230 + .master_xfer = gpu_i2c_master_xfer, 231 + .functionality = gpu_i2c_functionality, 232 + }; 233 + 234 + /* 235 + * This driver is for Nvidia GPU cards with USB Type-C interface. 236 + * We want to identify the cards using vendor ID and class code only 237 + * to avoid dependency of adding product id for any new card which 238 + * requires this driver. 239 + * Currently there is no class code defined for UCSI device over PCI 240 + * so using UNKNOWN class for now and it will be updated when UCSI 241 + * over PCI gets a class code. 242 + * There is no other NVIDIA cards with UNKNOWN class code. Even if the
243 + * driver gets loaded for an undesired card then eventually i2c_read() 244 + * (initiated from UCSI i2c_client) will timeout or UCSI commands will 245 + * timeout. 246 + */ 247 + #define PCI_CLASS_SERIAL_UNKNOWN 0x0c80 248 + static const struct pci_device_id gpu_i2c_ids[] = { 249 + { PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, 250 + PCI_CLASS_SERIAL_UNKNOWN << 8, 0xffffff00}, 251 + { } 252 + }; 253 + MODULE_DEVICE_TABLE(pci, gpu_i2c_ids); 254 + 255 + static int gpu_populate_client(struct gpu_i2c_dev *i2cd, int irq) 256 + { 257 + struct i2c_client *ccgx_client; 258 + 259 + i2cd->gpu_ccgx_ucsi = devm_kzalloc(i2cd->dev, 260 + sizeof(*i2cd->gpu_ccgx_ucsi), 261 + GFP_KERNEL); 262 + if (!i2cd->gpu_ccgx_ucsi) 263 + return -ENOMEM; 264 + 265 + strlcpy(i2cd->gpu_ccgx_ucsi->type, "ccgx-ucsi", 266 + sizeof(i2cd->gpu_ccgx_ucsi->type)); 267 + i2cd->gpu_ccgx_ucsi->addr = 0x8; 268 + i2cd->gpu_ccgx_ucsi->irq = irq; 269 + ccgx_client = i2c_new_device(&i2cd->adapter, i2cd->gpu_ccgx_ucsi); 270 + if (!ccgx_client) 271 + return -ENODEV; 272 + 273 + return 0; 274 + } 275 + 276 + static int gpu_i2c_probe(struct pci_dev *pdev, const struct pci_device_id *id) 277 + { 278 + struct gpu_i2c_dev *i2cd; 279 + int status; 280 + 281 + i2cd = devm_kzalloc(&pdev->dev, sizeof(*i2cd), GFP_KERNEL); 282 + if (!i2cd) 283 + return -ENOMEM; 284 + 285 + i2cd->dev = &pdev->dev; 286 + dev_set_drvdata(&pdev->dev, i2cd); 287 + 288 + status = pcim_enable_device(pdev); 289 + if (status < 0) { 290 + dev_err(&pdev->dev, "pcim_enable_device failed %d\n", status); 291 + return status; 292 + } 293 + 294 + pci_set_master(pdev); 295 + 296 + i2cd->regs = pcim_iomap(pdev, 0, 0); 297 + if (!i2cd->regs) { 298 + dev_err(&pdev->dev, "pcim_iomap failed\n"); 299 + return -ENOMEM; 300 + } 301 + 302 + status = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI); 303 + if (status < 0) { 304 + dev_err(&pdev->dev, "pci_alloc_irq_vectors err %d\n", status); 305 + return status; 306 + } 307 + 308 + gpu_enable_i2c_bus(i2cd);
309 + 310 + i2c_set_adapdata(&i2cd->adapter, i2cd); 311 + i2cd->adapter.owner = THIS_MODULE; 312 + strlcpy(i2cd->adapter.name, "NVIDIA GPU I2C adapter", 313 + sizeof(i2cd->adapter.name)); 314 + i2cd->adapter.algo = &gpu_i2c_algorithm; 315 + i2cd->adapter.quirks = &gpu_i2c_quirks; 316 + i2cd->adapter.dev.parent = &pdev->dev; 317 + status = i2c_add_adapter(&i2cd->adapter); 318 + if (status < 0) 319 + goto free_irq_vectors; 320 + 321 + status = gpu_populate_client(i2cd, pdev->irq); 322 + if (status < 0) { 323 + dev_err(&pdev->dev, "gpu_populate_client failed %d\n", status); 324 + goto del_adapter; 325 + } 326 + 327 + return 0; 328 + 329 + del_adapter: 330 + i2c_del_adapter(&i2cd->adapter); 331 + free_irq_vectors: 332 + pci_free_irq_vectors(pdev); 333 + return status; 334 + } 335 + 336 + static void gpu_i2c_remove(struct pci_dev *pdev) 337 + { 338 + struct gpu_i2c_dev *i2cd = dev_get_drvdata(&pdev->dev); 339 + 340 + i2c_del_adapter(&i2cd->adapter); 341 + pci_free_irq_vectors(pdev); 342 + } 343 + 344 + static int gpu_i2c_resume(struct device *dev) 345 + { 346 + struct gpu_i2c_dev *i2cd = dev_get_drvdata(dev); 347 + 348 + gpu_enable_i2c_bus(i2cd); 349 + return 0; 350 + } 351 + 352 + static UNIVERSAL_DEV_PM_OPS(gpu_i2c_driver_pm, NULL, gpu_i2c_resume, NULL); 353 + 354 + static struct pci_driver gpu_i2c_driver = { 355 + .name = "nvidia-gpu", 356 + .id_table = gpu_i2c_ids, 357 + .probe = gpu_i2c_probe, 358 + .remove = gpu_i2c_remove, 359 + .driver = { 360 + .pm = &gpu_i2c_driver_pm, 361 + }, 362 + }; 363 + 364 + module_pci_driver(gpu_i2c_driver); 365 + 366 + MODULE_AUTHOR("Ajay Gupta <ajayg@nvidia.com>"); 367 + MODULE_DESCRIPTION("Nvidia GPU I2C controller Driver"); 368 + MODULE_LICENSE("GPL v2");
+8 -7
drivers/i2c/busses/i2c-qcom-geni.c
··· 571 571 572 572 dev_dbg(&pdev->dev, "i2c fifo/se-dma mode. fifo depth:%d\n", tx_depth); 573 573 574 - ret = i2c_add_adapter(&gi2c->adap); 575 - if (ret) { 576 - dev_err(&pdev->dev, "Error adding i2c adapter %d\n", ret); 577 - return ret; 578 - } 579 - 580 574 gi2c->suspended = 1; 581 575 pm_runtime_set_suspended(gi2c->se.dev); 582 576 pm_runtime_set_autosuspend_delay(gi2c->se.dev, I2C_AUTO_SUSPEND_DELAY); 583 577 pm_runtime_use_autosuspend(gi2c->se.dev); 584 578 pm_runtime_enable(gi2c->se.dev); 579 + 580 + ret = i2c_add_adapter(&gi2c->adap); 581 + if (ret) { 582 + dev_err(&pdev->dev, "Error adding i2c adapter %d\n", ret); 583 + pm_runtime_disable(gi2c->se.dev); 584 + return ret; 585 + } 585 586 586 587 return 0; 587 588 } ··· 591 590 { 592 591 struct geni_i2c_dev *gi2c = platform_get_drvdata(pdev); 593 592 594 - pm_runtime_disable(gi2c->se.dev); 595 593 i2c_del_adapter(&gi2c->adap); 594 + pm_runtime_disable(gi2c->se.dev); 596 595 return 0; 597 596 } 598 597
+4 -1
drivers/iio/accel/hid-sensor-accel-3d.c
··· 149 149 int report_id = -1; 150 150 u32 address; 151 151 int ret_type; 152 + s32 min; 152 153 struct hid_sensor_hub_device *hsdev = 153 154 accel_state->common_attributes.hsdev; 154 155 ··· 159 158 case IIO_CHAN_INFO_RAW: 160 159 hid_sensor_power_state(&accel_state->common_attributes, true); 161 160 report_id = accel_state->accel[chan->scan_index].report_id; 161 + min = accel_state->accel[chan->scan_index].logical_minimum; 162 162 address = accel_3d_addresses[chan->scan_index]; 163 163 if (report_id >= 0) 164 164 *val = sensor_hub_input_attr_get_raw_value( 165 165 accel_state->common_attributes.hsdev, 166 166 hsdev->usage, address, report_id, 167 - SENSOR_HUB_SYNC); 167 + SENSOR_HUB_SYNC, 168 + min < 0); 168 169 else { 169 170 *val = 0; 170 171 hid_sensor_power_state(&accel_state->common_attributes,
+4 -1
drivers/iio/gyro/hid-sensor-gyro-3d.c
··· 111 111 int report_id = -1; 112 112 u32 address; 113 113 int ret_type; 114 + s32 min; 114 115 115 116 *val = 0; 116 117 *val2 = 0; ··· 119 118 case IIO_CHAN_INFO_RAW: 120 119 hid_sensor_power_state(&gyro_state->common_attributes, true); 121 120 report_id = gyro_state->gyro[chan->scan_index].report_id; 121 + min = gyro_state->gyro[chan->scan_index].logical_minimum; 122 122 address = gyro_3d_addresses[chan->scan_index]; 123 123 if (report_id >= 0) 124 124 *val = sensor_hub_input_attr_get_raw_value( 125 125 gyro_state->common_attributes.hsdev, 126 126 HID_USAGE_SENSOR_GYRO_3D, address, 127 127 report_id, 128 - SENSOR_HUB_SYNC); 128 + SENSOR_HUB_SYNC, 129 + min < 0); 129 130 else { 130 131 *val = 0; 131 132 hid_sensor_power_state(&gyro_state->common_attributes,
+2 -1
drivers/iio/humidity/hid-sensor-humidity.c
··· 75 75 HID_USAGE_SENSOR_HUMIDITY, 76 76 HID_USAGE_SENSOR_ATMOSPHERIC_HUMIDITY, 77 77 humid_st->humidity_attr.report_id, 78 - SENSOR_HUB_SYNC); 78 + SENSOR_HUB_SYNC, 79 + humid_st->humidity_attr.logical_minimum < 0); 79 80 hid_sensor_power_state(&humid_st->common_attributes, false); 80 81 81 82 return IIO_VAL_INT;
+5 -3
drivers/iio/light/hid-sensor-als.c
··· 93 93 int report_id = -1; 94 94 u32 address; 95 95 int ret_type; 96 + s32 min; 96 97 97 98 *val = 0; 98 99 *val2 = 0; ··· 103 102 case CHANNEL_SCAN_INDEX_INTENSITY: 104 103 case CHANNEL_SCAN_INDEX_ILLUM: 105 104 report_id = als_state->als_illum.report_id; 106 - address = 107 - HID_USAGE_SENSOR_LIGHT_ILLUM; 105 + min = als_state->als_illum.logical_minimum; 106 + address = HID_USAGE_SENSOR_LIGHT_ILLUM; 108 107 break; 109 108 default: 110 109 report_id = -1; ··· 117 116 als_state->common_attributes.hsdev, 118 117 HID_USAGE_SENSOR_ALS, address, 119 118 report_id, 120 - SENSOR_HUB_SYNC); 119 + SENSOR_HUB_SYNC, 120 + min < 0); 121 121 hid_sensor_power_state(&als_state->common_attributes, 122 122 false); 123 123 } else {
+5 -3
drivers/iio/light/hid-sensor-prox.c
··· 73 73 int report_id = -1; 74 74 u32 address; 75 75 int ret_type; 76 + s32 min; 76 77 77 78 *val = 0; 78 79 *val2 = 0; ··· 82 81 switch (chan->scan_index) { 83 82 case CHANNEL_SCAN_INDEX_PRESENCE: 84 83 report_id = prox_state->prox_attr.report_id; 85 - address = 86 - HID_USAGE_SENSOR_HUMAN_PRESENCE; 84 + min = prox_state->prox_attr.logical_minimum; 85 + address = HID_USAGE_SENSOR_HUMAN_PRESENCE; 87 86 break; 88 87 default: 89 88 report_id = -1; ··· 96 95 prox_state->common_attributes.hsdev, 97 96 HID_USAGE_SENSOR_PROX, address, 98 97 report_id, 99 - SENSOR_HUB_SYNC); 98 + SENSOR_HUB_SYNC, 99 + min < 0); 100 100 hid_sensor_power_state(&prox_state->common_attributes, 101 101 false); 102 102 } else {
+5 -3
drivers/iio/magnetometer/hid-sensor-magn-3d.c
··· 163 163 int report_id = -1; 164 164 u32 address; 165 165 int ret_type; 166 + s32 min; 166 167 167 168 *val = 0; 168 169 *val2 = 0; 169 170 switch (mask) { 170 171 case IIO_CHAN_INFO_RAW: 171 172 hid_sensor_power_state(&magn_state->magn_flux_attributes, true); 172 - report_id = 173 - magn_state->magn[chan->address].report_id; 173 + report_id = magn_state->magn[chan->address].report_id; 174 + min = magn_state->magn[chan->address].logical_minimum; 174 175 address = magn_3d_addresses[chan->address]; 175 176 if (report_id >= 0) 176 177 *val = sensor_hub_input_attr_get_raw_value( 177 178 magn_state->magn_flux_attributes.hsdev, 178 179 HID_USAGE_SENSOR_COMPASS_3D, address, 179 180 report_id, 180 - SENSOR_HUB_SYNC); 181 + SENSOR_HUB_SYNC, 182 + min < 0); 181 183 else { 182 184 *val = 0; 183 185 hid_sensor_power_state(
+3 -9
drivers/iio/magnetometer/st_magn_buffer.c
··· 30 30 return st_sensors_set_dataready_irq(indio_dev, state); 31 31 } 32 32 33 - static int st_magn_buffer_preenable(struct iio_dev *indio_dev) 34 - { 35 - return st_sensors_set_enable(indio_dev, true); 36 - } 37 - 38 33 static int st_magn_buffer_postenable(struct iio_dev *indio_dev) 39 34 { 40 35 int err; ··· 45 50 if (err < 0) 46 51 goto st_magn_buffer_postenable_error; 47 52 48 - return err; 53 + return st_sensors_set_enable(indio_dev, true); 49 54 50 55 st_magn_buffer_postenable_error: 51 56 kfree(mdata->buffer_data); ··· 58 63 int err; 59 64 struct st_sensor_data *mdata = iio_priv(indio_dev); 60 65 61 - err = iio_triggered_buffer_predisable(indio_dev); 66 + err = st_sensors_set_enable(indio_dev, false); 62 67 if (err < 0) 63 68 goto st_magn_buffer_predisable_error; 64 69 65 - err = st_sensors_set_enable(indio_dev, false); 70 + err = iio_triggered_buffer_predisable(indio_dev); 66 71 67 72 st_magn_buffer_predisable_error: 68 73 kfree(mdata->buffer_data); ··· 70 75 } 71 76 72 77 static const struct iio_buffer_setup_ops st_magn_buffer_setup_ops = { 73 - .preenable = &st_magn_buffer_preenable, 74 78 .postenable = &st_magn_buffer_postenable, 75 79 .predisable = &st_magn_buffer_predisable, 76 80 };
+5 -3
drivers/iio/orientation/hid-sensor-incl-3d.c
··· 111 111 int report_id = -1; 112 112 u32 address; 113 113 int ret_type; 114 + s32 min; 114 115 115 116 *val = 0; 116 117 *val2 = 0; 117 118 switch (mask) { 118 119 case IIO_CHAN_INFO_RAW: 119 120 hid_sensor_power_state(&incl_state->common_attributes, true); 120 - report_id = 121 - incl_state->incl[chan->scan_index].report_id; 121 + report_id = incl_state->incl[chan->scan_index].report_id; 122 + min = incl_state->incl[chan->scan_index].logical_minimum; 122 123 address = incl_3d_addresses[chan->scan_index]; 123 124 if (report_id >= 0) 124 125 *val = sensor_hub_input_attr_get_raw_value( 125 126 incl_state->common_attributes.hsdev, 126 127 HID_USAGE_SENSOR_INCLINOMETER_3D, address, 127 128 report_id, 128 - SENSOR_HUB_SYNC); 129 + SENSOR_HUB_SYNC, 130 + min < 0); 129 131 else { 130 132 hid_sensor_power_state(&incl_state->common_attributes, 131 133 false);
+5 -3
drivers/iio/pressure/hid-sensor-press.c
··· 77 77 int report_id = -1; 78 78 u32 address; 79 79 int ret_type; 80 + s32 min; 80 81 81 82 *val = 0; 82 83 *val2 = 0; ··· 86 85 switch (chan->scan_index) { 87 86 case CHANNEL_SCAN_INDEX_PRESSURE: 88 87 report_id = press_state->press_attr.report_id; 89 - address = 90 - HID_USAGE_SENSOR_ATMOSPHERIC_PRESSURE; 88 + min = press_state->press_attr.logical_minimum; 89 + address = HID_USAGE_SENSOR_ATMOSPHERIC_PRESSURE; 91 90 break; 92 91 default: 93 92 report_id = -1; ··· 100 99 press_state->common_attributes.hsdev, 101 100 HID_USAGE_SENSOR_PRESSURE, address, 102 101 report_id, 103 - SENSOR_HUB_SYNC); 102 + SENSOR_HUB_SYNC, 103 + min < 0); 104 104 hid_sensor_power_state(&press_state->common_attributes, 105 105 false); 106 106 } else {
+2 -1
drivers/iio/temperature/hid-sensor-temperature.c
··· 76 76 HID_USAGE_SENSOR_TEMPERATURE, 77 77 HID_USAGE_SENSOR_DATA_ENVIRONMENTAL_TEMPERATURE, 78 78 temp_st->temperature_attr.report_id, 79 - SENSOR_HUB_SYNC); 79 + SENSOR_HUB_SYNC, 80 + temp_st->temperature_attr.logical_minimum < 0); 80 81 hid_sensor_power_state( 81 82 &temp_st->common_attributes, 82 83 false);
+4 -2
drivers/infiniband/core/roce_gid_mgmt.c
··· 767 767 768 768 case NETDEV_CHANGEADDR: 769 769 cmds[0] = netdev_del_cmd; 770 - cmds[1] = add_default_gid_cmd; 771 - cmds[2] = add_cmd; 770 + if (ndev->reg_state == NETREG_REGISTERED) { 771 + cmds[1] = add_default_gid_cmd; 772 + cmds[2] = add_cmd; 773 + } 772 774 break; 773 775 774 776 case NETDEV_CHANGEUPPER:
+6 -14
drivers/infiniband/core/umem_odp.c
··· 137 137 up_read(&per_mm->umem_rwsem); 138 138 } 139 139 140 - static int invalidate_page_trampoline(struct ib_umem_odp *item, u64 start, 141 - u64 end, void *cookie) 142 - { 143 - ib_umem_notifier_start_account(item); 144 - item->umem.context->invalidate_range(item, start, start + PAGE_SIZE); 145 - ib_umem_notifier_end_account(item); 146 - return 0; 147 - } 148 - 149 140 static int invalidate_range_start_trampoline(struct ib_umem_odp *item, 150 141 u64 start, u64 end, void *cookie) 151 142 { ··· 544 553 put_page(page); 545 554 546 555 if (remove_existing_mapping && umem->context->invalidate_range) { 547 - invalidate_page_trampoline( 556 + ib_umem_notifier_start_account(umem_odp); 557 + umem->context->invalidate_range( 548 558 umem_odp, 549 - ib_umem_start(umem) + (page_index >> umem->page_shift), 550 - ib_umem_start(umem) + ((page_index + 1) >> 551 - umem->page_shift), 552 - NULL); 559 + ib_umem_start(umem) + (page_index << umem->page_shift), 560 + ib_umem_start(umem) + 561 + ((page_index + 1) << umem->page_shift)); 562 + ib_umem_notifier_end_account(umem_odp); 553 563 ret = -EAGAIN; 554 564 } 555 565
+3
drivers/infiniband/hw/bnxt_re/main.c
··· 1268 1268 /* Registered a new RoCE device instance to netdev */ 1269 1269 rc = bnxt_re_register_netdev(rdev); 1270 1270 if (rc) { 1271 + rtnl_unlock(); 1271 1272 pr_err("Failed to register with netedev: %#x\n", rc); 1272 1273 return -EINVAL; 1273 1274 } ··· 1467 1466 "Failed to register with IB: %#x", rc); 1468 1467 bnxt_re_remove_one(rdev); 1469 1468 bnxt_re_dev_unreg(rdev); 1469 + goto exit; 1470 1470 } 1471 1471 break; 1472 1472 case NETDEV_UP: ··· 1491 1489 } 1492 1490 smp_mb__before_atomic(); 1493 1491 atomic_dec(&rdev->sched_count); 1492 + exit: 1494 1493 kfree(re_work); 1495 1494 } 1496 1495
+60 -68
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 1756 1756 return hns_roce_cmq_send(hr_dev, &desc, 1); 1757 1757 } 1758 1758 1759 - static int hns_roce_v2_write_mtpt(void *mb_buf, struct hns_roce_mr *mr, 1760 - unsigned long mtpt_idx) 1759 + static int set_mtpt_pbl(struct hns_roce_v2_mpt_entry *mpt_entry, 1760 + struct hns_roce_mr *mr) 1761 1761 { 1762 - struct hns_roce_v2_mpt_entry *mpt_entry; 1763 1762 struct scatterlist *sg; 1764 1763 u64 page_addr; 1765 1764 u64 *pages; 1766 1765 int i, j; 1767 1766 int len; 1768 1767 int entry; 1768 + 1769 + mpt_entry->pbl_size = cpu_to_le32(mr->pbl_size); 1770 + mpt_entry->pbl_ba_l = cpu_to_le32(lower_32_bits(mr->pbl_ba >> 3)); 1771 + roce_set_field(mpt_entry->byte_48_mode_ba, 1772 + V2_MPT_BYTE_48_PBL_BA_H_M, V2_MPT_BYTE_48_PBL_BA_H_S, 1773 + upper_32_bits(mr->pbl_ba >> 3)); 1774 + 1775 + pages = (u64 *)__get_free_page(GFP_KERNEL); 1776 + if (!pages) 1777 + return -ENOMEM; 1778 + 1779 + i = 0; 1780 + for_each_sg(mr->umem->sg_head.sgl, sg, mr->umem->nmap, entry) { 1781 + len = sg_dma_len(sg) >> PAGE_SHIFT; 1782 + for (j = 0; j < len; ++j) { 1783 + page_addr = sg_dma_address(sg) + 1784 + (j << mr->umem->page_shift); 1785 + pages[i] = page_addr >> 6; 1786 + /* Record the first 2 entry directly to MTPT table */ 1787 + if (i >= HNS_ROCE_V2_MAX_INNER_MTPT_NUM - 1) 1788 + goto found; 1789 + i++; 1790 + } 1791 + } 1792 + found: 1793 + mpt_entry->pa0_l = cpu_to_le32(lower_32_bits(pages[0])); 1794 + roce_set_field(mpt_entry->byte_56_pa0_h, V2_MPT_BYTE_56_PA0_H_M, 1795 + V2_MPT_BYTE_56_PA0_H_S, upper_32_bits(pages[0])); 1796 + 1797 + mpt_entry->pa1_l = cpu_to_le32(lower_32_bits(pages[1])); 1798 + roce_set_field(mpt_entry->byte_64_buf_pa1, V2_MPT_BYTE_64_PA1_H_M, 1799 + V2_MPT_BYTE_64_PA1_H_S, upper_32_bits(pages[1])); 1800 + roce_set_field(mpt_entry->byte_64_buf_pa1, 1801 + V2_MPT_BYTE_64_PBL_BUF_PG_SZ_M, 1802 + V2_MPT_BYTE_64_PBL_BUF_PG_SZ_S, 1803 + mr->pbl_buf_pg_sz + PG_SHIFT_OFFSET); 1804 + 1805 + free_page((unsigned long)pages); 1806 + 1807 + return 0; 1808 + } 1809 + 1810 + static int hns_roce_v2_write_mtpt(void *mb_buf, struct hns_roce_mr *mr,
1811 + unsigned long mtpt_idx) 1812 + { 1813 + struct hns_roce_v2_mpt_entry *mpt_entry; 1814 + int ret; 1769 1815 1770 1816 mpt_entry = mb_buf; 1771 1817 memset(mpt_entry, 0, sizeof(*mpt_entry)); ··· 1827 1781 mr->pbl_ba_pg_sz + PG_SHIFT_OFFSET); 1828 1782 roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_PD_M, 1829 1783 V2_MPT_BYTE_4_PD_S, mr->pd); 1830 - mpt_entry->byte_4_pd_hop_st = cpu_to_le32(mpt_entry->byte_4_pd_hop_st); 1831 1784 1832 1785 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RA_EN_S, 0); 1833 1786 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_R_INV_EN_S, 1); ··· 1841 1796 (mr->access & IB_ACCESS_REMOTE_WRITE ? 1 : 0)); 1842 1797 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_LW_EN_S, 1843 1798 (mr->access & IB_ACCESS_LOCAL_WRITE ? 1 : 0)); 1844 - mpt_entry->byte_8_mw_cnt_en = cpu_to_le32(mpt_entry->byte_8_mw_cnt_en); 1845 1799 1846 1800 roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_PA_S, 1847 1801 mr->type == MR_TYPE_MR ? 0 : 1);
1848 1802 roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_INNER_PA_VLD_S, 1849 1803 1); 1850 - mpt_entry->byte_12_mw_pa = cpu_to_le32(mpt_entry->byte_12_mw_pa); 1851 1804 1852 1805 mpt_entry->len_l = cpu_to_le32(lower_32_bits(mr->size)); 1853 1806 mpt_entry->len_h = cpu_to_le32(upper_32_bits(mr->size)); ··· 1856 1813 if (mr->type == MR_TYPE_DMA) 1857 1814 return 0; 1858 1815 1859 - mpt_entry->pbl_size = cpu_to_le32(mr->pbl_size); 1816 + ret = set_mtpt_pbl(mpt_entry, mr); 1860 1817 1861 - mpt_entry->pbl_ba_l = cpu_to_le32(lower_32_bits(mr->pbl_ba >> 3)); 1862 - roce_set_field(mpt_entry->byte_48_mode_ba, V2_MPT_BYTE_48_PBL_BA_H_M, 1863 - V2_MPT_BYTE_48_PBL_BA_H_S, 1864 - upper_32_bits(mr->pbl_ba >> 3)); 1865 - mpt_entry->byte_48_mode_ba = cpu_to_le32(mpt_entry->byte_48_mode_ba); 1866 - 1867 - pages = (u64 *)__get_free_page(GFP_KERNEL); 1868 - if (!pages) 1869 - return -ENOMEM; 1870 - 1871 - i = 0; 1872 - for_each_sg(mr->umem->sg_head.sgl, sg, mr->umem->nmap, entry) { 1873 - len = sg_dma_len(sg) >> PAGE_SHIFT; 1874 - for (j = 0; j < len; ++j) { 1875 - page_addr = sg_dma_address(sg) + 1876 - (j << mr->umem->page_shift); 1877 - pages[i] = page_addr >> 6; 1878 - 1879 - /* Record the first 2 entry directly to MTPT table */ 1880 - if (i >= HNS_ROCE_V2_MAX_INNER_MTPT_NUM - 1) 1881 - goto found; 1882 - i++; 1883 - } 1884 - } 1885 - 1886 - found: 1887 - mpt_entry->pa0_l = cpu_to_le32(lower_32_bits(pages[0])); 1888 - roce_set_field(mpt_entry->byte_56_pa0_h, V2_MPT_BYTE_56_PA0_H_M, 1889 - V2_MPT_BYTE_56_PA0_H_S, 1890 - upper_32_bits(pages[0])); 1891 - mpt_entry->byte_56_pa0_h = cpu_to_le32(mpt_entry->byte_56_pa0_h); 1892 - 1893 - mpt_entry->pa1_l = cpu_to_le32(lower_32_bits(pages[1])); 1894 - roce_set_field(mpt_entry->byte_64_buf_pa1, V2_MPT_BYTE_64_PA1_H_M, 1895 - V2_MPT_BYTE_64_PA1_H_S, upper_32_bits(pages[1])); 1896 - 1897 - free_page((unsigned long)pages); 1898 - 1899 - roce_set_field(mpt_entry->byte_64_buf_pa1, 1900 - V2_MPT_BYTE_64_PBL_BUF_PG_SZ_M, 1901 - 
V2_MPT_BYTE_64_PBL_BUF_PG_SZ_S, 1902 - mr->pbl_buf_pg_sz + PG_SHIFT_OFFSET); 1903 - mpt_entry->byte_64_buf_pa1 = cpu_to_le32(mpt_entry->byte_64_buf_pa1); 1904 - 1905 - return 0; 1818 + return ret; 1906 1819 } 1907 1820 1908 1821 static int hns_roce_v2_rereg_write_mtpt(struct hns_roce_dev *hr_dev, ··· 1867 1868 u64 size, void *mb_buf) 1868 1869 { 1869 1870 struct hns_roce_v2_mpt_entry *mpt_entry = mb_buf; 1871 + int ret = 0; 1870 1872 1871 1873 if (flags & IB_MR_REREG_PD) { 1872 1874 roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_PD_M, ··· 1880 1880 V2_MPT_BYTE_8_BIND_EN_S, 1881 1881 (mr_access_flags & IB_ACCESS_MW_BIND ? 1 : 0)); 1882 1882 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, 1883 - V2_MPT_BYTE_8_ATOMIC_EN_S, 1884 - (mr_access_flags & IB_ACCESS_REMOTE_ATOMIC ? 1 : 0)); 1883 + V2_MPT_BYTE_8_ATOMIC_EN_S, 1884 + mr_access_flags & IB_ACCESS_REMOTE_ATOMIC ? 1 : 0); 1885 1885 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RR_EN_S, 1886 - (mr_access_flags & IB_ACCESS_REMOTE_READ ? 1 : 0)); 1886 + mr_access_flags & IB_ACCESS_REMOTE_READ ? 1 : 0); 1887 1887 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RW_EN_S, 1888 - (mr_access_flags & IB_ACCESS_REMOTE_WRITE ? 1 : 0)); 1888 + mr_access_flags & IB_ACCESS_REMOTE_WRITE ? 1 : 0); 1889 1889 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_LW_EN_S, 1890 - (mr_access_flags & IB_ACCESS_LOCAL_WRITE ? 1 : 0)); 1890 + mr_access_flags & IB_ACCESS_LOCAL_WRITE ? 
1 : 0); 1891 1891 } 1892 1892 1893 1893 if (flags & IB_MR_REREG_TRANS) { ··· 1896 1896 mpt_entry->len_l = cpu_to_le32(lower_32_bits(size)); 1897 1897 mpt_entry->len_h = cpu_to_le32(upper_32_bits(size)); 1898 1898 1899 - mpt_entry->pbl_size = cpu_to_le32(mr->pbl_size); 1900 - mpt_entry->pbl_ba_l = 1901 - cpu_to_le32(lower_32_bits(mr->pbl_ba >> 3)); 1902 - roce_set_field(mpt_entry->byte_48_mode_ba, 1903 - V2_MPT_BYTE_48_PBL_BA_H_M, 1904 - V2_MPT_BYTE_48_PBL_BA_H_S, 1905 - upper_32_bits(mr->pbl_ba >> 3)); 1906 - mpt_entry->byte_48_mode_ba = 1907 - cpu_to_le32(mpt_entry->byte_48_mode_ba); 1908 - 1909 1899 mr->iova = iova; 1910 1900 mr->size = size; 1901 + 1902 + ret = set_mtpt_pbl(mpt_entry, mr); 1911 1903 } 1912 1904 1913 - return 0; 1905 + return ret; 1914 1906 } 1915 1907 1916 1908 static int hns_roce_v2_frmr_write_mtpt(void *mb_buf, struct hns_roce_mr *mr)
+11 -18
drivers/infiniband/hw/mlx5/main.c
··· 1094 1094 MLX5_IB_WIDTH_12X = 1 << 4 1095 1095 }; 1096 1096 1097 - static int translate_active_width(struct ib_device *ibdev, u8 active_width, 1097 + static void translate_active_width(struct ib_device *ibdev, u8 active_width, 1098 1098 u8 *ib_width) 1099 1099 { 1100 1100 struct mlx5_ib_dev *dev = to_mdev(ibdev); 1101 - int err = 0; 1102 1101 1103 - if (active_width & MLX5_IB_WIDTH_1X) { 1102 + if (active_width & MLX5_IB_WIDTH_1X) 1104 1103 *ib_width = IB_WIDTH_1X; 1105 - } else if (active_width & MLX5_IB_WIDTH_2X) { 1106 - mlx5_ib_dbg(dev, "active_width %d is not supported by IB spec\n", 1107 - (int)active_width); 1108 - err = -EINVAL; 1109 - } else if (active_width & MLX5_IB_WIDTH_4X) { 1104 + else if (active_width & MLX5_IB_WIDTH_4X) 1110 1105 *ib_width = IB_WIDTH_4X; 1111 - } else if (active_width & MLX5_IB_WIDTH_8X) { 1106 + else if (active_width & MLX5_IB_WIDTH_8X) 1112 1107 *ib_width = IB_WIDTH_8X; 1113 - } else if (active_width & MLX5_IB_WIDTH_12X) { 1108 + else if (active_width & MLX5_IB_WIDTH_12X) 1114 1109 *ib_width = IB_WIDTH_12X; 1115 - } else { 1116 - mlx5_ib_dbg(dev, "Invalid active_width %d\n", 1110 + else { 1111 + mlx5_ib_dbg(dev, "Invalid active_width %d, setting width to default value: 4x\n", 1117 1112 (int)active_width); 1118 - err = -EINVAL; 1113 + *ib_width = IB_WIDTH_4X; 1119 1114 } 1120 1115 1121 - return err; 1116 + return; 1122 1117 } 1123 1118 1124 1119 static int mlx5_mtu_to_ib_mtu(int mtu) ··· 1220 1225 if (err) 1221 1226 goto out; 1222 1227 1223 - err = translate_active_width(ibdev, ib_link_width_oper, 1224 - &props->active_width); 1225 - if (err) 1226 - goto out; 1228 + translate_active_width(ibdev, ib_link_width_oper, &props->active_width); 1229 + 1227 1230 err = mlx5_query_port_ib_proto_oper(mdev, &props->active_speed, port); 1228 1231 if (err) 1229 1232 goto out;
+10
drivers/infiniband/hw/mlx5/odp.c
··· 674 674 goto srcu_unlock; 675 675 } 676 676 677 + if (!mr->umem->is_odp) { 678 + mlx5_ib_dbg(dev, "skipping non ODP MR (lkey=0x%06x) in page fault handler.\n", 679 + key); 680 + if (bytes_mapped) 681 + *bytes_mapped += bcnt; 682 + ret = 0; 683 + goto srcu_unlock; 684 + } 685 + 677 686 ret = pagefault_mr(dev, mr, io_virt, bcnt, bytes_mapped); 678 687 if (ret < 0) 679 688 goto srcu_unlock; ··· 744 735 head = frame; 745 736 746 737 bcnt -= frame->bcnt; 738 + offset = 0; 747 739 } 748 740 break; 749 741
+11 -11
drivers/infiniband/hw/mlx5/qp.c
··· 2633 2633 2634 2634 if (access_flags & IB_ACCESS_REMOTE_READ) 2635 2635 *hw_access_flags |= MLX5_QP_BIT_RRE; 2636 - if ((access_flags & IB_ACCESS_REMOTE_ATOMIC) && 2637 - qp->ibqp.qp_type == IB_QPT_RC) { 2636 + if (access_flags & IB_ACCESS_REMOTE_ATOMIC) { 2638 2637 int atomic_mode; 2639 2638 2640 2639 atomic_mode = get_atomic_mode(dev, qp->ibqp.qp_type); ··· 4677 4678 goto out; 4678 4679 } 4679 4680 4680 - if (wr->opcode == IB_WR_LOCAL_INV || 4681 - wr->opcode == IB_WR_REG_MR) { 4681 + if (wr->opcode == IB_WR_REG_MR) { 4682 4682 fence = dev->umr_fence; 4683 4683 next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL; 4684 - } else if (wr->send_flags & IB_SEND_FENCE) { 4685 - if (qp->next_fence) 4686 - fence = MLX5_FENCE_MODE_SMALL_AND_FENCE; 4687 - else 4688 - fence = MLX5_FENCE_MODE_FENCE; 4689 - } else { 4690 - fence = qp->next_fence; 4684 + } else { 4685 + if (wr->send_flags & IB_SEND_FENCE) { 4686 + if (qp->next_fence) 4687 + fence = MLX5_FENCE_MODE_SMALL_AND_FENCE; 4688 + else 4689 + fence = MLX5_FENCE_MODE_FENCE; 4690 + } else { 4691 + fence = qp->next_fence; 4692 + } 4691 4693 } 4692 4694 4693 4695 switch (ibqp->qp_type) {
+3 -1
drivers/infiniband/sw/rdmavt/ah.c
··· 91 91 * rvt_create_ah - create an address handle 92 92 * @pd: the protection domain 93 93 * @ah_attr: the attributes of the AH 94 + * @udata: pointer to user's input output buffer information. 94 95 * 95 96 * This may be called from interrupt context. 96 97 * 97 98 * Return: newly allocated ah 98 99 */ 99 100 struct ib_ah *rvt_create_ah(struct ib_pd *pd, 100 - struct rdma_ah_attr *ah_attr) 101 + struct rdma_ah_attr *ah_attr, 102 + struct ib_udata *udata) 101 103 { 102 104 struct rvt_ah *ah; 103 105 struct rvt_dev_info *dev = ib_to_rvt(pd->device);
+2 -1
drivers/infiniband/sw/rdmavt/ah.h
··· 51 51 #include <rdma/rdma_vt.h> 52 52 53 53 struct ib_ah *rvt_create_ah(struct ib_pd *pd, 54 - struct rdma_ah_attr *ah_attr); 54 + struct rdma_ah_attr *ah_attr, 55 + struct ib_udata *udata); 55 56 int rvt_destroy_ah(struct ib_ah *ibah); 56 57 int rvt_modify_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr); 57 58 int rvt_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr);
+3 -4
drivers/infiniband/ulp/iser/iser_verbs.c
··· 1124 1124 IB_MR_CHECK_SIG_STATUS, &mr_status); 1125 1125 if (ret) { 1126 1126 pr_err("ib_check_mr_status failed, ret %d\n", ret); 1127 - goto err; 1127 + /* Not a lot we can do, return ambiguous guard error */ 1128 + *sector = 0; 1129 + return 0x1; 1128 1130 } 1129 1131 1130 1132 if (mr_status.fail_status & IB_MR_CHECK_SIG_STATUS) { ··· 1154 1152 } 1155 1153 1156 1154 return 0; 1157 - err: 1158 - /* Not alot we can do here, return ambiguous guard error */ 1159 - return 0x1; 1160 1155 } 1161 1156 1162 1157 void iser_err_comp(struct ib_wc *wc, const char *type)
+2 -1
drivers/iommu/amd_iommu_init.c
··· 797 797 entry = iommu_virt_to_phys(iommu->ga_log) | GA_LOG_SIZE_512; 798 798 memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_BASE_OFFSET, 799 799 &entry, sizeof(entry)); 800 - entry = (iommu_virt_to_phys(iommu->ga_log) & 0xFFFFFFFFFFFFFULL) & ~7ULL; 800 + entry = (iommu_virt_to_phys(iommu->ga_log_tail) & 801 + (BIT_ULL(52)-1)) & ~7ULL; 801 802 memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_TAIL_OFFSET, 802 803 &entry, sizeof(entry)); 803 804 writel(0x00, iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
+1 -1
drivers/iommu/intel-iommu.c
··· 3075 3075 } 3076 3076 3077 3077 if (old_ce) 3078 - iounmap(old_ce); 3078 + memunmap(old_ce); 3079 3079 3080 3080 ret = 0; 3081 3081 if (devfn < 0x80)
+1 -1
drivers/iommu/intel-svm.c
··· 595 595 pr_err("%s: Page request without PASID: %08llx %08llx\n", 596 596 iommu->name, ((unsigned long long *)req)[0], 597 597 ((unsigned long long *)req)[1]); 598 - goto bad_req; 598 + goto no_pasid; 599 599 } 600 600 601 601 if (!svm || svm->pasid != req->pasid) {
+3
drivers/iommu/ipmmu-vmsa.c
··· 498 498 499 499 static void ipmmu_domain_destroy_context(struct ipmmu_vmsa_domain *domain) 500 500 { 501 + if (!domain->mmu) 502 + return; 503 + 501 504 /* 502 505 * Disable the context. Flush the TLB as required when modifying the 503 506 * context registers.
+8 -19
drivers/leds/trigger/ledtrig-pattern.c
··· 75 75 { 76 76 struct pattern_trig_data *data = from_timer(data, t, timer); 77 77 78 - mutex_lock(&data->lock); 79 - 80 78 for (;;) { 81 79 if (!data->is_indefinite && !data->repeat) 82 80 break; ··· 85 87 data->curr->brightness); 86 88 mod_timer(&data->timer, 87 89 jiffies + msecs_to_jiffies(data->curr->delta_t)); 88 - 89 - /* Skip the tuple with zero duration */ 90 - pattern_trig_update_patterns(data); 90 + if (!data->next->delta_t) { 91 + /* Skip the tuple with zero duration */ 92 + pattern_trig_update_patterns(data); 93 + } 91 94 /* Select next tuple */ 92 95 pattern_trig_update_patterns(data); 93 96 } else { ··· 115 116 116 117 break; 117 118 } 118 - 119 - mutex_unlock(&data->lock); 120 119 } 121 120 122 121 static int pattern_trig_start_pattern(struct led_classdev *led_cdev) ··· 173 176 if (res < -1 || res == 0) 174 177 return -EINVAL; 175 178 176 - /* 177 - * Clear previous patterns' performence firstly, and remove the timer 178 - * without mutex lock to avoid dead lock. 179 - */ 180 - del_timer_sync(&data->timer); 181 - 182 179 mutex_lock(&data->lock); 180 + 181 + del_timer_sync(&data->timer); 183 182 184 183 if (data->is_hw_pattern) 185 184 led_cdev->pattern_clear(led_cdev); ··· 227 234 struct pattern_trig_data *data = led_cdev->trigger_data; 228 235 int ccount, cr, offset = 0, err = 0; 229 236 230 - /* 231 - * Clear previous patterns' performence firstly, and remove the timer 232 - * without mutex lock to avoid dead lock. 233 - */ 234 - del_timer_sync(&data->timer); 235 - 236 237 mutex_lock(&data->lock); 238 + 239 + del_timer_sync(&data->timer); 237 240 238 241 if (data->is_hw_pattern) 239 242 led_cdev->pattern_clear(led_cdev);
+38 -11
drivers/media/cec/cec-adap.c
··· 807 807 } 808 808 809 809 if (adap->transmit_queue_sz >= CEC_MAX_MSG_TX_QUEUE_SZ) { 810 - dprintk(1, "%s: transmit queue full\n", __func__); 810 + dprintk(2, "%s: transmit queue full\n", __func__); 811 811 return -EBUSY; 812 812 } 813 813 ··· 1180 1180 { 1181 1181 struct cec_log_addrs *las = &adap->log_addrs; 1182 1182 struct cec_msg msg = { }; 1183 + const unsigned int max_retries = 2; 1184 + unsigned int i; 1183 1185 int err; 1184 1186 1185 1187 if (cec_has_log_addr(adap, log_addr)) ··· 1190 1188 /* Send poll message */ 1191 1189 msg.len = 1; 1192 1190 msg.msg[0] = (log_addr << 4) | log_addr; 1193 - err = cec_transmit_msg_fh(adap, &msg, NULL, true); 1191 + 1192 + for (i = 0; i < max_retries; i++) { 1193 + err = cec_transmit_msg_fh(adap, &msg, NULL, true); 1194 + 1195 + /* 1196 + * While trying to poll the physical address was reset 1197 + * and the adapter was unconfigured, so bail out. 1198 + */ 1199 + if (!adap->is_configuring) 1200 + return -EINTR; 1201 + 1202 + if (err) 1203 + return err; 1204 + 1205 + /* 1206 + * The message was aborted due to a disconnect or 1207 + * unconfigure, just bail out. 1208 + */ 1209 + if (msg.tx_status & CEC_TX_STATUS_ABORTED) 1210 + return -EINTR; 1211 + if (msg.tx_status & CEC_TX_STATUS_OK) 1212 + return 0; 1213 + if (msg.tx_status & CEC_TX_STATUS_NACK) 1214 + break; 1215 + /* 1216 + * Retry up to max_retries times if the message was neither 1217 + * OKed or NACKed. This can happen due to e.g. a Lost 1218 + * Arbitration condition. 1219 + */ 1220 + } 1194 1221 1195 1222 /* 1196 - * While trying to poll the physical address was reset 1197 - * and the adapter was unconfigured, so bail out. 1223 + * If we are unable to get an OK or a NACK after max_retries attempts 1224 + * (and note that each attempt already consists of four polls), then 1225 + * then we assume that something is really weird and that it is not a 1226 + * good idea to try and claim this logical address. 
1198 1227 */ 1199 - if (!adap->is_configuring) 1200 - return -EINTR; 1201 - 1202 - if (err) 1203 - return err; 1204 - 1205 - if (msg.tx_status & CEC_TX_STATUS_OK) 1228 + if (i == max_retries) 1206 1229 return 0; 1207 1230 1208 1231 /*
-1
drivers/media/i2c/tc358743.c
··· 1918 1918 ret = v4l2_fwnode_endpoint_alloc_parse(of_fwnode_handle(ep), &endpoint); 1919 1919 if (ret) { 1920 1920 dev_err(dev, "failed to parse endpoint\n"); 1921 - ret = ret; 1922 1921 goto put_node; 1923 1922 } 1924 1923
+4 -6
drivers/media/pci/intel/ipu3/ipu3-cio2.c
··· 1844 1844 static void cio2_pci_remove(struct pci_dev *pci_dev) 1845 1845 { 1846 1846 struct cio2_device *cio2 = pci_get_drvdata(pci_dev); 1847 - unsigned int i; 1848 1847 1849 - cio2_notifier_exit(cio2); 1850 - cio2_fbpt_exit_dummy(cio2); 1851 - for (i = 0; i < CIO2_QUEUES; i++) 1852 - cio2_queue_exit(cio2, &cio2->queue[i]); 1853 - v4l2_device_unregister(&cio2->v4l2_dev); 1854 1848 media_device_unregister(&cio2->media_dev); 1849 + cio2_notifier_exit(cio2); 1850 + cio2_queues_exit(cio2); 1851 + cio2_fbpt_exit_dummy(cio2); 1852 + v4l2_device_unregister(&cio2->v4l2_dev); 1855 1853 media_device_cleanup(&cio2->media_dev); 1856 1854 mutex_destroy(&cio2->lock); 1857 1855 }
+2 -1
drivers/media/platform/omap3isp/isp.c
··· 1587 1587 1588 1588 static void isp_unregister_entities(struct isp_device *isp) 1589 1589 { 1590 + media_device_unregister(&isp->media_dev); 1591 + 1590 1592 omap3isp_csi2_unregister_entities(&isp->isp_csi2a); 1591 1593 omap3isp_ccp2_unregister_entities(&isp->isp_ccp2); 1592 1594 omap3isp_ccdc_unregister_entities(&isp->isp_ccdc); ··· 1599 1597 omap3isp_stat_unregister_entities(&isp->isp_hist); 1600 1598 1601 1599 v4l2_device_unregister(&isp->v4l2_dev); 1602 - media_device_unregister(&isp->media_dev); 1603 1600 media_device_cleanup(&isp->media_dev); 1604 1601 } 1605 1602
+1 -1
drivers/media/platform/vicodec/vicodec-core.c
··· 42 42 #define MAX_WIDTH 4096U 43 43 #define MIN_WIDTH 640U 44 44 #define MAX_HEIGHT 2160U 45 - #define MIN_HEIGHT 480U 45 + #define MIN_HEIGHT 360U 46 46 47 47 #define dprintk(dev, fmt, arg...) \ 48 48 v4l2_dbg(1, debug, &dev->v4l2_dev, "%s: " fmt, __func__, ## arg)
+1 -1
drivers/media/platform/vim2m.c
··· 1009 1009 1010 1010 static const struct media_device_ops m2m_media_ops = { 1011 1011 .req_validate = vb2_request_validate, 1012 - .req_queue = vb2_m2m_request_queue, 1012 + .req_queue = v4l2_m2m_request_queue, 1013 1013 }; 1014 1014 1015 1015 static int vim2m_probe(struct platform_device *pdev)
+5
drivers/media/v4l2-core/v4l2-ctrls.c
··· 1664 1664 p_mpeg2_slice_params->forward_ref_index >= VIDEO_MAX_FRAME) 1665 1665 return -EINVAL; 1666 1666 1667 + if (p_mpeg2_slice_params->pad || 1668 + p_mpeg2_slice_params->picture.pad || 1669 + p_mpeg2_slice_params->sequence.pad) 1670 + return -EINVAL; 1671 + 1667 1672 return 0; 1668 1673 1669 1674 case V4L2_CTRL_TYPE_MPEG2_QUANTIZATION:
+24 -19
drivers/media/v4l2-core/v4l2-event.c
··· 193 193 } 194 194 EXPORT_SYMBOL_GPL(v4l2_event_pending); 195 195 196 + static void __v4l2_event_unsubscribe(struct v4l2_subscribed_event *sev) 197 + { 198 + struct v4l2_fh *fh = sev->fh; 199 + unsigned int i; 200 + 201 + lockdep_assert_held(&fh->subscribe_lock); 202 + assert_spin_locked(&fh->vdev->fh_lock); 203 + 204 + /* Remove any pending events for this subscription */ 205 + for (i = 0; i < sev->in_use; i++) { 206 + list_del(&sev->events[sev_pos(sev, i)].list); 207 + fh->navailable--; 208 + } 209 + list_del(&sev->list); 210 + } 211 + 196 212 int v4l2_event_subscribe(struct v4l2_fh *fh, 197 213 const struct v4l2_event_subscription *sub, unsigned elems, 198 214 const struct v4l2_subscribed_event_ops *ops) ··· 240 224 241 225 spin_lock_irqsave(&fh->vdev->fh_lock, flags); 242 226 found_ev = v4l2_event_subscribed(fh, sub->type, sub->id); 227 + if (!found_ev) 228 + list_add(&sev->list, &fh->subscribed); 243 229 spin_unlock_irqrestore(&fh->vdev->fh_lock, flags); 244 230 245 231 if (found_ev) { 246 232 /* Already listening */ 247 233 kvfree(sev); 248 - goto out_unlock; 249 - } 250 - 251 - if (sev->ops && sev->ops->add) { 234 + } else if (sev->ops && sev->ops->add) { 252 235 ret = sev->ops->add(sev, elems); 253 236 if (ret) { 237 + spin_lock_irqsave(&fh->vdev->fh_lock, flags); 238 + __v4l2_event_unsubscribe(sev); 239 + spin_unlock_irqrestore(&fh->vdev->fh_lock, flags); 254 240 kvfree(sev); 255 - goto out_unlock; 256 241 } 257 242 } 258 243 259 - spin_lock_irqsave(&fh->vdev->fh_lock, flags); 260 - list_add(&sev->list, &fh->subscribed); 261 - spin_unlock_irqrestore(&fh->vdev->fh_lock, flags); 262 - 263 - out_unlock: 264 244 mutex_unlock(&fh->subscribe_lock); 265 245 266 246 return ret; ··· 291 279 { 292 280 struct v4l2_subscribed_event *sev; 293 281 unsigned long flags; 294 - int i; 295 282 296 283 if (sub->type == V4L2_EVENT_ALL) { 297 284 v4l2_event_unsubscribe_all(fh); ··· 302 291 spin_lock_irqsave(&fh->vdev->fh_lock, flags); 303 292 304 293 sev = 
v4l2_event_subscribed(fh, sub->type, sub->id); 305 - if (sev != NULL) { 306 - /* Remove any pending events for this subscription */ 307 - for (i = 0; i < sev->in_use; i++) { 308 - list_del(&sev->events[sev_pos(sev, i)].list); 309 - fh->navailable--; 310 - } 311 - list_del(&sev->list); 312 - } 294 + if (sev != NULL) 295 + __v4l2_event_unsubscribe(sev); 313 296 314 297 spin_unlock_irqrestore(&fh->vdev->fh_lock, flags); 315 298
+2 -2
drivers/media/v4l2-core/v4l2-mem2mem.c
··· 953 953 } 954 954 EXPORT_SYMBOL_GPL(v4l2_m2m_buf_queue); 955 955 956 - void vb2_m2m_request_queue(struct media_request *req) 956 + void v4l2_m2m_request_queue(struct media_request *req) 957 957 { 958 958 struct media_request_object *obj, *obj_safe; 959 959 struct v4l2_m2m_ctx *m2m_ctx = NULL; ··· 997 997 if (m2m_ctx) 998 998 v4l2_m2m_try_schedule(m2m_ctx); 999 999 } 1000 - EXPORT_SYMBOL_GPL(vb2_m2m_request_queue); 1000 + EXPORT_SYMBOL_GPL(v4l2_m2m_request_queue); 1001 1001 1002 1002 /* Videobuf2 ioctl helpers */ 1003 1003
+1 -1
drivers/misc/atmel-ssc.c
··· 132 132 MODULE_DEVICE_TABLE(of, atmel_ssc_dt_ids); 133 133 #endif 134 134 135 - static inline const struct atmel_ssc_platform_data * __init 135 + static inline const struct atmel_ssc_platform_data * 136 136 atmel_ssc_get_driver_data(struct platform_device *pdev) 137 137 { 138 138 if (pdev->dev.of_node) {
+1 -1
drivers/misc/mic/scif/scif_rma.c
··· 416 416 if (err) 417 417 goto error_window; 418 418 err = scif_map_page(&window->num_pages_lookup.lookup[j], 419 - vmalloc_dma_phys ? 419 + vmalloc_num_pages ? 420 420 vmalloc_to_page(&window->num_pages[i]) : 421 421 virt_to_page(&window->num_pages[i]), 422 422 remote_dev);
+4
drivers/misc/sgi-gru/grukdump.c
··· 27 27 #include <linux/delay.h> 28 28 #include <linux/bitops.h> 29 29 #include <asm/uv/uv_hub.h> 30 + 31 + #include <linux/nospec.h> 32 + 30 33 #include "gru.h" 31 34 #include "grutables.h" 32 35 #include "gruhandles.h" ··· 199 196 /* Currently, only dump by gid is implemented */ 200 197 if (req.gid >= gru_max_gids) 201 198 return -EINVAL; 199 + req.gid = array_index_nospec(req.gid, gru_max_gids); 202 200 203 201 gru = GID_TO_GRU(req.gid); 204 202 ubuf = req.buf;
+83 -3
drivers/mmc/host/sdhci-pci-core.c
··· 12 12 * - JMicron (hardware and technical support) 13 13 */ 14 14 15 + #include <linux/bitfield.h> 15 16 #include <linux/string.h> 16 17 #include <linux/delay.h> 17 18 #include <linux/highmem.h> ··· 463 462 u32 dsm_fns; 464 463 int drv_strength; 465 464 bool d3_retune; 465 + bool rpm_retune_ok; 466 + u32 glk_rx_ctrl1; 467 + u32 glk_tun_val; 466 468 }; 467 469 468 470 static const guid_t intel_dsm_guid = ··· 795 791 return ret; 796 792 } 797 793 794 + #ifdef CONFIG_PM 795 + #define GLK_RX_CTRL1 0x834 796 + #define GLK_TUN_VAL 0x840 797 + #define GLK_PATH_PLL GENMASK(13, 8) 798 + #define GLK_DLY GENMASK(6, 0) 799 + /* Workaround firmware failing to restore the tuning value */ 800 + static void glk_rpm_retune_wa(struct sdhci_pci_chip *chip, bool susp) 801 + { 802 + struct sdhci_pci_slot *slot = chip->slots[0]; 803 + struct intel_host *intel_host = sdhci_pci_priv(slot); 804 + struct sdhci_host *host = slot->host; 805 + u32 glk_rx_ctrl1; 806 + u32 glk_tun_val; 807 + u32 dly; 808 + 809 + if (intel_host->rpm_retune_ok || !mmc_can_retune(host->mmc)) 810 + return; 811 + 812 + glk_rx_ctrl1 = sdhci_readl(host, GLK_RX_CTRL1); 813 + glk_tun_val = sdhci_readl(host, GLK_TUN_VAL); 814 + 815 + if (susp) { 816 + intel_host->glk_rx_ctrl1 = glk_rx_ctrl1; 817 + intel_host->glk_tun_val = glk_tun_val; 818 + return; 819 + } 820 + 821 + if (!intel_host->glk_tun_val) 822 + return; 823 + 824 + if (glk_rx_ctrl1 != intel_host->glk_rx_ctrl1) { 825 + intel_host->rpm_retune_ok = true; 826 + return; 827 + } 828 + 829 + dly = FIELD_PREP(GLK_DLY, FIELD_GET(GLK_PATH_PLL, glk_rx_ctrl1) + 830 + (intel_host->glk_tun_val << 1)); 831 + if (dly == FIELD_GET(GLK_DLY, glk_rx_ctrl1)) 832 + return; 833 + 834 + glk_rx_ctrl1 = (glk_rx_ctrl1 & ~GLK_DLY) | dly; 835 + sdhci_writel(host, glk_rx_ctrl1, GLK_RX_CTRL1); 836 + 837 + intel_host->rpm_retune_ok = true; 838 + chip->rpm_retune = true; 839 + mmc_retune_needed(host->mmc); 840 + pr_info("%s: Requiring re-tune after rpm resume", mmc_hostname(host->mmc)); 841 
+ } 842 + 843 + static void glk_rpm_retune_chk(struct sdhci_pci_chip *chip, bool susp) 844 + { 845 + if (chip->pdev->device == PCI_DEVICE_ID_INTEL_GLK_EMMC && 846 + !chip->rpm_retune) 847 + glk_rpm_retune_wa(chip, susp); 848 + } 849 + 850 + static int glk_runtime_suspend(struct sdhci_pci_chip *chip) 851 + { 852 + glk_rpm_retune_chk(chip, true); 853 + 854 + return sdhci_cqhci_runtime_suspend(chip); 855 + } 856 + 857 + static int glk_runtime_resume(struct sdhci_pci_chip *chip) 858 + { 859 + glk_rpm_retune_chk(chip, false); 860 + 861 + return sdhci_cqhci_runtime_resume(chip); 862 + } 863 + #endif 864 + 798 865 #ifdef CONFIG_ACPI 799 866 static int ni_set_max_freq(struct sdhci_pci_slot *slot) 800 867 { ··· 954 879 .resume = sdhci_cqhci_resume, 955 880 #endif 956 881 #ifdef CONFIG_PM 957 - .runtime_suspend = sdhci_cqhci_runtime_suspend, 958 - .runtime_resume = sdhci_cqhci_runtime_resume, 882 + .runtime_suspend = glk_runtime_suspend, 883 + .runtime_resume = glk_runtime_resume, 959 884 #endif 960 885 .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, 961 886 .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | ··· 1837 1762 device_init_wakeup(&pdev->dev, true); 1838 1763 1839 1764 if (slot->cd_idx >= 0) { 1840 - ret = mmc_gpiod_request_cd(host->mmc, NULL, slot->cd_idx, 1765 + ret = mmc_gpiod_request_cd(host->mmc, "cd", slot->cd_idx, 1841 1766 slot->cd_override_level, 0, NULL); 1767 + if (ret && ret != -EPROBE_DEFER) 1768 + ret = mmc_gpiod_request_cd(host->mmc, NULL, 1769 + slot->cd_idx, 1770 + slot->cd_override_level, 1771 + 0, NULL); 1842 1772 if (ret == -EPROBE_DEFER) 1843 1773 goto remove; 1844 1774
+1 -1
drivers/mtd/devices/Kconfig
··· 207 207 config MTD_DOCG3 208 208 tristate "M-Systems Disk-On-Chip G3" 209 209 select BCH 210 - select BCH_CONST_PARAMS 210 + select BCH_CONST_PARAMS if !MTD_NAND_BCH 211 211 select BITREVERSE 212 212 help 213 213 This provides an MTD device driver for the M-Systems DiskOnChip
+9 -1
drivers/mtd/maps/sa1100-flash.c
··· 221 221 info->mtd = info->subdev[0].mtd; 222 222 ret = 0; 223 223 } else if (info->num_subdev > 1) { 224 - struct mtd_info *cdev[nr]; 224 + struct mtd_info **cdev; 225 + 226 + cdev = kmalloc_array(nr, sizeof(*cdev), GFP_KERNEL); 227 + if (!cdev) { 228 + ret = -ENOMEM; 229 + goto err; 230 + } 231 + 225 232 /* 226 233 * We detected multiple devices. Concatenate them together. 227 234 */ ··· 237 230 238 231 info->mtd = mtd_concat_create(cdev, info->num_subdev, 239 232 plat->name); 233 + kfree(cdev); 240 234 if (info->mtd == NULL) { 241 235 ret = -ENXIO; 242 236 goto err;
+2 -1
drivers/mtd/nand/bbt.c
··· 27 27 unsigned int nwords = DIV_ROUND_UP(nblocks * bits_per_block, 28 28 BITS_PER_LONG); 29 29 30 - nand->bbt.cache = kzalloc(nwords, GFP_KERNEL); 30 + nand->bbt.cache = kcalloc(nwords, sizeof(*nand->bbt.cache), 31 + GFP_KERNEL); 31 32 if (!nand->bbt.cache) 32 33 return -ENOMEM; 33 34
+7 -4
drivers/mtd/nand/raw/atmel/nand-controller.c
··· 2032 2032 int ret; 2033 2033 2034 2034 nand_np = dev->of_node; 2035 - nfc_np = of_find_compatible_node(dev->of_node, NULL, 2036 - "atmel,sama5d3-nfc"); 2035 + nfc_np = of_get_compatible_child(dev->of_node, "atmel,sama5d3-nfc"); 2037 2036 if (!nfc_np) { 2038 2037 dev_err(dev, "Could not find device node for sama5d3-nfc\n"); 2039 2038 return -ENODEV; ··· 2446 2447 } 2447 2448 2448 2449 if (caps->legacy_of_bindings) { 2450 + struct device_node *nfc_node; 2449 2451 u32 ale_offs = 21; 2450 2452 2451 2453 /* 2452 2454 * If we are parsing legacy DT props and the DT contains a 2453 2455 * valid NFC node, forward the request to the sama5 logic. 2454 2456 */ 2455 - if (of_find_compatible_node(pdev->dev.of_node, NULL, 2456 - "atmel,sama5d3-nfc")) 2457 + nfc_node = of_get_compatible_child(pdev->dev.of_node, 2458 + "atmel,sama5d3-nfc"); 2459 + if (nfc_node) { 2457 2460 caps = &atmel_sama5_nand_caps; 2461 + of_node_put(nfc_node); 2462 + } 2458 2463 2459 2464 /* 2460 2465 * Even if the compatible says we are dealing with an
-1
drivers/mtd/nand/raw/nand_base.c
··· 590 590 591 591 /** 592 592 * panic_nand_wait - [GENERIC] wait until the command is done 593 - * @mtd: MTD device structure 594 593 * @chip: NAND chip structure 595 594 * @timeo: timeout 596 595 *
+16 -16
drivers/mtd/nand/raw/qcom_nandc.c
··· 150 150 #define NAND_VERSION_MINOR_SHIFT 16 151 151 152 152 /* NAND OP_CMDs */ 153 - #define PAGE_READ 0x2 154 - #define PAGE_READ_WITH_ECC 0x3 155 - #define PAGE_READ_WITH_ECC_SPARE 0x4 156 - #define PROGRAM_PAGE 0x6 157 - #define PAGE_PROGRAM_WITH_ECC 0x7 158 - #define PROGRAM_PAGE_SPARE 0x9 159 - #define BLOCK_ERASE 0xa 160 - #define FETCH_ID 0xb 161 - #define RESET_DEVICE 0xd 153 + #define OP_PAGE_READ 0x2 154 + #define OP_PAGE_READ_WITH_ECC 0x3 155 + #define OP_PAGE_READ_WITH_ECC_SPARE 0x4 156 + #define OP_PROGRAM_PAGE 0x6 157 + #define OP_PAGE_PROGRAM_WITH_ECC 0x7 158 + #define OP_PROGRAM_PAGE_SPARE 0x9 159 + #define OP_BLOCK_ERASE 0xa 160 + #define OP_FETCH_ID 0xb 161 + #define OP_RESET_DEVICE 0xd 162 162 163 163 /* Default Value for NAND_DEV_CMD_VLD */ 164 164 #define NAND_DEV_CMD_VLD_VAL (READ_START_VLD | WRITE_START_VLD | \ ··· 692 692 693 693 if (read) { 694 694 if (host->use_ecc) 695 - cmd = PAGE_READ_WITH_ECC | PAGE_ACC | LAST_PAGE; 695 + cmd = OP_PAGE_READ_WITH_ECC | PAGE_ACC | LAST_PAGE; 696 696 else 697 - cmd = PAGE_READ | PAGE_ACC | LAST_PAGE; 697 + cmd = OP_PAGE_READ | PAGE_ACC | LAST_PAGE; 698 698 } else { 699 - cmd = PROGRAM_PAGE | PAGE_ACC | LAST_PAGE; 699 + cmd = OP_PROGRAM_PAGE | PAGE_ACC | LAST_PAGE; 700 700 } 701 701 702 702 if (host->use_ecc) { ··· 1170 1170 * in use. 
we configure the controller to perform a raw read of 512 1171 1171 * bytes to read onfi params 1172 1172 */ 1173 - nandc_set_reg(nandc, NAND_FLASH_CMD, PAGE_READ | PAGE_ACC | LAST_PAGE); 1173 + nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ | PAGE_ACC | LAST_PAGE); 1174 1174 nandc_set_reg(nandc, NAND_ADDR0, 0); 1175 1175 nandc_set_reg(nandc, NAND_ADDR1, 0); 1176 1176 nandc_set_reg(nandc, NAND_DEV0_CFG0, 0 << CW_PER_PAGE ··· 1224 1224 struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1225 1225 1226 1226 nandc_set_reg(nandc, NAND_FLASH_CMD, 1227 - BLOCK_ERASE | PAGE_ACC | LAST_PAGE); 1227 + OP_BLOCK_ERASE | PAGE_ACC | LAST_PAGE); 1228 1228 nandc_set_reg(nandc, NAND_ADDR0, page_addr); 1229 1229 nandc_set_reg(nandc, NAND_ADDR1, 0); 1230 1230 nandc_set_reg(nandc, NAND_DEV0_CFG0, ··· 1255 1255 if (column == -1) 1256 1256 return 0; 1257 1257 1258 - nandc_set_reg(nandc, NAND_FLASH_CMD, FETCH_ID); 1258 + nandc_set_reg(nandc, NAND_FLASH_CMD, OP_FETCH_ID); 1259 1259 nandc_set_reg(nandc, NAND_ADDR0, column); 1260 1260 nandc_set_reg(nandc, NAND_ADDR1, 0); 1261 1261 nandc_set_reg(nandc, NAND_FLASH_CHIP_SELECT, ··· 1276 1276 struct nand_chip *chip = &host->chip; 1277 1277 struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1278 1278 1279 - nandc_set_reg(nandc, NAND_FLASH_CMD, RESET_DEVICE); 1279 + nandc_set_reg(nandc, NAND_FLASH_CMD, OP_RESET_DEVICE); 1280 1280 nandc_set_reg(nandc, NAND_EXEC_CMD, 1); 1281 1281 1282 1282 write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
+17 -4
drivers/mtd/spi-nor/cadence-quadspi.c
··· 644 644 ndelay(cqspi->wr_delay); 645 645 646 646 while (remaining > 0) { 647 + size_t write_words, mod_bytes; 648 + 647 649 write_bytes = remaining > page_size ? page_size : remaining; 648 - iowrite32_rep(cqspi->ahb_base, txbuf, 649 - DIV_ROUND_UP(write_bytes, 4)); 650 + write_words = write_bytes / 4; 651 + mod_bytes = write_bytes % 4; 652 + /* Write 4 bytes at a time then single bytes. */ 653 + if (write_words) { 654 + iowrite32_rep(cqspi->ahb_base, txbuf, write_words); 655 + txbuf += (write_words * 4); 656 + } 657 + if (mod_bytes) { 658 + unsigned int temp = 0xFFFFFFFF; 659 + 660 + memcpy(&temp, txbuf, mod_bytes); 661 + iowrite32(temp, cqspi->ahb_base); 662 + txbuf += mod_bytes; 663 + } 650 664 651 665 if (!wait_for_completion_timeout(&cqspi->transfer_complete, 652 666 msecs_to_jiffies(CQSPI_TIMEOUT_MS))) { ··· 669 655 goto failwr; 670 656 } 671 657 672 - txbuf += write_bytes; 673 658 remaining -= write_bytes; 674 659 675 660 if (remaining > 0) ··· 1009 996 err_unmap: 1010 997 dma_unmap_single(nor->dev, dma_dst, len, DMA_FROM_DEVICE); 1011 998 1012 - return 0; 999 + return ret; 1013 1000 } 1014 1001 1015 1002 static ssize_t cqspi_read(struct spi_nor *nor, loff_t from,
+130 -35
drivers/mtd/spi-nor/spi-nor.c
··· 2156 2156 * @nor: pointer to a 'struct spi_nor' 2157 2157 * @addr: offset in the serial flash memory 2158 2158 * @len: number of bytes to read 2159 - * @buf: buffer where the data is copied into 2159 + * @buf: buffer where the data is copied into (dma-safe memory) 2160 2160 * 2161 2161 * Return: 0 on success, -errno otherwise. 2162 2162 */ ··· 2522 2522 } 2523 2523 2524 2524 /** 2525 + * spi_nor_sort_erase_mask() - sort erase mask 2526 + * @map: the erase map of the SPI NOR 2527 + * @erase_mask: the erase type mask to be sorted 2528 + * 2529 + * Replicate the sort done for the map's erase types in BFPT: sort the erase 2530 + * mask in ascending order with the smallest erase type size starting from 2531 + * BIT(0) in the sorted erase mask. 2532 + * 2533 + * Return: sorted erase mask. 2534 + */ 2535 + static u8 spi_nor_sort_erase_mask(struct spi_nor_erase_map *map, u8 erase_mask) 2536 + { 2537 + struct spi_nor_erase_type *erase_type = map->erase_type; 2538 + int i; 2539 + u8 sorted_erase_mask = 0; 2540 + 2541 + if (!erase_mask) 2542 + return 0; 2543 + 2544 + /* Replicate the sort done for the map's erase types. */ 2545 + for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) 2546 + if (erase_type[i].size && erase_mask & BIT(erase_type[i].idx)) 2547 + sorted_erase_mask |= BIT(i); 2548 + 2549 + return sorted_erase_mask; 2550 + } 2551 + 2552 + /** 2525 2553 * spi_nor_regions_sort_erase_types() - sort erase types in each region 2526 2554 * @map: the erase map of the SPI NOR 2527 2555 * ··· 2564 2536 static void spi_nor_regions_sort_erase_types(struct spi_nor_erase_map *map) 2565 2537 { 2566 2538 struct spi_nor_erase_region *region = map->regions; 2567 - struct spi_nor_erase_type *erase_type = map->erase_type; 2568 - int i; 2569 2539 u8 region_erase_mask, sorted_erase_mask; 2570 2540 2571 2541 while (region) { 2572 2542 region_erase_mask = region->offset & SNOR_ERASE_TYPE_MASK; 2573 2543 2574 - /* Replicate the sort done for the map's erase types. */ 2575 - sorted_erase_mask = 0; 2576 - for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) 2577 - if (erase_type[i].size && 2578 - region_erase_mask & BIT(erase_type[i].idx)) 2579 - sorted_erase_mask |= BIT(i); 2544 + sorted_erase_mask = spi_nor_sort_erase_mask(map, 2545 + region_erase_mask); 2580 2546 2581 2547 /* Overwrite erase mask. */ 2582 2548 region->offset = (region->offset & ~SNOR_ERASE_TYPE_MASK) | ··· 2877 2855 * spi_nor_get_map_in_use() - get the configuration map in use 2878 2856 * @nor: pointer to a 'struct spi_nor' 2879 2857 * @smpt: pointer to the sector map parameter table 2858 + * @smpt_len: sector map parameter table length 2859 + * 2860 + * Return: pointer to the map in use, ERR_PTR(-errno) otherwise. 2880 2861 */ 2881 - static const u32 *spi_nor_get_map_in_use(struct spi_nor *nor, const u32 *smpt) 2862 + static const u32 *spi_nor_get_map_in_use(struct spi_nor *nor, const u32 *smpt, 2863 + u8 smpt_len) 2882 2864 { 2883 - const u32 *ret = NULL; 2884 - u32 i, addr; 2865 + const u32 *ret; 2866 + u8 *buf; 2867 + u32 addr; 2885 2868 int err; 2869 + u8 i; 2886 2870 u8 addr_width, read_opcode, read_dummy; 2887 - u8 read_data_mask, data_byte, map_id; 2871 + u8 read_data_mask, map_id; 2872 + 2873 + /* Use a kmalloc'ed bounce buffer to guarantee it is DMA-able. */ 2874 + buf = kmalloc(sizeof(*buf), GFP_KERNEL); 2875 + if (!buf) 2876 + return ERR_PTR(-ENOMEM); 2888 2877 2889 2878 addr_width = nor->addr_width; 2890 2879 read_dummy = nor->read_dummy; 2891 2880 read_opcode = nor->read_opcode; 2892 2881 2893 2882 map_id = 0; 2894 - i = 0; 2895 2883 /* Determine if there are any optional Detection Command Descriptors */ 2896 - while (!(smpt[i] & SMPT_DESC_TYPE_MAP)) { 2884 + for (i = 0; i < smpt_len; i += 2) { 2885 + if (smpt[i] & SMPT_DESC_TYPE_MAP) 2886 + break; 2887 + 2897 2888 read_data_mask = SMPT_CMD_READ_DATA(smpt[i]); 2898 2889 nor->addr_width = spi_nor_smpt_addr_width(nor, smpt[i]); 2899 2890 nor->read_dummy = spi_nor_smpt_read_dummy(nor, smpt[i]); 2900 2891 nor->read_opcode = SMPT_CMD_OPCODE(smpt[i]); 2901 2892 addr = smpt[i + 1]; 2902 2893 2903 - err = spi_nor_read_raw(nor, addr, 1, &data_byte); 2904 - if (err) 2894 + err = spi_nor_read_raw(nor, addr, 1, buf); 2895 + if (err) { 2896 + ret = ERR_PTR(err); 2905 2897 goto out; 2898 + } 2906 2899 2907 2900 /* 2908 2901 * Build an index value that is used to select the Sector Map 2909 2902 * Configuration that is currently in use. 2910 2903 */ 2911 - map_id = map_id << 1 | !!(data_byte & read_data_mask); 2912 - i = i + 2; 2904 + map_id = map_id << 1 | !!(*buf & read_data_mask); 2913 2905 } 2914 2906 2915 - /* Find the matching configuration map */ 2916 - while (SMPT_MAP_ID(smpt[i]) != map_id) { 2907 + /* 2908 + * If command descriptors are provided, they always precede map 2909 + * descriptors in the table. There is no need to start the iteration 2910 + * over smpt array all over again. 2911 + * 2912 + * Find the matching configuration map. 2913 + */ 2914 + ret = ERR_PTR(-EINVAL); 2915 + while (i < smpt_len) { 2916 + if (SMPT_MAP_ID(smpt[i]) == map_id) { 2917 + ret = smpt + i; 2918 + break; 2919 + } 2920 + 2921 + /* 2922 + * If there are no more configuration map descriptors and no 2923 + * configuration ID matched the configuration identifier, the 2924 + * sector address map is unknown. 2925 + */ 2917 2926 if (smpt[i] & SMPT_DESC_END) 2918 - goto out; 2927 + break; 2928 + 2919 2929 /* increment the table index to the next map */ 2920 2930 i += SMPT_MAP_REGION_COUNT(smpt[i]) + 1; 2921 2931 } 2922 2932 2923 - ret = smpt + i; 2924 2933 /* fall through */ 2925 2934 out: 2935 + kfree(buf); 2926 2936 nor->addr_width = addr_width; 2927 2937 nor->read_dummy = read_dummy; 2928 2938 nor->read_opcode = read_opcode; ··· 2995 2941 const u32 *smpt) 2996 2942 { 2997 2943 struct spi_nor_erase_map *map = &nor->erase_map; 2998 - const struct spi_nor_erase_type *erase = map->erase_type; 2944 + struct spi_nor_erase_type *erase = map->erase_type; 2999 2945 struct spi_nor_erase_region *region; 3000 2946 u64 offset; 3001 2947 u32 region_count; 3002 2948 int i, j; 3003 - u8 erase_type; 2949 + u8 uniform_erase_type, save_uniform_erase_type; 2950 + u8 erase_type, regions_erase_type; 3004 2951 3005 2952 region_count = SMPT_MAP_REGION_COUNT(*smpt); 3006 2953 /* ··· 3014 2959 return -ENOMEM; 3015 2960 map->regions = region; 3016 2961 3017 - map->uniform_erase_type = 0xff; 2962 + uniform_erase_type = 0xff; 2963 + regions_erase_type = 0; 3018 2964 offset = 0; 3019 2965 /* Populate regions. */ 3020 2966 for (i = 0; i < region_count; i++) { ··· 3030 2974 * Save the erase types that are supported in all regions and 3031 2975 * can erase the entire flash memory. 3032 2976 */ 3033 - map->uniform_erase_type &= erase_type; 2977 + uniform_erase_type &= erase_type; 2978 + 2979 + /* 2980 + * regions_erase_type mask will indicate all the erase types 2981 + * supported in this configuration map. 2982 + */ 2983 + regions_erase_type |= erase_type; 3034 2984 3035 2985 offset = (region[i].offset & ~SNOR_ERASE_FLAGS_MASK) + 3036 2986 region[i].size; 3037 2987 } 2988 + 2989 + save_uniform_erase_type = map->uniform_erase_type; 2990 + map->uniform_erase_type = spi_nor_sort_erase_mask(map, 2991 + uniform_erase_type); 2992 + 2993 + if (!regions_erase_type) { 2994 + /* 2995 + * Roll back to the previous uniform_erase_type mask, SMPT is 2996 + * broken. 2997 + */ 2998 + map->uniform_erase_type = save_uniform_erase_type; 2999 + return -EINVAL; 3000 + } 3001 + 3002 + /* 3003 + * BFPT advertises all the erase types supported by all the possible 3004 + * map configurations. Mask out the erase types that are not supported 3005 + * by the current map configuration. 3006 + */ 3007 + for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) 3008 + if (!(regions_erase_type & BIT(erase[i].idx))) 3009 + spi_nor_set_erase_type(&erase[i], 0, 0xFF); 3038 3010 3039 3011 spi_nor_region_mark_end(&region[i - 1]); 3040 3012 ··· 3104 3020 for (i = 0; i < smpt_header->length; i++) 3105 3021 smpt[i] = le32_to_cpu(smpt[i]); 3106 3022 3107 - sector_map = spi_nor_get_map_in_use(nor, smpt); 3108 - if (!sector_map) { 3109 - ret = -EINVAL; 3023 + sector_map = spi_nor_get_map_in_use(nor, smpt, smpt_header->length); 3024 + if (IS_ERR(sector_map)) { 3025 + ret = PTR_ERR(sector_map); 3110 3026 goto out; 3111 3027 } 3112 3028 ··· 3209 3125 if (err) 3210 3126 goto exit; 3211 3127 3212 - /* Parse other parameter headers. */ 3128 + /* Parse optional parameter tables. */ 3213 3129 for (i = 0; i < header.nph; i++) { 3214 3130 param_header = &param_headers[i]; 3215 3131 ··· 3222 3138 break; 3223 3139 } 3224 3140 3225 - if (err) 3226 - goto exit; 3141 + if (err) { 3142 + dev_warn(dev, "Failed to parse optional parameter table: %04x\n", 3143 + SFDP_PARAM_HEADER_ID(param_header)); 3144 + /* 3145 + * Let's not drop all information we extracted so far 3146 + * if optional table parsers fail. In case of failing, 3147 + * each optional parser is responsible to roll back to 3148 + * the previously known spi_nor data. 3149 + */ 3150 + err = 0; 3151 + } 3227 3152 } 3228 3153 3229 3154 exit: ··· 3343 3250 memcpy(&sfdp_params, params, sizeof(sfdp_params)); 3344 3251 memcpy(&prev_map, &nor->erase_map, sizeof(prev_map)); 3345 3252 3346 - if (spi_nor_parse_sfdp(nor, &sfdp_params)) 3253 + if (spi_nor_parse_sfdp(nor, &sfdp_params)) { 3254 + nor->addr_width = 0; 3347 3255 /* restore previous erase map */ 3348 3256 memcpy(&nor->erase_map, &prev_map, 3349 3257 sizeof(nor->erase_map)); 3350 - else 3258 + } else { 3351 3259 memcpy(params, &sfdp_params, sizeof(*params)); 3260 + } 3352 3261 } 3353 3262 3354 3263 return 0;
+2 -2
drivers/net/bonding/bond_main.c
··· 3112 3112 case NETDEV_CHANGE: 3113 3113 /* For 802.3ad mode only: 3114 3114 * Getting invalid Speed/Duplex values here will put slave 3115 - * in weird state. So mark it as link-down for the time 3115 + * in weird state. So mark it as link-fail for the time 3116 3116 * being and let link-monitoring (miimon) set it right when 3117 3117 * correct speeds/duplex are available. 3118 3118 */ 3119 3119 if (bond_update_speed_duplex(slave) && 3120 3120 BOND_MODE(bond) == BOND_MODE_8023AD) 3121 - slave->link = BOND_LINK_DOWN; 3121 + slave->link = BOND_LINK_FAIL; 3122 3122 3123 3123 if (BOND_MODE(bond) == BOND_MODE_8023AD) 3124 3124 bond_3ad_adapter_speed_duplex_changed(slave);
+35 -13
drivers/net/can/dev.c
··· 477 477 } 478 478 EXPORT_SYMBOL_GPL(can_put_echo_skb); 479 479 480 + struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr) 481 + { 482 + struct can_priv *priv = netdev_priv(dev); 483 + struct sk_buff *skb = priv->echo_skb[idx]; 484 + struct canfd_frame *cf; 485 + 486 + if (idx >= priv->echo_skb_max) { 487 + netdev_err(dev, "%s: BUG! Trying to access can_priv::echo_skb out of bounds (%u/max %u)\n", 488 + __func__, idx, priv->echo_skb_max); 489 + return NULL; 490 + } 491 + 492 + if (!skb) { 493 + netdev_err(dev, "%s: BUG! Trying to echo non existing skb: can_priv::echo_skb[%u]\n", 494 + __func__, idx); 495 + return NULL; 496 + } 497 + 498 + /* Using "struct canfd_frame::len" for the frame 499 + * length is supported on both CAN and CANFD frames. 500 + */ 501 + cf = (struct canfd_frame *)skb->data; 502 + *len_ptr = cf->len; 503 + priv->echo_skb[idx] = NULL; 504 + 505 + return skb; 506 + } 507 + 480 508 /* 481 509 * Get the skb from the stack and loop it back locally 482 510 * ··· 514 486 */ 515 487 unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx) 516 488 { 517 - struct can_priv *priv = netdev_priv(dev); 489 + struct sk_buff *skb; 490 + u8 len; 518 491 519 - BUG_ON(idx >= priv->echo_skb_max); 492 + skb = __can_get_echo_skb(dev, idx, &len); 493 + if (!skb) 494 + return 0; 520 495 521 - if (priv->echo_skb[idx]) { 522 - struct sk_buff *skb = priv->echo_skb[idx]; 523 - struct can_frame *cf = (struct can_frame *)skb->data; 524 - u8 dlc = cf->can_dlc; 496 + netif_rx(skb); 525 497 526 - netif_rx(priv->echo_skb[idx]); 527 - priv->echo_skb[idx] = NULL; 528 - 529 - return dlc; 530 - } 531 - 532 - return 0; 498 + return len; 533 499 } 534 500 EXPORT_SYMBOL_GPL(can_get_echo_skb); 535 501
+60 -48
drivers/net/can/flexcan.c
··· 135 135 136 136 /* FLEXCAN interrupt flag register (IFLAG) bits */ 137 137 /* Errata ERR005829 step7: Reserve first valid MB */ 138 - #define FLEXCAN_TX_MB_RESERVED_OFF_FIFO 8 139 - #define FLEXCAN_TX_MB_OFF_FIFO 9 138 + #define FLEXCAN_TX_MB_RESERVED_OFF_FIFO 8 140 139 #define FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP 0 141 - #define FLEXCAN_TX_MB_OFF_TIMESTAMP 1 142 - #define FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST (FLEXCAN_TX_MB_OFF_TIMESTAMP + 1) 143 - #define FLEXCAN_RX_MB_OFF_TIMESTAMP_LAST 63 144 - #define FLEXCAN_IFLAG_MB(x) BIT(x) 140 + #define FLEXCAN_TX_MB 63 141 + #define FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST (FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP + 1) 142 + #define FLEXCAN_RX_MB_OFF_TIMESTAMP_LAST (FLEXCAN_TX_MB - 1) 143 + #define FLEXCAN_IFLAG_MB(x) BIT(x & 0x1f) 145 144 #define FLEXCAN_IFLAG_RX_FIFO_OVERFLOW BIT(7) 146 145 #define FLEXCAN_IFLAG_RX_FIFO_WARN BIT(6) 147 146 #define FLEXCAN_IFLAG_RX_FIFO_AVAILABLE BIT(5) ··· 258 259 struct can_rx_offload offload; 259 260 260 261 struct flexcan_regs __iomem *regs; 261 - struct flexcan_mb __iomem *tx_mb; 262 262 struct flexcan_mb __iomem *tx_mb_reserved; 263 - u8 tx_mb_idx; 264 263 u32 reg_ctrl_default; 265 264 u32 reg_imask1_default; 266 265 u32 reg_imask2_default; ··· 512 515 static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *dev) 513 516 { 514 517 const struct flexcan_priv *priv = netdev_priv(dev); 518 + struct flexcan_regs __iomem *regs = priv->regs; 515 519 struct can_frame *cf = (struct can_frame *)skb->data; 516 520 u32 can_id; 517 521 u32 data; ··· 535 537 536 538 if (cf->can_dlc > 0) { 537 539 data = be32_to_cpup((__be32 *)&cf->data[0]); 538 - priv->write(data, &priv->tx_mb->data[0]); 540 + priv->write(data, &regs->mb[FLEXCAN_TX_MB].data[0]); 539 541 } 540 542 if (cf->can_dlc > 4) { 541 543 data = be32_to_cpup((__be32 *)&cf->data[4]); 542 - priv->write(data, &priv->tx_mb->data[1]); 544 + priv->write(data, &regs->mb[FLEXCAN_TX_MB].data[1]); 543 545 } 544 546 545 547 can_put_echo_skb(skb, dev, 0); 546 548 547 - priv->write(can_id, &priv->tx_mb->can_id); 548 - priv->write(ctrl, &priv->tx_mb->can_ctrl); 549 + priv->write(can_id, &regs->mb[FLEXCAN_TX_MB].can_id); 550 + priv->write(ctrl, &regs->mb[FLEXCAN_TX_MB].can_ctrl); 549 551 550 552 /* Errata ERR005829 step8: 551 553 * Write twice INACTIVE(0x8) code to first MB. ··· 561 563 static void flexcan_irq_bus_err(struct net_device *dev, u32 reg_esr) 562 564 { 563 565 struct flexcan_priv *priv = netdev_priv(dev); 566 + struct flexcan_regs __iomem *regs = priv->regs; 564 567 struct sk_buff *skb; 565 568 struct can_frame *cf; 566 569 bool rx_errors = false, tx_errors = false; 570 + u32 timestamp; 571 + 572 + timestamp = priv->read(&regs->timer) << 16; 567 573 568 574 skb = alloc_can_err_skb(dev, &cf); 569 575 if (unlikely(!skb)) ··· 614 612 if (tx_errors) 615 613 dev->stats.tx_errors++; 616 614 617 - can_rx_offload_irq_queue_err_skb(&priv->offload, skb); 615 + can_rx_offload_queue_sorted(&priv->offload, skb, timestamp); 618 616 } 619 617 620 618 static void flexcan_irq_state(struct net_device *dev, u32 reg_esr) 621 619 { 622 620 struct flexcan_priv *priv = netdev_priv(dev); 621 + struct flexcan_regs __iomem *regs = priv->regs; 623 622 struct sk_buff *skb; 624 623 struct can_frame *cf; 625 624 enum can_state new_state, rx_state, tx_state; 626 625 int flt; 627 626 struct can_berr_counter bec; 628 + u32 timestamp; 629 + 630 + timestamp = priv->read(&regs->timer) << 16; 628 630 629 631 flt = reg_esr & FLEXCAN_ESR_FLT_CONF_MASK; 630 632 if (likely(flt == FLEXCAN_ESR_FLT_CONF_ACTIVE)) { ··· 658 652 if (unlikely(new_state == CAN_STATE_BUS_OFF)) 659 653 can_bus_off(dev); 660 654 661 - can_rx_offload_irq_queue_err_skb(&priv->offload, skb); 655 + can_rx_offload_queue_sorted(&priv->offload, skb, timestamp); 662 656 } 663 657 664 658 static inline struct flexcan_priv *rx_offload_to_priv(struct can_rx_offload *offload) ··· 726 720 priv->write(BIT(n - 32), &regs->iflag2); 727 721 } else { 728 722 priv->write(FLEXCAN_IFLAG_RX_FIFO_AVAILABLE, &regs->iflag1); 729 - priv->read(&regs->timer); 730 723 } 724 + 725 + /* Read the Free Running Timer. It is optional but recommended 726 + * to unlock Mailbox as soon as possible and make it available 727 + * for reception. 728 + */ 729 + priv->read(&regs->timer); 731 730 732 731 return 1; 733 732 } ··· 743 732 struct flexcan_regs __iomem *regs = priv->regs; 744 733 u32 iflag1, iflag2; 745 734 746 - iflag2 = priv->read(&regs->iflag2) & priv->reg_imask2_default; 747 - iflag1 = priv->read(&regs->iflag1) & priv->reg_imask1_default & 748 - ~FLEXCAN_IFLAG_MB(priv->tx_mb_idx); 735 + iflag2 = priv->read(&regs->iflag2) & priv->reg_imask2_default & 736 + ~FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB); 737 + iflag1 = priv->read(&regs->iflag1) & priv->reg_imask1_default; 749 738 750 739 return (u64)iflag2 << 32 | iflag1; 751 740 } ··· 757 746 struct flexcan_priv *priv = netdev_priv(dev); 758 747 struct flexcan_regs __iomem *regs = priv->regs; 759 748 irqreturn_t handled = IRQ_NONE; 760 - u32 reg_iflag1, reg_esr; 749 + u32 reg_iflag2, reg_esr; 761 750 enum can_state last_state = priv->can.state; 762 - 763 - reg_iflag1 = priv->read(&regs->iflag1); 764 751 765 752 /* reception interrupt */ 766 753 if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) { ··· 773 764 break; 774 765 } 775 766 } else { 767 + u32 reg_iflag1; 768 + 769 + reg_iflag1 = priv->read(&regs->iflag1); 776 770 if (reg_iflag1 & FLEXCAN_IFLAG_RX_FIFO_AVAILABLE) { 777 771 handled = IRQ_HANDLED; 778 772 can_rx_offload_irq_offload_fifo(&priv->offload); ··· 791 779 } 792 780 } 793 781 782 + reg_iflag2 = priv->read(&regs->iflag2); 783 + 794 784 /* transmission complete interrupt */ 795 - if (reg_iflag1 & FLEXCAN_IFLAG_MB(priv->tx_mb_idx)) { 785 + if (reg_iflag2 & FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB)) { 786 + u32 reg_ctrl = priv->read(&regs->mb[FLEXCAN_TX_MB].can_ctrl); 787 + 796 788 handled = IRQ_HANDLED; 797 - stats->tx_bytes += can_get_echo_skb(dev, 0); 789 + stats->tx_bytes += can_rx_offload_get_echo_skb(&priv->offload, 790 + 0, reg_ctrl << 16); 798 791 stats->tx_packets++; 799 792 can_led_event(dev, CAN_LED_EVENT_TX); 800 793 801 794 /* after sending a RTR frame MB is in RX mode */ 802 795 priv->write(FLEXCAN_MB_CODE_TX_INACTIVE, 803 - &priv->tx_mb->can_ctrl); 804 - priv->write(FLEXCAN_IFLAG_MB(priv->tx_mb_idx), &regs->iflag1); 796 + &regs->mb[FLEXCAN_TX_MB].can_ctrl); 797 + priv->write(FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB), &regs->iflag2); 805 798 netif_wake_queue(dev); 806 799 } 807 800 ··· 948 931 reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff); 949 932 reg_mcr |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT | FLEXCAN_MCR_SUPV | 950 933 FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_SRX_DIS | FLEXCAN_MCR_IRMQ | 951 - FLEXCAN_MCR_IDAM_C; 934 + FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_MAXMB(FLEXCAN_TX_MB); 952 935 953 - if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) { 936 + if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) 954 937 reg_mcr &= ~FLEXCAN_MCR_FEN; 955 - reg_mcr |= FLEXCAN_MCR_MAXMB(priv->offload.mb_last); 956 - } else { 957 - reg_mcr |= FLEXCAN_MCR_FEN | 958 - FLEXCAN_MCR_MAXMB(priv->tx_mb_idx); 959 - } 938 + else 939 + reg_mcr |= FLEXCAN_MCR_FEN; 940 + 960 941 netdev_dbg(dev, "%s: writing mcr=0x%08x", __func__, reg_mcr); 961 942 priv->write(reg_mcr, &regs->mcr); 962 943 ··· 997 982 priv->write(reg_ctrl2, &regs->ctrl2); 998 983 } 999 984 1000 - /* clear and invalidate all mailboxes first */ 1001 - for (i = priv->tx_mb_idx; i < ARRAY_SIZE(regs->mb); i++) { 1002 - priv->write(FLEXCAN_MB_CODE_RX_INACTIVE, 1003 - &regs->mb[i].can_ctrl); 1004 - } 1005 - 1006 985 if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) { 1007 - for (i = priv->offload.mb_first; i <= priv->offload.mb_last; i++) 986 + for (i = priv->offload.mb_first; i <= priv->offload.mb_last; i++) { 1008 987 priv->write(FLEXCAN_MB_CODE_RX_EMPTY, 1009 988 &regs->mb[i].can_ctrl); 989 + } 990 + } else { 991 + /* clear and invalidate unused mailboxes first */ 992 + for (i = FLEXCAN_TX_MB_RESERVED_OFF_FIFO; i <= ARRAY_SIZE(regs->mb); i++) { 993 + priv->write(FLEXCAN_MB_CODE_RX_INACTIVE, 994 + &regs->mb[i].can_ctrl); 995 + } 1010 996 } 1011 997 1012 998 /* Errata ERR005829: mark first TX mailbox as INACTIVE */ ··· 1016 1000 1017 1001 /* mark TX mailbox as INACTIVE */ 1018 1002 priv->write(FLEXCAN_MB_CODE_TX_INACTIVE, 1019 - &priv->tx_mb->can_ctrl); 1003 + &regs->mb[FLEXCAN_TX_MB].can_ctrl); 1020 1004 1021 1005 /* acceptance mask/acceptance code (accept everything) */ 1022 1006 priv->write(0x0, &regs->rxgmask); ··· 1371 1355 priv->devtype_data = devtype_data; 1372 1356 priv->reg_xceiver = reg_xceiver; 1373 1357 1374 - if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) { 1375 - priv->tx_mb_idx = FLEXCAN_TX_MB_OFF_TIMESTAMP; 1358 + if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) 1376 1359 priv->tx_mb_reserved = &regs->mb[FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP]; 1377 - } else { 1378 - priv->tx_mb_idx = FLEXCAN_TX_MB_OFF_FIFO; 1360 + else 1379 1361 priv->tx_mb_reserved = &regs->mb[FLEXCAN_TX_MB_RESERVED_OFF_FIFO]; 1380 - } 1381 - priv->tx_mb = &regs->mb[priv->tx_mb_idx]; 1382 1362 1383 - priv->reg_imask1_default = FLEXCAN_IFLAG_MB(priv->tx_mb_idx); 1384 - priv->reg_imask2_default = 0; 1363 + priv->reg_imask1_default = 0; 1364 + priv->reg_imask2_default = FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB); 1385 1365 1386 1366 priv->offload.mailbox_read = flexcan_mailbox_read; 1387 1367
+4 -1
drivers/net/can/rcar/rcar_can.c
··· 24 24 25 25 #define RCAR_CAN_DRV_NAME "rcar_can" 26 26 27 + #define RCAR_SUPPORTED_CLOCKS (BIT(CLKR_CLKP1) | BIT(CLKR_CLKP2) | \ 28 + BIT(CLKR_CLKEXT)) 29 + 27 30 /* Mailbox configuration: 28 31 * mailbox 60 - 63 - Rx FIFO mailboxes 29 32 * mailbox 56 - 59 - Tx FIFO mailboxes ··· 792 789 goto fail_clk; 793 790 } 794 791 795 - if (clock_select >= ARRAY_SIZE(clock_names)) { 792 + if (!(BIT(clock_select) & RCAR_SUPPORTED_CLOCKS)) { 796 793 err = -EINVAL; 797 794 dev_err(&pdev->dev, "invalid CAN clock selected\n"); 798 795 goto fail_clk;
+49 -2
drivers/net/can/rx-offload.c
··· 211 211 } 212 212 EXPORT_SYMBOL_GPL(can_rx_offload_irq_offload_fifo); 213 213 214 - int can_rx_offload_irq_queue_err_skb(struct can_rx_offload *offload, struct sk_buff *skb) 214 + int can_rx_offload_queue_sorted(struct can_rx_offload *offload, 215 + struct sk_buff *skb, u32 timestamp) 216 + { 217 + struct can_rx_offload_cb *cb; 218 + unsigned long flags; 219 + 220 + if (skb_queue_len(&offload->skb_queue) > 221 + offload->skb_queue_len_max) 222 + return -ENOMEM; 223 + 224 + cb = can_rx_offload_get_cb(skb); 225 + cb->timestamp = timestamp; 226 + 227 + spin_lock_irqsave(&offload->skb_queue.lock, flags); 228 + __skb_queue_add_sort(&offload->skb_queue, skb, can_rx_offload_compare); 229 + spin_unlock_irqrestore(&offload->skb_queue.lock, flags); 230 + 231 + can_rx_offload_schedule(offload); 232 + 233 + return 0; 234 + } 235 + EXPORT_SYMBOL_GPL(can_rx_offload_queue_sorted); 236 + 237 + unsigned int can_rx_offload_get_echo_skb(struct can_rx_offload *offload, 238 + unsigned int idx, u32 timestamp) 239 + { 240 + struct net_device *dev = offload->dev; 241 + struct net_device_stats *stats = &dev->stats; 242 + struct sk_buff *skb; 243 + u8 len; 244 + int err; 245 + 246 + skb = __can_get_echo_skb(dev, idx, &len); 247 + if (!skb) 248 + return 0; 249 + 250 + err = can_rx_offload_queue_sorted(offload, skb, timestamp); 251 + if (err) { 252 + stats->rx_errors++; 253 + stats->tx_fifo_errors++; 254 + } 255 + 256 + return len; 257 + } 258 + EXPORT_SYMBOL_GPL(can_rx_offload_get_echo_skb); 259 + 260 + int can_rx_offload_queue_tail(struct can_rx_offload *offload, 261 + struct sk_buff *skb) 215 262 { 216 263 if (skb_queue_len(&offload->skb_queue) > 217 264 offload->skb_queue_len_max) ··· 269 222 270 223 return 0; 271 224 } 272 - EXPORT_SYMBOL_GPL(can_rx_offload_irq_queue_err_skb); 225 + EXPORT_SYMBOL_GPL(can_rx_offload_queue_tail); 273 226 274 227 static int can_rx_offload_init_queue(struct net_device *dev, struct can_rx_offload *offload, unsigned int weight) 275 228 {
+1 -1
drivers/net/can/spi/hi311x.c
··· 760 760 { 761 761 struct hi3110_priv *priv = netdev_priv(net); 762 762 struct spi_device *spi = priv->spi; 763 - unsigned long flags = IRQF_ONESHOT | IRQF_TRIGGER_RISING; 763 + unsigned long flags = IRQF_ONESHOT | IRQF_TRIGGER_HIGH; 764 764 int ret; 765 765 766 766 ret = open_candev(net);
+2 -2
drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
··· 528 528 context = &priv->tx_contexts[i]; 529 529 530 530 context->echo_index = i; 531 - can_put_echo_skb(skb, netdev, context->echo_index); 532 531 ++priv->active_tx_contexts; 533 532 if (priv->active_tx_contexts >= (int)dev->max_tx_urbs) 534 533 netif_stop_queue(netdev); ··· 552 553 dev_kfree_skb(skb); 553 554 spin_lock_irqsave(&priv->tx_contexts_lock, flags); 554 555 555 - can_free_echo_skb(netdev, context->echo_index); 556 556 context->echo_index = dev->max_tx_urbs; 557 557 --priv->active_tx_contexts; 558 558 netif_wake_queue(netdev); ··· 561 563 } 562 564 563 565 context->priv = priv; 566 + 567 + can_put_echo_skb(skb, netdev, context->echo_index); 564 568 565 569 usb_fill_bulk_urb(urb, dev->udev, 566 570 usb_sndbulkpipe(dev->udev,
+5 -5
drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
··· 1019 1019 new_state : CAN_STATE_ERROR_ACTIVE; 1020 1020 1021 1021 can_change_state(netdev, cf, tx_state, rx_state); 1022 + 1023 + if (priv->can.restart_ms && 1024 + old_state >= CAN_STATE_BUS_OFF && 1025 + new_state < CAN_STATE_BUS_OFF) 1026 + cf->can_id |= CAN_ERR_RESTARTED; 1022 1027 } 1023 1028 1024 1029 if (new_state == CAN_STATE_BUS_OFF) { ··· 1033 1028 1034 1029 can_bus_off(netdev); 1035 1030 } 1036 - 1037 - if (priv->can.restart_ms && 1038 - old_state >= CAN_STATE_BUS_OFF && 1039 - new_state < CAN_STATE_BUS_OFF) 1040 - cf->can_id |= CAN_ERR_RESTARTED; 1041 1031 } 1042 1032 1043 1033 if (!skb) {
-7
drivers/net/can/usb/ucan.c
··· 35 35 #include <linux/slab.h> 36 36 #include <linux/usb.h> 37 37 38 - #include <linux/can.h> 39 - #include <linux/can/dev.h> 40 - #include <linux/can/error.h> 41 - 42 38 #define UCAN_DRIVER_NAME "ucan" 43 39 #define UCAN_MAX_RX_URBS 8 44 40 /* the CAN controller needs a while to enable/disable the bus */ ··· 1571 1575 /* disconnect the device */ 1572 1576 static void ucan_disconnect(struct usb_interface *intf) 1573 1577 { 1574 - struct usb_device *udev; 1575 1578 struct ucan_priv *up = usb_get_intfdata(intf); 1576 - 1577 - udev = interface_to_usbdev(intf); 1578 1579 1579 1580 usb_set_intfdata(intf, NULL); 1580 1581
+5 -5
drivers/net/dsa/microchip/ksz_common.c
··· 1117 1117 { 1118 1118 int i; 1119 1119 1120 - mutex_init(&dev->reg_mutex); 1121 - mutex_init(&dev->stats_mutex); 1122 - mutex_init(&dev->alu_mutex); 1123 - mutex_init(&dev->vlan_mutex); 1124 - 1125 1120 dev->ds->ops = &ksz_switch_ops; 1126 1121 1127 1122 for (i = 0; i < ARRAY_SIZE(ksz_switch_chips); i++) { ··· 1200 1205 1201 1206 if (dev->pdata) 1202 1207 dev->chip_id = dev->pdata->chip_id; 1208 + 1209 + mutex_init(&dev->reg_mutex); 1210 + mutex_init(&dev->stats_mutex); 1211 + mutex_init(&dev->alu_mutex); 1212 + mutex_init(&dev->vlan_mutex); 1203 1213 1204 1214 if (ksz_switch_detect(dev)) 1205 1215 return -EINVAL;
+2
drivers/net/dsa/mv88e6xxx/global1.c
··· 567 567 if (err) 568 568 return err; 569 569 570 + /* Keep the histogram mode bits */ 571 + val &= MV88E6XXX_G1_STATS_OP_HIST_RX_TX; 570 572 val |= MV88E6XXX_G1_STATS_OP_BUSY | MV88E6XXX_G1_STATS_OP_FLUSH_ALL; 571 573 572 574 err = mv88e6xxx_g1_write(chip, MV88E6XXX_G1_STATS_OP, val);
+11 -12
drivers/net/ethernet/amazon/ena/ena_netdev.c
··· 1848 1848 rc = ena_com_dev_reset(adapter->ena_dev, adapter->reset_reason); 1849 1849 if (rc) 1850 1850 dev_err(&adapter->pdev->dev, "Device reset failed\n"); 1851 + /* stop submitting admin commands on a device that was reset */ 1852 + ena_com_set_admin_running_state(adapter->ena_dev, false); 1851 1853 } 1852 1854 1853 1855 ena_destroy_all_io_queues(adapter); ··· 1915 1913 struct ena_adapter *adapter = netdev_priv(netdev); 1916 1914 1917 1915 netif_dbg(adapter, ifdown, netdev, "%s\n", __func__); 1916 + 1917 + if (!test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags)) 1918 + return 0; 1918 1919 1919 1920 if (test_bit(ENA_FLAG_DEV_UP, &adapter->flags)) 1920 1921 ena_down(adapter); ··· 2618 2613 ena_down(adapter); 2619 2614 2620 2615 /* Stop the device from sending AENQ events (in case reset flag is set 2621 - * and device is up, ena_close already reset the device 2622 - * In case the reset flag is set and the device is up, ena_down() 2623 - * already perform the reset, so it can be skipped. 2616 + * and device is up, ena_down() already reset the device. 2624 2617 */ 2625 2618 if (!(test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags) && dev_up)) 2626 2619 ena_com_dev_reset(adapter->ena_dev, adapter->reset_reason); ··· 2697 2694 ena_com_abort_admin_commands(ena_dev); 2698 2695 ena_com_wait_for_abort_completion(ena_dev); 2699 2696 ena_com_admin_destroy(ena_dev); 2700 - ena_com_mmio_reg_read_request_destroy(ena_dev); 2701 2697 ena_com_dev_reset(ena_dev, ENA_REGS_RESET_DRIVER_INVALID_STATE); 2698 + ena_com_mmio_reg_read_request_destroy(ena_dev); 2702 2699 err: 2703 2700 clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags); 2704 2701 clear_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags); ··· 3455 3452 ena_com_rss_destroy(ena_dev); 3456 3453 err_free_msix: 3457 3454 ena_com_dev_reset(ena_dev, ENA_REGS_RESET_INIT_ERR); 3455 + /* stop submitting admin commands on a device that was reset */ 3456 + ena_com_set_admin_running_state(ena_dev, false); 3458 3457 ena_free_mgmnt_irq(adapter); 3459 3458 ena_disable_msix(adapter); 3460 3459 err_worker_destroy: ··· 3503 3498 3504 3499 cancel_work_sync(&adapter->reset_task); 3505 3500 3506 - unregister_netdev(netdev); 3507 - 3508 - /* If the device is running then we want to make sure the device will be 3509 - * reset to make sure no more events will be issued by the device. 3510 - */ 3511 - if (test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags)) 3512 - set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags); 3513 - 3514 3501 rtnl_lock(); 3515 3502 ena_destroy_device(adapter, true); 3516 3503 rtnl_unlock(); 3504 + 3505 + unregister_netdev(netdev); 3517 3506 3518 3507 free_netdev(netdev);
+1 -1
drivers/net/ethernet/amazon/ena/ena_netdev.h
··· 45 45 46 46 #define DRV_MODULE_VER_MAJOR 2 47 47 #define DRV_MODULE_VER_MINOR 0 48 - #define DRV_MODULE_VER_SUBMINOR 1 48 + #define DRV_MODULE_VER_SUBMINOR 2 49 49 50 50 #define DRV_MODULE_NAME "ena" 51 51 #ifndef DRV_MODULE_VERSION
+3 -1
drivers/net/ethernet/amd/sunlance.c
··· 1419 1419 1420 1420 prop = of_get_property(nd, "tpe-link-test?", NULL); 1421 1421 if (!prop) 1422 - goto no_link_test; 1422 + goto node_put; 1423 1423 1424 1424 if (strcmp(prop, "true")) { 1425 1425 printk(KERN_NOTICE "SunLance: warning: overriding option " ··· 1428 1428 "to ecd@skynet.be\n"); 1429 1429 auxio_set_lte(AUXIO_LTE_ON); 1430 1430 } 1431 + node_put: 1432 + of_node_put(nd); 1431 1433 no_link_test: 1432 1434 lp->auto_select = 1; 1433 1435 lp->tpe = 0;
+4 -4
drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
··· 407 407 struct ethtool_pauseparam *pause) 408 408 { 409 409 struct aq_nic_s *aq_nic = netdev_priv(ndev); 410 + u32 fc = aq_nic->aq_nic_cfg.flow_control; 410 411 411 412 pause->autoneg = 0; 412 413 413 - if (aq_nic->aq_hw->aq_nic_cfg->flow_control & AQ_NIC_FC_RX) 414 - pause->rx_pause = 1; 415 - if (aq_nic->aq_hw->aq_nic_cfg->flow_control & AQ_NIC_FC_TX) 416 - pause->tx_pause = 1; 414 + pause->rx_pause = !!(fc & AQ_NIC_FC_RX); 415 + pause->tx_pause = !!(fc & AQ_NIC_FC_TX); 416 + 417 417 } 418 418 419 419 static int aq_ethtool_set_pauseparam(struct net_device *ndev,
+6
drivers/net/ethernet/aquantia/atlantic/aq_hw.h
··· 204 204 205 205 int (*hw_get_fw_version)(struct aq_hw_s *self, u32 *fw_version); 206 206 207 + int (*hw_set_offload)(struct aq_hw_s *self, 208 + struct aq_nic_cfg_s *aq_nic_cfg); 209 + 210 + int (*hw_set_fc)(struct aq_hw_s *self, u32 fc, u32 tc); 207 211 }; 208 212 209 213 struct aq_fw_ops { ··· 229 225 int (*update_link_status)(struct aq_hw_s *self); 230 226 231 227 int (*update_stats)(struct aq_hw_s *self); 228 + 229 + u32 (*get_flow_control)(struct aq_hw_s *self, u32 *fcmode); 232 230 233 231 int (*set_flow_control)(struct aq_hw_s *self); 234 232
+8 -2
drivers/net/ethernet/aquantia/atlantic/aq_main.c
··· 99 99 struct aq_nic_s *aq_nic = netdev_priv(ndev); 100 100 struct aq_nic_cfg_s *aq_cfg = aq_nic_get_cfg(aq_nic); 101 101 bool is_lro = false; 102 + int err = 0; 102 103 103 - if (aq_cfg->hw_features & NETIF_F_LRO) { 104 + aq_cfg->features = features; 105 + 106 + if (aq_cfg->aq_hw_caps->hw_features & NETIF_F_LRO) { 104 107 is_lro = features & NETIF_F_LRO; 105 108 106 109 if (aq_cfg->is_lro != is_lro) { ··· 115 112 } 116 113 } 117 114 } 115 + if ((aq_nic->ndev->features ^ features) & NETIF_F_RXCSUM) 116 + err = aq_nic->aq_hw_ops->hw_set_offload(aq_nic->aq_hw, 117 + aq_cfg); 118 118 119 - return 0; 119 + return err; 120 120 } 121 121 122 122 static int aq_ndev_set_mac_address(struct net_device *ndev, void *addr)
+15 -3
drivers/net/ethernet/aquantia/atlantic/aq_nic.c
··· 118 118 } 119 119 120 120 cfg->link_speed_msk &= cfg->aq_hw_caps->link_speed_msk; 121 - cfg->hw_features = cfg->aq_hw_caps->hw_features; 121 + cfg->features = cfg->aq_hw_caps->hw_features; 122 122 } 123 123 124 124 static int aq_nic_update_link_status(struct aq_nic_s *self) 125 125 { 126 126 int err = self->aq_fw_ops->update_link_status(self->aq_hw); 127 + u32 fc = 0; 127 128 128 129 if (err) 129 130 return err; ··· 134 133 AQ_CFG_DRV_NAME, self->link_status.mbps, 135 134 self->aq_hw->aq_link_status.mbps); 136 135 aq_nic_update_interrupt_moderation_settings(self); 136 + 137 + /* Driver has to update flow control settings on RX block 138 + * on any link event. 139 + * We should query FW whether it negotiated FC. 140 + */ 141 + if (self->aq_fw_ops->get_flow_control) 142 + self->aq_fw_ops->get_flow_control(self->aq_hw, &fc); 143 + if (self->aq_hw_ops->hw_set_fc) 144 + self->aq_hw_ops->hw_set_fc(self->aq_hw, fc, 0); 137 145 } 138 146 139 147 self->link_status = self->aq_hw->aq_link_status; ··· 600 590 } 601 591 } 602 592 603 - if (i > 0 && i < AQ_HW_MULTICAST_ADDRESS_MAX) { 593 + if (i > 0 && i <= AQ_HW_MULTICAST_ADDRESS_MAX) { 604 594 packet_filter |= IFF_MULTICAST; 605 595 self->mc_list.count = i; 606 596 self->aq_hw_ops->hw_multicast_list_set(self->aq_hw, ··· 782 772 ethtool_link_ksettings_add_link_mode(cmd, advertising, 783 773 Pause); 784 774 785 - if (self->aq_nic_cfg.flow_control & AQ_NIC_FC_TX) 775 + /* Asym is when either RX or TX, but not both */ 776 + if (!!(self->aq_nic_cfg.flow_control & AQ_NIC_FC_TX) ^ 777 + !!(self->aq_nic_cfg.flow_control & AQ_NIC_FC_RX)) 786 778 ethtool_link_ksettings_add_link_mode(cmd, advertising, 787 779 Asym_Pause); 788 780
+1 -1
drivers/net/ethernet/aquantia/atlantic/aq_nic.h
··· 23 23 24 24 struct aq_nic_cfg_s { 25 25 const struct aq_hw_caps_s *aq_hw_caps; 26 - u64 hw_features; 26 + u64 features; 27 27 u32 rxds; /* rx ring size, descriptors # */ 28 28 u32 txds; /* tx ring size, descriptors # */ 29 29 u32 vecs; /* vecs==allocated irqs */
+23 -12
drivers/net/ethernet/aquantia/atlantic/aq_ring.c
··· 172 172 return !!budget; 173 173 } 174 174 175 + static void aq_rx_checksum(struct aq_ring_s *self, 176 + struct aq_ring_buff_s *buff, 177 + struct sk_buff *skb) 178 + { 179 + if (!(self->aq_nic->ndev->features & NETIF_F_RXCSUM)) 180 + return; 181 + 182 + if (unlikely(buff->is_cso_err)) { 183 + ++self->stats.rx.errors; 184 + skb->ip_summed = CHECKSUM_NONE; 185 + return; 186 + } 187 + if (buff->is_ip_cso) { 188 + __skb_incr_checksum_unnecessary(skb); 189 + if (buff->is_udp_cso || buff->is_tcp_cso) 190 + __skb_incr_checksum_unnecessary(skb); 191 + } else { 192 + skb->ip_summed = CHECKSUM_NONE; 193 + } 194 + } 195 + 175 196 #define AQ_SKB_ALIGN SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) 176 197 int aq_ring_rx_clean(struct aq_ring_s *self, 177 198 struct napi_struct *napi, ··· 288 267 } 289 268 290 269 skb->protocol = eth_type_trans(skb, ndev); 291 - if (unlikely(buff->is_cso_err)) { 292 - ++self->stats.rx.errors; 293 - skb->ip_summed = CHECKSUM_NONE; 294 - } else { 295 - if (buff->is_ip_cso) { 296 - __skb_incr_checksum_unnecessary(skb); 297 - if (buff->is_udp_cso || buff->is_tcp_cso) 298 - __skb_incr_checksum_unnecessary(skb); 299 - } else { 300 - skb->ip_summed = CHECKSUM_NONE; 301 - } 302 - } 270 + 271 + aq_rx_checksum(self, buff, skb); 303 272 304 273 skb_set_hash(skb, buff->rss_hash, 305 274 buff->is_hash_l4 ? PKT_HASH_TYPE_L4 :
+38 -23
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
··· 100 100 return err; 101 101 } 102 102 103 + static int hw_atl_b0_set_fc(struct aq_hw_s *self, u32 fc, u32 tc) 104 + { 105 + hw_atl_rpb_rx_xoff_en_per_tc_set(self, !!(fc & AQ_NIC_FC_RX), tc); 106 + return 0; 107 + } 108 + 103 109 static int hw_atl_b0_hw_qos_set(struct aq_hw_s *self) 104 110 { 105 111 u32 tc = 0U; 106 112 u32 buff_size = 0U; 107 113 unsigned int i_priority = 0U; 108 - bool is_rx_flow_control = false; 109 114 110 115 /* TPS Descriptor rate init */ 111 116 hw_atl_tps_tx_pkt_shed_desc_rate_curr_time_res_set(self, 0x0U); ··· 143 138 144 139 /* QoS Rx buf size per TC */ 145 140 tc = 0; 146 - is_rx_flow_control = (AQ_NIC_FC_RX & self->aq_nic_cfg->flow_control); 147 141 buff_size = HW_ATL_B0_RXBUF_MAX; 148 142 149 143 hw_atl_rpb_rx_pkt_buff_size_per_tc_set(self, buff_size, tc); ··· 154 150 (buff_size * 155 151 (1024U / 32U) * 50U) / 156 152 100U, tc); 157 - hw_atl_rpb_rx_xoff_en_per_tc_set(self, is_rx_flow_control ? 1U : 0U, tc); 153 + 154 + hw_atl_b0_set_fc(self, self->aq_nic_cfg->flow_control, tc); 158 155 159 156 /* QoS 802.1p priority -> TC mapping */ 160 157 for (i_priority = 8U; i_priority--;) ··· 234 229 hw_atl_tpo_tcp_udp_crc_offload_en_set(self, 1); 235 230 236 231 /* RX checksums offloads*/ 237 - hw_atl_rpo_ipv4header_crc_offload_en_set(self, 1); 238 - hw_atl_rpo_tcp_udp_crc_offload_en_set(self, 1); 232 + hw_atl_rpo_ipv4header_crc_offload_en_set(self, !!(aq_nic_cfg->features & 233 + NETIF_F_RXCSUM)); 234 + hw_atl_rpo_tcp_udp_crc_offload_en_set(self, !!(aq_nic_cfg->features & 235 + NETIF_F_RXCSUM)); 239 236 240 237 /* LSO offloads*/ 241 238 hw_atl_tdm_large_send_offload_en_set(self, 0xFFFFFFFFU); ··· 662 655 struct hw_atl_rxd_wb_s *rxd_wb = (struct hw_atl_rxd_wb_s *) 663 656 &ring->dx_ring[ring->hw_head * HW_ATL_B0_RXD_SIZE]; 664 657 665 - unsigned int is_err = 1U; 666 658 unsigned int is_rx_check_sum_enabled = 0U; 667 659 unsigned int pkt_type = 0U; 660 + u8 rx_stat = 0U; 668 661 669 662 if (!(rxd_wb->status & 0x1U)) { /* RxD is not done */ 
670 663 break; ··· 672 665 673 666 buff = &ring->buff_ring[ring->hw_head]; 674 667 675 - is_err = (0x0000003CU & rxd_wb->status); 668 + rx_stat = (0x0000003CU & rxd_wb->status) >> 2; 676 669 677 670 is_rx_check_sum_enabled = (rxd_wb->type) & (0x3U << 19); 678 - is_err &= ~0x20U; /* exclude validity bit */ 679 671 680 672 pkt_type = 0xFFU & (rxd_wb->type >> 4); 681 673 682 - if (is_rx_check_sum_enabled) { 683 - if (0x0U == (pkt_type & 0x3U)) 684 - buff->is_ip_cso = (is_err & 0x08U) ? 0U : 1U; 674 + if (is_rx_check_sum_enabled & BIT(0) && 675 + (0x0U == (pkt_type & 0x3U))) 676 + buff->is_ip_cso = (rx_stat & BIT(1)) ? 0U : 1U; 685 677 678 + if (is_rx_check_sum_enabled & BIT(1)) { 686 679 if (0x4U == (pkt_type & 0x1CU)) 687 - buff->is_udp_cso = buff->is_cso_err ? 0U : 1U; 680 + buff->is_udp_cso = (rx_stat & BIT(2)) ? 0U : 681 + !!(rx_stat & BIT(3)); 688 682 else if (0x0U == (pkt_type & 0x1CU)) 689 - buff->is_tcp_cso = buff->is_cso_err ? 0U : 1U; 690 - 691 - /* Checksum offload workaround for small packets */ 692 - if (rxd_wb->pkt_len <= 60) { 693 - buff->is_ip_cso = 0U; 694 - buff->is_cso_err = 0U; 695 - } 683 + buff->is_tcp_cso = (rx_stat & BIT(2)) ? 0U : 684 + !!(rx_stat & BIT(3)); 696 685 } 697 - 698 - is_err &= ~0x18U; 686 + buff->is_cso_err = !!(rx_stat & 0x6); 687 + /* Checksum offload workaround for small packets */ 688 + if (unlikely(rxd_wb->pkt_len <= 60)) { 689 + buff->is_ip_cso = 0U; 690 + buff->is_cso_err = 0U; 691 + } 699 692 700 693 dma_unmap_page(ndev, buff->pa, buff->len, DMA_FROM_DEVICE); 701 694 702 - if (is_err || rxd_wb->type & 0x1000U) { 703 - /* status error or DMA error */ 695 + if ((rx_stat & BIT(0)) || rxd_wb->type & 0x1000U) { 696 + /* MAC error or DMA error */ 704 697 buff->is_error = 1U; 705 698 } else { 706 699 if (self->aq_nic_cfg->is_rss) { ··· 922 915 static int hw_atl_b0_hw_stop(struct aq_hw_s *self) 923 916 { 924 917 hw_atl_b0_hw_irq_disable(self, HW_ATL_B0_INT_MASK); 918 + 919 + /* Invalidate Descriptor Cache to prevent writing to the cached 920 + * descriptors and to the data pointer of those descriptors 921 + */ 922 + hw_atl_rdm_rx_dma_desc_cache_init_set(self, 1); 923 + 925 924 return aq_hw_err_from_flags(self); 926 925 } 927 926 ··· 976 963 .hw_get_regs = hw_atl_utils_hw_get_regs, 977 964 .hw_get_hw_stats = hw_atl_utils_get_hw_stats, 978 965 .hw_get_fw_version = hw_atl_utils_get_fw_version, 966 + .hw_set_offload = hw_atl_b0_hw_offload_set, 967 + .hw_set_fc = hw_atl_b0_set_fc, 979 968 };
+8
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c
··· 619 619 HW_ATL_RPB_RX_FC_MODE_SHIFT, rx_flow_ctl_mode); 620 620 } 621 621 622 + void hw_atl_rdm_rx_dma_desc_cache_init_set(struct aq_hw_s *aq_hw, u32 init) 623 + { 624 + aq_hw_write_reg_bit(aq_hw, HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_ADR, 625 + HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_MSK, 626 + HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_SHIFT, 627 + init); 628 + } 629 + 622 630 void hw_atl_rpb_rx_pkt_buff_size_per_tc_set(struct aq_hw_s *aq_hw, 623 631 u32 rx_pkt_buff_size_per_tc, u32 buffer) 624 632 {
+3
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h
··· 325 325 u32 rx_pkt_buff_size_per_tc, 326 326 u32 buffer); 327 327 328 + /* set rdm rx dma descriptor cache init */ 329 + void hw_atl_rdm_rx_dma_desc_cache_init_set(struct aq_hw_s *aq_hw, u32 init); 330 + 328 331 /* set rx xoff enable (per tc) */ 329 332 void hw_atl_rpb_rx_xoff_en_per_tc_set(struct aq_hw_s *aq_hw, u32 rx_xoff_en_per_tc, 330 333 u32 buffer);
+18
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h
··· 293 293 /* default value of bitfield desc{d}_reset */ 294 294 #define HW_ATL_RDM_DESCDRESET_DEFAULT 0x0 295 295 296 + /* rdm_desc_init_i bitfield definitions 297 + * preprocessor definitions for the bitfield rdm_desc_init_i. 298 + * port="pif_rdm_desc_init_i" 299 + */ 300 + 301 + /* register address for bitfield rdm_desc_init_i */ 302 + #define HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_ADR 0x00005a00 303 + /* bitmask for bitfield rdm_desc_init_i */ 304 + #define HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_MSK 0xffffffff 305 + /* inverted bitmask for bitfield rdm_desc_init_i */ 306 + #define HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_MSKN 0x00000000 307 + /* lower bit position of bitfield rdm_desc_init_i */ 308 + #define HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_SHIFT 0 309 + /* width of bitfield rdm_desc_init_i */ 310 + #define HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_WIDTH 32 311 + /* default value of bitfield rdm_desc_init_i */ 312 + #define HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_DEFAULT 0x0 313 + 296 314 /* rx int_desc_wrb_en bitfield definitions 297 315 * preprocessor definitions for the bitfield "int_desc_wrb_en". 298 316 * port="pif_rdm_int_desc_wrb_en_i"
+21
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c
··· 30 30 #define HW_ATL_FW2X_MPI_STATE_ADDR 0x370 31 31 #define HW_ATL_FW2X_MPI_STATE2_ADDR 0x374 32 32 33 + #define HW_ATL_FW2X_CAP_PAUSE BIT(CAPS_HI_PAUSE) 34 + #define HW_ATL_FW2X_CAP_ASYM_PAUSE BIT(CAPS_HI_ASYMMETRIC_PAUSE) 33 35 #define HW_ATL_FW2X_CAP_SLEEP_PROXY BIT(CAPS_HI_SLEEP_PROXY) 34 36 #define HW_ATL_FW2X_CAP_WOL BIT(CAPS_HI_WOL) 35 37 ··· 453 451 return 0; 454 452 } 455 453 454 + static u32 aq_fw2x_get_flow_control(struct aq_hw_s *self, u32 *fcmode) 455 + { 456 + u32 mpi_state = aq_hw_read_reg(self, HW_ATL_FW2X_MPI_STATE2_ADDR); 457 + 458 + if (mpi_state & HW_ATL_FW2X_CAP_PAUSE) 459 + if (mpi_state & HW_ATL_FW2X_CAP_ASYM_PAUSE) 460 + *fcmode = AQ_NIC_FC_RX; 461 + else 462 + *fcmode = AQ_NIC_FC_RX | AQ_NIC_FC_TX; 463 + else 464 + if (mpi_state & HW_ATL_FW2X_CAP_ASYM_PAUSE) 465 + *fcmode = AQ_NIC_FC_TX; 466 + else 467 + *fcmode = 0; 468 + 469 + return 0; 470 + } 471 + 456 472 const struct aq_fw_ops aq_fw_2x_ops = { 457 473 .init = aq_fw2x_init, 458 474 .deinit = aq_fw2x_deinit, ··· 485 465 .set_eee_rate = aq_fw2x_set_eee_rate, 486 466 .get_eee_rate = aq_fw2x_get_eee_rate, 487 467 .set_flow_control = aq_fw2x_set_flow_control, 468 + .get_flow_control = aq_fw2x_get_flow_control 488 469 };
-1
drivers/net/ethernet/atheros/alx/alx.h
··· 140 140 }; 141 141 142 142 extern const struct ethtool_ops alx_ethtool_ops; 143 - extern const char alx_drv_name[]; 144 143 145 144 #endif
+1 -1
drivers/net/ethernet/atheros/alx/main.c
··· 49 49 #include "hw.h" 50 50 #include "reg.h" 51 51 52 - const char alx_drv_name[] = "alx"; 52 + static const char alx_drv_name[] = "alx"; 53 53 54 54 static void alx_free_txbuf(struct alx_tx_queue *txq, int entry) 55 55 {
+7 -8
drivers/net/ethernet/broadcom/bcmsysport.c
··· 1902 1902 intrl2_1_mask_clear(priv, 0xffffffff); 1903 1903 else 1904 1904 intrl2_0_mask_clear(priv, INTRL2_0_TDMA_MBDONE_MASK); 1905 - 1906 - /* Last call before we start the real business */ 1907 - netif_tx_start_all_queues(dev); 1908 1905 } 1909 1906 1910 1907 static void rbuf_init(struct bcm_sysport_priv *priv) ··· 2045 2048 2046 2049 bcm_sysport_netif_start(dev); 2047 2050 2051 + netif_tx_start_all_queues(dev); 2052 + 2048 2053 return 0; 2049 2054 2050 2055 out_clear_rx_int: ··· 2070 2071 struct bcm_sysport_priv *priv = netdev_priv(dev); 2071 2072 2072 2073 /* stop all software from updating hardware */ 2073 - netif_tx_stop_all_queues(dev); 2074 + netif_tx_disable(dev); 2074 2075 napi_disable(&priv->napi); 2075 2076 cancel_work_sync(&priv->dim.dim.work); 2076 2077 phy_stop(dev->phydev); ··· 2657 2658 if (!netif_running(dev)) 2658 2659 return 0; 2659 2660 2661 + netif_device_detach(dev); 2662 + 2660 2663 bcm_sysport_netif_stop(dev); 2661 2664 2662 2665 phy_suspend(dev->phydev); 2663 - 2664 - netif_device_detach(dev); 2665 2666 2666 2667 /* Disable UniMAC RX */ 2667 2668 umac_enable_set(priv, CMD_RX_EN, 0); ··· 2745 2746 goto out_free_rx_ring; 2746 2747 } 2747 2748 2748 - netif_device_attach(dev); 2749 - 2750 2749 /* RX pipe enable */ 2751 2750 topctrl_writel(priv, 0, RX_FLUSH_CNTL); 2752 2751 ··· 2784 2787 phy_resume(dev->phydev); 2785 2788 2786 2789 bcm_sysport_netif_start(dev); 2790 + 2791 + netif_device_attach(dev); 2787 2792 2788 2793 return 0; 2789 2794
+7
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 2191 2191 #define PMF_DMAE_C(bp) (BP_PORT(bp) * MAX_DMAE_C_PER_PORT + \ 2192 2192 E1HVN_MAX) 2193 2193 2194 + /* Following is the DMAE channel number allocation for the clients. 2195 + * MFW: OCBB/OCSD implementations use DMAE channels 14/15 respectively. 2196 + * Driver: 0-3 and 8-11 (for PF dmae operations) 2197 + * 4 and 12 (for stats requests) 2198 + */ 2199 + #define BNX2X_FW_DMAE_C 13 /* Channel for FW DMAE operations */ 2200 + 2194 2201 /* PCIE link and speed */ 2195 2202 #define PCICFG_LINK_WIDTH 0x1f00000 2196 2203 #define PCICFG_LINK_WIDTH_SHIFT 20
+1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
··· 6149 6149 rdata->sd_vlan_tag = cpu_to_le16(start_params->sd_vlan_tag); 6150 6150 rdata->path_id = BP_PATH(bp); 6151 6151 rdata->network_cos_mode = start_params->network_cos_mode; 6152 + rdata->dmae_cmd_id = BNX2X_FW_DMAE_C; 6152 6153 6153 6154 rdata->vxlan_dst_port = cpu_to_le16(start_params->vxlan_dst_port); 6154 6155 rdata->geneve_dst_port = cpu_to_le16(start_params->geneve_dst_port);
+68 -2
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 1675 1675 } else { 1676 1676 if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L4_CS_ERR_BITS) { 1677 1677 if (dev->features & NETIF_F_RXCSUM) 1678 - cpr->rx_l4_csum_errors++; 1678 + bnapi->cp_ring.rx_l4_csum_errors++; 1679 1679 } 1680 1680 } 1681 1681 ··· 8714 8714 return rc; 8715 8715 } 8716 8716 8717 + static int bnxt_dbg_hwrm_ring_info_get(struct bnxt *bp, u8 ring_type, 8718 + u32 ring_id, u32 *prod, u32 *cons) 8719 + { 8720 + struct hwrm_dbg_ring_info_get_output *resp = bp->hwrm_cmd_resp_addr; 8721 + struct hwrm_dbg_ring_info_get_input req = {0}; 8722 + int rc; 8723 + 8724 + bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_DBG_RING_INFO_GET, -1, -1); 8725 + req.ring_type = ring_type; 8726 + req.fw_ring_id = cpu_to_le32(ring_id); 8727 + mutex_lock(&bp->hwrm_cmd_lock); 8728 + rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); 8729 + if (!rc) { 8730 + *prod = le32_to_cpu(resp->producer_index); 8731 + *cons = le32_to_cpu(resp->consumer_index); 8732 + } 8733 + mutex_unlock(&bp->hwrm_cmd_lock); 8734 + return rc; 8735 + } 8736 + 8717 8737 static void bnxt_dump_tx_sw_state(struct bnxt_napi *bnapi) 8718 8738 { 8719 8739 struct bnxt_tx_ring_info *txr = bnapi->tx_ring; ··· 8841 8821 bnxt_queue_sp_work(bp); 8842 8822 } 8843 8823 } 8824 + 8825 + if ((bp->flags & BNXT_FLAG_CHIP_P5) && netif_carrier_ok(dev)) { 8826 + set_bit(BNXT_RING_COAL_NOW_SP_EVENT, &bp->sp_event); 8827 + bnxt_queue_sp_work(bp); 8828 + } 8844 8829 bnxt_restart_timer: 8845 8830 mod_timer(&bp->timer, jiffies + bp->current_interval); 8846 8831 } ··· 8874 8849 if (test_bit(BNXT_STATE_OPEN, &bp->state)) 8875 8850 bnxt_reset_task(bp, silent); 8876 8851 bnxt_rtnl_unlock_sp(bp); 8852 + } 8853 + 8854 + static void bnxt_chk_missed_irq(struct bnxt *bp) 8855 + { 8856 + int i; 8857 + 8858 + if (!(bp->flags & BNXT_FLAG_CHIP_P5)) 8859 + return; 8860 + 8861 + for (i = 0; i < bp->cp_nr_rings; i++) { 8862 + struct bnxt_napi *bnapi = bp->bnapi[i]; 8863 + struct bnxt_cp_ring_info *cpr; 8864 + u32 fw_ring_id; 8865 + int j; 8866 + 8867 + if (!bnapi) 8868 + continue; 8869 + 8870 + cpr = &bnapi->cp_ring; 8871 + for (j = 0; j < 2; j++) { 8872 + struct bnxt_cp_ring_info *cpr2 = cpr->cp_ring_arr[j]; 8873 + u32 val[2]; 8874 + 8875 + if (!cpr2 || cpr2->has_more_work || 8876 + !bnxt_has_work(bp, cpr2)) 8877 + continue; 8878 + 8879 + if (cpr2->cp_raw_cons != cpr2->last_cp_raw_cons) { 8880 + cpr2->last_cp_raw_cons = cpr2->cp_raw_cons; 8881 + continue; 8882 + } 8883 + fw_ring_id = cpr2->cp_ring_struct.fw_ring_id; 8884 + bnxt_dbg_hwrm_ring_info_get(bp, 8885 + DBG_RING_INFO_GET_REQ_RING_TYPE_L2_CMPL, 8886 + fw_ring_id, &val[0], &val[1]); 8887 + cpr->missed_irqs++; 8888 + } 8889 + } 8877 8890 } 8878 8891 8879 8892 static void bnxt_cfg_ntp_filters(struct bnxt *); ··· 8992 8929 8993 8930 if (test_and_clear_bit(BNXT_FLOW_STATS_SP_EVENT, &bp->sp_event)) 8994 8931 bnxt_tc_flow_stats_work(bp); 8932 + 8933 + if (test_and_clear_bit(BNXT_RING_COAL_NOW_SP_EVENT, &bp->sp_event)) 8934 + bnxt_chk_missed_irq(bp); 8995 8935 8996 8936 /* These functions below will clear BNXT_STATE_IN_SP_TASK. They 8997 8937 * must be the last functions to be called before exiting. ··· 10153 10087 } 10154 10088 10155 10089 bnxt_hwrm_func_qcfg(bp); 10090 + bnxt_hwrm_vnic_qcaps(bp); 10156 10091 bnxt_hwrm_port_led_qcaps(bp); 10157 10092 bnxt_ethtool_init(bp); 10158 10093 bnxt_dcb_init(bp); ··· 10187 10120 VNIC_RSS_CFG_REQ_HASH_TYPE_UDP_IPV6; 10188 10121 } 10189 10122 10190 - bnxt_hwrm_vnic_qcaps(bp); 10191 10123 if (bnxt_rfs_supported(bp)) { 10192 10124 dev->hw_features |= NETIF_F_NTUPLE; 10193 10125 if (bnxt_rfs_capable(bp)) {
+4
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 798 798 u8 had_work_done:1; 799 799 u8 has_more_work:1; 800 800 801 + u32 last_cp_raw_cons; 802 + 801 803 struct bnxt_coal rx_ring_coal; 802 804 u64 rx_packets; 803 805 u64 rx_bytes; ··· 818 816 dma_addr_t hw_stats_map; 819 817 u32 hw_stats_ctx_id; 820 818 u64 rx_l4_csum_errors; 819 + u64 missed_irqs; 821 820 822 821 struct bnxt_ring_struct cp_ring_struct; 823 822 ··· 1530 1527 #define BNXT_LINK_SPEED_CHNG_SP_EVENT 14 1531 1528 #define BNXT_FLOW_STATS_SP_EVENT 15 1532 1529 #define BNXT_UPDATE_PHY_SP_EVENT 16 1530 + #define BNXT_RING_COAL_NOW_SP_EVENT 17 1533 1531 1534 1532 struct bnxt_hw_resc hw_resc; 1535 1533 struct bnxt_pf_info pf;
+6 -3
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 137 137 return rc; 138 138 } 139 139 140 - #define BNXT_NUM_STATS 21 140 + #define BNXT_NUM_STATS 22 141 141 142 142 #define BNXT_RX_STATS_ENTRY(counter) \ 143 143 { BNXT_RX_STATS_OFFSET(counter), __stringify(counter) } ··· 384 384 for (k = 0; k < stat_fields; j++, k++) 385 385 buf[j] = le64_to_cpu(hw_stats[k]); 386 386 buf[j++] = cpr->rx_l4_csum_errors; 387 + buf[j++] = cpr->missed_irqs; 387 388 388 389 bnxt_sw_func_stats[RX_TOTAL_DISCARDS].counter += 389 390 le64_to_cpu(cpr->hw_stats->rx_discard_pkts); ··· 468 467 sprintf(buf, "[%d]: tpa_aborts", i); 469 468 buf += ETH_GSTRING_LEN; 470 469 sprintf(buf, "[%d]: rx_l4_csum_errors", i); 470 + buf += ETH_GSTRING_LEN; 471 + sprintf(buf, "[%d]: missed_irqs", i); 471 472 buf += ETH_GSTRING_LEN; 472 473 } 473 474 for (i = 0; i < BNXT_NUM_SW_FUNC_STATS; i++) { ··· 2945 2942 record->asic_state = 0; 2946 2943 strlcpy(record->system_name, utsname()->nodename, 2947 2944 sizeof(record->system_name)); 2948 - record->year = cpu_to_le16(tm.tm_year); 2949 - record->month = cpu_to_le16(tm.tm_mon); 2945 + record->year = cpu_to_le16(tm.tm_year + 1900); 2946 + record->month = cpu_to_le16(tm.tm_mon + 1); 2950 2947 record->day = cpu_to_le16(tm.tm_mday); 2951 2948 record->hour = cpu_to_le16(tm.tm_hour); 2952 2949 record->minute = cpu_to_le16(tm.tm_min);
+3
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
··· 43 43 if (ulp_id == BNXT_ROCE_ULP) { 44 44 unsigned int max_stat_ctxs; 45 45 46 + if (bp->flags & BNXT_FLAG_CHIP_P5) 47 + return -EOPNOTSUPP; 48 + 46 49 max_stat_ctxs = bnxt_get_max_func_stat_ctxs(bp); 47 50 if (max_stat_ctxs <= BNXT_MIN_ROCE_STAT_CTXS || 48 51 bp->num_stat_ctxs == max_stat_ctxs)
+7 -6
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 2855 2855 2856 2856 umac_enable_set(priv, CMD_TX_EN | CMD_RX_EN, true); 2857 2857 2858 - netif_tx_start_all_queues(dev); 2859 2858 bcmgenet_enable_tx_napi(priv); 2860 2859 2861 2860 /* Monitor link interrupts now */ ··· 2936 2937 2937 2938 bcmgenet_netif_start(dev); 2938 2939 2940 + netif_tx_start_all_queues(dev); 2941 + 2939 2942 return 0; 2940 2943 2941 2944 err_irq1: ··· 2959 2958 struct bcmgenet_priv *priv = netdev_priv(dev); 2960 2959 2961 2960 bcmgenet_disable_tx_napi(priv); 2962 - netif_tx_stop_all_queues(dev); 2961 + netif_tx_disable(dev); 2963 2962 2964 2963 /* Disable MAC receive */ 2965 2964 umac_enable_set(priv, CMD_RX_EN, false); ··· 3621 3620 if (!netif_running(dev)) 3622 3621 return 0; 3623 3622 3623 + netif_device_detach(dev); 3624 + 3624 3625 bcmgenet_netif_stop(dev); 3625 3626 3626 3627 if (!device_may_wakeup(d)) 3627 3628 phy_suspend(dev->phydev); 3628 - 3629 - netif_device_detach(dev); 3630 3629 3631 3630 /* Prepare the device for Wake-on-LAN and switch to the slow clock */ 3632 3631 if (device_may_wakeup(d) && priv->wolopts) { ··· 3701 3700 /* Always enable ring 16 - descriptor ring */ 3702 3701 bcmgenet_enable_dma(priv, dma_ctrl); 3703 3702 3704 - netif_device_attach(dev); 3705 - 3706 3703 if (!device_may_wakeup(d)) 3707 3704 phy_resume(dev->phydev); 3708 3705 ··· 3708 3709 bcmgenet_eee_enable_set(dev, true); 3709 3710 3710 3711 bcmgenet_netif_start(dev); 3712 + 3713 + netif_device_attach(dev); 3711 3714 3712 3715 return 0; 3713 3716
+16 -2
drivers/net/ethernet/broadcom/tg3.c
··· 12422 12422 { 12423 12423 struct tg3 *tp = netdev_priv(dev); 12424 12424 int i, irq_sync = 0, err = 0; 12425 + bool reset_phy = false; 12425 12426 12426 12427 if ((ering->rx_pending > tp->rx_std_ring_mask) || 12427 12428 (ering->rx_jumbo_pending > tp->rx_jmb_ring_mask) || ··· 12454 12453 12455 12454 if (netif_running(dev)) { 12456 12455 tg3_halt(tp, RESET_KIND_SHUTDOWN, 1); 12457 - err = tg3_restart_hw(tp, false); 12456 + /* Reset PHY to avoid PHY lock up */ 12457 + if (tg3_asic_rev(tp) == ASIC_REV_5717 || 12458 + tg3_asic_rev(tp) == ASIC_REV_5719 || 12459 + tg3_asic_rev(tp) == ASIC_REV_5720) 12460 + reset_phy = true; 12461 + 12462 + err = tg3_restart_hw(tp, reset_phy); 12458 12463 if (!err) 12459 12464 tg3_netif_start(tp); 12460 12465 } ··· 12494 12487 { 12495 12488 struct tg3 *tp = netdev_priv(dev); 12496 12489 int err = 0; 12490 + bool reset_phy = false; 12497 12491 12498 12492 if (tp->link_config.autoneg == AUTONEG_ENABLE) 12499 12493 tg3_warn_mgmt_link_flap(tp); ··· 12564 12556 12565 12557 if (netif_running(dev)) { 12566 12558 tg3_halt(tp, RESET_KIND_SHUTDOWN, 1); 12567 - err = tg3_restart_hw(tp, false); 12559 + /* Reset PHY to avoid PHY lock up */ 12560 + if (tg3_asic_rev(tp) == ASIC_REV_5717 || 12561 + tg3_asic_rev(tp) == ASIC_REV_5719 || 12562 + tg3_asic_rev(tp) == ASIC_REV_5720) 12563 + reset_phy = true; 12564 + 12565 + err = tg3_restart_hw(tp, reset_phy); 12568 12566 if (!err) 12569 12567 tg3_netif_start(tp); 12570 12568 }
+3
drivers/net/ethernet/cavium/thunder/nic_main.c
··· 1441 1441 { 1442 1442 struct nicpf *nic = pci_get_drvdata(pdev); 1443 1443 1444 + if (!nic) 1445 + return; 1446 + 1444 1447 if (nic->flags & NIC_SRIOV_ENABLED) 1445 1448 pci_disable_sriov(pdev); 1446 1449
+7 -2
drivers/net/ethernet/cavium/thunder/nicvf_main.c
··· 1784 1784 bool if_up = netif_running(nic->netdev); 1785 1785 struct bpf_prog *old_prog; 1786 1786 bool bpf_attached = false; 1787 + int ret = 0; 1787 1788 1788 1789 /* For now just support only the usual MTU sized frames */ 1789 1790 if (prog && (dev->mtu > 1500)) { ··· 1818 1817 if (nic->xdp_prog) { 1819 1818 /* Attach BPF program */ 1820 1819 nic->xdp_prog = bpf_prog_add(nic->xdp_prog, nic->rx_queues - 1); 1821 - if (!IS_ERR(nic->xdp_prog)) 1820 + if (!IS_ERR(nic->xdp_prog)) { 1822 1821 bpf_attached = true; 1822 + } else { 1823 + ret = PTR_ERR(nic->xdp_prog); 1824 + nic->xdp_prog = NULL; 1825 + } 1823 1826 } 1824 1827 1825 1828 /* Calculate Tx queues needed for XDP and network stack */ ··· 1835 1830 netif_trans_update(nic->netdev); 1836 1831 } 1837 1832 1838 - return 0; 1833 + return ret; 1839 1834 } 1840 1835 1841 1836 static int nicvf_xdp(struct net_device *netdev, struct netdev_bpf *xdp)
+3 -1
drivers/net/ethernet/cavium/thunder/nicvf_queues.c
··· 585 585 if (!sq->dmem.base) 586 586 return; 587 587 588 - if (sq->tso_hdrs) 588 + if (sq->tso_hdrs) { 589 589 dma_free_coherent(&nic->pdev->dev, 590 590 sq->dmem.q_len * TSO_HEADER_SIZE, 591 591 sq->tso_hdrs, sq->tso_hdrs_phys); 592 + sq->tso_hdrs = NULL; 593 + } 592 594 593 595 /* Free pending skbs in the queue */ 594 596 smp_rmb();
-1
drivers/net/ethernet/chelsio/Kconfig
··· 67 67 config CHELSIO_T4 68 68 tristate "Chelsio Communications T4/T5/T6 Ethernet support" 69 69 depends on PCI && (IPV6 || IPV6=n) 70 - depends on THERMAL || !THERMAL 71 70 select FW_LOADER 72 71 select MDIO 73 72 select ZLIB_DEFLATE
+1 -3
drivers/net/ethernet/chelsio/cxgb4/Makefile
··· 12 12 cxgb4-$(CONFIG_CHELSIO_T4_DCB) += cxgb4_dcb.o 13 13 cxgb4-$(CONFIG_CHELSIO_T4_FCOE) += cxgb4_fcoe.o 14 14 cxgb4-$(CONFIG_DEBUG_FS) += cxgb4_debugfs.o 15 - ifdef CONFIG_THERMAL 16 - cxgb4-objs += cxgb4_thermal.o 17 - endif 15 + cxgb4-$(CONFIG_THERMAL) += cxgb4_thermal.o
+2 -2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 5863 5863 if (!is_t4(adapter->params.chip)) 5864 5864 cxgb4_ptp_init(adapter); 5865 5865 5866 - if (IS_ENABLED(CONFIG_THERMAL) && 5866 + if (IS_REACHABLE(CONFIG_THERMAL) && 5867 5867 !is_t4(adapter->params.chip) && (adapter->flags & FW_OK)) 5868 5868 cxgb4_thermal_init(adapter); 5869 5869 ··· 5932 5932 5933 5933 if (!is_t4(adapter->params.chip)) 5934 5934 cxgb4_ptp_stop(adapter); 5935 - if (IS_ENABLED(CONFIG_THERMAL)) 5935 + if (IS_REACHABLE(CONFIG_THERMAL)) 5936 5936 cxgb4_thermal_remove(adapter); 5937 5937 5938 5938 /* If we allocated filters, free up state associated with any
+1 -1
drivers/net/ethernet/cortina/gemini.c
··· 660 660 661 661 u64_stats_update_begin(&port->tx_stats_syncp); 662 662 port->tx_frag_stats[nfrags]++; 663 - u64_stats_update_end(&port->ir_stats_syncp); 663 + u64_stats_update_end(&port->tx_stats_syncp); 664 664 } 665 665 } 666 666
+3 -4
drivers/net/ethernet/faraday/ftmac100.c
··· 872 872 struct net_device *netdev = dev_id; 873 873 struct ftmac100 *priv = netdev_priv(netdev); 874 874 875 - if (likely(netif_running(netdev))) { 876 - /* Disable interrupts for polling */ 877 - ftmac100_disable_all_int(priv); 875 + /* Disable interrupts for polling */ 876 + ftmac100_disable_all_int(priv); 877 + if (likely(netif_running(netdev))) 878 878 napi_schedule(&priv->napi); 879 - } 880 879 881 880 return IRQ_HANDLED; 882 881 }
+1 -3
drivers/net/ethernet/hisilicon/hip04_eth.c
··· 915 915 } 916 916 917 917 ret = register_netdev(ndev); 918 - if (ret) { 919 - free_netdev(ndev); 918 + if (ret) 920 919 goto alloc_fail; 921 - } 922 920 923 921 return 0; 924 922
+2 -1
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 3760 3760 /* Hardware table is only clear when pf resets */ 3761 3761 if (!(handle->flags & HNAE3_SUPPORT_VF)) { 3762 3762 ret = hns3_restore_vlan(netdev); 3763 - return ret; 3763 + if (ret) 3764 + return ret; 3764 3765 } 3765 3766 3766 3767 ret = hns3_restore_fd_rules(netdev);
+4 -6
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
··· 1168 1168 */ 1169 1169 static int hclge_bp_setup_hw(struct hclge_dev *hdev, u8 tc) 1170 1170 { 1171 - struct hclge_vport *vport = hdev->vport; 1172 - u32 i, k, qs_bitmap; 1173 - int ret; 1171 + int i; 1174 1172 1175 1173 for (i = 0; i < HCLGE_BP_GRP_NUM; i++) { 1176 - qs_bitmap = 0; 1174 + u32 qs_bitmap = 0; 1175 + int k, ret; 1177 1176 1178 1177 for (k = 0; k < hdev->num_alloc_vport; k++) { 1178 + struct hclge_vport *vport = &hdev->vport[k]; 1179 1179 u16 qs_id = vport->qs_offset + tc; 1180 1180 u8 grp, sub_grp; 1181 1181 ··· 1185 1185 HCLGE_BP_SUB_GRP_ID_S); 1186 1186 if (i == grp) 1187 1187 qs_bitmap |= (1 << sub_grp); 1188 - 1189 - vport++; 1190 1188 } 1191 1189 1192 1190 ret = hclge_tm_qs_bp_cfg(hdev, tc, i, qs_bitmap);
+32 -42
drivers/net/ethernet/ibm/ibmvnic.c
··· 485 485 486 486 for (j = 0; j < rx_pool->size; j++) { 487 487 if (rx_pool->rx_buff[j].skb) { 488 - dev_kfree_skb_any(rx_pool->rx_buff[i].skb); 489 - rx_pool->rx_buff[i].skb = NULL; 488 + dev_kfree_skb_any(rx_pool->rx_buff[j].skb); 489 + rx_pool->rx_buff[j].skb = NULL; 490 490 } 491 491 } 492 492 ··· 1103 1103 return 0; 1104 1104 } 1105 1105 1106 - mutex_lock(&adapter->reset_lock); 1107 - 1108 1106 if (adapter->state != VNIC_CLOSED) { 1109 1107 rc = ibmvnic_login(netdev); 1110 - if (rc) { 1111 - mutex_unlock(&adapter->reset_lock); 1108 + if (rc) 1112 1109 return rc; 1113 - } 1114 1110 1115 1111 rc = init_resources(adapter); 1116 1112 if (rc) { 1117 1113 netdev_err(netdev, "failed to initialize resources\n"); 1118 1114 release_resources(adapter); 1119 - mutex_unlock(&adapter->reset_lock); 1120 1115 return rc; 1121 1116 } 1122 1117 } 1123 1118 1124 1119 rc = __ibmvnic_open(netdev); 1125 1120 netif_carrier_on(netdev); 1126 - 1127 - mutex_unlock(&adapter->reset_lock); 1128 1121 1129 1122 return rc; 1130 1123 } ··· 1262 1269 return 0; 1263 1270 } 1264 1271 1265 - mutex_lock(&adapter->reset_lock); 1266 1272 rc = __ibmvnic_close(netdev); 1267 1273 ibmvnic_cleanup(netdev); 1268 - mutex_unlock(&adapter->reset_lock); 1269 1274 1270 1275 return rc; 1271 1276 } ··· 1536 1545 tx_crq.v1.sge_len = cpu_to_be32(skb->len); 1537 1546 tx_crq.v1.ioba = cpu_to_be64(data_dma_addr); 1538 1547 1539 - if (adapter->vlan_header_insertion) { 1548 + if (adapter->vlan_header_insertion && skb_vlan_tag_present(skb)) { 1540 1549 tx_crq.v1.flags2 |= IBMVNIC_TX_VLAN_INSERT; 1541 1550 tx_crq.v1.vlan_id = cpu_to_be16(skb->vlan_tci); 1542 1551 } ··· 1737 1746 struct ibmvnic_rwi *rwi, u32 reset_state) 1738 1747 { 1739 1748 u64 old_num_rx_queues, old_num_tx_queues; 1749 + u64 old_num_rx_slots, old_num_tx_slots; 1740 1750 struct net_device *netdev = adapter->netdev; 1741 1751 int i, rc; 1742 1752 ··· 1749 1757 1750 1758 old_num_rx_queues = adapter->req_rx_queues; 1751 1759 old_num_tx_queues = adapter->req_tx_queues; 1760 + old_num_rx_slots = adapter->req_rx_add_entries_per_subcrq; 1761 + old_num_tx_slots = adapter->req_tx_entries_per_subcrq; 1752 1762 1753 1763 ibmvnic_cleanup(netdev); 1754 1764 ··· 1813 1819 if (rc) 1814 1820 return rc; 1815 1821 } else if (adapter->req_rx_queues != old_num_rx_queues || 1816 - adapter->req_tx_queues != old_num_tx_queues) { 1817 - adapter->map_id = 1; 1822 + adapter->req_tx_queues != old_num_tx_queues || 1823 + adapter->req_rx_add_entries_per_subcrq != 1824 + old_num_rx_slots || 1825 + adapter->req_tx_entries_per_subcrq != 1826 + old_num_tx_slots) { 1818 1827 release_rx_pools(adapter); 1819 1828 release_tx_pools(adapter); 1820 - rc = init_rx_pools(netdev); 1821 - if (rc) 1822 - return rc; 1823 - rc = init_tx_pools(netdev); 1829 + release_napi(adapter); 1830 + release_vpd_data(adapter); 1831 + 1832 + rc = init_resources(adapter); 1824 1833 if (rc) 1825 1834 return rc; 1826 1835 1827 - release_napi(adapter); 1828 - rc = init_napi(adapter); 1829 - if (rc) 1830 - return rc; 1831 1836 } else { 1832 1837 rc = reset_tx_pools(adapter); 1833 1838 if (rc) ··· 1910 1917 adapter->state = VNIC_PROBED; 1911 1918 return 0; 1912 1919 } 1913 - /* netif_set_real_num_xx_queues needs to take rtnl lock here 1914 - * unless wait_for_reset is set, in which case the rtnl lock 1915 - * has already been taken before initializing the reset 1916 - */ 1917 - if (!adapter->wait_for_reset) { 1918 - rtnl_lock(); 1919 - rc = init_resources(adapter); 1920 - rtnl_unlock(); 1921 - } else { 1922 - rc = init_resources(adapter); 1923 - } 1920 + 1921 + rc = init_resources(adapter); 1924 1922 if (rc) 1925 1923 return rc; 1926 1924 ··· 1970 1986 struct ibmvnic_rwi *rwi; 1971 1987 struct ibmvnic_adapter *adapter; 1972 1988 struct net_device *netdev; 1989 + bool we_lock_rtnl = false; 1973 1990 u32 reset_state; 1974 1991 int rc = 0; 1975 1992 1976 1993 adapter = container_of(work, struct ibmvnic_adapter, ibmvnic_reset); 1977 1994 netdev = adapter->netdev; 1978 1995 1979 - mutex_lock(&adapter->reset_lock); 1996 + /* netif_set_real_num_xx_queues needs to take rtnl lock here 1997 + * unless wait_for_reset is set, in which case the rtnl lock 1998 + * has already been taken before initializing the reset 1999 + */ 2000 + if (!adapter->wait_for_reset) { 2001 + rtnl_lock(); 2002 + we_lock_rtnl = true; 2003 + } 1980 2004 reset_state = adapter->state; 1981 2005 1982 2006 rwi = get_next_rwi(adapter); ··· 2012 2020 if (rc) { 2013 2021 netdev_dbg(adapter->netdev, "Reset failed\n"); 2014 2022 free_all_rwi(adapter); 2015 - mutex_unlock(&adapter->reset_lock); 2016 - return; 2017 2023 } 2018 2024 2019 2025 adapter->resetting = false; 2020 - mutex_unlock(&adapter->reset_lock); 2026 + if (we_lock_rtnl) 2027 + rtnl_unlock(); 2021 2028 } 2022 2029 2023 2030 static int ibmvnic_reset(struct ibmvnic_adapter *adapter, ··· 4759 4768 4760 4769 INIT_WORK(&adapter->ibmvnic_reset, __ibmvnic_reset); 4761 4770 INIT_LIST_HEAD(&adapter->rwi_list); 4762 - mutex_init(&adapter->reset_lock); 4763 4771 mutex_init(&adapter->rwi_lock); 4764 4772 adapter->resetting = false; 4765 4773 ··· 4830 4840 struct ibmvnic_adapter *adapter = netdev_priv(netdev); 4831 4841 4832 4842 adapter->state = VNIC_REMOVING; 4833 - unregister_netdev(netdev); 4834 - mutex_lock(&adapter->reset_lock); 4843 + rtnl_lock(); 4844 + unregister_netdevice(netdev); 4835 4845 4836 4846 release_resources(adapter); 4837 4847 release_sub_crqs(adapter, 1); ··· 4842 4852 4843 4853 adapter->state = VNIC_REMOVED; 4844 4854 4845 - mutex_unlock(&adapter->reset_lock); 4855 + rtnl_unlock(); 4846 4856 device_remove_file(&dev->dev, &dev_attr_failover); 4847 4857 free_netdev(netdev); 4848 4858 dev_set_drvdata(&dev->dev, NULL);
+1 -1
drivers/net/ethernet/ibm/ibmvnic.h
··· 1075 1075 struct tasklet_struct tasklet; 1076 1076 enum vnic_state state; 1077 1077 enum ibmvnic_reset_reason reset_reason; 1078 - struct mutex reset_lock, rwi_lock; 1078 + struct mutex rwi_lock; 1079 1079 struct list_head rwi_list; 1080 1080 struct work_struct ibmvnic_reset; 1081 1081 bool resetting;
+6 -4
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 1413 1413 } 1414 1414 1415 1415 vsi->flags |= I40E_VSI_FLAG_FILTER_CHANGED; 1416 - set_bit(__I40E_MACVLAN_SYNC_PENDING, vsi->state); 1416 + set_bit(__I40E_MACVLAN_SYNC_PENDING, vsi->back->state); 1417 1417 } 1418 1418 1419 1419 /** ··· 12249 12249 NETIF_F_GSO_GRE | 12250 12250 NETIF_F_GSO_GRE_CSUM | 12251 12251 NETIF_F_GSO_PARTIAL | 12252 + NETIF_F_GSO_IPXIP4 | 12253 + NETIF_F_GSO_IPXIP6 | 12252 12254 NETIF_F_GSO_UDP_TUNNEL | 12253 12255 NETIF_F_GSO_UDP_TUNNEL_CSUM | 12254 12256 NETIF_F_SCTP_CRC | ··· 12268 12266 /* record features VLANs can make use of */ 12269 12267 netdev->vlan_features |= hw_enc_features | NETIF_F_TSO_MANGLEID; 12270 12268 12271 - if (!(pf->flags & I40E_FLAG_MFP_ENABLED)) 12272 - netdev->hw_features |= NETIF_F_NTUPLE | NETIF_F_HW_TC; 12273 - 12274 12269 hw_features = hw_enc_features | 12275 12270 NETIF_F_HW_VLAN_CTAG_TX | 12276 12271 NETIF_F_HW_VLAN_CTAG_RX; 12272 + 12273 + if (!(pf->flags & I40E_FLAG_MFP_ENABLED)) 12274 + hw_features |= NETIF_F_NTUPLE | NETIF_F_HW_TC; 12277 12275 12278 12276 netdev->hw_features |= hw_features; 12279 12277
+7 -7
drivers/net/ethernet/intel/i40e/i40e_xsk.c
··· 33 33 } 34 34 35 35 /** 36 - * i40e_add_xsk_umem - Store an UMEM for a certain ring/qid 36 + * i40e_add_xsk_umem - Store a UMEM for a certain ring/qid 37 37 * @vsi: Current VSI 38 38 * @umem: UMEM to store 39 39 * @qid: Ring/qid to associate with the UMEM ··· 56 56 } 57 57 58 58 /** 59 - * i40e_remove_xsk_umem - Remove an UMEM for a certain ring/qid 59 + * i40e_remove_xsk_umem - Remove a UMEM for a certain ring/qid 60 60 * @vsi: Current VSI 61 61 * @qid: Ring/qid associated with the UMEM 62 62 **/ ··· 130 130 } 131 131 132 132 /** 133 - * i40e_xsk_umem_enable - Enable/associate an UMEM to a certain ring/qid 133 + * i40e_xsk_umem_enable - Enable/associate a UMEM to a certain ring/qid 134 134 * @vsi: Current VSI 135 135 * @umem: UMEM 136 136 * @qid: Rx ring to associate UMEM to ··· 189 189 } 190 190 191 191 /** 192 - * i40e_xsk_umem_disable - Diassociate an UMEM from a certain ring/qid 192 + * i40e_xsk_umem_disable - Disassociate a UMEM from a certain ring/qid 193 193 * @vsi: Current VSI 194 194 * @qid: Rx ring to associate UMEM to 195 195 * ··· 255 255 } 256 256 257 257 /** 258 - * i40e_xsk_umem_query - Queries a certain ring/qid for its UMEM 258 + * i40e_xsk_umem_setup - Enable/disassociate a UMEM to/from a ring/qid 259 259 * @vsi: Current VSI 260 260 * @umem: UMEM to enable/associate to a ring, or NULL to disable 261 261 * @qid: Rx ring to (dis)associate UMEM (from)to 262 262 * 263 - * This function enables or disables an UMEM to a certain ring. 263 + * This function enables or disables a UMEM to a certain ring. 264 264 * 265 265 * Returns 0 on success, <0 on failure 266 266 **/ ··· 276 276 * @rx_ring: Rx ring 277 277 * @xdp: xdp_buff used as input to the XDP program 278 278 * 279 - * This function enables or disables an UMEM to a certain ring. 279 + * This function enables or disables a UMEM to a certain ring. 280 280 * 281 281 * Returns any of I40E_XDP_{PASS, CONSUMED, TX, REDIR} 282 282 **/
+3 -1
drivers/net/ethernet/intel/ice/ice.h
··· 76 76 #define ICE_MIN_INTR_PER_VF (ICE_MIN_QS_PER_VF + 1) 77 77 #define ICE_DFLT_INTR_PER_VF (ICE_DFLT_QS_PER_VF + 1) 78 78 79 + #define ICE_MAX_RESET_WAIT 20 80 + 79 81 #define ICE_VSIQF_HKEY_ARRAY_SIZE ((VSIQF_HKEY_MAX_INDEX + 1) * 4) 80 82 81 83 #define ICE_DFLT_NETIF_M (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK) ··· 191 189 u64 tx_linearize; 192 190 DECLARE_BITMAP(state, __ICE_STATE_NBITS); 193 191 DECLARE_BITMAP(flags, ICE_VSI_FLAG_NBITS); 194 - unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)]; 195 192 unsigned int current_netdev_flags; 196 193 u32 tx_restart; 197 194 u32 tx_busy; ··· 370 369 int ice_get_rss(struct ice_vsi *vsi, u8 *seed, u8 *lut, u16 lut_size); 371 370 void ice_fill_rss_lut(u8 *lut, u16 rss_table_size, u16 rss_size); 372 371 void ice_print_link_msg(struct ice_vsi *vsi, bool isup); 372 + void ice_napi_del(struct ice_vsi *vsi); 373 373 374 374 #endif /* _ICE_H_ */
+3
drivers/net/ethernet/intel/ice/ice_common.c
··· 811 811 /* Attempt to disable FW logging before shutting down control queues */ 812 812 ice_cfg_fw_log(hw, false); 813 813 ice_shutdown_all_ctrlq(hw); 814 + 815 + /* Clear VSI contexts if not already cleared */ 816 + ice_clear_all_vsi_ctx(hw); 814 817 } 815 818 816 819 /**
+6 -1
drivers/net/ethernet/intel/ice/ice_ethtool.c
··· 1517 1517 } 1518 1518 1519 1519 if (!test_bit(__ICE_DOWN, pf->state)) { 1520 - /* Give it a little more time to try to come back */ 1520 + /* Give it a little more time to try to come back. If still 1521 + * down, restart autoneg link or reinitialize the interface. 1522 + */ 1521 1523 msleep(75); 1522 1524 if (!test_bit(__ICE_DOWN, pf->state)) 1523 1525 return ice_nway_reset(netdev); 1526 + 1527 + ice_down(vsi); 1528 + ice_up(vsi); 1524 1529 } 1525 1530 1526 1531 return err;
+2
drivers/net/ethernet/intel/ice/ice_hw_autogen.h
··· 242 242 #define GLNVM_ULD 0x000B6008 243 243 #define GLNVM_ULD_CORER_DONE_M BIT(3) 244 244 #define GLNVM_ULD_GLOBR_DONE_M BIT(4) 245 + #define GLPCI_CNF2 0x000BE004 246 + #define GLPCI_CNF2_CACHELINE_SIZE_M BIT(1) 245 247 #define PF_FUNC_RID 0x0009E880 246 248 #define PF_FUNC_RID_FUNC_NUM_S 0 247 249 #define PF_FUNC_RID_FUNC_NUM_M ICE_M(0x7, 0)
+2 -1
drivers/net/ethernet/intel/ice/ice_lib.c
··· 1997 1997 status = ice_update_vsi(&vsi->back->hw, vsi->idx, ctxt, NULL); 1998 1998 if (status) { 1999 1999 netdev_err(vsi->netdev, "%sabling VLAN pruning on VSI handle: %d, VSI HW ID: %d failed, err = %d, aq_err = %d\n", 2000 - ena ? "Ena" : "Dis", vsi->idx, vsi->vsi_num, status, 2000 + ena ? "En" : "Dis", vsi->idx, vsi->vsi_num, status, 2001 2001 vsi->back->hw.adminq.sq_last_status); 2002 2002 goto err_out; 2003 2003 } ··· 2458 2458 * on this wq 2459 2459 */ 2460 2460 if (vsi->netdev && !ice_is_reset_in_progress(pf->state)) { 2461 + ice_napi_del(vsi); 2461 2462 unregister_netdev(vsi->netdev); 2462 2463 free_netdev(vsi->netdev); 2463 2464 vsi->netdev = NULL;
+48 -38
drivers/net/ethernet/intel/ice/ice_main.c
··· 1465 1465 * ice_napi_del - Remove NAPI handler for the VSI 1466 1466 * @vsi: VSI for which NAPI handler is to be removed 1467 1467 */ 1468 - static void ice_napi_del(struct ice_vsi *vsi) 1468 + void ice_napi_del(struct ice_vsi *vsi) 1469 1469 { 1470 1470 int v_idx; 1471 1471 ··· 1622 1622 { 1623 1623 struct ice_netdev_priv *np = netdev_priv(netdev); 1624 1624 struct ice_vsi *vsi = np->vsi; 1625 - int ret; 1626 1625 1627 1626 if (vid >= VLAN_N_VID) { 1628 1627 netdev_err(netdev, "VLAN id requested %d is out of range %d\n", ··· 1634 1635 1635 1636 /* Enable VLAN pruning when VLAN 0 is added */ 1636 1637 if (unlikely(!vid)) { 1637 - ret = ice_cfg_vlan_pruning(vsi, true); 1638 + int ret = ice_cfg_vlan_pruning(vsi, true); 1639 + 1638 1640 if (ret) 1639 1641 return ret; 1640 1642 } ··· 1644 1644 * needed to continue allowing all untagged packets since VLAN prune 1645 1645 * list is applied to all packets by the switch 1646 1646 */ 1647 - ret = ice_vsi_add_vlan(vsi, vid); 1648 - 1649 - if (!ret) 1650 - set_bit(vid, vsi->active_vlans); 1651 - 1652 - return ret; 1647 + return ice_vsi_add_vlan(vsi, vid); 1653 1648 } 1654 1649 1655 1650 /** ··· 1671 1676 status = ice_vsi_kill_vlan(vsi, vid); 1672 1677 if (status) 1673 1678 return status; 1674 - 1675 - clear_bit(vid, vsi->active_vlans); 1676 1679 1677 1680 /* Disable VLAN pruning when VLAN 0 is removed */ 1678 1681 if (unlikely(!vid)) ··· 1995 2002 } 1996 2003 1997 2004 /** 2005 + * ice_verify_cacheline_size - verify driver's assumption of 64 Byte cache lines 2006 + * @pf: pointer to the PF structure 2007 + * 2008 + * There is no error returned here because the driver should be able to handle 2009 + * 128 Byte cache lines, so we only print a warning in case issues are seen, 2010 + * specifically with Tx. 
2011 + */ 2012 + static void ice_verify_cacheline_size(struct ice_pf *pf) 2013 + { 2014 + if (rd32(&pf->hw, GLPCI_CNF2) & GLPCI_CNF2_CACHELINE_SIZE_M) 2015 + dev_warn(&pf->pdev->dev, 2016 + "%d Byte cache line assumption is invalid, driver may have Tx timeouts!\n", 2017 + ICE_CACHE_LINE_BYTES); 2018 + } 2019 + 2020 + /** 1998 2021 * ice_probe - Device initialization routine 1999 2022 * @pdev: PCI device information struct 2000 2023 * @ent: entry in ice_pci_tbl ··· 2160 2151 /* since everything is good, start the service timer */ 2161 2152 mod_timer(&pf->serv_tmr, round_jiffies(jiffies + pf->serv_tmr_period)); 2162 2153 2154 + ice_verify_cacheline_size(pf); 2155 + 2163 2156 return 0; 2164 2157 2165 2158 err_alloc_sw_unroll: ··· 2192 2181 2193 2182 if (!pf) 2194 2183 return; 2184 + 2185 + for (i = 0; i < ICE_MAX_RESET_WAIT; i++) { 2186 + if (!ice_is_reset_in_progress(pf->state)) 2187 + break; 2188 + msleep(100); 2189 + } 2195 2190 2196 2191 set_bit(__ICE_DOWN, pf->state); 2197 2192 ice_service_task_stop(pf); ··· 2527 2510 } 2528 2511 2529 2512 /** 2530 - * ice_restore_vlan - Reinstate VLANs when vsi/netdev comes back up 2531 - * @vsi: the VSI being brought back up 2532 - */ 2533 - static int ice_restore_vlan(struct ice_vsi *vsi) 2534 - { 2535 - int err; 2536 - u16 vid; 2537 - 2538 - if (!vsi->netdev) 2539 - return -EINVAL; 2540 - 2541 - err = ice_vsi_vlan_setup(vsi); 2542 - if (err) 2543 - return err; 2544 - 2545 - for_each_set_bit(vid, vsi->active_vlans, VLAN_N_VID) { 2546 - err = ice_vlan_rx_add_vid(vsi->netdev, htons(ETH_P_8021Q), vid); 2547 - if (err) 2548 - break; 2549 - } 2550 - 2551 - return err; 2552 - } 2553 - 2554 - /** 2555 2513 * ice_vsi_cfg - Setup the VSI 2556 2514 * @vsi: the VSI being configured 2557 2515 * ··· 2538 2546 2539 2547 if (vsi->netdev) { 2540 2548 ice_set_rx_mode(vsi->netdev); 2541 - err = ice_restore_vlan(vsi); 2549 + 2550 + err = ice_vsi_vlan_setup(vsi); 2551 + 2542 2552 if (err) 2543 2553 return err; 2544 2554 } ··· 3290 3296 struct 
device *dev = &pf->pdev->dev; 3291 3297 struct ice_hw *hw = &pf->hw; 3292 3298 enum ice_status ret; 3293 - int err; 3299 + int err, i; 3294 3300 3295 3301 if (test_bit(__ICE_DOWN, pf->state)) 3296 3302 goto clear_recovery; ··· 3364 3370 } 3365 3371 3366 3372 ice_reset_all_vfs(pf, true); 3373 + 3374 + for (i = 0; i < pf->num_alloc_vsi; i++) { 3375 + bool link_up; 3376 + 3377 + if (!pf->vsi[i] || pf->vsi[i]->type != ICE_VSI_PF) 3378 + continue; 3379 + ice_get_link_status(pf->vsi[i]->port_info, &link_up); 3380 + if (link_up) { 3381 + netif_carrier_on(pf->vsi[i]->netdev); 3382 + netif_tx_wake_all_queues(pf->vsi[i]->netdev); 3383 + } else { 3384 + netif_carrier_off(pf->vsi[i]->netdev); 3385 + netif_tx_stop_all_queues(pf->vsi[i]->netdev); 3386 + } 3387 + } 3388 + 3367 3389 /* if we get here, reset flow is successful */ 3368 3390 clear_bit(__ICE_RESET_FAILED, pf->state); 3369 3391 return;
+12
drivers/net/ethernet/intel/ice/ice_switch.c
··· 348 348 } 349 349 350 350 /** 351 + * ice_clear_all_vsi_ctx - clear all the VSI context entries 352 + * @hw: pointer to the hw struct 353 + */ 354 + void ice_clear_all_vsi_ctx(struct ice_hw *hw) 355 + { 356 + u16 i; 357 + 358 + for (i = 0; i < ICE_MAX_VSI; i++) 359 + ice_clear_vsi_ctx(hw, i); 360 + } 361 + 362 + /** 351 363 * ice_add_vsi - add VSI context to the hardware and VSI handle list 352 364 * @hw: pointer to the hw struct 353 365 * @vsi_handle: unique VSI handle provided by drivers
+2
drivers/net/ethernet/intel/ice/ice_switch.h
··· 190 190 struct ice_sq_cd *cd); 191 191 bool ice_is_vsi_valid(struct ice_hw *hw, u16 vsi_handle); 192 192 struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle); 193 + void ice_clear_all_vsi_ctx(struct ice_hw *hw); 194 + /* Switch config */ 193 195 enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw); 194 196 195 197 /* Switch/bridge related commands */
+6 -5
drivers/net/ethernet/intel/ice/ice_txrx.c
··· 1520 1520 1521 1521 /* update gso_segs and bytecount */ 1522 1522 first->gso_segs = skb_shinfo(skb)->gso_segs; 1523 - first->bytecount = (first->gso_segs - 1) * off->header_len; 1523 + first->bytecount += (first->gso_segs - 1) * off->header_len; 1524 1524 1525 1525 cd_tso_len = skb->len - off->header_len; 1526 1526 cd_mss = skb_shinfo(skb)->gso_size; ··· 1556 1556 * magnitude greater than our largest possible GSO size. 1557 1557 * 1558 1558 * This would then be implemented as: 1559 - * return (((size >> 12) * 85) >> 8) + 1; 1559 + * return (((size >> 12) * 85) >> 8) + ICE_DESCS_FOR_SKB_DATA_PTR; 1560 1560 * 1561 1561 * Since multiplication and division are commutative, we can reorder 1562 1562 * operations into: 1563 - * return ((size * 85) >> 20) + 1; 1563 + * return ((size * 85) >> 20) + ICE_DESCS_FOR_SKB_DATA_PTR; 1564 1564 */ 1565 1565 static unsigned int ice_txd_use_count(unsigned int size) 1566 1566 { 1567 - return ((size * 85) >> 20) + 1; 1567 + return ((size * 85) >> 20) + ICE_DESCS_FOR_SKB_DATA_PTR; 1568 1568 } 1569 1569 1570 1570 /** ··· 1706 1706 * + 1 desc for context descriptor, 1707 1707 * otherwise try next time 1708 1708 */ 1709 - if (ice_maybe_stop_tx(tx_ring, count + 4 + 1)) { 1709 + if (ice_maybe_stop_tx(tx_ring, count + ICE_DESCS_PER_CACHE_LINE + 1710 + ICE_DESCS_FOR_CTX_DESC)) { 1710 1711 tx_ring->tx_stats.tx_busy++; 1711 1712 return NETDEV_TX_BUSY; 1712 1713 }
+15 -2
drivers/net/ethernet/intel/ice/ice_txrx.h
··· 22 22 #define ICE_RX_BUF_WRITE 16 /* Must be power of 2 */ 23 23 #define ICE_MAX_TXQ_PER_TXQG 128 24 24 25 - /* Tx Descriptors needed, worst case */ 26 - #define DESC_NEEDED (MAX_SKB_FRAGS + 4) 25 + /* We are assuming that the cache line is always 64 Bytes here for ice. 26 + * In order to make sure that is a correct assumption there is a check in probe 27 + * to print a warning if the read from GLPCI_CNF2 tells us that the cache line 28 + * size is 128 bytes. We do it this way because we do not want to read the 29 + * GLPCI_CNF2 register or a variable containing the value on every pass through 30 + * the Tx path. 31 + */ 32 + #define ICE_CACHE_LINE_BYTES 64 33 + #define ICE_DESCS_PER_CACHE_LINE (ICE_CACHE_LINE_BYTES / \ 34 + sizeof(struct ice_tx_desc)) 35 + #define ICE_DESCS_FOR_CTX_DESC 1 36 + #define ICE_DESCS_FOR_SKB_DATA_PTR 1 37 + /* Tx descriptors needed, worst case */ 38 + #define DESC_NEEDED (MAX_SKB_FRAGS + ICE_DESCS_FOR_CTX_DESC + \ 39 + ICE_DESCS_PER_CACHE_LINE + ICE_DESCS_FOR_SKB_DATA_PTR) 27 40 #define ICE_DESC_UNUSED(R) \ 28 41 ((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \ 29 42 (R)->next_to_clean - (R)->next_to_use - 1)
+1 -1
drivers/net/ethernet/intel/ice/ice_type.h
··· 92 92 u64 phy_type_low; 93 93 u16 max_frame_size; 94 94 u16 link_speed; 95 + u16 req_speeds; 95 96 u8 lse_ena; /* Link Status Event notification */ 96 97 u8 link_info; 97 98 u8 an_info; 98 99 u8 ext_info; 99 100 u8 pacing; 100 - u8 req_speeds; 101 101 /* Refer to #define from module_type[ICE_MODULE_TYPE_TOTAL_BYTE] of 102 102 * ice_aqc_get_phy_caps structure 103 103 */
+1 -3
drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
··· 348 348 struct ice_vsi_ctx ctxt = { 0 }; 349 349 enum ice_status status; 350 350 351 - ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_TAGGED | 351 + ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_UNTAGGED | 352 352 ICE_AQ_VSI_PVLAN_INSERT_PVID | 353 353 ICE_AQ_VSI_VLAN_EMOD_STR; 354 354 ctxt.info.pvid = cpu_to_le16(vid); ··· 2171 2171 2172 2172 if (!ice_vsi_add_vlan(vsi, vid)) { 2173 2173 vf->num_vlan++; 2174 - set_bit(vid, vsi->active_vlans); 2175 2174 2176 2175 /* Enable VLAN pruning when VLAN 0 is added */ 2177 2176 if (unlikely(!vid)) ··· 2189 2190 */ 2190 2191 if (!ice_vsi_kill_vlan(vsi, vid)) { 2191 2192 vf->num_vlan--; 2192 - clear_bit(vid, vsi->active_vlans); 2193 2193 2194 2194 /* Disable VLAN pruning when removing VLAN 0 */ 2195 2195 if (unlikely(!vid))
+1
drivers/net/ethernet/intel/igb/e1000_i210.c
··· 842 842 nvm_word = E1000_INVM_DEFAULT_AL; 843 843 tmp_nvm = nvm_word | E1000_INVM_PLL_WO_VAL; 844 844 igb_write_phy_reg_82580(hw, I347AT4_PAGE_SELECT, E1000_PHY_PLL_FREQ_PAGE); 845 + phy_word = E1000_PHY_PLL_UNCONF; 845 846 for (i = 0; i < E1000_MAX_PLL_TRIES; i++) { 846 847 /* check current state directly from internal PHY */ 847 848 igb_read_phy_reg_82580(hw, E1000_PHY_PLL_FREQ_REG, &phy_word);
+7 -5
drivers/net/ethernet/intel/igb/igb_ptp.c
··· 53 53 * 2^40 * 10^-9 / 60 = 18.3 minutes. 54 54 * 55 55 * SYSTIM is converted to real time using a timecounter. As 56 - * timecounter_cyc2time() allows old timestamps, the timecounter 57 - * needs to be updated at least once per half of the SYSTIM interval. 58 - * Scheduling of delayed work is not very accurate, so we aim for 8 59 - * minutes to be sure the actual interval is shorter than 9.16 minutes. 56 + * timecounter_cyc2time() allows old timestamps, the timecounter needs 57 + * to be updated at least once per half of the SYSTIM interval. 58 + * Scheduling of delayed work is not very accurate, and also the NIC 59 + * clock can be adjusted to run up to 6% faster and the system clock 60 + * up to 10% slower, so we aim for 6 minutes to be sure the actual 61 + * interval in the NIC time is shorter than 9.16 minutes. 60 62 */ 61 63 62 - #define IGB_SYSTIM_OVERFLOW_PERIOD (HZ * 60 * 8) 64 + #define IGB_SYSTIM_OVERFLOW_PERIOD (HZ * 60 * 6) 63 65 #define IGB_PTP_TX_TIMEOUT (HZ * 15) 64 66 #define INCPERIOD_82576 BIT(E1000_TIMINCA_16NS_SHIFT) 65 67 #define INCVALUE_82576_MASK GENMASK(E1000_TIMINCA_16NS_SHIFT - 1, 0)
+3 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
··· 2262 2262 *autoneg = false; 2263 2263 2264 2264 if (hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0 || 2265 - hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1) { 2265 + hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1 || 2266 + hw->phy.sfp_type == ixgbe_sfp_type_1g_lx_core0 || 2267 + hw->phy.sfp_type == ixgbe_sfp_type_1g_lx_core1) { 2266 2268 *speed = IXGBE_LINK_SPEED_1GB_FULL; 2267 2269 return 0; 2268 2270 }
+3 -2
drivers/net/ethernet/lantiq_xrx200.c
··· 512 512 err = register_netdev(net_dev); 513 513 if (err) 514 514 goto err_unprepare_clk; 515 - return err; 515 + 516 + return 0; 516 517 517 518 err_unprepare_clk: 518 519 clk_disable_unprepare(priv->clk); ··· 521 520 err_uninit_dma: 522 521 xrx200_hw_cleanup(priv); 523 522 524 - return 0; 523 + return err; 525 524 } 526 525 527 526 static int xrx200_remove(struct platform_device *pdev)
+5 -11
drivers/net/ethernet/marvell/mvneta.c
··· 494 494 #if defined(__LITTLE_ENDIAN) 495 495 struct mvneta_tx_desc { 496 496 u32 command; /* Options used by HW for packet transmitting.*/ 497 - u16 reserverd1; /* csum_l4 (for future use) */ 497 + u16 reserved1; /* csum_l4 (for future use) */ 498 498 u16 data_size; /* Data size of transmitted packet in bytes */ 499 499 u32 buf_phys_addr; /* Physical addr of transmitted buffer */ 500 500 u32 reserved2; /* hw_cmd - (for future use, PMT) */ ··· 519 519 #else 520 520 struct mvneta_tx_desc { 521 521 u16 data_size; /* Data size of transmitted packet in bytes */ 522 - u16 reserverd1; /* csum_l4 (for future use) */ 522 + u16 reserved1; /* csum_l4 (for future use) */ 523 523 u32 command; /* Options used by HW for packet transmitting.*/ 524 524 u32 reserved2; /* hw_cmd - (for future use, PMT) */ 525 525 u32 buf_phys_addr; /* Physical addr of transmitted buffer */ ··· 3343 3343 if (state->interface != PHY_INTERFACE_MODE_NA && 3344 3344 state->interface != PHY_INTERFACE_MODE_QSGMII && 3345 3345 state->interface != PHY_INTERFACE_MODE_SGMII && 3346 - state->interface != PHY_INTERFACE_MODE_2500BASEX && 3347 3346 !phy_interface_mode_is_8023z(state->interface) && 3348 3347 !phy_interface_mode_is_rgmii(state->interface)) { 3349 3348 bitmap_zero(supported, __ETHTOOL_LINK_MODE_MASK_NBITS); ··· 3356 3357 /* Asymmetric pause is unsupported */ 3357 3358 phylink_set(mask, Pause); 3358 3359 3359 - /* We cannot use 1Gbps when using the 2.5G interface. 
*/ 3360 - if (state->interface == PHY_INTERFACE_MODE_2500BASEX) { 3361 - phylink_set(mask, 2500baseT_Full); 3362 - phylink_set(mask, 2500baseX_Full); 3363 - } else { 3364 - phylink_set(mask, 1000baseT_Full); 3365 - phylink_set(mask, 1000baseX_Full); 3366 - } 3360 + /* Half-duplex at speeds higher than 100Mbit is unsupported */ 3361 + phylink_set(mask, 1000baseT_Full); 3362 + phylink_set(mask, 1000baseX_Full); 3367 3363 3368 3364 if (!phy_interface_mode_is_8023z(state->interface)) { 3369 3365 /* 10M and 100M are only supported in non-802.3z mode */
+1 -1
drivers/net/ethernet/mellanox/mlx4/alloc.c
··· 337 337 static u32 __mlx4_alloc_from_zone(struct mlx4_zone_entry *zone, int count, 338 338 int align, u32 skip_mask, u32 *puid) 339 339 { 340 - u32 uid; 340 + u32 uid = 0; 341 341 u32 res; 342 342 struct mlx4_zone_allocator *zone_alloc = zone->allocator; 343 343 struct mlx4_zone_entry *curr_node;
+4 -2
drivers/net/ethernet/mellanox/mlx4/en_tx.c
··· 1006 1006 ring->packets++; 1007 1007 } 1008 1008 ring->bytes += tx_info->nr_bytes; 1009 - netdev_tx_sent_queue(ring->tx_queue, tx_info->nr_bytes); 1010 1009 AVG_PERF_COUNTER(priv->pstats.tx_pktsz_avg, skb->len); 1011 1010 1012 1011 if (tx_info->inl) ··· 1043 1044 netif_tx_stop_queue(ring->tx_queue); 1044 1045 ring->queue_stopped++; 1045 1046 } 1046 - send_doorbell = !skb->xmit_more || netif_xmit_stopped(ring->tx_queue); 1047 + 1048 + send_doorbell = __netdev_tx_sent_queue(ring->tx_queue, 1049 + tx_info->nr_bytes, 1050 + skb->xmit_more); 1047 1051 1048 1052 real_size = (real_size / 16) & 0x3f; 1049 1053
+2 -2
drivers/net/ethernet/mellanox/mlx4/mlx4.h
··· 540 540 struct resource_allocator { 541 541 spinlock_t alloc_lock; /* protect quotas */ 542 542 union { 543 - int res_reserved; 544 - int res_port_rsvd[MLX4_MAX_PORTS]; 543 + unsigned int res_reserved; 544 + unsigned int res_port_rsvd[MLX4_MAX_PORTS]; 545 545 }; 546 546 union { 547 547 int res_free;
+1
drivers/net/ethernet/mellanox/mlx4/mr.c
··· 363 363 container_of((void *)mpt_entry, struct mlx4_cmd_mailbox, 364 364 buf); 365 365 366 + (*mpt_entry)->lkey = 0; 366 367 err = mlx4_SW2HW_MPT(dev, mailbox, key); 367 368 } 368 369
+1
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 569 569 570 570 unsigned long state; 571 571 int ix; 572 + unsigned int hw_mtu; 572 573 573 574 struct net_dim dim; /* Dynamic Interrupt Moderation */ 574 575
+15 -21
drivers/net/ethernet/mellanox/mlx5/core/en/port.c
··· 88 88 89 89 eth_proto_oper = MLX5_GET(ptys_reg, out, eth_proto_oper); 90 90 *speed = mlx5e_port_ptys2speed(eth_proto_oper); 91 - if (!(*speed)) { 92 - mlx5_core_warn(mdev, "cannot get port speed\n"); 91 + if (!(*speed)) 93 92 err = -EINVAL; 94 - } 95 93 96 94 return err; 97 95 } ··· 256 258 case 40000: 257 259 if (!write) 258 260 *fec_policy = MLX5_GET(pplm_reg, pplm, 259 - fec_override_cap_10g_40g); 261 + fec_override_admin_10g_40g); 260 262 else 261 263 MLX5_SET(pplm_reg, pplm, 262 264 fec_override_admin_10g_40g, *fec_policy); ··· 308 310 case 10000: 309 311 case 40000: 310 312 *fec_cap = MLX5_GET(pplm_reg, pplm, 311 - fec_override_admin_10g_40g); 313 + fec_override_cap_10g_40g); 312 314 break; 313 315 case 25000: 314 316 *fec_cap = MLX5_GET(pplm_reg, pplm, ··· 392 394 393 395 int mlx5e_set_fec_mode(struct mlx5_core_dev *dev, u8 fec_policy) 394 396 { 397 + u8 fec_policy_nofec = BIT(MLX5E_FEC_NOFEC); 395 398 bool fec_mode_not_supp_in_speed = false; 396 - u8 no_fec_policy = BIT(MLX5E_FEC_NOFEC); 397 399 u32 out[MLX5_ST_SZ_DW(pplm_reg)] = {}; 398 400 u32 in[MLX5_ST_SZ_DW(pplm_reg)] = {}; 399 401 int sz = MLX5_ST_SZ_BYTES(pplm_reg); 400 - u32 current_fec_speed; 402 + u8 fec_policy_auto = 0; 401 403 u8 fec_caps = 0; 402 404 int err; 403 405 int i; ··· 413 415 if (err) 414 416 return err; 415 417 416 - err = mlx5e_port_linkspeed(dev, &current_fec_speed); 417 - if (err) 418 - return err; 418 + MLX5_SET(pplm_reg, out, local_port, 1); 419 419 420 - memset(in, 0, sz); 421 - MLX5_SET(pplm_reg, in, local_port, 1); 422 - for (i = 0; i < MLX5E_FEC_SUPPORTED_SPEEDS && !!fec_policy; i++) { 420 + for (i = 0; i < MLX5E_FEC_SUPPORTED_SPEEDS; i++) { 423 421 mlx5e_get_fec_cap_field(out, &fec_caps, fec_supported_speeds[i]); 424 - /* policy supported for link speed */ 425 - if (!!(fec_caps & fec_policy)) { 426 - mlx5e_fec_admin_field(in, &fec_policy, 1, 422 + /* policy supported for link speed, or policy is auto */ 423 + if (fec_caps & fec_policy || fec_policy == fec_policy_auto) { 
424 + mlx5e_fec_admin_field(out, &fec_policy, 1, 427 425 fec_supported_speeds[i]); 428 426 } else { 429 - if (fec_supported_speeds[i] == current_fec_speed) 430 - return -EOPNOTSUPP; 431 - mlx5e_fec_admin_field(in, &no_fec_policy, 1, 432 - fec_supported_speeds[i]); 427 + /* turn off FEC if supported. Else, leave it the same */ 428 + if (fec_caps & fec_policy_nofec) 429 + mlx5e_fec_admin_field(out, &fec_policy_nofec, 1, 430 + fec_supported_speeds[i]); 433 431 fec_mode_not_supp_in_speed = true; 434 432 } 435 433 } ··· 435 441 "FEC policy 0x%x is not supported for some speeds", 436 442 fec_policy); 437 443 438 - return mlx5_core_access_reg(dev, in, sz, out, sz, MLX5_REG_PPLM, 0, 1); 444 + return mlx5_core_access_reg(dev, out, sz, out, sz, MLX5_REG_PPLM, 0, 1); 439 445 }
+3 -1
drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
··· 130 130 int err; 131 131 132 132 err = mlx5e_port_linkspeed(priv->mdev, &speed); 133 - if (err) 133 + if (err) { 134 + mlx5_core_warn(priv->mdev, "cannot get port speed\n"); 134 135 return 0; 136 + } 135 137 136 138 xoff = (301 + 216 * priv->dcbx.cable_len / 100) * speed / 1000 + 272 * mtu / 100; 137 139
+1 -2
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
··· 843 843 ethtool_link_ksettings_add_link_mode(link_ksettings, supported, 844 844 Autoneg); 845 845 846 - err = get_fec_supported_advertised(mdev, link_ksettings); 847 - if (err) 846 + if (get_fec_supported_advertised(mdev, link_ksettings)) 848 847 netdev_dbg(netdev, "%s: FEC caps query failed: %d\n", 849 848 __func__, err); 850 849
+31 -6
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 502 502 rq->channel = c; 503 503 rq->ix = c->ix; 504 504 rq->mdev = mdev; 505 + rq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu); 505 506 rq->stats = &c->priv->channel_stats[c->ix].rq; 506 507 507 508 rq->xdp_prog = params->xdp_prog ? bpf_prog_inc(params->xdp_prog) : NULL; ··· 1624 1623 int err; 1625 1624 u32 i; 1626 1625 1626 + err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn_not_used, &irqn); 1627 + if (err) 1628 + return err; 1629 + 1627 1630 err = mlx5_cqwq_create(mdev, &param->wq, param->cqc, &cq->wq, 1628 1631 &cq->wq_ctrl); 1629 1632 if (err) 1630 1633 return err; 1631 - 1632 - mlx5_vector2eqn(mdev, param->eq_ix, &eqn_not_used, &irqn); 1633 1634 1634 1635 mcq->cqe_sz = 64; 1635 1636 mcq->set_ci_db = cq->wq_ctrl.db.db; ··· 1690 1687 int eqn; 1691 1688 int err; 1692 1689 1690 + err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn, &irqn_not_used); 1691 + if (err) 1692 + return err; 1693 + 1693 1694 inlen = MLX5_ST_SZ_BYTES(create_cq_in) + 1694 1695 sizeof(u64) * cq->wq_ctrl.buf.npages; 1695 1696 in = kvzalloc(inlen, GFP_KERNEL); ··· 1706 1699 1707 1700 mlx5_fill_page_frag_array(&cq->wq_ctrl.buf, 1708 1701 (__be64 *)MLX5_ADDR_OF(create_cq_in, in, pas)); 1709 - 1710 - mlx5_vector2eqn(mdev, param->eq_ix, &eqn, &irqn_not_used); 1711 1702 1712 1703 MLX5_SET(cqc, cqc, cq_period_mode, param->cq_period_mode); 1713 1704 MLX5_SET(cqc, cqc, c_eqn, eqn); ··· 1926 1921 int err; 1927 1922 int eqn; 1928 1923 1924 + err = mlx5_vector2eqn(priv->mdev, ix, &eqn, &irq); 1925 + if (err) 1926 + return err; 1927 + 1929 1928 c = kvzalloc_node(sizeof(*c), GFP_KERNEL, cpu_to_node(cpu)); 1930 1929 if (!c) 1931 1930 return -ENOMEM; ··· 1946 1937 c->xdp = !!params->xdp_prog; 1947 1938 c->stats = &priv->channel_stats[ix].ch; 1948 1939 1949 - mlx5_vector2eqn(priv->mdev, ix, &eqn, &irq); 1950 1940 c->irq_desc = irq_to_desc(irq); 1951 1941 1952 1942 netif_napi_add(netdev, &c->napi, mlx5e_napi_poll, 64); ··· 3582 3574 return 0; 3583 3575 } 3584 3576 3577 + #ifdef CONFIG_MLX5_ESWITCH 3585 3578 
static int set_feature_tc_num_filters(struct net_device *netdev, bool enable) 3586 3579 { 3587 3580 struct mlx5e_priv *priv = netdev_priv(netdev); ··· 3595 3586 3596 3587 return 0; 3597 3588 } 3589 + #endif 3598 3590 3599 3591 static int set_feature_rx_all(struct net_device *netdev, bool enable) 3600 3592 { ··· 3694 3684 err |= MLX5E_HANDLE_FEATURE(NETIF_F_LRO, set_feature_lro); 3695 3685 err |= MLX5E_HANDLE_FEATURE(NETIF_F_HW_VLAN_CTAG_FILTER, 3696 3686 set_feature_cvlan_filter); 3687 + #ifdef CONFIG_MLX5_ESWITCH 3697 3688 err |= MLX5E_HANDLE_FEATURE(NETIF_F_HW_TC, set_feature_tc_num_filters); 3689 + #endif 3698 3690 err |= MLX5E_HANDLE_FEATURE(NETIF_F_RXALL, set_feature_rx_all); 3699 3691 err |= MLX5E_HANDLE_FEATURE(NETIF_F_RXFCS, set_feature_rx_fcs); 3700 3692 err |= MLX5E_HANDLE_FEATURE(NETIF_F_HW_VLAN_CTAG_RX, set_feature_rx_vlan); ··· 3767 3755 } 3768 3756 3769 3757 if (params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) { 3758 + bool is_linear = mlx5e_rx_mpwqe_is_linear_skb(priv->mdev, &new_channels.params); 3770 3759 u8 ppw_old = mlx5e_mpwqe_log_pkts_per_wqe(params); 3771 3760 u8 ppw_new = mlx5e_mpwqe_log_pkts_per_wqe(&new_channels.params); 3772 3761 3773 - reset = reset && (ppw_old != ppw_new); 3762 + reset = reset && (is_linear || (ppw_old != ppw_new)); 3774 3763 } 3775 3764 3776 3765 if (!reset) { ··· 4691 4678 FT_CAP(modify_root) && 4692 4679 FT_CAP(identified_miss_table_mode) && 4693 4680 FT_CAP(flow_table_modify)) { 4681 + #ifdef CONFIG_MLX5_ESWITCH 4694 4682 netdev->hw_features |= NETIF_F_HW_TC; 4683 + #endif 4695 4684 #ifdef CONFIG_MLX5_EN_ARFS 4696 4685 netdev->hw_features |= NETIF_F_NTUPLE; 4697 4686 #endif ··· 5019 5004 int mlx5e_attach_netdev(struct mlx5e_priv *priv) 5020 5005 { 5021 5006 const struct mlx5e_profile *profile; 5007 + int max_nch; 5022 5008 int err; 5023 5009 5024 5010 profile = priv->profile; 5025 5011 clear_bit(MLX5E_STATE_DESTROYING, &priv->state); 5012 + 5013 + /* max number of channels may have changed */ 5014 + 
max_nch = mlx5e_get_max_num_channels(priv->mdev); 5015 + if (priv->channels.params.num_channels > max_nch) { 5016 + mlx5_core_warn(priv->mdev, "MLX5E: Reducing number of channels to %d\n", max_nch); 5017 + priv->channels.params.num_channels = max_nch; 5018 + mlx5e_build_default_indir_rqt(priv->channels.params.indirection_rqt, 5019 + MLX5E_INDIR_RQT_SIZE, max_nch); 5020 + } 5026 5021 5027 5022 err = profile->init_tx(priv); 5028 5023 if (err)
+6
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 1104 1104 u32 frag_size; 1105 1105 bool consumed; 1106 1106 1107 + /* Check packet size. Note LRO doesn't use linear SKB */ 1108 + if (unlikely(cqe_bcnt > rq->hw_mtu)) { 1109 + rq->stats->oversize_pkts_sw_drop++; 1110 + return NULL; 1111 + } 1112 + 1107 1113 va = page_address(di->page) + head_offset; 1108 1114 data = va + rx_headroom; 1109 1115 frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt32);
+10 -16
drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
··· 98 98 return 1; 99 99 } 100 100 101 - #ifdef CONFIG_INET 102 - /* loopback test */ 103 - #define MLX5E_TEST_PKT_SIZE (MLX5E_RX_MAX_HEAD - NET_IP_ALIGN) 104 - static const char mlx5e_test_text[ETH_GSTRING_LEN] = "MLX5E SELF TEST"; 105 - #define MLX5E_TEST_MAGIC 0x5AEED15C001ULL 106 - 107 101 struct mlx5ehdr { 108 102 __be32 version; 109 103 __be64 magic; 110 - char text[ETH_GSTRING_LEN]; 111 104 }; 105 + 106 + #ifdef CONFIG_INET 107 + /* loopback test */ 108 + #define MLX5E_TEST_PKT_SIZE (sizeof(struct ethhdr) + sizeof(struct iphdr) +\ 109 + sizeof(struct udphdr) + sizeof(struct mlx5ehdr)) 110 + #define MLX5E_TEST_MAGIC 0x5AEED15C001ULL 112 111 113 112 static struct sk_buff *mlx5e_test_get_udp_skb(struct mlx5e_priv *priv) 114 113 { ··· 116 117 struct ethhdr *ethh; 117 118 struct udphdr *udph; 118 119 struct iphdr *iph; 119 - int datalen, iplen; 120 - 121 - datalen = MLX5E_TEST_PKT_SIZE - 122 - (sizeof(*ethh) + sizeof(*iph) + sizeof(*udph)); 120 + int iplen; 123 121 124 122 skb = netdev_alloc_skb(priv->netdev, MLX5E_TEST_PKT_SIZE); 125 123 if (!skb) { ··· 145 149 /* Fill UDP header */ 146 150 udph->source = htons(9); 147 151 udph->dest = htons(9); /* Discard Protocol */ 148 - udph->len = htons(datalen + sizeof(struct udphdr)); 152 + udph->len = htons(sizeof(struct mlx5ehdr) + sizeof(struct udphdr)); 149 153 udph->check = 0; 150 154 151 155 /* Fill IP header */ ··· 153 157 iph->ttl = 32; 154 158 iph->version = 4; 155 159 iph->protocol = IPPROTO_UDP; 156 - iplen = sizeof(struct iphdr) + sizeof(struct udphdr) + datalen; 160 + iplen = sizeof(struct iphdr) + sizeof(struct udphdr) + 161 + sizeof(struct mlx5ehdr); 157 162 iph->tot_len = htons(iplen); 158 163 iph->frag_off = 0; 159 164 iph->saddr = 0; ··· 167 170 mlxh = skb_put(skb, sizeof(*mlxh)); 168 171 mlxh->version = 0; 169 172 mlxh->magic = cpu_to_be64(MLX5E_TEST_MAGIC); 170 - strlcpy(mlxh->text, mlx5e_test_text, sizeof(mlxh->text)); 171 - datalen -= sizeof(*mlxh); 172 - skb_put_zero(skb, datalen); 173 173 174 174 
skb->csum = 0; 175 175 skb->ip_summed = CHECKSUM_PARTIAL;
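The en_selftest.c change makes the loopback packet carry only the small mlx5ehdr payload, so every length field (UDP length, IP total length, allocation size) derives from one sizeof() sum rather than a padded constant. A sketch of that bookkeeping, with illustrative header sizes rather than the real uapi structs:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for struct mlx5ehdr after the patch drops the text field:
 * a 4-byte version plus an 8-byte magic, packed to 12 bytes. */
struct mlx5ehdr_sketch {
	uint32_t version;
	uint64_t magic;
} __attribute__((packed));

#define ETH_HLEN_SK 14	/* sizeof(struct ethhdr) */
#define IP_HLEN_SK  20	/* sizeof(struct iphdr)  */
#define UDP_HLEN_SK 8	/* sizeof(struct udphdr) */

static int udp_len(void)
{
	return UDP_HLEN_SK + (int)sizeof(struct mlx5ehdr_sketch);
}

static int ip_tot_len(void)
{
	return IP_HLEN_SK + udp_len();
}

/* MLX5E_TEST_PKT_SIZE after the patch: the exact on-wire size. */
static int test_pkt_size(void)
{
	return ETH_HLEN_SK + ip_tot_len();
}
```

Deriving all three lengths from the same sum is what keeps udph->len, iph->tot_len, and the skb allocation consistent by construction.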
+3
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
··· 83 83 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_wqe_err) }, 84 84 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_mpwqe_filler_cqes) }, 85 85 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_mpwqe_filler_strides) }, 86 + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_oversize_pkts_sw_drop) }, 86 87 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_buff_alloc_err) }, 87 88 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cqe_compress_blks) }, 88 89 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cqe_compress_pkts) }, ··· 162 161 s->rx_wqe_err += rq_stats->wqe_err; 163 162 s->rx_mpwqe_filler_cqes += rq_stats->mpwqe_filler_cqes; 164 163 s->rx_mpwqe_filler_strides += rq_stats->mpwqe_filler_strides; 164 + s->rx_oversize_pkts_sw_drop += rq_stats->oversize_pkts_sw_drop; 165 165 s->rx_buff_alloc_err += rq_stats->buff_alloc_err; 166 166 s->rx_cqe_compress_blks += rq_stats->cqe_compress_blks; 167 167 s->rx_cqe_compress_pkts += rq_stats->cqe_compress_pkts; ··· 1191 1189 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, wqe_err) }, 1192 1190 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, mpwqe_filler_cqes) }, 1193 1191 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, mpwqe_filler_strides) }, 1192 + { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, oversize_pkts_sw_drop) }, 1194 1193 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, buff_alloc_err) }, 1195 1194 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cqe_compress_blks) }, 1196 1195 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cqe_compress_pkts) },
+2
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
··· 96 96 u64 rx_wqe_err; 97 97 u64 rx_mpwqe_filler_cqes; 98 98 u64 rx_mpwqe_filler_strides; 99 + u64 rx_oversize_pkts_sw_drop; 99 100 u64 rx_buff_alloc_err; 100 101 u64 rx_cqe_compress_blks; 101 102 u64 rx_cqe_compress_pkts; ··· 194 193 u64 wqe_err; 195 194 u64 mpwqe_filler_cqes; 196 195 u64 mpwqe_filler_strides; 196 + u64 oversize_pkts_sw_drop; 197 197 u64 buff_alloc_err; 198 198 u64 cqe_compress_blks; 199 199 u64 cqe_compress_pkts;
+35 -34
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1447 1447 inner_headers); 1448 1448 } 1449 1449 1450 - if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) { 1451 - struct flow_dissector_key_eth_addrs *key = 1450 + if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) { 1451 + struct flow_dissector_key_basic *key = 1452 1452 skb_flow_dissector_target(f->dissector, 1453 - FLOW_DISSECTOR_KEY_ETH_ADDRS, 1453 + FLOW_DISSECTOR_KEY_BASIC, 1454 1454 f->key); 1455 - struct flow_dissector_key_eth_addrs *mask = 1455 + struct flow_dissector_key_basic *mask = 1456 1456 skb_flow_dissector_target(f->dissector, 1457 - FLOW_DISSECTOR_KEY_ETH_ADDRS, 1457 + FLOW_DISSECTOR_KEY_BASIC, 1458 1458 f->mask); 1459 + MLX5_SET(fte_match_set_lyr_2_4, headers_c, ethertype, 1460 + ntohs(mask->n_proto)); 1461 + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, 1462 + ntohs(key->n_proto)); 1459 1463 1460 - ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c, 1461 - dmac_47_16), 1462 - mask->dst); 1463 - ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, 1464 - dmac_47_16), 1465 - key->dst); 1466 - 1467 - ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c, 1468 - smac_47_16), 1469 - mask->src); 1470 - ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, 1471 - smac_47_16), 1472 - key->src); 1473 - 1474 - if (!is_zero_ether_addr(mask->src) || !is_zero_ether_addr(mask->dst)) 1464 + if (mask->n_proto) 1475 1465 *match_level = MLX5_MATCH_L2; 1476 1466 } 1477 1467 ··· 1495 1505 1496 1506 *match_level = MLX5_MATCH_L2; 1497 1507 } 1498 - } else { 1508 + } else if (*match_level != MLX5_MATCH_NONE) { 1499 1509 MLX5_SET(fte_match_set_lyr_2_4, headers_c, svlan_tag, 1); 1500 1510 MLX5_SET(fte_match_set_lyr_2_4, headers_c, cvlan_tag, 1); 1511 + *match_level = MLX5_MATCH_L2; 1501 1512 } 1502 1513 1503 1514 if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CVLAN)) { ··· 1536 1545 } 1537 1546 } 1538 1547 1539 - if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) { 
1540 - struct flow_dissector_key_basic *key = 1548 + if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) { 1549 + struct flow_dissector_key_eth_addrs *key = 1541 1550 skb_flow_dissector_target(f->dissector, 1542 - FLOW_DISSECTOR_KEY_BASIC, 1551 + FLOW_DISSECTOR_KEY_ETH_ADDRS, 1543 1552 f->key); 1544 - struct flow_dissector_key_basic *mask = 1553 + struct flow_dissector_key_eth_addrs *mask = 1545 1554 skb_flow_dissector_target(f->dissector, 1546 - FLOW_DISSECTOR_KEY_BASIC, 1555 + FLOW_DISSECTOR_KEY_ETH_ADDRS, 1547 1556 f->mask); 1548 - MLX5_SET(fte_match_set_lyr_2_4, headers_c, ethertype, 1549 - ntohs(mask->n_proto)); 1550 - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, 1551 - ntohs(key->n_proto)); 1552 1557 1553 - if (mask->n_proto) 1558 + ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c, 1559 + dmac_47_16), 1560 + mask->dst); 1561 + ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, 1562 + dmac_47_16), 1563 + key->dst); 1564 + 1565 + ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c, 1566 + smac_47_16), 1567 + mask->src); 1568 + ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, 1569 + smac_47_16), 1570 + key->src); 1571 + 1572 + if (!is_zero_ether_addr(mask->src) || !is_zero_ether_addr(mask->dst)) 1554 1573 *match_level = MLX5_MATCH_L2; 1555 1574 } 1556 1575 ··· 1587 1586 1588 1587 /* the HW doesn't need L3 inline to match on frag=no */ 1589 1588 if (!(key->flags & FLOW_DIS_IS_FRAGMENT)) 1590 - *match_level = MLX5_INLINE_MODE_L2; 1589 + *match_level = MLX5_MATCH_L2; 1591 1590 /* *** L2 attributes parsing up to here *** */ 1592 1591 else 1593 - *match_level = MLX5_INLINE_MODE_IP; 1592 + *match_level = MLX5_MATCH_L3; 1594 1593 } 1595 1594 } 1596 1595 ··· 2980 2979 if (!actions_match_supported(priv, exts, parse_attr, flow, extack)) 2981 2980 return -EOPNOTSUPP; 2982 2981 2983 - if (attr->out_count > 1 && !mlx5_esw_has_fwd_fdb(priv->mdev)) { 2982 + if (attr->mirror_count > 0 && 
!mlx5_esw_has_fwd_fdb(priv->mdev)) { 2984 2983 NL_SET_ERR_MSG_MOD(extack, 2985 2984 "current firmware doesn't support split rule for port mirroring"); 2986 2985 netdev_warn_once(priv->netdev, "current firmware doesn't support split rule for port mirroring\n");
+8 -2
drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
··· 83 83 }; 84 84 85 85 static const struct rhashtable_params rhash_sa = { 86 - .key_len = FIELD_SIZEOF(struct mlx5_fpga_ipsec_sa_ctx, hw_sa), 87 - .key_offset = offsetof(struct mlx5_fpga_ipsec_sa_ctx, hw_sa), 86 + /* Keep out "cmd" field from the key as it's 87 + * value is not constant during the lifetime 88 + * of the key object. 89 + */ 90 + .key_len = FIELD_SIZEOF(struct mlx5_fpga_ipsec_sa_ctx, hw_sa) - 91 + FIELD_SIZEOF(struct mlx5_ifc_fpga_ipsec_sa_v1, cmd), 92 + .key_offset = offsetof(struct mlx5_fpga_ipsec_sa_ctx, hw_sa) + 93 + FIELD_SIZEOF(struct mlx5_ifc_fpga_ipsec_sa_v1, cmd), 88 94 .head_offset = offsetof(struct mlx5_fpga_ipsec_sa_ctx, hash), 89 95 .automatic_shrinking = true, 90 96 .min_size = 1,
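The fpga/ipsec.c fix carves the mutable "cmd" field out of the rhashtable key by shrinking key_len and advancing key_offset past it. A sketch of that arithmetic with an illustrative layout (the real hw_sa is larger and the field macro is the kernel's FIELD_SIZEOF):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))

/* Illustrative stand-ins: "cmd" is the leading field whose value
 * changes during the object's lifetime, so it must not be hashed. */
struct sa_v1 {
	uint32_t cmd;
	uint8_t payload[60];
};

struct sa_ctx {
	struct sa_v1 hw_sa;	/* the key material */
	int hash;
};

/* Key covers hw_sa minus the cmd field: shorter, and offset past it. */
static const size_t key_len = FIELD_SIZEOF(struct sa_ctx, hw_sa) -
			      FIELD_SIZEOF(struct sa_v1, cmd);
static const size_t key_offset = offsetof(struct sa_ctx, hw_sa) +
				 FIELD_SIZEOF(struct sa_v1, cmd);
```

Hashing a field that mutates after insertion would make lookups miss the object, which is exactly what the patch prevents.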
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
··· 560 560 561 561 netif_carrier_off(epriv->netdev); 562 562 mlx5_fs_remove_rx_underlay_qpn(mdev, ipriv->qp.qpn); 563 - mlx5i_uninit_underlay_qp(epriv); 564 563 mlx5e_deactivate_priv_channels(epriv); 565 564 mlx5e_close_channels(&epriv->channels); 565 + mlx5i_uninit_underlay_qp(epriv); 566 566 unlock: 567 567 mutex_unlock(&epriv->state_lock); 568 568 return 0;
-1
drivers/net/ethernet/mellanox/mlxsw/spectrum.c
··· 3568 3568 burst_size = 7; 3569 3569 break; 3570 3570 case MLXSW_REG_HTGT_TRAP_GROUP_SP_IP2ME: 3571 - is_bytes = true; 3572 3571 rate = 4 * 1024; 3573 3572 burst_size = 4; 3574 3573 break;
+6 -5
drivers/net/ethernet/microchip/lan743x_main.c
··· 1672 1672 netif_wake_queue(adapter->netdev); 1673 1673 } 1674 1674 1675 - if (!napi_complete_done(napi, weight)) 1675 + if (!napi_complete(napi)) 1676 1676 goto done; 1677 1677 1678 1678 /* enable isr */ ··· 1681 1681 lan743x_csr_read(adapter, INT_STS); 1682 1682 1683 1683 done: 1684 - return weight; 1684 + return 0; 1685 1685 } 1686 1686 1687 1687 static void lan743x_tx_ring_cleanup(struct lan743x_tx *tx) ··· 1870 1870 tx->vector_flags = lan743x_intr_get_vector_flags(adapter, 1871 1871 INT_BIT_DMA_TX_ 1872 1872 (tx->channel_number)); 1873 - netif_napi_add(adapter->netdev, 1874 - &tx->napi, lan743x_tx_napi_poll, 1875 - tx->ring_size - 1); 1873 + netif_tx_napi_add(adapter->netdev, 1874 + &tx->napi, lan743x_tx_napi_poll, 1875 + tx->ring_size - 1); 1876 1876 napi_enable(&tx->napi); 1877 1877 1878 1878 data = 0; ··· 3017 3017 3018 3018 static const struct pci_device_id lan743x_pcidev_tbl[] = { 3019 3019 { PCI_DEVICE(PCI_VENDOR_ID_SMSC, PCI_DEVICE_ID_SMSC_LAN7430) }, 3020 + { PCI_DEVICE(PCI_VENDOR_ID_SMSC, PCI_DEVICE_ID_SMSC_LAN7431) }, 3020 3021 { 0, } 3021 3022 }; 3022 3023
+1
drivers/net/ethernet/microchip/lan743x_main.h
··· 548 548 /* SMSC acquired EFAR late 1990's, MCHP acquired SMSC 2012 */ 549 549 #define PCI_VENDOR_ID_SMSC PCI_VENDOR_ID_EFAR 550 550 #define PCI_DEVICE_ID_SMSC_LAN7430 (0x7430) 551 + #define PCI_DEVICE_ID_SMSC_LAN7431 (0x7431) 551 552 552 553 #define PCI_CONFIG_LENGTH (0x1000) 553 554
+7 -7
drivers/net/ethernet/qlogic/qed/qed_dcbx.c
··· 191 191 static void 192 192 qed_dcbx_set_params(struct qed_dcbx_results *p_data, 193 193 struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, 194 - bool enable, u8 prio, u8 tc, 194 + bool app_tlv, bool enable, u8 prio, u8 tc, 195 195 enum dcbx_protocol_type type, 196 196 enum qed_pci_personality personality) 197 197 { ··· 210 210 p_data->arr[type].dont_add_vlan0 = true; 211 211 212 212 /* QM reconf data */ 213 - if (p_hwfn->hw_info.personality == personality) 213 + if (app_tlv && p_hwfn->hw_info.personality == personality) 214 214 qed_hw_info_set_offload_tc(&p_hwfn->hw_info, tc); 215 215 216 216 /* Configure dcbx vlan priority in doorbell block for roce EDPM */ ··· 225 225 static void 226 226 qed_dcbx_update_app_info(struct qed_dcbx_results *p_data, 227 227 struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, 228 - bool enable, u8 prio, u8 tc, 228 + bool app_tlv, bool enable, u8 prio, u8 tc, 229 229 enum dcbx_protocol_type type) 230 230 { 231 231 enum qed_pci_personality personality; ··· 240 240 241 241 personality = qed_dcbx_app_update[i].personality; 242 242 243 - qed_dcbx_set_params(p_data, p_hwfn, p_ptt, enable, 243 + qed_dcbx_set_params(p_data, p_hwfn, p_ptt, app_tlv, enable, 244 244 prio, tc, type, personality); 245 245 } 246 246 } ··· 319 319 enable = true; 320 320 } 321 321 322 - qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable, 323 - priority, tc, type); 322 + qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, true, 323 + enable, priority, tc, type); 324 324 } 325 325 } 326 326 ··· 341 341 continue; 342 342 343 343 enable = (type == DCBX_PROTOCOL_ETH) ? false : !!dcbx_version; 344 - qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable, 344 + qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, false, enable, 345 345 priority, tc, type); 346 346 } 347 347
+1 -1
drivers/net/ethernet/qlogic/qed/qed_debug.c
··· 6071 6071 "no error", 6072 6072 "length error", 6073 6073 "function disabled", 6074 - "VF sent command to attnetion address", 6074 + "VF sent command to attention address", 6075 6075 "host sent prod update command", 6076 6076 "read of during interrupt register while in MIMD mode", 6077 6077 "access to PXP BAR reserved address",
+35 -9
drivers/net/ethernet/qlogic/qed/qed_dev.c
··· 185 185 qed_iscsi_free(p_hwfn); 186 186 qed_ooo_free(p_hwfn); 187 187 } 188 + 189 + if (QED_IS_RDMA_PERSONALITY(p_hwfn)) 190 + qed_rdma_info_free(p_hwfn); 191 + 188 192 qed_iov_free(p_hwfn); 189 193 qed_l2_free(p_hwfn); 190 194 qed_dmae_info_free(p_hwfn); ··· 485 481 struct qed_qm_info *qm_info = &p_hwfn->qm_info; 486 482 487 483 /* Can't have multiple flags set here */ 488 - if (bitmap_weight((unsigned long *)&pq_flags, sizeof(pq_flags)) > 1) 484 + if (bitmap_weight((unsigned long *)&pq_flags, 485 + sizeof(pq_flags) * BITS_PER_BYTE) > 1) { 486 + DP_ERR(p_hwfn, "requested multiple pq flags 0x%x\n", pq_flags); 489 487 goto err; 488 + } 489 + 490 + if (!(qed_get_pq_flags(p_hwfn) & pq_flags)) { 491 + DP_ERR(p_hwfn, "pq flag 0x%x is not set\n", pq_flags); 492 + goto err; 493 + } 490 494 491 495 switch (pq_flags) { 492 496 case PQ_FLAGS_RLS: ··· 518 506 } 519 507 520 508 err: 521 - DP_ERR(p_hwfn, "BAD pq flags %d\n", pq_flags); 522 - return NULL; 509 + return &qm_info->start_pq; 523 510 } 524 511 525 512 /* save pq index in qm info */ ··· 542 531 { 543 532 u8 max_tc = qed_init_qm_get_num_tcs(p_hwfn); 544 533 534 + if (max_tc == 0) { 535 + DP_ERR(p_hwfn, "pq with flag 0x%lx do not exist\n", 536 + PQ_FLAGS_MCOS); 537 + return p_hwfn->qm_info.start_pq; 538 + } 539 + 545 540 if (tc > max_tc) 546 541 DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc); 547 542 548 - return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + tc; 543 + return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + (tc % max_tc); 549 544 } 550 545 551 546 u16 qed_get_cm_pq_idx_vf(struct qed_hwfn *p_hwfn, u16 vf) 552 547 { 553 548 u16 max_vf = qed_init_qm_get_num_vfs(p_hwfn); 554 549 550 + if (max_vf == 0) { 551 + DP_ERR(p_hwfn, "pq with flag 0x%lx do not exist\n", 552 + PQ_FLAGS_VFS); 553 + return p_hwfn->qm_info.start_pq; 554 + } 555 + 555 556 if (vf > max_vf) 556 557 DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf); 557 558 558 - return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + vf; 559 + 
return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + (vf % max_vf); 559 560 } 560 561 561 562 u16 qed_get_cm_pq_idx_ofld_mtc(struct qed_hwfn *p_hwfn, u8 tc) ··· 1100 1077 if (rc) 1101 1078 goto alloc_err; 1102 1079 rc = qed_ooo_alloc(p_hwfn); 1080 + if (rc) 1081 + goto alloc_err; 1082 + } 1083 + 1084 + if (QED_IS_RDMA_PERSONALITY(p_hwfn)) { 1085 + rc = qed_rdma_info_alloc(p_hwfn); 1103 1086 if (rc) 1104 1087 goto alloc_err; 1105 1088 } ··· 2131 2102 if (!p_ptt) 2132 2103 return -EAGAIN; 2133 2104 2134 - /* If roce info is allocated it means roce is initialized and should 2135 - * be enabled in searcher. 2136 - */ 2137 2105 if (p_hwfn->p_rdma_info && 2138 - p_hwfn->b_rdma_enabled_in_prs) 2106 + p_hwfn->p_rdma_info->active && p_hwfn->b_rdma_enabled_in_prs) 2139 2107 qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0x1); 2140 2108 2141 2109 /* Re-open incoming traffic */
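The qed_dev.c hunk fixes a bits-versus-bytes bug: bitmap_weight() takes a length in *bits*, but was being passed sizeof(pq_flags), a length in *bytes*, so only the low byte's worth of bits was ever inspected. A sketch with a local popcount standing in for bitmap_weight():

```c
#include <assert.h>
#include <limits.h>

static unsigned long flags = 0x300;	/* bits 8 and 9 set */

/* Counts set bits in the first nbits bits of v, like bitmap_weight()
 * on a single-word bitmap. */
static int weight(unsigned long v, unsigned int nbits)
{
	unsigned int i;
	int w = 0;

	for (i = 0; i < nbits && i < sizeof(v) * CHAR_BIT; i++)
		w += (int)((v >> i) & 1UL);
	return w;
}
```

Passing sizeof(flags) scans only bits 0..7 and misses both set bits; multiplying by BITS_PER_BYTE (CHAR_BIT here) scans the whole word, which is the one-line fix in the hunk.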
+8 -3
drivers/net/ethernet/qlogic/qed/qed_fcoe.c
··· 147 147 "Cannot satisfy CQ amount. CQs requested %d, CQs available %d. Aborting function start\n", 148 148 fcoe_pf_params->num_cqs, 149 149 p_hwfn->hw_info.feat_num[QED_FCOE_CQ]); 150 - return -EINVAL; 150 + rc = -EINVAL; 151 + goto err; 151 152 } 152 153 153 154 p_data->mtu = cpu_to_le16(fcoe_pf_params->mtu); ··· 157 156 158 157 rc = qed_cxt_acquire_cid(p_hwfn, PROTOCOLID_FCOE, &dummy_cid); 159 158 if (rc) 160 - return rc; 159 + goto err; 161 160 162 161 cxt_info.iid = dummy_cid; 163 162 rc = qed_cxt_get_cid_info(p_hwfn, &cxt_info); 164 163 if (rc) { 165 164 DP_NOTICE(p_hwfn, "Cannot find context info for dummy cid=%d\n", 166 165 dummy_cid); 167 - return rc; 166 + goto err; 168 167 } 169 168 p_cxt = cxt_info.p_cxt; 170 169 SET_FIELD(p_cxt->tstorm_ag_context.flags3, ··· 240 239 241 240 rc = qed_spq_post(p_hwfn, p_ent, NULL); 242 241 242 + return rc; 243 + 244 + err: 245 + qed_sp_destroy_request(p_hwfn, p_ent); 243 246 return rc; 244 247 } 245 248
+2
drivers/net/ethernet/qlogic/qed/qed_int.c
··· 992 992 */ 993 993 do { 994 994 index = p_sb_attn->sb_index; 995 + /* finish reading index before the loop condition */ 996 + dma_rmb(); 995 997 attn_bits = le32_to_cpu(p_sb_attn->atten_bits); 996 998 attn_acks = le32_to_cpu(p_sb_attn->atten_ack); 997 999 } while (index != p_sb_attn->sb_index);
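The qed_int.c hunk inserts a dma_rmb() inside a snapshot-and-retry loop: the producer index is read, the payload is read, and the whole read repeats if the index moved in between; the barrier guarantees the payload reads are not hoisted before the index read. A sketch of the loop shape (single-threaded here, so the barrier's effect cannot be demonstrated, only the retry structure):

```c
#include <assert.h>

struct attn_sb {
	unsigned short sb_index;	/* producer-updated index */
	unsigned int atten_bits;	/* payload guarded by the index */
};

static struct attn_sb sb = { 3, 0xabcd };

/* Re-read until the index is stable across the payload read. In the
 * kernel, dma_rmb() sits between the two reads so the index read
 * completes before the payload reads begin. */
static unsigned int read_stable(const volatile struct attn_sb *p)
{
	unsigned short index;
	unsigned int bits;

	do {
		index = p->sb_index;
		/* dma_rmb() here in qed_int.c */
		bits = p->atten_bits;
	} while (index != p->sb_index);

	return bits;
}
```

Without the barrier, a reordered payload read could pair stale attention bits with a fresh index and the loop condition would wrongly declare the snapshot consistent.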
+1
drivers/net/ethernet/qlogic/qed/qed_iscsi.c
··· 200 200 "Cannot satisfy CQ amount. Queues requested %d, CQs available %d. Aborting function start\n", 201 201 p_params->num_queues, 202 202 p_hwfn->hw_info.feat_num[QED_ISCSI_CQ]); 203 + qed_sp_destroy_request(p_hwfn, p_ent); 203 204 return -EINVAL; 204 205 } 205 206
+8 -4
drivers/net/ethernet/qlogic/qed/qed_l2.c
··· 740 740 741 741 rc = qed_sp_vport_update_rss(p_hwfn, p_ramrod, p_rss_params); 742 742 if (rc) { 743 - /* Return spq entry which is taken in qed_sp_init_request()*/ 744 - qed_spq_return_entry(p_hwfn, p_ent); 743 + qed_sp_destroy_request(p_hwfn, p_ent); 745 744 return rc; 746 745 } 747 746 ··· 1354 1355 DP_NOTICE(p_hwfn, 1355 1356 "%d is not supported yet\n", 1356 1357 p_filter_cmd->opcode); 1358 + qed_sp_destroy_request(p_hwfn, *pp_ent); 1357 1359 return -EINVAL; 1358 1360 } 1359 1361 ··· 2056 2056 } else { 2057 2057 rc = qed_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id); 2058 2058 if (rc) 2059 - return rc; 2059 + goto err; 2060 2060 2061 2061 if (p_params->qid != QED_RFS_NTUPLE_QID_RSS) { 2062 2062 rc = qed_fw_l2_queue(p_hwfn, p_params->qid, 2063 2063 &abs_rx_q_id); 2064 2064 if (rc) 2065 - return rc; 2065 + goto err; 2066 2066 2067 2067 p_ramrod->rx_qid_valid = 1; 2068 2068 p_ramrod->rx_qid = cpu_to_le16(abs_rx_q_id); ··· 2083 2083 (u64)p_params->addr, p_params->length); 2084 2084 2085 2085 return qed_spq_post(p_hwfn, p_ent, NULL); 2086 + 2087 + err: 2088 + qed_sp_destroy_request(p_hwfn, p_ent); 2089 + return rc; 2086 2090 } 2087 2091 2088 2092 int qed_get_rxq_coalesce(struct qed_hwfn *p_hwfn,
+1 -1
drivers/net/ethernet/qlogic/qed/qed_main.c
··· 1782 1782 return -EBUSY; 1783 1783 } 1784 1784 rc = qed_mcp_drain(hwfn, ptt); 1785 + qed_ptt_release(hwfn, ptt); 1785 1786 if (rc) 1786 1787 return rc; 1787 - qed_ptt_release(hwfn, ptt); 1788 1788 } 1789 1789 1790 1790 return 0;
+5 -2
drivers/net/ethernet/qlogic/qed/qed_mcp.c
··· 1944 1944 struct qed_ptt *p_ptt, u32 *p_speed_mask) 1945 1945 { 1946 1946 u32 transceiver_type, transceiver_state; 1947 + int ret; 1947 1948 1948 - qed_mcp_get_transceiver_data(p_hwfn, p_ptt, &transceiver_state, 1949 - &transceiver_type); 1949 + ret = qed_mcp_get_transceiver_data(p_hwfn, p_ptt, &transceiver_state, 1950 + &transceiver_type); 1951 + if (ret) 1952 + return ret; 1950 1953 1951 1954 if (qed_is_transceiver_ready(transceiver_state, transceiver_type) == 1952 1955 false)
+30 -21
drivers/net/ethernet/qlogic/qed/qed_rdma.c
··· 140 140 return FEAT_NUM((struct qed_hwfn *)p_hwfn, QED_PF_L2_QUE) + rel_sb_id; 141 141 } 142 142 143 - static int qed_rdma_alloc(struct qed_hwfn *p_hwfn, 144 - struct qed_ptt *p_ptt, 145 - struct qed_rdma_start_in_params *params) 143 + int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn) 146 144 { 147 145 struct qed_rdma_info *p_rdma_info; 146 + 147 + p_rdma_info = kzalloc(sizeof(*p_rdma_info), GFP_KERNEL); 148 + if (!p_rdma_info) 149 + return -ENOMEM; 150 + 151 + spin_lock_init(&p_rdma_info->lock); 152 + 153 + p_hwfn->p_rdma_info = p_rdma_info; 154 + return 0; 155 + } 156 + 157 + void qed_rdma_info_free(struct qed_hwfn *p_hwfn) 158 + { 159 + kfree(p_hwfn->p_rdma_info); 160 + p_hwfn->p_rdma_info = NULL; 161 + } 162 + 163 + static int qed_rdma_alloc(struct qed_hwfn *p_hwfn) 164 + { 165 + struct qed_rdma_info *p_rdma_info = p_hwfn->p_rdma_info; 148 166 u32 num_cons, num_tasks; 149 167 int rc = -ENOMEM; 150 168 151 169 DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Allocating RDMA\n"); 152 170 153 - /* Allocate a struct with current pf rdma info */ 154 - p_rdma_info = kzalloc(sizeof(*p_rdma_info), GFP_KERNEL); 155 - if (!p_rdma_info) 156 - return rc; 157 - 158 - p_hwfn->p_rdma_info = p_rdma_info; 159 171 if (QED_IS_IWARP_PERSONALITY(p_hwfn)) 160 172 p_rdma_info->proto = PROTOCOLID_IWARP; 161 173 else ··· 195 183 /* Allocate a struct with device params and fill it */ 196 184 p_rdma_info->dev = kzalloc(sizeof(*p_rdma_info->dev), GFP_KERNEL); 197 185 if (!p_rdma_info->dev) 198 - goto free_rdma_info; 186 + return rc; 199 187 200 188 /* Allocate a struct with port params and fill it */ 201 189 p_rdma_info->port = kzalloc(sizeof(*p_rdma_info->port), GFP_KERNEL); ··· 310 298 kfree(p_rdma_info->port); 311 299 free_rdma_dev: 312 300 kfree(p_rdma_info->dev); 313 - free_rdma_info: 314 - kfree(p_rdma_info); 315 301 316 302 return rc; 317 303 } ··· 380 370 381 371 kfree(p_rdma_info->port); 382 372 kfree(p_rdma_info->dev); 383 - 384 - kfree(p_rdma_info); 385 373 } 386 374 387 375 static void 
qed_rdma_free_tid(void *rdma_cxt, u32 itid) ··· 687 679 688 680 DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "RDMA setup\n"); 689 681 690 - spin_lock_init(&p_hwfn->p_rdma_info->lock); 691 - 692 682 qed_rdma_init_devinfo(p_hwfn, params); 693 683 qed_rdma_init_port(p_hwfn); 694 684 qed_rdma_init_events(p_hwfn, params); ··· 733 727 /* Disable RoCE search */ 734 728 qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0); 735 729 p_hwfn->b_rdma_enabled_in_prs = false; 736 - 730 + p_hwfn->p_rdma_info->active = 0; 737 731 qed_wr(p_hwfn, p_ptt, PRS_REG_ROCE_DEST_QP_MAX_PF, 0); 738 732 739 733 ll2_ethertype_en = qed_rd(p_hwfn, p_ptt, PRS_REG_LIGHT_L2_ETHERTYPE_EN); ··· 1242 1236 u8 max_stats_queues; 1243 1237 int rc; 1244 1238 1245 - if (!rdma_cxt || !in_params || !out_params || !p_hwfn->p_rdma_info) { 1239 + if (!rdma_cxt || !in_params || !out_params || 1240 + !p_hwfn->p_rdma_info->active) { 1246 1241 DP_ERR(p_hwfn->cdev, 1247 1242 "qed roce create qp failed due to NULL entry (rdma_cxt=%p, in=%p, out=%p, roce_info=?\n", 1248 1243 rdma_cxt, in_params, out_params); ··· 1521 1514 default: 1522 1515 rc = -EINVAL; 1523 1516 DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "rc = %d\n", rc); 1517 + qed_sp_destroy_request(p_hwfn, p_ent); 1524 1518 return rc; 1525 1519 } 1526 1520 SET_FIELD(p_ramrod->flags1, ··· 1809 1801 { 1810 1802 bool result; 1811 1803 1812 - /* if rdma info has not been allocated, naturally there are no qps */ 1813 - if (!p_hwfn->p_rdma_info) 1804 + /* if rdma wasn't activated yet, naturally there are no qps */ 1805 + if (!p_hwfn->p_rdma_info->active) 1814 1806 return false; 1815 1807 1816 1808 spin_lock_bh(&p_hwfn->p_rdma_info->lock); ··· 1856 1848 if (!p_ptt) 1857 1849 goto err; 1858 1850 1859 - rc = qed_rdma_alloc(p_hwfn, p_ptt, params); 1851 + rc = qed_rdma_alloc(p_hwfn); 1860 1852 if (rc) 1861 1853 goto err1; 1862 1854 ··· 1865 1857 goto err2; 1866 1858 1867 1859 qed_ptt_release(p_hwfn, p_ptt); 1860 + p_hwfn->p_rdma_info->active = 1; 1868 1861 1869 1862 return rc; 1870 1863
+5
drivers/net/ethernet/qlogic/qed/qed_rdma.h
··· 102 102 u16 max_queue_zones; 103 103 enum protocol_type proto; 104 104 struct qed_iwarp_info iwarp; 105 + u8 active:1; 105 106 }; 106 107 107 108 struct qed_rdma_qp { ··· 177 176 #if IS_ENABLED(CONFIG_QED_RDMA) 178 177 void qed_rdma_dpm_bar(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt); 179 178 void qed_rdma_dpm_conf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt); 179 + int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn); 180 + void qed_rdma_info_free(struct qed_hwfn *p_hwfn); 180 181 #else 181 182 static inline void qed_rdma_dpm_conf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) {} 182 183 static inline void qed_rdma_dpm_bar(struct qed_hwfn *p_hwfn, 183 184 struct qed_ptt *p_ptt) {} 185 + static inline int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn) {return -EINVAL;} 186 + static inline void qed_rdma_info_free(struct qed_hwfn *p_hwfn) {} 184 187 #endif 185 188 186 189 int
+1
drivers/net/ethernet/qlogic/qed/qed_roce.c
··· 745 745 DP_NOTICE(p_hwfn, 746 746 "qed destroy responder failed: cannot allocate memory (ramrod). rc = %d\n", 747 747 rc); 748 + qed_sp_destroy_request(p_hwfn, p_ent); 748 749 return rc; 749 750 } 750 751
+14
drivers/net/ethernet/qlogic/qed/qed_sp.h
··· 167 167 enum spq_mode comp_mode; 168 168 struct qed_spq_comp_cb comp_cb; 169 169 struct qed_spq_comp_done comp_done; /* SPQ_MODE_EBLOCK */ 170 + 171 + /* Posted entry for unlimited list entry in EBLOCK mode */ 172 + struct qed_spq_entry *post_ent; 170 173 }; 171 174 172 175 struct qed_eq { ··· 398 395 enum spq_mode comp_mode; 399 396 struct qed_spq_comp_cb *p_comp_data; 400 397 }; 398 + 399 + /** 400 + * @brief Returns a SPQ entry to the pool / frees the entry if allocated. 401 + * Should be called on in error flows after initializing the SPQ entry 402 + * and before posting it. 403 + * 404 + * @param p_hwfn 405 + * @param p_ent 406 + */ 407 + void qed_sp_destroy_request(struct qed_hwfn *p_hwfn, 408 + struct qed_spq_entry *p_ent); 401 409 402 410 int qed_sp_init_request(struct qed_hwfn *p_hwfn, 403 411 struct qed_spq_entry **pp_ent,
+20 -2
drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
··· 47 47 #include "qed_sp.h" 48 48 #include "qed_sriov.h" 49 49 50 + void qed_sp_destroy_request(struct qed_hwfn *p_hwfn, 51 + struct qed_spq_entry *p_ent) 52 + { 53 + /* qed_spq_get_entry() can either get an entry from the free_pool, 54 + * or, if no entries are left, allocate a new entry and add it to 55 + * the unlimited_pending list. 56 + */ 57 + if (p_ent->queue == &p_hwfn->p_spq->unlimited_pending) 58 + kfree(p_ent); 59 + else 60 + qed_spq_return_entry(p_hwfn, p_ent); 61 + } 62 + 50 63 int qed_sp_init_request(struct qed_hwfn *p_hwfn, 51 64 struct qed_spq_entry **pp_ent, 52 65 u8 cmd, u8 protocol, struct qed_sp_init_data *p_data) ··· 93 80 94 81 case QED_SPQ_MODE_BLOCK: 95 82 if (!p_data->p_comp_data) 96 - return -EINVAL; 83 + goto err; 97 84 98 85 p_ent->comp_cb.cookie = p_data->p_comp_data->cookie; 99 86 break; ··· 108 95 default: 109 96 DP_NOTICE(p_hwfn, "Unknown SPQE completion mode %d\n", 110 97 p_ent->comp_mode); 111 - return -EINVAL; 98 + goto err; 112 99 } 113 100 114 101 DP_VERBOSE(p_hwfn, QED_MSG_SPQ, ··· 122 109 memset(&p_ent->ramrod, 0, sizeof(p_ent->ramrod)); 123 110 124 111 return 0; 112 + 113 + err: 114 + qed_sp_destroy_request(p_hwfn, p_ent); 115 + 116 + return -EINVAL; 125 117 } 126 118 127 119 static enum tunnel_clss qed_tunn_clss_to_fw_clss(u8 type)
+35 -34
drivers/net/ethernet/qlogic/qed/qed_spq.c
··· 142 142 143 143 DP_INFO(p_hwfn, "Ramrod is stuck, requesting MCP drain\n"); 144 144 rc = qed_mcp_drain(p_hwfn, p_ptt); 145 + qed_ptt_release(p_hwfn, p_ptt); 145 146 if (rc) { 146 147 DP_NOTICE(p_hwfn, "MCP drain failed\n"); 147 148 goto err; ··· 151 150 /* Retry after drain */ 152 151 rc = __qed_spq_block(p_hwfn, p_ent, p_fw_ret, true); 153 152 if (!rc) 154 - goto out; 153 + return 0; 155 154 156 155 comp_done = (struct qed_spq_comp_done *)p_ent->comp_cb.cookie; 157 - if (comp_done->done == 1) 156 + if (comp_done->done == 1) { 158 157 if (p_fw_ret) 159 158 *p_fw_ret = comp_done->fw_return_code; 160 - out: 161 - qed_ptt_release(p_hwfn, p_ptt); 162 - return 0; 163 - 159 + return 0; 160 + } 164 161 err: 165 - qed_ptt_release(p_hwfn, p_ptt); 166 162 DP_NOTICE(p_hwfn, 167 163 "Ramrod is stuck [CID %08x cmd %02x protocol %02x echo %04x]\n", 168 164 le32_to_cpu(p_ent->elem.hdr.cid), ··· 683 685 /* EBLOCK responsible to free the allocated p_ent */ 684 686 if (p_ent->comp_mode != QED_SPQ_MODE_EBLOCK) 685 687 kfree(p_ent); 688 + else 689 + p_ent->post_ent = p_en2; 686 690 687 691 p_ent = p_en2; 688 692 } ··· 767 767 SPQ_HIGH_PRI_RESERVE_DEFAULT); 768 768 } 769 769 770 + /* Avoid overriding of SPQ entries when getting out-of-order completions, by 771 + * marking the completions in a bitmap and increasing the chain consumer only 772 + * for the first successive completed entries. 
773 + */ 774 + static void qed_spq_comp_bmap_update(struct qed_hwfn *p_hwfn, __le16 echo) 775 + { 776 + u16 pos = le16_to_cpu(echo) % SPQ_RING_SIZE; 777 + struct qed_spq *p_spq = p_hwfn->p_spq; 778 + 779 + __set_bit(pos, p_spq->p_comp_bitmap); 780 + while (test_bit(p_spq->comp_bitmap_idx, 781 + p_spq->p_comp_bitmap)) { 782 + __clear_bit(p_spq->comp_bitmap_idx, 783 + p_spq->p_comp_bitmap); 784 + p_spq->comp_bitmap_idx++; 785 + qed_chain_return_produced(&p_spq->chain); 786 + } 787 + } 788 + 770 789 int qed_spq_post(struct qed_hwfn *p_hwfn, 771 790 struct qed_spq_entry *p_ent, u8 *fw_return_code) 772 791 { ··· 843 824 p_ent->queue == &p_spq->unlimited_pending); 844 825 845 826 if (p_ent->queue == &p_spq->unlimited_pending) { 846 - /* This is an allocated p_ent which does not need to 847 - * return to pool. 848 - */ 827 + struct qed_spq_entry *p_post_ent = p_ent->post_ent; 828 + 849 829 kfree(p_ent); 850 - return rc; 830 + 831 + /* Return the entry which was actually posted */ 832 + p_ent = p_post_ent; 851 833 } 852 834 853 835 if (rc) ··· 862 842 spq_post_fail2: 863 843 spin_lock_bh(&p_spq->lock); 864 844 list_del(&p_ent->list); 865 - qed_chain_return_produced(&p_spq->chain); 845 + qed_spq_comp_bmap_update(p_hwfn, p_ent->elem.hdr.echo); 866 846 867 847 spq_post_fail: 868 848 /* return to the free pool */ ··· 894 874 spin_lock_bh(&p_spq->lock); 895 875 list_for_each_entry_safe(p_ent, tmp, &p_spq->completion_pending, list) { 896 876 if (p_ent->elem.hdr.echo == echo) { 897 - u16 pos = le16_to_cpu(echo) % SPQ_RING_SIZE; 898 - 899 877 list_del(&p_ent->list); 900 - 901 - /* Avoid overriding of SPQ entries when getting 902 - * out-of-order completions, by marking the completions 903 - * in a bitmap and increasing the chain consumer only 904 - * for the first successive completed entries. 
905 - */ 906 - __set_bit(pos, p_spq->p_comp_bitmap); 907 - 908 - while (test_bit(p_spq->comp_bitmap_idx, 909 - p_spq->p_comp_bitmap)) { 910 - __clear_bit(p_spq->comp_bitmap_idx, 911 - p_spq->p_comp_bitmap); 912 - p_spq->comp_bitmap_idx++; 913 - qed_chain_return_produced(&p_spq->chain); 914 - } 915 - 878 + qed_spq_comp_bmap_update(p_hwfn, echo); 916 879 p_spq->comp_count++; 917 880 found = p_ent; 918 881 break; ··· 934 931 QED_MSG_SPQ, 935 932 "Got a completion without a callback function\n"); 936 933 937 - if ((found->comp_mode != QED_SPQ_MODE_EBLOCK) || 938 - (found->queue == &p_spq->unlimited_pending)) 934 + if (found->comp_mode != QED_SPQ_MODE_EBLOCK) 939 935 /* EBLOCK is responsible for returning its own entry into the 940 - * free list, unless it originally added the entry into the 941 - * unlimited pending list. 936 + * free list. 942 937 */ 943 938 qed_spq_return_entry(p_hwfn, found); 944 939
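The factored-out qed_spq_comp_bmap_update() above handles out-of-order completions: each completion is marked in a bitmap, and the ring consumer advances only over an unbroken prefix of completed slots. A minimal sketch of that technique (a byte array stands in for the bitmap, a counter for qed_chain_return_produced()):

```c
#include <assert.h>

#define RING_SIZE 8

static unsigned char comp_map[RING_SIZE];	/* completed-slot marks */
static unsigned int comp_idx;	/* next slot expected to complete */
static unsigned int consumed;	/* ring entries returned so far */

/* Mark slot `echo` complete, then advance the consumer across every
 * contiguously completed slot starting at comp_idx. */
static void comp_bmap_update(unsigned int echo)
{
	comp_map[echo % RING_SIZE] = 1;
	while (comp_map[comp_idx % RING_SIZE]) {
		comp_map[comp_idx % RING_SIZE] = 0;
		comp_idx++;
		consumed++;	/* qed_chain_return_produced() */
	}
}
```

A completion for slot 2 before slots 0 and 1 advances nothing; once slot 0 completes the consumer moves one step, and slot 1's completion then releases slots 1 and 2 together, which is why an out-of-order completion can never overwrite a still-pending entry.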
+1
drivers/net/ethernet/qlogic/qed/qed_sriov.c
··· 101 101 default: 102 102 DP_NOTICE(p_hwfn, "Unknown VF personality %d\n", 103 103 p_hwfn->hw_info.personality); 104 + qed_sp_destroy_request(p_hwfn, p_ent); 104 105 return -EINVAL; 105 106 } 106 107
+5 -3
drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
··· 459 459 struct cmd_desc_type0 *first_desc, struct sk_buff *skb, 460 460 struct qlcnic_host_tx_ring *tx_ring) 461 461 { 462 - u8 l4proto, opcode = 0, hdr_len = 0; 462 + u8 l4proto, opcode = 0, hdr_len = 0, tag_vlan = 0; 463 463 u16 flags = 0, vlan_tci = 0; 464 464 int copied, offset, copy_len, size; 465 465 struct cmd_desc_type0 *hwdesc; ··· 472 472 flags = QLCNIC_FLAGS_VLAN_TAGGED; 473 473 vlan_tci = ntohs(vh->h_vlan_TCI); 474 474 protocol = ntohs(vh->h_vlan_encapsulated_proto); 475 + tag_vlan = 1; 475 476 } else if (skb_vlan_tag_present(skb)) { 476 477 flags = QLCNIC_FLAGS_VLAN_OOB; 477 478 vlan_tci = skb_vlan_tag_get(skb); 479 + tag_vlan = 1; 478 480 } 479 481 if (unlikely(adapter->tx_pvid)) { 480 - if (vlan_tci && !(adapter->flags & QLCNIC_TAGGING_ENABLED)) 482 + if (tag_vlan && !(adapter->flags & QLCNIC_TAGGING_ENABLED)) 481 483 return -EIO; 482 - if (vlan_tci && (adapter->flags & QLCNIC_TAGGING_ENABLED)) 484 + if (tag_vlan && (adapter->flags & QLCNIC_TAGGING_ENABLED)) 483 485 goto set_flags; 484 486 485 487 flags = QLCNIC_FLAGS_VLAN_OOB;
+3 -3
drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
··· 234 234 struct net_device *real_dev, 235 235 struct rmnet_endpoint *ep) 236 236 { 237 - struct rmnet_priv *priv; 237 + struct rmnet_priv *priv = netdev_priv(rmnet_dev); 238 238 int rc; 239 239 240 240 if (ep->egress_dev) ··· 247 247 rmnet_dev->hw_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM; 248 248 rmnet_dev->hw_features |= NETIF_F_SG; 249 249 250 + priv->real_dev = real_dev; 251 + 250 252 rc = register_netdevice(rmnet_dev); 251 253 if (!rc) { 252 254 ep->egress_dev = rmnet_dev; ··· 257 255 258 256 rmnet_dev->rtnl_link_ops = &rmnet_link_ops; 259 257 260 - priv = netdev_priv(rmnet_dev); 261 258 priv->mux_id = id; 262 - priv->real_dev = real_dev; 263 259 264 260 netdev_dbg(rmnet_dev, "rmnet dev created\n"); 265 261 }
+2 -1
drivers/net/ethernet/stmicro/stmmac/common.h
··· 365 365 366 366 /* GMAC TX FIFO is 8K, Rx FIFO is 16K */ 367 367 #define BUF_SIZE_16KiB 16384 368 - #define BUF_SIZE_8KiB 8192 368 + /* RX Buffer size must be < 8191 and multiple of 4/8/16 bytes */ 369 + #define BUF_SIZE_8KiB 8188 369 370 #define BUF_SIZE_4KiB 4096 370 371 #define BUF_SIZE_2KiB 2048 371 372
+1 -1
drivers/net/ethernet/stmicro/stmmac/descs_com.h
··· 31 31 /* Enhanced descriptors */ 32 32 static inline void ehn_desc_rx_set_on_ring(struct dma_desc *p, int end) 33 33 { 34 - p->des1 |= cpu_to_le32(((BUF_SIZE_8KiB - 1) 34 + p->des1 |= cpu_to_le32((BUF_SIZE_8KiB 35 35 << ERDES1_BUFFER2_SIZE_SHIFT) 36 36 & ERDES1_BUFFER2_SIZE_MASK); 37 37
+1 -1
drivers/net/ethernet/stmicro/stmmac/enh_desc.c
··· 262 262 int mode, int end) 263 263 { 264 264 p->des0 |= cpu_to_le32(RDES0_OWN); 265 - p->des1 |= cpu_to_le32((BUF_SIZE_8KiB - 1) & ERDES1_BUFFER1_SIZE_MASK); 265 + p->des1 |= cpu_to_le32(BUF_SIZE_8KiB & ERDES1_BUFFER1_SIZE_MASK); 266 266 267 267 if (mode == STMMAC_CHAIN_MODE) 268 268 ehn_desc_rx_set_on_chain(p);
+1 -1
drivers/net/ethernet/stmicro/stmmac/ring_mode.c
··· 140 140 static int set_16kib_bfsize(int mtu) 141 141 { 142 142 int ret = 0; 143 - if (unlikely(mtu >= BUF_SIZE_8KiB)) 143 + if (unlikely(mtu > BUF_SIZE_8KiB)) 144 144 ret = BUF_SIZE_16KiB; 145 145 return ret; 146 146 }
+1 -1
drivers/net/ethernet/via/via-velocity.c
··· 3605 3605 "tx_jumbo", 3606 3606 "rx_mac_control_frames", 3607 3607 "tx_mac_control_frames", 3608 - "rx_frame_alignement_errors", 3608 + "rx_frame_alignment_errors", 3609 3609 "rx_long_ok", 3610 3610 "rx_long_err", 3611 3611 "tx_sqe_errors",
+4 -3
drivers/net/fddi/defza.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 1 + // SPDX-License-Identifier: GPL-2.0+ 2 2 /* FDDI network adapter driver for DEC FDDIcontroller 700/700-C devices. 3 3 * 4 4 * Copyright (c) 2018 Maciej W. Rozycki ··· 56 56 #define DRV_VERSION "v.1.1.4" 57 57 #define DRV_RELDATE "Oct 6 2018" 58 58 59 - static char version[] = 59 + static const char version[] = 60 60 DRV_NAME ": " DRV_VERSION " " DRV_RELDATE " Maciej W. Rozycki\n"; 61 61 62 62 MODULE_AUTHOR("Maciej W. Rozycki <macro@linux-mips.org>"); ··· 784 784 static void fza_tx_smt(struct net_device *dev) 785 785 { 786 786 struct fza_private *fp = netdev_priv(dev); 787 - struct fza_buffer_tx __iomem *smt_tx_ptr, *skb_data_ptr; 787 + struct fza_buffer_tx __iomem *smt_tx_ptr; 788 788 int i, len; 789 789 u32 own; 790 790 ··· 799 799 800 800 if (!netif_queue_stopped(dev)) { 801 801 if (dev_nit_active(dev)) { 802 + struct fza_buffer_tx *skb_data_ptr; 802 803 struct sk_buff *skb; 803 804 804 805 /* Length must be a multiple of 4 as only word
+2 -1
drivers/net/fddi/defza.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 1 + /* SPDX-License-Identifier: GPL-2.0+ */ 2 2 /* FDDI network adapter driver for DEC FDDIcontroller 700/700-C devices. 3 3 * 4 4 * Copyright (c) 2018 Maciej W. Rozycki ··· 235 235 #define FZA_RING_CMD 0x200400 /* command ring address */ 236 236 #define FZA_RING_CMD_SIZE 0x40 /* command descriptor ring 237 237 * size 238 + */ 238 239 /* Command constants. */ 239 240 #define FZA_RING_CMD_MASK 0x7fffffff 240 241 #define FZA_RING_CMD_NOP 0x00000000 /* nop */
+16 -2
drivers/net/phy/broadcom.c
··· 92 92 return 0; 93 93 } 94 94 95 - static int bcm5481x_config(struct phy_device *phydev) 95 + static int bcm54xx_config_clock_delay(struct phy_device *phydev) 96 96 { 97 97 int rc, val; 98 98 ··· 429 429 ret = genphy_config_aneg(phydev); 430 430 431 431 /* Then we can set up the delay. */ 432 - bcm5481x_config(phydev); 432 + bcm54xx_config_clock_delay(phydev); 433 433 434 434 if (of_property_read_bool(np, "enet-phy-lane-swap")) { 435 435 /* Lane Swap - Undocumented register...magic! */ ··· 438 438 if (ret < 0) 439 439 return ret; 440 440 } 441 + 442 + return ret; 443 + } 444 + 445 + static int bcm54616s_config_aneg(struct phy_device *phydev) 446 + { 447 + int ret; 448 + 449 + /* Aneg firstly. */ 450 + ret = genphy_config_aneg(phydev); 451 + 452 + /* Then we can set up the delay. */ 453 + bcm54xx_config_clock_delay(phydev); 441 454 442 455 return ret; 443 456 } ··· 649 636 .features = PHY_GBIT_FEATURES, 650 637 .flags = PHY_HAS_INTERRUPT, 651 638 .config_init = bcm54xx_config_init, 639 + .config_aneg = bcm54616s_config_aneg, 652 640 .ack_interrupt = bcm_phy_ack_intr, 653 641 .config_intr = bcm_phy_config_intr, 654 642 }, {
+5 -5
drivers/net/phy/mdio-gpio.c
··· 63 63 * assume the pin serves as pull-up. If direction is 64 64 * output, the default value is high. 65 65 */ 66 - gpiod_set_value(bitbang->mdo, 1); 66 + gpiod_set_value_cansleep(bitbang->mdo, 1); 67 67 return; 68 68 } 69 69 ··· 78 78 struct mdio_gpio_info *bitbang = 79 79 container_of(ctrl, struct mdio_gpio_info, ctrl); 80 80 81 - return gpiod_get_value(bitbang->mdio); 81 + return gpiod_get_value_cansleep(bitbang->mdio); 82 82 } 83 83 84 84 static void mdio_set(struct mdiobb_ctrl *ctrl, int what) ··· 87 87 container_of(ctrl, struct mdio_gpio_info, ctrl); 88 88 89 89 if (bitbang->mdo) 90 - gpiod_set_value(bitbang->mdo, what); 90 + gpiod_set_value_cansleep(bitbang->mdo, what); 91 91 else 92 - gpiod_set_value(bitbang->mdio, what); 92 + gpiod_set_value_cansleep(bitbang->mdio, what); 93 93 } 94 94 95 95 static void mdc_set(struct mdiobb_ctrl *ctrl, int what) ··· 97 97 struct mdio_gpio_info *bitbang = 98 98 container_of(ctrl, struct mdio_gpio_info, ctrl); 99 99 100 - gpiod_set_value(bitbang->mdc, what); 100 + gpiod_set_value_cansleep(bitbang->mdc, what); 101 101 } 102 102 103 103 static const struct mdiobb_ops mdio_gpio_ops = {
+5 -9
drivers/net/phy/mscc.c
··· 810 810 811 811 phydev->mdix_ctrl = ETH_TP_MDI_AUTO; 812 812 mutex_lock(&phydev->lock); 813 - rc = phy_select_page(phydev, MSCC_PHY_PAGE_EXTENDED_2); 814 - if (rc < 0) 815 - goto out_unlock; 816 813 817 - reg_val = phy_read(phydev, MSCC_PHY_RGMII_CNTL); 818 - reg_val &= ~(RGMII_RX_CLK_DELAY_MASK); 819 - reg_val |= (RGMII_RX_CLK_DELAY_1_1_NS << RGMII_RX_CLK_DELAY_POS); 820 - phy_write(phydev, MSCC_PHY_RGMII_CNTL, reg_val); 814 + reg_val = RGMII_RX_CLK_DELAY_1_1_NS << RGMII_RX_CLK_DELAY_POS; 821 815 822 - out_unlock: 823 - rc = phy_restore_page(phydev, rc, rc > 0 ? 0 : rc); 816 + rc = phy_modify_paged(phydev, MSCC_PHY_PAGE_EXTENDED_2, 817 + MSCC_PHY_RGMII_CNTL, RGMII_RX_CLK_DELAY_MASK, 818 + reg_val); 819 + 824 820 mutex_unlock(&phydev->lock); 825 821 826 822 return rc;
+8
drivers/net/phy/phy_device.c
··· 2197 2197 new_driver->mdiodrv.driver.remove = phy_remove; 2198 2198 new_driver->mdiodrv.driver.owner = owner; 2199 2199 2200 + /* The following works around an issue where the PHY driver doesn't bind 2201 + * to the device, resulting in the genphy driver being used instead of 2202 + * the dedicated driver. The root cause of the issue isn't known yet 2203 + * and seems to be in the base driver core. Once this is fixed we may 2204 + * remove this workaround. 2205 + */ 2206 + new_driver->mdiodrv.driver.probe_type = PROBE_FORCE_SYNCHRONOUS; 2207 + 2200 2208 retval = driver_register(&new_driver->mdiodrv.driver); 2201 2209 if (retval) { 2202 2210 pr_err("%s: Error %d in registering driver\n",
+1 -1
drivers/net/phy/realtek.c
··· 220 220 .flags = PHY_HAS_INTERRUPT, 221 221 }, { 222 222 .phy_id = 0x001cc816, 223 - .name = "RTL8201F 10/100Mbps Ethernet", 223 + .name = "RTL8201F Fast Ethernet", 224 224 .phy_id_mask = 0x001fffff, 225 225 .features = PHY_BASIC_FEATURES, 226 226 .flags = PHY_HAS_INTERRUPT,
+1 -1
drivers/net/rionet.c
··· 216 216 * it just report sending a packet to the target 217 217 * (without actual packet transfer). 218 218 */ 219 - dev_kfree_skb_any(skb); 220 219 ndev->stats.tx_packets++; 221 220 ndev->stats.tx_bytes += skb->len; 221 + dev_kfree_skb_any(skb); 222 222 } 223 223 } 224 224
-2
drivers/net/team/team.c
··· 985 985 team->en_port_count--; 986 986 team_queue_override_port_del(team, port); 987 987 team_adjust_ops(team); 988 - team_notify_peers(team); 989 - team_mcast_rejoin(team); 990 988 team_lower_state_changed(port); 991 989 } 992 990
+6 -1
drivers/net/tun.c
··· 1536 1536 1537 1537 if (!rx_batched || (!more && skb_queue_empty(queue))) { 1538 1538 local_bh_disable(); 1539 + skb_record_rx_queue(skb, tfile->queue_index); 1539 1540 netif_receive_skb(skb); 1540 1541 local_bh_enable(); 1541 1542 return; ··· 1556 1555 struct sk_buff *nskb; 1557 1556 1558 1557 local_bh_disable(); 1559 - while ((nskb = __skb_dequeue(&process_queue))) 1558 + while ((nskb = __skb_dequeue(&process_queue))) { 1559 + skb_record_rx_queue(nskb, tfile->queue_index); 1560 1560 netif_receive_skb(nskb); 1561 + } 1562 + skb_record_rx_queue(skb, tfile->queue_index); 1561 1563 netif_receive_skb(skb); 1562 1564 local_bh_enable(); 1563 1565 } ··· 2455 2451 if (!rcu_dereference(tun->steering_prog)) 2456 2452 rxhash = __skb_get_hash_symmetric(skb); 2457 2453 2454 + skb_record_rx_queue(skb, tfile->queue_index); 2458 2455 netif_receive_skb(skb); 2459 2456 2460 2457 stats = get_cpu_ptr(tun->pcpu_stats);
+4 -6
drivers/net/usb/ipheth.c
··· 140 140 struct usb_device *udev; 141 141 struct usb_interface *intf; 142 142 struct net_device *net; 143 - struct sk_buff *tx_skb; 144 143 struct urb *tx_urb; 145 144 struct urb *rx_urb; 146 145 unsigned char *tx_buf; ··· 229 230 case -ENOENT: 230 231 case -ECONNRESET: 231 232 case -ESHUTDOWN: 233 + case -EPROTO: 232 234 return; 233 235 case 0: 234 236 break; ··· 281 281 dev_err(&dev->intf->dev, "%s: urb status: %d\n", 282 282 __func__, status); 283 283 284 - dev_kfree_skb_irq(dev->tx_skb); 285 284 if (status == 0) 286 285 netif_wake_queue(dev->net); 287 286 else ··· 422 423 if (skb->len > IPHETH_BUF_SIZE) { 423 424 WARN(1, "%s: skb too large: %d bytes\n", __func__, skb->len); 424 425 dev->net->stats.tx_dropped++; 425 - dev_kfree_skb_irq(skb); 426 + dev_kfree_skb_any(skb); 426 427 return NETDEV_TX_OK; 427 428 } 428 429 ··· 442 443 dev_err(&dev->intf->dev, "%s: usb_submit_urb: %d\n", 443 444 __func__, retval); 444 445 dev->net->stats.tx_errors++; 445 - dev_kfree_skb_irq(skb); 446 + dev_kfree_skb_any(skb); 446 447 } else { 447 - dev->tx_skb = skb; 448 - 449 448 dev->net->stats.tx_packets++; 450 449 dev->net->stats.tx_bytes += skb->len; 450 + dev_consume_skb_any(skb); 451 451 netif_stop_queue(net); 452 452 } 453 453
+9
drivers/net/usb/smsc95xx.c
··· 1321 1321 dev->net->ethtool_ops = &smsc95xx_ethtool_ops; 1322 1322 dev->net->flags |= IFF_MULTICAST; 1323 1323 dev->net->hard_header_len += SMSC95XX_TX_OVERHEAD_CSUM; 1324 + dev->net->min_mtu = ETH_MIN_MTU; 1325 + dev->net->max_mtu = ETH_DATA_LEN; 1324 1326 dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len; 1325 1327 1326 1328 pdata->dev = dev; ··· 1600 1598 return ret; 1601 1599 } 1602 1600 1601 + cancel_delayed_work_sync(&pdata->carrier_check); 1602 + 1603 1603 if (pdata->suspend_flags) { 1604 1604 netdev_warn(dev->net, "error during last resume\n"); 1605 1605 pdata->suspend_flags = 0; ··· 1844 1840 */ 1845 1841 if (ret && PMSG_IS_AUTO(message)) 1846 1842 usbnet_resume(intf); 1843 + 1844 + if (ret) 1845 + schedule_delayed_work(&pdata->carrier_check, 1846 + CARRIER_CHECK_DELAY); 1847 + 1847 1848 return ret; 1848 1849 } 1849 1850
+5 -8
drivers/net/virtio_net.c
··· 70 70 VIRTIO_NET_F_GUEST_TSO4, 71 71 VIRTIO_NET_F_GUEST_TSO6, 72 72 VIRTIO_NET_F_GUEST_ECN, 73 - VIRTIO_NET_F_GUEST_UFO 73 + VIRTIO_NET_F_GUEST_UFO, 74 + VIRTIO_NET_F_GUEST_CSUM 74 75 }; 75 76 76 77 struct virtnet_stat_desc { ··· 2335 2334 if (!vi->guest_offloads) 2336 2335 return 0; 2337 2336 2338 - if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM)) 2339 - offloads = 1ULL << VIRTIO_NET_F_GUEST_CSUM; 2340 - 2341 2337 return virtnet_set_guest_offloads(vi, offloads); 2342 2338 } 2343 2339 ··· 2344 2346 2345 2347 if (!vi->guest_offloads) 2346 2348 return 0; 2347 - if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM)) 2348 - offloads |= 1ULL << VIRTIO_NET_F_GUEST_CSUM; 2349 2349 2350 2350 return virtnet_set_guest_offloads(vi, offloads); 2351 2351 } ··· 2361 2365 && (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_TSO4) || 2362 2366 virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_TSO6) || 2363 2367 virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_ECN) || 2364 - virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_UFO))) { 2365 - NL_SET_ERR_MSG_MOD(extack, "Can't set XDP while host is implementing LRO, disable LRO first"); 2368 + virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_UFO) || 2369 + virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM))) { 2370 + NL_SET_ERR_MSG_MOD(extack, "Can't set XDP while host is implementing LRO/CSUM, disable LRO/CSUM first"); 2366 2371 return -EOPNOTSUPP; 2367 2372 } 2368 2373
+1 -1
drivers/net/wireless/ath/ath10k/mac.c
··· 6867 6867 u32 bitmap; 6868 6868 6869 6869 if (drop) { 6870 - if (vif->type == NL80211_IFTYPE_STATION) { 6870 + if (vif && vif->type == NL80211_IFTYPE_STATION) { 6871 6871 bitmap = ~(1 << WMI_MGMT_TID); 6872 6872 list_for_each_entry(arvif, &ar->arvifs, list) { 6873 6873 if (arvif->vdev_type == WMI_VDEV_TYPE_STA)
+1 -2
drivers/net/wireless/ath/ath9k/main.c
··· 1251 1251 struct ath_vif *avp = (void *)vif->drv_priv; 1252 1252 struct ath_node *an = &avp->mcast_node; 1253 1253 1254 + mutex_lock(&sc->mutex); 1254 1255 if (IS_ENABLED(CONFIG_ATH9K_TX99)) { 1255 1256 if (sc->cur_chan->nvifs >= 1) { 1256 1257 mutex_unlock(&sc->mutex); ··· 1259 1258 } 1260 1259 sc->tx99_vif = vif; 1261 1260 } 1262 - 1263 - mutex_lock(&sc->mutex); 1264 1261 1265 1262 ath_dbg(common, CONFIG, "Attach a VIF of type: %d\n", vif->type); 1266 1263 sc->cur_chan->nvifs++;
+2 -1
drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
··· 6005 6005 * for subsequent chanspecs. 6006 6006 */ 6007 6007 channel->flags = IEEE80211_CHAN_NO_HT40 | 6008 - IEEE80211_CHAN_NO_80MHZ; 6008 + IEEE80211_CHAN_NO_80MHZ | 6009 + IEEE80211_CHAN_NO_160MHZ; 6009 6010 ch.bw = BRCMU_CHAN_BW_20; 6010 6011 cfg->d11inf.encchspec(&ch); 6011 6012 chaninfo = ch.chspec;
+3
drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
··· 193 193 } 194 194 break; 195 195 case BRCMU_CHSPEC_D11AC_BW_160: 196 + ch->bw = BRCMU_CHAN_BW_160; 197 + ch->sb = brcmu_maskget16(ch->chspec, BRCMU_CHSPEC_D11AC_SB_MASK, 198 + BRCMU_CHSPEC_D11AC_SB_SHIFT); 196 199 switch (ch->sb) { 197 200 case BRCMU_CHAN_SB_LLL: 198 201 ch->control_ch_num -= CH_70MHZ_APART;
+3 -1
drivers/net/wireless/intel/iwlwifi/fw/acpi.h
··· 6 6 * GPL LICENSE SUMMARY 7 7 * 8 8 * Copyright(c) 2017 Intel Deutschland GmbH 9 + * Copyright(c) 2018 Intel Corporation 9 10 * 10 11 * This program is free software; you can redistribute it and/or modify 11 12 * it under the terms of version 2 of the GNU General Public License as ··· 27 26 * BSD LICENSE 28 27 * 29 28 * Copyright(c) 2017 Intel Deutschland GmbH 29 + * Copyright(c) 2018 Intel Corporation 30 30 * All rights reserved. 31 31 * 32 32 * Redistribution and use in source and binary forms, with or without ··· 83 81 #define ACPI_WRDS_WIFI_DATA_SIZE (ACPI_SAR_TABLE_SIZE + 2) 84 82 #define ACPI_EWRD_WIFI_DATA_SIZE ((ACPI_SAR_PROFILE_NUM - 1) * \ 85 83 ACPI_SAR_TABLE_SIZE + 3) 86 - #define ACPI_WGDS_WIFI_DATA_SIZE 18 84 + #define ACPI_WGDS_WIFI_DATA_SIZE 19 87 85 #define ACPI_WRDD_WIFI_DATA_SIZE 2 88 86 #define ACPI_SPLC_WIFI_DATA_SIZE 2 89 87
+5 -1
drivers/net/wireless/intel/iwlwifi/fw/runtime.h
··· 154 154 const struct iwl_fw_runtime_ops *ops, void *ops_ctx, 155 155 struct dentry *dbgfs_dir); 156 156 157 - void iwl_fw_runtime_exit(struct iwl_fw_runtime *fwrt); 157 + static inline void iwl_fw_runtime_free(struct iwl_fw_runtime *fwrt) 158 + { 159 + kfree(fwrt->dump.d3_debug_data); 160 + fwrt->dump.d3_debug_data = NULL; 161 + } 158 162 159 163 void iwl_fw_runtime_suspend(struct iwl_fw_runtime *fwrt); 160 164
+29 -9
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
··· 893 893 IWL_DEBUG_RADIO(mvm, "Sending GEO_TX_POWER_LIMIT\n"); 894 894 895 895 BUILD_BUG_ON(ACPI_NUM_GEO_PROFILES * ACPI_WGDS_NUM_BANDS * 896 - ACPI_WGDS_TABLE_SIZE != ACPI_WGDS_WIFI_DATA_SIZE); 896 + ACPI_WGDS_TABLE_SIZE + 1 != ACPI_WGDS_WIFI_DATA_SIZE); 897 897 898 898 BUILD_BUG_ON(ACPI_NUM_GEO_PROFILES > IWL_NUM_GEO_PROFILES); 899 899 ··· 928 928 return -ENOENT; 929 929 } 930 930 931 + static int iwl_mvm_sar_get_wgds_table(struct iwl_mvm *mvm) 932 + { 933 + return -ENOENT; 934 + } 935 + 931 936 static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm) 932 937 { 933 938 return 0; ··· 959 954 IWL_DEBUG_RADIO(mvm, 960 955 "WRDS SAR BIOS table invalid or unavailable. (%d)\n", 961 956 ret); 962 - /* if not available, don't fail and don't bother with EWRD */ 963 - return 0; 957 + /* 958 + * If not available, don't fail and don't bother with EWRD. 959 + * Return 1 to tell that we can't use WGDS either. 960 + */ 961 + return 1; 964 962 } 965 963 966 964 ret = iwl_mvm_sar_get_ewrd_table(mvm); ··· 976 968 /* choose profile 1 (WRDS) as default for both chains */ 977 969 ret = iwl_mvm_sar_select_profile(mvm, 1, 1); 978 970 979 - /* if we don't have profile 0 from BIOS, just skip it */ 971 + /* 972 + * If we don't have profile 0 from BIOS, just skip it. This 973 + * means that SAR Geo will not be enabled either, even if we 974 + * have other valid profiles. 975 + */ 980 976 if (ret == -ENOENT) 981 - return 0; 977 + return 1; 982 978 983 979 return ret; 984 980 } ··· 1180 1168 iwl_mvm_unref(mvm, IWL_MVM_REF_UCODE_DOWN); 1181 1169 1182 1170 ret = iwl_mvm_sar_init(mvm); 1183 - if (ret) 1184 - goto error; 1171 + if (ret == 0) { 1172 + ret = iwl_mvm_sar_geo_init(mvm); 1173 + } else if (ret > 0 && !iwl_mvm_sar_get_wgds_table(mvm)) { 1174 + /* 1175 + * If basic SAR is not available, we check for WGDS, 1176 + * which should *not* be available either. If it is 1177 + * available, issue an error, because we can't use SAR 1178 + * Geo without basic SAR. 
1178 + * Geo without basic SAR.
1179 + */ 1180 + IWL_ERR(mvm, "BIOS contains WGDS but no WRDS\n"); 1181 + } 1185 1182 1186 - ret = iwl_mvm_sar_geo_init(mvm); 1187 - if (ret) 1183 + if (ret < 0) 1188 1184 goto error; 1189 1185 1190 1186 iwl_mvm_leds_sync(mvm);
+6 -6
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 301 301 goto out; 302 302 } 303 303 304 - if (changed) 305 - *changed = (resp->status == MCC_RESP_NEW_CHAN_PROFILE); 304 + if (changed) { 305 + u32 status = le32_to_cpu(resp->status); 306 + 307 + *changed = (status == MCC_RESP_NEW_CHAN_PROFILE || 308 + status == MCC_RESP_ILLEGAL); 309 + } 306 310 307 311 regd = iwl_parse_nvm_mcc_info(mvm->trans->dev, mvm->cfg, 308 312 __le32_to_cpu(resp->n_channels), ··· 4447 4443 sinfo->signal_avg = mvmsta->avg_energy; 4448 4444 sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG); 4449 4445 } 4450 - 4451 - if (!fw_has_capa(&mvm->fw->ucode_capa, 4452 - IWL_UCODE_TLV_CAPA_RADIO_BEACON_STATS)) 4453 - return; 4454 4446 4455 4447 /* if beacon filtering isn't on mac80211 does it anyway */ 4456 4448 if (!(vif->driver_flags & IEEE80211_VIF_BEACON_FILTER))
+2 -3
drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
··· 539 539 } 540 540 541 541 IWL_DEBUG_LAR(mvm, 542 - "MCC response status: 0x%x. new MCC: 0x%x ('%c%c') change: %d n_chans: %d\n", 543 - status, mcc, mcc >> 8, mcc & 0xff, 544 - !!(status == MCC_RESP_NEW_CHAN_PROFILE), n_channels); 542 + "MCC response status: 0x%x. new MCC: 0x%x ('%c%c') n_chans: %d\n", 543 + status, mcc, mcc >> 8, mcc & 0xff, n_channels); 545 544 546 545 exit: 547 546 iwl_free_resp(&cmd);
+2
drivers/net/wireless/intel/iwlwifi/mvm/ops.c
··· 858 858 iwl_mvm_thermal_exit(mvm); 859 859 out_free: 860 860 iwl_fw_flush_dump(&mvm->fwrt); 861 + iwl_fw_runtime_free(&mvm->fwrt); 861 862 862 863 if (iwlmvm_mod_params.init_dbg) 863 864 return op_mode; ··· 911 910 912 911 iwl_mvm_tof_clean(mvm); 913 912 913 + iwl_fw_runtime_free(&mvm->fwrt); 914 914 mutex_destroy(&mvm->mutex); 915 915 mutex_destroy(&mvm->d0i3_suspend_mutex); 916 916
+6
drivers/net/wireless/mediatek/mt76/Kconfig
··· 1 1 config MT76_CORE 2 2 tristate 3 3 4 + config MT76_LEDS 5 + bool 6 + depends on MT76_CORE 7 + depends on LEDS_CLASS=y || MT76_CORE=LEDS_CLASS 8 + default y 9 + 4 10 config MT76_USB 5 11 tristate 6 12 depends on MT76_CORE
+5 -3
drivers/net/wireless/mediatek/mt76/mac80211.c
··· 345 345 mt76_check_sband(dev, NL80211_BAND_2GHZ); 346 346 mt76_check_sband(dev, NL80211_BAND_5GHZ); 347 347 348 - ret = mt76_led_init(dev); 349 - if (ret) 350 - return ret; 348 + if (IS_ENABLED(CONFIG_MT76_LEDS)) { 349 + ret = mt76_led_init(dev); 350 + if (ret) 351 + return ret; 352 + } 351 353 352 354 return ieee80211_register_hw(hw); 353 355 }
-1
drivers/net/wireless/mediatek/mt76/mt76x02.h
··· 71 71 struct mac_address macaddr_list[8]; 72 72 73 73 struct mutex phy_mutex; 74 - struct mutex mutex; 75 74 76 75 u8 txdone_seq; 77 76 DECLARE_KFIFO_PTR(txstatus_fifo, struct mt76x02_tx_status);
+4 -2
drivers/net/wireless/mediatek/mt76/mt76x2/pci_init.c
··· 507 507 mt76x2_dfs_init_detector(dev); 508 508 509 509 /* init led callbacks */ 510 - dev->mt76.led_cdev.brightness_set = mt76x2_led_set_brightness; 511 - dev->mt76.led_cdev.blink_set = mt76x2_led_set_blink; 510 + if (IS_ENABLED(CONFIG_MT76_LEDS)) { 511 + dev->mt76.led_cdev.brightness_set = mt76x2_led_set_brightness; 512 + dev->mt76.led_cdev.blink_set = mt76x2_led_set_blink; 513 + } 512 514 513 515 ret = mt76_register_device(&dev->mt76, true, mt76x02_rates, 514 516 ARRAY_SIZE(mt76x02_rates));
+2 -2
drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c
··· 272 272 if (val != ~0 && val > 0xffff) 273 273 return -EINVAL; 274 274 275 - mutex_lock(&dev->mutex); 275 + mutex_lock(&dev->mt76.mutex); 276 276 mt76x2_mac_set_tx_protection(dev, val); 277 - mutex_unlock(&dev->mutex); 277 + mutex_unlock(&dev->mt76.mutex); 278 278 279 279 return 0; 280 280 }
+11 -6
drivers/net/wireless/ti/wlcore/sdio.c
··· 285 285 struct resource res[2]; 286 286 mmc_pm_flag_t mmcflags; 287 287 int ret = -ENOMEM; 288 - int irq, wakeirq; 288 + int irq, wakeirq, num_irqs; 289 289 const char *chip_family; 290 290 291 291 /* We are only able to handle the wlan function */ ··· 353 353 irqd_get_trigger_type(irq_get_irq_data(irq)); 354 354 res[0].name = "irq"; 355 355 356 - res[1].start = wakeirq; 357 - res[1].flags = IORESOURCE_IRQ | 358 - irqd_get_trigger_type(irq_get_irq_data(wakeirq)); 359 - res[1].name = "wakeirq"; 360 356 361 - ret = platform_device_add_resources(glue->core, res, ARRAY_SIZE(res)); 357 + if (wakeirq > 0) { 358 + res[1].start = wakeirq; 359 + res[1].flags = IORESOURCE_IRQ | 360 + irqd_get_trigger_type(irq_get_irq_data(wakeirq)); 361 + res[1].name = "wakeirq"; 362 + num_irqs = 2; 363 + } else { 364 + num_irqs = 1; 365 + } 366 + ret = platform_device_add_resources(glue->core, res, num_irqs); 362 367 if (ret) { 363 368 dev_err(glue->dev, "can't add resources\n"); 364 369 goto out_dev_put;
+8 -4
drivers/nvme/host/core.c
··· 1519 1519 if (ns->ndev) 1520 1520 nvme_nvm_update_nvm_info(ns); 1521 1521 #ifdef CONFIG_NVME_MULTIPATH 1522 - if (ns->head->disk) 1522 + if (ns->head->disk) { 1523 1523 nvme_update_disk_info(ns->head->disk, ns, id); 1524 + blk_queue_stack_limits(ns->head->disk->queue, ns->queue); 1525 + } 1524 1526 #endif 1525 1527 } 1526 1528 ··· 3314 3312 struct nvme_ns *ns, *next; 3315 3313 LIST_HEAD(ns_list); 3316 3314 3315 + /* prevent racing with ns scanning */ 3316 + flush_work(&ctrl->scan_work); 3317 + 3317 3318 /* 3318 3319 * The dead states indicates the controller was not gracefully 3319 3320 * disconnected. In that case, we won't be able to flush any data while ··· 3479 3474 nvme_mpath_stop(ctrl); 3480 3475 nvme_stop_keep_alive(ctrl); 3481 3476 flush_work(&ctrl->async_event_work); 3482 - flush_work(&ctrl->scan_work); 3483 3477 cancel_work_sync(&ctrl->fw_act_work); 3484 3478 if (ctrl->ops->stop_ctrl) 3485 3479 ctrl->ops->stop_ctrl(ctrl); ··· 3587 3583 3588 3584 return 0; 3589 3585 out_free_name: 3590 - kfree_const(dev->kobj.name); 3586 + kfree_const(ctrl->device->kobj.name); 3591 3587 out_release_instance: 3592 3588 ida_simple_remove(&nvme_instance_ida, ctrl->instance); 3593 3589 out: ··· 3609 3605 down_read(&ctrl->namespaces_rwsem); 3610 3606 3611 3607 /* Forcibly unquiesce queues to avoid blocking dispatch */ 3612 - if (ctrl->admin_q) 3608 + if (ctrl->admin_q && !blk_queue_dying(ctrl->admin_q)) 3613 3609 blk_mq_unquiesce_queue(ctrl->admin_q); 3614 3610 3615 3611 list_for_each_entry(ns, &ctrl->namespaces, list)
+65 -12
drivers/nvme/host/fc.c
··· 152 152 153 153 bool ioq_live; 154 154 bool assoc_active; 155 + atomic_t err_work_active; 155 156 u64 association_id; 156 157 157 158 struct list_head ctrl_list; /* rport->ctrl_list */ ··· 161 160 struct blk_mq_tag_set tag_set; 162 161 163 162 struct delayed_work connect_work; 163 + struct work_struct err_work; 164 164 165 165 struct kref ref; 166 166 u32 flags; ··· 1533 1531 struct nvme_fc_fcp_op *aen_op = ctrl->aen_ops; 1534 1532 int i; 1535 1533 1534 + /* ensure we've initialized the ops once */ 1535 + if (!(aen_op->flags & FCOP_FLAGS_AEN)) 1536 + return; 1537 + 1536 1538 for (i = 0; i < NVME_NR_AEN_COMMANDS; i++, aen_op++) 1537 1539 __nvme_fc_abort_op(ctrl, aen_op); 1538 1540 } ··· 1752 1746 struct nvme_fc_queue *queue = &ctrl->queues[queue_idx]; 1753 1747 int res; 1754 1748 1755 - nvme_req(rq)->ctrl = &ctrl->ctrl; 1756 1749 res = __nvme_fc_init_request(ctrl, queue, &op->op, rq, queue->rqcnt++); 1757 1750 if (res) 1758 1751 return res; 1759 1752 op->op.fcp_req.first_sgl = &op->sgl[0]; 1760 1753 op->op.fcp_req.private = &op->priv[0]; 1754 + nvme_req(rq)->ctrl = &ctrl->ctrl; 1761 1755 return res; 1762 1756 } 1763 1757 ··· 2055 2049 static void 2056 2050 nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg) 2057 2051 { 2058 - /* only proceed if in LIVE state - e.g. on first error */ 2052 + int active; 2053 + 2054 + /* 2055 + * if an error (io timeout, etc) while (re)connecting, 2056 + * it's an error on creating the new association. 2057 + * Start the error recovery thread if it hasn't already 2058 + * been started. It is expected there could be multiple 2059 + * ios hitting this path before things are cleaned up. 2060 + */ 2061 + if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) { 2062 + active = atomic_xchg(&ctrl->err_work_active, 1); 2063 + if (!active && !schedule_work(&ctrl->err_work)) { 2064 + atomic_set(&ctrl->err_work_active, 0); 2065 + WARN_ON(1); 2066 + } 2067 + return; 2068 + } 2069 + 2070 + /* Otherwise, only proceed if in LIVE state - e.g. 
on first error */ 2059 2071 if (ctrl->ctrl.state != NVME_CTRL_LIVE) 2060 2072 return; 2061 2073 ··· 2838 2814 { 2839 2815 struct nvme_fc_ctrl *ctrl = to_fc_ctrl(nctrl); 2840 2816 2817 + cancel_work_sync(&ctrl->err_work); 2841 2818 cancel_delayed_work_sync(&ctrl->connect_work); 2842 2819 /* 2843 2820 * kill the association on the link side. this will block ··· 2891 2866 } 2892 2867 2893 2868 static void 2869 + __nvme_fc_terminate_io(struct nvme_fc_ctrl *ctrl) 2870 + { 2871 + nvme_stop_keep_alive(&ctrl->ctrl); 2872 + 2873 + /* will block will waiting for io to terminate */ 2874 + nvme_fc_delete_association(ctrl); 2875 + 2876 + if (ctrl->ctrl.state != NVME_CTRL_CONNECTING && 2877 + !nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) 2878 + dev_err(ctrl->ctrl.device, 2879 + "NVME-FC{%d}: error_recovery: Couldn't change state " 2880 + "to CONNECTING\n", ctrl->cnum); 2881 + } 2882 + 2883 + static void 2894 2884 nvme_fc_reset_ctrl_work(struct work_struct *work) 2895 2885 { 2896 2886 struct nvme_fc_ctrl *ctrl = 2897 2887 container_of(work, struct nvme_fc_ctrl, ctrl.reset_work); 2898 2888 int ret; 2899 2889 2890 + __nvme_fc_terminate_io(ctrl); 2891 + 2900 2892 nvme_stop_ctrl(&ctrl->ctrl); 2901 - 2902 - /* will block will waiting for io to terminate */ 2903 - nvme_fc_delete_association(ctrl); 2904 - 2905 - if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) { 2906 - dev_err(ctrl->ctrl.device, 2907 - "NVME-FC{%d}: error_recovery: Couldn't change state " 2908 - "to CONNECTING\n", ctrl->cnum); 2909 - return; 2910 - } 2911 2893 2912 2894 if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE) 2913 2895 ret = nvme_fc_create_association(ctrl); ··· 2927 2895 dev_info(ctrl->ctrl.device, 2928 2896 "NVME-FC{%d}: controller reset complete\n", 2929 2897 ctrl->cnum); 2898 + } 2899 + 2900 + static void 2901 + nvme_fc_connect_err_work(struct work_struct *work) 2902 + { 2903 + struct nvme_fc_ctrl *ctrl = 2904 + container_of(work, struct nvme_fc_ctrl, err_work); 
2905 + 2906 + __nvme_fc_terminate_io(ctrl); 2907 + 2908 + atomic_set(&ctrl->err_work_active, 0); 2909 + 2910 + /* 2911 + * Rescheduling the connection after recovering 2912 + * from the io error is left to the reconnect work 2913 + * item, which is what should have stalled waiting on 2914 + * the io that had the error that scheduled this work. 2915 + */ 2930 2916 } 2931 2917 2932 2918 static const struct nvme_ctrl_ops nvme_fc_ctrl_ops = { ··· 3057 3007 ctrl->cnum = idx; 3058 3008 ctrl->ioq_live = false; 3059 3009 ctrl->assoc_active = false; 3010 + atomic_set(&ctrl->err_work_active, 0); 3060 3011 init_waitqueue_head(&ctrl->ioabort_wait); 3061 3012 3062 3013 get_device(ctrl->dev); ··· 3065 3014 3066 3015 INIT_WORK(&ctrl->ctrl.reset_work, nvme_fc_reset_ctrl_work); 3067 3016 INIT_DELAYED_WORK(&ctrl->connect_work, nvme_fc_connect_ctrl_work); 3017 + INIT_WORK(&ctrl->err_work, nvme_fc_connect_err_work); 3068 3018 spin_lock_init(&ctrl->lock); 3069 3019 3070 3020 /* io queue count */ ··· 3155 3103 fail_ctrl: 3156 3104 nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_DELETING); 3157 3105 cancel_work_sync(&ctrl->ctrl.reset_work); 3106 + cancel_work_sync(&ctrl->err_work); 3158 3107 cancel_delayed_work_sync(&ctrl->connect_work); 3159 3108 3160 3109 ctrl->ctrl.opts = NULL;
+1
drivers/nvme/host/multipath.c
··· 285 285 blk_queue_flag_set(QUEUE_FLAG_NONROT, q); 286 286 /* set to a default value for 512 until disk is validated */ 287 287 blk_queue_logical_block_size(q, 512); 288 + blk_set_stacking_limits(&q->limits); 288 289 289 290 /* we need to propagate up the VMC settings */ 290 291 if (ctrl->vwc & NVME_CTRL_VWC_PRESENT)
+3
drivers/nvme/host/nvme.h
··· 531 531 static inline int nvme_mpath_init(struct nvme_ctrl *ctrl, 532 532 struct nvme_id_ctrl *id) 533 533 { 534 + if (ctrl->subsys->cmic & (1 << 3)) 535 + dev_warn(ctrl->device, 536 + "Please enable CONFIG_NVME_MULTIPATH for full support of multi-port devices.\n"); 534 537 return 0; 535 538 } 536 539 static inline void nvme_mpath_uninit(struct nvme_ctrl *ctrl)
+2
drivers/nvme/host/rdma.c
··· 184 184 qe->dma = ib_dma_map_single(ibdev, qe->data, capsule_size, dir); 185 185 if (ib_dma_mapping_error(ibdev, qe->dma)) { 186 186 kfree(qe->data); 187 + qe->data = NULL; 187 188 return -ENOMEM; 188 189 } 189 190 ··· 824 823 out_free_async_qe: 825 824 nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe, 826 825 sizeof(struct nvme_command), DMA_TO_DEVICE); 826 + ctrl->async_event_sqe.data = NULL; 827 827 out_free_queue: 828 828 nvme_rdma_free_queue(&ctrl->queues[0]); 829 829 return error;
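The two rdma.c hunks above clear `qe->data` and `async_event_sqe.data` right after freeing them, so a later teardown pass cannot double-free a stale pointer. A minimal userspace sketch of the same free-and-clear pattern (`struct qe` here and `qe_setup_fails` are hypothetical stand-ins, not the kernel API):

```c
#include <stdlib.h>

struct qe {
	void *data;
};

/* Free the buffer and clear the owner's slot in one step; a second
 * cleanup pass then sees NULL, and free(NULL) is a defined no-op. */
static void free_and_clear(struct qe *qe)
{
	free(qe->data);
	qe->data = NULL;
}

/* Error path shaped like the hunk above: the allocation succeeded but
 * the "mapping" step failed, so release the buffer without leaving a
 * dangling pointer behind. */
static int qe_setup_fails(struct qe *qe)
{
	qe->data = malloc(64);
	if (!qe->data)
		return -1;
	free_and_clear(qe);	/* mapping failed: undo the allocation */
	return -1;
}
```

Running cleanup a second time is then harmless, which is exactly what the NULL assignment buys in the kernel fix.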
+1 -1
drivers/nvme/target/core.c
··· 420 420 struct pci_dev *p2p_dev; 421 421 int ret; 422 422 423 - if (!ctrl->p2p_client) 423 + if (!ctrl->p2p_client || !ns->use_p2pmem) 424 424 return; 425 425 426 426 if (ns->p2p_dev) {
+4 -15
drivers/nvme/target/rdma.c
··· 122 122 int inline_page_count; 123 123 }; 124 124 125 - static struct workqueue_struct *nvmet_rdma_delete_wq; 126 125 static bool nvmet_rdma_use_srq; 127 126 module_param_named(use_srq, nvmet_rdma_use_srq, bool, 0444); 128 127 MODULE_PARM_DESC(use_srq, "Use shared receive queue."); ··· 1273 1274 1274 1275 if (queue->host_qid == 0) { 1275 1276 /* Let inflight controller teardown complete */ 1276 - flush_workqueue(nvmet_rdma_delete_wq); 1277 + flush_scheduled_work(); 1277 1278 } 1278 1279 1279 1280 ret = nvmet_rdma_cm_accept(cm_id, queue, &event->param.conn); 1280 1281 if (ret) { 1281 - queue_work(nvmet_rdma_delete_wq, &queue->release_work); 1282 + schedule_work(&queue->release_work); 1282 1283 /* Destroying rdma_cm id is not needed here */ 1283 1284 return 0; 1284 1285 } ··· 1343 1344 1344 1345 if (disconnect) { 1345 1346 rdma_disconnect(queue->cm_id); 1346 - queue_work(nvmet_rdma_delete_wq, &queue->release_work); 1347 + schedule_work(&queue->release_work); 1347 1348 } 1348 1349 } ··· 1373 1374 mutex_unlock(&nvmet_rdma_queue_mutex); 1374 1375 1375 1376 pr_err("failed to connect queue %d\n", queue->idx); 1376 - queue_work(nvmet_rdma_delete_wq, &queue->release_work); 1377 + schedule_work(&queue->release_work); 1377 1378 } 1378 1379 1379 1380 /** ··· 1655 1656 if (ret) 1656 1657 goto err_ib_client; 1657 1658 1658 - nvmet_rdma_delete_wq = alloc_workqueue("nvmet-rdma-delete-wq", 1659 - WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_SYSFS, 0); 1660 - if (!nvmet_rdma_delete_wq) { 1661 - ret = -ENOMEM; 1662 - goto err_unreg_transport; 1663 - } 1664 - 1665 1659 return 0; 1666 1660 1667 - err_unreg_transport: 1668 - nvmet_unregister_transport(&nvmet_rdma_ops); 1669 1661 err_ib_client: 1670 1662 ib_unregister_client(&nvmet_rdma_ib_client); 1671 1663 return ret; ··· 1664 1674 1665 1675 static void __exit nvmet_rdma_exit(void) 1666 1676 { 1667 - destroy_workqueue(nvmet_rdma_delete_wq); 1668 1677 nvmet_unregister_transport(&nvmet_rdma_ops); 1669 1678 ib_unregister_client(&nvmet_rdma_ib_client); 1670 1679 WARN_ON_ONCE(!list_empty(&nvmet_rdma_queue_list));
+6 -4
drivers/nvmem/core.c
··· 44 44 int bytes; 45 45 int bit_offset; 46 46 int nbits; 47 + struct device_node *np; 47 48 struct nvmem_device *nvmem; 48 49 struct list_head node; 49 50 }; ··· 299 298 mutex_lock(&nvmem_mutex); 300 299 list_del(&cell->node); 301 300 mutex_unlock(&nvmem_mutex); 301 + of_node_put(cell->np); 302 302 kfree(cell->name); 303 303 kfree(cell); 304 304 } ··· 532 530 return -ENOMEM; 533 531 534 532 cell->nvmem = nvmem; 533 + cell->np = of_node_get(child); 535 534 cell->offset = be32_to_cpup(addr++); 536 535 cell->bytes = be32_to_cpup(addr); 537 536 cell->name = kasprintf(GFP_KERNEL, "%pOFn", child); ··· 963 960 964 961 #if IS_ENABLED(CONFIG_OF) 965 962 static struct nvmem_cell * 966 - nvmem_find_cell_by_index(struct nvmem_device *nvmem, int index) 963 + nvmem_find_cell_by_node(struct nvmem_device *nvmem, struct device_node *np) 967 964 { 968 965 struct nvmem_cell *cell = NULL; 969 - int i = 0; 970 966 971 967 mutex_lock(&nvmem_mutex); 972 968 list_for_each_entry(cell, &nvmem->cells, node) { 973 - if (index == i++) 969 + if (np == cell->np) 974 970 break; 975 971 } 976 972 mutex_unlock(&nvmem_mutex); ··· 1013 1011 if (IS_ERR(nvmem)) 1014 1012 return ERR_CAST(nvmem); 1015 1013 1016 - cell = nvmem_find_cell_by_index(nvmem, index); 1014 + cell = nvmem_find_cell_by_node(nvmem, cell_np); 1017 1015 if (!cell) { 1018 1016 __nvmem_device_put(nvmem); 1019 1017 return ERR_PTR(-ENOENT);
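The nvmem hunk above caches `child` in `cell->np` via `of_node_get()`, drops it with `of_node_put()` when the cell is released, and then looks cells up by that node pointer instead of by index. The get/put pairing can be sketched with a toy refcount (all names here are hypothetical; the real device_node refcounting sits on kobjects):

```c
#include <stddef.h>

/* Toy stand-in for struct device_node with a plain refcount. */
struct node {
	int refcount;
};

static struct node *node_get(struct node *np)
{
	if (np)
		np->refcount++;
	return np;
}

static void node_put(struct node *np)
{
	if (np)
		np->refcount--;
}

struct cell {
	struct node *np;
};

/* Caching the pointer takes a reference... */
static void cell_init(struct cell *cell, struct node *np)
{
	cell->np = node_get(np);
}

/* ...and releasing the cache drops it, so gets and puts balance. */
static void cell_drop(struct cell *cell)
{
	node_put(cell->np);
	cell->np = NULL;
}
```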
+3 -1
drivers/of/device.c
··· 149 149 * set by the driver. 150 150 */ 151 151 mask = DMA_BIT_MASK(ilog2(dma_addr + size - 1) + 1); 152 - dev->bus_dma_mask = mask; 153 152 dev->coherent_dma_mask &= mask; 154 153 *dev->dma_mask &= mask; 154 + /* ...but only set bus mask if we found valid dma-ranges earlier */ 155 + if (!ret) 156 + dev->bus_dma_mask = mask; 155 157 156 158 coherent = of_dma_is_coherent(np); 157 159 dev_dbg(dev, "device is%sdma coherent\n",
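The of/device.c hunk computes `mask` as `DMA_BIT_MASK(ilog2(dma_addr + size - 1) + 1)`, i.e. the smallest all-ones mask covering the last byte of the range, and now applies it to `bus_dma_mask` only when `dma-ranges` parsing succeeded. A standalone sketch of that mask computation (`dma_mask_for` is a hypothetical helper, not the kernel macro):

```c
#include <stdint.h>

/* Smallest all-ones mask covering every address in [0, last]:
 * the open-coded equivalent of DMA_BIT_MASK(ilog2(last) + 1). */
static uint64_t dma_mask_for(uint64_t last)
{
	unsigned int bits = 0;

	/* bits ends up as ilog2(last) + 1 for any last > 0 */
	while (bits < 64 && (last >> bits))
		bits++;
	if (bits == 64)			/* avoid an undefined 64-bit shift */
		return ~(uint64_t)0;
	return ((uint64_t)1 << bits) - 1;
}
```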
+7 -2
drivers/of/of_numa.c
··· 104 104 distance = of_read_number(matrix, 1); 105 105 matrix++; 106 106 107 + if ((nodea == nodeb && distance != LOCAL_DISTANCE) || 108 + (nodea != nodeb && distance <= LOCAL_DISTANCE)) { 109 + pr_err("Invalid distance[node%d -> node%d] = %d\n", 110 + nodea, nodeb, distance); 111 + return -EINVAL; 112 + } 113 + 107 114 numa_set_distance(nodea, nodeb, distance); 108 - pr_debug("distance[node%d -> node%d] = %d\n", 109 - nodea, nodeb, distance); 110 115 111 116 /* Set default distance of node B->A same as A->B */ 112 117 if (nodeb > nodea)
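The check added to of_numa.c enforces the distance-map invariant: a node's distance to itself must be exactly LOCAL_DISTANCE, and its distance to any other node must be strictly greater. The same rule, inverted into a standalone "is this entry acceptable?" predicate (LOCAL_DISTANCE is 10 in the kernel's convention):

```c
#define LOCAL_DISTANCE 10

/* Mirror of the rejection test in the hunk above: reject
 * (a == b && d != LOCAL) and (a != b && d <= LOCAL). */
static int numa_distance_is_valid(int nodea, int nodeb, int distance)
{
	if (nodea == nodeb)
		return distance == LOCAL_DISTANCE;
	return distance > LOCAL_DISTANCE;
}
```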
+2 -4
drivers/opp/of.c
··· 579 579 */ 580 580 count = of_count_phandle_with_args(dev->of_node, 581 581 "operating-points-v2", NULL); 582 - if (count != 1) 583 - return -ENODEV; 584 - 585 - index = 0; 582 + if (count == 1) 583 + index = 0; 586 584 } 587 585 588 586 opp_table = dev_pm_opp_get_opp_table_indexed(dev, index);
+4 -2
drivers/opp/ti-opp-supply.c
··· 288 288 int ret; 289 289 290 290 vdd_uv = _get_optimal_vdd_voltage(dev, &opp_data, 291 - new_supply_vbb->u_volt); 291 + new_supply_vdd->u_volt); 292 + 293 + if (new_supply_vdd->u_volt_min < vdd_uv) 294 + new_supply_vdd->u_volt_min = vdd_uv; 292 295 293 296 /* Scaling up? Scale voltage before frequency */ 294 297 if (freq > old_freq) { ··· 417 414 .probe = ti_opp_supply_probe, 418 415 .driver = { 419 416 .name = "ti_opp_supply", 420 - .owner = THIS_MODULE, 421 417 .of_match_table = of_match_ptr(ti_opp_supply_of_match), 422 418 }, 423 419 };
+1 -9
drivers/pci/controller/dwc/pci-imx6.c
··· 81 81 #define PCIE_PL_PFLR_FORCE_LINK (1 << 15) 82 82 #define PCIE_PHY_DEBUG_R0 (PL_OFFSET + 0x28) 83 83 #define PCIE_PHY_DEBUG_R1 (PL_OFFSET + 0x2c) 84 - #define PCIE_PHY_DEBUG_R1_XMLH_LINK_IN_TRAINING (1 << 29) 85 - #define PCIE_PHY_DEBUG_R1_XMLH_LINK_UP (1 << 4) 86 84 87 85 #define PCIE_PHY_CTRL (PL_OFFSET + 0x114) 88 86 #define PCIE_PHY_CTRL_DATA_LOC 0 ··· 709 711 return 0; 710 712 } 711 713 712 - static int imx6_pcie_link_up(struct dw_pcie *pci) 713 - { 714 - return dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R1) & 715 - PCIE_PHY_DEBUG_R1_XMLH_LINK_UP; 716 - } 717 - 718 714 static const struct dw_pcie_host_ops imx6_pcie_host_ops = { 719 715 .host_init = imx6_pcie_host_init, 720 716 }; ··· 741 749 } 742 750 743 751 static const struct dw_pcie_ops dw_pcie_ops = { 744 - .link_up = imx6_pcie_link_up, 752 + /* No special ops needed, but pcie-designware still expects this struct */ 745 753 }; 746 754 747 755 #ifdef CONFIG_PM_SLEEP
+1 -1
drivers/pci/controller/dwc/pci-layerscape.c
··· 88 88 int i; 89 89 90 90 for (i = 0; i < PCIE_IATU_NUM; i++) 91 - dw_pcie_disable_atu(pcie->pci, DW_PCIE_REGION_OUTBOUND, i); 91 + dw_pcie_disable_atu(pcie->pci, i, DW_PCIE_REGION_OUTBOUND); 92 92 } 93 93 94 94 static int ls1021_pcie_link_up(struct dw_pcie *pci)
-1
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 440 440 tbl_offset = dw_pcie_readl_dbi(pci, reg); 441 441 bir = (tbl_offset & PCI_MSIX_TABLE_BIR); 442 442 tbl_offset &= PCI_MSIX_TABLE_OFFSET; 443 - tbl_offset >>= 3; 444 443 445 444 reg = PCI_BASE_ADDRESS_0 + (4 * bir); 446 445 bar_addr_upper = 0;
-5
drivers/pci/pci-acpi.c
··· 793 793 { 794 794 struct pci_dev *pci_dev = to_pci_dev(dev); 795 795 struct acpi_device *adev = ACPI_COMPANION(dev); 796 - int node; 797 796 798 797 if (!adev) 799 798 return; 800 - 801 - node = acpi_get_node(adev->handle); 802 - if (node != NUMA_NO_NODE) 803 - set_dev_node(dev, node); 804 799 805 800 pci_acpi_optimize_delay(pci_dev, adev->handle); 806 801
+11 -13
drivers/pci/pci.c
··· 5556 5556 u32 lnkcap2, lnkcap; 5557 5557 5558 5558 /* 5559 - * PCIe r4.0 sec 7.5.3.18 recommends using the Supported Link 5560 - * Speeds Vector in Link Capabilities 2 when supported, falling 5561 - * back to Max Link Speed in Link Capabilities otherwise. 5559 + * Link Capabilities 2 was added in PCIe r3.0, sec 7.8.18. The 5560 + * implementation note there recommends using the Supported Link 5561 + * Speeds Vector in Link Capabilities 2 when supported. 5562 + * 5563 + * Without Link Capabilities 2, i.e., prior to PCIe r3.0, software 5564 + * should use the Supported Link Speeds field in Link Capabilities, 5565 + * where only 2.5 GT/s and 5.0 GT/s speeds were defined. 5562 5566 */ 5563 5567 pcie_capability_read_dword(dev, PCI_EXP_LNKCAP2, &lnkcap2); 5564 5568 if (lnkcap2) { /* PCIe r3.0-compliant */ ··· 5578 5574 } 5579 5575 5580 5576 pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap); 5581 - if (lnkcap) { 5582 - if (lnkcap & PCI_EXP_LNKCAP_SLS_16_0GB) 5583 - return PCIE_SPEED_16_0GT; 5584 - else if (lnkcap & PCI_EXP_LNKCAP_SLS_8_0GB) 5585 - return PCIE_SPEED_8_0GT; 5586 - else if (lnkcap & PCI_EXP_LNKCAP_SLS_5_0GB) 5587 - return PCIE_SPEED_5_0GT; 5588 - else if (lnkcap & PCI_EXP_LNKCAP_SLS_2_5GB) 5589 - return PCIE_SPEED_2_5GT; 5590 - } 5577 + if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_5_0GB) 5578 + return PCIE_SPEED_5_0GT; 5579 + else if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_2_5GB) 5580 + return PCIE_SPEED_2_5GT; 5591 5581 5592 5582 return PCI_SPEED_UNKNOWN; 5593 5583 }
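The rewritten fallback in pci.c treats the Link Capabilities SLS field as what it is: an encoded enumeration (1 = 2.5 GT/s, 2 = 5.0 GT/s), not a bit mask, so it masks the field and compares for equality. A sketch of the decode (the macro values match include/uapi/linux/pci_regs.h; `lnkcap_max_speed` itself is a hypothetical standalone helper):

```c
#include <stdint.h>

#define PCI_EXP_LNKCAP_SLS		0x0000000f /* Supported Link Speeds */
#define PCI_EXP_LNKCAP_SLS_2_5GB	0x00000001 /* encoded value 1 */
#define PCI_EXP_LNKCAP_SLS_5_0GB	0x00000002 /* encoded value 2 */

enum pcie_speed {
	PCIE_SPEED_UNKNOWN,
	PCIE_SPEED_2_5GT,
	PCIE_SPEED_5_0GT,
};

/* Bit-testing this field would misread higher encodings (e.g. value 3,
 * defined later as 8.0 GT/s) as both 2.5 and 5.0 GT/s; masking and
 * comparing for equality, as the hunk does, cannot. */
static enum pcie_speed lnkcap_max_speed(uint32_t lnkcap)
{
	switch (lnkcap & PCI_EXP_LNKCAP_SLS) {
	case PCI_EXP_LNKCAP_SLS_5_0GB:
		return PCIE_SPEED_5_0GT;
	case PCI_EXP_LNKCAP_SLS_2_5GB:
		return PCIE_SPEED_2_5GT;
	default:
		return PCIE_SPEED_UNKNOWN;
	}
}
```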
+11 -9
drivers/phy/qualcomm/phy-qcom-qusb2.c
··· 231 231 .mask_core_ready = CORE_READY_STATUS, 232 232 .has_pll_override = true, 233 233 .autoresume_en = BIT(0), 234 + .update_tune1_with_efuse = true, 234 235 }; 235 236 236 237 static const char * const qusb2_phy_vreg_names[] = { ··· 403 402 404 403 /* 405 404 * Read efuse register having TUNE2/1 parameter's high nibble. 406 - * If efuse register shows value as 0x0, or if we fail to find 407 - * a valid efuse register settings, then use default value 408 - * as 0xB for high nibble that we have already set while 409 - * configuring phy. 405 + * If efuse register shows value as 0x0 (indicating value is not 406 + * fused), or if we fail to find a valid efuse register setting, 407 + * then use default value for high nibble that we have already 408 + * set while configuring the phy. 410 409 */ 411 410 val = nvmem_cell_read(qphy->cell, NULL); 412 411 if (IS_ERR(val) || !val[0]) { ··· 416 415 417 416 /* Fused TUNE1/2 value is the higher nibble only */ 418 417 if (cfg->update_tune1_with_efuse) 419 - qusb2_setbits(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE1], 420 - val[0] << 0x4); 418 + qusb2_write_mask(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE1], 419 + val[0] << HSTX_TRIM_SHIFT, 420 + HSTX_TRIM_MASK); 421 421 else 422 - qusb2_setbits(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE2], 423 - val[0] << 0x4); 424 - 422 + qusb2_write_mask(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE2], 423 + val[0] << HSTX_TRIM_SHIFT, 424 + HSTX_TRIM_MASK); 425 425 } 426 426 427 427 static int qusb2_phy_set_mode(struct phy *phy, enum phy_mode mode)
+2 -1
drivers/phy/socionext/Kconfig
··· 26 26 27 27 config PHY_UNIPHIER_PCIE 28 28 tristate "Uniphier PHY driver for PCIe controller" 29 - depends on (ARCH_UNIPHIER || COMPILE_TEST) && OF 29 + depends on ARCH_UNIPHIER || COMPILE_TEST 30 + depends on OF && HAS_IOMEM 30 31 default PCIE_UNIPHIER 31 32 select GENERIC_PHY 32 33 help
+1 -1
drivers/pinctrl/meson/pinctrl-meson-gxbb.c
··· 830 830 831 831 static struct meson_bank meson_gxbb_aobus_banks[] = { 832 832 /* name first last irq pullen pull dir out in */ 833 - BANK("AO", GPIOAO_0, GPIOAO_13, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0), 833 + BANK("AO", GPIOAO_0, GPIOAO_13, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0), 834 834 }; 835 835 836 836 static struct meson_pinctrl_data meson_gxbb_periphs_pinctrl_data = {
+1 -1
drivers/pinctrl/meson/pinctrl-meson-gxl.c
··· 807 807 808 808 static struct meson_bank meson_gxl_aobus_banks[] = { 809 809 /* name first last irq pullen pull dir out in */ 810 - BANK("AO", GPIOAO_0, GPIOAO_9, 0, 9, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0), 810 + BANK("AO", GPIOAO_0, GPIOAO_9, 0, 9, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0), 811 811 }; 812 812 813 813 static struct meson_pinctrl_data meson_gxl_periphs_pinctrl_data = {
+1 -1
drivers/pinctrl/meson/pinctrl-meson.c
··· 192 192 dev_dbg(pc->dev, "pin %u: disable bias\n", pin); 193 193 194 194 meson_calc_reg_and_bit(bank, pin, REG_PULL, &reg, &bit); 195 - ret = regmap_update_bits(pc->reg_pull, reg, 195 + ret = regmap_update_bits(pc->reg_pullen, reg, 196 196 BIT(bit), 0); 197 197 if (ret) 198 198 return ret;
+1 -1
drivers/pinctrl/meson/pinctrl-meson8.c
··· 1053 1053 1054 1054 static struct meson_bank meson8_aobus_banks[] = { 1055 1055 /* name first last irq pullen pull dir out in */ 1056 - BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0), 1056 + BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0), 1057 1057 }; 1058 1058 1059 1059 static struct meson_pinctrl_data meson8_cbus_pinctrl_data = {
+1 -1
drivers/pinctrl/meson/pinctrl-meson8b.c
··· 906 906 907 907 static struct meson_bank meson8b_aobus_banks[] = { 908 908 /* name first lastc irq pullen pull dir out in */ 909 - BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0), 909 + BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0), 910 910 }; 911 911 912 912 static struct meson_pinctrl_data meson8b_cbus_pinctrl_data = {
+3 -1
drivers/rtc/hctosys.c
··· 50 50 tv64.tv_sec = rtc_tm_to_time64(&tm); 51 51 52 52 #if BITS_PER_LONG == 32 53 - if (tv64.tv_sec > INT_MAX) 53 + if (tv64.tv_sec > INT_MAX) { 54 + err = -ERANGE; 54 55 goto err_read; 56 + } 55 57 #endif 56 58 57 59 err = do_settimeofday64(&tv64);
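The hctosys fix makes the 32-bit overflow path return -ERANGE instead of falling through with a stale error code. The guard is simple enough to sketch with BITS_PER_LONG passed in explicitly (`check_rtc_seconds` is a hypothetical helper; -ERANGE is errno 34 on Linux):

```c
#include <stdint.h>
#include <limits.h>

#define ERANGE_VAL 34	/* Linux errno for "result out of range" */

/* A 64-bit RTC second count that does not fit in a 32-bit time_t
 * cannot be handed to settimeofday, so reject it explicitly. */
static int check_rtc_seconds(int64_t tv_sec, int bits_per_long)
{
	if (bits_per_long == 32 && tv_sec > INT_MAX)
		return -ERANGE_VAL;
	return 0;
}
```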
+12 -4
drivers/rtc/rtc-cmos.c
··· 257 257 struct cmos_rtc *cmos = dev_get_drvdata(dev); 258 258 unsigned char rtc_control; 259 259 260 + /* This is not only an rtc_op, but is also called directly */ 260 261 if (!is_valid_irq(cmos->irq)) 261 262 return -EIO; 262 263 ··· 453 452 unsigned char mon, mday, hrs, min, sec, rtc_control; 454 453 int ret; 455 454 455 + /* This is not only an rtc_op, but is also called directly */ 456 456 if (!is_valid_irq(cmos->irq)) 457 457 return -EIO; 458 458 ··· 518 516 struct cmos_rtc *cmos = dev_get_drvdata(dev); 519 517 unsigned long flags; 520 518 521 - if (!is_valid_irq(cmos->irq)) 522 - return -EINVAL; 523 - 524 519 spin_lock_irqsave(&rtc_lock, flags); 525 520 526 521 if (enabled) ··· 576 577 .set_alarm = cmos_set_alarm, 577 578 .proc = cmos_procfs, 578 579 .alarm_irq_enable = cmos_alarm_irq_enable, 580 + }; 581 + 582 + static const struct rtc_class_ops cmos_rtc_ops_no_alarm = { 583 + .read_time = cmos_read_time, 584 + .set_time = cmos_set_time, 585 + .proc = cmos_procfs, 579 586 }; 580 587 581 588 /*----------------------------------------------------------------*/ ··· 860 855 dev_dbg(dev, "IRQ %d is already in use\n", rtc_irq); 861 856 goto cleanup1; 862 857 } 858 + 859 + cmos_rtc.rtc->ops = &cmos_rtc_ops; 860 + } else { 861 + cmos_rtc.rtc->ops = &cmos_rtc_ops_no_alarm; 863 862 } 864 863 865 - cmos_rtc.rtc->ops = &cmos_rtc_ops; 866 864 cmos_rtc.rtc->nvram_old_abi = true; 867 865 retval = rtc_register_device(cmos_rtc.rtc); 868 866 if (retval)
+1 -1
drivers/rtc/rtc-hid-sensor-time.c
··· 213 213 /* get a report with all values through requesting one value */ 214 214 sensor_hub_input_attr_get_raw_value(time_state->common_attributes.hsdev, 215 215 HID_USAGE_SENSOR_TIME, hid_time_addresses[0], 216 - time_state->info[0].report_id, SENSOR_HUB_SYNC); 216 + time_state->info[0].report_id, SENSOR_HUB_SYNC, false); 217 217 /* wait for all values (event) */ 218 218 ret = wait_for_completion_killable_timeout( 219 219 &time_state->comp_last_time, HZ*6);
+3
drivers/rtc/rtc-pcf2127.c
··· 303 303 memcpy(buf + 1, val, val_size); 304 304 305 305 ret = i2c_master_send(client, buf, val_size + 1); 306 + 307 + kfree(buf); 308 + 306 309 if (ret != val_size + 1) 307 310 return ret < 0 ? ret : -EIO; 308 311
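The pcf2127 fix frees the temporary register buffer before the transfer result is inspected, so neither the error return nor the success return can leak it. A userspace sketch of the same shape (`fake_send` stands in for `i2c_master_send`, and the errno values are hard-coded):

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for i2c_master_send(): pretends the whole buffer went out. */
static int fake_send(const unsigned char *buf, size_t count)
{
	(void)buf;
	return (int)count;
}

static int write_regs(unsigned char reg, const void *val, size_t val_size)
{
	unsigned char *buf;
	int ret;

	buf = malloc(val_size + 1);
	if (!buf)
		return -12;			/* -ENOMEM */
	buf[0] = reg;
	memcpy(buf + 1, val, val_size);

	ret = fake_send(buf, val_size + 1);

	free(buf);	/* freed on every path, before any early return */

	if (ret != (int)(val_size + 1))
		return ret < 0 ? ret : -5;	/* -EIO */
	return 0;
}
```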
+4 -2
drivers/s390/cio/vfio_ccw_cp.c
··· 387 387 * orb specified one of the unsupported formats, we defer 388 388 * checking for IDAWs in unsupported formats to here. 389 389 */ 390 - if ((!cp->orb.cmd.c64 || cp->orb.cmd.i2k) && ccw_is_idal(ccw)) 390 + if ((!cp->orb.cmd.c64 || cp->orb.cmd.i2k) && ccw_is_idal(ccw)) { 391 + kfree(p); 391 392 return -EOPNOTSUPP; 393 + } 392 394 393 395 if ((!ccw_is_chain(ccw)) && (!ccw_is_tic(ccw))) 394 396 break; ··· 530 528 531 529 ret = pfn_array_alloc_pin(pat->pat_pa, cp->mdev, ccw->cda, ccw->count); 532 530 if (ret < 0) 533 - goto out_init; 531 + goto out_unpin; 534 532 535 533 /* Translate this direct ccw to an idal ccw. */ 536 534 idaws = kcalloc(ret, sizeof(*idaws), GFP_DMA | GFP_KERNEL);
+5 -5
drivers/s390/cio/vfio_ccw_drv.c
··· 22 22 #include "vfio_ccw_private.h" 23 23 24 24 struct workqueue_struct *vfio_ccw_work_q; 25 - struct kmem_cache *vfio_ccw_io_region; 25 + static struct kmem_cache *vfio_ccw_io_region; 26 26 27 27 /* 28 28 * Helpers ··· 134 134 if (ret) 135 135 goto out_free; 136 136 137 - ret = vfio_ccw_mdev_reg(sch); 138 - if (ret) 139 - goto out_disable; 140 - 141 137 INIT_WORK(&private->io_work, vfio_ccw_sch_io_todo); 142 138 atomic_set(&private->avail, 1); 143 139 private->state = VFIO_CCW_STATE_STANDBY; 140 + 141 + ret = vfio_ccw_mdev_reg(sch); 142 + if (ret) 143 + goto out_disable; 144 144 145 145 return 0; 146 146
+4 -4
drivers/s390/crypto/ap_bus.c
··· 775 775 drvres = ap_drv->flags & AP_DRIVER_FLAG_DEFAULT; 776 776 if (!!devres != !!drvres) 777 777 return -ENODEV; 778 + /* (re-)init queue's state machine */ 779 + ap_queue_reinit_state(to_ap_queue(dev)); 778 780 } 779 781 780 782 /* Add queue/card to list of active queues/cards */ ··· 809 807 struct ap_device *ap_dev = to_ap_dev(dev); 810 808 struct ap_driver *ap_drv = ap_dev->drv; 811 809 810 + if (is_queue_dev(dev)) 811 + ap_queue_remove(to_ap_queue(dev)); 812 812 if (ap_drv->remove) 813 813 ap_drv->remove(ap_dev); 814 814 ··· 1448 1444 aq->ap_dev.device.parent = &ac->ap_dev.device; 1449 1445 dev_set_name(&aq->ap_dev.device, 1450 1446 "%02x.%04x", id, dom); 1451 - /* Start with a device reset */ 1452 - spin_lock_bh(&aq->lock); 1453 - ap_wait(ap_sm_event(aq, AP_EVENT_POLL)); 1454 - spin_unlock_bh(&aq->lock); 1455 1447 /* Register device */ 1456 1448 rc = device_register(&aq->ap_dev.device); 1457 1449 if (rc) {
+1
drivers/s390/crypto/ap_bus.h
··· 254 254 void ap_queue_remove(struct ap_queue *aq); 255 255 void ap_queue_suspend(struct ap_device *ap_dev); 256 256 void ap_queue_resume(struct ap_device *ap_dev); 257 + void ap_queue_reinit_state(struct ap_queue *aq); 257 258 258 259 struct ap_card *ap_card_create(int id, int queue_depth, int raw_device_type, 259 260 int comp_device_type, unsigned int functions);
+15
drivers/s390/crypto/ap_queue.c
··· 718 718 { 719 719 ap_flush_queue(aq); 720 720 del_timer_sync(&aq->timeout); 721 + 722 + /* reset with zero, also clears irq registration */ 723 + spin_lock_bh(&aq->lock); 724 + ap_zapq(aq->qid); 725 + aq->state = AP_STATE_BORKED; 726 + spin_unlock_bh(&aq->lock); 721 727 } 722 728 EXPORT_SYMBOL(ap_queue_remove); 729 + 730 + void ap_queue_reinit_state(struct ap_queue *aq) 731 + { 732 + spin_lock_bh(&aq->lock); 733 + aq->state = AP_STATE_RESET_START; 734 + ap_wait(ap_sm_event(aq, AP_EVENT_POLL)); 735 + spin_unlock_bh(&aq->lock); 736 + } 737 + EXPORT_SYMBOL(ap_queue_reinit_state);
-1
drivers/s390/crypto/zcrypt_cex2a.c
··· 196 196 struct ap_queue *aq = to_ap_queue(&ap_dev->device); 197 197 struct zcrypt_queue *zq = aq->private; 198 198 199 - ap_queue_remove(aq); 200 199 if (zq) 201 200 zcrypt_queue_unregister(zq); 202 201 }
-1
drivers/s390/crypto/zcrypt_cex2c.c
··· 251 251 struct ap_queue *aq = to_ap_queue(&ap_dev->device); 252 252 struct zcrypt_queue *zq = aq->private; 253 253 254 - ap_queue_remove(aq); 255 254 if (zq) 256 255 zcrypt_queue_unregister(zq); 257 256 }
-1
drivers/s390/crypto/zcrypt_cex4.c
··· 275 275 struct ap_queue *aq = to_ap_queue(&ap_dev->device); 276 276 struct zcrypt_queue *zq = aq->private; 277 277 278 - ap_queue_remove(aq); 279 278 if (zq) 280 279 zcrypt_queue_unregister(zq); 281 280 }
+1 -1
drivers/s390/net/ism_drv.c
··· 415 415 break; 416 416 417 417 clear_bit_inv(bit, bv); 418 + ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0; 418 419 barrier(); 419 420 smcd_handle_irq(ism->smcd, bit + ISM_DMB_BIT_OFFSET); 420 - ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0; 421 421 } 422 422 423 423 if (ism->sba->e) {
+20 -7
drivers/s390/net/qeth_core.h
··· 87 87 #define SENSE_RESETTING_EVENT_BYTE 1 88 88 #define SENSE_RESETTING_EVENT_FLAG 0x80 89 89 90 + static inline u32 qeth_get_device_id(struct ccw_device *cdev) 91 + { 92 + struct ccw_dev_id dev_id; 93 + u32 id; 94 + 95 + ccw_device_get_id(cdev, &dev_id); 96 + id = dev_id.devno; 97 + id |= (u32) (dev_id.ssid << 16); 98 + 99 + return id; 100 + } 101 + 90 102 /* 91 103 * Common IO related definitions 92 104 */ ··· 109 97 #define CARD_RDEV_ID(card) dev_name(&card->read.ccwdev->dev) 110 98 #define CARD_WDEV_ID(card) dev_name(&card->write.ccwdev->dev) 111 99 #define CARD_DDEV_ID(card) dev_name(&card->data.ccwdev->dev) 112 - #define CHANNEL_ID(channel) dev_name(&channel->ccwdev->dev) 100 + #define CCW_DEVID(cdev) (qeth_get_device_id(cdev)) 101 + #define CARD_DEVID(card) (CCW_DEVID(CARD_RDEV(card))) 113 102 114 103 /** 115 104 * card stuff ··· 843 830 /*some helper functions*/ 844 831 #define QETH_CARD_IFNAME(card) (((card)->dev)? (card)->dev->name : "") 845 832 833 + static inline bool qeth_netdev_is_registered(struct net_device *dev) 834 + { 835 + return dev->netdev_ops != NULL; 836 + } 837 + 846 838 static inline void qeth_scrub_qdio_buffer(struct qdio_buffer *buf, 847 839 unsigned int elements) 848 840 { ··· 991 973 int qeth_do_run_thread(struct qeth_card *, unsigned long); 992 974 void qeth_clear_thread_start_bit(struct qeth_card *, unsigned long); 993 975 void qeth_clear_thread_running_bit(struct qeth_card *, unsigned long); 994 976 int qeth_core_hardsetup_card(struct qeth_card *card, bool *carrier_ok); 995 977 void qeth_print_status_message(struct qeth_card *); 996 978 int qeth_init_qdio_queues(struct qeth_card *); 997 979 int qeth_send_ipa_cmd(struct qeth_card *, struct qeth_cmd_buffer *, ··· 1046 1028 int qeth_hw_trap(struct qeth_card *, enum qeth_diags_trap_action); 1047 1029 void qeth_trace_features(struct qeth_card *); 1048 1030 void qeth_close_dev(struct qeth_card *); 1049 - int qeth_send_setassparms(struct qeth_card *, struct qeth_cmd_buffer *, __u16, 1050 - long, 1051 - int (*reply_cb)(struct qeth_card *, 1052 - struct qeth_reply *, unsigned long), 1053 - void *); 1054 1031 int qeth_setassparms_cb(struct qeth_card *, struct qeth_reply *, unsigned long); 1055 1032 struct qeth_cmd_buffer *qeth_get_setassparms_cmd(struct qeth_card *, 1056 1033 enum qeth_ipa_funcs,
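The new `qeth_get_device_id()` in the qeth_core.h hunk packs a CCW device identifier into a single u32: the device number in the low half-word and the subchannel-set id shifted into bits 16 and up, which `CCW_DEVID`/`CARD_DEVID` then feed into the reworked log messages. The bit layout can be checked standalone (`qeth_devid` is a hypothetical helper mirroring that function):

```c
#include <stdint.h>

/* Pack ssid and devno the way qeth_get_device_id() does:
 * bits 0-15 = device number, bits 16+ = subchannel-set id. */
static uint32_t qeth_devid(uint16_t ssid, uint16_t devno)
{
	uint32_t id = devno;

	id |= (uint32_t)ssid << 16;
	return id;
}
```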
+104 -95
drivers/s390/net/qeth_core_main.c
··· 167 167 return "OSD_1000"; 168 168 case QETH_LINK_TYPE_10GBIT_ETH: 169 169 return "OSD_10GIG"; 170 + case QETH_LINK_TYPE_25GBIT_ETH: 171 + return "OSD_25GIG"; 170 172 case QETH_LINK_TYPE_LANE_ETH100: 171 173 return "OSD_FE_LANE"; 172 174 case QETH_LINK_TYPE_LANE_TR: ··· 556 554 if (!iob) { 557 555 dev_warn(&card->gdev->dev, "The qeth device driver " 558 556 "failed to recover an error on the device\n"); 559 - QETH_DBF_MESSAGE(2, "%s issue_next_read failed: no iob " 560 - "available\n", dev_name(&card->gdev->dev)); 557 + QETH_DBF_MESSAGE(2, "issue_next_read on device %x failed: no iob available\n", 558 + CARD_DEVID(card)); 561 559 return -ENOMEM; 562 560 } 563 561 qeth_setup_ccw(channel->ccw, CCW_CMD_READ, QETH_BUFSIZE, iob->data); ··· 565 563 rc = ccw_device_start(channel->ccwdev, channel->ccw, 566 564 (addr_t) iob, 0, 0); 567 565 if (rc) { 568 - QETH_DBF_MESSAGE(2, "%s error in starting next read ccw! " 569 - "rc=%i\n", dev_name(&card->gdev->dev), rc); 566 + QETH_DBF_MESSAGE(2, "error %i on device %x when starting next read ccw!\n", 567 + rc, CARD_DEVID(card)); 570 568 atomic_set(&channel->irq_pending, 0); 571 569 card->read_or_write_problem = 1; 572 570 qeth_schedule_recovery(card); ··· 615 613 const char *ipa_name; 616 614 int com = cmd->hdr.command; 617 615 ipa_name = qeth_get_ipa_cmd_name(com); 616 + 618 617 if (rc) 619 - QETH_DBF_MESSAGE(2, "IPA: %s(x%X) for %s/%s returned " 620 - "x%X \"%s\"\n", 621 - ipa_name, com, dev_name(&card->gdev->dev), 622 - QETH_CARD_IFNAME(card), rc, 623 - qeth_get_ipa_msg(rc)); 618 + QETH_DBF_MESSAGE(2, "IPA: %s(%#x) for device %x returned %#x \"%s\"\n", 619 + ipa_name, com, CARD_DEVID(card), rc, 620 + qeth_get_ipa_msg(rc)); 624 621 else 625 - QETH_DBF_MESSAGE(5, "IPA: %s(x%X) for %s/%s succeeded\n", 626 - ipa_name, com, dev_name(&card->gdev->dev), 627 - QETH_CARD_IFNAME(card)); 622 + QETH_DBF_MESSAGE(5, "IPA: %s(%#x) for device %x succeeded\n", 623 + ipa_name, com, CARD_DEVID(card)); 628 624 } 629 625 630 626 static struct qeth_ipa_cmd *qeth_check_ipa_data(struct qeth_card *card, ··· 711 711 712 712 QETH_DBF_HEX(CTRL, 2, buffer, QETH_DBF_CTRL_LEN); 713 713 if ((buffer[2] & 0xc0) == 0xc0) { 714 - QETH_DBF_MESSAGE(2, "received an IDX TERMINATE with cause code %#02x\n", 714 + QETH_DBF_MESSAGE(2, "received an IDX TERMINATE with cause code %#04x\n", 715 715 buffer[4]); 716 716 QETH_CARD_TEXT(card, 2, "ckidxres"); 717 717 QETH_CARD_TEXT(card, 2, " idxterm"); ··· 972 972 QETH_CARD_TEXT(card, 2, "CGENCHK"); 973 973 dev_warn(&cdev->dev, "The qeth device driver " 974 974 "failed to recover an error on the device\n"); 975 - QETH_DBF_MESSAGE(2, "%s check on device dstat=x%x, cstat=x%x\n", 976 - dev_name(&cdev->dev), dstat, cstat); 975 + QETH_DBF_MESSAGE(2, "check on channel %x with dstat=%#x, cstat=%#x\n", 976 + CCW_DEVID(cdev), dstat, cstat); 977 977 print_hex_dump(KERN_WARNING, "qeth: irb ", DUMP_PREFIX_OFFSET, 978 978 16, 1, irb, 64, 1); 979 979 return 1; ··· 1013 1013 1014 1014 switch (PTR_ERR(irb)) { 1015 1015 case -EIO: 1016 - QETH_DBF_MESSAGE(2, "%s i/o-error on device\n", 1017 - dev_name(&cdev->dev)); 1016 + QETH_DBF_MESSAGE(2, "i/o-error on channel %x\n", 1017 + CCW_DEVID(cdev)); 1018 1018 QETH_CARD_TEXT(card, 2, "ckirberr"); 1019 1019 QETH_CARD_TEXT_(card, 2, " rc%d", -EIO); 1020 1020 break; ··· 1031 1031 } 1032 1032 break; 1033 1033 default: 1034 - QETH_DBF_MESSAGE(2, "%s unknown error %ld on device\n", 1035 - dev_name(&cdev->dev), PTR_ERR(irb)); 1034 + QETH_DBF_MESSAGE(2, "unknown error %ld on channel %x\n", 1035 + PTR_ERR(irb), CCW_DEVID(cdev)); 1036 1036 QETH_CARD_TEXT(card, 2, "ckirberr"); 1037 1037 QETH_CARD_TEXT(card, 2, " rc???"); 1038 1038 } ··· 1114 1114 dev_warn(&channel->ccwdev->dev, 1115 1115 "The qeth device driver failed to recover " 1116 1116 "an error on the device\n"); 1117 - QETH_DBF_MESSAGE(2, "%s sense data available. cstat " 1118 - "0x%X dstat 0x%X\n", 1119 - dev_name(&channel->ccwdev->dev), cstat, dstat); 1117 + QETH_DBF_MESSAGE(2, "sense data available on channel %x: cstat %#X dstat %#X\n", 1118 + CCW_DEVID(channel->ccwdev), cstat, 1119 + dstat); 1120 1120 print_hex_dump(KERN_WARNING, "qeth: irb ", 1121 1121 DUMP_PREFIX_OFFSET, 16, 1, irb, 32, 1); 1122 1122 print_hex_dump(KERN_WARNING, "qeth: sense data ", ··· 1890 1890 if (channel->state != CH_STATE_ACTIVATING) { 1891 1891 dev_warn(&channel->ccwdev->dev, "The qeth device driver" 1892 1892 " failed to recover an error on the device\n"); 1893 - QETH_DBF_MESSAGE(2, "%s IDX activate timed out\n", 1894 - dev_name(&channel->ccwdev->dev)); 1893 + QETH_DBF_MESSAGE(2, "IDX activate timed out on channel %x\n", 1894 + CCW_DEVID(channel->ccwdev)); 1895 1895 QETH_DBF_TEXT_(SETUP, 2, "2err%d", -ETIME); 1896 1896 return -ETIME; 1897 1897 } ··· 1926 1926 "The adapter is used exclusively by another " 1927 1927 "host\n"); 1928 1928 else 1929 - QETH_DBF_MESSAGE(2, "%s IDX_ACTIVATE on write channel:" 1930 - " negative reply\n", 1931 - dev_name(&channel->ccwdev->dev)); 1929 + QETH_DBF_MESSAGE(2, "IDX_ACTIVATE on channel %x: negative reply\n", 1930 + CCW_DEVID(channel->ccwdev)); 1932 1931 goto out; 1933 1932 } 1934 1933 memcpy(&temp, QETH_IDX_ACT_FUNC_LEVEL(iob->data), 2); 1935 1934 if ((temp & ~0x0100) != qeth_peer_func_level(card->info.func_level)) { 1936 - QETH_DBF_MESSAGE(2, "%s IDX_ACTIVATE on write channel: " 1937 - "function level mismatch (sent: 0x%x, received: " 1938 - "0x%x)\n", dev_name(&channel->ccwdev->dev), 1939 - card->info.func_level, temp); 1935 + QETH_DBF_MESSAGE(2, "IDX_ACTIVATE on channel %x: function level mismatch (sent: %#x, received: %#x)\n", 1936 + CCW_DEVID(channel->ccwdev), 1937 + card->info.func_level, temp); 1940 1938 goto out; 1941 1939 } 1942 1940 channel->state = CH_STATE_UP; ··· 1971 1973 "insufficient authorization\n"); 1972 1974 break; 1973 1975 default: 1974 - QETH_DBF_MESSAGE(2, "%s IDX_ACTIVATE on read channel:" 1975 - " negative reply\n", 1976 - dev_name(&channel->ccwdev->dev)); 1976 + QETH_DBF_MESSAGE(2, "IDX_ACTIVATE on channel %x: negative reply\n", 1977 + CCW_DEVID(channel->ccwdev)); 1977 1978 } 1978 1979 QETH_CARD_TEXT_(card, 2, "idxread%c", 1979 1980 QETH_IDX_ACT_CAUSE_CODE(iob->data)); ··· 1981 1984 1982 1985 memcpy(&temp, QETH_IDX_ACT_FUNC_LEVEL(iob->data), 2); 1983 1986 if (temp != qeth_peer_func_level(card->info.func_level)) { 1984 - QETH_DBF_MESSAGE(2, "%s IDX_ACTIVATE on read channel: function " 1985 - "level mismatch (sent: 0x%x, received: 0x%x)\n", 1986 - dev_name(&channel->ccwdev->dev), 1987 - card->info.func_level, temp); 1987 + QETH_DBF_MESSAGE(2, "IDX_ACTIVATE on channel %x: function level mismatch (sent: %#x, received: %#x)\n", 1988 + CCW_DEVID(channel->ccwdev), 1989 + card->info.func_level, temp); 1988 1990 goto out; 1989 1991 } 1990 1992 memcpy(&card->token.issuer_rm_r, ··· 2092 2096 (addr_t) iob, 0, 0, event_timeout); 2093 2097 spin_unlock_irq(get_ccwdev_lock(channel->ccwdev)); 2094 2098 if (rc) { 2095 - QETH_DBF_MESSAGE(2, "%s qeth_send_control_data: " 2096 - "ccw_device_start rc = %i\n", 2097 - dev_name(&channel->ccwdev->dev), rc); 2099 + QETH_DBF_MESSAGE(2, "qeth_send_control_data on device %x: ccw_device_start rc = %i\n", 2100 + CARD_DEVID(card), rc); 2098 2101 QETH_CARD_TEXT_(card, 2, " err%d", rc); 2099 2102 spin_lock_irq(&card->lock); 2100 2103 list_del_init(&reply->list); ··· 2848 2853 } else { 2849 2854 dev_warn(&card->gdev->dev, 2850 2855 "The qeth driver ran out of channel command buffers\n"); 2851 - QETH_DBF_MESSAGE(1, "%s The qeth driver ran out of channel command buffers", 2852 - dev_name(&card->gdev->dev)); 2856 + QETH_DBF_MESSAGE(1, "device %x ran out of channel command buffers", 2857 + CARD_DEVID(card)); 2853 2858 } 2854 2859 2855 2860 return iob; ··· 2984 2989 return 0; 2985 2990 default: 2986 2991 if (cmd->hdr.return_code) { 2987 - QETH_DBF_MESSAGE(1, "%s IPA_CMD_QIPASSIST: Unhandled " 2988 - "rc=%d\n", 2989 - dev_name(&card->gdev->dev), 2990 - cmd->hdr.return_code); 2992 + QETH_DBF_MESSAGE(1, "IPA_CMD_QIPASSIST on device %x: Unhandled rc=%#x\n", 2993 + CARD_DEVID(card), 2994 + cmd->hdr.return_code); 2990 2995 return 0; 2991 2996 } 2992 2997 } ··· 2998 3004 card->options.ipa6.supported_funcs = cmd->hdr.ipa_supported; 2999 3005 card->options.ipa6.enabled_funcs = cmd->hdr.ipa_enabled; 3000 3006 } else 3001 - QETH_DBF_MESSAGE(1, "%s IPA_CMD_QIPASSIST: Flawed LIC detected" 3002 - "\n", dev_name(&card->gdev->dev)); 3007 + QETH_DBF_MESSAGE(1, "IPA_CMD_QIPASSIST on device %x: Flawed LIC detected\n", 3008 + CARD_DEVID(card)); 3003 3009 return 0; 3004 3010 } ··· 4291 4297 cmd->data.setadapterparms.hdr.return_code); 4292 4298 if (cmd->data.setadapterparms.hdr.return_code != 4293 4299 SET_ACCESS_CTRL_RC_SUCCESS) 4294 - QETH_DBF_MESSAGE(3, "ERR:SET_ACCESS_CTRL(%s,%d)==%d\n", 4295 - card->gdev->dev.kobj.name, 4296 - access_ctrl_req->subcmd_code, 4297 - cmd->data.setadapterparms.hdr.return_code); 4300 + QETH_DBF_MESSAGE(3, "ERR:SET_ACCESS_CTRL(%#x) on device %x: %#x\n", 4301 + access_ctrl_req->subcmd_code, CARD_DEVID(card), 4302 + cmd->data.setadapterparms.hdr.return_code); 4298 4303 switch (cmd->data.setadapterparms.hdr.return_code) { 4299 4304 case SET_ACCESS_CTRL_RC_SUCCESS: 4300 4305 if (card->options.isolation == ISOLATION_MODE_NONE) { ··· 4305 4312 } 4306 4313 break; 4307 4314 case SET_ACCESS_CTRL_RC_ALREADY_NOT_ISOLATED: 4308 - QETH_DBF_MESSAGE(2, "%s QDIO data connection isolation already " 4309 - "deactivated\n", dev_name(&card->gdev->dev)); 4315 + QETH_DBF_MESSAGE(2, "QDIO data connection isolation on device %x already deactivated\n", 4316 + CARD_DEVID(card)); 4310 4317 if (fallback) 4311 4318 card->options.isolation = card->options.prev_isolation; 4312 4319 break; 4313 4320 case SET_ACCESS_CTRL_RC_ALREADY_ISOLATED: 4314 - QETH_DBF_MESSAGE(2, "%s QDIO data connection isolation already" 4315 - " activated\n", dev_name(&card->gdev->dev)); 4321 + QETH_DBF_MESSAGE(2, "QDIO data connection isolation on device %x already activated\n", 4322 + CARD_DEVID(card)); 4316 4323 if (fallback) 4317 4324 card->options.isolation = card->options.prev_isolation; 4318 4325 break; ··· 4398 4405 rc = qeth_setadpparms_set_access_ctrl(card, 4399 4406 card->options.isolation, fallback); 4400 4407 if (rc) { 4401 - QETH_DBF_MESSAGE(3, 4402 - "IPA(SET_ACCESS_CTRL,%s,%d) sent failed\n", 4403 - card->gdev->dev.kobj.name, 4404 - rc); 4408 + QETH_DBF_MESSAGE(3, "IPA(SET_ACCESS_CTRL(%d) on device %x: sent failed\n", 4409 + rc, CARD_DEVID(card)); 4405 4410 rc = -EOPNOTSUPP; 4406 4411 } 4407 4412 } else if (card->options.isolation != ISOLATION_MODE_NONE) { ··· 4434 4443 rc = BMCR_FULLDPLX; 4435 4444 if ((card->info.link_type != QETH_LINK_TYPE_GBIT_ETH) && 4436 4445 (card->info.link_type != QETH_LINK_TYPE_OSN) && 4437 - (card->info.link_type != QETH_LINK_TYPE_10GBIT_ETH)) 4446 + (card->info.link_type != QETH_LINK_TYPE_10GBIT_ETH) && 4447 + (card->info.link_type != QETH_LINK_TYPE_25GBIT_ETH)) 4438 4448 rc |= BMCR_SPEED100; 4439 4449 break; 4440 4450 case MII_BMSR: /* Basic mode status register */ ··· 4518 4526 { 4519 4527 struct qeth_ipa_cmd *cmd; 4520 4528 struct qeth_arp_query_info *qinfo; 4521 - struct qeth_snmp_cmd *snmp; 4522 4530 unsigned char *data; 4530 + void *snmp_data; 4523 4531 __u16 data_len; 4524 4532 4525 4533 QETH_CARD_TEXT(card, 3, "snpcmdcb"); ··· 4527 4535 cmd = (struct qeth_ipa_cmd *) sdata; 4528 4536 data = (unsigned char *)((char *)cmd - reply->offset); 4529 4537 qinfo = (struct qeth_arp_query_info *) reply->param; 4530 - snmp = &cmd->data.setadapterparms.data.snmp; 4531 4538 4532 4539 if (cmd->hdr.return_code) { 4533 4540 QETH_CARD_TEXT_(card, 4, "scer1%x", cmd->hdr.return_code); ··· 4539 4548 return 0; 4540 4549 } 4541 4550 data_len = *((__u16 *)QETH_IPA_PDU_LEN_PDU1(data)); 4542 - if (cmd->data.setadapterparms.hdr.seq_no == 1) 4543 - data_len -= (__u16)((char *)&snmp->data - (char *)cmd); 4544 - else 4545 - data_len -= (__u16)((char *)&snmp->request - (char *)cmd); 4551 + if (cmd->data.setadapterparms.hdr.seq_no == 1) { 4552 + snmp_data = &cmd->data.setadapterparms.data.snmp; 4553 + data_len -= offsetof(struct qeth_ipa_cmd, 4554 + data.setadapterparms.data.snmp); 4555 + } else { 4556 + snmp_data = &cmd->data.setadapterparms.data.snmp.request; 4557 + data_len -= offsetof(struct qeth_ipa_cmd, 4558 + data.setadapterparms.data.snmp.request); 4559 + } 4546 4560 4547 4561 /* check if there is enough room in userspace */ 4548 4562 if ((qinfo->udata_len - qinfo->udata_offset) < data_len) { ··· 4560 4564 QETH_CARD_TEXT_(card, 4, "sseqn%i", 4561 4565 cmd->data.setadapterparms.hdr.seq_no); 4562 4566 /*copy entries to user buffer*/ 4563 - if (cmd->data.setadapterparms.hdr.seq_no == 1) { 4564 - memcpy(qinfo->udata + qinfo->udata_offset, 4565 - (char *)snmp, 4566 - data_len + offsetof(struct qeth_snmp_cmd, data)); 4567 - qinfo->udata_offset += offsetof(struct qeth_snmp_cmd, data); 4568 - } else { 4569 - memcpy(qinfo->udata + qinfo->udata_offset, 4570 - (char *)&snmp->request, data_len); 4571 - } 4567 + memcpy(qinfo->udata + qinfo->udata_offset, snmp_data, data_len); 4572 4568 qinfo->udata_offset += data_len; 4569 + 4573 4570 /* check if all replies received 
*/ 4574 4571 QETH_CARD_TEXT_(card, 4, "srtot%i", 4575 4572 cmd->data.setadapterparms.hdr.used_total); ··· 4623 4634 rc = qeth_send_ipa_snmp_cmd(card, iob, QETH_SETADP_BASE_LEN + req_len, 4624 4635 qeth_snmp_command_cb, (void *)&qinfo); 4625 4636 if (rc) 4626 - QETH_DBF_MESSAGE(2, "SNMP command failed on %s: (0x%x)\n", 4627 - QETH_CARD_IFNAME(card), rc); 4637 + QETH_DBF_MESSAGE(2, "SNMP command failed on device %x: (%#x)\n", 4638 + CARD_DEVID(card), rc); 4628 4639 else { 4629 4640 if (copy_to_user(udata, qinfo.udata, qinfo.udata_len)) 4630 4641 rc = -EFAULT; ··· 4858 4869 4859 4870 rc = qeth_read_conf_data(card, (void **) &prcd, &length); 4860 4871 if (rc) { 4861 - QETH_DBF_MESSAGE(2, "%s qeth_read_conf_data returned %i\n", 4862 - dev_name(&card->gdev->dev), rc); 4872 + QETH_DBF_MESSAGE(2, "qeth_read_conf_data on device %x returned %i\n", 4873 + CARD_DEVID(card), rc); 4863 4874 QETH_DBF_TEXT_(SETUP, 2, "5err%d", rc); 4864 4875 goto out_offline; 4865 4876 } ··· 5075 5086 .remove = ccwgroup_remove_ccwdev, 5076 5087 }; 5077 5088 5078 - int qeth_core_hardsetup_card(struct qeth_card *card) 5089 + int qeth_core_hardsetup_card(struct qeth_card *card, bool *carrier_ok) 5079 5090 { 5080 5091 int retries = 3; 5081 5092 int rc; ··· 5085 5096 qeth_update_from_chp_desc(card); 5086 5097 retry: 5087 5098 if (retries < 3) 5088 - QETH_DBF_MESSAGE(2, "%s Retrying to do IDX activates.\n", 5089 - dev_name(&card->gdev->dev)); 5099 + QETH_DBF_MESSAGE(2, "Retrying to do IDX activates on device %x.\n", 5100 + CARD_DEVID(card)); 5090 5101 rc = qeth_qdio_clear_card(card, card->info.type != QETH_CARD_TYPE_IQD); 5091 5102 ccw_device_set_offline(CARD_DDEV(card)); 5092 5103 ccw_device_set_offline(CARD_WDEV(card)); ··· 5150 5161 if (rc == IPA_RC_LAN_OFFLINE) { 5151 5162 dev_warn(&card->gdev->dev, 5152 5163 "The LAN is offline\n"); 5153 - netif_carrier_off(card->dev); 5164 + *carrier_ok = false; 5154 5165 } else { 5155 5166 rc = -ENODEV; 5156 5167 goto out; 5157 5168 } 5158 5169 } else { 5159 - 
netif_carrier_on(card->dev); 5170 + *carrier_ok = true; 5171 + } 5172 + 5173 + if (qeth_netdev_is_registered(card->dev)) { 5174 + if (*carrier_ok) 5175 + netif_carrier_on(card->dev); 5176 + else 5177 + netif_carrier_off(card->dev); 5160 5178 } 5161 5179 5162 5180 card->options.ipa4.supported_funcs = 0; ··· 5197 5201 out: 5198 5202 dev_warn(&card->gdev->dev, "The qeth device driver failed to recover " 5199 5203 "an error on the device\n"); 5200 - QETH_DBF_MESSAGE(2, "%s Initialization in hardsetup failed! rc=%d\n", 5201 - dev_name(&card->gdev->dev), rc); 5204 + QETH_DBF_MESSAGE(2, "Initialization for device %x failed in hardsetup! rc=%d\n", 5205 + CARD_DEVID(card), rc); 5202 5206 return rc; 5203 5207 } 5204 5208 EXPORT_SYMBOL_GPL(qeth_core_hardsetup_card); ··· 5477 5481 } 5478 5482 EXPORT_SYMBOL_GPL(qeth_get_setassparms_cmd); 5479 5483 5480 - int qeth_send_setassparms(struct qeth_card *card, 5481 - struct qeth_cmd_buffer *iob, __u16 len, long data, 5482 - int (*reply_cb)(struct qeth_card *, 5483 - struct qeth_reply *, unsigned long), 5484 - void *reply_param) 5484 + static int qeth_send_setassparms(struct qeth_card *card, 5485 + struct qeth_cmd_buffer *iob, u16 len, 5486 + long data, int (*reply_cb)(struct qeth_card *, 5487 + struct qeth_reply *, 5488 + unsigned long), 5489 + void *reply_param) 5485 5490 { 5486 5491 int rc; 5487 5492 struct qeth_ipa_cmd *cmd; ··· 5498 5501 rc = qeth_send_ipa_cmd(card, iob, reply_cb, reply_param); 5499 5502 return rc; 5500 5503 } 5501 - EXPORT_SYMBOL_GPL(qeth_send_setassparms); 5502 5504 5503 5505 int qeth_send_simple_setassparms_prot(struct qeth_card *card, 5504 5506 enum qeth_ipa_funcs ipa_func, ··· 6166 6170 WARN_ON_ONCE(1); 6167 6171 } 6168 6172 6169 - /* fallthrough from high to low, to select all legal speeds: */ 6173 + /* partially does fall through, to also select lower speeds */ 6170 6174 switch (maxspeed) { 6175 + case SPEED_25000: 6176 + ethtool_link_ksettings_add_link_mode(cmd, supported, 6177 + 25000baseSR_Full); 6178 + 
ethtool_link_ksettings_add_link_mode(cmd, advertising, 6179 + 25000baseSR_Full); 6180 + break; 6171 6181 case SPEED_10000: 6172 6182 ethtool_link_ksettings_add_link_mode(cmd, supported, 6173 6183 10000baseT_Full); ··· 6256 6254 cmd->base.speed = SPEED_10000; 6257 6255 cmd->base.port = PORT_FIBRE; 6258 6256 break; 6257 + case QETH_LINK_TYPE_25GBIT_ETH: 6258 + cmd->base.speed = SPEED_25000; 6259 + cmd->base.port = PORT_FIBRE; 6260 + break; 6259 6261 default: 6260 6262 cmd->base.speed = SPEED_10; 6261 6263 cmd->base.port = PORT_TP; ··· 6325 6319 break; 6326 6320 case CARD_INFO_PORTS_10G: 6327 6321 cmd->base.speed = SPEED_10000; 6322 + break; 6323 + case CARD_INFO_PORTS_25G: 6324 + cmd->base.speed = SPEED_25000; 6328 6325 break; 6329 6326 } 6330 6327
+3 -1
drivers/s390/net/qeth_core_mpc.h
··· 90 90 QETH_LINK_TYPE_GBIT_ETH = 0x03,
91 91 QETH_LINK_TYPE_OSN = 0x04,
92 92 QETH_LINK_TYPE_10GBIT_ETH = 0x10,
93 + QETH_LINK_TYPE_25GBIT_ETH = 0x12,
93 94 QETH_LINK_TYPE_LANE_ETH100 = 0x81,
94 95 QETH_LINK_TYPE_LANE_TR = 0x82,
95 96 QETH_LINK_TYPE_LANE_ETH1000 = 0x83,
··· 348 347 CARD_INFO_PORTS_100M = 0x00000006,
349 348 CARD_INFO_PORTS_1G = 0x00000007,
350 349 CARD_INFO_PORTS_10G = 0x00000008,
350 + CARD_INFO_PORTS_25G = 0x0000000A,
351 351 };
352 352
353 353 /* (SET)DELIP(M) IPA stuff ***************************************************/
··· 438 436 __u32 flags_32bit;
439 437 struct qeth_ipa_caps caps;
440 438 struct qeth_checksum_cmd chksum;
441 - struct qeth_arp_cache_entry add_arp_entry;
439 + struct qeth_arp_cache_entry arp_entry;
442 440 struct qeth_arp_query_data query_arp;
443 441 struct qeth_tso_start_data tso;
444 442 __u8 ip[16];
+22 -17
drivers/s390/net/qeth_l2_main.c
··· 146 146 QETH_CARD_TEXT(card, 2, "L2Wmac");
147 147 rc = qeth_l2_send_setdelmac(card, mac, cmd);
148 148 if (rc == -EEXIST)
149 - QETH_DBF_MESSAGE(2, "MAC %pM already registered on %s\n",
150 - mac, QETH_CARD_IFNAME(card));
149 + QETH_DBF_MESSAGE(2, "MAC already registered on device %x\n",
150 + CARD_DEVID(card));
151 151 else if (rc)
152 - QETH_DBF_MESSAGE(2, "Failed to register MAC %pM on %s: %d\n",
153 - mac, QETH_CARD_IFNAME(card), rc);
152 + QETH_DBF_MESSAGE(2, "Failed to register MAC on device %x: %d\n",
153 + CARD_DEVID(card), rc);
154 154 return rc;
155 155 }
156 156
··· 163 163 QETH_CARD_TEXT(card, 2, "L2Rmac");
164 164 rc = qeth_l2_send_setdelmac(card, mac, cmd);
165 165 if (rc)
166 - QETH_DBF_MESSAGE(2, "Failed to delete MAC %pM on %s: %d\n",
167 - mac, QETH_CARD_IFNAME(card), rc);
166 + QETH_DBF_MESSAGE(2, "Failed to delete MAC on device %u: %d\n",
167 + CARD_DEVID(card), rc);
168 168 return rc;
169 169 }
170 170
··· 260 260
261 261 QETH_CARD_TEXT(card, 2, "L2sdvcb");
262 262 if (cmd->hdr.return_code) {
263 - QETH_DBF_MESSAGE(2, "Error in processing VLAN %i on %s: 0x%x.\n",
263 + QETH_DBF_MESSAGE(2, "Error in processing VLAN %u on device %x: %#x.\n",
264 264 cmd->data.setdelvlan.vlan_id,
265 - QETH_CARD_IFNAME(card), cmd->hdr.return_code);
265 + CARD_DEVID(card), cmd->hdr.return_code);
266 266 QETH_CARD_TEXT_(card, 2, "L2VL%4x", cmd->hdr.command);
267 267 QETH_CARD_TEXT_(card, 2, "err%d", cmd->hdr.return_code);
268 268 }
··· 455 455 rc = qeth_vm_request_mac(card);
456 456 if (!rc)
457 457 goto out;
458 - QETH_DBF_MESSAGE(2, "z/VM MAC Service failed on device %s: x%x\n",
459 - CARD_BUS_ID(card), rc);
458 + QETH_DBF_MESSAGE(2, "z/VM MAC Service failed on device %x: %#x\n",
459 + CARD_DEVID(card), rc);
460 460 QETH_DBF_TEXT_(SETUP, 2, "err%04x", rc);
461 461 /* fall back to alternative mechanism: */
462 462 }
··· 468 468 rc = qeth_setadpparms_change_macaddr(card);
469 469 if (!rc)
470 470 goto out;
471 - QETH_DBF_MESSAGE(2, "READ_MAC Assist failed on device %s: x%x\n",
472 - CARD_BUS_ID(card), rc);
471 + QETH_DBF_MESSAGE(2, "READ_MAC Assist failed on device %x: %#x\n",
472 + CARD_DEVID(card), rc);
473 473 QETH_DBF_TEXT_(SETUP, 2, "1err%04x", rc);
474 474 /* fall back once more: */
475 475 }
··· 826 826
827 827 if (cgdev->state == CCWGROUP_ONLINE)
828 828 qeth_l2_set_offline(cgdev);
829 - unregister_netdev(card->dev);
829 + if (qeth_netdev_is_registered(card->dev))
830 + unregister_netdev(card->dev);
830 831 }
831 832
832 833 static const struct ethtool_ops qeth_l2_ethtool_ops = {
··· 863 862 .ndo_set_features = qeth_set_features
864 863 };
865 864
866 - static int qeth_l2_setup_netdev(struct qeth_card *card)
865 + static int qeth_l2_setup_netdev(struct qeth_card *card, bool carrier_ok)
867 866 {
868 867 int rc;
869 868
870 - if (card->dev->netdev_ops)
869 + if (qeth_netdev_is_registered(card->dev))
871 870 return 0;
872 871
873 872 card->dev->priv_flags |= IFF_UNICAST_FLT;
··· 920 919 qeth_l2_request_initial_mac(card);
921 920 netif_napi_add(card->dev, &card->napi, qeth_poll, QETH_NAPI_WEIGHT);
922 921 rc = register_netdev(card->dev);
922 + if (!rc && carrier_ok)
923 + netif_carrier_on(card->dev);
924 +
923 925 if (rc)
924 926 card->dev->netdev_ops = NULL;
925 927 return rc;
··· 953 949 struct qeth_card *card = dev_get_drvdata(&gdev->dev);
954 950 int rc = 0;
955 951 enum qeth_card_states recover_flag;
952 + bool carrier_ok;
956 953
957 954 mutex_lock(&card->discipline_mutex);
958 955 mutex_lock(&card->conf_mutex);
··· 961 956 QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *));
962 957
963 958 recover_flag = card->state;
964 - rc = qeth_core_hardsetup_card(card);
959 + rc = qeth_core_hardsetup_card(card, &carrier_ok);
965 960 if (rc) {
966 961 QETH_DBF_TEXT_(SETUP, 2, "2err%04x", rc);
967 962 rc = -ENODEV;
··· 972 967 dev_info(&card->gdev->dev,
973 968 "The device represents a Bridge Capable Port\n");
974 969
975 - rc = qeth_l2_setup_netdev(card);
970 + rc = qeth_l2_setup_netdev(card, carrier_ok);
976 971 if (rc)
977 972 goto out_remove;
978 973
+70 -137
drivers/s390/net/qeth_l3_main.c
··· 278 278
279 279 QETH_CARD_TEXT(card, 4, "clearip");
280 280
281 - if (recover && card->options.sniffer)
282 - return;
283 -
284 281 spin_lock_bh(&card->ip_lock);
285 282
286 283 hash_for_each_safe(card->ip_htable, i, tmp, addr, hnode) {
··· 491 494 QETH_PROT_IPV4);
492 495 if (rc) {
493 496 card->options.route4.type = NO_ROUTER;
494 - QETH_DBF_MESSAGE(2, "Error (0x%04x) while setting routing type"
495 - " on %s. Type set to 'no router'.\n", rc,
496 - QETH_CARD_IFNAME(card));
497 + QETH_DBF_MESSAGE(2, "Error (%#06x) while setting routing type on device %x. Type set to 'no router'.\n",
498 + rc, CARD_DEVID(card));
497 499 }
498 500 return rc;
499 501 }
··· 514 518 QETH_PROT_IPV6);
515 519 if (rc) {
516 520 card->options.route6.type = NO_ROUTER;
517 - QETH_DBF_MESSAGE(2, "Error (0x%04x) while setting routing type"
518 - " on %s. Type set to 'no router'.\n", rc,
519 - QETH_CARD_IFNAME(card));
521 + QETH_DBF_MESSAGE(2, "Error (%#06x) while setting routing type on device %x. Type set to 'no router'.\n",
522 + rc, CARD_DEVID(card));
520 523 }
521 524 return rc;
522 525 }
··· 658 663 int rc = 0;
659 664 int cnt = 3;
660 665
666 + if (card->options.sniffer)
667 + return 0;
661 668
662 669 if (addr->proto == QETH_PROT_IPV4) {
663 670 QETH_CARD_TEXT(card, 2, "setaddr4");
··· 693 696 struct qeth_ipaddr *addr)
694 697 {
695 698 int rc = 0;
699 +
700 + if (card->options.sniffer)
701 + return 0;
696 702
697 703 if (addr->proto == QETH_PROT_IPV4) {
698 704 QETH_CARD_TEXT(card, 2, "deladdr4");
··· 1070 1070 }
1071 1071 break;
1072 1072 default:
1073 - QETH_DBF_MESSAGE(2, "Unknown sniffer action (0x%04x) on %s\n",
1074 - cmd->data.diagass.action, QETH_CARD_IFNAME(card));
1073 + QETH_DBF_MESSAGE(2, "Unknown sniffer action (%#06x) on device %x\n",
1074 + cmd->data.diagass.action, CARD_DEVID(card));
1075 1075 }
1076 1076
1077 1077 return 0;
··· 1517 1517 qeth_l3_handle_promisc_mode(card);
1518 1518 }
1519 1519
1520 - static const char *qeth_l3_arp_get_error_cause(int *rc)
1520 + static int qeth_l3_arp_makerc(int rc)
1521 1521 {
1522 - switch (*rc) {
1523 - case QETH_IPA_ARP_RC_FAILED:
1524 - *rc = -EIO;
1525 - return "operation failed";
1522 + switch (rc) {
1523 + case IPA_RC_SUCCESS:
1524 + return 0;
1526 1525 case QETH_IPA_ARP_RC_NOTSUPP:
1527 - *rc = -EOPNOTSUPP;
1528 - return "operation not supported";
1529 - case QETH_IPA_ARP_RC_OUT_OF_RANGE:
1530 - *rc = -EINVAL;
1531 - return "argument out of range";
1532 1526 case QETH_IPA_ARP_RC_Q_NOTSUPP:
1533 - *rc = -EOPNOTSUPP;
1534 - return "query operation not supported";
1527 + return -EOPNOTSUPP;
1528 + case QETH_IPA_ARP_RC_OUT_OF_RANGE:
1529 + return -EINVAL;
1535 1530 case QETH_IPA_ARP_RC_Q_NO_DATA:
1536 - *rc = -ENOENT;
1537 - return "no query data available";
1531 + return -ENOENT;
1538 1532 default:
1539 - return "unknown error";
1533 + return -EIO;
1540 1534 }
1541 1535 }
1542 1536
1543 1537 static int qeth_l3_arp_set_no_entries(struct qeth_card *card, int no_entries)
1544 1538 {
1545 - int tmp;
1546 1539 int rc;
1547 1540
1548 1541 QETH_CARD_TEXT(card, 3, "arpstnoe");
··· 1553 1560 rc = qeth_send_simple_setassparms(card, IPA_ARP_PROCESSING,
1554 1561 IPA_CMD_ASS_ARP_SET_NO_ENTRIES,
1555 1562 no_entries);
1556 - if (rc) {
1557 - tmp = rc;
1558 - QETH_DBF_MESSAGE(2, "Could not set number of ARP entries on "
1559 - "%s: %s (0x%x/%d)\n", QETH_CARD_IFNAME(card),
1560 - qeth_l3_arp_get_error_cause(&rc), tmp, tmp);
1561 - }
1562 - return rc;
1563 + if (rc)
1564 + QETH_DBF_MESSAGE(2, "Could not set number of ARP entries on device %x: %#x\n",
1565 + CARD_DEVID(card), rc);
1566 + return qeth_l3_arp_makerc(rc);
1563 1567 }
1564 1568
1565 1569 static __u32 get_arp_entry_size(struct qeth_card *card,
··· 1706 1716 {
1707 1717 struct qeth_cmd_buffer *iob;
1708 1718 struct qeth_ipa_cmd *cmd;
1709 - int tmp;
1710 1719 int rc;
1711 1720
1712 1721 QETH_CARD_TEXT_(card, 3, "qarpipv%i", prot);
··· 1724 1735 rc = qeth_l3_send_ipa_arp_cmd(card, iob,
1725 1736 QETH_SETASS_BASE_LEN+QETH_ARP_CMD_LEN,
1726 1737 qeth_l3_arp_query_cb, (void *)qinfo);
1727 - if (rc) {
1728 - tmp = rc;
1729 - QETH_DBF_MESSAGE(2,
1730 - "Error while querying ARP cache on %s: %s "
1731 - "(0x%x/%d)\n", QETH_CARD_IFNAME(card),
1732 - qeth_l3_arp_get_error_cause(&rc), tmp, tmp);
1733 - }
1734 -
1735 - return rc;
1738 + if (rc)
1739 + QETH_DBF_MESSAGE(2, "Error while querying ARP cache on device %x: %#x\n",
1740 + CARD_DEVID(card), rc);
1741 + return qeth_l3_arp_makerc(rc);
1736 1742 }
1737 1743
1738 1744 static int qeth_l3_arp_query(struct qeth_card *card, char __user *udata)
··· 1777 1793 return rc;
1778 1794 }
1779 1795
1780 - static int qeth_l3_arp_add_entry(struct qeth_card *card,
1781 - struct qeth_arp_cache_entry *entry)
1796 + static int qeth_l3_arp_modify_entry(struct qeth_card *card,
1797 + struct qeth_arp_cache_entry *entry,
1798 + enum qeth_arp_process_subcmds arp_cmd)
1782 1799 {
1800 + struct qeth_arp_cache_entry *cmd_entry;
1783 1801 struct qeth_cmd_buffer *iob;
1784 - char buf[16];
1785 - int tmp;
1786 1802 int rc;
1787 1803
1788 - QETH_CARD_TEXT(card, 3, "arpadent");
1804 + if (arp_cmd == IPA_CMD_ASS_ARP_ADD_ENTRY)
1805 + QETH_CARD_TEXT(card, 3, "arpadd");
1806 + else
1807 + QETH_CARD_TEXT(card, 3, "arpdel");
1789 1808
1790 1809 /*
1791 1810 * currently GuestLAN only supports the ARP assist function
··· 1801 1814 return -EOPNOTSUPP;
1802 1815 }
1803 1816
1804 - iob = qeth_get_setassparms_cmd(card, IPA_ARP_PROCESSING,
1805 - IPA_CMD_ASS_ARP_ADD_ENTRY,
1806 - sizeof(struct qeth_arp_cache_entry),
1807 - QETH_PROT_IPV4);
1817 + iob = qeth_get_setassparms_cmd(card, IPA_ARP_PROCESSING, arp_cmd,
1818 + sizeof(*cmd_entry), QETH_PROT_IPV4);
1808 1819 if (!iob)
1809 1820 return -ENOMEM;
1810 - rc = qeth_send_setassparms(card, iob,
1811 - sizeof(struct qeth_arp_cache_entry),
1812 - (unsigned long) entry,
1813 - qeth_setassparms_cb, NULL);
1814 - if (rc) {
1815 - tmp = rc;
1816 - qeth_l3_ipaddr4_to_string((u8 *)entry->ipaddr, buf);
1817 - QETH_DBF_MESSAGE(2, "Could not add ARP entry for address %s "
1818 - "on %s: %s (0x%x/%d)\n", buf, QETH_CARD_IFNAME(card),
1819 - qeth_l3_arp_get_error_cause(&rc), tmp, tmp);
1820 - }
1821 - return rc;
1822 - }
1823 1821
1824 - static int qeth_l3_arp_remove_entry(struct qeth_card *card,
1825 - struct qeth_arp_cache_entry *entry)
1826 - {
1827 - struct qeth_cmd_buffer *iob;
1828 - char buf[16] = {0, };
1829 - int tmp;
1830 - int rc;
1822 + cmd_entry = &__ipa_cmd(iob)->data.setassparms.data.arp_entry;
1823 + ether_addr_copy(cmd_entry->macaddr, entry->macaddr);
1824 + memcpy(cmd_entry->ipaddr, entry->ipaddr, 4);
1825 + rc = qeth_send_ipa_cmd(card, iob, qeth_setassparms_cb, NULL);
1826 + if (rc)
1827 + QETH_DBF_MESSAGE(2, "Could not modify (cmd: %#x) ARP entry on device %x: %#x\n",
1828 + arp_cmd, CARD_DEVID(card), rc);
1831 1829
1832 - QETH_CARD_TEXT(card, 3, "arprment");
1833 -
1834 - /*
1835 - * currently GuestLAN only supports the ARP assist function
1836 - * IPA_CMD_ASS_ARP_QUERY_INFO, but not IPA_CMD_ASS_ARP_REMOVE_ENTRY;
1837 - * thus we say EOPNOTSUPP for this ARP function
1838 - */
1839 - if (card->info.guestlan)
1840 - return -EOPNOTSUPP;
1841 - if (!qeth_is_supported(card, IPA_ARP_PROCESSING)) {
1842 - return -EOPNOTSUPP;
1843 - }
1844 - memcpy(buf, entry, 12);
1845 - iob = qeth_get_setassparms_cmd(card, IPA_ARP_PROCESSING,
1846 - IPA_CMD_ASS_ARP_REMOVE_ENTRY,
1847 - 12,
1848 - QETH_PROT_IPV4);
1849 - if (!iob)
1850 - return -ENOMEM;
1851 - rc = qeth_send_setassparms(card, iob,
1852 - 12, (unsigned long)buf,
1853 - qeth_setassparms_cb, NULL);
1854 - if (rc) {
1855 - tmp = rc;
1856 - memset(buf, 0, 16);
1857 - qeth_l3_ipaddr4_to_string((u8 *)entry->ipaddr, buf);
1858 - QETH_DBF_MESSAGE(2, "Could not delete ARP entry for address %s"
1859 - " on %s: %s (0x%x/%d)\n", buf, QETH_CARD_IFNAME(card),
1860 - qeth_l3_arp_get_error_cause(&rc), tmp, tmp);
1861 - }
1862 - return rc;
1830 + return qeth_l3_arp_makerc(rc);
1863 1831 }
1864 1832
1865 1833 static int qeth_l3_arp_flush_cache(struct qeth_card *card)
1866 1834 {
1867 1835 int rc;
1868 - int tmp;
1869 1836
1870 1837 QETH_CARD_TEXT(card, 3, "arpflush");
··· 1835 1894 }
1836 1895 rc = qeth_send_simple_setassparms(card, IPA_ARP_PROCESSING,
1837 1896 IPA_CMD_ASS_ARP_FLUSH_CACHE, 0);
1838 - if (rc) {
1839 - tmp = rc;
1840 - QETH_DBF_MESSAGE(2, "Could not flush ARP cache on %s: %s "
1841 - "(0x%x/%d)\n", QETH_CARD_IFNAME(card),
1842 - qeth_l3_arp_get_error_cause(&rc), tmp, tmp);
1843 - }
1844 - return rc;
1897 + if (rc)
1898 + QETH_DBF_MESSAGE(2, "Could not flush ARP cache on device %x: %#x\n",
1899 + CARD_DEVID(card), rc);
1900 + return qeth_l3_arp_makerc(rc);
1845 1901 }
1846 1902
1847 1903 static int qeth_l3_do_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
1848 1904 {
1849 1905 struct qeth_card *card = dev->ml_priv;
1850 1906 struct qeth_arp_cache_entry arp_entry;
1907 + enum qeth_arp_process_subcmds arp_cmd;
1851 1908 int rc = 0;
1852 1909
1853 1910 switch (cmd) {
··· 1864 1925 rc = qeth_l3_arp_query(card, rq->ifr_ifru.ifru_data);
1865 1926 break;
1866 1927 case SIOC_QETH_ARP_ADD_ENTRY:
1867 - if (!capable(CAP_NET_ADMIN)) {
1868 - rc = -EPERM;
1869 - break;
1870 - }
1871 - if (copy_from_user(&arp_entry, rq->ifr_ifru.ifru_data,
1872 - sizeof(struct qeth_arp_cache_entry)))
1873 - rc = -EFAULT;
1874 - else
1875 - rc = qeth_l3_arp_add_entry(card, &arp_entry);
1876 - break;
1877 1928 case SIOC_QETH_ARP_REMOVE_ENTRY:
1878 - if (!capable(CAP_NET_ADMIN)) {
1879 - rc = -EPERM;
1880 - break;
1881 - }
1882 - if (copy_from_user(&arp_entry, rq->ifr_ifru.ifru_data,
1883 - sizeof(struct qeth_arp_cache_entry)))
1884 - rc = -EFAULT;
1885 - else
1886 - rc = qeth_l3_arp_remove_entry(card, &arp_entry);
1887 - break;
1929 + if (!capable(CAP_NET_ADMIN))
1930 + return -EPERM;
1931 + if (copy_from_user(&arp_entry, rq->ifr_data, sizeof(arp_entry)))
1932 + return -EFAULT;
1933 +
1934 + arp_cmd = (cmd == SIOC_QETH_ARP_ADD_ENTRY) ?
1935 + IPA_CMD_ASS_ARP_ADD_ENTRY :
1936 + IPA_CMD_ASS_ARP_REMOVE_ENTRY;
1937 + return qeth_l3_arp_modify_entry(card, &arp_entry, arp_cmd);
1888 1938 case SIOC_QETH_ARP_FLUSH_CACHE:
1889 1939 if (!capable(CAP_NET_ADMIN)) {
1890 1940 rc = -EPERM;
··· 2311 2383 .ndo_neigh_setup = qeth_l3_neigh_setup,
2312 2384 };
2313 2385
2314 - static int qeth_l3_setup_netdev(struct qeth_card *card)
2386 + static int qeth_l3_setup_netdev(struct qeth_card *card, bool carrier_ok)
2315 2387 {
2316 2388 unsigned int headroom;
2317 2389 int rc;
2318 2390
2319 - if (card->dev->netdev_ops)
2391 + if (qeth_netdev_is_registered(card->dev))
2320 2392 return 0;
2321 2393
2322 2394 if (card->info.type == QETH_CARD_TYPE_OSD ||
··· 2385 2457
2386 2458 netif_napi_add(card->dev, &card->napi, qeth_poll, QETH_NAPI_WEIGHT);
2387 2459 rc = register_netdev(card->dev);
2460 + if (!rc && carrier_ok)
2461 + netif_carrier_on(card->dev);
2462 +
2388 2463 out:
2389 2464 if (rc)
2390 2465 card->dev->netdev_ops = NULL;
··· 2428 2497 if (cgdev->state == CCWGROUP_ONLINE)
2429 2498 qeth_l3_set_offline(cgdev);
2430 2499
2431 - unregister_netdev(card->dev);
2500 + if (qeth_netdev_is_registered(card->dev))
2501 + unregister_netdev(card->dev);
2432 2502 qeth_l3_clear_ip_htable(card, 0);
2433 2503 qeth_l3_clear_ipato_list(card);
2434 2504 }
··· 2439 2507 struct qeth_card *card = dev_get_drvdata(&gdev->dev);
2440 2508 int rc = 0;
2441 2509 enum qeth_card_states recover_flag;
2510 + bool carrier_ok;
2442 2511
2443 2512 mutex_lock(&card->discipline_mutex);
2444 2513 mutex_lock(&card->conf_mutex);
··· 2447 2514 QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *));
2448 2515
2449 2516 recover_flag = card->state;
2450 - rc = qeth_core_hardsetup_card(card);
2517 + rc = qeth_core_hardsetup_card(card, &carrier_ok);
2451 2518 if (rc) {
2452 2519 QETH_DBF_TEXT_(SETUP, 2, "2err%04x", rc);
2453 2520 rc = -ENODEV;
2454 2521 goto out_remove;
2455 2522 }
2456 2523
2457 - rc = qeth_l3_setup_netdev(card);
2524 + rc = qeth_l3_setup_netdev(card, carrier_ok);
2458 2525 if (rc)
2459 2526 goto out_remove;
2460 2527
+1
drivers/scsi/Kconfig
··· 578 578 config SCSI_MYRS
579 579 tristate "Mylex DAC960/DAC1100 PCI RAID Controller (SCSI Interface)"
580 580 depends on PCI
581 + depends on !CPU_BIG_ENDIAN || COMPILE_TEST
581 582 select RAID_ATTRS
582 583 help
583 584 This driver adds support for the Mylex DAC960, AcceleRAID, and
+1 -1
drivers/scsi/NCR5380.c
··· 1198 1198
1199 1199 out:
1200 1200 if (!hostdata->selecting)
1201 - return NULL;
1201 + return false;
1202 1202 hostdata->selecting = NULL;
1203 1203 return ret;
1204 1204 }
-2
drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
··· 904 904 {
905 905 struct hisi_hba *hisi_hba = dq->hisi_hba;
906 906 struct hisi_sas_slot *s, *s1, *s2 = NULL;
907 - struct list_head *dq_list;
908 907 int dlvry_queue = dq->id;
909 908 int wp;
910 909
911 - dq_list = &dq->list;
912 910 list_for_each_entry_safe(s, s1, &dq->list, delivery) {
913 911 if (!s->ready)
914 912 break;
-2
drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
··· 1670 1670 {
1671 1671 struct hisi_hba *hisi_hba = dq->hisi_hba;
1672 1672 struct hisi_sas_slot *s, *s1, *s2 = NULL;
1673 - struct list_head *dq_list;
1674 1673 int dlvry_queue = dq->id;
1675 1674 int wp;
1676 1675
1677 - dq_list = &dq->list;
1678 1676 list_for_each_entry_safe(s, s1, &dq->list, delivery) {
1679 1677 if (!s->ready)
1680 1678 break;
-2
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
··· 886 886 {
887 887 struct hisi_hba *hisi_hba = dq->hisi_hba;
888 888 struct hisi_sas_slot *s, *s1, *s2 = NULL;
889 - struct list_head *dq_list;
890 889 int dlvry_queue = dq->id;
891 890 int wp;
892 891
893 - dq_list = &dq->list;
894 892 list_for_each_entry_safe(s, s1, &dq->list, delivery) {
895 893 if (!s->ready)
896 894 break;
+2
drivers/scsi/lpfc/lpfc_debugfs.c
··· 698 698 rport = lpfc_ndlp_get_nrport(ndlp);
699 699 if (rport)
700 700 nrport = rport->remoteport;
701 + else
702 + nrport = NULL;
701 703 spin_unlock(&phba->hbalock);
702 704 if (!nrport)
703 705 continue;
+2 -1
drivers/scsi/myrb.c
··· 1049 1049 enquiry2->fw.firmware_type = '0';
1050 1050 enquiry2->fw.turn_id = 0;
1051 1051 }
1052 - sprintf(cb->fw_version, "%d.%02d-%c-%02d",
1052 + snprintf(cb->fw_version, sizeof(cb->fw_version),
1053 + "%d.%02d-%c-%02d",
1053 1054 enquiry2->fw.major_version,
1054 1055 enquiry2->fw.minor_version,
1055 1056 enquiry2->fw.firmware_type,
+8 -5
drivers/scsi/myrs.c
··· 163 163 dma_addr_t ctlr_info_addr;
164 164 union myrs_sgl *sgl;
165 165 unsigned char status;
166 - struct myrs_ctlr_info old;
166 + unsigned short ldev_present, ldev_critical, ldev_offline;
167 167
168 - memcpy(&old, cs->ctlr_info, sizeof(struct myrs_ctlr_info));
168 + ldev_present = cs->ctlr_info->ldev_present;
169 + ldev_critical = cs->ctlr_info->ldev_critical;
170 + ldev_offline = cs->ctlr_info->ldev_offline;
171 +
169 172 ctlr_info_addr = dma_map_single(&cs->pdev->dev, cs->ctlr_info,
170 173 sizeof(struct myrs_ctlr_info),
171 174 DMA_FROM_DEVICE);
··· 201 198 cs->ctlr_info->rbld_active +
202 199 cs->ctlr_info->exp_active != 0)
203 200 cs->needs_update = true;
204 - if (cs->ctlr_info->ldev_present != old.ldev_present ||
205 - cs->ctlr_info->ldev_critical != old.ldev_critical ||
206 - cs->ctlr_info->ldev_offline != old.ldev_offline)
201 + if (cs->ctlr_info->ldev_present != ldev_present ||
202 + cs->ctlr_info->ldev_critical != ldev_critical ||
203 + cs->ctlr_info->ldev_offline != ldev_offline)
207 204 shost_printk(KERN_INFO, cs->host,
208 205 "Logical drive count changes (%d/%d/%d)\n",
209 206 cs->ctlr_info->ldev_critical,
+1
drivers/scsi/qla2xxx/qla_init.c
··· 4763 4763 fcport->loop_id = FC_NO_LOOP_ID;
4764 4764 qla2x00_set_fcport_state(fcport, FCS_UNCONFIGURED);
4765 4765 fcport->supported_classes = FC_COS_UNSPECIFIED;
4766 + fcport->fp_speed = PORT_SPEED_UNKNOWN;
4766 4767
4767 4768 fcport->ct_desc.ct_sns = dma_alloc_coherent(&vha->hw->pdev->dev,
4768 4769 sizeof(struct ct_sns_pkt), &fcport->ct_desc.ct_sns_dma,
+9 -3
drivers/scsi/qla2xxx/qla_os.c
··· 67 67 MODULE_PARM_DESC(ql2xplogiabsentdevice,
68 68 "Option to enable PLOGI to devices that are not present after "
69 69 "a Fabric scan. This is needed for several broken switches. "
70 - "Default is 0 - no PLOGI. 1 - perfom PLOGI.");
70 + "Default is 0 - no PLOGI. 1 - perform PLOGI.");
71 71
72 72 int ql2xloginretrycount = 0;
73 73 module_param(ql2xloginretrycount, int, S_IRUGO);
··· 1749 1749 static void
1750 1750 __qla2x00_abort_all_cmds(struct qla_qpair *qp, int res)
1751 1751 {
1752 - int cnt;
1752 + int cnt, status;
1753 1753 unsigned long flags;
1754 1754 srb_t *sp;
1755 1755 scsi_qla_host_t *vha = qp->vha;
··· 1799 1799 if (!sp_get(sp)) {
1800 1800 spin_unlock_irqrestore
1801 1801 (qp->qp_lock_ptr, flags);
1802 - qla2xxx_eh_abort(
1802 + status = qla2xxx_eh_abort(
1803 1803 GET_CMD_SP(sp));
1804 1804 spin_lock_irqsave
1805 1805 (qp->qp_lock_ptr, flags);
1806 + /*
1807 + * Get rid of extra reference caused
1808 + * by early exit from qla2xxx_eh_abort
1809 + */
1810 + if (status == FAST_IO_FAIL)
1811 + atomic_dec(&sp->ref_count);
1806 1812 }
1807 1813 }
1808 1814 sp->done(sp, res);
+8
drivers/scsi/scsi_lib.c
··· 697 697 */
698 698 scsi_mq_uninit_cmd(cmd);
699 699
700 + /*
701 + * queue is still alive, so grab the ref for preventing it
702 + * from being cleaned up during running queue.
703 + */
704 + percpu_ref_get(&q->q_usage_counter);
705 +
700 706 __blk_mq_end_request(req, error);
701 707
702 708 if (scsi_target(sdev)->single_lun ||
··· 710 704 kblockd_schedule_work(&sdev->requeue_work);
711 705 else
712 706 blk_mq_run_hw_queues(q, true);
707 +
708 + percpu_ref_put(&q->q_usage_counter);
713 709 } else {
714 710 unsigned long flags;
715 711
+9
drivers/scsi/ufs/ufs-hisi.c
··· 20 20 #include "unipro.h"
21 21 #include "ufs-hisi.h"
22 22 #include "ufshci.h"
23 + #include "ufs_quirks.h"
23 24
24 25 static int ufs_hisi_check_hibern8(struct ufs_hba *hba)
25 26 {
··· 391 390
392 391 static void ufs_hisi_pwr_change_pre_change(struct ufs_hba *hba)
393 392 {
393 + if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME) {
394 + pr_info("ufs flash device must set VS_DebugSaveConfigTime 0x10\n");
395 + /* VS_DebugSaveConfigTime */
396 + ufshcd_dme_set(hba, UIC_ARG_MIB(0xD0A0), 0x10);
397 + /* sync length */
398 + ufshcd_dme_set(hba, UIC_ARG_MIB(0x1556), 0x48);
399 + }
400 +
394 401 /* update */
395 402 ufshcd_dme_set(hba, UIC_ARG_MIB(0x15A8), 0x1);
396 403 /* PA_TxSkip */
+6
drivers/scsi/ufs/ufs_quirks.h
··· 131 131 */
132 132 #define UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME (1 << 8)
133 133
134 + /*
135 + * Some UFS devices require VS_DebugSaveConfigTime is 0x10,
136 + * enabling this quirk ensure this.
137 + */
138 + #define UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME (1 << 9)
139 +
134 140 #endif /* UFS_QUIRKS_H_ */
+2 -7
drivers/scsi/ufs/ufshcd.c
··· 231 231 UFS_FIX(UFS_VENDOR_SKHYNIX, UFS_ANY_MODEL, UFS_DEVICE_NO_VCCQ),
232 232 UFS_FIX(UFS_VENDOR_SKHYNIX, UFS_ANY_MODEL,
233 233 UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME),
234 + UFS_FIX(UFS_VENDOR_SKHYNIX, "hB8aL1" /*H28U62301AMR*/,
235 + UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME),
234 236
235 237 END_FIX
236 238 };
··· 8101 8099 err = -ENOMEM;
8102 8100 goto out_error;
8103 8101 }
8104 -
8105 - /*
8106 - * Do not use blk-mq at this time because blk-mq does not support
8107 - * runtime pm.
8108 - */
8109 - host->use_blk_mq = false;
8110 -
8111 8102 hba = shost_priv(host);
8112 8103 hba->host = host;
8113 8104 hba->dev = dev;
-3
drivers/slimbus/qcom-ngd-ctrl.c
··· 777 777 u8 la = txn->la; 778 778 bool usr_msg = false; 779 779 780 - if (txn->mc & SLIM_MSG_CLK_PAUSE_SEQ_FLG) 781 - return -EPROTONOSUPPORT; 782 - 783 780 if (txn->mt == SLIM_MSG_MT_CORE && 784 781 (txn->mc >= SLIM_MSG_MC_BEGIN_RECONFIGURATION && 785 782 txn->mc <= SLIM_MSG_MC_RECONFIGURE_NOW))
-6
drivers/slimbus/slimbus.h
··· 61 61 #define SLIM_MSG_MC_NEXT_REMOVE_CHANNEL 0x58 62 62 #define SLIM_MSG_MC_RECONFIGURE_NOW 0x5F 63 63 64 - /* 65 - * Clock pause flag to indicate that the reconfig message 66 - * corresponds to clock pause sequence 67 - */ 68 - #define SLIM_MSG_CLK_PAUSE_SEQ_FLG (1U << 8) 69 - 70 64 /* Clock pause values per SLIMbus spec */ 71 65 #define SLIM_CLK_FAST 0 72 66 #define SLIM_CLK_CONST_PHASE 1
+2 -2
drivers/spi/spi-mt65xx.c
··· 522 522 mdata->xfer_len = min(MTK_SPI_MAX_FIFO_SIZE, len); 523 523 mtk_spi_setup_packet(master); 524 524 525 - cnt = len / 4; 525 + cnt = mdata->xfer_len / 4; 526 526 iowrite32_rep(mdata->base + SPI_TX_DATA_REG, 527 527 trans->tx_buf + mdata->num_xfered, cnt); 528 528 529 - remainder = len % 4; 529 + remainder = mdata->xfer_len % 4; 530 530 if (remainder > 0) { 531 531 reg_val = 0; 532 532 memcpy(&reg_val,
+25 -12
drivers/spi/spi-omap2-mcspi.c
··· 1540 1540 /* work with hotplug and coldplug */ 1541 1541 MODULE_ALIAS("platform:omap2_mcspi"); 1542 1542 1543 - #ifdef CONFIG_SUSPEND 1544 - static int omap2_mcspi_suspend_noirq(struct device *dev) 1543 + static int __maybe_unused omap2_mcspi_suspend(struct device *dev) 1545 1544 { 1546 - return pinctrl_pm_select_sleep_state(dev); 1545 + struct spi_master *master = dev_get_drvdata(dev); 1546 + struct omap2_mcspi *mcspi = spi_master_get_devdata(master); 1547 + int error; 1548 + 1549 + error = pinctrl_pm_select_sleep_state(dev); 1550 + if (error) 1551 + dev_warn(mcspi->dev, "%s: failed to set pins: %i\n", 1552 + __func__, error); 1553 + 1554 + error = spi_master_suspend(master); 1555 + if (error) 1556 + dev_warn(mcspi->dev, "%s: master suspend failed: %i\n", 1557 + __func__, error); 1558 + 1559 + return pm_runtime_force_suspend(dev); 1547 1560 } 1548 1561 1549 - static int omap2_mcspi_resume_noirq(struct device *dev) 1562 + static int __maybe_unused omap2_mcspi_resume(struct device *dev) 1550 1563 { 1551 1564 struct spi_master *master = dev_get_drvdata(dev); 1552 1565 struct omap2_mcspi *mcspi = spi_master_get_devdata(master); ··· 1570 1557 dev_warn(mcspi->dev, "%s: failed to set pins: %i\n", 1571 1558 __func__, error); 1572 1559 1573 - return 0; 1560 + error = spi_master_resume(master); 1561 + if (error) 1562 + dev_warn(mcspi->dev, "%s: master resume failed: %i\n", 1563 + __func__, error); 1564 + 1565 + return pm_runtime_force_resume(dev); 1574 1566 } 1575 1567 1576 - #else 1577 - #define omap2_mcspi_suspend_noirq NULL 1578 - #define omap2_mcspi_resume_noirq NULL 1579 - #endif 1580 - 1581 1568 static const struct dev_pm_ops omap2_mcspi_pm_ops = { 1582 - .suspend_noirq = omap2_mcspi_suspend_noirq, 1583 - .resume_noirq = omap2_mcspi_resume_noirq, 1569 + SET_SYSTEM_SLEEP_PM_OPS(omap2_mcspi_suspend, 1570 + omap2_mcspi_resume) 1584 1571 .runtime_resume = omap_mcspi_runtime_resume, 1585 1572 }; 1586 1573
+21 -18
drivers/staging/comedi/comedi.h
··· 1005 1005 * and INSN_DEVICE_CONFIG_GET_ROUTES. 1006 1006 */ 1007 1007 #define NI_NAMES_BASE 0x8000u 1008 + 1009 + #define _TERM_N(base, n, x) ((base) + ((x) & ((n) - 1))) 1010 + 1008 1011 /* 1009 1012 * not necessarily all allowed 64 PFIs are valid--certainly not for all devices 1010 1013 */ 1011 - #define NI_PFI(x) (NI_NAMES_BASE + ((x) & 0x3f)) 1014 + #define NI_PFI(x) _TERM_N(NI_NAMES_BASE, 64, x) 1012 1015 /* 8 trigger lines by standard, Some devices cannot talk to all eight. */ 1013 - #define TRIGGER_LINE(x) (NI_PFI(-1) + 1 + ((x) & 0x7)) 1016 + #define TRIGGER_LINE(x) _TERM_N(NI_PFI(-1) + 1, 8, x) 1014 1017 /* 4 RTSI shared MUXes to route signals to/from TRIGGER_LINES on NI hardware */ 1015 - #define NI_RTSI_BRD(x) (TRIGGER_LINE(-1) + 1 + ((x) & 0x3)) 1018 + #define NI_RTSI_BRD(x) _TERM_N(TRIGGER_LINE(-1) + 1, 4, x) 1016 1019 1017 1020 /* *** Counter/timer names : 8 counters max *** */ 1018 - #define NI_COUNTER_NAMES_BASE (NI_RTSI_BRD(-1) + 1) 1019 - #define NI_MAX_COUNTERS 7 1020 - #define NI_CtrSource(x) (NI_COUNTER_NAMES_BASE + ((x) & NI_MAX_COUNTERS)) 1021 + #define NI_MAX_COUNTERS 8 1022 + #define NI_COUNTER_NAMES_BASE (NI_RTSI_BRD(-1) + 1) 1023 + #define NI_CtrSource(x) _TERM_N(NI_COUNTER_NAMES_BASE, NI_MAX_COUNTERS, x) 1021 1024 /* Gate, Aux, A,B,Z are all treated, at times as gates */ 1022 - #define NI_GATES_NAMES_BASE (NI_CtrSource(-1) + 1) 1023 - #define NI_CtrGate(x) (NI_GATES_NAMES_BASE + ((x) & NI_MAX_COUNTERS)) 1024 - #define NI_CtrAux(x) (NI_CtrGate(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1025 - #define NI_CtrA(x) (NI_CtrAux(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1026 - #define NI_CtrB(x) (NI_CtrA(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1027 - #define NI_CtrZ(x) (NI_CtrB(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1028 - #define NI_GATES_NAMES_MAX NI_CtrZ(-1) 1029 - #define NI_CtrArmStartTrigger(x) (NI_CtrZ(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1025 + #define NI_GATES_NAMES_BASE (NI_CtrSource(-1) + 1) 1026 + #define NI_CtrGate(x) _TERM_N(NI_GATES_NAMES_BASE, NI_MAX_COUNTERS, x) 1027 + #define NI_CtrAux(x) _TERM_N(NI_CtrGate(-1) + 1, NI_MAX_COUNTERS, x) 1028 + #define NI_CtrA(x) _TERM_N(NI_CtrAux(-1) + 1, NI_MAX_COUNTERS, x) 1029 + #define NI_CtrB(x) _TERM_N(NI_CtrA(-1) + 1, NI_MAX_COUNTERS, x) 1030 + #define NI_CtrZ(x) _TERM_N(NI_CtrB(-1) + 1, NI_MAX_COUNTERS, x) 1031 + #define NI_GATES_NAMES_MAX NI_CtrZ(-1) 1032 + #define NI_CtrArmStartTrigger(x) _TERM_N(NI_CtrZ(-1) + 1, NI_MAX_COUNTERS, x) 1030 1033 #define NI_CtrInternalOutput(x) \ 1031 - (NI_CtrArmStartTrigger(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1034 + _TERM_N(NI_CtrArmStartTrigger(-1) + 1, NI_MAX_COUNTERS, x) 1032 1035 /** external pin(s) labeled conveniently as Ctr<i>Out. */ 1033 - #define NI_CtrOut(x) (NI_CtrInternalOutput(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1036 + #define NI_CtrOut(x) _TERM_N(NI_CtrInternalOutput(-1) + 1, NI_MAX_COUNTERS, x) 1034 1037 /** For Buffered sampling of ctr -- x series capability. */ 1035 - #define NI_CtrSampleClock(x) (NI_CtrOut(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1038 + #define NI_CtrSampleClock(x) _TERM_N(NI_CtrOut(-1) + 1, NI_MAX_COUNTERS, x) 1036 1039 #define NI_COUNTER_NAMES_MAX NI_CtrSampleClock(-1) 1037 1040 1038 1041 enum ni_common_signal_names { 1039 1042 /* PXI_Star: this is a non-NI-specific signal */
+2 -1
drivers/staging/comedi/drivers/ni_mio_common.c
··· 2843 2843 return ni_ao_arm(dev, s); 2844 2844 case INSN_CONFIG_GET_CMD_TIMING_CONSTRAINTS: 2845 2845 /* we don't care about actual channels */ 2846 - data[1] = board->ao_speed; 2846 + /* data[3] : chanlist_len */ 2847 + data[1] = board->ao_speed * data[3]; 2847 2848 data[2] = 0; 2848 2849 return 0; 2849 2850 default:
+1
drivers/staging/media/davinci_vpfe/dm365_ipipeif.c
··· 307 307 ipipeif_write(val, ipipeif_base_addr, IPIPEIF_CFG2); 308 308 break; 309 309 } 310 + /* fall through */ 310 311 311 312 case IPIPEIF_SDRAM_YUV: 312 313 /* Set clock divider */
+12 -12
drivers/staging/media/sunxi/cedrus/cedrus.c
··· 108 108 unsigned int count; 109 109 unsigned int i; 110 110 111 - count = vb2_request_buffer_cnt(req); 112 - if (!count) { 113 - v4l2_info(&ctx->dev->v4l2_dev, 114 - "No buffer was provided with the request\n"); 115 - return -ENOENT; 116 - } else if (count > 1) { 117 - v4l2_info(&ctx->dev->v4l2_dev, 118 - "More than one buffer was provided with the request\n"); 119 - return -EINVAL; 120 - } 121 - 122 111 list_for_each_entry(obj, &req->objects, list) { 123 112 struct vb2_buffer *vb; 124 113 ··· 121 132 122 133 if (!ctx) 123 134 return -ENOENT; 135 + 136 + count = vb2_request_buffer_cnt(req); 137 + if (!count) { 138 + v4l2_info(&ctx->dev->v4l2_dev, 139 + "No buffer was provided with the request\n"); 140 + return -ENOENT; 141 + } else if (count > 1) { 142 + v4l2_info(&ctx->dev->v4l2_dev, 143 + "More than one buffer was provided with the request\n"); 144 + return -EINVAL; 145 + } 124 146 125 147 parent_hdl = &ctx->hdl; 126 148 ··· 253 253 254 254 static const struct media_device_ops cedrus_m2m_media_ops = { 255 255 .req_validate = cedrus_request_validate, 256 - .req_queue = vb2_m2m_request_queue, 256 + .req_queue = v4l2_m2m_request_queue, 257 257 }; 258 258 259 259 static int cedrus_probe(struct platform_device *pdev)
+1 -1
drivers/staging/most/core.c
··· 351 351 352 352 for (i = 0; i < ARRAY_SIZE(ch_data_type); i++) { 353 353 if (c->cfg.data_type & ch_data_type[i].most_ch_data_type) 354 - return snprintf(buf, PAGE_SIZE, ch_data_type[i].name); 354 + return snprintf(buf, PAGE_SIZE, "%s", ch_data_type[i].name); 355 355 } 356 356 return snprintf(buf, PAGE_SIZE, "unconfigured\n"); 357 357 }
+2 -1
drivers/staging/mt7621-dma/mtk-hsdma.c
··· 335 335 /* tx desc */ 336 336 src = sg->src_addr; 337 337 for (i = 0; i < chan->desc->num_sgs; i++) { 338 + tx_desc = &chan->tx_ring[chan->tx_idx]; 339 + 338 340 if (len > HSDMA_MAX_PLEN) 339 341 tlen = HSDMA_MAX_PLEN; 340 342 else ··· 346 344 tx_desc->addr1 = src; 347 345 tx_desc->flags |= HSDMA_DESC_PLEN1(tlen); 348 346 } else { 349 - tx_desc = &chan->tx_ring[chan->tx_idx]; 350 347 tx_desc->addr0 = src; 351 348 tx_desc->flags = HSDMA_DESC_PLEN0(tlen); 352 349
+1 -1
drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c
··· 82 82 struct property *prop; 83 83 const char *function_name, *group_name; 84 84 int ret; 85 - int ngroups; 85 + int ngroups = 0; 86 86 unsigned int reserved_maps = 0; 87 87 88 88 for_each_node_with_property(np_config, "group")
+2 -2
drivers/staging/rtl8723bs/hal/rtl8723bs_recv.c
··· 109 109 rx_bssid = get_hdr_bssid(wlanhdr); 110 110 pkt_info.bssid_match = ((!IsFrameTypeCtrl(wlanhdr)) && 111 111 !pattrib->icv_err && !pattrib->crc_err && 112 - !ether_addr_equal(rx_bssid, my_bssid)); 112 + ether_addr_equal(rx_bssid, my_bssid)); 113 113 114 114 rx_ra = get_ra(wlanhdr); 115 115 my_hwaddr = myid(&padapter->eeprompriv); 116 116 pkt_info.to_self = pkt_info.bssid_match && 117 - !ether_addr_equal(rx_ra, my_hwaddr); 117 + ether_addr_equal(rx_ra, my_hwaddr); 118 118 119 119 120 120 pkt_info.is_beacon = pkt_info.bssid_match &&
+1 -1
drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
··· 1277 1277 1278 1278 sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_PACKETS); 1279 1279 sinfo->tx_packets = psta->sta_stats.tx_pkts; 1280 - 1280 + sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_FAILED); 1281 1281 } 1282 1282 1283 1283 /* for Ad-Hoc/AP mode */
+1 -1
drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
··· 2289 2289 exit: 2290 2290 kfree(ptmp); 2291 2291 2292 - return 0; 2292 + return ret; 2293 2293 } 2294 2294 2295 2295 static int rtw_wx_write32(struct net_device *dev,
+6 -1
drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
··· 1731 1731 struct vchiq_await_completion32 args32; 1732 1732 struct vchiq_completion_data32 completion32; 1733 1733 unsigned int *msgbufcount32; 1734 + unsigned int msgbufcount_native; 1734 1735 compat_uptr_t msgbuf32; 1735 1736 void *msgbuf; 1736 1737 void **msgbufptr; ··· 1843 1842 sizeof(completion32))) 1844 1843 return -EFAULT; 1845 1844 1846 - args32.msgbufcount--; 1845 + if (get_user(msgbufcount_native, &args->msgbufcount)) 1846 + return -EFAULT; 1847 + 1848 + if (!msgbufcount_native) 1849 + args32.msgbufcount--; 1847 1850 1848 1851 msgbufcount32 = 1849 1852 &((struct vchiq_await_completion32 __user *)arg)->msgbufcount;
+2 -2
drivers/target/target_core_transport.c
··· 1778 1778 void transport_generic_request_failure(struct se_cmd *cmd, 1779 1779 sense_reason_t sense_reason) 1780 1780 { 1781 - int ret = 0; 1781 + int ret = 0, post_ret; 1782 1782 1783 1783 pr_debug("-----[ Storage Engine Exception; sense_reason %d\n", 1784 1784 sense_reason); ··· 1790 1790 transport_complete_task_attr(cmd); 1791 1791 1792 1792 if (cmd->transport_complete_callback) 1793 - cmd->transport_complete_callback(cmd, false, NULL); 1793 + cmd->transport_complete_callback(cmd, false, &post_ret); 1794 1794 1795 1795 if (transport_check_aborted_status(cmd, 1)) 1796 1796 return;
+38 -2
drivers/thunderbolt/switch.c
··· 863 863 } 864 864 static DEVICE_ATTR(key, 0600, key_show, key_store); 865 865 866 + static void nvm_authenticate_start(struct tb_switch *sw) 867 + { 868 + struct pci_dev *root_port; 869 + 870 + /* 871 + * During host router NVM upgrade we should not allow root port to 872 + * go into D3cold because some root ports cannot trigger PME 873 + * itself. To be on the safe side keep the root port in D0 during 874 + * the whole upgrade process. 875 + */ 876 + root_port = pci_find_pcie_root_port(sw->tb->nhi->pdev); 877 + if (root_port) 878 + pm_runtime_get_noresume(&root_port->dev); 879 + } 880 + 881 + static void nvm_authenticate_complete(struct tb_switch *sw) 882 + { 883 + struct pci_dev *root_port; 884 + 885 + root_port = pci_find_pcie_root_port(sw->tb->nhi->pdev); 886 + if (root_port) 887 + pm_runtime_put(&root_port->dev); 888 + } 889 + 866 890 static ssize_t nvm_authenticate_show(struct device *dev, 867 891 struct device_attribute *attr, char *buf) 868 892 { ··· 936 912 937 913 sw->nvm->authenticating = true; 938 914 939 - if (!tb_route(sw)) 915 + if (!tb_route(sw)) { 916 + /* 917 + * Keep root port from suspending as long as the 918 + * NVM upgrade process is running. 919 + */ 920 + nvm_authenticate_start(sw); 940 921 ret = nvm_authenticate_host(sw); 941 - else 922 + if (ret) 923 + nvm_authenticate_complete(sw); 924 + } else { 942 925 ret = nvm_authenticate_device(sw); 926 + } 943 927 pm_runtime_mark_last_busy(&sw->dev); 944 928 pm_runtime_put_autosuspend(&sw->dev); 945 929 } ··· 1365 1333 ret = dma_port_flash_update_auth_status(sw->dma_port, &status); 1366 1334 if (ret <= 0) 1367 1335 return ret; 1336 + 1337 + /* Now we can allow root port to suspend again */ 1338 + if (!tb_route(sw)) 1339 + nvm_authenticate_complete(sw); 1368 1340 1369 1341 if (status) { 1370 1342 tb_sw_info(sw, "switch flash authentication failed\n");
+4 -4
drivers/tty/serial/sh-sci.c
··· 1614 1614 hrtimer_init(&s->rx_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 1615 1615 s->rx_timer.function = rx_timer_fn; 1616 1616 1617 + s->chan_rx_saved = s->chan_rx = chan; 1618 + 1617 1619 if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) 1618 1620 sci_submit_rx(s); 1619 - 1620 - s->chan_rx_saved = s->chan_rx = chan; 1621 1621 } 1622 1622 } 1623 1623 ··· 3102 3102 static int sci_remove(struct platform_device *dev) 3103 3103 { 3104 3104 struct sci_port *port = platform_get_drvdata(dev); 3105 + unsigned int type = port->port.type; /* uart_remove_... clears it */ 3105 3106 3106 3107 sci_ports_in_use &= ~BIT(port->port.line); 3107 3108 uart_remove_one_port(&sci_uart_driver, &port->port); ··· 3113 3112 sysfs_remove_file(&dev->dev.kobj, 3114 3113 &dev_attr_rx_fifo_trigger.attr); 3115 3114 } 3116 - if (port->port.type == PORT_SCIFA || port->port.type == PORT_SCIFB || 3117 - port->port.type == PORT_HSCIF) { 3115 + if (type == PORT_SCIFA || type == PORT_SCIFB || type == PORT_HSCIF) { 3118 3116 sysfs_remove_file(&dev->dev.kobj, 3119 3117 &dev_attr_rx_fifo_timeout.attr); 3120 3118 }
+2 -2
drivers/tty/tty_baudrate.c
··· 77 77 else 78 78 cbaud += 15; 79 79 } 80 - return baud_table[cbaud]; 80 + return cbaud >= n_baud_table ? 0 : baud_table[cbaud]; 81 81 } 82 82 EXPORT_SYMBOL(tty_termios_baud_rate); 83 83 ··· 113 113 else 114 114 cbaud += 15; 115 115 } 116 - return baud_table[cbaud]; 116 + return cbaud >= n_baud_table ? 0 : baud_table[cbaud]; 117 117 #else /* IBSHIFT */ 118 118 return tty_termios_baud_rate(termios); 119 119 #endif /* IBSHIFT */
+1 -1
drivers/tty/vt/vt.c
··· 1548 1548 scr_memsetw(start + offset, vc->vc_video_erase_char, 2 * count); 1549 1549 vc->vc_need_wrap = 0; 1550 1550 if (con_should_update(vc)) 1551 - do_update_region(vc, (unsigned long) start, count); 1551 + do_update_region(vc, (unsigned long)(start + offset), count); 1552 1552 } 1553 1553 1554 1554 static void csi_X(struct vc_data *vc, int vpar) /* erase the following vpar positions */
+5 -2
drivers/uio/uio.c
··· 961 961 if (ret) 962 962 goto err_uio_dev_add_attributes; 963 963 964 + info->uio_dev = idev; 965 + 964 966 if (info->irq && (info->irq != UIO_IRQ_CUSTOM)) { 965 967 /* 966 968 * Note that we deliberately don't use devm_request_irq ··· 974 972 */ 975 973 ret = request_irq(info->irq, uio_interrupt, 976 974 info->irq_flags, info->name, idev); 977 - if (ret) 975 + if (ret) { 976 + info->uio_dev = NULL; 978 977 goto err_request_irq; 978 + } 979 979 } 980 980 981 - info->uio_dev = idev; 982 981 return 0; 983 982 984 983 err_request_irq:
+3
drivers/usb/class/cdc-acm.c
··· 1696 1696 { USB_DEVICE(0x0572, 0x1328), /* Shiro / Aztech USB MODEM UM-3100 */ 1697 1697 .driver_info = NO_UNION_NORMAL, /* has no union descriptor */ 1698 1698 }, 1699 + { USB_DEVICE(0x0572, 0x1349), /* Hiro (Conexant) USB MODEM H50228 */ 1700 + .driver_info = NO_UNION_NORMAL, /* has no union descriptor */ 1701 + }, 1699 1702 { USB_DEVICE(0x20df, 0x0001), /* Simtec Electronics Entropy Key */ 1700 1703 .driver_info = QUIRK_CONTROL_LINE_STATE, }, 1701 1704 { USB_DEVICE(0x2184, 0x001c) }, /* GW Instek AFG-2225 */
+14 -4
drivers/usb/core/hub.c
··· 2794 2794 int i, status; 2795 2795 u16 portchange, portstatus; 2796 2796 struct usb_port *port_dev = hub->ports[port1 - 1]; 2797 + int reset_recovery_time; 2797 2798 2798 2799 if (!hub_is_superspeed(hub->hdev)) { 2799 2800 if (warm) { ··· 2850 2849 USB_PORT_FEAT_C_BH_PORT_RESET); 2851 2850 usb_clear_port_feature(hub->hdev, port1, 2852 2851 USB_PORT_FEAT_C_PORT_LINK_STATE); 2853 - usb_clear_port_feature(hub->hdev, port1, 2852 + 2853 + if (udev) 2854 + usb_clear_port_feature(hub->hdev, port1, 2854 2855 USB_PORT_FEAT_C_CONNECTION); 2855 2856 2856 2857 /* ··· 2888 2885 2889 2886 done: 2890 2887 if (status == 0) { 2891 - /* TRSTRCY = 10 ms; plus some extra */ 2892 2888 if (port_dev->quirks & USB_PORT_QUIRK_FAST_ENUM) 2893 2889 usleep_range(10000, 12000); 2894 - else 2895 - msleep(10 + 40); 2890 + else { 2891 + /* TRSTRCY = 10 ms; plus some extra */ 2892 + reset_recovery_time = 10 + 40; 2893 + 2894 + /* Hub needs extra delay after resetting its port. */ 2895 + if (hub->hdev->quirks & USB_QUIRK_HUB_SLOW_RESET) 2896 + reset_recovery_time += 100; 2897 + 2898 + msleep(reset_recovery_time); 2899 + } 2896 2900 2897 2901 if (udev) { 2898 2902 struct usb_hcd *hcd = bus_to_hcd(udev->bus);
+17
drivers/usb/core/quirks.c
··· 128 128 case 'n': 129 129 flags |= USB_QUIRK_DELAY_CTRL_MSG; 130 130 break; 131 + case 'o': 132 + flags |= USB_QUIRK_HUB_SLOW_RESET; 133 + break; 131 134 /* Ignore unrecognized flag characters */ 132 135 } 133 136 } ··· 208 205 209 206 /* Microsoft LifeCam-VX700 v2.0 */ 210 207 { USB_DEVICE(0x045e, 0x0770), .driver_info = USB_QUIRK_RESET_RESUME }, 208 + 209 + /* Cherry Stream G230 2.0 (G85-231) and 3.0 (G85-232) */ 210 + { USB_DEVICE(0x046a, 0x0023), .driver_info = USB_QUIRK_RESET_RESUME }, 211 211 212 212 /* Logitech HD Pro Webcams C920, C920-C, C925e and C930e */ 213 213 { USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT }, ··· 386 380 { USB_DEVICE(0x1a0a, 0x0200), .driver_info = 387 381 USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL }, 388 382 383 + /* Terminus Technology Inc. Hub */ 384 + { USB_DEVICE(0x1a40, 0x0101), .driver_info = USB_QUIRK_HUB_SLOW_RESET }, 385 + 389 386 /* Corsair K70 RGB */ 390 387 { USB_DEVICE(0x1b1c, 0x1b13), .driver_info = USB_QUIRK_DELAY_INIT }, 391 388 ··· 399 390 /* Corsair Strafe RGB */ 400 391 { USB_DEVICE(0x1b1c, 0x1b20), .driver_info = USB_QUIRK_DELAY_INIT | 401 392 USB_QUIRK_DELAY_CTRL_MSG }, 393 + 394 + /* Corsair K70 LUX RGB */ 395 + { USB_DEVICE(0x1b1c, 0x1b33), .driver_info = USB_QUIRK_DELAY_INIT }, 402 396 403 397 /* Corsair K70 LUX */ 404 398 { USB_DEVICE(0x1b1c, 0x1b36), .driver_info = USB_QUIRK_DELAY_INIT }, ··· 422 410 /* Hauppauge HVR-950q */ 423 411 { USB_DEVICE(0x2040, 0x7200), .driver_info = 424 412 USB_QUIRK_CONFIG_INTF_STRINGS }, 413 + 414 + /* Raydium Touchscreen */ 415 + { USB_DEVICE(0x2386, 0x3114), .driver_info = USB_QUIRK_NO_LPM }, 416 + 417 + { USB_DEVICE(0x2386, 0x3119), .driver_info = USB_QUIRK_NO_LPM }, 425 418 426 419 /* DJI CineSSD */ 427 420 { USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
+1
drivers/usb/dwc2/pci.c
··· 120 120 dwc2 = platform_device_alloc("dwc2", PLATFORM_DEVID_AUTO); 121 121 if (!dwc2) { 122 122 dev_err(dev, "couldn't allocate dwc2 device\n"); 123 + ret = -ENOMEM; 123 124 goto err; 124 125 } 125 126
+1
drivers/usb/dwc3/core.c
··· 1499 1499 1500 1500 err5: 1501 1501 dwc3_event_buffers_cleanup(dwc); 1502 + dwc3_ulpi_exit(dwc); 1502 1503 1503 1504 err4: 1504 1505 dwc3_free_scratch_buffers(dwc);
+3 -1
drivers/usb/dwc3/dwc3-pci.c
··· 283 283 static void dwc3_pci_remove(struct pci_dev *pci) 284 284 { 285 285 struct dwc3_pci *dwc = pci_get_drvdata(pci); 286 + struct pci_dev *pdev = dwc->pci; 286 287 287 - gpiod_remove_lookup_table(&platform_bytcr_gpios); 288 + if (pdev->device == PCI_DEVICE_ID_INTEL_BYT) 289 + gpiod_remove_lookup_table(&platform_bytcr_gpios); 288 290 #ifdef CONFIG_PM 289 291 cancel_work_sync(&dwc->wakeup_work); 290 292 #endif
+4 -9
drivers/usb/dwc3/gadget.c
··· 1081 1081 /* Now prepare one extra TRB to align transfer size */ 1082 1082 trb = &dep->trb_pool[dep->trb_enqueue]; 1083 1083 __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, 1084 - maxp - rem, false, 0, 1084 + maxp - rem, false, 1, 1085 1085 req->request.stream_id, 1086 1086 req->request.short_not_ok, 1087 1087 req->request.no_interrupt); ··· 1125 1125 /* Now prepare one extra TRB to align transfer size */ 1126 1126 trb = &dep->trb_pool[dep->trb_enqueue]; 1127 1127 __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, maxp - rem, 1128 - false, 0, req->request.stream_id, 1128 + false, 1, req->request.stream_id, 1129 1129 req->request.short_not_ok, 1130 1130 req->request.no_interrupt); 1131 1131 } else if (req->request.zero && req->request.length && ··· 1141 1141 /* Now prepare one extra TRB to handle ZLP */ 1142 1142 trb = &dep->trb_pool[dep->trb_enqueue]; 1143 1143 __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, 0, 1144 - false, 0, req->request.stream_id, 1144 + false, 1, req->request.stream_id, 1145 1145 req->request.short_not_ok, 1146 1146 req->request.no_interrupt); 1147 1147 } else { ··· 1470 1470 unsigned transfer_in_flight; 1471 1471 unsigned started; 1472 1472 1473 - if (dep->flags & DWC3_EP_STALL) 1474 - return 0; 1475 - 1476 1473 if (dep->number > 1) 1477 1474 trb = dwc3_ep_prev_trb(dep, dep->trb_enqueue); 1478 1475 else ··· 1491 1494 else 1492 1495 dep->flags |= DWC3_EP_STALL; 1493 1496 } else { 1494 - if (!(dep->flags & DWC3_EP_STALL)) 1495 - return 0; 1496 1497 1497 1498 ret = dwc3_send_clear_stall_ep_cmd(dep); 1498 1499 if (ret) ··· 2254 2259 * with one TRB pending in the ring. We need to manually clear HWO bit 2255 2260 * from that TRB. 2256 2261 */ 2257 - if ((req->zero || req->unaligned) && (trb->ctrl & DWC3_TRB_CTRL_HWO)) { 2262 + if ((req->zero || req->unaligned) && !(trb->ctrl & DWC3_TRB_CTRL_CHN)) { 2258 2263 trb->ctrl &= ~DWC3_TRB_CTRL_HWO; 2259 2264 return 1; 2260 2265 }
+8 -18
drivers/usb/gadget/function/f_fs.c
··· 215 215 216 216 struct mm_struct *mm; 217 217 struct work_struct work; 218 - struct work_struct cancellation_work; 219 218 220 219 struct usb_ep *ep; 221 220 struct usb_request *req; ··· 1072 1073 return 0; 1073 1074 } 1074 1075 1075 - static void ffs_aio_cancel_worker(struct work_struct *work) 1076 - { 1077 - struct ffs_io_data *io_data = container_of(work, struct ffs_io_data, 1078 - cancellation_work); 1079 - 1080 - ENTER(); 1081 - 1082 - usb_ep_dequeue(io_data->ep, io_data->req); 1083 - } 1084 - 1085 1076 static int ffs_aio_cancel(struct kiocb *kiocb) 1086 1077 { 1087 1078 struct ffs_io_data *io_data = kiocb->private; 1088 - struct ffs_data *ffs = io_data->ffs; 1079 + struct ffs_epfile *epfile = kiocb->ki_filp->private_data; 1089 1080 int value; 1090 1081 1091 1082 ENTER(); 1092 1083 1093 - if (likely(io_data && io_data->ep && io_data->req)) { 1094 - INIT_WORK(&io_data->cancellation_work, ffs_aio_cancel_worker); 1095 - queue_work(ffs->io_completion_wq, &io_data->cancellation_work); 1096 - value = -EINPROGRESS; 1097 - } else { 1084 + spin_lock_irq(&epfile->ffs->eps_lock); 1085 + 1086 + if (likely(io_data && io_data->ep && io_data->req)) 1087 + value = usb_ep_dequeue(io_data->ep, io_data->req); 1088 + else 1098 1089 value = -EINVAL; 1099 - } 1090 + 1091 + spin_unlock_irq(&epfile->ffs->eps_lock); 1100 1092 1101 1093 return value; 1102 1094 }
+6 -5
drivers/usb/gadget/function/u_ether.c
··· 401 401 static void rx_fill(struct eth_dev *dev, gfp_t gfp_flags) 402 402 { 403 403 struct usb_request *req; 404 - struct usb_request *tmp; 405 404 unsigned long flags; 406 405 407 406 /* fill unused rxq slots with some skb */ 408 407 spin_lock_irqsave(&dev->req_lock, flags); 409 - list_for_each_entry_safe(req, tmp, &dev->rx_reqs, list) { 408 + while (!list_empty(&dev->rx_reqs)) { 409 + req = list_first_entry(&dev->rx_reqs, struct usb_request, list); 410 410 list_del_init(&req->list); 411 411 spin_unlock_irqrestore(&dev->req_lock, flags); 412 412 ··· 1125 1125 { 1126 1126 struct eth_dev *dev = link->ioport; 1127 1127 struct usb_request *req; 1128 - struct usb_request *tmp; 1129 1128 1130 1129 WARN_ON(!dev); 1131 1130 if (!dev) ··· 1141 1142 */ 1142 1143 usb_ep_disable(link->in_ep); 1143 1144 spin_lock(&dev->req_lock); 1144 - list_for_each_entry_safe(req, tmp, &dev->tx_reqs, list) { 1145 + while (!list_empty(&dev->tx_reqs)) { 1146 + req = list_first_entry(&dev->tx_reqs, struct usb_request, list); 1145 1147 list_del(&req->list); 1146 1148 1147 1149 spin_unlock(&dev->req_lock); ··· 1154 1154 1155 1155 usb_ep_disable(link->out_ep); 1156 1156 spin_lock(&dev->req_lock); 1157 - list_for_each_entry_safe(req, tmp, &dev->rx_reqs, list) { 1157 + while (!list_empty(&dev->rx_reqs)) { 1158 + req = list_first_entry(&dev->rx_reqs, struct usb_request, list); 1158 1159 list_del(&req->list); 1159 1160 1160 1161 spin_unlock(&dev->req_lock);
+31 -57
drivers/usb/gadget/udc/omap_udc.c
··· 2033 2033 { 2034 2034 return machine_is_omap_innovator() 2035 2035 || machine_is_omap_osk() 2036 + || machine_is_omap_palmte() 2036 2037 || machine_is_sx1() 2037 2038 /* No known omap7xx boards with vbus sense */ 2038 2039 || cpu_is_omap7xx(); ··· 2042 2041 static int omap_udc_start(struct usb_gadget *g, 2043 2042 struct usb_gadget_driver *driver) 2044 2043 { 2045 - int status = -ENODEV; 2044 + int status; 2046 2045 struct omap_ep *ep; 2047 2046 unsigned long flags; 2048 2047 ··· 2080 2079 goto done; 2081 2080 } 2082 2081 } else { 2082 + status = 0; 2083 2083 if (can_pullup(udc)) 2084 2084 pullup_enable(udc); 2085 2085 else ··· 2595 2593 2596 2594 static void omap_udc_release(struct device *dev) 2597 2595 { 2598 - complete(udc->done); 2596 + pullup_disable(udc); 2597 + if (!IS_ERR_OR_NULL(udc->transceiver)) { 2598 + usb_put_phy(udc->transceiver); 2599 + udc->transceiver = NULL; 2600 + } 2601 + omap_writew(0, UDC_SYSCON1); 2602 + remove_proc_file(); 2603 + if (udc->dc_clk) { 2604 + if (udc->clk_requested) 2605 + omap_udc_enable_clock(0); 2606 + clk_put(udc->hhc_clk); 2607 + clk_put(udc->dc_clk); 2608 + } 2609 + if (udc->done) 2610 + complete(udc->done); 2599 2611 kfree(udc); 2600 - udc = NULL; 2601 2612 } 2602 2613 2603 2614 static int ··· 2642 2627 udc->gadget.speed = USB_SPEED_UNKNOWN; 2643 2628 udc->gadget.max_speed = USB_SPEED_FULL; 2644 2629 udc->gadget.name = driver_name; 2630 + udc->gadget.quirk_ep_out_aligned_size = 1; 2645 2631 udc->transceiver = xceiv; 2646 2632 2647 2633 /* ep0 is special; put it right after the SETUP buffer */ ··· 2883 2867 udc->clr_halt = UDC_RESET_EP; 2884 2868 2885 2869 /* USB general purpose IRQ: ep0, state changes, dma, etc */ 2886 - status = request_irq(pdev->resource[1].start, omap_udc_irq, 2887 - 0, driver_name, udc); 2870 + status = devm_request_irq(&pdev->dev, pdev->resource[1].start, 2871 + omap_udc_irq, 0, driver_name, udc); 2888 2872 if (status != 0) { 2889 2873 ERR("can't get irq %d, err %d\n", 2890 2874 (int) pdev->resource[1].start, status); ··· 2892 2876 } 2893 2877 2894 2878 /* USB "non-iso" IRQ (PIO for all but ep0) */ 2895 - status = request_irq(pdev->resource[2].start, omap_udc_pio_irq, 2896 - 0, "omap_udc pio", udc); 2879 + status = devm_request_irq(&pdev->dev, pdev->resource[2].start, 2880 + omap_udc_pio_irq, 0, "omap_udc pio", udc); 2897 2881 if (status != 0) { 2898 2882 ERR("can't get irq %d, err %d\n", 2899 2883 (int) pdev->resource[2].start, status); 2900 - goto cleanup2; 2884 + goto cleanup1; 2901 2885 } 2902 2886 #ifdef USE_ISO 2903 - status = request_irq(pdev->resource[3].start, omap_udc_iso_irq, 2904 - 0, "omap_udc iso", udc); 2887 + status = devm_request_irq(&pdev->dev, pdev->resource[3].start, 2888 + omap_udc_iso_irq, 0, "omap_udc iso", udc); 2905 2889 if (status != 0) { 2906 2890 ERR("can't get irq %d, err %d\n", 2907 2891 (int) pdev->resource[3].start, status); 2908 - goto cleanup3; 2892 + goto cleanup1; 2909 2893 } 2910 2894 #endif 2911 2895 if (cpu_is_omap16xx() || cpu_is_omap7xx()) { ··· 2916 2900 } 2917 2901 2918 2902 create_proc_file(); 2919 - status = usb_add_gadget_udc_release(&pdev->dev, &udc->gadget, 2920 - omap_udc_release); 2921 - if (status) 2922 - goto cleanup4; 2923 - 2924 - return 0; 2925 - 2926 - cleanup4: 2927 - remove_proc_file(); 2928 - 2929 - #ifdef USE_ISO 2930 - cleanup3: 2931 - free_irq(pdev->resource[2].start, udc); 2932 - #endif 2933 - 2934 - cleanup2: 2935 - free_irq(pdev->resource[1].start, udc); 2903 + return usb_add_gadget_udc_release(&pdev->dev, &udc->gadget, 2904 + omap_udc_release); 2936 2905 2937 2906 cleanup1: 2938 2907 kfree(udc); ··· 2944 2943 { 2945 2944 DECLARE_COMPLETION_ONSTACK(done); 2946 2945 2947 - if (!udc) 2948 - return -ENODEV; 2949 - 2950 - usb_del_gadget_udc(&udc->gadget); 2951 - if (udc->driver) 2952 - return -EBUSY; 2953 - 2954 2946 udc->done = &done; 2955 2947 2956 - pullup_disable(udc); 2957 - if (!IS_ERR_OR_NULL(udc->transceiver)) { 2958 - usb_put_phy(udc->transceiver); 2959 - udc->transceiver = NULL; 2960 - } 2961 - omap_writew(0, UDC_SYSCON1); 2948 + usb_del_gadget_udc(&udc->gadget); 2962 2949 2963 - remove_proc_file(); 2964 - 2965 - #ifdef USE_ISO 2966 - free_irq(pdev->resource[3].start, udc); 2967 - #endif 2968 - free_irq(pdev->resource[2].start, udc); 2969 - free_irq(pdev->resource[1].start, udc); 2970 - 2971 - if (udc->dc_clk) { 2972 - if (udc->clk_requested) 2973 - omap_udc_enable_clock(0); 2974 - clk_put(udc->hhc_clk); 2975 - clk_put(udc->dc_clk); 2976 - } 2950 + wait_for_completion(&done); 2977 2951 2978 2952 release_mem_region(pdev->resource[0].start, 2979 2953 pdev->resource[0].end - pdev->resource[0].start + 1); 2980 - 2981 - wait_for_completion(&done); 2982 2954 2983 2955 return 0; 2984 2956 }
+4 -2
drivers/usb/host/xhci-histb.c
··· 325 325 struct xhci_hcd_histb *histb = platform_get_drvdata(dev); 326 326 struct usb_hcd *hcd = histb->hcd; 327 327 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 328 + struct usb_hcd *shared_hcd = xhci->shared_hcd; 328 329 329 330 xhci->xhc_state |= XHCI_STATE_REMOVING; 330 331 331 - usb_remove_hcd(xhci->shared_hcd); 332 + usb_remove_hcd(shared_hcd); 333 + xhci->shared_hcd = NULL; 332 334 device_wakeup_disable(&dev->dev); 333 335 334 336 usb_remove_hcd(hcd); 335 - usb_put_hcd(xhci->shared_hcd); 337 + usb_put_hcd(shared_hcd); 336 338 337 339 xhci_histb_host_disable(histb); 338 340 usb_put_hcd(hcd);
+49 -17
drivers/usb/host/xhci-hub.c
··· 876 876 status |= USB_PORT_STAT_SUSPEND; 877 877 } 878 878 if ((raw_port_status & PORT_PLS_MASK) == XDEV_RESUME && 879 - !DEV_SUPERSPEED_ANY(raw_port_status)) { 879 + !DEV_SUPERSPEED_ANY(raw_port_status) && hcd->speed < HCD_USB3) { 880 880 if ((raw_port_status & PORT_RESET) || 881 881 !(raw_port_status & PORT_PE)) 882 882 return 0xffffffff; ··· 921 921 time_left = wait_for_completion_timeout( 922 922 &bus_state->rexit_done[wIndex], 923 923 msecs_to_jiffies( 924 - XHCI_MAX_REXIT_TIMEOUT)); 924 + XHCI_MAX_REXIT_TIMEOUT_MS)); 925 925 spin_lock_irqsave(&xhci->lock, flags); 926 926 927 927 if (time_left) { ··· 935 935 } else { 936 936 int port_status = readl(port->addr); 937 937 xhci_warn(xhci, "Port resume took longer than %i msec, port status = 0x%x\n", 938 - XHCI_MAX_REXIT_TIMEOUT, 938 + XHCI_MAX_REXIT_TIMEOUT_MS, 939 939 port_status); 940 940 status |= USB_PORT_STAT_SUSPEND; 941 941 clear_bit(wIndex, &bus_state->rexit_ports); ··· 1474 1474 unsigned long flags; 1475 1475 struct xhci_hub *rhub; 1476 1476 struct xhci_port **ports; 1477 + u32 portsc_buf[USB_MAXCHILDREN]; 1478 + bool wake_enabled; 1477 1479 rhub = xhci_get_rhub(hcd); 1478 1480 ports = rhub->ports; 1479 1481 max_ports = rhub->num_ports; 1480 1482 bus_state = &xhci->bus_state[hcd_index(hcd)]; 1483 + wake_enabled = hcd->self.root_hub->do_remote_wakeup; 1481 1484 1482 1485 spin_lock_irqsave(&xhci->lock, flags); 1483 1486 1484 - if (hcd->self.root_hub->do_remote_wakeup) { 1487 + if (wake_enabled) { 1485 1488 if (bus_state->resuming_ports || /* USB2 */ 1486 1489 bus_state->port_remote_wakeup) { /* USB3 */ 1487 1490 spin_unlock_irqrestore(&xhci->lock, flags); ··· 1493 1490 return -EBUSY; 1494 1491 } 1495 1492 } 1496 - 1497 - port_index = max_ports; 1493 + /* 1494 + * Prepare ports for suspend, but don't write anything before all ports 1495 + * are checked and we know bus suspend can proceed 1496 + */ 1498 1497 bus_state->bus_suspended = 0; 1498 + port_index = max_ports; 1499 1499 while (port_index--) { 1500 - /* suspend the port if the port is not suspended */ 1501 1500 u32 t1, t2; 1502 - int slot_id; 1503 1501 1504 1502 t1 = readl(ports[port_index]->addr); 1505 1503 t2 = xhci_port_state_to_neutral(t1); 1504 + portsc_buf[port_index] = 0; 1506 1505 1507 - if ((t1 & PORT_PE) && !(t1 & PORT_PLS_MASK)) { 1508 - xhci_dbg(xhci, "port %d not suspended\n", port_index); 1509 - slot_id = xhci_find_slot_id_by_port(hcd, xhci, 1510 - port_index + 1); 1511 - if (slot_id) { 1506 + /* Bail out if a USB3 port has a new device in link training */ 1507 + if ((t1 & PORT_PLS_MASK) == XDEV_POLLING) { 1508 + bus_state->bus_suspended = 0; 1509 + spin_unlock_irqrestore(&xhci->lock, flags); 1510 + xhci_dbg(xhci, "Bus suspend bailout, port in polling\n"); 1511 + return -EBUSY; 1512 + } 1513 + 1514 + /* suspend ports in U0, or bail out for new connect changes */ 1515 + if ((t1 & PORT_PE) && (t1 & PORT_PLS_MASK) == XDEV_U0) { 1516 + if ((t1 & PORT_CSC) && wake_enabled) { 1517 + bus_state->bus_suspended = 0; 1512 1518 spin_unlock_irqrestore(&xhci->lock, flags); 1519 + xhci_dbg(xhci, "Bus suspend bailout, port connect change\n"); 1520 + return -EBUSY; 1513 - xhci_stop_device(xhci, slot_id, 1); 1514 - spin_lock_irqsave(&xhci->lock, flags); 1515 1521 } 1522 + xhci_dbg(xhci, "port %d not suspended\n", port_index); 1516 1523 t2 &= ~PORT_PLS_MASK; 1517 1524 t2 |= PORT_LINK_STROBE | XDEV_U3; 1518 1525 set_bit(port_index, &bus_state->bus_suspended); ··· 1531 1518 * including the USB 3.0 roothub, but only if CONFIG_PM 1532 1519 * is enabled, so also enable remote wake here. 1533 1520 */ 1534 - if (hcd->self.root_hub->do_remote_wakeup) { 1521 + if (wake_enabled) { 1535 1522 if (t1 & PORT_CONNECT) { 1536 1523 t2 |= PORT_WKOC_E | PORT_WKDISC_E; 1537 1524 t2 &= ~PORT_WKCONN_E; ··· 1551 1538 1552 1539 t1 = xhci_port_state_to_neutral(t1); 1553 1540 if (t1 != t2) 1554 - writel(t2, ports[port_index]->addr); 1541 + portsc_buf[port_index] = t2; 1542 + } 1543 + 1544 + /* write port settings, stopping and suspending ports if needed */ 1545 + port_index = max_ports; 1546 + while (port_index--) { 1547 + if (!portsc_buf[port_index]) 1548 + continue; 1549 + if (test_bit(port_index, &bus_state->bus_suspended)) { 1550 + int slot_id; 1551 + 1552 + slot_id = xhci_find_slot_id_by_port(hcd, xhci, 1553 + port_index + 1); 1554 + if (slot_id) { 1555 + spin_unlock_irqrestore(&xhci->lock, flags); 1556 + xhci_stop_device(xhci, slot_id, 1); 1557 + spin_lock_irqsave(&xhci->lock, flags); 1558 + } 1559 + } 1560 + writel(portsc_buf[port_index], ports[port_index]->addr); 1555 1561 } 1556 1562 hcd->state = HC_STATE_SUSPENDED; 1557 1563 bus_state->next_statechange = jiffies + msecs_to_jiffies(10);
+4 -2
drivers/usb/host/xhci-mtk.c
··· 590 590 struct xhci_hcd_mtk *mtk = platform_get_drvdata(dev); 591 591 struct usb_hcd *hcd = mtk->hcd; 592 592 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 593 + struct usb_hcd *shared_hcd = xhci->shared_hcd; 593 594 594 - usb_remove_hcd(xhci->shared_hcd); 595 + usb_remove_hcd(shared_hcd); 596 + xhci->shared_hcd = NULL; 595 597 device_init_wakeup(&dev->dev, false); 596 598 597 599 usb_remove_hcd(hcd); 598 - usb_put_hcd(xhci->shared_hcd); 600 + usb_put_hcd(shared_hcd); 599 601 usb_put_hcd(hcd); 600 602 xhci_mtk_sch_exit(mtk); 601 603 xhci_mtk_clks_disable(mtk);
+6
drivers/usb/host/xhci-pci.c
··· 248 248 if (pdev->vendor == PCI_VENDOR_ID_TI && pdev->device == 0x8241) 249 249 xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_7; 250 250 251 + if ((pdev->vendor == PCI_VENDOR_ID_BROADCOM || 252 + pdev->vendor == PCI_VENDOR_ID_CAVIUM) && 253 + pdev->device == 0x9026) 254 + xhci->quirks |= XHCI_RESET_PLL_ON_DISCONNECT; 255 + 251 256 if (xhci->quirks & XHCI_RESET_ON_RESUME) 252 257 xhci_dbg_trace(xhci, trace_xhci_dbg_quirks, 253 258 "QUIRK: Resetting on resume"); ··· 385 380 if (xhci->shared_hcd) { 386 381 usb_remove_hcd(xhci->shared_hcd); 387 382 usb_put_hcd(xhci->shared_hcd); 383 + xhci->shared_hcd = NULL; 388 384 } 389 385 390 386 /* Workaround for spurious wakeups at shutdown with HSW */
+4 -2
drivers/usb/host/xhci-plat.c
··· 362 362 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 363 363 struct clk *clk = xhci->clk; 364 364 struct clk *reg_clk = xhci->reg_clk; 365 + struct usb_hcd *shared_hcd = xhci->shared_hcd; 365 366 366 367 xhci->xhc_state |= XHCI_STATE_REMOVING; 367 368 368 - usb_remove_hcd(xhci->shared_hcd); 369 + usb_remove_hcd(shared_hcd); 370 + xhci->shared_hcd = NULL; 369 371 usb_phy_shutdown(hcd->usb_phy); 370 372 371 373 usb_remove_hcd(hcd); 372 - usb_put_hcd(xhci->shared_hcd); 374 + usb_put_hcd(shared_hcd); 373 375 374 376 clk_disable_unprepare(clk); 375 377 clk_disable_unprepare(reg_clk);
+43 -2
drivers/usb/host/xhci-ring.c
··· 1521 1521 usb_wakeup_notification(udev->parent, udev->portnum); 1522 1522 } 1523 1523 1524 + /* 1525 + * Quirk handler for errata seen on Cavium ThunderX2 processor XHCI 1526 + * Controller. 1527 + * As per ThunderX2 errata-129, a USB 2 device may come up as USB 1 1528 + * if a connection to a USB 1 device is followed by another connection 1529 + * to a USB 2 device. 1530 + * 1531 + * Reset the PHY after the USB device is disconnected if device speed 1532 + * is less than HCD_USB3. 1533 + * Retry the reset sequence a maximum of 4 times, checking the PLL lock status. 1534 + * 1535 + */ 1536 + static void xhci_cavium_reset_phy_quirk(struct xhci_hcd *xhci) 1537 + { 1538 + struct usb_hcd *hcd = xhci_to_hcd(xhci); 1539 + u32 pll_lock_check; 1540 + u32 retry_count = 4; 1541 + 1542 + do { 1543 + /* Assert PHY reset */ 1544 + writel(0x6F, hcd->regs + 0x1048); 1545 + udelay(10); 1546 + /* De-assert the PHY reset */ 1547 + writel(0x7F, hcd->regs + 0x1048); 1548 + udelay(200); 1549 + pll_lock_check = readl(hcd->regs + 0x1070); 1550 + } while (!(pll_lock_check & 0x1) && --retry_count); 1551 + } 1552 + 1524 1553 static void handle_port_status(struct xhci_hcd *xhci, 1525 1554 union xhci_trb *event) 1526 1555 { ··· 1581 1552 port = &xhci->hw_ports[port_id - 1]; 1582 1553 if (!port || !port->rhub || port->hcd_portnum == DUPLICATE_ENTRY) { 1583 1554 xhci_warn(xhci, "Event for invalid port %u\n", port_id); 1555 + bogus_port_status = true; 1556 + goto cleanup; 1557 + } 1558 + 1559 + /* We might get interrupts after shared_hcd is removed */ 1560 + if (port->rhub == &xhci->usb3_rhub && xhci->shared_hcd == NULL) { 1561 + xhci_dbg(xhci, "ignore port event for removed USB3 hcd\n"); 1584 1562 bogus_port_status = true; 1585 1563 goto cleanup; 1586 1564 } ··· 1675 1639 * RExit to a disconnect state). If so, let the driver know it's 1676 1640 * out of the RExit state.
1677 1641 */ 1678 - if (!DEV_SUPERSPEED_ANY(portsc) && 1642 + if (!DEV_SUPERSPEED_ANY(portsc) && hcd->speed < HCD_USB3 && 1679 1643 test_and_clear_bit(hcd_portnum, 1680 1644 &bus_state->rexit_ports)) { 1681 1645 complete(&bus_state->rexit_done[hcd_portnum]); ··· 1683 1647 goto cleanup; 1684 1648 } 1685 1649 1686 - if (hcd->speed < HCD_USB3) 1650 + if (hcd->speed < HCD_USB3) { 1687 1651 xhci_test_and_clear_bit(xhci, port, PORT_PLC); 1652 + if ((xhci->quirks & XHCI_RESET_PLL_ON_DISCONNECT) && 1653 + (portsc & PORT_CSC) && !(portsc & PORT_CONNECT)) 1654 + xhci_cavium_reset_phy_quirk(xhci); 1655 + } 1688 1656 1689 1657 cleanup: 1690 1658 /* Update event ring dequeue pointer before dropping the lock */ ··· 2306 2266 goto cleanup; 2307 2267 case COMP_RING_UNDERRUN: 2308 2268 case COMP_RING_OVERRUN: 2269 + case COMP_STOPPED_LENGTH_INVALID: 2309 2270 goto cleanup; 2310 2271 default: 2311 2272 xhci_err(xhci, "ERROR Transfer event for unknown stream ring slot %u ep %u\n",
+1
drivers/usb/host/xhci-tegra.c
··· 1303 1303 1304 1304 usb_remove_hcd(xhci->shared_hcd); 1305 1305 usb_put_hcd(xhci->shared_hcd); 1306 + xhci->shared_hcd = NULL; 1306 1307 usb_remove_hcd(tegra->hcd); 1307 1308 usb_put_hcd(tegra->hcd); 1308 1309
-2
drivers/usb/host/xhci.c
··· 719 719 720 720 /* Only halt host and free memory after both hcds are removed */ 721 721 if (!usb_hcd_is_primary_hcd(hcd)) { 722 - /* usb core will free this hcd shortly, unset pointer */ 723 - xhci->shared_hcd = NULL; 724 722 mutex_unlock(&xhci->mutex); 725 723 return; 726 724 }
+2 -1
drivers/usb/host/xhci.h
··· 1680 1680 * It can take up to 20 ms to transition from RExit to U0 on the 1681 1681 * Intel Lynx Point LP xHCI host. 1682 1682 */ 1683 - #define XHCI_MAX_REXIT_TIMEOUT (20 * 1000) 1683 + #define XHCI_MAX_REXIT_TIMEOUT_MS 20 1684 1684 1685 1685 static inline unsigned int hcd_index(struct usb_hcd *hcd) 1686 1686 { ··· 1849 1849 #define XHCI_INTEL_USB_ROLE_SW BIT_ULL(31) 1850 1850 #define XHCI_ZERO_64B_REGS BIT_ULL(32) 1851 1851 #define XHCI_DEFAULT_PM_RUNTIME_ALLOW BIT_ULL(33) 1852 + #define XHCI_RESET_PLL_ON_DISCONNECT BIT_ULL(34) 1852 1853 1853 1854 unsigned int num_active_eps; 1854 1855 unsigned int limit_active_eps;
+1
drivers/usb/misc/appledisplay.c
··· 50 50 { APPLEDISPLAY_DEVICE(0x9219) }, 51 51 { APPLEDISPLAY_DEVICE(0x921c) }, 52 52 { APPLEDISPLAY_DEVICE(0x921d) }, 53 + { APPLEDISPLAY_DEVICE(0x9222) }, 53 54 { APPLEDISPLAY_DEVICE(0x9236) }, 54 55 55 56 /* Terminating entry */
+10
drivers/usb/storage/unusual_realtek.h
··· 27 27 "USB Card Reader", 28 28 USB_SC_DEVICE, USB_PR_DEVICE, init_realtek_cr, 0), 29 29 30 + UNUSUAL_DEV(0x0bda, 0x0177, 0x0000, 0x9999, 31 + "Realtek", 32 + "USB Card Reader", 33 + USB_SC_DEVICE, USB_PR_DEVICE, init_realtek_cr, 0), 34 + 35 + UNUSUAL_DEV(0x0bda, 0x0184, 0x0000, 0x9999, 36 + "Realtek", 37 + "USB Card Reader", 38 + USB_SC_DEVICE, USB_PR_DEVICE, init_realtek_cr, 0), 39 + 30 40 #endif /* defined(CONFIG_USB_STORAGE_REALTEK) || ... */
+10
drivers/usb/typec/ucsi/Kconfig
··· 23 23 24 24 if TYPEC_UCSI 25 25 26 + config UCSI_CCG 27 + tristate "UCSI Interface Driver for Cypress CCGx" 28 + depends on I2C 29 + help 30 + This driver enables UCSI support on platforms that expose a 31 + Cypress CCGx Type-C controller over I2C interface. 32 + 33 + To compile the driver as a module, choose M here: the module will be 34 + called ucsi_ccg. 35 + 26 36 config UCSI_ACPI 27 37 tristate "UCSI ACPI Interface Driver" 28 38 depends on ACPI
+2
drivers/usb/typec/ucsi/Makefile
··· 8 8 typec_ucsi-$(CONFIG_TRACING) += trace.o 9 9 10 10 obj-$(CONFIG_UCSI_ACPI) += ucsi_acpi.o 11 + 12 + obj-$(CONFIG_UCSI_CCG) += ucsi_ccg.o
+307
drivers/usb/typec/ucsi/ucsi_ccg.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * UCSI driver for Cypress CCGx Type-C controller 4 + * 5 + * Copyright (C) 2017-2018 NVIDIA Corporation. All rights reserved. 6 + * Author: Ajay Gupta <ajayg@nvidia.com> 7 + * 8 + * Some code borrowed from drivers/usb/typec/ucsi/ucsi_acpi.c 9 + */ 10 + #include <linux/acpi.h> 11 + #include <linux/delay.h> 12 + #include <linux/i2c.h> 13 + #include <linux/module.h> 14 + #include <linux/pci.h> 15 + #include <linux/platform_device.h> 16 + 17 + #include <asm/unaligned.h> 18 + #include "ucsi.h" 19 + 20 + struct ucsi_ccg { 21 + struct device *dev; 22 + struct ucsi *ucsi; 23 + struct ucsi_ppm ppm; 24 + struct i2c_client *client; 25 + }; 26 + 27 + #define CCGX_RAB_INTR_REG 0x06 28 + #define CCGX_RAB_UCSI_CONTROL 0x39 29 + #define CCGX_RAB_UCSI_CONTROL_START BIT(0) 30 + #define CCGX_RAB_UCSI_CONTROL_STOP BIT(1) 31 + #define CCGX_RAB_UCSI_DATA_BLOCK(offset) (0xf000 | ((offset) & 0xff)) 32 + 33 + static int ccg_read(struct ucsi_ccg *uc, u16 rab, u8 *data, u32 len) 34 + { 35 + struct i2c_client *client = uc->client; 36 + const struct i2c_adapter_quirks *quirks = client->adapter->quirks; 37 + unsigned char buf[2]; 38 + struct i2c_msg msgs[] = { 39 + { 40 + .addr = client->addr, 41 + .flags = 0x0, 42 + .len = sizeof(buf), 43 + .buf = buf, 44 + }, 45 + { 46 + .addr = client->addr, 47 + .flags = I2C_M_RD, 48 + .buf = data, 49 + }, 50 + }; 51 + u32 rlen, rem_len = len, max_read_len = len; 52 + int status; 53 + 54 + /* check any max_read_len limitation on i2c adapter */ 55 + if (quirks && quirks->max_read_len) 56 + max_read_len = quirks->max_read_len; 57 + 58 + while (rem_len > 0) { 59 + msgs[1].buf = &data[len - rem_len]; 60 + rlen = min_t(u16, rem_len, max_read_len); 61 + msgs[1].len = rlen; 62 + put_unaligned_le16(rab, buf); 63 + status = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs)); 64 + if (status < 0) { 65 + dev_err(uc->dev, "i2c_transfer failed %d\n", status); 66 + return status; 67 + } 68 + rab += rlen; 69 + 
rem_len -= rlen; 70 + } 71 + 72 + return 0; 73 + } 74 + 75 + static int ccg_write(struct ucsi_ccg *uc, u16 rab, u8 *data, u32 len) 76 + { 77 + struct i2c_client *client = uc->client; 78 + unsigned char *buf; 79 + struct i2c_msg msgs[] = { 80 + { 81 + .addr = client->addr, 82 + .flags = 0x0, 83 + } 84 + }; 85 + int status; 86 + 87 + buf = kzalloc(len + sizeof(rab), GFP_KERNEL); 88 + if (!buf) 89 + return -ENOMEM; 90 + 91 + put_unaligned_le16(rab, buf); 92 + memcpy(buf + sizeof(rab), data, len); 93 + 94 + msgs[0].len = len + sizeof(rab); 95 + msgs[0].buf = buf; 96 + 97 + status = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs)); 98 + if (status < 0) { 99 + dev_err(uc->dev, "i2c_transfer failed %d\n", status); 100 + kfree(buf); 101 + return status; 102 + } 103 + 104 + kfree(buf); 105 + return 0; 106 + } 107 + 108 + static int ucsi_ccg_init(struct ucsi_ccg *uc) 109 + { 110 + unsigned int count = 10; 111 + u8 data; 112 + int status; 113 + 114 + data = CCGX_RAB_UCSI_CONTROL_STOP; 115 + status = ccg_write(uc, CCGX_RAB_UCSI_CONTROL, &data, sizeof(data)); 116 + if (status < 0) 117 + return status; 118 + 119 + data = CCGX_RAB_UCSI_CONTROL_START; 120 + status = ccg_write(uc, CCGX_RAB_UCSI_CONTROL, &data, sizeof(data)); 121 + if (status < 0) 122 + return status; 123 + 124 + /* 125 + * Flush the CCGx RESPONSE queue by acking interrupts. The UCSI control 126 + * register write above will push a response, which must be cleared. 
127 + */ 128 + do { 129 + status = ccg_read(uc, CCGX_RAB_INTR_REG, &data, sizeof(data)); 130 + if (status < 0) 131 + return status; 132 + 133 + if (!data) 134 + return 0; 135 + 136 + status = ccg_write(uc, CCGX_RAB_INTR_REG, &data, sizeof(data)); 137 + if (status < 0) 138 + return status; 139 + 140 + usleep_range(10000, 11000); 141 + } while (--count); 142 + 143 + return -ETIMEDOUT; 144 + } 145 + 146 + static int ucsi_ccg_send_data(struct ucsi_ccg *uc) 147 + { 148 + u8 *ppm = (u8 *)uc->ppm.data; 149 + int status; 150 + u16 rab; 151 + 152 + rab = CCGX_RAB_UCSI_DATA_BLOCK(offsetof(struct ucsi_data, message_out)); 153 + status = ccg_write(uc, rab, ppm + 154 + offsetof(struct ucsi_data, message_out), 155 + sizeof(uc->ppm.data->message_out)); 156 + if (status < 0) 157 + return status; 158 + 159 + rab = CCGX_RAB_UCSI_DATA_BLOCK(offsetof(struct ucsi_data, ctrl)); 160 + return ccg_write(uc, rab, ppm + offsetof(struct ucsi_data, ctrl), 161 + sizeof(uc->ppm.data->ctrl)); 162 + } 163 + 164 + static int ucsi_ccg_recv_data(struct ucsi_ccg *uc) 165 + { 166 + u8 *ppm = (u8 *)uc->ppm.data; 167 + int status; 168 + u16 rab; 169 + 170 + rab = CCGX_RAB_UCSI_DATA_BLOCK(offsetof(struct ucsi_data, cci)); 171 + status = ccg_read(uc, rab, ppm + offsetof(struct ucsi_data, cci), 172 + sizeof(uc->ppm.data->cci)); 173 + if (status < 0) 174 + return status; 175 + 176 + rab = CCGX_RAB_UCSI_DATA_BLOCK(offsetof(struct ucsi_data, message_in)); 177 + return ccg_read(uc, rab, ppm + offsetof(struct ucsi_data, message_in), 178 + sizeof(uc->ppm.data->message_in)); 179 + } 180 + 181 + static int ucsi_ccg_ack_interrupt(struct ucsi_ccg *uc) 182 + { 183 + int status; 184 + unsigned char data; 185 + 186 + status = ccg_read(uc, CCGX_RAB_INTR_REG, &data, sizeof(data)); 187 + if (status < 0) 188 + return status; 189 + 190 + return ccg_write(uc, CCGX_RAB_INTR_REG, &data, sizeof(data)); 191 + } 192 + 193 + static int ucsi_ccg_sync(struct ucsi_ppm *ppm) 194 + { 195 + struct ucsi_ccg *uc = container_of(ppm, struct 
ucsi_ccg, ppm); 196 + int status; 197 + 198 + status = ucsi_ccg_recv_data(uc); 199 + if (status < 0) 200 + return status; 201 + 202 + /* ack interrupt to allow next command to run */ 203 + return ucsi_ccg_ack_interrupt(uc); 204 + } 205 + 206 + static int ucsi_ccg_cmd(struct ucsi_ppm *ppm, struct ucsi_control *ctrl) 207 + { 208 + struct ucsi_ccg *uc = container_of(ppm, struct ucsi_ccg, ppm); 209 + 210 + ppm->data->ctrl.raw_cmd = ctrl->raw_cmd; 211 + return ucsi_ccg_send_data(uc); 212 + } 213 + 214 + static irqreturn_t ccg_irq_handler(int irq, void *data) 215 + { 216 + struct ucsi_ccg *uc = data; 217 + 218 + ucsi_notify(uc->ucsi); 219 + 220 + return IRQ_HANDLED; 221 + } 222 + 223 + static int ucsi_ccg_probe(struct i2c_client *client, 224 + const struct i2c_device_id *id) 225 + { 226 + struct device *dev = &client->dev; 227 + struct ucsi_ccg *uc; 228 + int status; 229 + u16 rab; 230 + 231 + uc = devm_kzalloc(dev, sizeof(*uc), GFP_KERNEL); 232 + if (!uc) 233 + return -ENOMEM; 234 + 235 + uc->ppm.data = devm_kzalloc(dev, sizeof(struct ucsi_data), GFP_KERNEL); 236 + if (!uc->ppm.data) 237 + return -ENOMEM; 238 + 239 + uc->ppm.cmd = ucsi_ccg_cmd; 240 + uc->ppm.sync = ucsi_ccg_sync; 241 + uc->dev = dev; 242 + uc->client = client; 243 + 244 + /* reset ccg device and initialize ucsi */ 245 + status = ucsi_ccg_init(uc); 246 + if (status < 0) { 247 + dev_err(uc->dev, "ucsi_ccg_init failed - %d\n", status); 248 + return status; 249 + } 250 + 251 + status = devm_request_threaded_irq(dev, client->irq, NULL, 252 + ccg_irq_handler, 253 + IRQF_ONESHOT | IRQF_TRIGGER_HIGH, 254 + dev_name(dev), uc); 255 + if (status < 0) { 256 + dev_err(uc->dev, "request_threaded_irq failed - %d\n", status); 257 + return status; 258 + } 259 + 260 + uc->ucsi = ucsi_register_ppm(dev, &uc->ppm); 261 + if (IS_ERR(uc->ucsi)) { 262 + dev_err(uc->dev, "ucsi_register_ppm failed\n"); 263 + return PTR_ERR(uc->ucsi); 264 + } 265 + 266 + rab = CCGX_RAB_UCSI_DATA_BLOCK(offsetof(struct ucsi_data, version)); 267 + 
status = ccg_read(uc, rab, (u8 *)(uc->ppm.data) + 268 + offsetof(struct ucsi_data, version), 269 + sizeof(uc->ppm.data->version)); 270 + if (status < 0) { 271 + ucsi_unregister_ppm(uc->ucsi); 272 + return status; 273 + } 274 + 275 + i2c_set_clientdata(client, uc); 276 + return 0; 277 + } 278 + 279 + static int ucsi_ccg_remove(struct i2c_client *client) 280 + { 281 + struct ucsi_ccg *uc = i2c_get_clientdata(client); 282 + 283 + ucsi_unregister_ppm(uc->ucsi); 284 + 285 + return 0; 286 + } 287 + 288 + static const struct i2c_device_id ucsi_ccg_device_id[] = { 289 + {"ccgx-ucsi", 0}, 290 + {} 291 + }; 292 + MODULE_DEVICE_TABLE(i2c, ucsi_ccg_device_id); 293 + 294 + static struct i2c_driver ucsi_ccg_driver = { 295 + .driver = { 296 + .name = "ucsi_ccg", 297 + }, 298 + .probe = ucsi_ccg_probe, 299 + .remove = ucsi_ccg_remove, 300 + .id_table = ucsi_ccg_device_id, 301 + }; 302 + 303 + module_i2c_driver(ucsi_ccg_driver); 304 + 305 + MODULE_AUTHOR("Ajay Gupta <ajayg@nvidia.com>"); 306 + MODULE_DESCRIPTION("UCSI driver for Cypress CCGx Type-C controller"); 307 + MODULE_LICENSE("GPL v2");
+9 -56
drivers/xen/balloon.c
··· 251 251 kfree(resource); 252 252 } 253 253 254 - /* 255 - * Host memory not allocated to dom0. We can use this range for hotplug-based 256 - * ballooning. 257 - * 258 - * It's a type-less resource. Setting IORESOURCE_MEM will make resource 259 - * management algorithms (arch_remove_reservations()) look into guest e820, 260 - * which we don't want. 261 - */ 262 - static struct resource hostmem_resource = { 263 - .name = "Host RAM", 264 - }; 265 - 266 - void __attribute__((weak)) __init arch_xen_balloon_init(struct resource *res) 267 - {} 268 - 269 254 static struct resource *additional_memory_resource(phys_addr_t size) 270 255 { 271 - struct resource *res, *res_hostmem; 272 - int ret = -ENOMEM; 256 + struct resource *res; 257 + int ret; 273 258 274 259 res = kzalloc(sizeof(*res), GFP_KERNEL); 275 260 if (!res) ··· 263 278 res->name = "System RAM"; 264 279 res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY; 265 280 266 - res_hostmem = kzalloc(sizeof(*res), GFP_KERNEL); 267 - if (res_hostmem) { 268 - /* Try to grab a range from hostmem */ 269 - res_hostmem->name = "Host memory"; 270 - ret = allocate_resource(&hostmem_resource, res_hostmem, 271 - size, 0, -1, 272 - PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL); 273 - } 274 - 275 - if (!ret) { 276 - /* 277 - * Insert this resource into iomem. Because hostmem_resource 278 - * tracks portion of guest e820 marked as UNUSABLE noone else 279 - * should try to use it. 
280 - */ 281 - res->start = res_hostmem->start; 282 - res->end = res_hostmem->end; 283 - ret = insert_resource(&iomem_resource, res); 284 - if (ret < 0) { 285 - pr_err("Can't insert iomem_resource [%llx - %llx]\n", 286 - res->start, res->end); 287 - release_memory_resource(res_hostmem); 288 - res_hostmem = NULL; 289 - res->start = res->end = 0; 290 - } 291 - } 292 - 293 - if (ret) { 294 - ret = allocate_resource(&iomem_resource, res, 295 - size, 0, -1, 296 - PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL); 297 - if (ret < 0) { 298 - pr_err("Cannot allocate new System RAM resource\n"); 299 - kfree(res); 300 - return NULL; 301 - } 281 + ret = allocate_resource(&iomem_resource, res, 282 + size, 0, -1, 283 + PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL); 284 + if (ret < 0) { 285 + pr_err("Cannot allocate new System RAM resource\n"); 286 + kfree(res); 287 + return NULL; 302 288 } 303 289 304 290 #ifdef CONFIG_SPARSEMEM ··· 281 325 pr_err("New System RAM resource outside addressable RAM (%lu > %lu)\n", 282 326 pfn, limit); 283 327 release_memory_resource(res); 284 - release_memory_resource(res_hostmem); 285 328 return NULL; 286 329 } 287 330 } ··· 705 750 set_online_page_callback(&xen_online_page); 706 751 register_memory_notifier(&xen_memory_nb); 707 752 register_sysctl_table(xen_root); 708 - 709 - arch_xen_balloon_init(&hostmem_resource); 710 753 #endif 711 754 712 755 #ifdef CONFIG_XEN_PV
+1 -1
drivers/xen/grant-table.c
··· 914 914 915 915 ret = xenmem_reservation_increase(args->nr_pages, args->frames); 916 916 if (ret != args->nr_pages) { 917 - pr_debug("Failed to decrease reservation for DMA buffer\n"); 917 + pr_debug("Failed to increase reservation for DMA buffer\n"); 918 918 ret = -EFAULT; 919 919 } else { 920 920 ret = 0;
+4 -18
drivers/xen/privcmd-buf.c
··· 21 21 22 22 MODULE_LICENSE("GPL"); 23 23 24 - static unsigned int limit = 64; 25 - module_param(limit, uint, 0644); 26 - MODULE_PARM_DESC(limit, "Maximum number of pages that may be allocated by " 27 - "the privcmd-buf device per open file"); 28 - 29 24 struct privcmd_buf_private { 30 25 struct mutex lock; 31 26 struct list_head list; 32 - unsigned int allocated; 33 27 }; 34 28 35 29 struct privcmd_buf_vma_private { ··· 54 60 { 55 61 unsigned int i; 56 62 57 - vma_priv->file_priv->allocated -= vma_priv->n_pages; 58 - 59 63 list_del(&vma_priv->list); 60 64 61 65 for (i = 0; i < vma_priv->n_pages; i++) 62 - if (vma_priv->pages[i]) 63 - __free_page(vma_priv->pages[i]); 66 + __free_page(vma_priv->pages[i]); 64 67 65 68 kfree(vma_priv); 66 69 } ··· 137 146 unsigned int i; 138 147 int ret = 0; 139 148 140 - if (!(vma->vm_flags & VM_SHARED) || count > limit || 141 - file_priv->allocated + count > limit) 149 + if (!(vma->vm_flags & VM_SHARED)) 142 150 return -EINVAL; 143 151 144 152 vma_priv = kzalloc(sizeof(*vma_priv) + count * sizeof(void *), ··· 145 155 if (!vma_priv) 146 156 return -ENOMEM; 147 157 148 - vma_priv->n_pages = count; 149 - count = 0; 150 - for (i = 0; i < vma_priv->n_pages; i++) { 158 + for (i = 0; i < count; i++) { 151 159 vma_priv->pages[i] = alloc_page(GFP_KERNEL | __GFP_ZERO); 152 160 if (!vma_priv->pages[i]) 153 161 break; 154 - count++; 162 + vma_priv->n_pages++; 155 163 } 156 164 157 165 mutex_lock(&file_priv->lock); 158 - 159 - file_priv->allocated += count; 160 166 161 167 vma_priv->file_priv = file_priv; 162 168 vma_priv->users = 1;
+2 -2
drivers/xen/pvcalls-front.c
··· 385 385 out_error: 386 386 if (*evtchn >= 0) 387 387 xenbus_free_evtchn(pvcalls_front_dev, *evtchn); 388 - kfree(map->active.data.in); 389 - kfree(map->active.ring); 388 + free_pages((unsigned long)map->active.data.in, PVCALLS_RING_ORDER); 389 + free_page((unsigned long)map->active.ring); 390 390 return ret; 391 391 } 392 392
+1
drivers/xen/xlate_mmu.c
··· 36 36 #include <asm/xen/hypervisor.h> 37 37 38 38 #include <xen/xen.h> 39 + #include <xen/xen-ops.h> 39 40 #include <xen/page.h> 40 41 #include <xen/interface/xen.h> 41 42 #include <xen/interface/memory.h>
+1 -3
fs/afs/dir.c
··· 1075 1075 if (fc->ac.error < 0) 1076 1076 return; 1077 1077 1078 - d_drop(new_dentry); 1079 - 1080 1078 inode = afs_iget(fc->vnode->vfs_inode.i_sb, fc->key, 1081 1079 newfid, newstatus, newcb, fc->cbi); 1082 1080 if (IS_ERR(inode)) { ··· 1088 1090 vnode = AFS_FS_I(inode); 1089 1091 set_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags); 1090 1092 afs_vnode_commit_status(fc, vnode, 0); 1091 - d_add(new_dentry, inode); 1093 + d_instantiate(new_dentry, inode); 1092 1094 } 1093 1095 1094 1096 /*
+24 -15
fs/afs/fs_probe.c
··· 61 61 afs_io_error(call, afs_io_error_fs_probe_fail); 62 62 goto out; 63 63 case -ECONNRESET: /* Responded, but call expired. */ 64 + case -ERFKILL: 65 + case -EADDRNOTAVAIL: 64 66 case -ENETUNREACH: 65 67 case -EHOSTUNREACH: 68 + case -EHOSTDOWN: 66 69 case -ECONNREFUSED: 67 70 case -ETIMEDOUT: 68 71 case -ETIME: ··· 135 132 static int afs_do_probe_fileserver(struct afs_net *net, 136 133 struct afs_server *server, 137 134 struct key *key, 138 - unsigned int server_index) 135 + unsigned int server_index, 136 + struct afs_error *_e) 139 137 { 140 138 struct afs_addr_cursor ac = { 141 139 .index = 0, 142 140 }; 143 - int ret; 141 + bool in_progress = false; 142 + int err; 144 143 145 144 _enter("%pU", &server->uuid); 146 145 ··· 156 151 server->probe.rtt = UINT_MAX; 157 152 158 153 for (ac.index = 0; ac.index < ac.alist->nr_addrs; ac.index++) { 159 - ret = afs_fs_get_capabilities(net, server, &ac, key, server_index, 154 + err = afs_fs_get_capabilities(net, server, &ac, key, server_index, 160 155 true); 161 - if (ret != -EINPROGRESS) { 162 - afs_fs_probe_done(server); 163 - return ret; 164 - } 156 + if (err == -EINPROGRESS) 157 + in_progress = true; 158 + else 159 + afs_prioritise_error(_e, err, ac.abort_code); 165 160 } 166 161 167 - return 0; 162 + if (!in_progress) 163 + afs_fs_probe_done(server); 164 + return in_progress; 168 165 } 169 166 170 167 /* ··· 176 169 struct afs_server_list *list) 177 170 { 178 171 struct afs_server *server; 179 - int i, ret; 172 + struct afs_error e; 173 + bool in_progress = false; 174 + int i; 180 175 176 + e.error = 0; 177 + e.responded = false; 181 178 for (i = 0; i < list->nr_servers; i++) { 182 179 server = list->servers[i].server; 183 180 if (test_bit(AFS_SERVER_FL_PROBED, &server->flags)) 184 181 continue; 185 182 186 - if (!test_and_set_bit_lock(AFS_SERVER_FL_PROBING, &server->flags)) { 187 - ret = afs_do_probe_fileserver(net, server, key, i); 188 - if (ret) 189 - return ret; 190 - } 183 + if 
(!test_and_set_bit_lock(AFS_SERVER_FL_PROBING, &server->flags) && 184 + afs_do_probe_fileserver(net, server, key, i, &e)) 185 + in_progress = true; 191 186 } 192 187 193 - return 0; 188 + return in_progress ? 0 : e.error; 194 189 } 195 190 196 191 /*
+12 -6
fs/afs/inode.c
··· 382 382 int afs_validate(struct afs_vnode *vnode, struct key *key) 383 383 { 384 384 time64_t now = ktime_get_real_seconds(); 385 - bool valid = false; 385 + bool valid; 386 386 int ret; 387 387 388 388 _enter("{v={%llx:%llu} fl=%lx},%x", ··· 402 402 vnode->cb_v_break = vnode->volume->cb_v_break; 403 403 valid = false; 404 404 } else if (vnode->status.type == AFS_FTYPE_DIR && 405 - test_bit(AFS_VNODE_DIR_VALID, &vnode->flags) && 406 - vnode->cb_expires_at - 10 > now) { 407 - valid = true; 408 - } else if (!test_bit(AFS_VNODE_ZAP_DATA, &vnode->flags) && 409 - vnode->cb_expires_at - 10 > now) { 405 + (!test_bit(AFS_VNODE_DIR_VALID, &vnode->flags) || 406 + vnode->cb_expires_at - 10 <= now)) { 407 + valid = false; 408 + } else if (test_bit(AFS_VNODE_ZAP_DATA, &vnode->flags) || 409 + vnode->cb_expires_at - 10 <= now) { 410 + valid = false; 411 + } else { 410 412 valid = true; 411 413 } 412 414 } else if (test_bit(AFS_VNODE_DELETED, &vnode->flags)) { 413 415 valid = true; 416 + } else { 417 + vnode->cb_s_break = vnode->cb_interest->server->cb_s_break; 418 + vnode->cb_v_break = vnode->volume->cb_v_break; 419 + valid = false; 414 420 } 415 421 416 422 read_sequnlock_excl(&vnode->cb_lock);
+9
fs/afs/internal.h
··· 696 696 }; 697 697 698 698 /* 699 + * Error prioritisation and accumulation. 700 + */ 701 + struct afs_error { 702 + short error; /* Accumulated error */ 703 + bool responded; /* T if server responded */ 704 + }; 705 + 706 + /* 699 707 * Cursor for iterating over a server's address list. 700 708 */ 701 709 struct afs_addr_cursor { ··· 1023 1015 * misc.c 1024 1016 */ 1025 1017 extern int afs_abort_to_error(u32); 1018 + extern void afs_prioritise_error(struct afs_error *, int, u32); 1026 1019 1027 1020 /* 1028 1021 * mntpt.c
+52
fs/afs/misc.c
··· 118 118 default: return -EREMOTEIO; 119 119 } 120 120 } 121 + 122 + /* 123 + * Select the error to report from a set of errors. 124 + */ 125 + void afs_prioritise_error(struct afs_error *e, int error, u32 abort_code) 126 + { 127 + switch (error) { 128 + case 0: 129 + return; 130 + default: 131 + if (e->error == -ETIMEDOUT || 132 + e->error == -ETIME) 133 + return; 134 + case -ETIMEDOUT: 135 + case -ETIME: 136 + if (e->error == -ENOMEM || 137 + e->error == -ENONET) 138 + return; 139 + case -ENOMEM: 140 + case -ENONET: 141 + if (e->error == -ERFKILL) 142 + return; 143 + case -ERFKILL: 144 + if (e->error == -EADDRNOTAVAIL) 145 + return; 146 + case -EADDRNOTAVAIL: 147 + if (e->error == -ENETUNREACH) 148 + return; 149 + case -ENETUNREACH: 150 + if (e->error == -EHOSTUNREACH) 151 + return; 152 + case -EHOSTUNREACH: 153 + if (e->error == -EHOSTDOWN) 154 + return; 155 + case -EHOSTDOWN: 156 + if (e->error == -ECONNREFUSED) 157 + return; 158 + case -ECONNREFUSED: 159 + if (e->error == -ECONNRESET) 160 + return; 161 + case -ECONNRESET: /* Responded, but call expired. */ 162 + if (e->responded) 163 + return; 164 + e->error = error; 165 + return; 166 + 167 + case -ECONNABORTED: 168 + e->responded = true; 169 + e->error = afs_abort_to_error(abort_code); 170 + return; 171 + } 172 + }
+13 -40
fs/afs/rotate.c
··· 136 136 struct afs_addr_list *alist; 137 137 struct afs_server *server; 138 138 struct afs_vnode *vnode = fc->vnode; 139 - u32 rtt, abort_code; 139 + struct afs_error e; 140 + u32 rtt; 140 141 int error = fc->ac.error, i; 141 142 142 143 _enter("%lx[%d],%lx[%d],%d,%d", ··· 307 306 if (fc->error != -EDESTADDRREQ) 308 307 goto iterate_address; 309 308 /* Fall through */ 309 + case -ERFKILL: 310 + case -EADDRNOTAVAIL: 310 311 case -ENETUNREACH: 311 312 case -EHOSTUNREACH: 313 + case -EHOSTDOWN: 312 314 case -ECONNREFUSED: 313 315 _debug("no conn"); 314 316 fc->error = error; ··· 450 446 if (fc->flags & AFS_FS_CURSOR_VBUSY) 451 447 goto restart_from_beginning; 452 448 453 - abort_code = 0; 454 - error = -EDESTADDRREQ; 449 + e.error = -EDESTADDRREQ; 450 + e.responded = false; 455 451 for (i = 0; i < fc->server_list->nr_servers; i++) { 456 452 struct afs_server *s = fc->server_list->servers[i].server; 457 - int probe_error = READ_ONCE(s->probe.error); 458 453 459 - switch (probe_error) { 460 - case 0: 461 - continue; 462 - default: 463 - if (error == -ETIMEDOUT || 464 - error == -ETIME) 465 - continue; 466 - case -ETIMEDOUT: 467 - case -ETIME: 468 - if (error == -ENOMEM || 469 - error == -ENONET) 470 - continue; 471 - case -ENOMEM: 472 - case -ENONET: 473 - if (error == -ENETUNREACH) 474 - continue; 475 - case -ENETUNREACH: 476 - if (error == -EHOSTUNREACH) 477 - continue; 478 - case -EHOSTUNREACH: 479 - if (error == -ECONNREFUSED) 480 - continue; 481 - case -ECONNREFUSED: 482 - if (error == -ECONNRESET) 483 - continue; 484 - case -ECONNRESET: /* Responded, but call expired. 
*/ 485 - if (error == -ECONNABORTED) 486 - continue; 487 - case -ECONNABORTED: 488 - abort_code = s->probe.abort_code; 489 - error = probe_error; 490 - continue; 491 - } 454 + afs_prioritise_error(&e, READ_ONCE(s->probe.error), 455 + s->probe.abort_code); 492 456 } 493 - 494 - if (error == -ECONNABORTED) 495 - error = afs_abort_to_error(abort_code); 496 457 497 458 failed_set_error: 498 459 fc->error = error; ··· 522 553 _leave(" = f [abort]"); 523 554 return false; 524 555 556 + case -ERFKILL: 557 + case -EADDRNOTAVAIL: 525 558 case -ENETUNREACH: 526 559 case -EHOSTUNREACH: 560 + case -EHOSTDOWN: 527 561 case -ECONNREFUSED: 528 562 case -ETIMEDOUT: 529 563 case -ETIME: ··· 605 633 struct afs_net *net = afs_v2net(fc->vnode); 606 634 607 635 if (fc->error == -EDESTADDRREQ || 636 + fc->error == -EADDRNOTAVAIL || 608 637 fc->error == -ENETUNREACH || 609 638 fc->error == -EHOSTUNREACH) 610 639 afs_dump_edestaddrreq(fc);
+10 -1
fs/afs/rxrpc.c
··· 576 576 {
577 577 signed long rtt2, timeout;
578 578 long ret;
579 + bool stalled = false;
579 580 u64 rtt;
580 581 u32 life, last_life;
581 582 
··· 610 609 
611 610 life = rxrpc_kernel_check_life(call->net->socket, call->rxcall);
612 611 if (timeout == 0 &&
613 - life == last_life && signal_pending(current))
612 + life == last_life && signal_pending(current)) {
613 + if (stalled)
614 614 break;
615 + __set_current_state(TASK_RUNNING);
616 + rxrpc_kernel_probe_life(call->net->socket, call->rxcall);
617 + timeout = rtt2;
618 + stalled = true;
619 + continue;
620 + }
615 621 
616 622 if (life != last_life) {
617 623 timeout = rtt2;
618 624 last_life = life;
625 + stalled = false;
619 626 }
620 627 
621 628 timeout = schedule_timeout(timeout);
+27 -18
fs/afs/vl_probe.c
··· 61 61 afs_io_error(call, afs_io_error_vl_probe_fail);
62 62 goto out;
63 63 case -ECONNRESET: /* Responded, but call expired. */
64 + case -ERFKILL:
65 + case -EADDRNOTAVAIL:
64 66 case -ENETUNREACH:
65 67 case -EHOSTUNREACH:
68 + case -EHOSTDOWN:
66 69 case -ECONNREFUSED:
67 70 case -ETIMEDOUT:
68 71 case -ETIME:
··· 132 129 * Probe all of a vlserver's addresses to find out the best route and to
133 130 * query its capabilities.
134 131 */
135 - static int afs_do_probe_vlserver(struct afs_net *net,
136 - struct afs_vlserver *server,
137 - struct key *key,
138 - unsigned int server_index)
132 + static bool afs_do_probe_vlserver(struct afs_net *net,
133 + struct afs_vlserver *server,
134 + struct key *key,
135 + unsigned int server_index,
136 + struct afs_error *_e)
139 137 {
140 138 struct afs_addr_cursor ac = {
141 139 .index = 0,
142 140 };
143 - int ret;
141 + bool in_progress = false;
142 + int err;
144 143 
145 144 _enter("%s", server->name);
146 145 
··· 156 151 server->probe.rtt = UINT_MAX;
157 152 
158 153 for (ac.index = 0; ac.index < ac.alist->nr_addrs; ac.index++) {
159 - ret = afs_vl_get_capabilities(net, &ac, key, server,
154 + err = afs_vl_get_capabilities(net, &ac, key, server,
160 155 server_index, true);
161 - if (ret != -EINPROGRESS) {
162 - afs_vl_probe_done(server);
163 - return ret;
164 - }
156 + if (err == -EINPROGRESS)
157 + in_progress = true;
158 + else
159 + afs_prioritise_error(_e, err, ac.abort_code);
165 160 }
166 161 
167 - return 0;
162 + if (!in_progress)
163 + afs_vl_probe_done(server);
164 + return in_progress;
168 165 }
169 166 
170 167 /*
··· 176 169 struct afs_vlserver_list *vllist)
177 170 {
178 171 struct afs_vlserver *server;
179 - int i, ret;
172 + struct afs_error e;
173 + bool in_progress = false;
174 + int i;
180 175 
176 + e.error = 0;
177 + e.responded = false;
181 178 for (i = 0; i < vllist->nr_servers; i++) {
182 179 server = vllist->servers[i].server;
183 180 if (test_bit(AFS_VLSERVER_FL_PROBED, &server->flags))
184 181 continue;
185 182 
186 - if (!test_and_set_bit_lock(AFS_VLSERVER_FL_PROBING, &server->flags)) {
187 - ret = afs_do_probe_vlserver(net, server, key, i);
188 - if (ret)
189 - return ret;
190 - }
183 + if (!test_and_set_bit_lock(AFS_VLSERVER_FL_PROBING, &server->flags) &&
184 + afs_do_probe_vlserver(net, server, key, i, &e))
185 + in_progress = true;
191 186 }
192 187 
193 - return 0;
188 + return in_progress ? 0 : e.error;
194 189 }
195 190 
196 191 /*
+10 -40
fs/afs/vl_rotate.c
··· 71 71 {
72 72 struct afs_addr_list *alist;
73 73 struct afs_vlserver *vlserver;
74 + struct afs_error e;
74 75 u32 rtt;
75 - int error = vc->ac.error, abort_code, i;
76 + int error = vc->ac.error, i;
76 77 
77 78 _enter("%lx[%d],%lx[%d],%d,%d",
78 79 vc->untried, vc->index,
··· 120 119 goto failed;
121 120 }
122 121 
122 + case -ERFKILL:
123 + case -EADDRNOTAVAIL:
123 124 case -ENETUNREACH:
124 125 case -EHOSTUNREACH:
126 + case -EHOSTDOWN:
125 127 case -ECONNREFUSED:
126 128 case -ETIMEDOUT:
127 129 case -ETIME:
··· 239 235 if (vc->flags & AFS_VL_CURSOR_RETRY)
240 236 goto restart_from_beginning;
241 237 
242 - abort_code = 0;
243 - error = -EDESTADDRREQ;
238 + e.error = -EDESTADDRREQ;
239 + e.responded = false;
244 240 for (i = 0; i < vc->server_list->nr_servers; i++) {
245 241 struct afs_vlserver *s = vc->server_list->servers[i].server;
246 - int probe_error = READ_ONCE(s->probe.error);
247 242 
248 - switch (probe_error) {
249 - case 0:
250 - continue;
251 - default:
252 - if (error == -ETIMEDOUT ||
253 - error == -ETIME)
254 - continue;
255 - case -ETIMEDOUT:
256 - case -ETIME:
257 - if (error == -ENOMEM ||
258 - error == -ENONET)
259 - continue;
260 - case -ENOMEM:
261 - case -ENONET:
262 - if (error == -ENETUNREACH)
263 - continue;
264 - case -ENETUNREACH:
265 - if (error == -EHOSTUNREACH)
266 - continue;
267 - case -EHOSTUNREACH:
268 - if (error == -ECONNREFUSED)
269 - continue;
270 - case -ECONNREFUSED:
271 - if (error == -ECONNRESET)
272 - continue;
273 - case -ECONNRESET: /* Responded, but call expired. */
274 - if (error == -ECONNABORTED)
275 - continue;
276 - case -ECONNABORTED:
277 - abort_code = s->probe.abort_code;
278 - error = probe_error;
279 - continue;
280 - }
243 + afs_prioritise_error(&e, READ_ONCE(s->probe.error),
244 + s->probe.abort_code);
281 245 }
282 - 
283 - if (error == -ECONNABORTED)
284 - error = afs_abort_to_error(abort_code);
285 246 
286 247 failed_set_error:
287 248 vc->error = error;
··· 310 341 struct afs_net *net = vc->cell->net;
311 342 
312 343 if (vc->error == -EDESTADDRREQ ||
344 + vc->error == -EADDRNOTAVAIL ||
313 345 vc->error == -ENETUNREACH ||
314 346 vc->error == -EHOSTUNREACH)
315 347 afs_vl_dump_edestaddrreq(vc);
+1
fs/aio.c
··· 1436 1436 ret = ioprio_check_cap(iocb->aio_reqprio);
1437 1437 if (ret) {
1438 1438 pr_debug("aio ioprio check cap error: %d\n", ret);
1439 + fput(req->ki_filp);
1439 1440 return ret;
1440 1441 }
1441 1442 
+3
fs/btrfs/ctree.h
··· 3163 3163 int btrfs_drop_inode(struct inode *inode);
3164 3164 int __init btrfs_init_cachep(void);
3165 3165 void __cold btrfs_destroy_cachep(void);
3166 + struct inode *btrfs_iget_path(struct super_block *s, struct btrfs_key *location,
3167 + struct btrfs_root *root, int *new,
3168 + struct btrfs_path *path);
3166 3169 struct inode *btrfs_iget(struct super_block *s, struct btrfs_key *location,
3167 3170 struct btrfs_root *root, int *was_new);
3168 3171 struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
+27 -47
fs/btrfs/disk-io.c
··· 477 477 int mirror_num = 0;
478 478 int failed_mirror = 0;
479 479 
480 - clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);
481 480 io_tree = &BTRFS_I(fs_info->btree_inode)->io_tree;
482 481 while (1) {
482 + clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);
483 483 ret = read_extent_buffer_pages(io_tree, eb, WAIT_COMPLETE,
484 484 mirror_num);
485 485 if (!ret) {
··· 492 492 else
493 493 break;
494 494 }
495 - 
496 - /*
497 - * This buffer's crc is fine, but its contents are corrupted, so
498 - * there is no reason to read the other copies, they won't be
499 - * any less wrong.
500 - */
501 - if (test_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags) ||
502 - ret == -EUCLEAN)
503 - break;
504 495 
505 496 num_copies = btrfs_num_copies(fs_info,
506 497 eb->start, eb->len);
··· 1655 1664 struct btrfs_root *root = arg;
1656 1665 struct btrfs_fs_info *fs_info = root->fs_info;
1657 1666 int again;
1658 - struct btrfs_trans_handle *trans;
1659 1667 
1660 - do {
1668 + while (1) {
1661 1669 again = 0;
1662 1670 
1663 1671 /* Make the cleaner go to sleep early. */
··· 1705 1715 */
1706 1716 btrfs_delete_unused_bgs(fs_info);
1707 1717 sleep:
1718 + if (kthread_should_park())
1719 + kthread_parkme();
1720 + if (kthread_should_stop())
1721 + return 0;
1708 1722 if (!again) {
1709 1723 set_current_state(TASK_INTERRUPTIBLE);
1710 - if (!kthread_should_stop())
1711 - schedule();
1724 + schedule();
1712 1725 __set_current_state(TASK_RUNNING);
1713 1726 }
1714 - } while (!kthread_should_stop());
1715 - 
1716 - /*
1717 - * Transaction kthread is stopped before us and wakes us up.
1718 - * However we might have started a new transaction and COWed some
1719 - * tree blocks when deleting unused block groups for example. So
1720 - * make sure we commit the transaction we started to have a clean
1721 - * shutdown when evicting the btree inode - if it has dirty pages
1722 - * when we do the final iput() on it, eviction will trigger a
1723 - * writeback for it which will fail with null pointer dereferences
1724 - * since work queues and other resources were already released and
1725 - * destroyed by the time the iput/eviction/writeback is made.
1726 - */
1727 - trans = btrfs_attach_transaction(root);
1728 - if (IS_ERR(trans)) {
1729 - if (PTR_ERR(trans) != -ENOENT)
1730 - btrfs_err(fs_info,
1731 - "cleaner transaction attach returned %ld",
1732 - PTR_ERR(trans));
1733 - } else {
1734 - int ret;
1735 - 
1736 - ret = btrfs_commit_transaction(trans);
1737 - if (ret)
1738 - btrfs_err(fs_info,
1739 - "cleaner open transaction commit returned %d",
1740 - ret);
1741 1727 }
1742 - 
1743 - return 0;
1744 1728 }
1745 1729 
1746 1730 static int transaction_kthread(void *arg)
··· 3895 3931 int ret;
3896 3932 
3897 3933 set_bit(BTRFS_FS_CLOSING_START, &fs_info->flags);
3934 + /*
3935 + * We don't want the cleaner to start new transactions, add more delayed
3936 + * iputs, etc. while we're closing. We can't use kthread_stop() yet
3937 + * because that frees the task_struct, and the transaction kthread might
3938 + * still try to wake up the cleaner.
3939 + */
3940 + kthread_park(fs_info->cleaner_kthread);
3898 3941 
3899 3942 /* wait for the qgroup rescan worker to stop */
3900 3943 btrfs_qgroup_wait_for_completion(fs_info, false);
··· 3929 3958 
3930 3959 if (!sb_rdonly(fs_info->sb)) {
3931 3960 /*
3932 - * If the cleaner thread is stopped and there are
3933 - * block groups queued for removal, the deletion will be
3934 - * skipped when we quit the cleaner thread.
3961 + * The cleaner kthread is stopped, so do one final pass over
3962 + * unused block groups.
3935 3963 */
3936 3964 btrfs_delete_unused_bgs(fs_info);
3937 3965 
··· 4329 4359 unpin = pinned_extents;
4330 4360 again:
4331 4361 while (1) {
4362 + /*
4363 + * The btrfs_finish_extent_commit() may get the same range as
4364 + * ours between find_first_extent_bit and clear_extent_dirty.
4365 + * Hence, hold the unused_bg_unpin_mutex to avoid double unpin
4366 + * the same extent range.
4367 + */
4368 + mutex_lock(&fs_info->unused_bg_unpin_mutex);
4332 4369 ret = find_first_extent_bit(unpin, 0, &start, &end,
4333 4370 EXTENT_DIRTY, NULL);
4334 - if (ret)
4371 + if (ret) {
4372 + mutex_unlock(&fs_info->unused_bg_unpin_mutex);
4335 4373 break;
4374 + }
4336 4375 
4337 4376 clear_extent_dirty(unpin, start, end);
4338 4377 btrfs_error_unpin_extent_range(fs_info, start, end);
4378 + mutex_unlock(&fs_info->unused_bg_unpin_mutex);
4339 4379 cond_resched();
4340 4380 }
4341 4381 
+24
fs/btrfs/file.c
··· 2089 2089 atomic_inc(&root->log_batch);
2090 2090 
2091 2091 /*
2092 + * Before we acquired the inode's lock, someone may have dirtied more
2093 + * pages in the target range. We need to make sure that writeback for
2094 + * any such pages does not start while we are logging the inode, because
2095 + * if it does, any of the following might happen when we are not doing a
2096 + * full inode sync:
2097 + *
2098 + * 1) We log an extent after its writeback finishes but before its
2099 + * checksums are added to the csum tree, leading to -EIO errors
2100 + * when attempting to read the extent after a log replay.
2101 + *
2102 + * 2) We can end up logging an extent before its writeback finishes.
2103 + * Therefore after the log replay we will have a file extent item
2104 + * pointing to an unwritten extent (and no data checksums as well).
2105 + *
2106 + * So trigger writeback for any eventual new dirty pages and then we
2107 + * wait for all ordered extents to complete below.
2108 + */
2109 + ret = start_ordered_ops(inode, start, end);
2110 + if (ret) {
2111 + inode_unlock(inode);
2112 + goto out;
2113 + }
2114 + 
2115 + /*
2092 2116 * We have to do this here to avoid the priority inversion of waiting on
2093 2117 * IO of a lower priority task while holding a transaciton open.
2094 2118 */
+21 -1
fs/btrfs/free-space-cache.c
··· 75 75 * sure NOFS is set to keep us from deadlocking.
76 76 */
77 77 nofs_flag = memalloc_nofs_save();
78 - inode = btrfs_iget(fs_info->sb, &location, root, NULL);
78 + inode = btrfs_iget_path(fs_info->sb, &location, root, NULL, path);
79 + btrfs_release_path(path);
79 80 memalloc_nofs_restore(nofs_flag);
80 81 if (IS_ERR(inode))
81 82 return inode;
··· 839 838 path->search_commit_root = 1;
840 839 path->skip_locking = 1;
841 840 
842 + /*
843 + * We must pass a path with search_commit_root set to btrfs_iget in
844 + * order to avoid a deadlock when allocating extents for the tree root.
845 + *
846 + * When we are COWing an extent buffer from the tree root, when looking
847 + * for a free extent, at extent-tree.c:find_free_extent(), we can find
848 + * block group without its free space cache loaded. When we find one
849 + * we must load its space cache which requires reading its free space
850 + * cache's inode item from the root tree. If this inode item is located
851 + * in the same leaf that we started COWing before, then we end up in
852 + * deadlock on the extent buffer (trying to read lock it when we
853 + * previously write locked it).
854 + *
855 + * It's safe to read the inode item using the commit root because
856 + * block groups, once loaded, stay in memory forever (until they are
857 + * removed) as well as their space caches once loaded. New block groups
858 + * once created get their ->cached field set to BTRFS_CACHE_FINISHED so
859 + * we will never try to read their inode item while the fs is mounted.
860 + */
842 861 inode = lookup_free_space_inode(fs_info, block_group, path);
843 862 if (IS_ERR(inode)) {
844 863 btrfs_free_path(path);
+24 -13
fs/btrfs/inode.c
··· 1531 1531 }
1532 1532 btrfs_release_path(path);
1533 1533 
1534 - if (cur_offset <= end && cow_start == (u64)-1) {
1534 + if (cur_offset <= end && cow_start == (u64)-1)
1535 1535 cow_start = cur_offset;
1536 - cur_offset = end;
1537 - }
1538 1536 
1539 1537 if (cow_start != (u64)-1) {
1538 + cur_offset = end;
1540 1539 ret = cow_file_range(inode, locked_page, cow_start, end, end,
1541 1540 page_started, nr_written, 1, NULL);
1542 1541 if (ret)
··· 3569 3570 /*
3570 3571 * read an inode from the btree into the in-memory inode
3571 3572 */
3572 - static int btrfs_read_locked_inode(struct inode *inode)
3573 + static int btrfs_read_locked_inode(struct inode *inode,
3574 + struct btrfs_path *in_path)
3573 3575 {
3574 3576 struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
3575 - struct btrfs_path *path;
3577 + struct btrfs_path *path = in_path;
3576 3578 struct extent_buffer *leaf;
3577 3579 struct btrfs_inode_item *inode_item;
3578 3580 struct btrfs_root *root = BTRFS_I(inode)->root;
··· 3589 3589 if (!ret)
3590 3590 filled = true;
3591 3591 
3592 - path = btrfs_alloc_path();
3593 - if (!path)
3594 - return -ENOMEM;
3592 + if (!path) {
3593 + path = btrfs_alloc_path();
3594 + if (!path)
3595 + return -ENOMEM;
3596 + }
3595 3597 
3596 3598 memcpy(&location, &BTRFS_I(inode)->location, sizeof(location));
3597 3599 
3598 3600 ret = btrfs_lookup_inode(NULL, root, path, &location, 0);
3599 3601 if (ret) {
3600 - btrfs_free_path(path);
3602 + if (path != in_path)
3603 + btrfs_free_path(path);
3601 3604 return ret;
3602 3605 }
3603 3606 
··· 3725 3722 btrfs_ino(BTRFS_I(inode)),
3726 3723 root->root_key.objectid, ret);
3727 3724 }
3728 - btrfs_free_path(path);
3725 + if (path != in_path)
3726 + btrfs_free_path(path);
3729 3727 
3730 3728 if (!maybe_acls)
3731 3729 cache_no_acl(inode);
··· 5648 5644 /* Get an inode object given its location and corresponding root.
5649 5645 * Returns in *is_new if the inode was read from disk
5650 5646 */
5651 - struct inode *btrfs_iget(struct super_block *s, struct btrfs_key *location,
5652 - struct btrfs_root *root, int *new)
5647 + struct inode *btrfs_iget_path(struct super_block *s, struct btrfs_key *location,
5648 + struct btrfs_root *root, int *new,
5649 + struct btrfs_path *path)
5653 5650 {
5654 5651 struct inode *inode;
5655 5652 
··· 5661 5656 if (inode->i_state & I_NEW) {
5662 5657 int ret;
5663 5658 
5664 - ret = btrfs_read_locked_inode(inode);
5659 + ret = btrfs_read_locked_inode(inode, path);
5665 5660 if (!ret) {
5666 5661 inode_tree_add(inode);
5667 5662 unlock_new_inode(inode);
··· 5681 5676 }
5682 5677 
5683 5678 return inode;
5679 + }
5680 + 
5681 + struct inode *btrfs_iget(struct super_block *s, struct btrfs_key *location,
5682 + struct btrfs_root *root, int *new)
5683 + {
5684 + return btrfs_iget_path(s, location, root, new, NULL);
5684 5685 }
5685 5686 
5686 5687 static struct inode *new_simple_dir(struct super_block *s,
+12 -2
fs/btrfs/ioctl.c
··· 3488 3488 const u64 sz = BTRFS_I(src)->root->fs_info->sectorsize;
3489 3489 
3490 3490 len = round_down(i_size_read(src), sz) - loff;
3491 + if (len == 0)
3492 + return 0;
3491 3493 olen = len;
3492 3494 }
3493 3495 }
··· 4259 4257 goto out_unlock;
4260 4258 if (len == 0)
4261 4259 olen = len = src->i_size - off;
4262 - /* if we extend to eof, continue to block boundary */
4263 - if (off + len == src->i_size)
4260 + /*
4261 + * If we extend to eof, continue to block boundary if and only if the
4262 + * destination end offset matches the destination file's size, otherwise
4263 + * we would be corrupting data by placing the eof block into the middle
4264 + * of a file.
4265 + */
4266 + if (off + len == src->i_size) {
4267 + if (!IS_ALIGNED(len, bs) && destoff + len < inode->i_size)
4268 + goto out_unlock;
4264 4269 len = ALIGN(src->i_size, bs) - off;
4270 + }
4265 4271 
4266 4272 if (len == 0) {
4267 4273 ret = 0;
+2 -1
fs/btrfs/qgroup.c
··· 2659 2659 int i;
2660 2660 u64 *i_qgroups;
2661 2661 struct btrfs_fs_info *fs_info = trans->fs_info;
2662 - struct btrfs_root *quota_root = fs_info->quota_root;
2662 + struct btrfs_root *quota_root;
2663 2663 struct btrfs_qgroup *srcgroup;
2664 2664 struct btrfs_qgroup *dstgroup;
2665 2665 u32 level_size = 0;
··· 2669 2669 if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags))
2670 2670 goto out;
2671 2671 
2672 + quota_root = fs_info->quota_root;
2672 2673 if (!quota_root) {
2673 2674 ret = -EINVAL;
2674 2675 goto out;
+1
fs/btrfs/relocation.c
··· 3959 3959 restart:
3960 3960 if (update_backref_cache(trans, &rc->backref_cache)) {
3961 3961 btrfs_end_transaction(trans);
3962 + trans = NULL;
3962 3963 continue;
3963 3964 }
3964 3965 
+8 -3
fs/btrfs/send.c
··· 3340 3340 kfree(m);
3341 3341 }
3342 3342 
3343 - static void tail_append_pending_moves(struct pending_dir_move *moves,
3343 + static void tail_append_pending_moves(struct send_ctx *sctx,
3344 + struct pending_dir_move *moves,
3344 3345 struct list_head *stack)
3345 3346 {
3346 3347 if (list_empty(&moves->list)) {
··· 3351 3350 list_splice_init(&moves->list, &list);
3352 3351 list_add_tail(&moves->list, stack);
3353 3352 list_splice_tail(&list, stack);
3353 + }
3354 + if (!RB_EMPTY_NODE(&moves->node)) {
3355 + rb_erase(&moves->node, &sctx->pending_dir_moves);
3356 + RB_CLEAR_NODE(&moves->node);
3354 3357 }
3355 3358 }
3356 3359 
··· 3370 3365 return 0;
3371 3366 
3372 3367 INIT_LIST_HEAD(&stack);
3373 - tail_append_pending_moves(pm, &stack);
3368 + tail_append_pending_moves(sctx, pm, &stack);
3374 3369 
3375 3370 while (!list_empty(&stack)) {
3376 3371 pm = list_first_entry(&stack, struct pending_dir_move, list);
··· 3381 3376 goto out;
3382 3377 pm = get_pending_dir_moves(sctx, parent_ino);
3383 3378 if (pm)
3384 - tail_append_pending_moves(pm, &stack);
3379 + tail_append_pending_moves(sctx, pm, &stack);
3385 3380 }
3386 3381 return 0;
3387 3382 
+4 -3
fs/btrfs/super.c
··· 1916 1916 }
1917 1917 
1918 1918 /* Used to sort the devices by max_avail(descending sort) */
1919 - static int btrfs_cmp_device_free_bytes(const void *dev_info1,
1919 + static inline int btrfs_cmp_device_free_bytes(const void *dev_info1,
1920 1920 const void *dev_info2)
1921 1921 {
1922 1922 if (((struct btrfs_device_info *)dev_info1)->max_avail >
··· 1945 1945 * The helper to calc the free space on the devices that can be used to store
1946 1946 * file data.
1947 1947 */
1948 - static int btrfs_calc_avail_data_space(struct btrfs_fs_info *fs_info,
1949 - u64 *free_bytes)
1948 + static inline int btrfs_calc_avail_data_space(struct btrfs_fs_info *fs_info,
1949 + u64 *free_bytes)
1950 1950 {
1951 1951 struct btrfs_device_info *devices_info;
1952 1952 struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;
··· 2237 2237 vol = memdup_user((void __user *)arg, sizeof(*vol));
2238 2238 if (IS_ERR(vol))
2239 2239 return PTR_ERR(vol);
2240 + vol->name[BTRFS_PATH_NAME_MAX] = '\0';
2240 2241 
2241 2242 switch (cmd) {
2242 2243 case BTRFS_IOC_SCAN_DEV:
+1 -1
fs/btrfs/tree-checker.c
··· 440 440 type != (BTRFS_BLOCK_GROUP_METADATA |
441 441 BTRFS_BLOCK_GROUP_DATA)) {
442 442 block_group_err(fs_info, leaf, slot,
443 - "invalid type, have 0x%llx (%lu bits set) expect either 0x%llx, 0x%llx, 0x%llu or 0x%llx",
443 + "invalid type, have 0x%llx (%lu bits set) expect either 0x%llx, 0x%llx, 0x%llx or 0x%llx",
444 444 type, hweight64(type),
445 445 BTRFS_BLOCK_GROUP_DATA, BTRFS_BLOCK_GROUP_METADATA,
446 446 BTRFS_BLOCK_GROUP_SYSTEM,
+17
fs/btrfs/tree-log.c
··· 4396 4396 logged_end = end;
4397 4397 
4398 4398 list_for_each_entry_safe(em, n, &tree->modified_extents, list) {
4399 + /*
4400 + * Skip extents outside our logging range. It's important to do
4401 + * it for correctness because if we don't ignore them, we may
4402 + * log them before their ordered extent completes, and therefore
4403 + * we could log them without logging their respective checksums
4404 + * (the checksum items are added to the csum tree at the very
4405 + * end of btrfs_finish_ordered_io()). Also leave such extents
4406 + * outside of our range in the list, since we may have another
4407 + * ranged fsync in the near future that needs them. If an extent
4408 + * outside our range corresponds to a hole, log it to avoid
4409 + * leaving gaps between extents (fsck will complain when we are
4410 + * not using the NO_HOLES feature).
4411 + */
4412 + if ((em->start > end || em->start + em->len <= start) &&
4413 + em->block_start != EXTENT_MAP_HOLE)
4414 + continue;
4415 + 
4399 4416 list_del_init(&em->list);
4400 4417 /*
4401 4418 * Just an arbitrary number, this can be really CPU intensive
+5 -3
fs/cachefiles/namei.c
··· 244 244 
245 245 ASSERT(!test_bit(CACHEFILES_OBJECT_ACTIVE, &xobject->flags));
246 246 
247 - cache->cache.ops->put_object(&xobject->fscache, cachefiles_obj_put_wait_retry);
247 + cache->cache.ops->put_object(&xobject->fscache,
248 + (enum fscache_obj_ref_trace)cachefiles_obj_put_wait_retry);
248 249 goto try_again;
249 250 
250 251 requeue:
251 - cache->cache.ops->put_object(&xobject->fscache, cachefiles_obj_put_wait_timeo);
252 + cache->cache.ops->put_object(&xobject->fscache,
253 + (enum fscache_obj_ref_trace)cachefiles_obj_put_wait_timeo);
252 254 _leave(" = -ETIMEDOUT");
253 255 return -ETIMEDOUT;
254 256 }
··· 338 336 try_again:
339 337 /* first step is to make up a grave dentry in the graveyard */
340 338 sprintf(nbuffer, "%08x%08x",
341 - (uint32_t) get_seconds(),
339 + (uint32_t) ktime_get_real_seconds(),
342 340 (uint32_t) atomic_inc_return(&cache->gravecounter));
343 341 
344 342 /* do the multiway lock magic */
+6 -3
fs/cachefiles/rdwr.c
··· 535 535 netpage->index, cachefiles_gfp);
536 536 if (ret < 0) {
537 537 if (ret == -EEXIST) {
538 + put_page(backpage);
539 + backpage = NULL;
538 540 put_page(netpage);
541 + netpage = NULL;
539 542 fscache_retrieval_complete(op, 1);
540 543 continue;
541 544 }
··· 611 608 netpage->index, cachefiles_gfp);
612 609 if (ret < 0) {
613 610 if (ret == -EEXIST) {
611 + put_page(backpage);
612 + backpage = NULL;
614 613 put_page(netpage);
614 + netpage = NULL;
615 615 fscache_retrieval_complete(op, 1);
616 616 continue;
617 617 }
··· 968 962 __releases(&object->fscache.cookie->lock)
969 963 {
970 964 struct cachefiles_object *object;
971 - struct cachefiles_cache *cache;
972 965 
973 966 object = container_of(_object, struct cachefiles_object, fscache);
974 - cache = container_of(object->fscache.cache,
975 - struct cachefiles_cache, cache);
976 967 
977 968 _enter("%p,{%lu}", object, page->index);
+2 -1
fs/cachefiles/xattr.c
··· 135 135 struct dentry *dentry = object->dentry;
136 136 int ret;
137 137 
138 - ASSERT(dentry);
138 + if (!dentry)
139 + return -ESTALE;
139 140 
140 141 _enter("%p,#%d", object, auxdata->len);
141 142 
+9 -2
fs/ceph/file.c
··· 1931 1931 if (!prealloc_cf)
1932 1932 return -ENOMEM;
1933 1933 
1934 - /* Start by sync'ing the source file */
1934 + /* Start by sync'ing the source and destination files */
1935 1935 ret = file_write_and_wait_range(src_file, src_off, (src_off + len));
1936 - if (ret < 0)
1936 + if (ret < 0) {
1937 + dout("failed to write src file (%zd)\n", ret);
1937 1938 goto out;
1939 + }
1940 + ret = file_write_and_wait_range(dst_file, dst_off, (dst_off + len));
1941 + if (ret < 0) {
1942 + dout("failed to write dst file (%zd)\n", ret);
1943 + goto out;
1944 + }
1938 1945 
1939 1946 /*
1940 1947 * We need FILE_WR caps for dst_ci and FILE_RD for src_ci as other
+3 -9
fs/ceph/mds_client.c
··· 80 80 info->symlink = *p;
81 81 *p += info->symlink_len;
82 82 
83 - if (features & CEPH_FEATURE_DIRLAYOUTHASH)
84 - ceph_decode_copy_safe(p, end, &info->dir_layout,
85 - sizeof(info->dir_layout), bad);
86 - else
87 - memset(&info->dir_layout, 0, sizeof(info->dir_layout));
88 - 
83 + ceph_decode_copy_safe(p, end, &info->dir_layout,
84 + sizeof(info->dir_layout), bad);
89 85 ceph_decode_32_safe(p, end, info->xattr_len, bad);
90 86 ceph_decode_need(p, end, info->xattr_len, bad);
91 87 info->xattr_data = *p;
··· 3178 3182 recon_state.pagelist = pagelist;
3179 3183 if (session->s_con.peer_features & CEPH_FEATURE_MDSENC)
3180 3184 recon_state.msg_version = 3;
3181 - else if (session->s_con.peer_features & CEPH_FEATURE_FLOCK)
3182 - recon_state.msg_version = 2;
3183 3185 else
3184 - recon_state.msg_version = 1;
3186 + recon_state.msg_version = 2;
3185 3187 err = iterate_session_caps(session, encode_caps_cb, &recon_state);
3186 3188 if (err < 0)
3187 3189 goto fail;
+2 -1
fs/ceph/quota.c
··· 237 237 ceph_put_snap_realm(mdsc, realm);
238 238 realm = next;
239 239 }
240 - ceph_put_snap_realm(mdsc, realm);
240 + if (realm)
241 + ceph_put_snap_realm(mdsc, realm);
241 242 up_read(&mdsc->snap_rwsem);
242 243 
243 244 return exceeded;
+35 -25
fs/dax.c
··· 98 98 return xa_mk_value(flags | (pfn_t_to_pfn(pfn) << DAX_SHIFT));
99 99 }
100 100 
101 - static void *dax_make_page_entry(struct page *page)
102 - {
103 - pfn_t pfn = page_to_pfn_t(page);
104 - return dax_make_entry(pfn, PageHead(page) ? DAX_PMD : 0);
105 - }
106 - 
107 101 static bool dax_is_locked(void *entry)
108 102 {
109 103 return xa_to_value(entry) & DAX_LOCKED;
··· 110 116 return 0;
111 117 }
112 118 
113 - static int dax_is_pmd_entry(void *entry)
119 + static unsigned long dax_is_pmd_entry(void *entry)
114 120 {
115 121 return xa_to_value(entry) & DAX_PMD;
116 122 }
117 123 
118 - static int dax_is_pte_entry(void *entry)
124 + static bool dax_is_pte_entry(void *entry)
119 125 {
120 126 return !(xa_to_value(entry) & DAX_PMD);
121 127 }
··· 216 222 ewait.wait.func = wake_exceptional_entry_func;
217 223 
218 224 for (;;) {
219 - entry = xas_load(xas);
220 - if (!entry || xa_is_internal(entry) ||
221 - WARN_ON_ONCE(!xa_is_value(entry)) ||
225 + entry = xas_find_conflict(xas);
226 + if (!entry || WARN_ON_ONCE(!xa_is_value(entry)) ||
222 227 !dax_is_locked(entry))
223 228 return entry;
224 229 
··· 248 255 {
249 256 void *old;
250 257 
258 + BUG_ON(dax_is_locked(entry));
251 259 xas_reset(xas);
252 260 xas_lock_irq(xas);
253 261 old = xas_store(xas, entry);
··· 346 352 return NULL;
347 353 }
348 354 
355 + /*
356 + * dax_lock_mapping_entry - Lock the DAX entry corresponding to a page
357 + * @page: The page whose entry we want to lock
358 + *
359 + * Context: Process context.
360 + * Return: %true if the entry was locked or does not need to be locked.
361 + */
349 362 bool dax_lock_mapping_entry(struct page *page)
350 363 {
351 364 XA_STATE(xas, NULL, 0);
352 365 void *entry;
366 + bool locked;
353 367 
368 + /* Ensure page->mapping isn't freed while we look at it */
369 + rcu_read_lock();
354 370 for (;;) {
355 371 struct address_space *mapping = READ_ONCE(page->mapping);
356 372 
373 + locked = false;
357 374 if (!dax_mapping(mapping))
358 - return false;
375 + break;
359 376 
360 377 /*
361 378 * In the device-dax case there's no need to lock, a
··· 375 370 * otherwise we would not have a valid pfn_to_page()
376 371 * translation.
377 372 */
373 + locked = true;
378 374 if (S_ISCHR(mapping->host->i_mode))
379 - return true;
375 + break;
380 376 
381 377 xas.xa = &mapping->i_pages;
382 378 xas_lock_irq(&xas);
··· 388 382 xas_set(&xas, page->index);
389 383 entry = xas_load(&xas);
390 384 if (dax_is_locked(entry)) {
385 + rcu_read_unlock();
391 386 entry = get_unlocked_entry(&xas);
392 - /* Did the page move while we slept? */
393 - if (dax_to_pfn(entry) != page_to_pfn(page)) {
394 - xas_unlock_irq(&xas);
395 - continue;
396 - }
387 + xas_unlock_irq(&xas);
388 + put_unlocked_entry(&xas, entry);
389 + rcu_read_lock();
390 + continue;
397 391 }
398 392 dax_lock_entry(&xas, entry);
399 393 xas_unlock_irq(&xas);
400 - return true;
394 + break;
401 395 }
396 + rcu_read_unlock();
397 + return locked;
402 398 }
403 399 
404 400 void dax_unlock_mapping_entry(struct page *page)
405 401 {
406 402 struct address_space *mapping = page->mapping;
407 403 XA_STATE(xas, &mapping->i_pages, page->index);
404 + void *entry;
408 405 
409 406 if (S_ISCHR(mapping->host->i_mode))
410 407 return;
411 408 
412 - dax_unlock_entry(&xas, dax_make_page_entry(page));
409 + rcu_read_lock();
410 + entry = xas_load(&xas);
411 + rcu_read_unlock();
412 + entry = dax_make_entry(page_to_pfn_t(page), dax_is_pmd_entry(entry));
413 + dax_unlock_entry(&xas, entry);
413 414 }
414 415 
415 416 /*
··· 458 445 retry:
459 446 xas_lock_irq(xas);
460 447 entry = get_unlocked_entry(xas);
461 - if (xa_is_internal(entry))
462 - goto fallback;
463 448 
464 449 if (entry) {
465 - if (WARN_ON_ONCE(!xa_is_value(entry))) {
450 + if (!xa_is_value(entry)) {
466 451 xas_set_err(xas, EIO);
467 452 goto out_unlock;
468 453 }
··· 1639 1628 /* Did we race with someone splitting entry or so? */
1640 1629 if (!entry ||
1641 1630 (order == 0 && !dax_is_pte_entry(entry)) ||
1642 - (order == PMD_ORDER && (xa_is_internal(entry) ||
1643 - !dax_is_pmd_entry(entry)))) {
1631 + (order == PMD_ORDER && !dax_is_pmd_entry(entry))) {
1644 1632 put_unlocked_entry(&xas, entry);
1645 1633 xas_unlock_irq(&xas);
1646 1634 trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
+2 -2
fs/direct-io.c
··· 325 325 */
326 326 dio->iocb->ki_pos += transferred;
327 327 
328 - if (dio->op == REQ_OP_WRITE)
329 - ret = generic_write_sync(dio->iocb, transferred);
328 + if (ret > 0 && dio->op == REQ_OP_WRITE)
329 + ret = generic_write_sync(dio->iocb, ret);
330 330 dio->iocb->ki_complete(dio->iocb, ret, 0);
331 331 }
332 332 
+3 -2
fs/exec.c
··· 62 62 #include <linux/oom.h>
63 63 #include <linux/compat.h>
64 64 #include <linux/vmalloc.h>
65 + #include <linux/freezer.h>
65 66 
66 67 #include <linux/uaccess.h>
67 68 #include <asm/mmu_context.h>
··· 1084 1083 while (sig->notify_count) {
1085 1084 __set_current_state(TASK_KILLABLE);
1086 1085 spin_unlock_irq(lock);
1087 - schedule();
1086 + freezable_schedule();
1088 1087 if (unlikely(__fatal_signal_pending(tsk)))
1089 1088 goto killed;
1090 1089 spin_lock_irq(lock);
··· 1112 1111 __set_current_state(TASK_KILLABLE);
1113 1112 write_unlock_irq(&tasklist_lock);
1114 1113 cgroup_threadgroup_change_end(tsk);
1115 - schedule();
1114 + freezable_schedule();
1116 1115 if (unlikely(__fatal_signal_pending(tsk)))
1117 1116 goto killed;
1118 1117 }
+2 -1
fs/exportfs/expfs.c
··· 77 77 struct dentry *parent = dget_parent(dentry);
78 78 
79 79 dput(dentry);
80 - if (IS_ROOT(dentry)) {
80 + if (dentry == parent) {
81 81 dput(parent);
82 82 return false;
83 83 }
··· 147 147 tmp = lookup_one_len_unlocked(nbuf, parent, strlen(nbuf));
148 148 if (IS_ERR(tmp)) {
149 149 dprintk("%s: lookup failed: %d\n", __func__, PTR_ERR(tmp));
150 + err = PTR_ERR(tmp);
150 151 goto out_err;
151 152 }
152 153 if (tmp != dentry) {
+1
fs/ext2/super.c
··· 892 892 if (sb->s_magic != EXT2_SUPER_MAGIC) 893 893 goto cantfind_ext2; 894 894 895 + opts.s_mount_opt = 0; 895 896 /* Set defaults before we parse the mount options */ 896 897 def_mount_opts = le32_to_cpu(es->s_default_mount_opts); 897 898 if (def_mount_opts & EXT2_DEFM_DEBUG)
+1 -1
fs/ext2/xattr.c
··· 612 612 } 613 613 614 614 cleanup: 615 - brelse(bh); 616 615 if (!(bh && header == HDR(bh))) 617 616 kfree(header); 617 + brelse(bh); 618 618 up_write(&EXT2_I(inode)->xattr_sem); 619 619 620 620 return error;
+3 -2
fs/ext4/inode.c
··· 5835 5835 { 5836 5836 int err = 0; 5837 5837 5838 - if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb)))) 5838 + if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb)))) { 5839 + put_bh(iloc->bh); 5839 5840 return -EIO; 5840 - 5841 + } 5841 5842 if (IS_I_VERSION(inode)) 5842 5843 inode_inc_iversion(inode); 5843 5844
+4 -1
fs/ext4/namei.c
··· 126 126 if (!is_dx_block && type == INDEX) { 127 127 ext4_error_inode(inode, func, line, block, 128 128 "directory leaf block found instead of index block"); 129 + brelse(bh); 129 130 return ERR_PTR(-EFSCORRUPTED); 130 131 } 131 132 if (!ext4_has_metadata_csum(inode->i_sb) || ··· 2812 2811 list_del_init(&EXT4_I(inode)->i_orphan); 2813 2812 mutex_unlock(&sbi->s_orphan_lock); 2814 2813 } 2815 - } 2814 + } else 2815 + brelse(iloc.bh); 2816 + 2816 2817 jbd_debug(4, "superblock will point to %lu\n", inode->i_ino); 2817 2818 jbd_debug(4, "orphan inode %lu will point to %d\n", 2818 2819 inode->i_ino, NEXT_ORPHAN(inode));
+16 -12
fs/ext4/resize.c
··· 459 459 460 460 BUFFER_TRACE(bh, "get_write_access"); 461 461 err = ext4_journal_get_write_access(handle, bh); 462 - if (err) 462 + if (err) { 463 + brelse(bh); 463 464 return err; 465 + } 464 466 ext4_debug("mark block bitmap %#04llx (+%llu/%u)\n", 465 467 first_cluster, first_cluster - start, count2); 466 468 ext4_set_bits(bh->b_data, first_cluster - start, count2); 467 469 468 470 err = ext4_handle_dirty_metadata(handle, NULL, bh); 471 + brelse(bh); 469 472 if (unlikely(err)) 470 473 return err; 471 - brelse(bh); 472 474 } 473 475 474 476 return 0; ··· 607 605 bh = bclean(handle, sb, block); 608 606 if (IS_ERR(bh)) { 609 607 err = PTR_ERR(bh); 610 - bh = NULL; 611 608 goto out; 612 609 } 613 610 overhead = ext4_group_overhead_blocks(sb, group); ··· 619 618 ext4_mark_bitmap_end(EXT4_B2C(sbi, group_data[i].blocks_count), 620 619 sb->s_blocksize * 8, bh->b_data); 621 620 err = ext4_handle_dirty_metadata(handle, NULL, bh); 621 + brelse(bh); 622 622 if (err) 623 623 goto out; 624 - brelse(bh); 625 624 626 625 handle_ib: 627 626 if (bg_flags[i] & EXT4_BG_INODE_UNINIT) ··· 636 635 bh = bclean(handle, sb, block); 637 636 if (IS_ERR(bh)) { 638 637 err = PTR_ERR(bh); 639 - bh = NULL; 640 638 goto out; 641 639 } 642 640 643 641 ext4_mark_bitmap_end(EXT4_INODES_PER_GROUP(sb), 644 642 sb->s_blocksize * 8, bh->b_data); 645 643 err = ext4_handle_dirty_metadata(handle, NULL, bh); 644 + brelse(bh); 646 645 if (err) 647 646 goto out; 648 - brelse(bh); 649 647 } 650 - bh = NULL; 651 648 652 649 /* Mark group tables in block bitmap */ 653 650 for (j = 0; j < GROUP_TABLE_COUNT; j++) { ··· 684 685 } 685 686 686 687 out: 687 - brelse(bh); 688 688 err2 = ext4_journal_stop(handle); 689 689 if (err2 && !err) 690 690 err = err2; ··· 871 873 err = ext4_handle_dirty_metadata(handle, NULL, gdb_bh); 872 874 if (unlikely(err)) { 873 875 ext4_std_error(sb, err); 876 + iloc.bh = NULL; 874 877 goto exit_inode; 875 878 } 876 879 brelse(dind); ··· 923 924 sizeof(struct buffer_head *), 924 925 
GFP_NOFS); 925 926 if (!n_group_desc) { 927 + brelse(gdb_bh); 926 928 err = -ENOMEM; 927 929 ext4_warning(sb, "not enough memory for %lu groups", 928 930 gdb_num + 1); ··· 939 939 kvfree(o_group_desc); 940 940 BUFFER_TRACE(gdb_bh, "get_write_access"); 941 941 err = ext4_journal_get_write_access(handle, gdb_bh); 942 - if (unlikely(err)) 943 - brelse(gdb_bh); 944 942 return err; 945 943 } 946 944 ··· 1122 1124 backup_block, backup_block - 1123 1125 ext4_group_first_block_no(sb, group)); 1124 1126 BUFFER_TRACE(bh, "get_write_access"); 1125 - if ((err = ext4_journal_get_write_access(handle, bh))) 1127 + if ((err = ext4_journal_get_write_access(handle, bh))) { 1128 + brelse(bh); 1126 1129 break; 1130 + } 1127 1131 lock_buffer(bh); 1128 1132 memcpy(bh->b_data, data, size); 1129 1133 if (rest) ··· 2023 2023 2024 2024 err = ext4_alloc_flex_bg_array(sb, n_group + 1); 2025 2025 if (err) 2026 - return err; 2026 + goto out; 2027 2027 2028 2028 err = ext4_mb_alloc_groupinfo(sb, n_group + 1); 2029 2029 if (err) ··· 2059 2059 n_blocks_count_retry = 0; 2060 2060 free_flex_gd(flex_gd); 2061 2061 flex_gd = NULL; 2062 + if (resize_inode) { 2063 + iput(resize_inode); 2064 + resize_inode = NULL; 2065 + } 2062 2066 goto retry; 2063 2067 } 2064 2068
+9 -8
fs/ext4/super.c
··· 4075 4075 sbi->s_groups_count = blocks_count; 4076 4076 sbi->s_blockfile_groups = min_t(ext4_group_t, sbi->s_groups_count, 4077 4077 (EXT4_MAX_BLOCK_FILE_PHYS / EXT4_BLOCKS_PER_GROUP(sb))); 4078 + if (((u64)sbi->s_groups_count * sbi->s_inodes_per_group) != 4079 + le32_to_cpu(es->s_inodes_count)) { 4080 + ext4_msg(sb, KERN_ERR, "inodes count not valid: %u vs %llu", 4081 + le32_to_cpu(es->s_inodes_count), 4082 + ((u64)sbi->s_groups_count * sbi->s_inodes_per_group)); 4083 + ret = -EINVAL; 4084 + goto failed_mount; 4085 + } 4078 4086 db_count = (sbi->s_groups_count + EXT4_DESC_PER_BLOCK(sb) - 1) / 4079 4087 EXT4_DESC_PER_BLOCK(sb); 4080 4088 if (ext4_has_feature_meta_bg(sb)) { ··· 4100 4092 if (sbi->s_group_desc == NULL) { 4101 4093 ext4_msg(sb, KERN_ERR, "not enough memory"); 4102 4094 ret = -ENOMEM; 4103 - goto failed_mount; 4104 - } 4105 - if (((u64)sbi->s_groups_count * sbi->s_inodes_per_group) != 4106 - le32_to_cpu(es->s_inodes_count)) { 4107 - ext4_msg(sb, KERN_ERR, "inodes count not valid: %u vs %llu", 4108 - le32_to_cpu(es->s_inodes_count), 4109 - ((u64)sbi->s_groups_count * sbi->s_inodes_per_group)); 4110 - ret = -EINVAL; 4111 4095 goto failed_mount; 4112 4096 } 4113 4097 ··· 4510 4510 percpu_counter_destroy(&sbi->s_freeinodes_counter); 4511 4511 percpu_counter_destroy(&sbi->s_dirs_counter); 4512 4512 percpu_counter_destroy(&sbi->s_dirtyclusters_counter); 4513 + percpu_free_rwsem(&sbi->s_journal_flag_rwsem); 4513 4514 failed_mount5: 4514 4515 ext4_ext_release(sb); 4515 4516 ext4_release_system_zone(sb);
+19 -8
fs/ext4/xattr.c
··· 1031 1031 inode_lock(ea_inode); 1032 1032 1033 1033 ret = ext4_reserve_inode_write(handle, ea_inode, &iloc); 1034 - if (ret) { 1035 - iloc.bh = NULL; 1034 + if (ret) 1036 1035 goto out; 1037 - } 1038 1036 1039 1037 ref_count = ext4_xattr_inode_get_ref(ea_inode); 1040 1038 ref_count += ref_change; ··· 1078 1080 } 1079 1081 1080 1082 ret = ext4_mark_iloc_dirty(handle, ea_inode, &iloc); 1081 - iloc.bh = NULL; 1082 1083 if (ret) 1083 1084 ext4_warning_inode(ea_inode, 1084 1085 "ext4_mark_iloc_dirty() failed ret=%d", ret); 1085 1086 out: 1086 - brelse(iloc.bh); 1087 1087 inode_unlock(ea_inode); 1088 1088 return ret; 1089 1089 } ··· 1384 1388 bh = ext4_getblk(handle, ea_inode, block, 0); 1385 1389 if (IS_ERR(bh)) 1386 1390 return PTR_ERR(bh); 1391 + if (!bh) { 1392 + WARN_ON_ONCE(1); 1393 + EXT4_ERROR_INODE(ea_inode, 1394 + "ext4_getblk() return bh = NULL"); 1395 + return -EFSCORRUPTED; 1396 + } 1387 1397 ret = ext4_journal_get_write_access(handle, bh); 1388 1398 if (ret) 1389 1399 goto out; ··· 2278 2276 if (!bh) 2279 2277 return ERR_PTR(-EIO); 2280 2278 error = ext4_xattr_check_block(inode, bh); 2281 - if (error) 2279 + if (error) { 2280 + brelse(bh); 2282 2281 return ERR_PTR(error); 2282 + } 2283 2283 return bh; 2284 2284 } 2285 2285 ··· 2401 2397 error = ext4_xattr_block_set(handle, inode, &i, &bs); 2402 2398 } else if (error == -ENOSPC) { 2403 2399 if (EXT4_I(inode)->i_file_acl && !bs.s.base) { 2400 + brelse(bs.bh); 2401 + bs.bh = NULL; 2404 2402 error = ext4_xattr_block_find(inode, &i, &bs); 2405 2403 if (error) 2406 2404 goto cleanup; ··· 2623 2617 kfree(buffer); 2624 2618 if (is) 2625 2619 brelse(is->iloc.bh); 2620 + if (bs) 2621 + brelse(bs->bh); 2626 2622 kfree(is); 2627 2623 kfree(bs); 2628 2624 ··· 2704 2696 struct ext4_inode *raw_inode, handle_t *handle) 2705 2697 { 2706 2698 struct ext4_xattr_ibody_header *header; 2707 - struct buffer_head *bh; 2708 2699 struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb); 2709 2700 static unsigned int mnt_count; 2710 2701 
size_t min_offs; ··· 2744 2737 * EA block can hold new_extra_isize bytes. 2745 2738 */ 2746 2739 if (EXT4_I(inode)->i_file_acl) { 2740 + struct buffer_head *bh; 2741 + 2747 2742 bh = sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl); 2748 2743 error = -EIO; 2749 2744 if (!bh) 2750 2745 goto cleanup; 2751 2746 error = ext4_xattr_check_block(inode, bh); 2752 - if (error) 2747 + if (error) { 2748 + brelse(bh); 2753 2749 goto cleanup; 2750 + } 2754 2751 base = BHDR(bh); 2755 2752 end = bh->b_data + bh->b_size; 2756 2753 min_offs = end - base;
+3
fs/fscache/object.c
··· 730 730 731 731 if (awaken) 732 732 wake_up_bit(&cookie->flags, FSCACHE_COOKIE_INVALIDATING); 733 + if (test_and_clear_bit(FSCACHE_COOKIE_LOOKING_UP, &cookie->flags)) 734 + wake_up_bit(&cookie->flags, FSCACHE_COOKIE_LOOKING_UP); 735 + 733 736 734 737 /* Prevent a race with our last child, which has to signal EV_CLEARED 735 738 * before dropping our spinlock.
+12 -4
fs/fuse/dev.c
··· 165 165 166 166 static void fuse_drop_waiting(struct fuse_conn *fc) 167 167 { 168 - if (fc->connected) { 169 - atomic_dec(&fc->num_waiting); 170 - } else if (atomic_dec_and_test(&fc->num_waiting)) { 168 + /* 169 + * lockless check of fc->connected is okay, because atomic_dec_and_test() 170 + * provides a memory barrier matched with the one in fuse_wait_aborted() 171 + * to ensure no wake-up is missed. 172 + */ 173 + if (atomic_dec_and_test(&fc->num_waiting) && 174 + !READ_ONCE(fc->connected)) { 171 175 /* wake up aborters */ 172 176 wake_up_all(&fc->blocked_waitq); 173 177 } ··· 1772 1768 req->in.args[1].size = total_len; 1773 1769 1774 1770 err = fuse_request_send_notify_reply(fc, req, outarg->notify_unique); 1775 - if (err) 1771 + if (err) { 1776 1772 fuse_retrieve_end(fc, req); 1773 + fuse_put_request(fc, req); 1774 + } 1777 1775 1778 1776 return err; 1779 1777 } ··· 2225 2219 2226 2220 void fuse_wait_aborted(struct fuse_conn *fc) 2227 2221 { 2222 + /* matches implicit memory barrier in fuse_drop_waiting() */ 2223 + smp_mb(); 2228 2224 wait_event(fc->blocked_waitq, atomic_read(&fc->num_waiting) == 0); 2229 2225 } 2230 2226
+3 -1
fs/fuse/file.c
··· 2924 2924 } 2925 2925 2926 2926 if (io->async) { 2927 + bool blocking = io->blocking; 2928 + 2927 2929 fuse_aio_complete(io, ret < 0 ? ret : 0, -1); 2928 2930 2929 2931 /* we have a non-extending, async request, so return */ 2930 - if (!io->blocking) 2932 + if (!blocking) 2931 2933 return -EIOCBQUEUED; 2932 2934 2933 2935 wait_for_completion(&wait);
+27 -27
fs/gfs2/bmap.c
··· 826 826 ret = gfs2_meta_inode_buffer(ip, &dibh); 827 827 if (ret) 828 828 goto unlock; 829 - iomap->private = dibh; 829 + mp->mp_bh[0] = dibh; 830 830 831 831 if (gfs2_is_stuffed(ip)) { 832 832 if (flags & IOMAP_WRITE) { ··· 863 863 len = lblock_stop - lblock + 1; 864 864 iomap->length = len << inode->i_blkbits; 865 865 866 - get_bh(dibh); 867 - mp->mp_bh[0] = dibh; 868 - 869 866 height = ip->i_height; 870 867 while ((lblock + 1) * sdp->sd_sb.sb_bsize > sdp->sd_heightsize[height]) 871 868 height++; ··· 895 898 iomap->bdev = inode->i_sb->s_bdev; 896 899 unlock: 897 900 up_read(&ip->i_rw_mutex); 898 - if (ret && dibh) 899 - brelse(dibh); 900 901 return ret; 901 902 902 903 do_alloc: ··· 975 980 976 981 static int gfs2_iomap_begin_write(struct inode *inode, loff_t pos, 977 982 loff_t length, unsigned flags, 978 - struct iomap *iomap) 983 + struct iomap *iomap, 984 + struct metapath *mp) 979 985 { 980 - struct metapath mp = { .mp_aheight = 1, }; 981 986 struct gfs2_inode *ip = GFS2_I(inode); 982 987 struct gfs2_sbd *sdp = GFS2_SB(inode); 983 988 unsigned int data_blocks = 0, ind_blocks = 0, rblocks; ··· 991 996 unstuff = gfs2_is_stuffed(ip) && 992 997 pos + length > gfs2_max_stuffed_size(ip); 993 998 994 - ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp); 999 + ret = gfs2_iomap_get(inode, pos, length, flags, iomap, mp); 995 1000 if (ret) 996 - goto out_release; 1001 + goto out_unlock; 997 1002 998 1003 alloc_required = unstuff || iomap->type == IOMAP_HOLE; 999 1004 ··· 1008 1013 1009 1014 ret = gfs2_quota_lock_check(ip, &ap); 1010 1015 if (ret) 1011 - goto out_release; 1016 + goto out_unlock; 1012 1017 1013 1018 ret = gfs2_inplace_reserve(ip, &ap); 1014 1019 if (ret) ··· 1033 1038 ret = gfs2_unstuff_dinode(ip, NULL); 1034 1039 if (ret) 1035 1040 goto out_trans_end; 1036 - release_metapath(&mp); 1037 - brelse(iomap->private); 1038 - iomap->private = NULL; 1041 + release_metapath(mp); 1039 1042 ret = gfs2_iomap_get(inode, iomap->offset, iomap->length, 1040 
- flags, iomap, &mp); 1043 + flags, iomap, mp); 1041 1044 if (ret) 1042 1045 goto out_trans_end; 1043 1046 } 1044 1047 1045 1048 if (iomap->type == IOMAP_HOLE) { 1046 - ret = gfs2_iomap_alloc(inode, iomap, flags, &mp); 1049 + ret = gfs2_iomap_alloc(inode, iomap, flags, mp); 1047 1050 if (ret) { 1048 1051 gfs2_trans_end(sdp); 1049 1052 gfs2_inplace_release(ip); ··· 1049 1056 goto out_qunlock; 1050 1057 } 1051 1058 } 1052 - release_metapath(&mp); 1053 1059 if (!gfs2_is_stuffed(ip) && gfs2_is_jdata(ip)) 1054 1060 iomap->page_done = gfs2_iomap_journaled_page_done; 1055 1061 return 0; ··· 1061 1069 out_qunlock: 1062 1070 if (alloc_required) 1063 1071 gfs2_quota_unlock(ip); 1064 - out_release: 1065 - if (iomap->private) 1066 - brelse(iomap->private); 1067 - release_metapath(&mp); 1072 + out_unlock: 1068 1073 gfs2_write_unlock(inode); 1069 1074 return ret; 1070 1075 } ··· 1077 1088 1078 1089 trace_gfs2_iomap_start(ip, pos, length, flags); 1079 1090 if ((flags & IOMAP_WRITE) && !(flags & IOMAP_DIRECT)) { 1080 - ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap); 1091 + ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap, &mp); 1081 1092 } else { 1082 1093 ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp); 1083 - release_metapath(&mp); 1094 + 1084 1095 /* 1085 1096 * Silently fall back to buffered I/O for stuffed files or if 1086 1097 * we've hot a hole (see gfs2_file_direct_write). ··· 1089 1100 iomap->type != IOMAP_MAPPED) 1090 1101 ret = -ENOTBLK; 1091 1102 } 1103 + if (!ret) { 1104 + get_bh(mp.mp_bh[0]); 1105 + iomap->private = mp.mp_bh[0]; 1106 + } 1107 + release_metapath(&mp); 1092 1108 trace_gfs2_iomap_end(ip, iomap, ret); 1093 1109 return ret; 1094 1110 } ··· 1902 1908 if (ret < 0) 1903 1909 goto out; 1904 1910 1905 - /* issue read-ahead on metadata */ 1906 - if (mp.mp_aheight > 1) { 1907 - for (; ret > 1; ret--) { 1908 - metapointer_range(&mp, mp.mp_aheight - ret, 1911 + /* On the first pass, issue read-ahead on metadata. 
*/ 1912 + if (mp.mp_aheight > 1 && strip_h == ip->i_height - 1) { 1913 + unsigned int height = mp.mp_aheight - 1; 1914 + 1915 + /* No read-ahead for data blocks. */ 1916 + if (mp.mp_aheight - 1 == strip_h) 1917 + height--; 1918 + 1919 + for (; height >= mp.mp_aheight - ret; height--) { 1920 + metapointer_range(&mp, height, 1909 1921 start_list, start_aligned, 1910 1922 end_list, end_aligned, 1911 1923 &start, &end);
+2 -1
fs/gfs2/rgrp.c
··· 733 733 734 734 if (gl) { 735 735 glock_clear_object(gl, rgd); 736 + gfs2_rgrp_brelse(rgd); 736 737 gfs2_glock_put(gl); 737 738 } 738 739 ··· 1175 1174 * @rgd: the struct gfs2_rgrpd describing the RG to read in 1176 1175 * 1177 1176 * Read in all of a Resource Group's header and bitmap blocks. 1178 - * Caller must eventually call gfs2_rgrp_relse() to free the bitmaps. 1177 + * Caller must eventually call gfs2_rgrp_brelse() to free the bitmaps. 1179 1178 * 1180 1179 * Returns: errno 1181 1180 */
+2 -1
fs/hfs/btree.c
··· 338 338 339 339 nidx -= len * 8; 340 340 i = node->next; 341 - hfs_bnode_put(node); 342 341 if (!i) { 343 342 /* panic */; 344 343 pr_crit("unable to free bnode %u. bmap not found!\n", 345 344 node->this); 345 + hfs_bnode_put(node); 346 346 return; 347 347 } 348 + hfs_bnode_put(node); 348 349 node = hfs_bnode_find(tree, i); 349 350 if (IS_ERR(node)) 350 351 return;
+2 -1
fs/hfsplus/btree.c
··· 466 466 467 467 nidx -= len * 8; 468 468 i = node->next; 469 - hfs_bnode_put(node); 470 469 if (!i) { 471 470 /* panic */; 472 471 pr_crit("unable to free bnode %u. " 473 472 "bmap not found!\n", 474 473 node->this); 474 + hfs_bnode_put(node); 475 475 return; 476 476 } 477 + hfs_bnode_put(node); 477 478 node = hfs_bnode_find(tree, i); 478 479 if (IS_ERR(node)) 479 480 return;
+5 -2
fs/inode.c
··· 730 730 return LRU_REMOVED; 731 731 } 732 732 733 - /* recently referenced inodes get one more pass */ 734 - if (inode->i_state & I_REFERENCED) { 733 + /* 734 + * Recently referenced inodes and inodes with many attached pages 735 + * get one more pass. 736 + */ 737 + if (inode->i_state & I_REFERENCED || inode->i_data.nrpages > 1) { 735 738 inode->i_state &= ~I_REFERENCED; 736 739 spin_unlock(&inode->i_lock); 737 740 return LRU_ROTATE;
+41 -12
fs/iomap.c
··· 142 142 iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, 143 143 loff_t *pos, loff_t length, unsigned *offp, unsigned *lenp) 144 144 { 145 + loff_t orig_pos = *pos; 146 + loff_t isize = i_size_read(inode); 145 147 unsigned block_bits = inode->i_blkbits; 146 148 unsigned block_size = (1 << block_bits); 147 149 unsigned poff = offset_in_page(*pos); 148 150 unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length); 149 151 unsigned first = poff >> block_bits; 150 152 unsigned last = (poff + plen - 1) >> block_bits; 151 - unsigned end = offset_in_page(i_size_read(inode)) >> block_bits; 152 153 153 154 /* 154 155 * If the block size is smaller than the page size we need to check the ··· 184 183 * handle both halves separately so that we properly zero data in the 185 184 * page cache for blocks that are entirely outside of i_size. 186 185 */ 187 - if (first <= end && last > end) 188 - plen -= (last - end) * block_size; 186 + if (orig_pos <= isize && orig_pos + length > isize) { 187 + unsigned end = offset_in_page(isize - 1) >> block_bits; 188 + 189 + if (first <= end && last > end) 190 + plen -= (last - end) * block_size; 191 + } 189 192 190 193 *offp = poff; 191 194 *lenp = plen; ··· 1585 1580 struct bio *bio; 1586 1581 bool need_zeroout = false; 1587 1582 bool use_fua = false; 1588 - int nr_pages, ret; 1583 + int nr_pages, ret = 0; 1589 1584 size_t copied = 0; 1590 1585 1591 1586 if ((pos | length | align) & ((1 << blkbits) - 1)) ··· 1601 1596 1602 1597 if (iomap->flags & IOMAP_F_NEW) { 1603 1598 need_zeroout = true; 1604 - } else { 1599 + } else if (iomap->type == IOMAP_MAPPED) { 1605 1600 /* 1606 - * Use a FUA write if we need datasync semantics, this 1607 - * is a pure data IO that doesn't require any metadata 1608 - * updates and the underlying device supports FUA. This 1609 - * allows us to avoid cache flushes on IO completion. 
1601 + * Use a FUA write if we need datasync semantics, this is a pure 1602 + * data IO that doesn't require any metadata updates (including 1603 + * after IO completion such as unwritten extent conversion) and 1604 + * the underlying device supports FUA. This allows us to avoid 1605 + * cache flushes on IO completion. 1610 1606 */ 1611 1607 if (!(iomap->flags & (IOMAP_F_SHARED|IOMAP_F_DIRTY)) && 1612 1608 (dio->flags & IOMAP_DIO_WRITE_FUA) && ··· 1650 1644 1651 1645 ret = bio_iov_iter_get_pages(bio, &iter); 1652 1646 if (unlikely(ret)) { 1647 + /* 1648 + * We have to stop part way through an IO. We must fall 1649 + * through to the sub-block tail zeroing here, otherwise 1650 + * this short IO may expose stale data in the tail of 1651 + * the block we haven't written data to. 1652 + */ 1653 1653 bio_put(bio); 1654 - return copied ? copied : ret; 1654 + goto zero_tail; 1655 1655 } 1656 1656 1657 1657 n = bio->bi_iter.bi_size; ··· 1688 1676 dio->submit.cookie = submit_bio(bio); 1689 1677 } while (nr_pages); 1690 1678 1691 - if (need_zeroout) { 1679 + /* 1680 + * We need to zeroout the tail of a sub-block write if the extent type 1681 + * requires zeroing or the write extends beyond EOF. If we don't zero 1682 + * the block tail in the latter case, we can expose stale data via mmap 1683 + * reads of the EOF block. 1684 + */ 1685 + zero_tail: 1686 + if (need_zeroout || 1687 + ((dio->flags & IOMAP_DIO_WRITE) && pos >= i_size_read(inode))) { 1692 1688 /* zero out from the end of the write to the end of the block */ 1693 1689 pad = pos & (fs_block_size - 1); 1694 1690 if (pad) 1695 1691 iomap_dio_zero(dio, iomap, pos, fs_block_size - pad); 1696 1692 } 1697 - return copied; 1693 + return copied ? copied : ret; 1698 1694 } 1699 1695 1700 1696 static loff_t ··· 1877 1857 dio->wait_for_completion = true; 1878 1858 ret = 0; 1879 1859 } 1860 + 1861 + /* 1862 + * Splicing to pipes can fail on a full pipe. 
We have to 1863 + * swallow this to make it look like a short IO 1864 + * otherwise the higher splice layers will completely 1865 + * mishandle the error and stop moving data. 1866 + */ 1867 + if (ret == -EFAULT) 1868 + ret = 0; 1880 1869 break; 1881 1870 } 1882 1871 pos += ret;
+20 -8
fs/namespace.c
··· 695 695 696 696 hlist_for_each_entry(mp, chain, m_hash) { 697 697 if (mp->m_dentry == dentry) { 698 - /* might be worth a WARN_ON() */ 699 - if (d_unlinked(dentry)) 700 - return ERR_PTR(-ENOENT); 701 698 mp->m_count++; 702 699 return mp; 703 700 } ··· 708 711 int ret; 709 712 710 713 if (d_mountpoint(dentry)) { 714 + /* might be worth a WARN_ON() */ 715 + if (d_unlinked(dentry)) 716 + return ERR_PTR(-ENOENT); 711 717 mountpoint: 712 718 read_seqlock_excl(&mount_lock); 713 719 mp = lookup_mountpoint(dentry); ··· 1540 1540 1541 1541 namespace_lock(); 1542 1542 lock_mount_hash(); 1543 - event++; 1544 1543 1544 + /* Recheck MNT_LOCKED with the locks held */ 1545 + retval = -EINVAL; 1546 + if (mnt->mnt.mnt_flags & MNT_LOCKED) 1547 + goto out; 1548 + 1549 + event++; 1545 1550 if (flags & MNT_DETACH) { 1546 1551 if (!list_empty(&mnt->mnt_list)) 1547 1552 umount_tree(mnt, UMOUNT_PROPAGATE); ··· 1560 1555 retval = 0; 1561 1556 } 1562 1557 } 1558 + out: 1563 1559 unlock_mount_hash(); 1564 1560 namespace_unlock(); 1565 1561 return retval; ··· 1651 1645 goto dput_and_out; 1652 1646 if (!check_mnt(mnt)) 1653 1647 goto dput_and_out; 1654 - if (mnt->mnt.mnt_flags & MNT_LOCKED) 1648 + if (mnt->mnt.mnt_flags & MNT_LOCKED) /* Check optimistically */ 1655 1649 goto dput_and_out; 1656 1650 retval = -EPERM; 1657 1651 if (flags & MNT_FORCE && !capable(CAP_SYS_ADMIN)) ··· 1734 1728 for (s = r; s; s = next_mnt(s, r)) { 1735 1729 if (!(flag & CL_COPY_UNBINDABLE) && 1736 1730 IS_MNT_UNBINDABLE(s)) { 1737 - s = skip_mnt_tree(s); 1738 - continue; 1731 + if (s->mnt.mnt_flags & MNT_LOCKED) { 1732 + /* Both unbindable and locked. 
*/ 1733 + q = ERR_PTR(-EPERM); 1734 + goto out; 1735 + } else { 1736 + s = skip_mnt_tree(s); 1737 + continue; 1738 + } 1739 1739 } 1740 1740 if (!(flag & CL_COPY_MNT_NS_FILE) && 1741 1741 is_mnt_ns_file(s->mnt.mnt_root)) { ··· 1794 1782 { 1795 1783 namespace_lock(); 1796 1784 lock_mount_hash(); 1797 - umount_tree(real_mount(mnt), UMOUNT_SYNC); 1785 + umount_tree(real_mount(mnt), 0); 1798 1786 unlock_mount_hash(); 1799 1787 namespace_unlock(); 1800 1788 }
+13 -13
fs/nfs/callback_proc.c
··· 66 66 out_iput: 67 67 rcu_read_unlock(); 68 68 trace_nfs4_cb_getattr(cps->clp, &args->fh, inode, -ntohl(res->status)); 69 - iput(inode); 69 + nfs_iput_and_deactive(inode); 70 70 out: 71 71 dprintk("%s: exit with status = %d\n", __func__, ntohl(res->status)); 72 72 return res->status; ··· 108 108 } 109 109 trace_nfs4_cb_recall(cps->clp, &args->fh, inode, 110 110 &args->stateid, -ntohl(res)); 111 - iput(inode); 111 + nfs_iput_and_deactive(inode); 112 112 out: 113 113 dprintk("%s: exit with status = %d\n", __func__, ntohl(res)); 114 114 return res; ··· 686 686 { 687 687 struct cb_offloadargs *args = data; 688 688 struct nfs_server *server; 689 - struct nfs4_copy_state *copy; 689 + struct nfs4_copy_state *copy, *tmp_copy; 690 690 bool found = false; 691 + 692 + copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); 693 + if (!copy) 694 + return htonl(NFS4ERR_SERVERFAULT); 691 695 692 696 spin_lock(&cps->clp->cl_lock); 693 697 rcu_read_lock(); 694 698 list_for_each_entry_rcu(server, &cps->clp->cl_superblocks, 695 699 client_link) { 696 - list_for_each_entry(copy, &server->ss_copies, copies) { 700 + list_for_each_entry(tmp_copy, &server->ss_copies, copies) { 697 701 if (memcmp(args->coa_stateid.other, 698 - copy->stateid.other, 702 + tmp_copy->stateid.other, 699 703 sizeof(args->coa_stateid.other))) 700 704 continue; 701 - nfs4_copy_cb_args(copy, args); 702 - complete(&copy->completion); 705 + nfs4_copy_cb_args(tmp_copy, args); 706 + complete(&tmp_copy->completion); 703 707 found = true; 704 708 goto out; 705 709 } ··· 711 707 out: 712 708 rcu_read_unlock(); 713 709 if (!found) { 714 - copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); 715 - if (!copy) { 716 - spin_unlock(&cps->clp->cl_lock); 717 - return htonl(NFS4ERR_SERVERFAULT); 718 - } 719 710 memcpy(&copy->stateid, &args->coa_stateid, NFS4_STATEID_SIZE); 720 711 nfs4_copy_cb_args(copy, args); 721 712 list_add_tail(&copy->copies, &cps->clp->pending_cb_stateids); 722 - } 713 + } else 714 + kfree(copy); 
723 715 spin_unlock(&cps->clp->cl_lock); 724 716 725 717 return 0;
+9 -2
fs/nfs/delegation.c
··· 850 850 const struct nfs_fh *fhandle) 851 851 { 852 852 struct nfs_delegation *delegation; 853 - struct inode *res = NULL; 853 + struct inode *freeme, *res = NULL; 854 854 855 855 list_for_each_entry_rcu(delegation, &server->delegations, super_list) { 856 856 spin_lock(&delegation->lock); 857 857 if (delegation->inode != NULL && 858 858 nfs_compare_fh(fhandle, &NFS_I(delegation->inode)->fh) == 0) { 859 - res = igrab(delegation->inode); 859 + freeme = igrab(delegation->inode); 860 + if (freeme && nfs_sb_active(freeme->i_sb)) 861 + res = freeme; 860 862 spin_unlock(&delegation->lock); 861 863 if (res != NULL) 862 864 return res; 865 + if (freeme) { 866 + rcu_read_unlock(); 867 + iput(freeme); 868 + rcu_read_lock(); 869 + } 863 870 return ERR_PTR(-EAGAIN); 864 871 } 865 872 spin_unlock(&delegation->lock);
+9 -12
fs/nfs/flexfilelayout/flexfilelayout.c
··· 1361 1361 task)) 1362 1362 return; 1363 1363 1364 - if (ff_layout_read_prepare_common(task, hdr)) 1365 - return; 1366 - 1367 - if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context, 1368 - hdr->args.lock_context, FMODE_READ) == -EIO) 1369 - rpc_exit(task, -EIO); /* lost lock, terminate I/O */ 1364 + ff_layout_read_prepare_common(task, hdr); 1370 1365 } 1371 1366 1372 1367 static void ff_layout_read_call_done(struct rpc_task *task, void *data) ··· 1537 1542 task)) 1538 1543 return; 1539 1544 1540 - if (ff_layout_write_prepare_common(task, hdr)) 1541 - return; 1542 - 1543 - if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context, 1544 - hdr->args.lock_context, FMODE_WRITE) == -EIO) 1545 - rpc_exit(task, -EIO); /* lost lock, terminate I/O */ 1545 + ff_layout_write_prepare_common(task, hdr); 1546 1546 } 1547 1547 1548 1548 static void ff_layout_write_call_done(struct rpc_task *task, void *data) ··· 1732 1742 fh = nfs4_ff_layout_select_ds_fh(lseg, idx); 1733 1743 if (fh) 1734 1744 hdr->args.fh = fh; 1745 + 1746 + if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid)) 1747 + goto out_failed; 1748 + 1735 1749 /* 1736 1750 * Note that if we ever decide to split across DSes, 1737 1751 * then we may need to handle dense-like offsets. ··· 1797 1803 fh = nfs4_ff_layout_select_ds_fh(lseg, idx); 1798 1804 if (fh) 1799 1805 hdr->args.fh = fh; 1806 + 1807 + if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid)) 1808 + goto out_failed; 1800 1809 1801 1810 /* 1802 1811 * Note that if we ever decide to split across DSes,
+4
fs/nfs/flexfilelayout/flexfilelayout.h
··· 215 215 unsigned int maxnum); 216 216 struct nfs_fh * 217 217 nfs4_ff_layout_select_ds_fh(struct pnfs_layout_segment *lseg, u32 mirror_idx); 218 + int 219 + nfs4_ff_layout_select_ds_stateid(struct pnfs_layout_segment *lseg, 220 + u32 mirror_idx, 221 + nfs4_stateid *stateid); 218 222 219 223 struct nfs4_pnfs_ds * 220 224 nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg, u32 ds_idx,
+19
fs/nfs/flexfilelayout/flexfilelayoutdev.c
··· 370 370 return fh; 371 371 } 372 372 373 + int 374 + nfs4_ff_layout_select_ds_stateid(struct pnfs_layout_segment *lseg, 375 + u32 mirror_idx, 376 + nfs4_stateid *stateid) 377 + { 378 + struct nfs4_ff_layout_mirror *mirror = FF_LAYOUT_COMP(lseg, mirror_idx); 379 + 380 + if (!ff_layout_mirror_valid(lseg, mirror, false)) { 381 + pr_err_ratelimited("NFS: %s: No data server for mirror offset index %d\n", 382 + __func__, mirror_idx); 383 + goto out; 384 + } 385 + 386 + nfs4_stateid_copy(stateid, &mirror->stateid); 387 + return 1; 388 + out: 389 + return 0; 390 + } 391 + 373 392 /** 374 393 * nfs4_ff_layout_prepare_ds - prepare a DS connection for an RPC call 375 394 * @lseg: the layout segment we're operating on
+10 -9
fs/nfs/nfs42proc.c
··· 137 137 struct file *dst, 138 138 nfs4_stateid *src_stateid) 139 139 { 140 - struct nfs4_copy_state *copy; 140 + struct nfs4_copy_state *copy, *tmp_copy; 141 141 int status = NFS4_OK; 142 142 bool found_pending = false; 143 143 struct nfs_open_context *ctx = nfs_file_open_context(dst); 144 144 145 + copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); 146 + if (!copy) 147 + return -ENOMEM; 148 + 145 149 spin_lock(&server->nfs_client->cl_lock); 146 - list_for_each_entry(copy, &server->nfs_client->pending_cb_stateids, 150 + list_for_each_entry(tmp_copy, &server->nfs_client->pending_cb_stateids, 147 151 copies) { 148 - if (memcmp(&res->write_res.stateid, &copy->stateid, 152 + if (memcmp(&res->write_res.stateid, &tmp_copy->stateid, 149 153 NFS4_STATEID_SIZE)) 150 154 continue; 151 155 found_pending = true; 152 - list_del(&copy->copies); 156 + list_del(&tmp_copy->copies); 153 157 break; 154 158 } 155 159 if (found_pending) { 156 160 spin_unlock(&server->nfs_client->cl_lock); 161 + kfree(copy); 162 + copy = tmp_copy; 157 163 goto out; 158 164 } 159 165 160 - copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); 161 - if (!copy) { 162 - spin_unlock(&server->nfs_client->cl_lock); 163 - return -ENOMEM; 164 - } 165 166 memcpy(&copy->stateid, &res->write_res.stateid, NFS4_STATEID_SIZE); 166 167 init_completion(&copy->completion); 167 168 copy->parent_state = ctx->state;
+2
fs/nfs/nfs4_fs.h
··· 41 41 NFS4CLNT_MOVED, 42 42 NFS4CLNT_LEASE_MOVED, 43 43 NFS4CLNT_DELEGATION_EXPIRED, 44 + NFS4CLNT_RUN_MANAGER, 45 + NFS4CLNT_DELEGRETURN_RUNNING, 44 46 }; 45 47 46 48 #define NFS4_RENEW_TIMEOUT 0x01
+17 -9
fs/nfs/nfs4state.c
··· 1210 1210 struct task_struct *task; 1211 1211 char buf[INET6_ADDRSTRLEN + sizeof("-manager") + 1]; 1212 1212 1213 + set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state); 1213 1214 if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0) 1214 1215 return; 1215 1216 __module_get(THIS_MODULE); ··· 2504 2503 2505 2504 /* Ensure exclusive access to NFSv4 state */ 2506 2505 do { 2506 + clear_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state); 2507 2507 if (test_bit(NFS4CLNT_PURGE_STATE, &clp->cl_state)) { 2508 2508 section = "purge state"; 2509 2509 status = nfs4_purge_lease(clp); ··· 2595 2593 } 2596 2594 2597 2595 nfs4_end_drain_session(clp); 2598 - if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) { 2599 - nfs_client_return_marked_delegations(clp); 2600 - continue; 2596 + nfs4_clear_state_manager_bit(clp); 2597 + 2598 + if (!test_and_set_bit(NFS4CLNT_DELEGRETURN_RUNNING, &clp->cl_state)) { 2599 + if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) { 2600 + nfs_client_return_marked_delegations(clp); 2601 + set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state); 2602 + } 2603 + clear_bit(NFS4CLNT_DELEGRETURN_RUNNING, &clp->cl_state); 2601 2604 } 2602 2605 2603 - nfs4_clear_state_manager_bit(clp); 2604 2606 /* Did we race with an attempt to give us more work? */ 2605 - if (clp->cl_state == 0) 2606 - break; 2607 + if (!test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state)) 2608 + return; 2607 2609 if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0) 2608 - break; 2609 - } while (refcount_read(&clp->cl_count) > 1); 2610 - return; 2610 + return; 2611 + } while (refcount_read(&clp->cl_count) > 1 && !signalled()); 2612 + goto out_drain; 2613 + 2611 2614 out_error: 2612 2615 if (strlen(section)) 2613 2616 section_sep = ": "; ··· 2620 2613 " with error %d\n", section_sep, section, 2621 2614 clp->cl_hostname, -status); 2622 2615 ssleep(1); 2616 + out_drain: 2623 2617 nfs4_end_drain_session(clp); 2624 2618 nfs4_clear_state_manager_bit(clp); 2625 2619 }
+3
fs/nfsd/nfs4proc.c
··· 1038 1038 { 1039 1039 __be32 status; 1040 1040 1041 + if (!cstate->save_fh.fh_dentry) 1042 + return nfserr_nofilehandle; 1043 + 1041 1044 status = nfs4_preprocess_stateid_op(rqstp, cstate, &cstate->save_fh, 1042 1045 src_stateid, RD_STATE, src, NULL); 1043 1046 if (status) {
+1 -3
fs/nilfs2/btnode.c
··· 266 266 return; 267 267 268 268 if (nbh == NULL) { /* blocksize == pagesize */ 269 - xa_lock_irq(&btnc->i_pages); 270 - __xa_erase(&btnc->i_pages, newkey); 271 - xa_unlock_irq(&btnc->i_pages); 269 + xa_erase_irq(&btnc->i_pages, newkey); 272 270 unlock_page(ctxt->bh->b_page); 273 271 } else 274 272 brelse(nbh);
+5 -5
fs/notify/fanotify/fanotify.c
··· 115 115 continue; 116 116 mark = iter_info->marks[type]; 117 117 /* 118 - * if the event is for a child and this inode doesn't care about 119 - * events on the child, don't send it! 118 + * If the event is for a child and this mark doesn't care about 119 + * events on a child, don't send it! 120 120 */ 121 - if (type == FSNOTIFY_OBJ_TYPE_INODE && 122 - (event_mask & FS_EVENT_ON_CHILD) && 123 - !(mark->mask & FS_EVENT_ON_CHILD)) 121 + if (event_mask & FS_EVENT_ON_CHILD && 122 + (type != FSNOTIFY_OBJ_TYPE_INODE || 123 + !(mark->mask & FS_EVENT_ON_CHILD))) 124 124 continue; 125 125 126 126 marks_mask |= mark->mask;
+5 -2
fs/notify/fsnotify.c
··· 167 167 parent = dget_parent(dentry); 168 168 p_inode = parent->d_inode; 169 169 170 - if (unlikely(!fsnotify_inode_watches_children(p_inode))) 170 + if (unlikely(!fsnotify_inode_watches_children(p_inode))) { 171 171 __fsnotify_update_child_dentry_flags(p_inode); 172 - else if (p_inode->i_fsnotify_mask & mask) { 172 + } else if (p_inode->i_fsnotify_mask & mask & ALL_FSNOTIFY_EVENTS) { 173 173 struct name_snapshot name; 174 174 175 175 /* we are notifying a parent so come up with the new mask which ··· 339 339 sb = mnt->mnt.mnt_sb; 340 340 mnt_or_sb_mask = mnt->mnt_fsnotify_mask | sb->s_fsnotify_mask; 341 341 } 342 + /* An event "on child" is not intended for a mount/sb mark */ 343 + if (mask & FS_EVENT_ON_CHILD) 344 + mnt_or_sb_mask = 0; 342 345 343 346 /* 344 347 * Optimization: srcu_read_lock() has a memory barrier which can
+10 -2
fs/ocfs2/aops.c
··· 2411 2411 /* this io's submitter should not have unlocked this before we could */ 2412 2412 BUG_ON(!ocfs2_iocb_is_rw_locked(iocb)); 2413 2413 2414 - if (bytes > 0 && private) 2415 - ret = ocfs2_dio_end_io_write(inode, private, offset, bytes); 2414 + if (bytes <= 0) 2415 + mlog_ratelimited(ML_ERROR, "Direct IO failed, bytes = %lld", 2416 + (long long)bytes); 2417 + if (private) { 2418 + if (bytes > 0) 2419 + ret = ocfs2_dio_end_io_write(inode, private, offset, 2420 + bytes); 2421 + else 2422 + ocfs2_dio_free_write_ctx(inode, private); 2423 + } 2416 2424 2417 2425 ocfs2_iocb_clear_rw_locked(iocb); 2418 2426
+9
fs/ocfs2/cluster/masklog.h
··· 178 178 ##__VA_ARGS__); \ 179 179 } while (0) 180 180 181 + #define mlog_ratelimited(mask, fmt, ...) \ 182 + do { \ 183 + static DEFINE_RATELIMIT_STATE(_rs, \ 184 + DEFAULT_RATELIMIT_INTERVAL, \ 185 + DEFAULT_RATELIMIT_BURST); \ 186 + if (__ratelimit(&_rs)) \ 187 + mlog(mask, fmt, ##__VA_ARGS__); \ 188 + } while (0) 189 + 181 190 #define mlog_errno(st) ({ \ 182 191 int _st = (st); \ 183 192 if (_st != -ERESTARTSYS && _st != -EINTR && \
+1 -1
fs/ocfs2/export.c
··· 125 125 126 126 check_gen: 127 127 if (handle->ih_generation != inode->i_generation) { 128 - iput(inode); 129 128 trace_ocfs2_get_dentry_generation((unsigned long long)blkno, 130 129 handle->ih_generation, 131 130 inode->i_generation); 131 + iput(inode); 132 132 result = ERR_PTR(-ESTALE); 133 133 goto bail; 134 134 }
+26 -21
fs/ocfs2/move_extents.c
··· 157 157 } 158 158 159 159 /* 160 - * lock allocators, and reserving appropriate number of bits for 161 - * meta blocks and data clusters. 162 - * 163 - * in some cases, we don't need to reserve clusters, just let data_ac 164 - * be NULL. 160 + * lock allocator, and reserve appropriate number of bits for 161 + * meta blocks. 165 162 */ 166 - static int ocfs2_lock_allocators_move_extents(struct inode *inode, 163 + static int ocfs2_lock_meta_allocator_move_extents(struct inode *inode, 167 164 struct ocfs2_extent_tree *et, 168 165 u32 clusters_to_move, 169 166 u32 extents_to_split, 170 167 struct ocfs2_alloc_context **meta_ac, 171 - struct ocfs2_alloc_context **data_ac, 172 168 int extra_blocks, 173 169 int *credits) 174 170 { ··· 189 193 goto out; 190 194 } 191 195 192 - if (data_ac) { 193 - ret = ocfs2_reserve_clusters(osb, clusters_to_move, data_ac); 194 - if (ret) { 195 - mlog_errno(ret); 196 - goto out; 197 - } 198 - } 199 196 200 197 *credits += ocfs2_calc_extend_credits(osb->sb, et->et_root_el); 201 198 ··· 248 259 } 249 260 } 250 261 251 - ret = ocfs2_lock_allocators_move_extents(inode, &context->et, *len, 1, 252 - &context->meta_ac, 253 - &context->data_ac, 254 - extra_blocks, &credits); 262 + ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et, 263 + *len, 1, 264 + &context->meta_ac, 265 + extra_blocks, &credits); 255 266 if (ret) { 256 267 mlog_errno(ret); 257 268 goto out; ··· 272 283 mlog_errno(ret); 273 284 goto out_unlock_mutex; 274 285 } 286 + } 287 + 288 + /* 289 + * Make sure ocfs2_reserve_cluster is called after 290 + * __ocfs2_flush_truncate_log, otherwise, dead lock may happen. 291 + * 292 + * If ocfs2_reserve_cluster is called 293 + * before __ocfs2_flush_truncate_log, dead lock on global bitmap 294 + * may happen. 295 + * 296 + */ 297 + ret = ocfs2_reserve_clusters(osb, *len, &context->data_ac); 298 + if (ret) { 299 + mlog_errno(ret); 300 + goto out_unlock_mutex; 275 301 } 276 302 277 303 handle = ocfs2_start_trans(osb, credits); ··· 621 617 } 622 618 } 623 619 624 - ret = ocfs2_lock_allocators_move_extents(inode, &context->et, len, 1, 625 - &context->meta_ac, 626 - NULL, extra_blocks, &credits); 620 + ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et, 621 + len, 1, 622 + &context->meta_ac, 623 + extra_blocks, &credits); 627 624 if (ret) { 628 625 mlog_errno(ret); 629 626 goto out;
+6 -9
fs/pstore/ram.c
··· 816 816 817 817 cxt->pstore.data = cxt; 818 818 /* 819 - * Console can handle any buffer size, so prefer LOG_LINE_MAX. If we 820 - * have to handle dumps, we must have at least record_size buffer. And 821 - * for ftrace, bufsize is irrelevant (if bufsize is 0, buf will be 822 - * ZERO_SIZE_PTR). 819 + * Since bufsize is only used for dmesg crash dumps, it 820 + * must match the size of the dprz record (after PRZ header 821 + * and ECC bytes have been accounted for). 823 822 */ 824 - if (cxt->console_size) 825 - cxt->pstore.bufsize = 1024; /* LOG_LINE_MAX */ 826 - cxt->pstore.bufsize = max(cxt->record_size, cxt->pstore.bufsize); 827 - cxt->pstore.buf = kmalloc(cxt->pstore.bufsize, GFP_KERNEL); 823 + cxt->pstore.bufsize = cxt->dprzs[0]->buffer_size; 824 + cxt->pstore.buf = kzalloc(cxt->pstore.bufsize, GFP_KERNEL); 828 825 if (!cxt->pstore.buf) { 829 - pr_err("cannot allocate pstore buffer\n"); 826 + pr_err("cannot allocate pstore crash dump buffer\n"); 830 827 err = -ENOMEM; 831 828 goto fail_clear; 832 829 }
+7 -8
fs/read_write.c
··· 2094 2094 off = same->src_offset; 2095 2095 len = same->src_length; 2096 2096 2097 - ret = -EISDIR; 2098 2097 if (S_ISDIR(src->i_mode)) 2099 - goto out; 2098 + return -EISDIR; 2100 2099 2101 - ret = -EINVAL; 2102 2100 if (!S_ISREG(src->i_mode)) 2103 - goto out; 2101 + return -EINVAL; 2102 + 2103 + if (!file->f_op->remap_file_range) 2104 + return -EOPNOTSUPP; 2104 2105 2105 2106 ret = remap_verify_area(file, off, len, false); 2106 2107 if (ret < 0) 2107 - goto out; 2108 + return ret; 2108 2109 ret = 0; 2109 2110 2110 2111 if (off + len > i_size_read(src)) ··· 2148 2147 fdput(dst_fd); 2149 2148 next_loop: 2150 2149 if (fatal_signal_pending(current)) 2151 - goto out; 2150 + break; 2152 2151 } 2153 - 2154 - out: 2155 2152 return ret; 2156 2153 } 2157 2154 EXPORT_SYMBOL(vfs_dedupe_file_range);
+1 -1
fs/sysv/inode.c
··· 275 275 } 276 276 } 277 277 brelse(bh); 278 - return 0; 278 + return err; 279 279 } 280 280 281 281 int sysv_write_inode(struct inode *inode, struct writeback_control *wbc)
+10 -6
fs/udf/super.c
··· 827 827 828 828 829 829 ret = udf_dstrCS0toChar(sb, outstr, 31, pvoldesc->volIdent, 32); 830 - if (ret < 0) 831 - goto out_bh; 832 - 833 - strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret); 830 + if (ret < 0) { 831 + strcpy(UDF_SB(sb)->s_volume_ident, "InvalidName"); 832 + pr_warn("incorrect volume identification, setting to " 833 + "'InvalidName'\n"); 834 + } else { 835 + strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret); 836 + } 834 837 udf_debug("volIdent[] = '%s'\n", UDF_SB(sb)->s_volume_ident); 835 838 836 839 ret = udf_dstrCS0toChar(sb, outstr, 127, pvoldesc->volSetIdent, 128); 837 - if (ret < 0) 840 + if (ret < 0) { 841 + ret = 0; 838 842 goto out_bh; 839 - 843 + } 840 844 outstr[ret] = 0; 841 845 udf_debug("volSetIdent[] = '%s'\n", outstr); 842 846
+11 -3
fs/udf/unicode.c
··· 351 351 return u_len; 352 352 } 353 353 354 + /* 355 + * Convert CS0 dstring to output charset. Warning: This function may truncate 356 + * input string if it is too long as it is used for informational strings only 357 + * and it is better to truncate the string than to refuse mounting a media. 358 + */ 354 359 int udf_dstrCS0toChar(struct super_block *sb, uint8_t *utf_o, int o_len, 355 360 const uint8_t *ocu_i, int i_len) 356 361 { ··· 364 359 if (i_len > 0) { 365 360 s_len = ocu_i[i_len - 1]; 366 361 if (s_len >= i_len) { 367 - pr_err("incorrect dstring lengths (%d/%d)\n", 368 - s_len, i_len); 369 - return -EINVAL; 362 + pr_warn("incorrect dstring lengths (%d/%d)," 363 + " truncating\n", s_len, i_len); 364 + s_len = i_len - 1; 365 + /* 2-byte encoding? Need to round properly... */ 366 + if (ocu_i[0] == 16) 367 + s_len -= (s_len - 1) & 2; 370 368 } 371 369 } 372 370
+15
fs/userfaultfd.c
··· 1361 1361 ret = -EINVAL; 1362 1362 if (!vma_can_userfault(cur)) 1363 1363 goto out_unlock; 1364 + 1365 + /* 1366 + * UFFDIO_COPY will fill file holes even without 1367 + * PROT_WRITE. This check enforces that if this is a 1368 + * MAP_SHARED, the process has write permission to the backing 1369 + * file. If VM_MAYWRITE is set it also enforces that on a 1370 + * MAP_SHARED vma: there is no F_WRITE_SEAL and no further 1371 + * F_WRITE_SEAL can be taken until the vma is destroyed. 1372 + */ 1373 + ret = -EPERM; 1374 + if (unlikely(!(cur->vm_flags & VM_MAYWRITE))) 1375 + goto out_unlock; 1376 + 1364 1377 /* 1365 1378 * If this vma contains ending address, and huge pages 1366 1379 * check alignment. ··· 1419 1406 BUG_ON(!vma_can_userfault(vma)); 1420 1407 BUG_ON(vma->vm_userfaultfd_ctx.ctx && 1421 1408 vma->vm_userfaultfd_ctx.ctx != ctx); 1409 + WARN_ON(!(vma->vm_flags & VM_MAYWRITE)); 1422 1410 1423 1411 /* 1424 1412 * Nothing to do: this vma is already registered into this ··· 1566 1552 cond_resched(); 1567 1553 1568 1554 BUG_ON(!vma_can_userfault(vma)); 1555 + WARN_ON(!(vma->vm_flags & VM_MAYWRITE)); 1569 1556 1570 1557 /* 1571 1558 * Nothing to do: this vma is already registered into this
+9 -2
fs/xfs/libxfs/xfs_attr_leaf.c
··· 243 243 struct xfs_mount *mp = bp->b_target->bt_mount; 244 244 struct xfs_attr_leafblock *leaf = bp->b_addr; 245 245 struct xfs_attr_leaf_entry *entries; 246 - uint16_t end; 246 + uint32_t end; /* must be 32bit - see below */ 247 247 int i; 248 248 249 249 xfs_attr3_leaf_hdr_from_disk(mp->m_attr_geo, &ichdr, leaf); ··· 293 293 /* 294 294 * Quickly check the freemap information. Attribute data has to be 295 295 * aligned to 4-byte boundaries, and likewise for the free space. 296 + * 297 + * Note that for 64k block size filesystems, the freemap entries cannot 298 + * overflow as they are only be16 fields. However, when checking end 299 + * pointer of the freemap, we have to be careful to detect overflows and 300 + * so use uint32_t for those checks. 296 301 */ 297 302 for (i = 0; i < XFS_ATTR_LEAF_MAPSIZE; i++) { 298 303 if (ichdr.freemap[i].base > mp->m_attr_geo->blksize) ··· 308 303 return __this_address; 309 304 if (ichdr.freemap[i].size & 0x3) 310 305 return __this_address; 311 - end = ichdr.freemap[i].base + ichdr.freemap[i].size; 306 + 307 + /* be care of 16 bit overflows here */ 308 + end = (uint32_t)ichdr.freemap[i].base + ichdr.freemap[i].size; 312 309 if (end < ichdr.freemap[i].base) 313 310 return __this_address; 314 311 if (end > mp->m_attr_geo->blksize)
+4 -1
fs/xfs/libxfs/xfs_bmap.c
··· 1694 1694 case BMAP_LEFT_FILLING | BMAP_RIGHT_FILLING | BMAP_RIGHT_CONTIG: 1695 1695 /* 1696 1696 * Filling in all of a previously delayed allocation extent. 1697 - * The right neighbor is contiguous, the left is not. 1697 + * The right neighbor is contiguous, the left is not. Take care 1698 + * with delay -> unwritten extent allocation here because the 1699 + * delalloc record we are overwriting is always written. 1698 1700 */ 1699 1701 PREV.br_startblock = new->br_startblock; 1700 1702 PREV.br_blockcount += RIGHT.br_blockcount; 1703 + PREV.br_state = new->br_state; 1701 1704 1702 1705 xfs_iext_next(ifp, &bma->icur); 1703 1706 xfs_iext_remove(bma->ip, &bma->icur, state);
+7 -4
fs/xfs/libxfs/xfs_ialloc_btree.c
··· 538 538 539 539 static xfs_extlen_t 540 540 xfs_inobt_max_size( 541 - struct xfs_mount *mp) 541 + struct xfs_mount *mp, 542 + xfs_agnumber_t agno) 542 543 { 544 + xfs_agblock_t agblocks = xfs_ag_block_count(mp, agno); 545 + 543 546 /* Bail out if we're uninitialized, which can happen in mkfs. */ 544 547 if (mp->m_inobt_mxr[0] == 0) 545 548 return 0; 546 549 547 550 return xfs_btree_calc_size(mp->m_inobt_mnr, 548 - (uint64_t)mp->m_sb.sb_agblocks * mp->m_sb.sb_inopblock / 549 - XFS_INODES_PER_CHUNK); 551 + (uint64_t)agblocks * mp->m_sb.sb_inopblock / 552 + XFS_INODES_PER_CHUNK); 550 553 } 551 554 552 555 static int ··· 597 594 if (error) 598 595 return error; 599 596 600 - *ask += xfs_inobt_max_size(mp); 597 + *ask += xfs_inobt_max_size(mp, agno); 601 598 *used += tree_len; 602 599 return 0; 603 600 }
+2 -8
fs/xfs/xfs_bmap_util.c
··· 1042 1042 goto out_unlock; 1043 1043 } 1044 1044 1045 - static int 1045 + int 1046 1046 xfs_flush_unmap_range( 1047 1047 struct xfs_inode *ip, 1048 1048 xfs_off_t offset, ··· 1195 1195 * Writeback and invalidate cache for the remainder of the file as we're 1196 1196 * about to shift down every extent from offset to EOF. 1197 1197 */ 1198 - error = filemap_write_and_wait_range(VFS_I(ip)->i_mapping, offset, -1); 1199 - if (error) 1200 - return error; 1201 - error = invalidate_inode_pages2_range(VFS_I(ip)->i_mapping, 1202 - offset >> PAGE_SHIFT, -1); 1203 - if (error) 1204 - return error; 1198 + error = xfs_flush_unmap_range(ip, offset, XFS_ISIZE(ip)); 1205 1199 1206 1200 /* 1207 1201 * Clean out anything hanging around in the cow fork now that
+3
fs/xfs/xfs_bmap_util.h
··· 80 80 int whichfork, xfs_extnum_t *nextents, 81 81 xfs_filblks_t *count); 82 82 83 + int xfs_flush_unmap_range(struct xfs_inode *ip, xfs_off_t offset, 84 + xfs_off_t len); 85 + 83 86 #endif /* __XFS_BMAP_UTIL_H__ */
+21 -7
fs/xfs/xfs_buf_item.c
··· 1233 1233 } 1234 1234 1235 1235 /* 1236 - * Requeue a failed buffer for writeback 1236 + * Requeue a failed buffer for writeback. 1237 1237 * 1238 - * Return true if the buffer has been re-queued properly, false otherwise 1238 + * We clear the log item failed state here as well, but we have to be careful 1239 + * about reference counts because the only active reference counts on the buffer 1240 + * may be the failed log items. Hence if we clear the log item failed state 1241 + * before queuing the buffer for IO we can release all active references to 1242 + * the buffer and free it, leading to use after free problems in 1243 + * xfs_buf_delwri_queue. It makes no difference to the buffer or log items which 1244 + * order we process them in - the buffer is locked, and we own the buffer list 1245 + * so nothing on them is going to change while we are performing this action. 1246 + * 1247 + * Hence we can safely queue the buffer for IO before we clear the failed log 1248 + * item state, therefore always having an active reference to the buffer and 1249 + * avoiding the transient zero-reference state that leads to use-after-free. 1250 + * 1251 + * Return true if the buffer was added to the buffer list, false if it was 1252 + * already on the buffer list. 1239 1253 */ 1240 1254 bool 1241 1255 xfs_buf_resubmit_failed_buffers( ··· 1257 1243 struct list_head *buffer_list) 1258 1244 { 1259 1245 struct xfs_log_item *lip; 1246 + bool ret; 1247 + 1248 + ret = xfs_buf_delwri_queue(bp, buffer_list); 1260 1249 1261 1250 /* 1262 - * Clear XFS_LI_FAILED flag from all items before resubmit 1263 - * 1264 - * XFS_LI_FAILED set/clear is protected by ail_lock, caller this 1251 + * XFS_LI_FAILED set/clear is protected by ail_lock, caller of this 1265 1252 * function already have it acquired 1266 1253 */ 1267 1254 list_for_each_entry(lip, &bp->b_li_list, li_bio_list) 1268 1255 xfs_clear_li_failed(lip); 1269 1256 1270 - /* Add this buffer back to the delayed write list */ 1271 - return xfs_buf_delwri_queue(bp, buffer_list); 1257 + return ret; 1272 1258 }
+1 -1
fs/xfs/xfs_file.c
··· 920 920 } 921 921 922 922 923 - loff_t 923 + STATIC loff_t 924 924 xfs_file_remap_range( 925 925 struct file *file_in, 926 926 loff_t pos_in,
+1 -1
fs/xfs/xfs_ioctl.c
··· 1608 1608 error = 0; 1609 1609 out_free_buf: 1610 1610 kmem_free(buf); 1611 - return 0; 1611 + return error; 1612 1612 } 1613 1613 1614 1614 struct getfsmap_info {
+1 -1
fs/xfs/xfs_message.c
··· 107 107 void 108 108 xfs_hex_dump(void *p, int length) 109 109 { 110 - print_hex_dump(KERN_ALERT, "", DUMP_PREFIX_ADDRESS, 16, 1, p, length, 1); 110 + print_hex_dump(KERN_ALERT, "", DUMP_PREFIX_OFFSET, 16, 1, p, length, 1); 111 111 }
+14 -4
fs/xfs/xfs_reflink.c
··· 296 296 if (error) 297 297 return error; 298 298 299 + xfs_trim_extent(imap, got.br_startoff, got.br_blockcount); 299 300 trace_xfs_reflink_cow_alloc(ip, &got); 300 301 return 0; 301 302 } ··· 1352 1351 if (ret) 1353 1352 goto out_unlock; 1354 1353 1355 - /* Zap any page cache for the destination file's range. */ 1356 - truncate_inode_pages_range(&inode_out->i_data, 1357 - round_down(pos_out, PAGE_SIZE), 1358 - round_up(pos_out + *len, PAGE_SIZE) - 1); 1354 + /* 1355 + * If pos_out > EOF, we may have dirtied blocks between EOF and 1356 + * pos_out. In that case, we need to extend the flush and unmap to cover 1357 + * from EOF to the end of the copy length. 1358 + */ 1359 + if (pos_out > XFS_ISIZE(dest)) { 1360 + loff_t flen = *len + (pos_out - XFS_ISIZE(dest)); 1361 + ret = xfs_flush_unmap_range(dest, XFS_ISIZE(dest), flen); 1362 + } else { 1363 + ret = xfs_flush_unmap_range(dest, pos_out, *len); 1364 + } 1365 + if (ret) 1366 + goto out_unlock; 1359 1367 1360 1368 return 1; 1361 1369 out_unlock:
+4 -1
fs/xfs/xfs_trace.h
··· 280 280 ), 281 281 TP_fast_assign( 282 282 __entry->dev = bp->b_target->bt_dev; 283 - __entry->bno = bp->b_bn; 283 + if (bp->b_bn == XFS_BUF_DADDR_NULL) 284 + __entry->bno = bp->b_maps[0].bm_bn; 285 + else 286 + __entry->bno = bp->b_bn; 284 287 __entry->nblks = bp->b_length; 285 288 __entry->hold = atomic_read(&bp->b_hold); 286 289 __entry->pincount = atomic_read(&bp->b_pin_count);
+1 -1
include/asm-generic/4level-fixup.h
··· 3 3 #define _4LEVEL_FIXUP_H 4 4 5 5 #define __ARCH_HAS_4LEVEL_HACK 6 - #define __PAGETABLE_PUD_FOLDED 6 + #define __PAGETABLE_PUD_FOLDED 1 7 7 8 8 #define PUD_SHIFT PGDIR_SHIFT 9 9 #define PUD_SIZE PGDIR_SIZE
+1 -1
include/asm-generic/5level-fixup.h
··· 3 3 #define _5LEVEL_FIXUP_H 4 4 5 5 #define __ARCH_HAS_5LEVEL_HACK 6 - #define __PAGETABLE_P4D_FOLDED 6 + #define __PAGETABLE_P4D_FOLDED 1 7 7 8 8 #define P4D_SHIFT PGDIR_SHIFT 9 9 #define P4D_SIZE PGDIR_SIZE
+1 -1
include/asm-generic/pgtable-nop4d-hack.h
··· 5 5 #ifndef __ASSEMBLY__ 6 6 #include <asm-generic/5level-fixup.h> 7 7 8 - #define __PAGETABLE_PUD_FOLDED 8 + #define __PAGETABLE_PUD_FOLDED 1 9 9 10 10 /* 11 11 * Having the pud type consist of a pgd gets the size right, and allows
+1 -1
include/asm-generic/pgtable-nop4d.h
··· 4 4 5 5 #ifndef __ASSEMBLY__ 6 6 7 - #define __PAGETABLE_P4D_FOLDED 7 + #define __PAGETABLE_P4D_FOLDED 1 8 8 9 9 typedef struct { pgd_t pgd; } p4d_t; 10 10
+1 -1
include/asm-generic/pgtable-nopmd.h
··· 8 8 9 9 struct mm_struct; 10 10 11 - #define __PAGETABLE_PMD_FOLDED 11 + #define __PAGETABLE_PMD_FOLDED 1 12 12 13 13 /* 14 14 * Having the pmd type consist of a pud gets the size right, and allows
+1 -1
include/asm-generic/pgtable-nopud.h
··· 9 9 #else 10 10 #include <asm-generic/pgtable-nop4d.h> 11 11 12 - #define __PAGETABLE_PUD_FOLDED 12 + #define __PAGETABLE_PUD_FOLDED 1 13 13 14 14 /* 15 15 * Having the pud type consist of a p4d gets the size right, and allows
+16
include/asm-generic/pgtable.h
··· 1127 1127 #endif 1128 1128 #endif 1129 1129 1130 + /* 1131 + * On some architectures it depends on the mm if the p4d/pud or pmd 1132 + * layer of the page table hierarchy is folded or not. 1133 + */ 1134 + #ifndef mm_p4d_folded 1135 + #define mm_p4d_folded(mm) __is_defined(__PAGETABLE_P4D_FOLDED) 1136 + #endif 1137 + 1138 + #ifndef mm_pud_folded 1139 + #define mm_pud_folded(mm) __is_defined(__PAGETABLE_PUD_FOLDED) 1140 + #endif 1141 + 1142 + #ifndef mm_pmd_folded 1143 + #define mm_pmd_folded(mm) __is_defined(__PAGETABLE_PMD_FOLDED) 1144 + #endif 1145 + 1130 1146 #endif /* _ASM_GENERIC_PGTABLE_H */
+1
include/linux/can/dev.h
··· 169 169 170 170 void can_put_echo_skb(struct sk_buff *skb, struct net_device *dev, 171 171 unsigned int idx); 172 + struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr); 172 173 unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx); 173 174 void can_free_echo_skb(struct net_device *dev, unsigned int idx); 174 175
+6 -1
include/linux/can/rx-offload.h
··· 41 41 int can_rx_offload_add_fifo(struct net_device *dev, struct can_rx_offload *offload, unsigned int weight); 42 42 int can_rx_offload_irq_offload_timestamp(struct can_rx_offload *offload, u64 reg); 43 43 int can_rx_offload_irq_offload_fifo(struct can_rx_offload *offload); 44 - int can_rx_offload_irq_queue_err_skb(struct can_rx_offload *offload, struct sk_buff *skb); 44 + int can_rx_offload_queue_sorted(struct can_rx_offload *offload, 45 + struct sk_buff *skb, u32 timestamp); 46 + unsigned int can_rx_offload_get_echo_skb(struct can_rx_offload *offload, 47 + unsigned int idx, u32 timestamp); 48 + int can_rx_offload_queue_tail(struct can_rx_offload *offload, 49 + struct sk_buff *skb); 45 50 void can_rx_offload_reset(struct can_rx_offload *offload); 46 51 void can_rx_offload_del(struct can_rx_offload *offload); 47 52 void can_rx_offload_enable(struct can_rx_offload *offload);
+1 -7
include/linux/ceph/ceph_features.h
··· 213 213 CEPH_FEATURE_NEW_OSDOPREPLY_ENCODING | \ 214 214 CEPH_FEATURE_CEPHX_V2) 215 215 216 - #define CEPH_FEATURES_REQUIRED_DEFAULT \ 217 - (CEPH_FEATURE_NOSRCADDR | \ 218 - CEPH_FEATURE_SUBSCRIBE2 | \ 219 - CEPH_FEATURE_RECONNECT_SEQ | \ 220 - CEPH_FEATURE_PGID64 | \ 221 - CEPH_FEATURE_PGPOOL3 | \ 222 - CEPH_FEATURE_OSDENC) 216 + #define CEPH_FEATURES_REQUIRED_DEFAULT 0 223 217 224 218 #endif
-12
include/linux/compiler-gcc.h
··· 143 143 #define KASAN_ABI_VERSION 3 144 144 #endif 145 145 146 - /* 147 - * Because __no_sanitize_address conflicts with inlining: 148 - * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67368 149 - * we do one or the other. 150 - */ 151 - #ifdef CONFIG_KASAN 152 - #define __no_sanitize_address_or_inline \ 153 - __no_sanitize_address __maybe_unused notrace 154 - #else 155 - #define __no_sanitize_address_or_inline inline 156 - #endif 157 - 158 146 #if GCC_VERSION >= 50100 159 147 #define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1 160 148 #endif
+1 -1
include/linux/compiler.h
··· 189 189 * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67368 190 190 * '__maybe_unused' allows us to avoid defined-but-not-used warnings. 191 191 */ 192 - # define __no_kasan_or_inline __no_sanitize_address __maybe_unused 192 + # define __no_kasan_or_inline __no_sanitize_address notrace __maybe_unused 193 193 #else 194 194 # define __no_kasan_or_inline __always_inline 195 195 #endif
+9 -5
include/linux/compiler_attributes.h
··· 4 4 5 5 /* 6 6 * The attributes in this file are unconditionally defined and they directly 7 - * map to compiler attribute(s) -- except those that are optional. 7 + * map to compiler attribute(s), unless one of the compilers does not support 8 + * the attribute. In that case, __has_attribute is used to check for support 9 + * and the reason is stated in its comment ("Optional: ..."). 8 10 * 9 11 * Any other "attributes" (i.e. those that depend on a configuration option, 10 12 * on a compiler, on an architecture, on plugins, on other attributes...) 11 13 * should be defined elsewhere (e.g. compiler_types.h or compiler-*.h). 14 + * The intention is to keep this file as simple as possible, as well as 15 + * compiler- and version-agnostic (e.g. avoiding GCC_VERSION checks). 12 16 * 13 17 * This file is meant to be sorted (by actual attribute name, 14 18 * not by #define identifier). Use the __attribute__((__name__)) syntax 15 19 * (i.e. with underscores) to avoid future collisions with other macros. 16 - * If an attribute is optional, state the reason in the comment. 20 + * Provide links to the documentation of each supported compiler, if it exists. 17 21 */ 18 22 19 23 /* 20 - * To check for optional attributes, we use __has_attribute, which is supported 21 - * on gcc >= 5, clang >= 2.9 and icc >= 17. In the meantime, to support 22 - * 4.6 <= gcc < 5, we implement __has_attribute by hand. 24 + * __has_attribute is supported on gcc >= 5, clang >= 2.9 and icc >= 17. 25 + * In the meantime, to support 4.6 <= gcc < 5, we implement __has_attribute 26 + * by hand. 23 27 * 24 28 * sparse does not support __has_attribute (yet) and defines __GNUC_MINOR__ 25 29 * depending on the compiler used to build it; however, these attributes have
+4
include/linux/compiler_types.h
··· 130 130 # define randomized_struct_fields_end 131 131 #endif 132 132 133 + #ifndef asm_volatile_goto 134 + #define asm_volatile_goto(x...) asm goto(x) 135 + #endif 136 + 133 137 /* Are two types/vars the same type (ignoring qualifiers)? */ 134 138 #define __same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b)) 135 139
+1 -1
include/linux/dma-direct.h
··· 5 5 #include <linux/dma-mapping.h> 6 6 #include <linux/mem_encrypt.h> 7 7 8 - #define DIRECT_MAPPING_ERROR 0 8 + #define DIRECT_MAPPING_ERROR (~(dma_addr_t)0) 9 9 10 10 #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA 11 11 #include <asm/dma-direct.h>
+7
include/linux/efi.h
··· 1167 1167 extern void efi_reboot(enum reboot_mode reboot_mode, const char *__unused); 1168 1168 1169 1169 extern bool efi_is_table_address(unsigned long phys_addr); 1170 + 1171 + extern int efi_apply_persistent_mem_reservations(void); 1170 1172 #else 1171 1173 static inline bool efi_enabled(int feature) 1172 1174 { ··· 1186 1184 static inline bool efi_is_table_address(unsigned long phys_addr) 1187 1185 { 1188 1186 return false; 1187 + } 1188 + 1189 + static inline int efi_apply_persistent_mem_reservations(void) 1190 + { 1191 + return 0; 1189 1192 } 1190 1193 #endif 1191 1194
+4
include/linux/filter.h
··· 866 866 867 867 void bpf_jit_free(struct bpf_prog *fp); 868 868 869 + int bpf_jit_get_func_addr(const struct bpf_prog *prog, 870 + const struct bpf_insn *insn, bool extra_pass, 871 + u64 *func_addr, bool *func_addr_fixed); 872 + 869 873 struct bpf_prog *bpf_jit_blind_constants(struct bpf_prog *fp); 870 874 void bpf_jit_prog_release_other(struct bpf_prog *fp, struct bpf_prog *fp_other); 871 875
+1 -2
include/linux/fscache-cache.h
··· 196 196 static inline void fscache_retrieval_complete(struct fscache_retrieval *op, 197 197 int n_pages) 198 198 { 199 - atomic_sub(n_pages, &op->n_pages); 200 - if (atomic_read(&op->n_pages) <= 0) 199 + if (atomic_sub_return_relaxed(n_pages, &op->n_pages) <= 0) 201 200 fscache_op_complete(&op->op, false); 202 201 } 203 202
+2 -2
include/linux/ftrace.h
··· 777 777 extern void return_to_handler(void); 778 778 779 779 extern int 780 - ftrace_push_return_trace(unsigned long ret, unsigned long func, int *depth, 781 - unsigned long frame_pointer, unsigned long *retp); 780 + function_graph_enter(unsigned long ret, unsigned long func, 781 + unsigned long frame_pointer, unsigned long *retp); 782 782 783 783 unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, 784 784 unsigned long ret, unsigned long *retp);
+3 -1
include/linux/hid-sensor-hub.h
··· 177 177 * @attr_usage_id: Attribute usage id as per spec 178 178 * @report_id: Report id to look for 179 179 * @flag: Synchronous or asynchronous read 180 + * @is_signed: If true then fields < 32 bits will be sign-extended 180 181 * 181 182 * Issues a synchronous or asynchronous read request for an input attribute. 182 183 * Returns data upto 32 bits. ··· 191 190 int sensor_hub_input_attr_get_raw_value(struct hid_sensor_hub_device *hsdev, 192 191 u32 usage_id, 193 192 u32 attr_usage_id, u32 report_id, 194 - enum sensor_hub_read_flags flag 193 + enum sensor_hub_read_flags flag, 194 + bool is_signed 195 195 ); 196 196 197 197 /**
+2 -30
include/linux/hid.h
··· 722 722 * input will not be passed to raw_event unless hid_device_io_start is 723 723 * called. 724 724 * 725 - * raw_event and event should return 0 on no action performed, 1 when no 726 - * further processing should be done and negative on error 725 + * raw_event and event should return negative on error, any other value will 726 + * pass the event on to .event() typically return 0 for success. 727 727 * 728 728 * input_mapping shall return a negative value to completely ignore this usage 729 729 * (e.g. doubled or invalid usage), zero to continue with parsing of this ··· 1138 1138 1139 1139 int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size, 1140 1140 int interrupt); 1141 - 1142 - 1143 - /** 1144 - * struct hid_scroll_counter - Utility class for processing high-resolution 1145 - * scroll events. 1146 - * @dev: the input device for which events should be reported. 1147 - * @microns_per_hi_res_unit: the amount moved by the user's finger for each 1148 - * high-resolution unit reported by the mouse, in 1149 - * microns. 1150 - * @resolution_multiplier: the wheel's resolution in high-resolution mode as a 1151 - * multiple of its lower resolution. For example, if 1152 - * moving the wheel by one "notch" would result in a 1153 - * value of 1 in low-resolution mode but 8 in 1154 - * high-resolution, the multiplier is 8. 1155 - * @remainder: counts the number of high-resolution units moved since the last 1156 - * low-resolution event (REL_WHEEL or REL_HWHEEL) was sent. Should 1157 - * only be used by class methods. 1158 - */ 1159 - struct hid_scroll_counter { 1160 - struct input_dev *dev; 1161 - int microns_per_hi_res_unit; 1162 - int resolution_multiplier; 1163 - 1164 - int remainder; 1165 - }; 1166 - 1167 - void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter, 1168 - int hi_res_value); 1169 1141 1170 1142 /* HID quirks API */ 1171 1143 unsigned long hid_lookup_quirk(const struct hid_device *hdev);
+1
include/linux/i8253.h
··· 21 21 #define PIT_LATCH ((PIT_TICK_RATE + HZ/2) / HZ) 22 22 23 23 extern raw_spinlock_t i8253_lock; 24 + extern bool i8253_clear_counter_on_shutdown; 24 25 extern struct clock_event_device i8253_clockevent; 25 26 extern void clockevent_i8253_init(bool oneshot); 26 27
+8 -4
include/linux/mlx5/mlx5_ifc.h
··· 2473 2473 2474 2474 u8 wq_signature[0x1]; 2475 2475 u8 cont_srq[0x1]; 2476 - u8 dbr_umem_valid[0x1]; 2476 + u8 reserved_at_22[0x1]; 2477 2477 u8 rlky[0x1]; 2478 2478 u8 basic_cyclic_rcv_wqe[0x1]; 2479 2479 u8 log_rq_stride[0x3]; 2480 2480 u8 xrcd[0x18]; 2481 2481 2482 2482 u8 page_offset[0x6]; 2483 - u8 reserved_at_46[0x2]; 2483 + u8 reserved_at_46[0x1]; 2484 + u8 dbr_umem_valid[0x1]; 2484 2485 u8 cqn[0x18]; 2485 2486 2486 2487 u8 reserved_at_60[0x20]; ··· 6690 6689 6691 6690 struct mlx5_ifc_xrc_srqc_bits xrc_srq_context_entry; 6692 6691 6693 - u8 reserved_at_280[0x40]; 6692 + u8 reserved_at_280[0x60]; 6693 + 6694 6694 u8 xrc_srq_umem_valid[0x1]; 6695 - u8 reserved_at_2c1[0x5bf]; 6695 + u8 reserved_at_2e1[0x1f]; 6696 + 6697 + u8 reserved_at_300[0x580]; 6696 6698 6697 6699 u8 pas[0][0x40]; 6698 6700 };
+8
include/linux/mm.h
··· 1744 1744 1745 1745 static inline void mm_inc_nr_puds(struct mm_struct *mm) 1746 1746 { 1747 + if (mm_pud_folded(mm)) 1748 + return; 1747 1749 atomic_long_add(PTRS_PER_PUD * sizeof(pud_t), &mm->pgtables_bytes); 1748 1750 } 1749 1751 1750 1752 static inline void mm_dec_nr_puds(struct mm_struct *mm) 1751 1753 { 1754 + if (mm_pud_folded(mm)) 1755 + return; 1752 1756 atomic_long_sub(PTRS_PER_PUD * sizeof(pud_t), &mm->pgtables_bytes); 1753 1757 } 1754 1758 #endif ··· 1772 1768 1773 1769 static inline void mm_inc_nr_pmds(struct mm_struct *mm) 1774 1770 { 1771 + if (mm_pmd_folded(mm)) 1772 + return; 1775 1773 atomic_long_add(PTRS_PER_PMD * sizeof(pmd_t), &mm->pgtables_bytes); 1776 1774 } 1777 1775 1778 1776 static inline void mm_dec_nr_pmds(struct mm_struct *mm) 1779 1777 { 1778 + if (mm_pmd_folded(mm)) 1779 + return; 1780 1780 atomic_long_sub(PTRS_PER_PMD * sizeof(pmd_t), &mm->pgtables_bytes); 1781 1781 } 1782 1782 #endif
+3 -4
include/linux/mtd/nand.h
··· 324 324 */ 325 325 static inline unsigned int nanddev_neraseblocks(const struct nand_device *nand) 326 326 { 327 - return (u64)nand->memorg.luns_per_target * 328 - nand->memorg.eraseblocks_per_lun * 329 - nand->memorg.pages_per_eraseblock; 327 + return nand->memorg.ntargets * nand->memorg.luns_per_target * 328 + nand->memorg.eraseblocks_per_lun; 330 329 } 331 330 332 331 /** ··· 568 569 } 569 570 570 571 /** 571 - * nanddev_pos_next_eraseblock() - Move a position to the next page 572 + * nanddev_pos_next_page() - Move a position to the next page 572 573 * @nand: NAND device 573 574 * @pos: the position to update 574 575 *
+2
include/linux/net_dim.h
··· 406 406 } 407 407 /* fall through */ 408 408 case NET_DIM_START_MEASURE: 409 + net_dim_sample(end_sample.event_ctr, end_sample.pkt_ctr, end_sample.byte_ctr, 410 + &dim->start_sample); 409 411 dim->state = NET_DIM_MEASURE_IN_PROGRESS; 410 412 break; 411 413 case NET_DIM_APPLY_NEW_PROFILE:
+20
include/linux/netdevice.h
··· 3190 3190 #endif 3191 3191 } 3192 3192 3193 + /* Variant of netdev_tx_sent_queue() for drivers that are aware 3194 + * that they should not test BQL status themselves. 3195 + * We do want to change __QUEUE_STATE_STACK_XOFF only for the last 3196 + * skb of a batch. 3197 + * Returns true if the doorbell must be used to kick the NIC. 3198 + */ 3199 + static inline bool __netdev_tx_sent_queue(struct netdev_queue *dev_queue, 3200 + unsigned int bytes, 3201 + bool xmit_more) 3202 + { 3203 + if (xmit_more) { 3204 + #ifdef CONFIG_BQL 3205 + dql_queued(&dev_queue->dql, bytes); 3206 + #endif 3207 + return netif_tx_queue_stopped(dev_queue); 3208 + } 3209 + netdev_tx_sent_queue(dev_queue, bytes); 3210 + return true; 3211 + } 3212 + 3193 3213 /** 3194 3214 * netdev_sent_queue - report the number of bytes queued to hardware 3195 3215 * @dev: network device
+1 -1
include/linux/netfilter/ipset/ip_set.h
··· 314 314 extern ip_set_id_t ip_set_get_byname(struct net *net, 315 315 const char *name, struct ip_set **set); 316 316 extern void ip_set_put_byindex(struct net *net, ip_set_id_t index); 317 - extern const char *ip_set_name_byindex(struct net *net, ip_set_id_t index); 317 + extern void ip_set_name_byindex(struct net *net, ip_set_id_t index, char *name); 318 318 extern ip_set_id_t ip_set_nfnl_get_byindex(struct net *net, ip_set_id_t index); 319 319 extern void ip_set_nfnl_put(struct net *net, ip_set_id_t index); 320 320
+2 -2
include/linux/netfilter/ipset/ip_set_comment.h
··· 43 43 rcu_assign_pointer(comment->c, c); 44 44 } 45 45 46 - /* Used only when dumping a set, protected by rcu_read_lock_bh() */ 46 + /* Used only when dumping a set, protected by rcu_read_lock() */ 47 47 static inline int 48 48 ip_set_put_comment(struct sk_buff *skb, const struct ip_set_comment *comment) 49 49 { 50 - struct ip_set_comment_rcu *c = rcu_dereference_bh(comment->c); 50 + struct ip_set_comment_rcu *c = rcu_dereference(comment->c); 51 51 52 52 if (!c) 53 53 return 0;
+13
include/linux/netfilter/nf_conntrack_proto_gre.h
··· 21 21 struct nf_conntrack_tuple tuple; 22 22 }; 23 23 24 + enum grep_conntrack { 25 + GRE_CT_UNREPLIED, 26 + GRE_CT_REPLIED, 27 + GRE_CT_MAX 28 + }; 29 + 30 + struct netns_proto_gre { 31 + struct nf_proto_net nf; 32 + rwlock_t keymap_lock; 33 + struct list_head keymap_list; 34 + unsigned int gre_timeouts[GRE_CT_MAX]; 35 + }; 36 + 24 37 /* add new tuple->key_reply pair to keymap */ 25 38 int nf_ct_gre_keymap_add(struct nf_conn *ct, enum ip_conntrack_dir dir, 26 39 struct nf_conntrack_tuple *t);
+2
include/linux/nmi.h
··· 119 119 void watchdog_nmi_stop(void); 120 120 void watchdog_nmi_start(void); 121 121 int watchdog_nmi_probe(void); 122 + int watchdog_nmi_enable(unsigned int cpu); 123 + void watchdog_nmi_disable(unsigned int cpu); 122 124 123 125 /** 124 126 * touch_nmi_watchdog - restart NMI watchdog timeout.
+2
include/linux/platform_data/gpio-davinci.h
··· 17 17 #define __DAVINCI_GPIO_PLATFORM_H 18 18 19 19 struct davinci_gpio_platform_data { 20 + bool no_auto_base; 21 + u32 base; 20 22 u32 ngpio; 21 23 u32 gpio_unbanked; 22 24 };
+2 -1
include/linux/psi.h
··· 1 1 #ifndef _LINUX_PSI_H 2 2 #define _LINUX_PSI_H 3 3 4 + #include <linux/jump_label.h> 4 5 #include <linux/psi_types.h> 5 6 #include <linux/sched.h> 6 7 ··· 10 9 11 10 #ifdef CONFIG_PSI 12 11 13 - extern bool psi_disabled; 12 + extern struct static_key_false psi_disabled; 14 13 15 14 void psi_init(void); 16 15
+4 -1
include/linux/pstore.h
··· 90 90 * 91 91 * @buf_lock: spinlock to serialize access to @buf 92 92 * @buf: preallocated crash dump buffer 93 - * @bufsize: size of @buf available for crash dump writes 93 + * @bufsize: size of @buf available for crash dump bytes (must match 94 + * smallest number of bytes available for writing to a 95 + * backend entry, since compressed bytes don't take kindly 96 + * to being truncated) 94 97 * 95 98 * @read_mutex: serializes @open, @read, @close, and @erase callbacks 96 99 * @flags: bitfield of frontends the backend can accept writes for
-17
include/linux/ptrace.h
··· 64 64 #define PTRACE_MODE_NOAUDIT 0x04 65 65 #define PTRACE_MODE_FSCREDS 0x08 66 66 #define PTRACE_MODE_REALCREDS 0x10 67 - #define PTRACE_MODE_SCHED 0x20 68 - #define PTRACE_MODE_IBPB 0x40 69 67 70 68 /* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */ 71 69 #define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS) 72 70 #define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS) 73 71 #define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS) 74 72 #define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_REALCREDS) 75 - #define PTRACE_MODE_SPEC_IBPB (PTRACE_MODE_ATTACH_REALCREDS | PTRACE_MODE_IBPB) 76 73 77 74 /** 78 75 * ptrace_may_access - check whether the caller is permitted to access ··· 86 89 * process_vm_writev or ptrace (and should use the real credentials). 87 90 */ 88 91 extern bool ptrace_may_access(struct task_struct *task, unsigned int mode); 89 - 90 - /** 91 - * ptrace_may_access - check whether the caller is permitted to access 92 - * a target task. 93 - * @task: target task 94 - * @mode: selects type of access and caller credentials 95 - * 96 - * Returns true on success, false on denial. 97 - * 98 - * Similar to ptrace_may_access(). Only to be called from context switch 99 - * code. Does not call into audit and the regular LSM hooks due to locking 100 - * constraints. 101 - */ 102 - extern bool ptrace_may_access_sched(struct task_struct *task, unsigned int mode); 103 92 104 93 static inline int ptrace_reparented(struct task_struct *child) 105 94 {
+10
include/linux/sched.h
··· 1116 1116 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 1117 1117 /* Index of current stored address in ret_stack: */ 1118 1118 int curr_ret_stack; 1119 + int curr_ret_depth; 1119 1120 1120 1121 /* Stack of return addresses for return function tracing: */ 1121 1122 struct ftrace_ret_stack *ret_stack; ··· 1454 1453 #define PFA_SPREAD_SLAB 2 /* Spread some slab caches over cpuset */ 1455 1454 #define PFA_SPEC_SSB_DISABLE 3 /* Speculative Store Bypass disabled */ 1456 1455 #define PFA_SPEC_SSB_FORCE_DISABLE 4 /* Speculative Store Bypass force disabled*/ 1456 + #define PFA_SPEC_IB_DISABLE 5 /* Indirect branch speculation restricted */ 1457 + #define PFA_SPEC_IB_FORCE_DISABLE 6 /* Indirect branch speculation permanently restricted */ 1457 1458 1458 1459 #define TASK_PFA_TEST(name, func) \ 1459 1460 static inline bool task_##func(struct task_struct *p) \ ··· 1486 1483 1487 1484 TASK_PFA_TEST(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable) 1488 1485 TASK_PFA_SET(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable) 1486 + 1487 + TASK_PFA_TEST(SPEC_IB_DISABLE, spec_ib_disable) 1488 + TASK_PFA_SET(SPEC_IB_DISABLE, spec_ib_disable) 1489 + TASK_PFA_CLEAR(SPEC_IB_DISABLE, spec_ib_disable) 1490 + 1491 + TASK_PFA_TEST(SPEC_IB_FORCE_DISABLE, spec_ib_force_disable) 1492 + TASK_PFA_SET(SPEC_IB_FORCE_DISABLE, spec_ib_force_disable) 1489 1493 1490 1494 static inline void 1491 1495 current_restore_flags(unsigned long orig_flags, unsigned long flags)
+20
include/linux/sched/smt.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _LINUX_SCHED_SMT_H 3 + #define _LINUX_SCHED_SMT_H 4 + 5 + #include <linux/static_key.h> 6 + 7 + #ifdef CONFIG_SCHED_SMT 8 + extern struct static_key_false sched_smt_present; 9 + 10 + static __always_inline bool sched_smt_active(void) 11 + { 12 + return static_branch_likely(&sched_smt_present); 13 + } 14 + #else 15 + static inline bool sched_smt_active(void) { return false; } 16 + #endif 17 + 18 + void arch_smt_update(void); 19 + 20 + #endif
+17 -1
include/linux/skbuff.h
··· 1326 1326 } 1327 1327 } 1328 1328 1329 + static inline void skb_zcopy_set_nouarg(struct sk_buff *skb, void *val) 1330 + { 1331 + skb_shinfo(skb)->destructor_arg = (void *)((uintptr_t) val | 0x1UL); 1332 + skb_shinfo(skb)->tx_flags |= SKBTX_ZEROCOPY_FRAG; 1333 + } 1334 + 1335 + static inline bool skb_zcopy_is_nouarg(struct sk_buff *skb) 1336 + { 1337 + return (uintptr_t) skb_shinfo(skb)->destructor_arg & 0x1UL; 1338 + } 1339 + 1340 + static inline void *skb_zcopy_get_nouarg(struct sk_buff *skb) 1341 + { 1342 + return (void *)((uintptr_t) skb_shinfo(skb)->destructor_arg & ~0x1UL); 1343 + } 1344 + 1329 1345 /* Release a reference on a zerocopy structure */ 1330 1346 static inline void skb_zcopy_clear(struct sk_buff *skb, bool zerocopy) 1331 1347 { ··· 1351 1335 if (uarg->callback == sock_zerocopy_callback) { 1352 1336 uarg->zerocopy = uarg->zerocopy && zerocopy; 1353 1337 sock_zerocopy_put(uarg); 1354 - } else { 1338 + } else if (!skb_zcopy_is_nouarg(skb)) { 1355 1339 uarg->callback(uarg, zerocopy); 1356 1340 } 1357 1341
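The `skb_zcopy_*_nouarg()` helpers added above store a flag in the low bit of `destructor_arg` instead of allocating a separate ubuf structure. The trick relies on pointer alignment leaving bit 0 free. A minimal userspace model (names illustrative, not a kernel API):

```c
#include <stdint.h>

/* Minimal userspace model of the pointer-tagging trick used by the new
 * skb_zcopy_*_nouarg() helpers: the low bit of a (suitably aligned)
 * pointer doubles as a flag. Assumes the tagged object is at least
 * 2-byte aligned so bit 0 is always clear in the untagged pointer. */
static void *tag_nouarg(void *val)
{
	return (void *)((uintptr_t)val | 0x1UL);
}

static int is_nouarg(void *stored)
{
	return (uintptr_t)stored & 0x1UL;
}

static void *get_nouarg(void *stored)
{
	return (void *)((uintptr_t)stored & ~0x1UL);
}
```

This is why `skb_zcopy_clear()` above gains the `!skb_zcopy_is_nouarg(skb)` test: a tagged value is not a real `uarg` and must not have its callback invoked.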
+1
include/linux/tcp.h
··· 196 196 u32 rcv_tstamp; /* timestamp of last received ACK (for keepalives) */ 197 197 u32 lsndtime; /* timestamp of last sent data packet (for restart window) */ 198 198 u32 last_oow_ack_time; /* timestamp of last out-of-window ACK */ 199 + u32 compressed_ack_rcv_nxt; 199 200 200 201 u32 tsoffset; /* timestamp offset */ 201 202
+2 -2
include/linux/tracehook.h
··· 83 83 * tracehook_report_syscall_entry - task is about to attempt a system call 84 84 * @regs: user register state of current task 85 85 * 86 - * This will be called if %TIF_SYSCALL_TRACE has been set, when the 87 - * current task has just entered the kernel for a system call. 86 + * This will be called if %TIF_SYSCALL_TRACE or %TIF_SYSCALL_EMU have been set, 87 + * when the current task has just entered the kernel for a system call. 88 88 * Full user register state is available here. Changing the values 89 89 * in @regs can affect the system call number and arguments to be tried. 90 90 * It is safe to block here, preventing the system call from beginning.
+3 -3
include/linux/tracepoint.h
··· 166 166 struct tracepoint_func *it_func_ptr; \ 167 167 void *it_func; \ 168 168 void *__data; \ 169 - int __maybe_unused idx = 0; \ 169 + int __maybe_unused __idx = 0; \ 170 170 \ 171 171 if (!(cond)) \ 172 172 return; \ ··· 182 182 * doesn't work from the idle path. \ 183 183 */ \ 184 184 if (rcuidle) { \ 185 - idx = srcu_read_lock_notrace(&tracepoint_srcu); \ 185 + __idx = srcu_read_lock_notrace(&tracepoint_srcu);\ 186 186 rcu_irq_enter_irqson(); \ 187 187 } \ 188 188 \ ··· 198 198 \ 199 199 if (rcuidle) { \ 200 200 rcu_irq_exit_irqson(); \ 201 - srcu_read_unlock_notrace(&tracepoint_srcu, idx);\ 201 + srcu_read_unlock_notrace(&tracepoint_srcu, __idx);\ 202 202 } \ 203 203 \ 204 204 preempt_enable_notrace(); \
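The `idx` to `__idx` rename above guards against name capture: a caller passing a tracepoint argument expression that mentions its own variable named `idx` would silently pick up the macro-local one instead. A small model of the hazard (macros here are illustrative, not the kernel's):

```c
static int sink;

/* Model of the shadowing hazard behind the idx -> __idx rename: a
 * macro-local variable named "idx" captures a caller argument spelled
 * the same way, so the caller's value is lost inside the expansion.
 * Using a reserved-style name like __idx avoids the collision. */
#define TRACE_SHADOWED(expr)  do { int idx = 0; sink = (expr) + idx; } while (0)
#define TRACE_FIXED(expr)     do { int __idx = 0; sink = (expr) + __idx; } while (0)
```

With `TRACE_SHADOWED(idx)`, the `expr` expands inside the block where the macro-local `idx` (zero) shadows the caller's; `TRACE_FIXED(idx)` preserves the caller's value.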
+3
include/linux/usb/quirks.h
··· 66 66 /* Device needs a pause after every control message. */ 67 67 #define USB_QUIRK_DELAY_CTRL_MSG BIT(13) 68 68 69 + /* Hub needs extra delay after resetting its port. */ 70 + #define USB_QUIRK_HUB_SLOW_RESET BIT(14) 71 + 69 72 #endif /* __LINUX_USB_QUIRKS_H */
+203 -64
include/linux/xarray.h
··· 289 289 void xa_init_flags(struct xarray *, gfp_t flags); 290 290 void *xa_load(struct xarray *, unsigned long index); 291 291 void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t); 292 - void *xa_cmpxchg(struct xarray *, unsigned long index, 293 - void *old, void *entry, gfp_t); 294 - int xa_reserve(struct xarray *, unsigned long index, gfp_t); 292 + void *xa_erase(struct xarray *, unsigned long index); 295 293 void *xa_store_range(struct xarray *, unsigned long first, unsigned long last, 296 294 void *entry, gfp_t); 297 295 bool xa_get_mark(struct xarray *, unsigned long index, xa_mark_t); ··· 339 341 static inline bool xa_marked(const struct xarray *xa, xa_mark_t mark) 340 342 { 341 343 return xa->xa_flags & XA_FLAGS_MARK(mark); 342 - } 343 - 344 - /** 345 - * xa_erase() - Erase this entry from the XArray. 346 - * @xa: XArray. 347 - * @index: Index of entry. 348 - * 349 - * This function is the equivalent of calling xa_store() with %NULL as 350 - * the third argument. The XArray does not need to allocate memory, so 351 - * the user does not need to provide GFP flags. 352 - * 353 - * Context: Process context. Takes and releases the xa_lock. 354 - * Return: The entry which used to be at this index. 355 - */ 356 - static inline void *xa_erase(struct xarray *xa, unsigned long index) 357 - { 358 - return xa_store(xa, index, NULL, 0); 359 - } 360 - 361 - /** 362 - * xa_insert() - Store this entry in the XArray unless another entry is 363 - * already present. 364 - * @xa: XArray. 365 - * @index: Index into array. 366 - * @entry: New entry. 367 - * @gfp: Memory allocation flags. 368 - * 369 - * If you would rather see the existing entry in the array, use xa_cmpxchg(). 370 - * This function is for users who don't care what the entry is, only that 371 - * one is present. 372 - * 373 - * Context: Process context. Takes and releases the xa_lock. 374 - * May sleep if the @gfp flags permit. 375 - * Return: 0 if the store succeeded. 
-EEXIST if another entry was present. 376 - * -ENOMEM if memory could not be allocated. 377 - */ 378 - static inline int xa_insert(struct xarray *xa, unsigned long index, 379 - void *entry, gfp_t gfp) 380 - { 381 - void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp); 382 - if (!curr) 383 - return 0; 384 - if (xa_is_err(curr)) 385 - return xa_err(curr); 386 - return -EEXIST; 387 - } 388 - 389 - /** 390 - * xa_release() - Release a reserved entry. 391 - * @xa: XArray. 392 - * @index: Index of entry. 393 - * 394 - * After calling xa_reserve(), you can call this function to release the 395 - * reservation. If the entry at @index has been stored to, this function 396 - * will do nothing. 397 - */ 398 - static inline void xa_release(struct xarray *xa, unsigned long index) 399 - { 400 - xa_cmpxchg(xa, index, NULL, NULL, 0); 401 344 } 402 345 403 346 /** ··· 394 455 void *__xa_cmpxchg(struct xarray *, unsigned long index, void *old, 395 456 void *entry, gfp_t); 396 457 int __xa_alloc(struct xarray *, u32 *id, u32 max, void *entry, gfp_t); 458 + int __xa_reserve(struct xarray *, unsigned long index, gfp_t); 397 459 void __xa_set_mark(struct xarray *, unsigned long index, xa_mark_t); 398 460 void __xa_clear_mark(struct xarray *, unsigned long index, xa_mark_t); 399 461 ··· 427 487 } 428 488 429 489 /** 490 + * xa_store_bh() - Store this entry in the XArray. 491 + * @xa: XArray. 492 + * @index: Index into array. 493 + * @entry: New entry. 494 + * @gfp: Memory allocation flags. 495 + * 496 + * This function is like calling xa_store() except it disables softirqs 497 + * while holding the array lock. 498 + * 499 + * Context: Any context. Takes and releases the xa_lock while 500 + * disabling softirqs. 501 + * Return: The entry which used to be at this index. 
502 + */ 503 + static inline void *xa_store_bh(struct xarray *xa, unsigned long index, 504 + void *entry, gfp_t gfp) 505 + { 506 + void *curr; 507 + 508 + xa_lock_bh(xa); 509 + curr = __xa_store(xa, index, entry, gfp); 510 + xa_unlock_bh(xa); 511 + 512 + return curr; 513 + } 514 + 515 + /** 516 + * xa_store_irq() - Erase this entry from the XArray. 517 + * @xa: XArray. 518 + * @index: Index into array. 519 + * @entry: New entry. 520 + * @gfp: Memory allocation flags. 521 + * 522 + * This function is like calling xa_store() except it disables interrupts 523 + * while holding the array lock. 524 + * 525 + * Context: Process context. Takes and releases the xa_lock while 526 + * disabling interrupts. 527 + * Return: The entry which used to be at this index. 528 + */ 529 + static inline void *xa_store_irq(struct xarray *xa, unsigned long index, 530 + void *entry, gfp_t gfp) 531 + { 532 + void *curr; 533 + 534 + xa_lock_irq(xa); 535 + curr = __xa_store(xa, index, entry, gfp); 536 + xa_unlock_irq(xa); 537 + 538 + return curr; 539 + } 540 + 541 + /** 430 542 * xa_erase_bh() - Erase this entry from the XArray. 431 543 * @xa: XArray. 432 544 * @index: Index of entry. ··· 487 495 * the third argument. The XArray does not need to allocate memory, so 488 496 * the user does not need to provide GFP flags. 489 497 * 490 - * Context: Process context. Takes and releases the xa_lock while 498 + * Context: Any context. Takes and releases the xa_lock while 491 499 * disabling softirqs. 492 500 * Return: The entry which used to be at this index. 493 501 */ ··· 524 532 xa_unlock_irq(xa); 525 533 526 534 return entry; 535 + } 536 + 537 + /** 538 + * xa_cmpxchg() - Conditionally replace an entry in the XArray. 539 + * @xa: XArray. 540 + * @index: Index into array. 541 + * @old: Old value to test against. 542 + * @entry: New value to place in array. 543 + * @gfp: Memory allocation flags. 544 + * 545 + * If the entry at @index is the same as @old, replace it with @entry. 
546 + * If the return value is equal to @old, then the exchange was successful. 547 + * 548 + * Context: Any context. Takes and releases the xa_lock. May sleep 549 + * if the @gfp flags permit. 550 + * Return: The old value at this index or xa_err() if an error happened. 551 + */ 552 + static inline void *xa_cmpxchg(struct xarray *xa, unsigned long index, 553 + void *old, void *entry, gfp_t gfp) 554 + { 555 + void *curr; 556 + 557 + xa_lock(xa); 558 + curr = __xa_cmpxchg(xa, index, old, entry, gfp); 559 + xa_unlock(xa); 560 + 561 + return curr; 562 + } 563 + 564 + /** 565 + * xa_insert() - Store this entry in the XArray unless another entry is 566 + * already present. 567 + * @xa: XArray. 568 + * @index: Index into array. 569 + * @entry: New entry. 570 + * @gfp: Memory allocation flags. 571 + * 572 + * If you would rather see the existing entry in the array, use xa_cmpxchg(). 573 + * This function is for users who don't care what the entry is, only that 574 + * one is present. 575 + * 576 + * Context: Process context. Takes and releases the xa_lock. 577 + * May sleep if the @gfp flags permit. 578 + * Return: 0 if the store succeeded. -EEXIST if another entry was present. 579 + * -ENOMEM if memory could not be allocated. 580 + */ 581 + static inline int xa_insert(struct xarray *xa, unsigned long index, 582 + void *entry, gfp_t gfp) 583 + { 584 + void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp); 585 + if (!curr) 586 + return 0; 587 + if (xa_is_err(curr)) 588 + return xa_err(curr); 589 + return -EEXIST; 527 590 } 528 591 529 592 /** ··· 622 575 * Updates the @id pointer with the index, then stores the entry at that 623 576 * index. A concurrent lookup will not see an uninitialised @id. 624 577 * 625 - * Context: Process context. Takes and releases the xa_lock while 578 + * Context: Any context. Takes and releases the xa_lock while 626 579 * disabling softirqs. May sleep if the @gfp flags permit. 
627 580 * Return: 0 on success, -ENOMEM if memory allocation fails or -ENOSPC if 628 581 * there is no more space in the XArray. ··· 666 619 xa_unlock_irq(xa); 667 620 668 621 return err; 622 + } 623 + 624 + /** 625 + * xa_reserve() - Reserve this index in the XArray. 626 + * @xa: XArray. 627 + * @index: Index into array. 628 + * @gfp: Memory allocation flags. 629 + * 630 + * Ensures there is somewhere to store an entry at @index in the array. 631 + * If there is already something stored at @index, this function does 632 + * nothing. If there was nothing there, the entry is marked as reserved. 633 + * Loading from a reserved entry returns a %NULL pointer. 634 + * 635 + * If you do not use the entry that you have reserved, call xa_release() 636 + * or xa_erase() to free any unnecessary memory. 637 + * 638 + * Context: Any context. Takes and releases the xa_lock. 639 + * May sleep if the @gfp flags permit. 640 + * Return: 0 if the reservation succeeded or -ENOMEM if it failed. 641 + */ 642 + static inline 643 + int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp) 644 + { 645 + int ret; 646 + 647 + xa_lock(xa); 648 + ret = __xa_reserve(xa, index, gfp); 649 + xa_unlock(xa); 650 + 651 + return ret; 652 + } 653 + 654 + /** 655 + * xa_reserve_bh() - Reserve this index in the XArray. 656 + * @xa: XArray. 657 + * @index: Index into array. 658 + * @gfp: Memory allocation flags. 659 + * 660 + * A softirq-disabling version of xa_reserve(). 661 + * 662 + * Context: Any context. Takes and releases the xa_lock while 663 + * disabling softirqs. 664 + * Return: 0 if the reservation succeeded or -ENOMEM if it failed. 665 + */ 666 + static inline 667 + int xa_reserve_bh(struct xarray *xa, unsigned long index, gfp_t gfp) 668 + { 669 + int ret; 670 + 671 + xa_lock_bh(xa); 672 + ret = __xa_reserve(xa, index, gfp); 673 + xa_unlock_bh(xa); 674 + 675 + return ret; 676 + } 677 + 678 + /** 679 + * xa_reserve_irq() - Reserve this index in the XArray. 680 + * @xa: XArray. 
681 + * @index: Index into array. 682 + * @gfp: Memory allocation flags. 683 + * 684 + * An interrupt-disabling version of xa_reserve(). 685 + * 686 + * Context: Process context. Takes and releases the xa_lock while 687 + * disabling interrupts. 688 + * Return: 0 if the reservation succeeded or -ENOMEM if it failed. 689 + */ 690 + static inline 691 + int xa_reserve_irq(struct xarray *xa, unsigned long index, gfp_t gfp) 692 + { 693 + int ret; 694 + 695 + xa_lock_irq(xa); 696 + ret = __xa_reserve(xa, index, gfp); 697 + xa_unlock_irq(xa); 698 + 699 + return ret; 700 + } 701 + 702 + /** 703 + * xa_release() - Release a reserved entry. 704 + * @xa: XArray. 705 + * @index: Index of entry. 706 + * 707 + * After calling xa_reserve(), you can call this function to release the 708 + * reservation. If the entry at @index has been stored to, this function 709 + * will do nothing. 710 + */ 711 + static inline void xa_release(struct xarray *xa, unsigned long index) 712 + { 713 + xa_cmpxchg(xa, index, NULL, NULL, 0); 669 714 } 670 715 671 716 /* Everything below here is the Advanced API. Proceed with caution. */
+1 -1
include/media/v4l2-mem2mem.h
··· 624 624 625 625 /* v4l2 request helper */ 626 626 627 - void vb2_m2m_request_queue(struct media_request *req); 627 + void v4l2_m2m_request_queue(struct media_request *req); 628 628 629 629 /* v4l2 ioctl helpers */ 630 630
+2
include/net/addrconf.h
··· 317 317 const struct in6_addr *addr); 318 318 bool ipv6_chk_acast_addr_src(struct net *net, struct net_device *dev, 319 319 const struct in6_addr *addr); 320 + int ipv6_anycast_init(void); 321 + void ipv6_anycast_cleanup(void); 320 322 321 323 /* Device notifier */ 322 324 int register_inet6addr_notifier(struct notifier_block *nb);
+2 -1
include/net/af_rxrpc.h
··· 77 77 struct sockaddr_rxrpc *, struct key *); 78 78 int rxrpc_kernel_check_call(struct socket *, struct rxrpc_call *, 79 79 enum rxrpc_call_completion *, u32 *); 80 - u32 rxrpc_kernel_check_life(struct socket *, struct rxrpc_call *); 80 + u32 rxrpc_kernel_check_life(const struct socket *, const struct rxrpc_call *); 81 + void rxrpc_kernel_probe_life(struct socket *, struct rxrpc_call *); 81 82 u32 rxrpc_kernel_get_epoch(struct socket *, struct rxrpc_call *); 82 83 bool rxrpc_kernel_get_reply_time(struct socket *, struct rxrpc_call *, 83 84 ktime_t *);
+2
include/net/if_inet6.h
··· 146 146 struct in6_addr aca_addr; 147 147 struct fib6_info *aca_rt; 148 148 struct ifacaddr6 *aca_next; 149 + struct hlist_node aca_addr_lst; 149 150 int aca_users; 150 151 refcount_t aca_refcnt; 151 152 unsigned long aca_cstamp; 152 153 unsigned long aca_tstamp; 154 + struct rcu_head rcu; 153 155 }; 154 156 155 157 #define IFA_HOST IPV6_ADDR_LOOPBACK
+1 -1
include/net/netfilter/ipv4/nf_nat_masquerade.h
··· 9 9 const struct nf_nat_range2 *range, 10 10 const struct net_device *out); 11 11 12 - void nf_nat_masquerade_ipv4_register_notifier(void); 12 + int nf_nat_masquerade_ipv4_register_notifier(void); 13 13 void nf_nat_masquerade_ipv4_unregister_notifier(void); 14 14 15 15 #endif /*_NF_NAT_MASQUERADE_IPV4_H_ */
+1 -1
include/net/netfilter/ipv6/nf_nat_masquerade.h
··· 5 5 unsigned int 6 6 nf_nat_masquerade_ipv6(struct sk_buff *skb, const struct nf_nat_range2 *range, 7 7 const struct net_device *out); 8 - void nf_nat_masquerade_ipv6_register_notifier(void); 8 + int nf_nat_masquerade_ipv6_register_notifier(void); 9 9 void nf_nat_masquerade_ipv6_unregister_notifier(void); 10 10 11 11 #endif /* _NF_NAT_MASQUERADE_IPV6_H_ */
+39
include/net/netfilter/nf_conntrack_l4proto.h
··· 153 153 const char *fmt, ...) { } 154 154 #endif /* CONFIG_SYSCTL */ 155 155 156 + static inline struct nf_generic_net *nf_generic_pernet(struct net *net) 157 + { 158 + return &net->ct.nf_ct_proto.generic; 159 + } 160 + 161 + static inline struct nf_tcp_net *nf_tcp_pernet(struct net *net) 162 + { 163 + return &net->ct.nf_ct_proto.tcp; 164 + } 165 + 166 + static inline struct nf_udp_net *nf_udp_pernet(struct net *net) 167 + { 168 + return &net->ct.nf_ct_proto.udp; 169 + } 170 + 171 + static inline struct nf_icmp_net *nf_icmp_pernet(struct net *net) 172 + { 173 + return &net->ct.nf_ct_proto.icmp; 174 + } 175 + 176 + static inline struct nf_icmp_net *nf_icmpv6_pernet(struct net *net) 177 + { 178 + return &net->ct.nf_ct_proto.icmpv6; 179 + } 180 + 181 + #ifdef CONFIG_NF_CT_PROTO_DCCP 182 + static inline struct nf_dccp_net *nf_dccp_pernet(struct net *net) 183 + { 184 + return &net->ct.nf_ct_proto.dccp; 185 + } 186 + #endif 187 + 188 + #ifdef CONFIG_NF_CT_PROTO_SCTP 189 + static inline struct nf_sctp_net *nf_sctp_pernet(struct net *net) 190 + { 191 + return &net->ct.nf_ct_proto.sctp; 192 + } 193 + #endif 194 + 156 195 #endif /*_NF_CONNTRACK_PROTOCOL_H*/
+12
include/net/sctp/sctp.h
··· 608 608 SCTP_DEFAULT_MINSEGMENT)); 609 609 } 610 610 611 + static inline bool sctp_transport_pmtu_check(struct sctp_transport *t) 612 + { 613 + __u32 pmtu = sctp_dst_mtu(t->dst); 614 + 615 + if (t->pathmtu == pmtu) 616 + return true; 617 + 618 + t->pathmtu = pmtu; 619 + 620 + return false; 621 + } 622 + 611 623 #endif /* __net_sctp_h__ */
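The new `sctp_transport_pmtu_check()` above is a cache check-and-update: it reports whether the cached path MTU still matches the destination's, updating the cache when it does not. A userspace sketch of the same pattern (the struct is a stand-in, not the kernel's `sctp_transport`):

```c
#include <stdbool.h>

/* Sketch of the check-and-update pattern in sctp_transport_pmtu_check():
 * true means "unchanged, nothing to do"; false means the cached value
 * was stale and has now been refreshed by the check itself. */
struct transport_model {
	unsigned int pathmtu;
};

static bool pmtu_check(struct transport_model *t, unsigned int pmtu)
{
	if (t->pathmtu == pmtu)
		return true;

	t->pathmtu = pmtu;
	return false;
}
```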
+1 -1
include/sound/soc.h
··· 1192 1192 ((i) < rtd->num_codecs) && ((dai) = rtd->codec_dais[i]); \ 1193 1193 (i)++) 1194 1194 #define for_each_rtd_codec_dai_rollback(rtd, i, dai) \ 1195 - for (; ((i--) >= 0) && ((dai) = rtd->codec_dais[i]);) 1195 + for (; ((--i) >= 0) && ((dai) = rtd->codec_dais[i]);) 1196 1196 1197 1197 1198 1198 /* mixer control */
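The one-character `(i--)` to `(--i)` change above fixes an off-by-one: with post-decrement, the condition still passes when `i` is 0 and the body then indexes `codec_dais[-1]`. Pre-decrement stops before the underflow. A compilable sketch of the corrected loop shape (array and names illustrative, not the ASoC API):

```c
/* Mirrors for_each_rtd_codec_dai_rollback() after the fix: rolling back
 * from i == n visits indices n-1 .. 0 exactly once each. With the old
 * post-decrement form, the body would run one extra time on index -1. */
static const char *demo_dais[3] = { "cpu", "codec0", "codec1" };

static int count_rollback_visits(int i)
{
	const char *dai;
	int visits = 0;

	for (; ((--i) >= 0) && ((dai) = demo_dais[i]); )
		visits++;

	return visits;
}
```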
+4 -4
include/trace/events/kyber.h
··· 31 31 32 32 TP_fast_assign( 33 33 __entry->dev = disk_devt(dev_to_disk(kobj_to_dev(q->kobj.parent))); 34 - strlcpy(__entry->domain, domain, DOMAIN_LEN); 35 - strlcpy(__entry->type, type, DOMAIN_LEN); 34 + strlcpy(__entry->domain, domain, sizeof(__entry->domain)); 35 + strlcpy(__entry->type, type, sizeof(__entry->type)); 36 36 __entry->percentile = percentile; 37 37 __entry->numerator = numerator; 38 38 __entry->denominator = denominator; ··· 60 60 61 61 TP_fast_assign( 62 62 __entry->dev = disk_devt(dev_to_disk(kobj_to_dev(q->kobj.parent))); 63 - strlcpy(__entry->domain, domain, DOMAIN_LEN); 63 + strlcpy(__entry->domain, domain, sizeof(__entry->domain)); 64 64 __entry->depth = depth; 65 65 ), 66 66 ··· 82 82 83 83 TP_fast_assign( 84 84 __entry->dev = disk_devt(dev_to_disk(kobj_to_dev(q->kobj.parent))); 85 - strlcpy(__entry->domain, domain, DOMAIN_LEN); 85 + strlcpy(__entry->domain, domain, sizeof(__entry->domain)); 86 86 ), 87 87 88 88 TP_printk("%d,%d %s", MAJOR(__entry->dev), MINOR(__entry->dev),
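The kyber trace fix above replaces hard-coded length constants with `sizeof(__entry->...)`, so each copy is bounded by its own destination (the original bounded the `type` copy by `DOMAIN_LEN`, a constant tied to a different field). Since `strlcpy()` is not ISO C, a minimal local version illustrates the truncation behavior being relied on:

```c
#include <string.h>

/* Minimal strlcpy()-alike, only for illustration: copies at most
 * size - 1 bytes, always NUL-terminates when size > 0, and returns
 * the full source length so callers can detect truncation. */
static size_t local_strlcpy(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size) {
		size_t n = (len < size - 1) ? len : size - 1;

		memcpy(dst, src, n);
		dst[n] = '\0';
	}
	return len;
}
```

Bounding by `sizeof(dest)` makes the call stay correct even if a field's length changes later, which is the point of the fix.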
+2
include/trace/events/rxrpc.h
··· 181 181 enum rxrpc_propose_ack_trace { 182 182 rxrpc_propose_ack_client_tx_end, 183 183 rxrpc_propose_ack_input_data, 184 + rxrpc_propose_ack_ping_for_check_life, 184 185 rxrpc_propose_ack_ping_for_keepalive, 185 186 rxrpc_propose_ack_ping_for_lost_ack, 186 187 rxrpc_propose_ack_ping_for_lost_reply, ··· 381 380 #define rxrpc_propose_ack_traces \ 382 381 EM(rxrpc_propose_ack_client_tx_end, "ClTxEnd") \ 383 382 EM(rxrpc_propose_ack_input_data, "DataIn ") \ 383 + EM(rxrpc_propose_ack_ping_for_check_life, "ChkLife") \ 384 384 EM(rxrpc_propose_ack_ping_for_keepalive, "KeepAlv") \ 385 385 EM(rxrpc_propose_ack_ping_for_lost_ack, "LostAck") \ 386 386 EM(rxrpc_propose_ack_ping_for_lost_reply, "LostRpl") \
+11 -1
include/trace/events/sched.h
··· 107 107 #ifdef CREATE_TRACE_POINTS 108 108 static inline long __trace_sched_switch_state(bool preempt, struct task_struct *p) 109 109 { 110 + unsigned int state; 111 + 110 112 #ifdef CONFIG_SCHED_DEBUG 111 113 BUG_ON(p != current); 112 114 #endif /* CONFIG_SCHED_DEBUG */ ··· 120 118 if (preempt) 121 119 return TASK_REPORT_MAX; 122 120 123 - return 1 << task_state_index(p); 121 + /* 122 + * task_state_index() uses fls() and returns a value from 0-8 range. 123 + * Decrement it by 1 (except TASK_RUNNING state i.e 0) before using 124 + * it for left shift operation to get the correct task->state 125 + * mapping. 126 + */ 127 + state = task_state_index(p); 128 + 129 + return state ? (1 << (state - 1)) : state; 124 130 } 125 131 #endif /* CREATE_TRACE_POINTS */ 126 132
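The sched trace fix above accounts for `task_state_index()` returning a compacted `fls()`-style index in 0..8 rather than a bit number: index 0 is TASK_RUNNING and must stay 0, while index n corresponds to state bit n-1. A sketch of the mapping (values illustrative, not exact kernel state constants):

```c
/* Maps a compacted fls()-style state index (0..8) back to the one-hot
 * task->state bitmask, leaving 0 (TASK_RUNNING) unchanged. Shifting by
 * the raw index, as before the fix, reported every state one bit high. */
static unsigned int index_to_state_mask(unsigned int state)
{
	return state ? (1U << (state - 1)) : state;
}
```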
-10
include/uapi/linux/input-event-codes.h
··· 716 716 * the situation described above. 717 717 */ 718 718 #define REL_RESERVED 0x0a 719 - #define REL_WHEEL_HI_RES 0x0b 720 719 #define REL_MAX 0x0f 721 720 #define REL_CNT (REL_MAX+1) 722 721 ··· 751 752 #define ABS_VOLUME 0x20 752 753 753 754 #define ABS_MISC 0x28 754 - 755 - /* 756 - * 0x2e is reserved and should not be used in input drivers. 757 - * It was used by HID as ABS_MISC+6 and userspace needs to detect if 758 - * the next ABS_* event is correct or is just ABS_MISC + n. 759 - * We define here ABS_RESERVED so userspace can rely on it and detect 760 - * the situation described above. 761 - */ 762 - #define ABS_RESERVED 0x2e 763 755 764 756 #define ABS_MT_SLOT 0x2f /* MT slot being modified */ 765 757 #define ABS_MT_TOUCH_MAJOR 0x30 /* Major axis of touching ellipse */
+9 -9
include/uapi/linux/kfd_ioctl.h
··· 83 83 }; 84 84 85 85 struct kfd_ioctl_get_queue_wave_state_args { 86 - uint64_t ctl_stack_address; /* to KFD */ 87 - uint32_t ctl_stack_used_size; /* from KFD */ 88 - uint32_t save_area_used_size; /* from KFD */ 89 - uint32_t queue_id; /* to KFD */ 90 - uint32_t pad; 86 + __u64 ctl_stack_address; /* to KFD */ 87 + __u32 ctl_stack_used_size; /* from KFD */ 88 + __u32 save_area_used_size; /* from KFD */ 89 + __u32 queue_id; /* to KFD */ 90 + __u32 pad; 91 91 }; 92 92 93 93 /* For kfd_ioctl_set_memory_policy_args.default_policy and alternate_policy */ ··· 255 255 256 256 /* hw exception data */ 257 257 struct kfd_hsa_hw_exception_data { 258 - uint32_t reset_type; 259 - uint32_t reset_cause; 260 - uint32_t memory_lost; 261 - uint32_t gpu_id; 258 + __u32 reset_type; 259 + __u32 reset_cause; 260 + __u32 memory_lost; 261 + __u32 gpu_id; 262 262 }; 263 263 264 264 /* Event data */
+2 -2
include/uapi/linux/netfilter/nf_tables.h
··· 1635 1635 NFTA_NG_MODULUS, 1636 1636 NFTA_NG_TYPE, 1637 1637 NFTA_NG_OFFSET, 1638 - NFTA_NG_SET_NAME, 1639 - NFTA_NG_SET_ID, 1638 + NFTA_NG_SET_NAME, /* deprecated */ 1639 + NFTA_NG_SET_ID, /* deprecated */ 1640 1640 __NFTA_NG_MAX 1641 1641 }; 1642 1642 #define NFTA_NG_MAX (__NFTA_NG_MAX - 1)
+4
include/uapi/linux/netfilter_bridge.h
··· 11 11 #include <linux/if_vlan.h> 12 12 #include <linux/if_pppox.h> 13 13 14 + #ifndef __KERNEL__ 15 + #include <limits.h> /* for INT_MIN, INT_MAX */ 16 + #endif 17 + 14 18 /* Bridge Hooks */ 15 19 /* After promisc drops, checksum checks. */ 16 20 #define NF_BR_PRE_ROUTING 0
+1
include/uapi/linux/prctl.h
··· 212 212 #define PR_SET_SPECULATION_CTRL 53 213 213 /* Speculation control variants */ 214 214 # define PR_SPEC_STORE_BYPASS 0 215 + # define PR_SPEC_INDIRECT_BRANCH 1 215 216 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */ 216 217 # define PR_SPEC_NOT_AFFECTED 0 217 218 # define PR_SPEC_PRCTL (1UL << 0)
+3
include/uapi/linux/sctp.h
··· 568 568 569 569 #define SCTP_ASSOC_CHANGE_DENIED 0x0004 570 570 #define SCTP_ASSOC_CHANGE_FAILED 0x0008 571 + #define SCTP_STREAM_CHANGE_DENIED SCTP_ASSOC_CHANGE_DENIED 572 + #define SCTP_STREAM_CHANGE_FAILED SCTP_ASSOC_CHANGE_FAILED 571 573 struct sctp_stream_change_event { 572 574 __u16 strchange_type; 573 575 __u16 strchange_flags; ··· 1153 1151 /* SCTP Stream schedulers */ 1154 1152 enum sctp_sched_type { 1155 1153 SCTP_SS_FCFS, 1154 + SCTP_SS_DEFAULT = SCTP_SS_FCFS, 1156 1155 SCTP_SS_PRIO, 1157 1156 SCTP_SS_RR, 1158 1157 SCTP_SS_MAX = SCTP_SS_RR
+5
include/uapi/linux/v4l2-controls.h
··· 50 50 #ifndef __LINUX_V4L2_CONTROLS_H 51 51 #define __LINUX_V4L2_CONTROLS_H 52 52 53 + #include <linux/types.h> 54 + 53 55 /* Control classes */ 54 56 #define V4L2_CTRL_CLASS_USER 0x00980000 /* Old-style 'user' controls */ 55 57 #define V4L2_CTRL_CLASS_MPEG 0x00990000 /* MPEG-compression controls */ ··· 1112 1110 __u8 profile_and_level_indication; 1113 1111 __u8 progressive_sequence; 1114 1112 __u8 chroma_format; 1113 + __u8 pad; 1115 1114 }; 1116 1115 1117 1116 struct v4l2_mpeg2_picture { ··· 1131 1128 __u8 alternate_scan; 1132 1129 __u8 repeat_first_field; 1133 1130 __u8 progressive_frame; 1131 + __u8 pad; 1134 1132 }; 1135 1133 1136 1134 struct v4l2_ctrl_mpeg2_slice_params { ··· 1146 1142 1147 1143 __u8 backward_ref_index; 1148 1144 __u8 forward_ref_index; 1145 + __u8 pad; 1149 1146 }; 1150 1147 1151 1148 struct v4l2_ctrl_mpeg2_quantization {
-5
include/xen/balloon.h
··· 44 44 { 45 45 } 46 46 #endif 47 - 48 - #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG 49 - struct resource; 50 - void arch_xen_balloon_init(struct resource *hostmem_resource); 51 - #endif
+7 -5
include/xen/xen-ops.h
··· 42 42 43 43 extern unsigned long *xen_contiguous_bitmap; 44 44 45 - #ifdef CONFIG_XEN_PV 45 + #if defined(CONFIG_XEN_PV) || defined(CONFIG_ARM) || defined(CONFIG_ARM64) 46 46 int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order, 47 47 unsigned int address_bits, 48 48 dma_addr_t *dma_handle); 49 49 50 50 void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order); 51 - 52 - int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr, 53 - xen_pfn_t *pfn, int nr, int *err_ptr, pgprot_t prot, 54 - unsigned int domid, bool no_translate, struct page **pages); 55 51 #else 56 52 static inline int xen_create_contiguous_region(phys_addr_t pstart, 57 53 unsigned int order, ··· 59 63 60 64 static inline void xen_destroy_contiguous_region(phys_addr_t pstart, 61 65 unsigned int order) { } 66 + #endif 62 67 68 + #if defined(CONFIG_XEN_PV) 69 + int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr, 70 + xen_pfn_t *pfn, int nr, int *err_ptr, pgprot_t prot, 71 + unsigned int domid, bool no_translate, struct page **pages); 72 + #else 63 73 static inline int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr, 64 74 xen_pfn_t *pfn, int nr, int *err_ptr, 65 75 pgprot_t prot, unsigned int domid,
+9
init/Kconfig
··· 509 509 510 510 Say N if unsure. 511 511 512 + config PSI_DEFAULT_DISABLED 513 + bool "Require boot parameter to enable pressure stall information tracking" 514 + default n 515 + depends on PSI 516 + help 517 + If set, pressure stall information tracking will be disabled 518 + per default but can be enabled through passing psi_enable=1 519 + on the kernel commandline during boot. 520 + 512 521 endmenu # "CPU/Task time and stats accounting" 513 522 514 523 config CPU_ISOLATION
+12 -10
init/initramfs.c
··· 291 291 return 1; 292 292 } 293 293 294 - static int __init maybe_link(void) 295 - { 296 - if (nlink >= 2) { 297 - char *old = find_link(major, minor, ino, mode, collected); 298 - if (old) 299 - return (ksys_link(old, collected) < 0) ? -1 : 1; 300 - } 301 - return 0; 302 - } 303 - 304 294 static void __init clean_path(char *path, umode_t fmode) 305 295 { 306 296 struct kstat st; ··· 301 311 else 302 312 ksys_unlink(path); 303 313 } 314 + } 315 + 316 + static int __init maybe_link(void) 317 + { 318 + if (nlink >= 2) { 319 + char *old = find_link(major, minor, ino, mode, collected); 320 + if (old) { 321 + clean_path(collected, 0); 322 + return (ksys_link(old, collected) < 0) ? -1 : 1; 323 + } 324 + } 325 + return 0; 304 326 } 305 327 306 328 static __initdata int wfd;
+35 -3
kernel/bpf/core.c
··· 553 553 int bpf_get_kallsym(unsigned int symnum, unsigned long *value, char *type, 554 554 char *sym) 555 555 { 556 - unsigned long symbol_start, symbol_end; 557 556 struct bpf_prog_aux *aux; 558 557 unsigned int it = 0; 559 558 int ret = -ERANGE; ··· 565 566 if (it++ != symnum) 566 567 continue; 567 568 568 - bpf_get_prog_addr_region(aux->prog, &symbol_start, &symbol_end); 569 569 bpf_get_prog_name(aux->prog, sym); 570 570 571 - *value = symbol_start; 571 + *value = (unsigned long)aux->prog->bpf_func; 572 572 *type = BPF_SYM_ELF_TYPE; 573 573 574 574 ret = 0; ··· 670 672 } 671 673 672 674 bpf_prog_unlock_free(fp); 675 + } 676 + 677 + int bpf_jit_get_func_addr(const struct bpf_prog *prog, 678 + const struct bpf_insn *insn, bool extra_pass, 679 + u64 *func_addr, bool *func_addr_fixed) 680 + { 681 + s16 off = insn->off; 682 + s32 imm = insn->imm; 683 + u8 *addr; 684 + 685 + *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL; 686 + if (!*func_addr_fixed) { 687 + /* Place-holder address till the last pass has collected 688 + * all addresses for JITed subprograms in which case we 689 + * can pick them up from prog->aux. 690 + */ 691 + if (!extra_pass) 692 + addr = NULL; 693 + else if (prog->aux->func && 694 + off >= 0 && off < prog->aux->func_cnt) 695 + addr = (u8 *)prog->aux->func[off]->bpf_func; 696 + else 697 + return -EINVAL; 698 + } else { 699 + /* Address of a BPF helper call. Since part of the core 700 + * kernel, it's always at a fixed location. __bpf_call_base 701 + * and the helper with imm relative to it are both in core 702 + * kernel. 703 + */ 704 + addr = (u8 *)__bpf_call_base + imm; 705 + } 706 + 707 + *func_addr = (unsigned long)addr; 708 + return 0; 673 709 } 674 710 675 711 static int bpf_jit_blind_insn(const struct bpf_insn *from,
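A condensed model of the two cases the new `bpf_jit_get_func_addr()` distinguishes (types, names, and the subprog table are simplified stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stdint.h>

#define PSEUDO_CALL 1  /* stand-in for BPF_PSEUDO_CALL */

/* Helper calls resolve to a fixed address relative to a base; BPF-to-BPF
 * calls are only resolvable on the extra JIT pass, from the subprog table. */
static int get_func_addr(int src_reg, int32_t imm, uintptr_t helper_base,
                         const uintptr_t *subprogs, int subprog_cnt,
                         int16_t off, int extra_pass, uintptr_t *addr)
{
    if (src_reg != PSEUDO_CALL) {
        *addr = helper_base + imm;   /* fixed helper address */
        return 0;
    }
    if (!extra_pass) {
        *addr = 0;                   /* place-holder until the last pass */
        return 0;
    }
    if (subprogs && off >= 0 && off < subprog_cnt) {
        *addr = subprogs[off];       /* resolved subprog address */
        return 0;
    }
    return -1;                       /* -EINVAL in the kernel */
}
```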
+2 -1
kernel/bpf/local_storage.c
··· 139 139 return -ENOENT; 140 140 141 141 new = kmalloc_node(sizeof(struct bpf_storage_buffer) + 142 - map->value_size, __GFP_ZERO | GFP_USER, 142 + map->value_size, 143 + __GFP_ZERO | GFP_ATOMIC | __GFP_NOWARN, 143 144 map->numa_node); 144 145 if (!new) 145 146 return -ENOMEM;
+8 -8
kernel/bpf/queue_stack_maps.c
··· 7 7 #include <linux/bpf.h> 8 8 #include <linux/list.h> 9 9 #include <linux/slab.h> 10 + #include <linux/capability.h> 10 11 #include "percpu_freelist.h" 11 12 12 13 #define QUEUE_STACK_CREATE_FLAG_MASK \ ··· 46 45 /* Called from syscall */ 47 46 static int queue_stack_map_alloc_check(union bpf_attr *attr) 48 47 { 48 + if (!capable(CAP_SYS_ADMIN)) 49 + return -EPERM; 50 + 49 51 /* check sanity of attributes */ 50 52 if (attr->max_entries == 0 || attr->key_size != 0 || 53 + attr->value_size == 0 || 51 54 attr->map_flags & ~QUEUE_STACK_CREATE_FLAG_MASK) 52 55 return -EINVAL; 53 56 ··· 68 63 { 69 64 int ret, numa_node = bpf_map_attr_numa_node(attr); 70 65 struct bpf_queue_stack *qs; 71 - u32 size, value_size; 72 - u64 queue_size, cost; 66 + u64 size, queue_size, cost; 73 67 74 - size = attr->max_entries + 1; 75 - value_size = attr->value_size; 76 - 77 - queue_size = sizeof(*qs) + (u64) value_size * size; 78 - 79 - cost = queue_size; 68 + size = (u64) attr->max_entries + 1; 69 + cost = queue_size = sizeof(*qs) + size * attr->value_size; 80 70 if (cost >= U32_MAX - PAGE_SIZE) 81 71 return ERR_PTR(-E2BIG); 82 72
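The sizing change above guards against 32-bit overflow: with `attr->max_entries` near `UINT32_MAX`, the old `u32` arithmetic could wrap before the `U32_MAX - PAGE_SIZE` check ever ran. A small demonstration with plain C types in place of the kernel's:

```c
#include <assert.h>
#include <stdint.h>

/* Size computed the fixed way: widen to 64 bits before adding and
 * multiplying, so the later range check sees the true value. */
static uint64_t queue_cost(uint32_t max_entries, uint32_t value_size,
                           uint64_t header)
{
    uint64_t size = (uint64_t)max_entries + 1;
    return header + size * value_size;
}

/* Size computed the old way, for contrast: wraps in 32 bits. */
static uint32_t queue_cost_u32(uint32_t max_entries, uint32_t value_size,
                               uint32_t header)
{
    uint32_t size = max_entries + 1;  /* can wrap to 0 */
    return header + size * value_size;
}
```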
+25 -10
kernel/bpf/syscall.c
··· 2078 2078 info.jited_prog_len = 0; 2079 2079 info.xlated_prog_len = 0; 2080 2080 info.nr_jited_ksyms = 0; 2081 + info.nr_jited_func_lens = 0; 2081 2082 goto done; 2082 2083 } 2083 2084 ··· 2159 2158 } 2160 2159 2161 2160 ulen = info.nr_jited_ksyms; 2162 - info.nr_jited_ksyms = prog->aux->func_cnt; 2161 + info.nr_jited_ksyms = prog->aux->func_cnt ? : 1; 2163 2162 if (info.nr_jited_ksyms && ulen) { 2164 2163 if (bpf_dump_raw_ok()) { 2164 + unsigned long ksym_addr; 2165 2165 u64 __user *user_ksyms; 2166 - ulong ksym_addr; 2167 2166 u32 i; 2168 2167 2169 2168 /* copy the address of the kernel symbol ··· 2171 2170 */ 2172 2171 ulen = min_t(u32, info.nr_jited_ksyms, ulen); 2173 2172 user_ksyms = u64_to_user_ptr(info.jited_ksyms); 2174 - for (i = 0; i < ulen; i++) { 2175 - ksym_addr = (ulong) prog->aux->func[i]->bpf_func; 2176 - ksym_addr &= PAGE_MASK; 2177 - if (put_user((u64) ksym_addr, &user_ksyms[i])) 2173 + if (prog->aux->func_cnt) { 2174 + for (i = 0; i < ulen; i++) { 2175 + ksym_addr = (unsigned long) 2176 + prog->aux->func[i]->bpf_func; 2177 + if (put_user((u64) ksym_addr, 2178 + &user_ksyms[i])) 2179 + return -EFAULT; 2180 + } 2181 + } else { 2182 + ksym_addr = (unsigned long) prog->bpf_func; 2183 + if (put_user((u64) ksym_addr, &user_ksyms[0])) 2178 2184 return -EFAULT; 2179 2185 } 2180 2186 } else { ··· 2190 2182 } 2191 2183 2192 2184 ulen = info.nr_jited_func_lens; 2193 - info.nr_jited_func_lens = prog->aux->func_cnt; 2185 + info.nr_jited_func_lens = prog->aux->func_cnt ? : 1; 2194 2186 if (info.nr_jited_func_lens && ulen) { 2195 2187 if (bpf_dump_raw_ok()) { 2196 2188 u32 __user *user_lens; ··· 2199 2191 /* copy the JITed image lengths for each function */ 2200 2192 ulen = min_t(u32, info.nr_jited_func_lens, ulen); 2201 2193 user_lens = u64_to_user_ptr(info.jited_func_lens); 2202 - for (i = 0; i < ulen; i++) { 2203 - func_len = prog->aux->func[i]->jited_len; 2204 - if (put_user(func_len, &user_lens[i])) 2194 + if (prog->aux->func_cnt) { 2195 + for (i = 0; i < ulen; i++) { 2196 + func_len = 2197 + prog->aux->func[i]->jited_len; 2198 + if (put_user(func_len, &user_lens[i])) 2199 + return -EFAULT; 2200 + } 2201 + } else { 2202 + func_len = prog->jited_len; 2203 + if (put_user(func_len, &user_lens[0])) 2205 2204 return -EFAULT; 2206 2205 } 2207 2206 } else {
+1 -1
kernel/bpf/verifier.c
··· 5650 5650 return; 5651 5651 /* NOTE: fake 'exit' subprog should be updated as well. */ 5652 5652 for (i = 0; i <= env->subprog_cnt; i++) { 5653 - if (env->subprog_info[i].start < off) 5653 + if (env->subprog_info[i].start <= off) 5654 5654 continue; 5655 5655 env->subprog_info[i].start += len - 1; 5656 5656 }
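The one-character change above fixes an off-by-one in subprog start adjustment: after the instruction at `off` is patched into `len` instructions, a subprog that begins *at* `off` must keep its start, while starts strictly after `off` shift by `len - 1`. A toy version of the loop:

```c
#include <assert.h>

/* Shift subprog start offsets after replacing one insn at `off`
 * with `len` insns; starts at or before `off` stay put. */
static void adjust_starts(int *starts, int n, int off, int len)
{
    for (int i = 0; i < n; i++) {
        if (starts[i] <= off)
            continue;
        starts[i] += len - 1;
    }
}
```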
+9 -6
kernel/cpu.c
··· 10 10 #include <linux/sched/signal.h> 11 11 #include <linux/sched/hotplug.h> 12 12 #include <linux/sched/task.h> 13 + #include <linux/sched/smt.h> 13 14 #include <linux/unistd.h> 14 15 #include <linux/cpu.h> 15 16 #include <linux/oom.h> ··· 367 366 } 368 367 369 368 #endif /* CONFIG_HOTPLUG_CPU */ 369 + 370 + /* 371 + * Architectures that need SMT-specific errata handling during SMT hotplug 372 + * should override this. 373 + */ 374 + void __weak arch_smt_update(void) { } 370 375 371 376 #ifdef CONFIG_HOTPLUG_SMT 372 377 enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED; ··· 1018 1011 * concurrent CPU hotplug via cpu_add_remove_lock. 1019 1012 */ 1020 1013 lockup_detector_cleanup(); 1014 + arch_smt_update(); 1021 1015 return ret; 1022 1016 } 1023 1017 ··· 1147 1139 ret = cpuhp_up_callbacks(cpu, st, target); 1148 1140 out: 1149 1141 cpus_write_unlock(); 1142 + arch_smt_update(); 1150 1143 return ret; 1151 1144 } 1152 1145 ··· 2063 2054 /* Tell user space about the state change */ 2064 2055 kobject_uevent(&dev->kobj, KOBJ_ONLINE); 2065 2056 } 2066 - 2067 - /* 2068 - * Architectures that need SMT-specific errata handling during SMT hotplug 2069 - * should override this. 2070 - */ 2071 - void __weak arch_smt_update(void) { }; 2072 2057 2073 2058 static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval) 2074 2059 {
+2 -2
kernel/debug/kdb/kdb_bt.c
··· 179 179 kdb_printf("no process for cpu %ld\n", cpu); 180 180 return 0; 181 181 } 182 - sprintf(buf, "btt 0x%p\n", KDB_TSK(cpu)); 182 + sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu)); 183 183 kdb_parse(buf); 184 184 return 0; 185 185 } 186 186 kdb_printf("btc: cpu status: "); 187 187 kdb_parse("cpu\n"); 188 188 for_each_online_cpu(cpu) { 189 - sprintf(buf, "btt 0x%p\n", KDB_TSK(cpu)); 189 + sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu)); 190 190 kdb_parse(buf); 191 191 touch_nmi_watchdog(); 192 192 }
+9 -6
kernel/debug/kdb/kdb_io.c
··· 216 216 int count; 217 217 int i; 218 218 int diag, dtab_count; 219 - int key; 219 + int key, buf_size, ret; 220 220 221 221 222 222 diag = kdbgetintenv("DTABCOUNT", &dtab_count); ··· 336 336 else 337 337 p_tmp = tmpbuffer; 338 338 len = strlen(p_tmp); 339 - count = kallsyms_symbol_complete(p_tmp, 340 - sizeof(tmpbuffer) - 341 - (p_tmp - tmpbuffer)); 339 + buf_size = sizeof(tmpbuffer) - (p_tmp - tmpbuffer); 340 + count = kallsyms_symbol_complete(p_tmp, buf_size); 342 341 if (tab == 2 && count > 0) { 343 342 kdb_printf("\n%d symbols are found.", count); 344 343 if (count > dtab_count) { ··· 349 350 } 350 351 kdb_printf("\n"); 351 352 for (i = 0; i < count; i++) { 352 - if (WARN_ON(!kallsyms_symbol_next(p_tmp, i))) 353 + ret = kallsyms_symbol_next(p_tmp, i, buf_size); 354 + if (WARN_ON(!ret)) 353 355 break; 354 - kdb_printf("%s ", p_tmp); 356 + if (ret != -E2BIG) 357 + kdb_printf("%s ", p_tmp); 358 + else 359 + kdb_printf("%s... ", p_tmp); 355 360 *(p_tmp + len) = '\0'; 356 361 } 357 362 if (i >= dtab_count)
+2 -2
kernel/debug/kdb/kdb_keyboard.c
··· 173 173 case KT_LATIN: 174 174 if (isprint(keychar)) 175 175 break; /* printable characters */ 176 - /* drop through */ 176 + /* fall through */ 177 177 case KT_SPEC: 178 178 if (keychar == K_ENTER) 179 179 break; 180 - /* drop through */ 180 + /* fall through */ 181 181 default: 182 182 return -1; /* ignore unprintables */ 183 183 }
+10 -25
kernel/debug/kdb/kdb_main.c
··· 1192 1192 if (reason == KDB_REASON_DEBUG) { 1193 1193 /* special case below */ 1194 1194 } else { 1195 - kdb_printf("\nEntering kdb (current=0x%p, pid %d) ", 1195 + kdb_printf("\nEntering kdb (current=0x%px, pid %d) ", 1196 1196 kdb_current, kdb_current ? kdb_current->pid : 0); 1197 1197 #if defined(CONFIG_SMP) 1198 1198 kdb_printf("on processor %d ", raw_smp_processor_id()); ··· 1208 1208 */ 1209 1209 switch (db_result) { 1210 1210 case KDB_DB_BPT: 1211 - kdb_printf("\nEntering kdb (0x%p, pid %d) ", 1211 + kdb_printf("\nEntering kdb (0x%px, pid %d) ", 1212 1212 kdb_current, kdb_current->pid); 1213 1213 #if defined(CONFIG_SMP) 1214 1214 kdb_printf("on processor %d ", raw_smp_processor_id()); ··· 1493 1493 char cbuf[32]; 1494 1494 char *c = cbuf; 1495 1495 int i; 1496 + int j; 1496 1497 unsigned long word; 1497 1498 1498 1499 memset(cbuf, '\0', sizeof(cbuf)); ··· 1539 1538 wc.word = word; 1540 1539 #define printable_char(c) \ 1541 1540 ({unsigned char __c = c; isascii(__c) && isprint(__c) ? __c : '.'; }) 1542 - switch (bytesperword) { 1543 - case 8: 1541 + for (j = 0; j < bytesperword; j++) 1544 1542 *c++ = printable_char(*cp++); 1545 - *c++ = printable_char(*cp++); 1546 - *c++ = printable_char(*cp++); 1547 - *c++ = printable_char(*cp++); 1548 - addr += 4; 1549 - case 4: 1550 - *c++ = printable_char(*cp++); 1551 - *c++ = printable_char(*cp++); 1552 - addr += 2; 1553 - case 2: 1554 - *c++ = printable_char(*cp++); 1555 - addr++; 1556 - case 1: 1557 - *c++ = printable_char(*cp++); 1558 - addr++; 1559 - break; 1560 - } 1543 + addr += bytesperword; 1561 1544 #undef printable_char 1562 1545 } 1563 1546 } ··· 2033 2048 if (mod->state == MODULE_STATE_UNFORMED) 2034 2049 continue; 2035 2050 2036 - kdb_printf("%-20s%8u 0x%p ", mod->name, 2051 + kdb_printf("%-20s%8u 0x%px ", mod->name, 2037 2052 mod->core_layout.size, (void *)mod); 2038 2053 #ifdef CONFIG_MODULE_UNLOAD 2039 2054 kdb_printf("%4d ", module_refcount(mod)); ··· 2044 2059 kdb_printf(" (Loading)"); 2045 2060 else 2046 2061 kdb_printf(" (Live)"); 2047 - kdb_printf(" 0x%p", mod->core_layout.base); 2062 + kdb_printf(" 0x%px", mod->core_layout.base); 2048 2063 #ifdef CONFIG_MODULE_UNLOAD 2050 2065 { ··· 2326 2341 return; 2327 2342 2328 2343 cpu = kdb_process_cpu(p); 2329 - kdb_printf("0x%p %8d %8d %d %4d %c 0x%p %c%s\n", 2344 + kdb_printf("0x%px %8d %8d %d %4d %c 0x%px %c%s\n", 2330 2345 (void *)p, p->pid, p->parent->pid, 2331 2346 kdb_task_has_cpu(p), kdb_process_cpu(p), 2332 2347 kdb_task_state_char(p), ··· 2339 2354 } else { 2340 2355 if (KDB_TSK(cpu) != p) 2341 2356 kdb_printf(" Error: does not match running " 2342 - "process table (0x%p)\n", KDB_TSK(cpu)); 2343 2358 } 2344 2359 } 2345 2360 } ··· 2672 2687 for_each_kdbcmd(kp, i) { 2673 2688 if (kp->cmd_name && (strcmp(kp->cmd_name, cmd) == 0)) { 2674 2689 kdb_printf("Duplicate kdb command registered: " 2675 - "%s, func %p help %s\n", cmd, func, help); 2690 + "%s, func %px help %s\n", cmd, func, help); 2676 2691 return 1; 2677 2692 } 2678 2693 }
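The `md` display rework above replaces an implicit fall-through switch over word sizes with a single loop over `bytesperword`. Its effect is easy to model outside the kernel (using a plain-C stand-in for kdb's `isascii() && isprint()` check):

```c
#include <assert.h>
#include <ctype.h>

/* One printable character per byte of the word, '.' otherwise,
 * for any bytesperword; no fall-through cases needed. */
static void word_to_ascii(char *out, const unsigned char *bytes,
                          int bytesperword)
{
    for (int j = 0; j < bytesperword; j++)
        out[j] = (bytes[j] < 0x80 && isprint(bytes[j])) ? bytes[j] : '.';
    out[bytesperword] = '\0';
}
```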
+1 -1
kernel/debug/kdb/kdb_private.h
··· 83 83 unsigned long sym_start; 84 84 unsigned long sym_end; 85 85 } kdb_symtab_t; 86 - extern int kallsyms_symbol_next(char *prefix_name, int flag); 86 + extern int kallsyms_symbol_next(char *prefix_name, int flag, int buf_size); 87 87 extern int kallsyms_symbol_complete(char *prefix_name, int max_len); 88 88 89 89 /* Exported Symbols for kernel loadable modules to use. */
+14 -14
kernel/debug/kdb/kdb_support.c
··· 40 40 int kdbgetsymval(const char *symname, kdb_symtab_t *symtab) 41 41 { 42 42 if (KDB_DEBUG(AR)) 43 - kdb_printf("kdbgetsymval: symname=%s, symtab=%p\n", symname, 43 + kdb_printf("kdbgetsymval: symname=%s, symtab=%px\n", symname, 44 44 symtab); 45 45 memset(symtab, 0, sizeof(*symtab)); 46 46 symtab->sym_start = kallsyms_lookup_name(symname); ··· 88 88 char *knt1 = NULL; 89 89 90 90 if (KDB_DEBUG(AR)) 91 - kdb_printf("kdbnearsym: addr=0x%lx, symtab=%p\n", addr, symtab); 91 + kdb_printf("kdbnearsym: addr=0x%lx, symtab=%px\n", addr, symtab); 92 92 memset(symtab, 0, sizeof(*symtab)); 93 93 94 94 if (addr < 4096) ··· 149 149 symtab->mod_name = "kernel"; 150 150 if (KDB_DEBUG(AR)) 151 151 kdb_printf("kdbnearsym: returns %d symtab->sym_start=0x%lx, " 152 - "symtab->mod_name=%p, symtab->sym_name=%p (%s)\n", ret, 152 + "symtab->mod_name=%px, symtab->sym_name=%px (%s)\n", ret, 153 153 symtab->sym_start, symtab->mod_name, symtab->sym_name, 154 154 symtab->sym_name); 155 155 ··· 221 221 * Parameters: 222 222 * prefix_name prefix of a symbol name to lookup 223 223 * flag 0 means search from the head, 1 means continue search. 224 + * buf_size maximum length that can be written to prefix_name 225 + * buffer 224 226 * Returns: 225 227 * 1 if a symbol matches the given prefix. 226 228 * 0 if no string found 227 229 228 230 - int kallsyms_symbol_next(char *prefix_name, int flag) 230 + int kallsyms_symbol_next(char *prefix_name, int flag, int buf_size) 229 231 { 230 232 int prefix_len = strlen(prefix_name); 231 233 static loff_t pos; ··· 237 235 pos = 0; 238 236 239 237 while ((name = kdb_walk_kallsyms(&pos))) { 240 - if (strncmp(name, prefix_name, prefix_len) == 0) { 241 - strncpy(prefix_name, name, strlen(name)+1); 242 - return 1; 243 - } 238 + if (!strncmp(name, prefix_name, prefix_len)) 239 + return strscpy(prefix_name, name, buf_size); 244 240 } 245 241 return 0; 246 242 } ··· 432 432 *word = w8; 433 433 break; 434 434 } 435 - /* drop through */ 435 + /* fall through */ 436 436 default: 437 437 diag = KDB_BADWIDTH; 438 438 kdb_printf("kdb_getphysword: bad width %ld\n", (long) size); ··· 481 481 *word = w8; 482 482 break; 483 483 } 484 - /* drop through */ 484 + /* fall through */ 485 485 default: 486 486 diag = KDB_BADWIDTH; 487 487 kdb_printf("kdb_getword: bad width %ld\n", (long) size); ··· 525 525 diag = kdb_putarea(addr, w8); 526 526 break; 527 527 } 528 - /* drop through */ 528 + /* fall through */ 529 529 default: 530 530 diag = KDB_BADWIDTH; 531 531 kdb_printf("kdb_putword: bad width %ld\n", (long) size); ··· 887 887 __func__, dah_first); 888 888 if (dah_first) { 889 889 h_used = (struct debug_alloc_header *)debug_alloc_pool; 890 - kdb_printf("%s: h_used %p size %d\n", __func__, h_used, 890 + kdb_printf("%s: h_used %px size %d\n", __func__, h_used, 891 891 h_used->size); 892 892 } 893 893 do { 894 894 h_used = (struct debug_alloc_header *) 895 895 ((char *)h_free + dah_overhead + h_free->size); 896 - kdb_printf("%s: h_used %p size %d caller %p\n", 896 + kdb_printf("%s: h_used %px size %d caller %px\n", 897 897 __func__, h_used, h_used->size, h_used->caller); 898 898 h_free = (struct debug_alloc_header *) 899 899 (debug_alloc_pool + h_free->next); ··· 902 902 ((char *)h_free + dah_overhead + h_free->size); 903 903 if ((char *)h_used - debug_alloc_pool != 904 904 sizeof(debug_alloc_pool_aligned)) 905 - kdb_printf("%s: h_used %p size %d caller %p\n", 905 + kdb_printf("%s: h_used %px size %d caller %px\n", 906 906 __func__, h_used, h_used->size, h_used->caller); 907 907 out: 908 908 spin_unlock(&dap_lock);
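The completion fix above is about which buffer bounds the copy: `strncpy(prefix_name, name, strlen(name)+1)` is bounded by the *source* length, so a long symbol name overruns `prefix_name`; the new code passes the destination size down and uses `strscpy()`. A userspace stand-in for that behaviour (`bounded_copy()` is illustrative, not the kernel helper):

```c
#include <assert.h>
#include <string.h>

/* Copy bounded by the destination, truncating with a NUL terminator
 * and signalling truncation like strscpy()'s -E2BIG return. */
static long bounded_copy(char *dst, const char *src, size_t dst_size)
{
    size_t len = strlen(src);

    if (dst_size == 0)
        return -1;
    if (len >= dst_size) {
        memcpy(dst, src, dst_size - 1);
        dst[dst_size - 1] = '\0';
        return -1;            /* truncated */
    }
    memcpy(dst, src, len + 1);
    return (long)len;
}
```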
+2 -1
kernel/dma/swiotlb.c
··· 679 679 } 680 680 681 681 if (!dev_is_dma_coherent(dev) && 682 - (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0) 682 + (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0 && 683 + dev_addr != DIRECT_MAPPING_ERROR) 683 684 arch_sync_dma_for_device(dev, phys, size, dir); 684 685 685 686 return dev_addr;
+10 -2
kernel/events/uprobes.c
··· 829 829 BUG_ON((uprobe->offset & ~PAGE_MASK) + 830 830 UPROBE_SWBP_INSN_SIZE > PAGE_SIZE); 831 831 832 - smp_wmb(); /* pairs with rmb() in find_active_uprobe() */ 832 + smp_wmb(); /* pairs with the smp_rmb() in handle_swbp() */ 833 833 set_bit(UPROBE_COPY_INSN, &uprobe->flags); 834 834 835 835 out: ··· 2178 2178 * After we hit the bp, _unregister + _register can install the 2179 2179 * new and not-yet-analyzed uprobe at the same address, restart. 2180 2180 */ 2181 - smp_rmb(); /* pairs with wmb() in install_breakpoint() */ 2182 2181 if (unlikely(!test_bit(UPROBE_COPY_INSN, &uprobe->flags))) 2183 2182 goto out; 2183 + 2184 + /* 2185 + * Pairs with the smp_wmb() in prepare_uprobe(). 2186 + * 2187 + * Guarantees that if we see the UPROBE_COPY_INSN bit set, then 2188 + * we must also see the stores to &uprobe->arch performed by the 2189 + * prepare_uprobe() call. 2190 + */ 2191 + smp_rmb(); 2184 2192 2185 2193 /* Tracing handlers use ->utask to communicate with fetch methods */ 2186 2194 if (!get_utask())
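The reworded comments above pin down a classic publish/consume barrier pairing: the writer must make the instruction copy visible before setting the flag, and the reader must check the flag before consuming the copy. In C11-atomics terms (a userspace sketch, not kernel code), the pairing looks like:

```c
#include <assert.h>
#include <stdatomic.h>

static int insn_copy;                 /* stands in for &uprobe->arch */
static atomic_int copy_insn_flag;     /* stands in for UPROBE_COPY_INSN */

static void publish_insn(int value)
{
    insn_copy = value;
    atomic_thread_fence(memory_order_release);   /* smp_wmb() */
    atomic_store_explicit(&copy_insn_flag, 1, memory_order_relaxed);
}

static int consume_insn(int *value)
{
    if (!atomic_load_explicit(&copy_insn_flag, memory_order_relaxed))
        return 0;                                /* not published yet */
    atomic_thread_fence(memory_order_acquire);   /* smp_rmb() */
    *value = insn_copy;
    return 1;
}
```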
+2 -2
kernel/kcov.c
··· 56 56 struct task_struct *t; 57 57 }; 58 58 59 - static bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t) 59 + static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t) 60 60 { 61 61 unsigned int mode; 62 62 ··· 78 78 return mode == needed_mode; 79 79 } 80 80 81 - static unsigned long canonicalize_ip(unsigned long ip) 81 + static notrace unsigned long canonicalize_ip(unsigned long ip) 82 82 { 83 83 #ifdef CONFIG_RANDOMIZE_BASE 84 84 ip -= kaslr_offset();
-10
kernel/ptrace.c
··· 261 261 262 262 static int ptrace_has_cap(struct user_namespace *ns, unsigned int mode) 263 263 { 264 - if (mode & PTRACE_MODE_SCHED) 265 - return false; 266 - 267 264 if (mode & PTRACE_MODE_NOAUDIT) 268 265 return has_ns_capability_noaudit(current, ns, CAP_SYS_PTRACE); 269 266 else ··· 328 331 !ptrace_has_cap(mm->user_ns, mode))) 329 332 return -EPERM; 330 333 331 - if (mode & PTRACE_MODE_SCHED) 332 - return 0; 333 334 return security_ptrace_access_check(task, mode); 334 - } 335 - 336 - bool ptrace_may_access_sched(struct task_struct *task, unsigned int mode) 337 - { 338 - return __ptrace_may_access(task, mode | PTRACE_MODE_SCHED); 339 335 } 340 336 341 337 bool ptrace_may_access(struct task_struct *task, unsigned int mode)
+14 -5
kernel/resource.c
··· 319 319 EXPORT_SYMBOL(release_resource); 320 320 321 321 /** 322 - * Finds the lowest iomem resource that covers part of [start..end]. The 323 - * caller must specify start, end, flags, and desc (which may be 322 + * Finds the lowest iomem resource that covers part of [@start..@end]. The 323 + * caller must specify @start, @end, @flags, and @desc (which may be 324 324 * IORES_DESC_NONE). 325 325 * 326 - * If a resource is found, returns 0 and *res is overwritten with the part 327 - * of the resource that's within [start..end]; if none is found, returns 328 - * -1. 326 + * If a resource is found, returns 0 and @*res is overwritten with the part 327 + * of the resource that's within [@start..@end]; if none is found, returns 328 + * -1 or -EINVAL for other invalid parameters. 329 329 * 330 330 * This function walks the whole tree and not just first level children 331 331 * unless @first_lvl is true. 332 + * 333 + * @start: start address of the resource searched for 334 + * @end: end address of same resource 335 + * @flags: flags which the resource must have 336 + * @desc: descriptor the resource must have 337 + * @first_lvl: walk only the first level children, if set 338 + * @res: return ptr, if resource found 332 339 */ 333 340 static int find_next_iomem_res(resource_size_t start, resource_size_t end, 334 341 unsigned long flags, unsigned long desc, ··· 406 399 * @flags: I/O resource flags 407 400 * @start: start addr 408 401 * @end: end addr 402 + * @arg: function argument for the callback @func 403 + * @func: callback function that is called for each qualifying resource area 409 404 * 410 405 * NOTE: For a new descriptor search, define a new IORES_DESC in 411 406 * <linux/ioport.h> and set it in 'desc' of a target resource entry.
+15 -9
kernel/sched/core.c
··· 5738 5738 5739 5739 #ifdef CONFIG_SCHED_SMT 5740 5740 /* 5741 - * The sched_smt_present static key needs to be evaluated on every 5742 - * hotplug event because at boot time SMT might be disabled when 5743 - * the number of booted CPUs is limited. 5744 - * 5745 - * If then later a sibling gets hotplugged, then the key would stay 5746 - * off and SMT scheduling would never be functional. 5741 + * When going up, increment the number of cores with SMT present. 5747 5742 */ 5748 - if (cpumask_weight(cpu_smt_mask(cpu)) > 1) 5749 - static_branch_enable_cpuslocked(&sched_smt_present); 5743 + if (cpumask_weight(cpu_smt_mask(cpu)) == 2) 5744 + static_branch_inc_cpuslocked(&sched_smt_present); 5750 5745 #endif 5751 5746 set_cpu_active(cpu, true); 5752 5747 ··· 5784 5789 * Do sync before park smpboot threads to take care the rcu boost case. 5785 5790 */ 5786 5791 synchronize_rcu_mult(call_rcu, call_rcu_sched); 5792 + 5793 + #ifdef CONFIG_SCHED_SMT 5794 + /* 5795 + * When going down, decrement the number of cores with SMT present. 5796 + */ 5797 + if (cpumask_weight(cpu_smt_mask(cpu)) == 2) 5798 + static_branch_dec_cpuslocked(&sched_smt_present); 5799 + #endif 5787 5800 5788 5801 if (!sched_smp_initialized) 5789 5802 return 0; ··· 5854 5851 /* 5855 5852 * There's no userspace yet to cause hotplug operations; hence all the 5856 5853 * CPU masks are stable and all blatant races in the below code cannot 5857 - * happen. 5854 + * happen. The hotplug lock is nevertheless taken to satisfy lockdep, 5855 + * but there won't be any contention on it. 5858 5856 */ 5857 + cpus_read_lock(); 5859 5858 mutex_lock(&sched_domains_mutex); 5860 5859 sched_init_domains(cpu_active_mask); 5861 5860 mutex_unlock(&sched_domains_mutex); 5861 + cpus_read_unlock(); 5862 5862 5863 5863 /* Move init over to a non-isolated CPU */ 5864 5864 if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_DOMAIN)) < 0)
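The core of the scheduler change above is turning `sched_smt_present` from a one-way switch into a count: each core contributes exactly once, when its number of online siblings crosses between 1 and 2. The bookkeeping can be modelled as two pure helpers (framing: the count passed in is the sibling count *after* the hotplug event):

```c
#include <assert.h>

/* +1 to the SMT-present count when a core gains its second online
 * sibling; 0 for any further siblings. */
static int smt_key_delta_up(int online_siblings_after)
{
    return online_siblings_after == 2 ? 1 : 0;
}

/* -1 when a core drops back to a single online sibling, so repeated
 * hotplug cycles leave the count balanced. */
static int smt_key_delta_down(int online_siblings_after)
{
    return online_siblings_after == 1 ? -1 : 0;
}
```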
+50 -16
kernel/sched/fair.c
··· 2400 2400 local = 1; 2401 2401 2402 2402 /* 2403 - * Retry task to preferred node migration periodically, in case it 2404 - * case it previously failed, or the scheduler moved us. 2403 + * Retry to migrate task to preferred node periodically, in case it 2404 + * previously failed, or the scheduler moved us. 2405 2405 */ 2406 2406 if (time_after(jiffies, p->numa_migrate_retry)) { 2407 2407 task_numa_placement(p); ··· 5674 5674 return target; 5675 5675 } 5676 5676 5677 - static unsigned long cpu_util_wake(int cpu, struct task_struct *p); 5677 + static unsigned long cpu_util_without(int cpu, struct task_struct *p); 5678 5678 5679 - static unsigned long capacity_spare_wake(int cpu, struct task_struct *p) 5679 + static unsigned long capacity_spare_without(int cpu, struct task_struct *p) 5680 5680 { 5681 - return max_t(long, capacity_of(cpu) - cpu_util_wake(cpu, p), 0); 5681 + return max_t(long, capacity_of(cpu) - cpu_util_without(cpu, p), 0); 5682 5682 } 5683 5683 5684 5684 /* ··· 5738 5738 5739 5739 avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs); 5740 5740 5741 - spare_cap = capacity_spare_wake(i, p); 5741 + spare_cap = capacity_spare_without(i, p); 5742 5742 5743 5743 if (spare_cap > max_spare_cap) 5744 5744 max_spare_cap = spare_cap; ··· 5889 5889 return prev_cpu; 5890 5890 5891 5891 /* 5892 - * We need task's util for capacity_spare_wake, sync it up to prev_cpu's 5893 - * last_update_time. 5892 + * We need task's util for capacity_spare_without, sync it up to 5893 + * prev_cpu's last_update_time. 5894 5894 */ 5895 5895 if (!(sd_flag & SD_BALANCE_FORK)) 5896 5896 sync_entity_load_avg(&p->se); ··· 6216 6216 } 6217 6217 6218 6218 /* 6219 - * cpu_util_wake: Compute CPU utilization with any contributions from 6220 - * the waking task p removed. 6219 + * cpu_util_without: compute cpu utilization without any contributions from *p 6220 + * @cpu: the CPU which utilization is requested 6221 + * @p: the task which utilization should be discounted 6222 + * 6223 + * The utilization of a CPU is defined by the utilization of tasks currently 6224 + * enqueued on that CPU as well as tasks which are currently sleeping after an 6225 + * execution on that CPU. 6226 + * 6227 + * This method returns the utilization of the specified CPU by discounting the 6228 + * utilization of the specified task, whenever the task is currently 6229 + * contributing to the CPU utilization. 6221 6230 */ 6222 - static unsigned long cpu_util_wake(int cpu, struct task_struct *p) 6231 + static unsigned long cpu_util_without(int cpu, struct task_struct *p) 6223 6232 { 6224 6233 struct cfs_rq *cfs_rq; 6225 6234 unsigned int util; ··· 6240 6231 cfs_rq = &cpu_rq(cpu)->cfs; 6241 6232 util = READ_ONCE(cfs_rq->avg.util_avg); 6242 6233 6243 - /* Discount task's blocked util from CPU's util */ 6234 + /* Discount task's util from CPU's util */ 6244 6235 util -= min_t(unsigned int, util, task_util(p)); 6245 6236 6246 6237 /* ··· 6249 6240 * a) if *p is the only task sleeping on this CPU, then: 6250 6241 * cpu_util (== task_util) > util_est (== 0) 6251 6242 * and thus we return: 6252 - * cpu_util_wake = (cpu_util - task_util) = 0 6243 + * cpu_util_without = (cpu_util - task_util) = 0 6253 6244 * 6254 6245 * b) if other tasks are SLEEPING on this CPU, which is now exiting 6255 6246 * IDLE, then: 6256 6247 * cpu_util >= task_util 6257 6248 * cpu_util > util_est (== 0) 6258 6249 * and thus we discount *p's blocked utilization to return: 6259 6250 * cpu_util_without = (cpu_util - task_util) >= 0 6260 6251 * 6261 6252 * c) if other tasks are RUNNABLE on that CPU and 6262 6253 * util_est > cpu_util ··· 6269 6260 * covered by the following code when estimated utilization is 6270 6261 * enabled. 6271 6262 */ 6272 - if (sched_feat(UTIL_EST)) 6273 - util = max(util, READ_ONCE(cfs_rq->avg.util_est.enqueued)); 6263 + if (sched_feat(UTIL_EST)) { 6264 + unsigned int estimated = 6265 + READ_ONCE(cfs_rq->avg.util_est.enqueued); 6266 + 6267 + /* 6268 + * Despite the following checks we still have a small window 6269 + * for a possible race, when an execl's select_task_rq_fair() 6270 + * races with LB's detach_task(): 6271 + * 6272 + * detach_task() 6273 + * p->on_rq = TASK_ON_RQ_MIGRATING; 6274 + * ---------------------------------- A 6275 + * deactivate_task() \ 6276 + * dequeue_task() + RaceTime 6277 + * util_est_dequeue() / 6278 + * ---------------------------------- B 6279 + * 6280 + * The additional check on "current == p" it's required to 6281 + * properly fix the execl regression and it helps in further 6282 + * reducing the chances for the above race. 6283 + */ 6284 + if (unlikely(task_on_rq_queued(p) || current == p)) { 6285 + estimated -= min_t(unsigned int, estimated, 6286 + (_task_util_est(p) | UTIL_AVG_UNCHANGED)); 6287 + } 6288 + util = max(util, estimated); 6289 + } 6274 6290 6275 6291 /* 6276 6292 * Utilization (estimated) can exceed the CPU capacity, thus let's
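Numerically, the `cpu_util_without()` rework boils down to: discount the task from the CPU's running average; with `UTIL_EST`, also discount its estimate when the task is still (or again) enqueued there; return the max of the two views. A simplified scalar model (plain unsigned ints in place of the kernel's per-entity averages, and ignoring the `UTIL_AVG_UNCHANGED` flag bit):

```c
#include <assert.h>

/* Max of (running average minus task's util) and (estimate, minus the
 * task's estimate only if the task is enqueued on this CPU). */
static unsigned int util_without(unsigned int cpu_util, unsigned int cpu_est,
                                 unsigned int task_util, unsigned int task_est,
                                 int task_enqueued)
{
    unsigned int util = cpu_util - (task_util < cpu_util ? task_util : cpu_util);
    unsigned int est = cpu_est;

    if (task_enqueued)
        est -= (task_est < est ? task_est : est);
    return util > est ? util : est;
}
```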
+44 -31
kernel/sched/psi.c
··· 136 136 137 137 static int psi_bug __read_mostly; 138 138 139 - bool psi_disabled __read_mostly; 140 - core_param(psi_disabled, psi_disabled, bool, 0644); 139 + DEFINE_STATIC_KEY_FALSE(psi_disabled); 140 + 141 + #ifdef CONFIG_PSI_DEFAULT_DISABLED 142 + bool psi_enable; 143 + #else 144 + bool psi_enable = true; 145 + #endif 146 + static int __init setup_psi(char *str) 147 + { 148 + return kstrtobool(str, &psi_enable) == 0; 149 + } 150 + __setup("psi=", setup_psi); 141 151 142 152 /* Running averages - we need to be higher-res than loadavg */ 143 153 #define PSI_FREQ (2*HZ+1) /* 2 sec intervals */ ··· 179 169 180 170 void __init psi_init(void) 181 171 { 182 - if (psi_disabled) 172 + if (!psi_enable) { 173 + static_branch_enable(&psi_disabled); 183 174 return; 175 + } 184 176 185 177 psi_period = jiffies_to_nsecs(PSI_FREQ); 186 178 group_init(&psi_system); ··· 561 549 struct rq_flags rf; 562 550 struct rq *rq; 563 551 564 - if (psi_disabled) 552 + if (static_branch_likely(&psi_disabled)) 565 553 return; 566 554 567 555 *flags = current->flags & PF_MEMSTALL; ··· 591 579 struct rq_flags rf; 592 580 struct rq *rq; 593 581 594 - if (psi_disabled) 582 + if (static_branch_likely(&psi_disabled)) 595 583 return; 596 584 597 585 if (*flags) ··· 612 600 #ifdef CONFIG_CGROUPS 613 601 int psi_cgroup_alloc(struct cgroup *cgroup) 614 602 { 615 - if (psi_disabled) 603 + if (static_branch_likely(&psi_disabled)) 616 604 return 0; 617 605 618 606 cgroup->psi.pcpu = alloc_percpu(struct psi_group_cpu); ··· 624 612 625 613 void psi_cgroup_free(struct cgroup *cgroup) 626 614 { 627 - if (psi_disabled) 615 + if (static_branch_likely(&psi_disabled)) 628 616 return; 629 617 630 618 cancel_delayed_work_sync(&cgroup->psi.clock_work); ··· 645 633 */ 646 634 void cgroup_move_task(struct task_struct *task, struct css_set *to) 647 635 { 648 - bool move_psi = !psi_disabled; 649 636 unsigned int task_flags = 0; 650 637 struct rq_flags rf; 651 638 struct rq *rq; 652 639 653 - if (move_psi) { 654 - 
rq = task_rq_lock(task, &rf); 655 - 656 - if (task_on_rq_queued(task)) 657 - task_flags = TSK_RUNNING; 658 - else if (task->in_iowait) 659 - task_flags = TSK_IOWAIT; 660 - 661 - if (task->flags & PF_MEMSTALL) 662 - task_flags |= TSK_MEMSTALL; 663 - 664 - if (task_flags) 665 - psi_task_change(task, task_flags, 0); 640 + if (static_branch_likely(&psi_disabled)) { 641 + /* 642 + * Lame to do this here, but the scheduler cannot be locked 643 + * from the outside, so we move cgroups from inside sched/. 644 + */ 645 + rcu_assign_pointer(task->cgroups, to); 646 + return; 666 647 } 667 648 668 - /* 669 - * Lame to do this here, but the scheduler cannot be locked 670 - * from the outside, so we move cgroups from inside sched/. 671 - */ 649 + rq = task_rq_lock(task, &rf); 650 + 651 + if (task_on_rq_queued(task)) 652 + task_flags = TSK_RUNNING; 653 + else if (task->in_iowait) 654 + task_flags = TSK_IOWAIT; 655 + 656 + if (task->flags & PF_MEMSTALL) 657 + task_flags |= TSK_MEMSTALL; 658 + 659 + if (task_flags) 660 + psi_task_change(task, task_flags, 0); 661 + 662 + /* See comment above */ 672 663 rcu_assign_pointer(task->cgroups, to); 673 664 674 - if (move_psi) { 675 - if (task_flags) 676 - psi_task_change(task, 0, task_flags); 665 + if (task_flags) 666 + psi_task_change(task, 0, task_flags); 677 667 678 - task_rq_unlock(rq, task, &rf); 679 - } 668 + task_rq_unlock(rq, task, &rf); 680 669 } 681 670 #endif /* CONFIG_CGROUPS */ 682 671 ··· 685 672 { 686 673 int full; 687 674 688 - if (psi_disabled) 675 + if (static_branch_likely(&psi_disabled)) 689 676 return -EOPNOTSUPP; 690 677 691 678 update_stats(group);
+1 -3
kernel/sched/sched.h
··· 23 23 #include <linux/sched/prio.h> 24 24 #include <linux/sched/rt.h> 25 25 #include <linux/sched/signal.h> 26 + #include <linux/sched/smt.h> 26 27 #include <linux/sched/stat.h> 27 28 #include <linux/sched/sysctl.h> 28 29 #include <linux/sched/task.h> ··· 937 936 938 937 939 938 #ifdef CONFIG_SCHED_SMT 940 - 941 - extern struct static_key_false sched_smt_present; 942 - 943 939 extern void __update_idle_core(struct rq *rq); 944 940 945 941 static inline void update_idle_core(struct rq *rq)
+4 -4
kernel/sched/stats.h
··· 66 66 { 67 67 int clear = 0, set = TSK_RUNNING; 68 68 69 - if (psi_disabled) 69 + if (static_branch_likely(&psi_disabled)) 70 70 return; 71 71 72 72 if (!wakeup || p->sched_psi_wake_requeue) { ··· 86 86 { 87 87 int clear = TSK_RUNNING, set = 0; 88 88 89 - if (psi_disabled) 89 + if (static_branch_likely(&psi_disabled)) 90 90 return; 91 91 92 92 if (!sleep) { ··· 102 102 103 103 static inline void psi_ttwu_dequeue(struct task_struct *p) 104 104 { 105 - if (psi_disabled) 105 + if (static_branch_likely(&psi_disabled)) 106 106 return; 107 107 /* 108 108 * Is the task being migrated during a wakeup? Make sure to ··· 128 128 129 129 static inline void psi_task_tick(struct rq *rq) 130 130 { 131 - if (psi_disabled) 131 + if (static_branch_likely(&psi_disabled)) 132 132 return; 133 133 134 134 if (unlikely(rq->curr->flags & PF_MEMSTALL))
+3 -1
kernel/stackleak.c
··· 11 11 */ 12 12 13 13 #include <linux/stackleak.h> 14 + #include <linux/kprobes.h> 14 15 15 16 #ifdef CONFIG_STACKLEAK_RUNTIME_DISABLE 16 17 #include <linux/jump_label.h> ··· 48 47 #define skip_erasing() false 49 48 #endif /* CONFIG_STACKLEAK_RUNTIME_DISABLE */ 50 49 51 - asmlinkage void stackleak_erase(void) 50 + asmlinkage void notrace stackleak_erase(void) 52 51 { 53 52 /* It would be nice not to have 'kstack_ptr' and 'boundary' on stack */ 54 53 unsigned long kstack_ptr = current->lowest_stack; ··· 102 101 /* Reset the 'lowest_stack' value for the next syscall */ 103 102 current->lowest_stack = current_top_of_stack() - THREAD_SIZE/64; 104 103 } 104 + NOKPROBE_SYMBOL(stackleak_erase); 105 105 106 106 void __used stackleak_track_stack(void) 107 107 {
-3
kernel/time/posix-cpu-timers.c
··· 917 917 struct task_cputime cputime; 918 918 unsigned long soft; 919 919 920 - if (dl_task(tsk)) 921 - check_dl_overrun(tsk); 922 - 923 920 /* 924 921 * If cputimer is not running, then there are no active 925 922 * process wide timers (POSIX 1.b, itimers, RLIMIT_CPU).
+5 -3
kernel/trace/bpf_trace.c
··· 196 196 i++; 197 197 } else if (fmt[i] == 'p' || fmt[i] == 's') { 198 198 mod[fmt_cnt]++; 199 - i++; 200 - if (!isspace(fmt[i]) && !ispunct(fmt[i]) && fmt[i] != 0) 199 + /* disallow any further format extensions */ 200 + if (fmt[i + 1] != 0 && 201 + !isspace(fmt[i + 1]) && 202 + !ispunct(fmt[i + 1])) 201 203 return -EINVAL; 202 204 fmt_cnt++; 203 - if (fmt[i - 1] == 's') { 205 + if (fmt[i] == 's') { 204 206 if (str_seen) 205 207 /* allow only one '%s' per fmt string */ 206 208 return -EINVAL;
+5 -2
kernel/trace/ftrace.c
··· 817 817 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 818 818 static int profile_graph_entry(struct ftrace_graph_ent *trace) 819 819 { 820 - int index = trace->depth; 820 + int index = current->curr_ret_stack; 821 821 822 822 function_profile_call(trace->func, 0, NULL, NULL); 823 823 ··· 852 852 if (!fgraph_graph_time) { 853 853 int index; 854 854 855 - index = trace->depth; 855 + index = current->curr_ret_stack; 856 856 857 857 /* Append this call time to the parent time to subtract */ 858 858 if (index) ··· 6814 6814 atomic_set(&t->tracing_graph_pause, 0); 6815 6815 atomic_set(&t->trace_overrun, 0); 6816 6816 t->curr_ret_stack = -1; 6817 + t->curr_ret_depth = -1; 6817 6818 /* Make sure the tasks see the -1 first: */ 6818 6819 smp_wmb(); 6819 6820 t->ret_stack = ret_stack_list[start++]; ··· 7039 7038 void ftrace_graph_init_idle_task(struct task_struct *t, int cpu) 7040 7039 { 7041 7040 t->curr_ret_stack = -1; 7041 + t->curr_ret_depth = -1; 7042 7042 /* 7043 7043 * The idle task has no parent, it either has its own 7044 7044 * stack or no stack at all. ··· 7070 7068 /* Make sure we do not use the parent ret_stack */ 7071 7069 t->ret_stack = NULL; 7072 7070 t->curr_ret_stack = -1; 7071 + t->curr_ret_depth = -1; 7073 7072 7074 7073 if (ftrace_graph_active) { 7075 7074 struct ftrace_ret_stack *ret_stack;
+54 -3
kernel/trace/trace.h
··· 512 512 * can only be modified by current, we can reuse trace_recursion. 513 513 */ 514 514 TRACE_IRQ_BIT, 515 + 516 + /* Set if the function is in the set_graph_function file */ 517 + TRACE_GRAPH_BIT, 518 + 519 + /* 520 + * In the very unlikely case that an interrupt came in 521 + * at a start of graph tracing, and we want to trace 522 + * the function in that interrupt, the depth can be greater 523 + * than zero, because of the preempted start of a previous 524 + * trace. In an even more unlikely case, depth could be 2 525 + * if a softirq interrupted the start of graph tracing, 526 + * followed by an interrupt preempting a start of graph 527 + * tracing in the softirq, and depth can even be 3 528 + * if an NMI came in at the start of an interrupt function 529 + * that preempted a softirq start of a function that 530 + * preempted normal context!!!! Luckily, it can't be 531 + * greater than 3, so the next two bits are a mask 532 + * of what the depth is when we set TRACE_GRAPH_BIT 533 + */ 534 + 535 + TRACE_GRAPH_DEPTH_START_BIT, 536 + TRACE_GRAPH_DEPTH_END_BIT, 515 537 }; 516 538 517 539 #define trace_recursion_set(bit) do { (current)->trace_recursion |= (1<<(bit)); } while (0) 518 540 #define trace_recursion_clear(bit) do { (current)->trace_recursion &= ~(1<<(bit)); } while (0) 519 541 #define trace_recursion_test(bit) ((current)->trace_recursion & (1<<(bit))) 542 + 543 + #define trace_recursion_depth() \ 544 + (((current)->trace_recursion >> TRACE_GRAPH_DEPTH_START_BIT) & 3) 545 + #define trace_recursion_set_depth(depth) \ 546 + do { \ 547 + current->trace_recursion &= \ 548 + ~(3 << TRACE_GRAPH_DEPTH_START_BIT); \ 549 + current->trace_recursion |= \ 550 + ((depth) & 3) << TRACE_GRAPH_DEPTH_START_BIT; \ 551 + } while (0) 520 552 521 553 #define TRACE_CONTEXT_BITS 4 522 554 ··· 875 843 extern struct ftrace_hash *ftrace_graph_hash; 876 844 extern struct ftrace_hash *ftrace_graph_notrace_hash; 877 845 878 - static inline int ftrace_graph_addr(unsigned long 
addr) 846 + static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace) 879 847 { 848 + unsigned long addr = trace->func; 880 849 int ret = 0; 881 850 882 851 preempt_disable_notrace(); ··· 888 855 } 889 856 890 857 if (ftrace_lookup_ip(ftrace_graph_hash, addr)) { 858 + 859 + /* 860 + * This needs to be cleared on the return functions 861 + * when the depth is zero. 862 + */ 863 + trace_recursion_set(TRACE_GRAPH_BIT); 864 + trace_recursion_set_depth(trace->depth); 865 + 891 866 /* 892 867 * If no irqs are to be traced, but a set_graph_function 893 868 * is set, and called by an interrupt handler, we still ··· 913 872 return ret; 914 873 } 915 874 875 + static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace) 876 + { 877 + if (trace_recursion_test(TRACE_GRAPH_BIT) && 878 + trace->depth == trace_recursion_depth()) 879 + trace_recursion_clear(TRACE_GRAPH_BIT); 880 + } 881 + 916 882 static inline int ftrace_graph_notrace_addr(unsigned long addr) 917 883 { 918 884 int ret = 0; ··· 933 885 return ret; 934 886 } 935 887 #else 936 - static inline int ftrace_graph_addr(unsigned long addr) 888 + static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace) 937 889 { 938 890 return 1; 939 891 } ··· 942 894 { 943 895 return 0; 944 896 } 897 + static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace) 898 + { } 945 899 #endif /* CONFIG_DYNAMIC_FTRACE */ 946 900 947 901 extern unsigned int fgraph_max_depth; ··· 951 901 static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace) 952 902 { 953 903 /* trace it when it is-nested-in or is a function enabled. */ 954 - return !(trace->depth || ftrace_graph_addr(trace->func)) || 904 + return !(trace_recursion_test(TRACE_GRAPH_BIT) || 905 + ftrace_graph_addr(trace)) || 955 906 (trace->depth < 0) || 956 907 (fgraph_max_depth && trace->depth >= fgraph_max_depth); 957 908 }
+42 -11
kernel/trace/trace_functions_graph.c
··· 118 118 struct trace_seq *s, u32 flags); 119 119 120 120 /* Add a function return address to the trace stack on thread info.*/ 121 - int 122 - ftrace_push_return_trace(unsigned long ret, unsigned long func, int *depth, 121 + static int 122 + ftrace_push_return_trace(unsigned long ret, unsigned long func, 123 123 unsigned long frame_pointer, unsigned long *retp) 124 124 { 125 125 unsigned long long calltime; ··· 177 177 #ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR 178 178 current->ret_stack[index].retp = retp; 179 179 #endif 180 - *depth = current->curr_ret_stack; 180 + return 0; 181 + } 182 + 183 + int function_graph_enter(unsigned long ret, unsigned long func, 184 + unsigned long frame_pointer, unsigned long *retp) 185 + { 186 + struct ftrace_graph_ent trace; 187 + 188 + trace.func = func; 189 + trace.depth = ++current->curr_ret_depth; 190 + 191 + if (ftrace_push_return_trace(ret, func, 192 + frame_pointer, retp)) 193 + goto out; 194 + 195 + /* Only trace if the calling function expects to */ 196 + if (!ftrace_graph_entry(&trace)) 197 + goto out_ret; 181 198 182 199 return 0; 200 + out_ret: 201 + current->curr_ret_stack--; 202 + out: 203 + current->curr_ret_depth--; 204 + return -EBUSY; 183 205 } 184 206 185 207 /* Retrieve a function return address to the trace stack on thread info.*/ ··· 263 241 trace->func = current->ret_stack[index].func; 264 242 trace->calltime = current->ret_stack[index].calltime; 265 243 trace->overrun = atomic_read(&current->trace_overrun); 266 - trace->depth = index; 244 + trace->depth = current->curr_ret_depth--; 245 + /* 246 + * We still want to trace interrupts coming in if 247 + * max_depth is set to 1. Make sure the decrement is 248 + * seen before ftrace_graph_return. 
249 + */ 250 + barrier(); 267 251 } 268 252 269 253 /* ··· 283 255 284 256 ftrace_pop_return_trace(&trace, &ret, frame_pointer); 285 257 trace.rettime = trace_clock_local(); 258 + ftrace_graph_return(&trace); 259 + /* 260 + * The ftrace_graph_return() may still access the current 261 + * ret_stack structure, we need to make sure the update of 262 + * curr_ret_stack is after that. 263 + */ 286 264 barrier(); 287 265 current->curr_ret_stack--; 288 266 /* ··· 300 266 current->curr_ret_stack += FTRACE_NOTRACE_DEPTH; 301 267 return ret; 302 268 } 303 - 304 - /* 305 - * The trace should run after decrementing the ret counter 306 - * in case an interrupt were to come in. We don't want to 307 - * lose the interrupt if max_depth is set. 308 - */ 309 - ftrace_graph_return(&trace); 310 269 311 270 if (unlikely(!ret)) { 312 271 ftrace_graph_stop(); ··· 509 482 int cpu; 510 483 int pc; 511 484 485 + ftrace_graph_addr_finish(trace); 486 + 512 487 local_irq_save(flags); 513 488 cpu = raw_smp_processor_id(); 514 489 data = per_cpu_ptr(tr->trace_buffer.data, cpu); ··· 534 505 535 506 static void trace_graph_thresh_return(struct ftrace_graph_ret *trace) 536 507 { 508 + ftrace_graph_addr_finish(trace); 509 + 537 510 if (tracing_thresh && 538 511 (trace->rettime - trace->calltime < tracing_thresh)) 539 512 return;
+2
kernel/trace/trace_irqsoff.c
··· 208 208 unsigned long flags; 209 209 int pc; 210 210 211 + ftrace_graph_addr_finish(trace); 212 + 211 213 if (!func_prolog_dec(tr, &data, &flags)) 212 214 return; 213 215
+1 -1
kernel/trace/trace_probe.c
··· 535 535 if (code[1].op != FETCH_OP_IMM) 536 536 return -EINVAL; 537 537 538 - tmp = strpbrk("+-", code->data); 538 + tmp = strpbrk(code->data, "+-"); 539 539 if (tmp) 540 540 c = *tmp; 541 541 ret = traceprobe_split_symbol_offset(code->data,
+2
kernel/trace/trace_sched_wakeup.c
··· 270 270 unsigned long flags; 271 271 int pc; 272 272 273 + ftrace_graph_addr_finish(trace); 274 + 273 275 if (!func_prolog_preempt_disable(tr, &data, &pc)) 274 276 return; 275 277
+8 -4
kernel/user_namespace.c
··· 974 974 if (!new_idmap_permitted(file, ns, cap_setid, &new_map)) 975 975 goto out; 976 976 977 - ret = sort_idmaps(&new_map); 978 - if (ret < 0) 979 - goto out; 980 - 981 977 ret = -EPERM; 982 978 /* Map the lower ids from the parent user namespace to the 983 979 * kernel global id space. ··· 999 1003 1000 1004 e->lower_first = lower_first; 1001 1005 } 1006 + 1007 + /* 1008 + * If we want to use binary search for lookup, this clones the extent 1009 + * array and sorts both copies. 1010 + */ 1011 + ret = sort_idmaps(&new_map); 1012 + if (ret < 0) 1013 + goto out; 1002 1014 1003 1015 /* Install the map */ 1004 1016 if (new_map.nr_extents <= UID_GID_MAP_MAX_BASE_EXTENTS) {
+2 -3
lib/debugobjects.c
··· 135 135 if (!new) 136 136 return; 137 137 138 - kmemleak_ignore(new); 139 138 raw_spin_lock_irqsave(&pool_lock, flags); 140 139 hlist_add_head(&new->node, &obj_pool); 141 140 debug_objects_allocated++; ··· 1127 1128 obj = kmem_cache_zalloc(obj_cache, GFP_KERNEL); 1128 1129 if (!obj) 1129 1130 goto free; 1130 - kmemleak_ignore(obj); 1131 1131 hlist_add_head(&obj->node, &objects); 1132 1132 } 1133 1133 ··· 1182 1184 1183 1185 obj_cache = kmem_cache_create("debug_objects_cache", 1184 1186 sizeof (struct debug_obj), 0, 1185 - SLAB_DEBUG_OBJECTS, NULL); 1187 + SLAB_DEBUG_OBJECTS | SLAB_NOLEAKTRACE, 1188 + NULL); 1186 1189 1187 1190 if (!obj_cache || debug_objects_replace_static_objects()) { 1188 1191 debug_objects_enabled = 0;
+37 -1
lib/iov_iter.c
··· 560 560 return bytes; 561 561 } 562 562 563 + static size_t csum_and_copy_to_pipe_iter(const void *addr, size_t bytes, 564 + __wsum *csum, struct iov_iter *i) 565 + { 566 + struct pipe_inode_info *pipe = i->pipe; 567 + size_t n, r; 568 + size_t off = 0; 569 + __wsum sum = *csum, next; 570 + int idx; 571 + 572 + if (!sanity(i)) 573 + return 0; 574 + 575 + bytes = n = push_pipe(i, bytes, &idx, &r); 576 + if (unlikely(!n)) 577 + return 0; 578 + for ( ; n; idx = next_idx(idx, pipe), r = 0) { 579 + size_t chunk = min_t(size_t, n, PAGE_SIZE - r); 580 + char *p = kmap_atomic(pipe->bufs[idx].page); 581 + next = csum_partial_copy_nocheck(addr, p + r, chunk, 0); 582 + sum = csum_block_add(sum, next, off); 583 + kunmap_atomic(p); 584 + i->idx = idx; 585 + i->iov_offset = r + chunk; 586 + n -= chunk; 587 + off += chunk; 588 + addr += chunk; 589 + } 590 + i->count -= bytes; 591 + *csum = sum; 592 + return bytes; 593 + } 594 + 563 595 size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i) 564 596 { 565 597 const char *from = addr; ··· 1470 1438 const char *from = addr; 1471 1439 __wsum sum, next; 1472 1440 size_t off = 0; 1441 + 1442 + if (unlikely(iov_iter_is_pipe(i))) 1443 + return csum_and_copy_to_pipe_iter(addr, bytes, csum, i); 1444 + 1473 1445 sum = *csum; 1474 - if (unlikely(iov_iter_is_pipe(i) || iov_iter_is_discard(i))) { 1446 + if (unlikely(iov_iter_is_discard(i))) { 1475 1447 WARN_ON(1); /* for now */ 1476 1448 return 0; 1477 1449 }
+2 -2
lib/raid6/test/Makefile
··· 27 27 CFLAGS += -I../../../arch/arm/include -mfpu=neon 28 28 HAS_NEON = yes 29 29 endif 30 - ifeq ($(ARCH),arm64) 30 + ifeq ($(ARCH),aarch64) 31 31 CFLAGS += -I../../../arch/arm64/include 32 32 HAS_NEON = yes 33 33 endif ··· 41 41 gcc -c -x assembler - >&/dev/null && \ 42 42 rm ./-.o && echo -DCONFIG_AS_AVX512=1) 43 43 else ifeq ($(HAS_NEON),yes) 44 - OBJS += neon.o neon1.o neon2.o neon4.o neon8.o 44 + OBJS += neon.o neon1.o neon2.o neon4.o neon8.o recov_neon.o recov_neon_inner.o 45 45 CFLAGS += -DCONFIG_KERNEL_MODE_NEON=1 46 46 else 47 47 HAS_ALTIVEC := $(shell printf '\#include <altivec.h>\nvector int a;\n' |\
+1
lib/test_firmware.c
··· 837 837 if (req->fw->size > PAGE_SIZE) { 838 838 pr_err("Testing interface must use PAGE_SIZE firmware for now\n"); 839 839 rc = -EINVAL; 840 + goto out; 840 841 } 841 842 memcpy(buf, req->fw->data, req->fw->size); 842 843
+1 -1
lib/test_hexdump.c
··· 99 99 const char *q = *result++; 100 100 size_t amount = strlen(q); 101 101 102 - strncpy(p, q, amount); 102 + memcpy(p, q, amount); 103 103 p += amount; 104 104 105 105 *p++ = ' ';
-1
lib/test_kmod.c
··· 1214 1214 1215 1215 dev_info(test_dev->dev, "removing interface\n"); 1216 1216 misc_deregister(&test_dev->misc_dev); 1217 - kfree(&test_dev->misc_dev.name); 1218 1217 1219 1218 mutex_unlock(&test_dev->config_mutex); 1220 1219 mutex_unlock(&test_dev->trigger_mutex);
+47 -3
lib/test_xarray.c
··· 208 208 XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_2)); 209 209 210 210 /* We should see two elements in the array */ 211 + rcu_read_lock(); 211 212 xas_for_each(&xas, entry, ULONG_MAX) 212 213 seen++; 214 + rcu_read_unlock(); 213 215 XA_BUG_ON(xa, seen != 2); 214 216 215 217 /* One of which is marked */ 216 218 xas_set(&xas, 0); 217 219 seen = 0; 220 + rcu_read_lock(); 218 221 xas_for_each_marked(&xas, entry, ULONG_MAX, XA_MARK_0) 219 222 seen++; 223 + rcu_read_unlock(); 220 224 XA_BUG_ON(xa, seen != 1); 221 225 } 222 226 XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_0)); ··· 377 373 xa_erase_index(xa, 12345678); 378 374 XA_BUG_ON(xa, !xa_empty(xa)); 379 375 376 + /* And so does xa_insert */ 377 + xa_reserve(xa, 12345678, GFP_KERNEL); 378 + XA_BUG_ON(xa, xa_insert(xa, 12345678, xa_mk_value(12345678), 0) != 0); 379 + xa_erase_index(xa, 12345678); 380 + XA_BUG_ON(xa, !xa_empty(xa)); 381 + 380 382 /* Can iterate through a reserved entry */ 381 383 xa_store_index(xa, 5, GFP_KERNEL); 382 384 xa_reserve(xa, 6, GFP_KERNEL); ··· 446 436 XA_BUG_ON(xa, xa_load(xa, max) != NULL); 447 437 XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL); 448 438 439 + xas_lock(&xas); 449 440 XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(min)) != xa_mk_value(index)); 441 + xas_unlock(&xas); 450 442 XA_BUG_ON(xa, xa_load(xa, min) != xa_mk_value(min)); 451 443 XA_BUG_ON(xa, xa_load(xa, max - 1) != xa_mk_value(min)); 452 444 XA_BUG_ON(xa, xa_load(xa, max) != NULL); ··· 464 452 XA_STATE(xas, xa, index); 465 453 xa_store_order(xa, index, order, xa_mk_value(0), GFP_KERNEL); 466 454 455 + xas_lock(&xas); 467 456 XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(1)) != xa_mk_value(0)); 468 457 XA_BUG_ON(xa, xas.xa_index != index); 469 458 XA_BUG_ON(xa, xas_store(&xas, NULL) != xa_mk_value(1)); 459 + xas_unlock(&xas); 470 460 XA_BUG_ON(xa, !xa_empty(xa)); 471 461 } 472 462 #endif ··· 512 498 rcu_read_unlock(); 513 499 514 500 /* We can erase multiple values with a single store */ 515 - xa_store_order(xa, 0, 63, NULL, 
GFP_KERNEL); 501 + xa_store_order(xa, 0, BITS_PER_LONG - 1, NULL, GFP_KERNEL); 516 502 XA_BUG_ON(xa, !xa_empty(xa)); 517 503 518 504 /* Even when the first slot is empty but the others aren't */ ··· 716 702 } 717 703 } 718 704 719 - static noinline void check_find(struct xarray *xa) 705 + static noinline void check_find_1(struct xarray *xa) 720 706 { 721 707 unsigned long i, j, k; 722 708 ··· 762 748 XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_0)); 763 749 } 764 750 XA_BUG_ON(xa, !xa_empty(xa)); 751 + } 752 + 753 + static noinline void check_find_2(struct xarray *xa) 754 + { 755 + void *entry; 756 + unsigned long i, j, index = 0; 757 + 758 + xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) { 759 + XA_BUG_ON(xa, true); 760 + } 761 + 762 + for (i = 0; i < 1024; i++) { 763 + xa_store_index(xa, index, GFP_KERNEL); 764 + j = 0; 765 + index = 0; 766 + xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) { 767 + XA_BUG_ON(xa, xa_mk_value(index) != entry); 768 + XA_BUG_ON(xa, index != j++); 769 + } 770 + } 771 + 772 + xa_destroy(xa); 773 + } 774 + 775 + static noinline void check_find(struct xarray *xa) 776 + { 777 + check_find_1(xa); 778 + check_find_2(xa); 765 779 check_multi_find(xa); 766 780 check_multi_find_2(xa); 767 781 } ··· 1109 1067 __check_store_range(xa, 4095 + i, 4095 + j); 1110 1068 __check_store_range(xa, 4096 + i, 4096 + j); 1111 1069 __check_store_range(xa, 123456 + i, 123456 + j); 1112 - __check_store_range(xa, UINT_MAX + i, UINT_MAX + j); 1070 + __check_store_range(xa, (1 << 24) + i, (1 << 24) + j); 1113 1071 } 1114 1072 } 1115 1073 } ··· 1188 1146 XA_STATE(xas, xa, 1 << order); 1189 1147 1190 1148 xa_store_order(xa, 0, order, xa, GFP_KERNEL); 1149 + rcu_read_lock(); 1191 1150 xas_load(&xas); 1192 1151 XA_BUG_ON(xa, xas.xa_node->count == 0); 1193 1152 XA_BUG_ON(xa, xas.xa_node->count > (1 << order)); 1194 1153 XA_BUG_ON(xa, xas.xa_node->nr_values != 0); 1154 + rcu_read_unlock(); 1195 1155 1196 1156 xa_store_order(xa, 1 << order, order, xa_mk_value(1 << 
order), 1197 1157 GFP_KERNEL);
+1 -2
lib/ubsan.c
··· 427 427 EXPORT_SYMBOL(__ubsan_handle_shift_out_of_bounds); 428 428 429 429 430 - void __noreturn 431 - __ubsan_handle_builtin_unreachable(struct unreachable_data *data) 430 + void __ubsan_handle_builtin_unreachable(struct unreachable_data *data) 432 431 { 433 432 unsigned long flags; 434 433
+60 -79
lib/xarray.c
··· 610 610 * (see the xa_cmpxchg() implementation for an example). 611 611 * 612 612 * Return: If the slot already existed, returns the contents of this slot. 613 - * If the slot was newly created, returns NULL. If it failed to create the 614 - * slot, returns NULL and indicates the error in @xas. 613 + * If the slot was newly created, returns %NULL. If it failed to create the 614 + * slot, returns %NULL and indicates the error in @xas. 615 615 */ 616 616 static void *xas_create(struct xa_state *xas) 617 617 { ··· 1334 1334 XA_STATE(xas, xa, index); 1335 1335 return xas_result(&xas, xas_store(&xas, NULL)); 1336 1336 } 1337 - EXPORT_SYMBOL_GPL(__xa_erase); 1337 + EXPORT_SYMBOL(__xa_erase); 1338 1338 1339 1339 /** 1340 - * xa_store() - Store this entry in the XArray. 1340 + * xa_erase() - Erase this entry from the XArray. 1341 1341 * @xa: XArray. 1342 - * @index: Index into array. 1343 - * @entry: New entry. 1344 - * @gfp: Memory allocation flags. 1342 + * @index: Index of entry. 1345 1343 * 1346 - * After this function returns, loads from this index will return @entry. 1347 - * Storing into an existing multislot entry updates the entry of every index. 1348 - * The marks associated with @index are unaffected unless @entry is %NULL. 1344 + * This function is the equivalent of calling xa_store() with %NULL as 1345 + * the third argument. The XArray does not need to allocate memory, so 1346 + * the user does not need to provide GFP flags. 1349 1347 * 1350 - * Context: Process context. Takes and releases the xa_lock. May sleep 1351 - * if the @gfp flags permit. 1352 - * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry 1353 - * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation 1354 - * failed. 1348 + * Context: Any context. Takes and releases the xa_lock. 1349 + * Return: The entry which used to be at this index. 
1355 1350 */ 1356 - void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp) 1351 + void *xa_erase(struct xarray *xa, unsigned long index) 1357 1352 { 1358 - XA_STATE(xas, xa, index); 1359 - void *curr; 1353 + void *entry; 1360 1354 1361 - if (WARN_ON_ONCE(xa_is_internal(entry))) 1362 - return XA_ERROR(-EINVAL); 1355 + xa_lock(xa); 1356 + entry = __xa_erase(xa, index); 1357 + xa_unlock(xa); 1363 1358 1364 - do { 1365 - xas_lock(&xas); 1366 - curr = xas_store(&xas, entry); 1367 - if (xa_track_free(xa) && entry) 1368 - xas_clear_mark(&xas, XA_FREE_MARK); 1369 - xas_unlock(&xas); 1370 - } while (xas_nomem(&xas, gfp)); 1371 - 1372 - return xas_result(&xas, curr); 1359 + return entry; 1373 1360 } 1374 - EXPORT_SYMBOL(xa_store); 1361 + EXPORT_SYMBOL(xa_erase); 1375 1362 1376 1363 /** 1377 1364 * __xa_store() - Store this entry in the XArray. ··· 1382 1395 1383 1396 if (WARN_ON_ONCE(xa_is_internal(entry))) 1384 1397 return XA_ERROR(-EINVAL); 1398 + if (xa_track_free(xa) && !entry) 1399 + entry = XA_ZERO_ENTRY; 1385 1400 1386 1401 do { 1387 1402 curr = xas_store(&xas, entry); 1388 - if (xa_track_free(xa) && entry) 1403 + if (xa_track_free(xa)) 1389 1404 xas_clear_mark(&xas, XA_FREE_MARK); 1390 1405 } while (__xas_nomem(&xas, gfp)); 1391 1406 ··· 1396 1407 EXPORT_SYMBOL(__xa_store); 1397 1408 1398 1409 /** 1399 - * xa_cmpxchg() - Conditionally replace an entry in the XArray. 1410 + * xa_store() - Store this entry in the XArray. 1400 1411 * @xa: XArray. 1401 1412 * @index: Index into array. 1402 - * @old: Old value to test against. 1403 - * @entry: New value to place in array. 1413 + * @entry: New entry. 1404 1414 * @gfp: Memory allocation flags. 1405 1415 * 1406 - * If the entry at @index is the same as @old, replace it with @entry. 1407 - * If the return value is equal to @old, then the exchange was successful. 1416 + * After this function returns, loads from this index will return @entry. 
1417 + * Storing into an existing multislot entry updates the entry of every index. 1418 + * The marks associated with @index are unaffected unless @entry is %NULL. 1408 1419 * 1409 - * Context: Process context. Takes and releases the xa_lock. May sleep 1410 - * if the @gfp flags permit. 1411 - * Return: The old value at this index or xa_err() if an error happened. 1420 + * Context: Any context. Takes and releases the xa_lock. 1421 + * May sleep if the @gfp flags permit. 1422 + * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry 1423 + * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation 1424 + * failed. 1412 1425 */ 1413 - void *xa_cmpxchg(struct xarray *xa, unsigned long index, 1414 - void *old, void *entry, gfp_t gfp) 1426 + void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp) 1415 1427 { 1416 - XA_STATE(xas, xa, index); 1417 1428 void *curr; 1418 1429 1419 - if (WARN_ON_ONCE(xa_is_internal(entry))) 1420 - return XA_ERROR(-EINVAL); 1430 + xa_lock(xa); 1431 + curr = __xa_store(xa, index, entry, gfp); 1432 + xa_unlock(xa); 1421 1433 1422 - do { 1423 - xas_lock(&xas); 1424 - curr = xas_load(&xas); 1425 - if (curr == XA_ZERO_ENTRY) 1426 - curr = NULL; 1427 - if (curr == old) { 1428 - xas_store(&xas, entry); 1429 - if (xa_track_free(xa) && entry) 1430 - xas_clear_mark(&xas, XA_FREE_MARK); 1431 - } 1432 - xas_unlock(&xas); 1433 - } while (xas_nomem(&xas, gfp)); 1434 - 1435 - return xas_result(&xas, curr); 1434 + return curr; 1436 1435 } 1437 - EXPORT_SYMBOL(xa_cmpxchg); 1436 + EXPORT_SYMBOL(xa_store); 1438 1437 1439 1438 /** 1440 1439 * __xa_cmpxchg() - Store this entry in the XArray. 
··· 1448 1471 1449 1472 if (WARN_ON_ONCE(xa_is_internal(entry))) 1450 1473 return XA_ERROR(-EINVAL); 1474 + if (xa_track_free(xa) && !entry) 1475 + entry = XA_ZERO_ENTRY; 1451 1476 1452 1477 do { 1453 1478 curr = xas_load(&xas); ··· 1457 1478 curr = NULL; 1458 1479 if (curr == old) { 1459 1480 xas_store(&xas, entry); 1460 - if (xa_track_free(xa) && entry) 1481 + if (xa_track_free(xa)) 1461 1482 xas_clear_mark(&xas, XA_FREE_MARK); 1462 1483 } 1463 1484 } while (__xas_nomem(&xas, gfp)); ··· 1467 1488 EXPORT_SYMBOL(__xa_cmpxchg); 1468 1489 1469 1490 /** 1470 - * xa_reserve() - Reserve this index in the XArray. 1491 + * __xa_reserve() - Reserve this index in the XArray. 1471 1492 * @xa: XArray. 1472 1493 * @index: Index into array. 1473 1494 * @gfp: Memory allocation flags. ··· 1475 1496 * Ensures there is somewhere to store an entry at @index in the array. 1476 1497 * If there is already something stored at @index, this function does 1477 1498 * nothing. If there was nothing there, the entry is marked as reserved. 1478 - * Loads from @index will continue to see a %NULL pointer until a 1479 - * subsequent store to @index. 1499 + * Loading from a reserved entry returns a %NULL pointer. 1480 1500 * 1481 1501 * If you do not use the entry that you have reserved, call xa_release() 1482 1502 * or xa_erase() to free any unnecessary memory. 1483 1503 * 1484 - * Context: Process context. Takes and releases the xa_lock, IRQ or BH safe 1485 - * if specified in XArray flags. May sleep if the @gfp flags permit. 1504 + * Context: Any context. Expects the xa_lock to be held on entry. May 1505 + * release the lock, sleep and reacquire the lock if the @gfp flags permit. 1486 1506 * Return: 0 if the reservation succeeded or -ENOMEM if it failed. 
1487 1507 */ 1488 - int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp) 1508 + int __xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp) 1489 1509 { 1490 1510 XA_STATE(xas, xa, index); 1491 - unsigned int lock_type = xa_lock_type(xa); 1492 1511 void *curr; 1493 1512 1494 1513 do { 1495 - xas_lock_type(&xas, lock_type); 1496 1514 curr = xas_load(&xas); 1497 - if (!curr) 1515 + if (!curr) { 1498 1516 xas_store(&xas, XA_ZERO_ENTRY); 1499 - xas_unlock_type(&xas, lock_type); 1500 - } while (xas_nomem(&xas, gfp)); 1517 + if (xa_track_free(xa)) 1518 + xas_clear_mark(&xas, XA_FREE_MARK); 1519 + } 1520 + } while (__xas_nomem(&xas, gfp)); 1501 1521 1502 1522 return xas_error(&xas); 1503 1523 } 1504 - EXPORT_SYMBOL(xa_reserve); 1524 + EXPORT_SYMBOL(__xa_reserve); 1505 1525 1506 1526 #ifdef CONFIG_XARRAY_MULTI 1507 1527 static void xas_set_range(struct xa_state *xas, unsigned long first, ··· 1565 1587 do { 1566 1588 xas_lock(&xas); 1567 1589 if (entry) { 1568 - unsigned int order = (last == ~0UL) ? 64 : 1569 - ilog2(last + 1); 1590 + unsigned int order = BITS_PER_LONG; 1591 + if (last + 1) 1592 + order = __ffs(last + 1); 1570 1593 xas_set_order(&xas, last, order); 1571 1594 xas_create(&xas); 1572 1595 if (xas_error(&xas)) ··· 1641 1662 * @index: Index of entry. 1642 1663 * @mark: Mark number. 1643 1664 * 1644 - * Attempting to set a mark on a NULL entry does not succeed. 1665 + * Attempting to set a mark on a %NULL entry does not succeed. 1645 1666 * 1646 1667 * Context: Any context. Expects xa_lock to be held on entry. 1647 1668 */ ··· 1653 1674 if (entry) 1654 1675 xas_set_mark(&xas, mark); 1655 1676 } 1656 - EXPORT_SYMBOL_GPL(__xa_set_mark); 1677 + EXPORT_SYMBOL(__xa_set_mark); 1657 1678 1658 1679 /** 1659 1680 * __xa_clear_mark() - Clear this mark on this entry while locked. 
··· 1671 1692 if (entry) 1672 1693 xas_clear_mark(&xas, mark); 1673 1694 } 1674 - EXPORT_SYMBOL_GPL(__xa_clear_mark); 1695 + EXPORT_SYMBOL(__xa_clear_mark); 1675 1696 1676 1697 /** 1677 1698 * xa_get_mark() - Inquire whether this mark is set on this entry. ··· 1711 1732 * @index: Index of entry. 1712 1733 * @mark: Mark number. 1713 1734 * 1714 - * Attempting to set a mark on a NULL entry does not succeed. 1735 + * Attempting to set a mark on a %NULL entry does not succeed. 1715 1736 * 1716 1737 * Context: Process context. Takes and releases the xa_lock. 1717 1738 */ ··· 1808 1829 entry = xas_find_marked(&xas, max, filter); 1809 1830 else 1810 1831 entry = xas_find(&xas, max); 1832 + if (xas.xa_node == XAS_BOUNDS) 1833 + break; 1811 1834 if (xas.xa_shift) { 1812 1835 if (xas.xa_index & ((1UL << xas.xa_shift) - 1)) 1813 1836 continue; ··· 1880 1899 * 1881 1900 * The @filter may be an XArray mark value, in which case entries which are 1882 1901 * marked with that mark will be copied. It may also be %XA_PRESENT, in 1883 - * which case all entries which are not NULL will be copied. 1902 + * which case all entries which are not %NULL will be copied. 1884 1903 * 1885 1904 * The entries returned may not represent a snapshot of the XArray at a 1886 1905 * moment in time. For example, if another thread stores to index 5, then
+9 -4
mm/gup.c
··· 385 385 * @vma: vm_area_struct mapping @address 386 386 * @address: virtual address to look up 387 387 * @flags: flags modifying lookup behaviour 388 - * @page_mask: on output, *page_mask is set according to the size of the page 388 + * @ctx: contains dev_pagemap for %ZONE_DEVICE memory pinning and a 389 + * pointer to output page_mask 389 390 * 390 391 * @flags can have FOLL_ flags set, defined in <linux/mm.h> 391 392 * 392 - * Returns the mapped (struct page *), %NULL if no mapping exists, or 393 + * When getting pages from ZONE_DEVICE memory, the @ctx->pgmap caches 394 + * the device's dev_pagemap metadata to avoid repeating expensive lookups. 395 + * 396 + * On output, the @ctx->page_mask is set according to the size of the page. 397 + * 398 + * Return: the mapped (struct page *), %NULL if no mapping exists, or 393 399 * an error pointer if there is a mapping to something not represented 394 400 * by a page descriptor (see also vm_normal_page()). 395 401 */ ··· 702 696 if (!vma || start >= vma->vm_end) { 703 697 vma = find_extend_vma(mm, start); 704 698 if (!vma && in_gate_area(mm, start)) { 705 - int ret; 706 699 ret = get_gate_page(mm, start & PAGE_MASK, 707 700 gup_flags, &vma, 708 701 pages ? &pages[i] : NULL); 709 702 if (ret) 710 - return i ? : ret; 703 + goto out; 711 704 ctx.page_mask = 0; 712 705 goto next_page; 713 706 }
+25 -18
mm/huge_memory.c
··· 2350 2350 } 2351 2351 } 2352 2352 2353 - static void freeze_page(struct page *page) 2353 + static void unmap_page(struct page *page) 2354 2354 { 2355 2355 enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS | 2356 2356 TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD; ··· 2365 2365 VM_BUG_ON_PAGE(!unmap_success, page); 2366 2366 } 2367 2367 2368 - static void unfreeze_page(struct page *page) 2368 + static void remap_page(struct page *page) 2369 2369 { 2370 2370 int i; 2371 2371 if (PageTransHuge(page)) { ··· 2402 2402 (1L << PG_unevictable) | 2403 2403 (1L << PG_dirty))); 2404 2404 2405 + /* ->mapping in first tail page is compound_mapcount */ 2406 + VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING, 2407 + page_tail); 2408 + page_tail->mapping = head->mapping; 2409 + page_tail->index = head->index + tail; 2410 + 2405 2411 /* Page flags must be visible before we make the page non-compound. */ 2406 2412 smp_wmb(); 2407 2413 ··· 2428 2422 if (page_is_idle(head)) 2429 2423 set_page_idle(page_tail); 2430 2424 2431 - /* ->mapping in first tail page is compound_mapcount */ 2432 - VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING, 2433 - page_tail); 2434 - page_tail->mapping = head->mapping; 2435 - 2436 - page_tail->index = head->index + tail; 2437 2425 page_cpupid_xchg_last(page_tail, page_cpupid_last(head)); 2438 2426 2439 2427 /* ··· 2439 2439 } 2440 2440 2441 2441 static void __split_huge_page(struct page *page, struct list_head *list, 2442 - unsigned long flags) 2442 + pgoff_t end, unsigned long flags) 2443 2443 { 2444 2444 struct page *head = compound_head(page); 2445 2445 struct zone *zone = page_zone(head); 2446 2446 struct lruvec *lruvec; 2447 - pgoff_t end = -1; 2448 2447 int i; 2449 2448 2450 2449 lruvec = mem_cgroup_page_lruvec(head, zone->zone_pgdat); 2451 2450 2452 2451 /* complete memcg works before add pages to LRU */ 2453 2452 mem_cgroup_split_huge_fixup(head); 2454 - 2455 - if (!PageAnon(page)) 2456 - end = 
DIV_ROUND_UP(i_size_read(head->mapping->host), PAGE_SIZE); 2457 2453 2458 2454 for (i = HPAGE_PMD_NR - 1; i >= 1; i--) { 2459 2455 __split_huge_page_tail(head, i, lruvec, list); ··· 2479 2483 2480 2484 spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags); 2481 2485 2482 - unfreeze_page(head); 2486 + remap_page(head); 2483 2487 2484 2488 for (i = 0; i < HPAGE_PMD_NR; i++) { 2485 2489 struct page *subpage = head + i; ··· 2622 2626 int count, mapcount, extra_pins, ret; 2623 2627 bool mlocked; 2624 2628 unsigned long flags; 2629 + pgoff_t end; 2625 2630 2626 2631 VM_BUG_ON_PAGE(is_huge_zero_page(page), page); 2627 2632 VM_BUG_ON_PAGE(!PageLocked(page), page); ··· 2645 2648 ret = -EBUSY; 2646 2649 goto out; 2647 2650 } 2651 + end = -1; 2648 2652 mapping = NULL; 2649 2653 anon_vma_lock_write(anon_vma); 2650 2654 } else { ··· 2659 2661 2660 2662 anon_vma = NULL; 2661 2663 i_mmap_lock_read(mapping); 2664 + 2665 + /* 2666 + *__split_huge_page() may need to trim off pages beyond EOF: 2667 + * but on 32-bit, i_size_read() takes an irq-unsafe seqlock, 2668 + * which cannot be nested inside the page tree lock. So note 2669 + * end now: i_size itself may be changed at any moment, but 2670 + * head page lock is good enough to serialize the trimming. 
2671 + */ 2672 + end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE); 2662 2673 } 2663 2674 2664 2675 /* 2665 - * Racy check if we can split the page, before freeze_page() will 2676 + * Racy check if we can split the page, before unmap_page() will 2666 2677 * split PMDs 2667 2678 */ 2668 2679 if (!can_split_huge_page(head, &extra_pins)) { ··· 2680 2673 } 2681 2674 2682 2675 mlocked = PageMlocked(page); 2683 - freeze_page(head); 2676 + unmap_page(head); 2684 2677 VM_BUG_ON_PAGE(compound_mapcount(head), head); 2685 2678 2686 2679 /* Make sure the page is not on per-CPU pagevec as it takes pin */ ··· 2714 2707 if (mapping) 2715 2708 __dec_node_page_state(page, NR_SHMEM_THPS); 2716 2709 spin_unlock(&pgdata->split_queue_lock); 2717 - __split_huge_page(page, list, flags); 2710 + __split_huge_page(page, list, end, flags); 2718 2711 if (PageSwapCache(head)) { 2719 2712 swp_entry_t entry = { .val = page_private(head) }; 2720 2713 ··· 2734 2727 fail: if (mapping) 2735 2728 xa_unlock(&mapping->i_pages); 2736 2729 spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags); 2737 - unfreeze_page(head); 2730 + remap_page(head); 2738 2731 ret = -EBUSY; 2739 2732 } 2740 2733
+20 -5
mm/hugetlb.c
··· 3233 3233 int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src, 3234 3234 struct vm_area_struct *vma) 3235 3235 { 3236 - pte_t *src_pte, *dst_pte, entry; 3236 + pte_t *src_pte, *dst_pte, entry, dst_entry; 3237 3237 struct page *ptepage; 3238 3238 unsigned long addr; 3239 3239 int cow; ··· 3261 3261 break; 3262 3262 } 3263 3263 3264 - /* If the pagetables are shared don't copy or take references */ 3265 - if (dst_pte == src_pte) 3264 + /* 3265 + * If the pagetables are shared don't copy or take references. 3266 + * dst_pte == src_pte is the common case of src/dest sharing. 3267 + * 3268 + * However, src could have 'unshared' and dst shares with 3269 + * another vma. If dst_pte !none, this implies sharing. 3270 + * Check here before taking page table lock, and once again 3271 + * after taking the lock below. 3272 + */ 3273 + dst_entry = huge_ptep_get(dst_pte); 3274 + if ((dst_pte == src_pte) || !huge_pte_none(dst_entry)) 3266 3275 continue; 3267 3276 3268 3277 dst_ptl = huge_pte_lock(h, dst, dst_pte); 3269 3278 src_ptl = huge_pte_lockptr(h, src, src_pte); 3270 3279 spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING); 3271 3280 entry = huge_ptep_get(src_pte); 3272 - if (huge_pte_none(entry)) { /* skip none entry */ 3281 + dst_entry = huge_ptep_get(dst_pte); 3282 + if (huge_pte_none(entry) || !huge_pte_none(dst_entry)) { 3283 + /* 3284 + * Skip if src entry none. Also, skip in the 3285 + * unlikely case dst entry !none as this implies 3286 + * sharing with another vma. 3287 + */ 3273 3288 ; 3274 3289 } else if (unlikely(is_hugetlb_entry_migration(entry) || 3275 3290 is_hugetlb_entry_hwpoisoned(entry))) { ··· 4080 4065 4081 4066 /* fallback to copy_from_user outside mmap_sem */ 4082 4067 if (unlikely(ret)) { 4083 - ret = -EFAULT; 4068 + ret = -ENOENT; 4084 4069 *pagep = page; 4085 4070 /* don't free the page */ 4086 4071 goto out;
+82 -60
mm/khugepaged.c
··· 1287 1287 * collapse_shmem - collapse small tmpfs/shmem pages into huge one. 1288 1288 * 1289 1289 * Basic scheme is simple, details are more complex: 1290 - * - allocate and freeze a new huge page; 1290 + * - allocate and lock a new huge page; 1291 1291 * - scan page cache replacing old pages with the new one 1292 1292 * + swap in pages if necessary; 1293 1293 * + fill in gaps; ··· 1295 1295 * - if replacing succeeds: 1296 1296 * + copy data over; 1297 1297 * + free old pages; 1298 - * + unfreeze huge page; 1298 + * + unlock huge page; 1299 1299 * - if replacing failed; 1300 1300 * + put all pages back and unfreeze them; 1301 1301 * + restore gaps in the page cache; 1302 - * + free huge page; 1302 + * + unlock and free huge page; 1303 1303 */ 1304 1304 static void collapse_shmem(struct mm_struct *mm, 1305 1305 struct address_space *mapping, pgoff_t start, ··· 1329 1329 goto out; 1330 1330 } 1331 1331 1332 - new_page->index = start; 1333 - new_page->mapping = mapping; 1334 - __SetPageSwapBacked(new_page); 1335 - __SetPageLocked(new_page); 1336 - BUG_ON(!page_ref_freeze(new_page, 1)); 1337 - 1338 - /* 1339 - * At this point the new_page is 'frozen' (page_count() is zero), 1340 - * locked and not up-to-date. It's safe to insert it into the page 1341 - * cache, because nobody would be able to map it or use it in other 1342 - * way until we unfreeze it. 
1343 - */ 1344 - 1345 1332 /* This will be less messy when we use multi-index entries */ 1346 1333 do { 1347 1334 xas_lock_irq(&xas); ··· 1336 1349 if (!xas_error(&xas)) 1337 1350 break; 1338 1351 xas_unlock_irq(&xas); 1339 - if (!xas_nomem(&xas, GFP_KERNEL)) 1352 + if (!xas_nomem(&xas, GFP_KERNEL)) { 1353 + mem_cgroup_cancel_charge(new_page, memcg, true); 1354 + result = SCAN_FAIL; 1340 1355 goto out; 1356 + } 1341 1357 } while (1); 1358 + 1359 + __SetPageLocked(new_page); 1360 + __SetPageSwapBacked(new_page); 1361 + new_page->index = start; 1362 + new_page->mapping = mapping; 1363 + 1364 + /* 1365 + * At this point the new_page is locked and not up-to-date. 1366 + * It's safe to insert it into the page cache, because nobody would 1367 + * be able to map it or use it in another way until we unlock it. 1368 + */ 1342 1369 1343 1370 xas_set(&xas, start); 1344 1371 for (index = start; index < end; index++) { ··· 1360 1359 1361 1360 VM_BUG_ON(index != xas.xa_index); 1362 1361 if (!page) { 1362 + /* 1363 + * Stop if extent has been truncated or hole-punched, 1364 + * and is now completely empty. 
1365 + */ 1366 + if (index == start) { 1367 + if (!xas_next_entry(&xas, end - 1)) { 1368 + result = SCAN_TRUNCATED; 1369 + goto xa_locked; 1370 + } 1371 + xas_set(&xas, index); 1372 + } 1363 1373 if (!shmem_charge(mapping->host, 1)) { 1364 1374 result = SCAN_FAIL; 1365 - break; 1375 + goto xa_locked; 1366 1376 } 1367 1377 xas_store(&xas, new_page + (index % HPAGE_PMD_NR)); 1368 1378 nr_none++; ··· 1388 1376 result = SCAN_FAIL; 1389 1377 goto xa_unlocked; 1390 1378 } 1391 - xas_lock_irq(&xas); 1392 - xas_set(&xas, index); 1393 1379 } else if (trylock_page(page)) { 1394 1380 get_page(page); 1381 + xas_unlock_irq(&xas); 1395 1382 } else { 1396 1383 result = SCAN_PAGE_LOCK; 1397 - break; 1384 + goto xa_locked; 1398 1385 } 1399 1386 1400 1387 /* ··· 1402 1391 */ 1403 1392 VM_BUG_ON_PAGE(!PageLocked(page), page); 1404 1393 VM_BUG_ON_PAGE(!PageUptodate(page), page); 1405 - VM_BUG_ON_PAGE(PageTransCompound(page), page); 1394 + 1395 + /* 1396 + * If file was truncated then extended, or hole-punched, before 1397 + * we locked the first page, then a THP might be there already. 1398 + */ 1399 + if (PageTransCompound(page)) { 1400 + result = SCAN_PAGE_COMPOUND; 1401 + goto out_unlock; 1402 + } 1406 1403 1407 1404 if (page_mapping(page) != mapping) { 1408 1405 result = SCAN_TRUNCATED; 1409 1406 goto out_unlock; 1410 1407 } 1411 - xas_unlock_irq(&xas); 1412 1408 1413 1409 if (isolate_lru_page(page)) { 1414 1410 result = SCAN_DEL_PAGE_LRU; 1415 - goto out_isolate_failed; 1411 + goto out_unlock; 1416 1412 } 1417 1413 1418 1414 if (page_mapped(page)) ··· 1439 1421 */ 1440 1422 if (!page_ref_freeze(page, 3)) { 1441 1423 result = SCAN_PAGE_COUNT; 1442 - goto out_lru; 1424 + xas_unlock_irq(&xas); 1425 + putback_lru_page(page); 1426 + goto out_unlock; 1443 1427 } 1444 1428 1445 1429 /* ··· 1453 1433 /* Finally, replace with the new page. 
*/ 1454 1434 xas_store(&xas, new_page + (index % HPAGE_PMD_NR)); 1455 1435 continue; 1456 - out_lru: 1457 - xas_unlock_irq(&xas); 1458 - putback_lru_page(page); 1459 - out_isolate_failed: 1460 - unlock_page(page); 1461 - put_page(page); 1462 - goto xa_unlocked; 1463 1436 out_unlock: 1464 1437 unlock_page(page); 1465 1438 put_page(page); 1466 - break; 1439 + goto xa_unlocked; 1467 1440 } 1468 - xas_unlock_irq(&xas); 1469 1441 1442 + __inc_node_page_state(new_page, NR_SHMEM_THPS); 1443 + if (nr_none) { 1444 + struct zone *zone = page_zone(new_page); 1445 + 1446 + __mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none); 1447 + __mod_node_page_state(zone->zone_pgdat, NR_SHMEM, nr_none); 1448 + } 1449 + 1450 + xa_locked: 1451 + xas_unlock_irq(&xas); 1470 1452 xa_unlocked: 1453 + 1471 1454 if (result == SCAN_SUCCEED) { 1472 1455 struct page *page, *tmp; 1473 - struct zone *zone = page_zone(new_page); 1474 1456 1475 1457 /* 1476 1458 * Replacing old pages with new one has succeeded, now we 1477 1459 * need to copy the content and free the old pages. 
1478 1460 */ 1461 + index = start; 1479 1462 list_for_each_entry_safe(page, tmp, &pagelist, lru) { 1463 + while (index < page->index) { 1464 + clear_highpage(new_page + (index % HPAGE_PMD_NR)); 1465 + index++; 1466 + } 1480 1467 copy_highpage(new_page + (page->index % HPAGE_PMD_NR), 1481 1468 page); 1482 1469 list_del(&page->lru); 1483 - unlock_page(page); 1484 - page_ref_unfreeze(page, 1); 1485 1470 page->mapping = NULL; 1471 + page_ref_unfreeze(page, 1); 1486 1472 ClearPageActive(page); 1487 1473 ClearPageUnevictable(page); 1474 + unlock_page(page); 1488 1475 put_page(page); 1476 + index++; 1477 + } 1478 + while (index < end) { 1479 + clear_highpage(new_page + (index % HPAGE_PMD_NR)); 1480 + index++; 1489 1481 } 1490 1482 1491 - local_irq_disable(); 1492 - __inc_node_page_state(new_page, NR_SHMEM_THPS); 1493 - if (nr_none) { 1494 - __mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none); 1495 - __mod_node_page_state(zone->zone_pgdat, NR_SHMEM, nr_none); 1496 - } 1497 - local_irq_enable(); 1498 - 1499 - /* 1500 - * Remove pte page tables, so we can re-fault 1501 - * the page as huge. 1502 - */ 1503 - retract_page_tables(mapping, start); 1504 - 1505 - /* Everything is ready, let's unfreeze the new_page */ 1506 - set_page_dirty(new_page); 1507 1483 SetPageUptodate(new_page); 1508 - page_ref_unfreeze(new_page, HPAGE_PMD_NR); 1484 + page_ref_add(new_page, HPAGE_PMD_NR - 1); 1485 + set_page_dirty(new_page); 1509 1486 mem_cgroup_commit_charge(new_page, memcg, false, true); 1510 1487 lru_cache_add_anon(new_page); 1511 - unlock_page(new_page); 1512 1488 1489 + /* 1490 + * Remove pte page tables, so we can re-fault the page as huge. 
1491 + */ 1492 + retract_page_tables(mapping, start); 1513 1493 *hpage = NULL; 1514 1494 1515 1495 khugepaged_pages_collapsed++; 1516 1496 } else { 1517 1497 struct page *page; 1498 + 1518 1499 /* Something went wrong: roll back page cache changes */ 1519 - shmem_uncharge(mapping->host, nr_none); 1520 1500 xas_lock_irq(&xas); 1501 + mapping->nrpages -= nr_none; 1502 + shmem_uncharge(mapping->host, nr_none); 1503 + 1521 1504 xas_set(&xas, start); 1522 1505 xas_for_each(&xas, page, end - 1) { 1523 1506 page = list_first_entry_or_null(&pagelist, ··· 1542 1519 xas_store(&xas, page); 1543 1520 xas_pause(&xas); 1544 1521 xas_unlock_irq(&xas); 1545 - putback_lru_page(page); 1546 1522 unlock_page(page); 1523 + putback_lru_page(page); 1547 1524 xas_lock_irq(&xas); 1548 1525 } 1549 1526 VM_BUG_ON(nr_none); 1550 1527 xas_unlock_irq(&xas); 1551 1528 1552 - /* Unfreeze new_page, caller would take care about freeing it */ 1553 - page_ref_unfreeze(new_page, 1); 1554 1529 mem_cgroup_cancel_charge(new_page, memcg, true); 1555 - unlock_page(new_page); 1556 1530 new_page->mapping = NULL; 1557 1531 } 1532 + 1533 + unlock_page(new_page); 1558 1534 out: 1559 1535 VM_BUG_ON(!list_empty(&pagelist)); 1560 1536 /* TODO: tracepoints */
+1 -1
mm/memblock.c
··· 1179 1179 1180 1180 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP 1181 1181 /* 1182 - * Common iterator interface used to define for_each_mem_range(). 1182 + * Common iterator interface used to define for_each_mem_pfn_range(). 1183 1183 */ 1184 1184 void __init_memblock __next_mem_pfn_range(int *idx, int nid, 1185 1185 unsigned long *out_start_pfn,
+20 -12
mm/page_alloc.c
··· 4061 4061 int reserve_flags; 4062 4062 4063 4063 /* 4064 - * In the slowpath, we sanity check order to avoid ever trying to 4065 - * reclaim >= MAX_ORDER areas which will never succeed. Callers may 4066 - * be using allocators in order of preference for an area that is 4067 - * too large. 4068 - */ 4069 - if (order >= MAX_ORDER) { 4070 - WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN)); 4071 - return NULL; 4072 - } 4073 - 4074 - /* 4075 4064 * We also sanity check to catch abuse of atomic reserves being used by 4076 4065 * callers that are not in atomic context. 4077 4066 */ ··· 4352 4363 unsigned int alloc_flags = ALLOC_WMARK_LOW; 4353 4364 gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */ 4354 4365 struct alloc_context ac = { }; 4366 + 4367 + /* 4368 + * There are several places where we assume that the order value is sane 4369 + * so bail out early if the request is out of bound. 4370 + */ 4371 + if (unlikely(order >= MAX_ORDER)) { 4372 + WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN)); 4373 + return NULL; 4374 + } 4355 4375 4356 4376 gfp_mask &= gfp_allowed_mask; 4357 4377 alloc_mask = gfp_mask; ··· 5813 5815 unsigned long size) 5814 5816 { 5815 5817 struct pglist_data *pgdat = zone->zone_pgdat; 5818 + int zone_idx = zone_idx(zone) + 1; 5816 5819 5817 - pgdat->nr_zones = zone_idx(zone) + 1; 5820 + if (zone_idx > pgdat->nr_zones) 5821 + pgdat->nr_zones = zone_idx; 5818 5822 5819 5823 zone->zone_start_pfn = zone_start_pfn; 5820 5824 ··· 7787 7787 7788 7788 if (PageReserved(page)) 7789 7789 goto unmovable; 7790 + 7791 + /* 7792 + * If the zone is movable and we have ruled out all reserved 7793 + * pages then it should be reasonably safe to assume the rest 7794 + * is movable. 7795 + */ 7796 + if (zone_idx(zone) == ZONE_MOVABLE) 7797 + continue; 7790 7798 7791 7799 /* 7792 7800 * Hugepages are not in LRU lists, but they're movable.
+3 -10
mm/rmap.c
··· 1627 1627 address + PAGE_SIZE); 1628 1628 } else { 1629 1629 /* 1630 - * We should not need to notify here as we reach this 1631 - * case only from freeze_page() itself only call from 1632 - * split_huge_page_to_list() so everything below must 1633 - * be true: 1634 - * - page is not anonymous 1635 - * - page is locked 1636 - * 1637 - * So as it is a locked file back page thus it can not 1638 - * be remove from the page cache and replace by a new 1639 - * page before mmu_notifier_invalidate_range_end so no 1630 + * This is a locked file-backed page, thus it cannot 1631 + * be removed from the page cache and replaced by a new 1632 + * page before mmu_notifier_invalidate_range_end, so no 1640 1633 * concurrent thread might update its page table to 1641 1634 * point at new page while a device still is using this 1642 1635 * page.
+38 -9
mm/shmem.c
··· 297 297 if (!shmem_inode_acct_block(inode, pages)) 298 298 return false; 299 299 300 + /* nrpages adjustment first, then shmem_recalc_inode() when balanced */ 301 + inode->i_mapping->nrpages += pages; 302 + 300 303 spin_lock_irqsave(&info->lock, flags); 301 304 info->alloced += pages; 302 305 inode->i_blocks += pages * BLOCKS_PER_PAGE; 303 306 shmem_recalc_inode(inode); 304 307 spin_unlock_irqrestore(&info->lock, flags); 305 - inode->i_mapping->nrpages += pages; 306 308 307 309 return true; 308 310 } ··· 313 311 { 314 312 struct shmem_inode_info *info = SHMEM_I(inode); 315 313 unsigned long flags; 314 + 315 + /* nrpages adjustment done by __delete_from_page_cache() or caller */ 316 316 317 317 spin_lock_irqsave(&info->lock, flags); 318 318 info->alloced -= pages; ··· 1513 1509 { 1514 1510 struct page *oldpage, *newpage; 1515 1511 struct address_space *swap_mapping; 1512 + swp_entry_t entry; 1516 1513 pgoff_t swap_index; 1517 1514 int error; 1518 1515 1519 1516 oldpage = *pagep; 1520 - swap_index = page_private(oldpage); 1517 + entry.val = page_private(oldpage); 1518 + swap_index = swp_offset(entry); 1521 1519 swap_mapping = page_mapping(oldpage); 1522 1520 1523 1521 /* ··· 1538 1532 __SetPageLocked(newpage); 1539 1533 __SetPageSwapBacked(newpage); 1540 1534 SetPageUptodate(newpage); 1541 - set_page_private(newpage, swap_index); 1535 + set_page_private(newpage, entry.val); 1542 1536 SetPageSwapCache(newpage); 1543 1537 1544 1538 /* ··· 2220 2214 struct page *page; 2221 2215 pte_t _dst_pte, *dst_pte; 2222 2216 int ret; 2217 + pgoff_t offset, max_off; 2223 2218 2224 2219 ret = -ENOMEM; 2225 2220 if (!shmem_inode_acct_block(inode, 1)) ··· 2243 2236 *pagep = page; 2244 2237 shmem_inode_unacct_blocks(inode, 1); 2245 2238 /* don't free the page */ 2246 - return -EFAULT; 2239 + return -ENOENT; 2247 2240 } 2248 2241 } else { /* mfill_zeropage_atomic */ 2249 2242 clear_highpage(page); ··· 2257 2250 __SetPageLocked(page); 2258 2251 __SetPageSwapBacked(page); 2259 2252 
__SetPageUptodate(page); 2253 + 2254 + ret = -EFAULT; 2255 + offset = linear_page_index(dst_vma, dst_addr); 2256 + max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE); 2257 + if (unlikely(offset >= max_off)) 2258 + goto out_release; 2260 2259 2261 2260 ret = mem_cgroup_try_charge_delay(page, dst_mm, gfp, &memcg, false); 2262 2261 if (ret) ··· 2278 2265 _dst_pte = mk_pte(page, dst_vma->vm_page_prot); 2279 2266 if (dst_vma->vm_flags & VM_WRITE) 2280 2267 _dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte)); 2268 + else { 2269 + /* 2270 + * We don't set the pte dirty if the vma has no 2271 + * VM_WRITE permission, so mark the page dirty or it 2272 + * could be freed from under us. We could do it 2273 + * unconditionally before unlock_page(), but doing it 2274 + * only if VM_WRITE is not set is faster. 2275 + */ 2276 + set_page_dirty(page); 2277 + } 2278 + 2279 + dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl); 2280 + 2281 + ret = -EFAULT; 2282 + max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE); 2283 + if (unlikely(offset >= max_off)) 2284 + goto out_release_uncharge_unlock; 2281 2285 2282 2286 ret = -EEXIST; 2283 - dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl); 2284 2287 if (!pte_none(*dst_pte)) 2285 2288 goto out_release_uncharge_unlock; 2286 2289 ··· 2314 2285 2315 2286 /* No need to invalidate - it was non-present before */ 2316 2287 update_mmu_cache(dst_vma, dst_addr, dst_pte); 2317 - unlock_page(page); 2318 2288 pte_unmap_unlock(dst_pte, ptl); 2289 + unlock_page(page); 2319 2290 ret = 0; 2320 2291 out: 2321 2292 return ret; 2322 2293 out_release_uncharge_unlock: 2323 2294 pte_unmap_unlock(dst_pte, ptl); 2295 + ClearPageDirty(page); 2296 + delete_from_page_cache(page); 2324 2297 out_release_uncharge: 2325 2298 mem_cgroup_cancel_charge(page, memcg, false); 2326 2299 out_release: ··· 2594 2563 inode_lock(inode); 2595 2564 /* We're holding i_mutex so we can access i_size directly */ 2596 2565 2597 - if (offset < 0) 2598 - offset = 
-EINVAL; 2599 - else if (offset >= inode->i_size) 2566 + if (offset < 0 || offset >= inode->i_size) 2600 2567 offset = -ENXIO; 2601 2568 else { 2602 2569 start = offset >> PAGE_SHIFT;
+3 -3
mm/swapfile.c
··· 2813 2813 unsigned int type; 2814 2814 int i; 2815 2815 2816 - p = kzalloc(sizeof(*p), GFP_KERNEL); 2816 + p = kvzalloc(sizeof(*p), GFP_KERNEL); 2817 2817 if (!p) 2818 2818 return ERR_PTR(-ENOMEM); 2819 2819 ··· 2824 2824 } 2825 2825 if (type >= MAX_SWAPFILES) { 2826 2826 spin_unlock(&swap_lock); 2827 - kfree(p); 2827 + kvfree(p); 2828 2828 return ERR_PTR(-EPERM); 2829 2829 } 2830 2830 if (type >= nr_swapfiles) { ··· 2838 2838 smp_wmb(); 2839 2839 nr_swapfiles++; 2840 2840 } else { 2841 - kfree(p); 2841 + kvfree(p); 2842 2842 p = swap_info[type]; 2843 2843 /* 2844 2844 * Do not memset this entry: a racing procfs swap_next()
+6 -2
mm/truncate.c
··· 517 517 */ 518 518 xa_lock_irq(&mapping->i_pages); 519 519 xa_unlock_irq(&mapping->i_pages); 520 - 521 - truncate_inode_pages(mapping, 0); 522 520 } 521 + 522 + /* 523 + * Cleancache needs notification even if there are no pages or shadow 524 + * entries. 525 + */ 526 + truncate_inode_pages(mapping, 0); 523 527 } 524 528 EXPORT_SYMBOL(truncate_inode_pages_final); 525 529
+46 -16
mm/userfaultfd.c
··· 33 33 void *page_kaddr; 34 34 int ret; 35 35 struct page *page; 36 + pgoff_t offset, max_off; 37 + struct inode *inode; 36 38 37 39 if (!*pagep) { 38 40 ret = -ENOMEM; ··· 50 48 51 49 /* fallback to copy_from_user outside mmap_sem */ 52 50 if (unlikely(ret)) { 53 - ret = -EFAULT; 51 + ret = -ENOENT; 54 52 *pagep = page; 55 53 /* don't free the page */ 56 54 goto out; ··· 75 73 if (dst_vma->vm_flags & VM_WRITE) 76 74 _dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte)); 77 75 78 - ret = -EEXIST; 79 76 dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl); 77 + if (dst_vma->vm_file) { 78 + /* the shmem MAP_PRIVATE case requires checking the i_size */ 79 + inode = dst_vma->vm_file->f_inode; 80 + offset = linear_page_index(dst_vma, dst_addr); 81 + max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE); 82 + ret = -EFAULT; 83 + if (unlikely(offset >= max_off)) 84 + goto out_release_uncharge_unlock; 85 + } 86 + ret = -EEXIST; 80 87 if (!pte_none(*dst_pte)) 81 88 goto out_release_uncharge_unlock; 82 89 ··· 119 108 pte_t _dst_pte, *dst_pte; 120 109 spinlock_t *ptl; 121 110 int ret; 111 + pgoff_t offset, max_off; 112 + struct inode *inode; 122 113 123 114 _dst_pte = pte_mkspecial(pfn_pte(my_zero_pfn(dst_addr), 124 115 dst_vma->vm_page_prot)); 125 - ret = -EEXIST; 126 116 dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl); 117 + if (dst_vma->vm_file) { 118 + /* the shmem MAP_PRIVATE case requires checking the i_size */ 119 + inode = dst_vma->vm_file->f_inode; 120 + offset = linear_page_index(dst_vma, dst_addr); 121 + max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE); 122 + ret = -EFAULT; 123 + if (unlikely(offset >= max_off)) 124 + goto out_unlock; 125 + } 126 + ret = -EEXIST; 127 127 if (!pte_none(*dst_pte)) 128 128 goto out_unlock; 129 129 set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte); ··· 227 205 if (!dst_vma || !is_vm_hugetlb_page(dst_vma)) 228 206 goto out_unlock; 229 207 /* 230 - * Only allow __mcopy_atomic_hugetlb on userfaultfd 231 - * 
registered ranges. 208 + * Check the vma is registered in uffd, this is 209 + * required to enforce the VM_MAYWRITE check done at 210 + * uffd registration time. 232 211 */ 233 212 if (!dst_vma->vm_userfaultfd_ctx.ctx) 234 213 goto out_unlock; ··· 297 274 298 275 cond_resched(); 299 276 300 - if (unlikely(err == -EFAULT)) { 277 + if (unlikely(err == -ENOENT)) { 301 278 up_read(&dst_mm->mmap_sem); 302 279 BUG_ON(!page); 303 280 ··· 403 380 { 404 381 ssize_t err; 405 382 406 - if (vma_is_anonymous(dst_vma)) { 383 + /* 384 + * The normal page fault path for a shmem will invoke the 385 + * fault, fill the hole in the file and COW it right away. The 386 + * result generates plain anonymous memory. So when we are 387 + * asked to fill an hole in a MAP_PRIVATE shmem mapping, we'll 388 + * generate anonymous memory directly without actually filling 389 + * the hole. For the MAP_PRIVATE case the robustness check 390 + * only happens in the pagetable (to verify it's still none) 391 + * and not in the radix tree. 392 + */ 393 + if (!(dst_vma->vm_flags & VM_SHARED)) { 407 394 if (!zeropage) 408 395 err = mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma, 409 396 dst_addr, src_addr, page); ··· 482 449 if (!dst_vma) 483 450 goto out_unlock; 484 451 /* 485 - * Be strict and only allow __mcopy_atomic on userfaultfd 486 - * registered ranges to prevent userland errors going 487 - * unnoticed. As far as the VM consistency is concerned, it 488 - * would be perfectly safe to remove this check, but there's 489 - * no useful usage for __mcopy_atomic ouside of userfaultfd 490 - * registered ranges. This is after all why these are ioctls 491 - * belonging to the userfaultfd and not syscalls. 452 + * Check the vma is registered in uffd, this is required to 453 + * enforce the VM_MAYWRITE check done at uffd registration 454 + * time. 492 455 */ 493 456 if (!dst_vma->vm_userfaultfd_ctx.ctx) 494 457 goto out_unlock; ··· 518 489 * dst_vma. 
519 490 */ 520 491 err = -ENOMEM; 521 - if (vma_is_anonymous(dst_vma) && unlikely(anon_vma_prepare(dst_vma))) 492 + if (!(dst_vma->vm_flags & VM_SHARED) && 493 + unlikely(anon_vma_prepare(dst_vma))) 522 494 goto out_unlock; 523 495 524 496 while (src_addr < src_start + len) { ··· 560 530 src_addr, &page, zeropage); 561 531 cond_resched(); 562 532 563 - if (unlikely(err == -EFAULT)) { 533 + if (unlikely(err == -ENOENT)) { 564 534 void *page_kaddr; 565 535 566 536 up_read(&dst_mm->mmap_sem);
+4 -3
mm/vmstat.c
··· 1827 1827 1828 1828 /* 1829 1829 * The fast way of checking if there are any vmstat diffs. 1830 - * This works because the diffs are byte sized items. 1831 1830 */ 1832 - if (memchr_inv(p->vm_stat_diff, 0, NR_VM_ZONE_STAT_ITEMS)) 1831 + if (memchr_inv(p->vm_stat_diff, 0, NR_VM_ZONE_STAT_ITEMS * 1832 + sizeof(p->vm_stat_diff[0]))) 1833 1833 return true; 1834 1834 #ifdef CONFIG_NUMA 1835 - if (memchr_inv(p->vm_numa_stat_diff, 0, NR_VM_NUMA_STAT_ITEMS)) 1835 + if (memchr_inv(p->vm_numa_stat_diff, 0, NR_VM_NUMA_STAT_ITEMS * 1836 + sizeof(p->vm_numa_stat_diff[0]))) 1836 1837 return true; 1837 1838 #endif 1838 1839 }
+63 -40
mm/z3fold.c
··· 99 99 #define NCHUNKS ((PAGE_SIZE - ZHDR_SIZE_ALIGNED) >> CHUNK_SHIFT) 100 100 101 101 #define BUDDY_MASK (0x3) 102 + #define BUDDY_SHIFT 2 102 103 103 104 /** 104 105 * struct z3fold_pool - stores metadata for each z3fold pool ··· 146 145 MIDDLE_CHUNK_MAPPED, 147 146 NEEDS_COMPACTING, 148 147 PAGE_STALE, 149 - UNDER_RECLAIM 148 + PAGE_CLAIMED, /* by either reclaim or free */ 150 149 }; 151 150 152 151 /***************** ··· 175 174 clear_bit(MIDDLE_CHUNK_MAPPED, &page->private); 176 175 clear_bit(NEEDS_COMPACTING, &page->private); 177 176 clear_bit(PAGE_STALE, &page->private); 178 - clear_bit(UNDER_RECLAIM, &page->private); 177 + clear_bit(PAGE_CLAIMED, &page->private); 179 178 180 179 spin_lock_init(&zhdr->page_lock); 181 180 kref_init(&zhdr->refcount); ··· 224 223 unsigned long handle; 225 224 226 225 handle = (unsigned long)zhdr; 227 - if (bud != HEADLESS) 228 - handle += (bud + zhdr->first_num) & BUDDY_MASK; 226 + if (bud != HEADLESS) { 227 + handle |= (bud + zhdr->first_num) & BUDDY_MASK; 228 + if (bud == LAST) 229 + handle |= (zhdr->last_chunks << BUDDY_SHIFT); 230 + } 229 231 return handle; 230 232 } 231 233 ··· 236 232 static struct z3fold_header *handle_to_z3fold_header(unsigned long handle) 237 233 { 238 234 return (struct z3fold_header *)(handle & PAGE_MASK); 235 + } 236 + 237 + /* only for LAST bud, returns zero otherwise */ 238 + static unsigned short handle_to_chunks(unsigned long handle) 239 + { 240 + return (handle & ~PAGE_MASK) >> BUDDY_SHIFT; 239 241 } 240 242 241 243 /* ··· 730 720 page = virt_to_page(zhdr); 731 721 732 722 if (test_bit(PAGE_HEADLESS, &page->private)) { 733 - /* HEADLESS page stored */ 734 - bud = HEADLESS; 735 - } else { 736 - z3fold_page_lock(zhdr); 737 - bud = handle_to_buddy(handle); 738 - 739 - switch (bud) { 740 - case FIRST: 741 - zhdr->first_chunks = 0; 742 - break; 743 - case MIDDLE: 744 - zhdr->middle_chunks = 0; 745 - zhdr->start_middle = 0; 746 - break; 747 - case LAST: 748 - zhdr->last_chunks = 0; 749 - break; 
750 - default: 751 - pr_err("%s: unknown bud %d\n", __func__, bud); 752 - WARN_ON(1); 753 - z3fold_page_unlock(zhdr); 754 - return; 723 + /* if a headless page is under reclaim, just leave. 724 + * NB: we use test_and_set_bit for a reason: if the bit 725 + * has not been set before, we release this page 726 + * immediately so we don't care about its value any more. 727 + */ 728 + if (!test_and_set_bit(PAGE_CLAIMED, &page->private)) { 729 + spin_lock(&pool->lock); 730 + list_del(&page->lru); 731 + spin_unlock(&pool->lock); 732 + free_z3fold_page(page); 733 + atomic64_dec(&pool->pages_nr); 755 734 } 735 + return; 756 736 } 757 737 758 - if (bud == HEADLESS) { 759 - spin_lock(&pool->lock); 760 - list_del(&page->lru); 761 - spin_unlock(&pool->lock); 762 - free_z3fold_page(page); 763 - atomic64_dec(&pool->pages_nr); 738 + /* Non-headless case */ 739 + z3fold_page_lock(zhdr); 740 + bud = handle_to_buddy(handle); 741 + 742 + switch (bud) { 743 + case FIRST: 744 + zhdr->first_chunks = 0; 745 + break; 746 + case MIDDLE: 747 + zhdr->middle_chunks = 0; 748 + break; 749 + case LAST: 750 + zhdr->last_chunks = 0; 751 + break; 752 + default: 753 + pr_err("%s: unknown bud %d\n", __func__, bud); 754 + WARN_ON(1); 755 + z3fold_page_unlock(zhdr); 764 756 return; 765 757 } 766 758 ··· 770 758 atomic64_dec(&pool->pages_nr); 771 759 return; 772 760 } 773 - if (test_bit(UNDER_RECLAIM, &page->private)) { 761 + if (test_bit(PAGE_CLAIMED, &page->private)) { 774 762 z3fold_page_unlock(zhdr); 775 763 return; 776 764 } ··· 848 836 } 849 837 list_for_each_prev(pos, &pool->lru) { 850 838 page = list_entry(pos, struct page, lru); 851 - if (test_bit(PAGE_HEADLESS, &page->private)) 852 - /* candidate found */ 853 - break; 839 + 840 + /* this bit could have been set by free, in which case 841 + * we pass over to the next page in the pool. 
842 + */ 843 + if (test_and_set_bit(PAGE_CLAIMED, &page->private)) 844 + continue; 854 845 855 846 zhdr = page_address(page); 856 - if (!z3fold_page_trylock(zhdr)) 847 + if (test_bit(PAGE_HEADLESS, &page->private)) 848 + break; 849 + 850 + if (!z3fold_page_trylock(zhdr)) { 851 + zhdr = NULL; 857 852 continue; /* can't evict at this point */ 853 + } 858 854 kref_get(&zhdr->refcount); 859 855 list_del_init(&zhdr->buddy); 860 856 zhdr->cpu = -1; 861 - set_bit(UNDER_RECLAIM, &page->private); 862 857 break; 863 858 } 859 + 860 + if (!zhdr) 861 + break; 864 862 865 863 list_del_init(&page->lru); 866 864 spin_unlock(&pool->lock); ··· 920 898 if (test_bit(PAGE_HEADLESS, &page->private)) { 921 899 if (ret == 0) { 922 900 free_z3fold_page(page); 901 + atomic64_dec(&pool->pages_nr); 923 902 return 0; 924 903 } 925 904 spin_lock(&pool->lock); ··· 928 905 spin_unlock(&pool->lock); 929 906 } else { 930 907 z3fold_page_lock(zhdr); 931 - clear_bit(UNDER_RECLAIM, &page->private); 908 + clear_bit(PAGE_CLAIMED, &page->private); 932 909 if (kref_put(&zhdr->refcount, 933 910 release_z3fold_page_locked)) { 934 911 atomic64_dec(&pool->pages_nr); ··· 987 964 set_bit(MIDDLE_CHUNK_MAPPED, &page->private); 988 965 break; 989 966 case LAST: 990 - addr += PAGE_SIZE - (zhdr->last_chunks << CHUNK_SHIFT); 967 + addr += PAGE_SIZE - (handle_to_chunks(handle) << CHUNK_SHIFT); 991 968 break; 992 969 default: 993 970 pr_err("unknown buddy id %d\n", buddy);
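The z3fold change above stops `z3fold_map()` from reading `zhdr->last_chunks` by packing the chunk count into the handle itself: the header is page-aligned, so the low bits are free, with bits 0-1 carrying the buddy id (`BUDDY_MASK`) and, for `LAST` buddies, the chunk count sitting above `BUDDY_SHIFT`. A minimal sketch of that layout (constants mirror the patch, but the `first_num` rotation from the real `encode_handle()` is omitted here for clarity):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative handle layout, assuming a 4K page; the kernel also
 * mixes in zhdr->first_num, which this sketch drops. */
#define PAGE_SHIFT   12
#define PAGE_SIZE    (1UL << PAGE_SHIFT)
#define PAGE_MASK    (~(PAGE_SIZE - 1))
#define BUDDY_MASK   0x3UL
#define BUDDY_SHIFT  2

enum buddy { HEADLESS = 0, FIRST, MIDDLE, LAST };

static unsigned long encode_handle(unsigned long zhdr, enum buddy bud,
				   unsigned short last_chunks)
{
	unsigned long handle = zhdr;	/* page-aligned header address */

	if (bud != HEADLESS) {
		handle |= bud & BUDDY_MASK;	/* buddy id in bits 0-1 */
		if (bud == LAST)		/* chunk count above it */
			handle |= (unsigned long)last_chunks << BUDDY_SHIFT;
	}
	return handle;
}

static unsigned long handle_to_header(unsigned long handle)
{
	return handle & PAGE_MASK;
}

/* only meaningful for LAST buddies, returns zero otherwise */
static unsigned short handle_to_chunks(unsigned long handle)
{
	return (handle & ~PAGE_MASK) >> BUDDY_SHIFT;
}
```

With the count carried in the handle, a reader holding only the handle can compute the mapping offset without taking the page lock to inspect the header.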
+4 -2
net/batman-adv/bat_v_elp.c
··· 352 352 */ 353 353 int batadv_v_elp_iface_enable(struct batadv_hard_iface *hard_iface) 354 354 { 355 + static const size_t tvlv_padding = sizeof(__be32); 355 356 struct batadv_elp_packet *elp_packet; 356 357 unsigned char *elp_buff; 357 358 u32 random_seqno; 358 359 size_t size; 359 360 int res = -ENOMEM; 360 361 361 - size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN; 362 + size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN + tvlv_padding; 362 363 hard_iface->bat_v.elp_skb = dev_alloc_skb(size); 363 364 if (!hard_iface->bat_v.elp_skb) 364 365 goto out; 365 366 366 367 skb_reserve(hard_iface->bat_v.elp_skb, ETH_HLEN + NET_IP_ALIGN); 367 - elp_buff = skb_put_zero(hard_iface->bat_v.elp_skb, BATADV_ELP_HLEN); 368 + elp_buff = skb_put_zero(hard_iface->bat_v.elp_skb, 369 + BATADV_ELP_HLEN + tvlv_padding); 368 370 elp_packet = (struct batadv_elp_packet *)elp_buff; 369 371 370 372 elp_packet->packet_type = BATADV_ELP;
+1 -1
net/batman-adv/fragmentation.c
··· 275 275 kfree(entry); 276 276 277 277 packet = (struct batadv_frag_packet *)skb_out->data; 278 - size = ntohs(packet->total_size); 278 + size = ntohs(packet->total_size) + hdr_size; 279 279 280 280 /* Make room for the rest of the fragments. */ 281 281 if (pskb_expand_head(skb_out, 0, size - skb_out->len, GFP_ATOMIC) < 0) {
+7
net/bridge/br_private.h
··· 102 102 struct metadata_dst *tunnel_dst; 103 103 }; 104 104 105 + /* private vlan flags */ 106 + enum { 107 + BR_VLFLAG_PER_PORT_STATS = BIT(0), 108 + }; 109 + 105 110 /** 106 111 * struct net_bridge_vlan - per-vlan entry 107 112 * 108 113 * @vnode: rhashtable member 109 114 * @vid: VLAN id 110 115 * @flags: bridge vlan flags 116 + * @priv_flags: private (in-kernel) bridge vlan flags 111 117 * @stats: per-cpu VLAN statistics 112 118 * @br: if MASTER flag set, this points to a bridge struct 113 119 * @port: if MASTER flag unset, this points to a port struct ··· 133 127 struct rhash_head tnode; 134 128 u16 vid; 135 129 u16 flags; 130 + u16 priv_flags; 136 131 struct br_vlan_stats __percpu *stats; 137 132 union { 138 133 struct net_bridge *br;
+2 -1
net/bridge/br_vlan.c
··· 197 197 v = container_of(rcu, struct net_bridge_vlan, rcu); 198 198 WARN_ON(br_vlan_is_master(v)); 199 199 /* if we had per-port stats configured then free them here */ 200 - if (v->brvlan->stats != v->stats) 200 + if (v->priv_flags & BR_VLFLAG_PER_PORT_STATS) 201 201 free_percpu(v->stats); 202 202 v->stats = NULL; 203 203 kfree(v); ··· 264 264 err = -ENOMEM; 265 265 goto out_filt; 266 266 } 267 + v->priv_flags |= BR_VLFLAG_PER_PORT_STATS; 267 268 } else { 268 269 v->stats = masterv->stats; 269 270 }
+9 -8
net/can/raw.c
··· 745 745 } else 746 746 ifindex = ro->ifindex; 747 747 748 - if (ro->fd_frames) { 749 - if (unlikely(size != CANFD_MTU && size != CAN_MTU)) 750 - return -EINVAL; 751 - } else { 752 - if (unlikely(size != CAN_MTU)) 753 - return -EINVAL; 754 - } 755 - 756 748 dev = dev_get_by_index(sock_net(sk), ifindex); 757 749 if (!dev) 758 750 return -ENXIO; 751 + 752 + err = -EINVAL; 753 + if (ro->fd_frames && dev->mtu == CANFD_MTU) { 754 + if (unlikely(size != CANFD_MTU && size != CAN_MTU)) 755 + goto put_dev; 756 + } else { 757 + if (unlikely(size != CAN_MTU)) 758 + goto put_dev; 759 + } 759 760 760 761 skb = sock_alloc_send_skb(sk, size + sizeof(struct can_skb_priv), 761 762 msg->msg_flags & MSG_DONTWAIT, &err);
+9 -3
net/ceph/messenger.c
··· 580 580 struct bio_vec bvec; 581 581 int ret; 582 582 583 - /* sendpage cannot properly handle pages with page_count == 0, 584 - * we need to fallback to sendmsg if that's the case */ 585 - if (page_count(page) >= 1) 583 + /* 584 + * sendpage cannot properly handle pages with page_count == 0, 585 + * we need to fall back to sendmsg if that's the case. 586 + * 587 + * Same goes for slab pages: skb_can_coalesce() allows 588 + * coalescing neighboring slab objects into a single frag which 589 + * triggers one of hardened usercopy checks. 590 + */ 591 + if (page_count(page) >= 1 && !PageSlab(page)) 586 592 return __ceph_tcp_sendpage(sock, page, offset, size, more); 587 593 588 594 bvec.bv_page = page;
+10 -3
net/core/dev.c
··· 3272 3272 } 3273 3273 3274 3274 skb = next; 3275 - if (netif_xmit_stopped(txq) && skb) { 3275 + if (netif_tx_queue_stopped(txq) && skb) { 3276 3276 rc = NETDEV_TX_BUSY; 3277 3277 break; 3278 3278 } ··· 5655 5655 skb->vlan_tci = 0; 5656 5656 skb->dev = napi->dev; 5657 5657 skb->skb_iif = 0; 5658 + 5659 + /* eth_type_trans() assumes pkt_type is PACKET_HOST */ 5660 + skb->pkt_type = PACKET_HOST; 5661 + 5658 5662 skb->encapsulation = 0; 5659 5663 skb_shinfo(skb)->gso_type = 0; 5660 5664 skb->truesize = SKB_TRUESIZE(skb_end_offset(skb)); ··· 5970 5966 if (work_done) 5971 5967 timeout = n->dev->gro_flush_timeout; 5972 5968 5969 + /* When the NAPI instance uses a timeout and keeps postponing 5970 + * it, we need to bound somehow the time packets are kept in 5971 + * the GRO layer 5972 + */ 5973 + napi_gro_flush(n, !!timeout); 5973 5974 if (timeout) 5974 5975 hrtimer_start(&n->timer, ns_to_ktime(timeout), 5975 5976 HRTIMER_MODE_REL_PINNED); 5976 - else 5977 - napi_gro_flush(n, false); 5978 5977 } 5979 5978 if (unlikely(!list_empty(&n->poll_list))) { 5980 5979 /* If n->poll_list is not empty, we need to mask irqs */
+2 -3
net/core/filter.c
··· 4852 4852 } else { 4853 4853 struct in6_addr *src6 = (struct in6_addr *)&tuple->ipv6.saddr; 4854 4854 struct in6_addr *dst6 = (struct in6_addr *)&tuple->ipv6.daddr; 4855 - u16 hnum = ntohs(tuple->ipv6.dport); 4856 4855 int sdif = inet6_sdif(skb); 4857 4856 4858 4857 if (proto == IPPROTO_TCP) 4859 4858 sk = __inet6_lookup(net, &tcp_hashinfo, skb, 0, 4860 4859 src6, tuple->ipv6.sport, 4861 - dst6, hnum, 4860 + dst6, ntohs(tuple->ipv6.dport), 4862 4861 dif, sdif, &refcounted); 4863 4862 else if (likely(ipv6_bpf_stub)) 4864 4863 sk = ipv6_bpf_stub->udp6_lib_lookup(net, 4865 4864 src6, tuple->ipv6.sport, 4866 - dst6, hnum, 4865 + dst6, tuple->ipv6.dport, 4867 4866 dif, sdif, 4868 4867 &udp_table, skb); 4869 4868 #endif
+2 -2
net/core/flow_dissector.c
··· 1166 1166 break; 1167 1167 } 1168 1168 1169 - if (dissector_uses_key(flow_dissector, 1170 - FLOW_DISSECTOR_KEY_PORTS)) { 1169 + if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_PORTS) && 1170 + !(key_control->flags & FLOW_DIS_IS_FRAGMENT)) { 1171 1171 key_ports = skb_flow_dissector_target(flow_dissector, 1172 1172 FLOW_DISSECTOR_KEY_PORTS, 1173 1173 target_container);
+2 -1
net/core/netpoll.c
··· 717 717 718 718 read_lock_bh(&idev->lock); 719 719 list_for_each_entry(ifp, &idev->addr_list, if_list) { 720 - if (ipv6_addr_type(&ifp->addr) & IPV6_ADDR_LINKLOCAL) 720 + if (!!(ipv6_addr_type(&ifp->addr) & IPV6_ADDR_LINKLOCAL) != 721 + !!(ipv6_addr_type(&np->remote_ip.in6) & IPV6_ADDR_LINKLOCAL)) 721 722 continue; 722 723 np->local_ip.in6 = ifp->addr; 723 724 err = 0;
+1 -1
net/core/rtnetlink.c
··· 3367 3367 cb->seq = 0; 3368 3368 } 3369 3369 ret = dumpit(skb, cb); 3370 - if (ret < 0) 3370 + if (ret) 3371 3371 break; 3372 3372 } 3373 3373 cb->family = idx;
+7
net/core/skbuff.c
··· 4854 4854 nf_reset(skb); 4855 4855 nf_reset_trace(skb); 4856 4856 4857 + #ifdef CONFIG_NET_SWITCHDEV 4858 + skb->offload_fwd_mark = 0; 4859 + skb->offload_mr_fwd_mark = 0; 4860 + #endif 4861 + 4857 4862 if (!xnet) 4858 4863 return; 4859 4864 ··· 4948 4943 * - L2+L3+L4+payload size (e.g. sanity check before passing to driver) 4949 4944 * 4950 4945 * This is a helper to do that correctly considering GSO_BY_FRAGS. 4946 + * 4947 + * @skb: GSO skb 4951 4948 * 4952 4949 * @seg_len: The segmented length (from skb_gso_*_seglen). In the 4953 4950 * GSO_BY_FRAGS case this will be [header sizes + GSO_BY_FRAGS].
+1
net/core/sock.c
··· 3279 3279 3280 3280 #ifdef CONFIG_INET 3281 3281 if (family == AF_INET && 3282 + protocol != IPPROTO_RAW && 3282 3283 !rcu_access_pointer(inet_protos[protocol])) 3283 3284 return -ENOENT; 3284 3285 #endif
+15 -14
net/ipv4/inet_fragment.c
··· 178 178 } 179 179 180 180 static struct inet_frag_queue *inet_frag_create(struct netns_frags *nf, 181 - void *arg) 181 + void *arg, 182 + struct inet_frag_queue **prev) 182 183 { 183 184 struct inet_frags *f = nf->f; 184 185 struct inet_frag_queue *q; 185 - int err; 186 186 187 187 q = inet_frag_alloc(nf, f, arg); 188 - if (!q) 188 + if (!q) { 189 + *prev = ERR_PTR(-ENOMEM); 189 190 return NULL; 190 - 191 + } 191 192 mod_timer(&q->timer, jiffies + nf->timeout); 192 193 193 - err = rhashtable_insert_fast(&nf->rhashtable, &q->node, 194 - f->rhash_params); 195 - if (err < 0) { 194 + *prev = rhashtable_lookup_get_insert_key(&nf->rhashtable, &q->key, 195 + &q->node, f->rhash_params); 196 + if (*prev) { 196 197 q->flags |= INET_FRAG_COMPLETE; 197 198 inet_frag_kill(q); 198 199 inet_frag_destroy(q); ··· 205 204 /* TODO : call from rcu_read_lock() and no longer use refcount_inc_not_zero() */ 206 205 struct inet_frag_queue *inet_frag_find(struct netns_frags *nf, void *key) 207 206 { 208 - struct inet_frag_queue *fq; 207 + struct inet_frag_queue *fq = NULL, *prev; 209 208 210 209 if (!nf->high_thresh || frag_mem_limit(nf) > nf->high_thresh) 211 210 return NULL; 212 211 213 212 rcu_read_lock(); 214 213 215 - fq = rhashtable_lookup(&nf->rhashtable, key, nf->f->rhash_params); 216 - if (fq) { 214 + prev = rhashtable_lookup(&nf->rhashtable, key, nf->f->rhash_params); 215 + if (!prev) 216 + fq = inet_frag_create(nf, key, &prev); 217 + if (prev && !IS_ERR(prev)) { 218 + fq = prev; 217 219 if (!refcount_inc_not_zero(&fq->refcnt)) 218 220 fq = NULL; 219 - rcu_read_unlock(); 220 - return fq; 221 221 } 222 222 rcu_read_unlock(); 223 - 224 - return inet_frag_create(nf, key); 223 + return fq; 225 224 } 226 225 EXPORT_SYMBOL(inet_frag_find);
+8 -4
net/ipv4/ip_fragment.c
··· 722 722 if (ip_is_fragment(&iph)) { 723 723 skb = skb_share_check(skb, GFP_ATOMIC); 724 724 if (skb) { 725 - if (!pskb_may_pull(skb, netoff + iph.ihl * 4)) 726 - return skb; 727 - if (pskb_trim_rcsum(skb, netoff + len)) 728 - return skb; 725 + if (!pskb_may_pull(skb, netoff + iph.ihl * 4)) { 726 + kfree_skb(skb); 727 + return NULL; 728 + } 729 + if (pskb_trim_rcsum(skb, netoff + len)) { 730 + kfree_skb(skb); 731 + return NULL; 732 + } 729 733 memset(IPCB(skb), 0, sizeof(struct inet_skb_parm)); 730 734 if (ip_defrag(net, skb, user)) 731 735 return NULL;
+2 -1
net/ipv4/ip_output.c
··· 939 939 unsigned int fraglen; 940 940 unsigned int fraggap; 941 941 unsigned int alloclen; 942 - unsigned int pagedlen = 0; 942 + unsigned int pagedlen; 943 943 struct sk_buff *skb_prev; 944 944 alloc_new_skb: 945 945 skb_prev = skb; ··· 956 956 if (datalen > mtu - fragheaderlen) 957 957 datalen = maxfraglen - fragheaderlen; 958 958 fraglen = datalen + fragheaderlen; 959 + pagedlen = 0; 959 960 960 961 if ((flags & MSG_MORE) && 961 962 !(rt->dst.dev->features&NETIF_F_SG))
+3 -3
net/ipv4/ip_sockglue.c
··· 1246 1246 return -ENOPROTOOPT; 1247 1247 1248 1248 err = do_ip_setsockopt(sk, level, optname, optval, optlen); 1249 - #ifdef CONFIG_BPFILTER 1249 + #if IS_ENABLED(CONFIG_BPFILTER_UMH) 1250 1250 if (optname >= BPFILTER_IPT_SO_SET_REPLACE && 1251 1251 optname < BPFILTER_IPT_SET_MAX) 1252 1252 err = bpfilter_ip_set_sockopt(sk, optname, optval, optlen); ··· 1559 1559 int err; 1560 1560 1561 1561 err = do_ip_getsockopt(sk, level, optname, optval, optlen, 0); 1562 - #ifdef CONFIG_BPFILTER 1562 + #if IS_ENABLED(CONFIG_BPFILTER_UMH) 1563 1563 if (optname >= BPFILTER_IPT_SO_GET_INFO && 1564 1564 optname < BPFILTER_IPT_GET_MAX) 1565 1565 err = bpfilter_ip_get_sockopt(sk, optname, optval, optlen); ··· 1596 1596 err = do_ip_getsockopt(sk, level, optname, optval, optlen, 1597 1597 MSG_CMSG_COMPAT); 1598 1598 1599 - #ifdef CONFIG_BPFILTER 1599 + #if IS_ENABLED(CONFIG_BPFILTER_UMH) 1600 1600 if (optname >= BPFILTER_IPT_SO_GET_INFO && 1601 1601 optname < BPFILTER_IPT_GET_MAX) 1602 1602 err = bpfilter_ip_get_sockopt(sk, optname, optval, optlen);
+1 -1
net/ipv4/ip_tunnel_core.c
··· 80 80 81 81 iph->version = 4; 82 82 iph->ihl = sizeof(struct iphdr) >> 2; 83 - iph->frag_off = df; 83 + iph->frag_off = ip_mtu_locked(&rt->dst) ? 0 : df; 84 84 iph->protocol = proto; 85 85 iph->tos = tos; 86 86 iph->daddr = dst;
+5 -2
net/ipv4/netfilter/ipt_MASQUERADE.c
··· 81 81 int ret; 82 82 83 83 ret = xt_register_target(&masquerade_tg_reg); 84 + if (ret) 85 + return ret; 84 86 85 - if (ret == 0) 86 - nf_nat_masquerade_ipv4_register_notifier(); 87 + ret = nf_nat_masquerade_ipv4_register_notifier(); 88 + if (ret) 89 + xt_unregister_target(&masquerade_tg_reg); 87 90 88 91 return ret; 89 92 }
+30 -8
net/ipv4/netfilter/nf_nat_masquerade_ipv4.c
··· 147 147 .notifier_call = masq_inet_event, 148 148 }; 149 149 150 - static atomic_t masquerade_notifier_refcount = ATOMIC_INIT(0); 150 + static int masq_refcnt; 151 + static DEFINE_MUTEX(masq_mutex); 151 152 152 - void nf_nat_masquerade_ipv4_register_notifier(void) 153 + int nf_nat_masquerade_ipv4_register_notifier(void) 153 154 { 155 + int ret = 0; 156 + 157 + mutex_lock(&masq_mutex); 154 158 /* check if the notifier was already set */ 155 - if (atomic_inc_return(&masquerade_notifier_refcount) > 1) 156 - return; 159 + if (++masq_refcnt > 1) 160 + goto out_unlock; 157 161 158 162 /* Register for device down reports */ 159 - register_netdevice_notifier(&masq_dev_notifier); 163 + ret = register_netdevice_notifier(&masq_dev_notifier); 164 + if (ret) 165 + goto err_dec; 160 166 /* Register IP address change reports */ 161 - register_inetaddr_notifier(&masq_inet_notifier); 167 + ret = register_inetaddr_notifier(&masq_inet_notifier); 168 + if (ret) 169 + goto err_unregister; 170 + 171 + mutex_unlock(&masq_mutex); 172 + return ret; 173 + 174 + err_unregister: 175 + unregister_netdevice_notifier(&masq_dev_notifier); 176 + err_dec: 177 + masq_refcnt--; 178 + out_unlock: 179 + mutex_unlock(&masq_mutex); 180 + return ret; 162 181 } 163 182 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4_register_notifier); 164 183 165 184 void nf_nat_masquerade_ipv4_unregister_notifier(void) 166 185 { 186 + mutex_lock(&masq_mutex); 167 187 /* check if the notifier still has clients */ 168 - if (atomic_dec_return(&masquerade_notifier_refcount) > 0) 169 - return; 188 + if (--masq_refcnt > 0) 189 + goto out_unlock; 170 190 171 191 unregister_netdevice_notifier(&masq_dev_notifier); 172 192 unregister_inetaddr_notifier(&masq_inet_notifier); 193 + out_unlock: 194 + mutex_unlock(&masq_mutex); 173 195 } 174 196 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4_unregister_notifier);
+3 -1
net/ipv4/netfilter/nft_masq_ipv4.c
··· 69 69 if (ret < 0) 70 70 return ret; 71 71 72 - nf_nat_masquerade_ipv4_register_notifier(); 72 + ret = nf_nat_masquerade_ipv4_register_notifier(); 73 + if (ret) 74 + nft_unregister_expr(&nft_masq_ipv4_type); 73 75 74 76 return ret; 75 77 }
+23 -8
net/ipv4/tcp_input.c
··· 579 579 u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr; 580 580 u32 delta_us; 581 581 582 - if (!delta) 583 - delta = 1; 584 - delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ); 585 - tcp_rcv_rtt_update(tp, delta_us, 0); 582 + if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) { 583 + if (!delta) 584 + delta = 1; 585 + delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ); 586 + tcp_rcv_rtt_update(tp, delta_us, 0); 587 + } 586 588 } 587 589 } 588 590 ··· 2912 2910 if (seq_rtt_us < 0 && tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr && 2913 2911 flag & FLAG_ACKED) { 2914 2912 u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr; 2915 - u32 delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ); 2916 2913 2917 - seq_rtt_us = ca_rtt_us = delta_us; 2914 + if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) { 2915 + seq_rtt_us = delta * (USEC_PER_SEC / TCP_TS_HZ); 2916 + ca_rtt_us = seq_rtt_us; 2917 + } 2918 2918 } 2919 2919 rs->rtt_us = ca_rtt_us; /* RTT of last (S)ACKed packet (or -1) */ 2920 2920 if (seq_rtt_us < 0) ··· 4272 4268 * If the sack array is full, forget about the last one. 
4273 4269 */ 4274 4270 if (this_sack >= TCP_NUM_SACKS) { 4275 - if (tp->compressed_ack) 4271 + if (tp->compressed_ack > TCP_FASTRETRANS_THRESH) 4276 4272 tcp_send_ack(sk); 4277 4273 this_sack--; 4278 4274 tp->rx_opt.num_sacks--; ··· 4367 4363 if (TCP_SKB_CB(from)->has_rxtstamp) { 4368 4364 TCP_SKB_CB(to)->has_rxtstamp = true; 4369 4365 to->tstamp = from->tstamp; 4366 + skb_hwtstamps(to)->hwtstamp = skb_hwtstamps(from)->hwtstamp; 4370 4367 } 4371 4368 4372 4369 return true; ··· 5193 5188 if (!tcp_is_sack(tp) || 5194 5189 tp->compressed_ack >= sock_net(sk)->ipv4.sysctl_tcp_comp_sack_nr) 5195 5190 goto send_now; 5196 - tp->compressed_ack++; 5191 + 5192 + if (tp->compressed_ack_rcv_nxt != tp->rcv_nxt) { 5193 + tp->compressed_ack_rcv_nxt = tp->rcv_nxt; 5194 + if (tp->compressed_ack > TCP_FASTRETRANS_THRESH) 5195 + NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPACKCOMPRESSED, 5196 + tp->compressed_ack - TCP_FASTRETRANS_THRESH); 5197 + tp->compressed_ack = 0; 5198 + } 5199 + 5200 + if (++tp->compressed_ack <= TCP_FASTRETRANS_THRESH) 5201 + goto send_now; 5197 5202 5198 5203 if (hrtimer_is_queued(&tp->compressed_ack_timer)) 5199 5204 return;
+3 -3
net/ipv4/tcp_output.c
··· 180 180 { 181 181 struct tcp_sock *tp = tcp_sk(sk); 182 182 183 - if (unlikely(tp->compressed_ack)) { 183 + if (unlikely(tp->compressed_ack > TCP_FASTRETRANS_THRESH)) { 184 184 NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPACKCOMPRESSED, 185 - tp->compressed_ack); 186 - tp->compressed_ack = 0; 185 + tp->compressed_ack - TCP_FASTRETRANS_THRESH); 186 + tp->compressed_ack = TCP_FASTRETRANS_THRESH; 187 187 if (hrtimer_try_to_cancel(&tp->compressed_ack_timer) == 1) 188 188 __sock_put(sk); 189 189 }
+7 -5
net/ipv4/tcp_timer.c
··· 40 40 { 41 41 struct inet_connection_sock *icsk = inet_csk(sk); 42 42 u32 elapsed, start_ts; 43 + s32 remaining; 43 44 44 45 start_ts = tcp_retransmit_stamp(sk); 45 46 if (!icsk->icsk_user_timeout || !start_ts) 46 47 return icsk->icsk_rto; 47 48 elapsed = tcp_time_stamp(tcp_sk(sk)) - start_ts; 48 - if (elapsed >= icsk->icsk_user_timeout) 49 + remaining = icsk->icsk_user_timeout - elapsed; 50 + if (remaining <= 0) 49 51 return 1; /* user timeout has passed; fire ASAP */ 50 - else 51 - return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(icsk->icsk_user_timeout - elapsed)); 52 + 53 + return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(remaining)); 52 54 } 53 55 54 56 /** ··· 211 209 (boundary - linear_backoff_thresh) * TCP_RTO_MAX; 212 210 timeout = jiffies_to_msecs(timeout); 213 211 } 214 - return (tcp_time_stamp(tcp_sk(sk)) - start_ts) >= timeout; 212 + return (s32)(tcp_time_stamp(tcp_sk(sk)) - start_ts - timeout) >= 0; 215 213 } 216 214 217 215 /* A write timeout has occurred. Process the after effects. */ ··· 742 740 743 741 bh_lock_sock(sk); 744 742 if (!sock_owned_by_user(sk)) { 745 - if (tp->compressed_ack) 743 + if (tp->compressed_ack > TCP_FASTRETRANS_THRESH) 746 744 tcp_send_ack(sk); 747 745 } else { 748 746 if (!test_and_set_bit(TCP_DELACK_TIMER_DEFERRED,
+13 -6
net/ipv6/addrconf.c
··· 179 179 static void addrconf_dad_work(struct work_struct *w); 180 180 static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id, 181 181 bool send_na); 182 - static void addrconf_dad_run(struct inet6_dev *idev); 182 + static void addrconf_dad_run(struct inet6_dev *idev, bool restart); 183 183 static void addrconf_rs_timer(struct timer_list *t); 184 184 static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifa); 185 185 static void ipv6_ifa_notify(int event, struct inet6_ifaddr *ifa); ··· 3439 3439 void *ptr) 3440 3440 { 3441 3441 struct net_device *dev = netdev_notifier_info_to_dev(ptr); 3442 + struct netdev_notifier_change_info *change_info; 3442 3443 struct netdev_notifier_changeupper_info *info; 3443 3444 struct inet6_dev *idev = __in6_dev_get(dev); 3444 3445 struct net *net = dev_net(dev); ··· 3514 3513 break; 3515 3514 } 3516 3515 3517 - if (idev) { 3516 + if (!IS_ERR_OR_NULL(idev)) { 3518 3517 if (idev->if_flags & IF_READY) { 3519 3518 /* device is already configured - 3520 3519 * but resend MLD reports, we might ··· 3522 3521 * multicast snooping switches 3523 3522 */ 3524 3523 ipv6_mc_up(idev); 3524 + change_info = ptr; 3525 + if (change_info->flags_changed & IFF_NOARP) 3526 + addrconf_dad_run(idev, true); 3525 3527 rt6_sync_up(dev, RTNH_F_LINKDOWN); 3526 3528 break; 3527 3529 } ··· 3559 3555 3560 3556 if (!IS_ERR_OR_NULL(idev)) { 3561 3557 if (run_pending) 3562 - addrconf_dad_run(idev); 3558 + addrconf_dad_run(idev, false); 3563 3559 3564 3560 /* Device has an address by now */ 3565 3561 rt6_sync_up(dev, RTNH_F_DEAD); ··· 4177 4173 addrconf_verify_rtnl(); 4178 4174 } 4179 4175 4180 - static void addrconf_dad_run(struct inet6_dev *idev) 4176 + static void addrconf_dad_run(struct inet6_dev *idev, bool restart) 4181 4177 { 4182 4178 struct inet6_ifaddr *ifp; 4183 4179 4184 4180 read_lock_bh(&idev->lock); 4185 4181 list_for_each_entry(ifp, &idev->addr_list, if_list) { 4186 4182 spin_lock(&ifp->lock); 4187 - if (ifp->flags & 
IFA_F_TENTATIVE && 4188 - ifp->state == INET6_IFADDR_STATE_DAD) 4183 + if ((ifp->flags & IFA_F_TENTATIVE && 4184 + ifp->state == INET6_IFADDR_STATE_DAD) || restart) { 4185 + if (restart) 4186 + ifp->state = INET6_IFADDR_STATE_PREDAD; 4189 4187 addrconf_dad_kick(ifp); 4188 + } 4190 4189 spin_unlock(&ifp->lock); 4191 4190 } 4192 4191 read_unlock_bh(&idev->lock);
+5
net/ipv6/af_inet6.c
··· 1001 1001 err = ip6_flowlabel_init(); 1002 1002 if (err) 1003 1003 goto ip6_flowlabel_fail; 1004 + err = ipv6_anycast_init(); 1005 + if (err) 1006 + goto ipv6_anycast_fail; 1004 1007 err = addrconf_init(); 1005 1008 if (err) 1006 1009 goto addrconf_fail; ··· 1094 1091 ipv6_exthdrs_fail: 1095 1092 addrconf_cleanup(); 1096 1093 addrconf_fail: 1094 + ipv6_anycast_cleanup(); 1095 + ipv6_anycast_fail: 1097 1096 ip6_flowlabel_cleanup(); 1098 1097 ip6_flowlabel_fail: 1099 1098 ndisc_late_cleanup();
+76 -4
net/ipv6/anycast.c
··· 44 44 45 45 #include <net/checksum.h> 46 46 47 + #define IN6_ADDR_HSIZE_SHIFT 8 48 + #define IN6_ADDR_HSIZE BIT(IN6_ADDR_HSIZE_SHIFT) 49 + /* anycast address hash table 50 + */ 51 + static struct hlist_head inet6_acaddr_lst[IN6_ADDR_HSIZE]; 52 + static DEFINE_SPINLOCK(acaddr_hash_lock); 53 + 47 54 static int ipv6_dev_ac_dec(struct net_device *dev, const struct in6_addr *addr); 55 + 56 + static u32 inet6_acaddr_hash(struct net *net, const struct in6_addr *addr) 57 + { 58 + u32 val = ipv6_addr_hash(addr) ^ net_hash_mix(net); 59 + 60 + return hash_32(val, IN6_ADDR_HSIZE_SHIFT); 61 + } 48 62 49 63 /* 50 64 * socket join an anycast group ··· 218 204 rtnl_unlock(); 219 205 } 220 206 207 + static void ipv6_add_acaddr_hash(struct net *net, struct ifacaddr6 *aca) 208 + { 209 + unsigned int hash = inet6_acaddr_hash(net, &aca->aca_addr); 210 + 211 + spin_lock(&acaddr_hash_lock); 212 + hlist_add_head_rcu(&aca->aca_addr_lst, &inet6_acaddr_lst[hash]); 213 + spin_unlock(&acaddr_hash_lock); 214 + } 215 + 216 + static void ipv6_del_acaddr_hash(struct ifacaddr6 *aca) 217 + { 218 + spin_lock(&acaddr_hash_lock); 219 + hlist_del_init_rcu(&aca->aca_addr_lst); 220 + spin_unlock(&acaddr_hash_lock); 221 + } 222 + 221 223 static void aca_get(struct ifacaddr6 *aca) 222 224 { 223 225 refcount_inc(&aca->aca_refcnt); 224 226 } 225 227 228 + static void aca_free_rcu(struct rcu_head *h) 229 + { 230 + struct ifacaddr6 *aca = container_of(h, struct ifacaddr6, rcu); 231 + 232 + fib6_info_release(aca->aca_rt); 233 + kfree(aca); 234 + } 235 + 226 236 static void aca_put(struct ifacaddr6 *ac) 227 237 { 228 238 if (refcount_dec_and_test(&ac->aca_refcnt)) { 229 - fib6_info_release(ac->aca_rt); 230 - kfree(ac); 239 + call_rcu(&ac->rcu, aca_free_rcu); 231 240 } 232 241 } 233 242 ··· 266 229 aca->aca_addr = *addr; 267 230 fib6_info_hold(f6i); 268 231 aca->aca_rt = f6i; 232 + INIT_HLIST_NODE(&aca->aca_addr_lst); 269 233 aca->aca_users = 1; 270 234 /* aca_tstamp should be updated upon changes */ 271 235 
aca->aca_cstamp = aca->aca_tstamp = jiffies; ··· 323 285 aca_get(aca); 324 286 write_unlock_bh(&idev->lock); 325 287 288 + ipv6_add_acaddr_hash(net, aca); 289 + 326 290 ip6_ins_rt(net, f6i); 327 291 328 292 addrconf_join_solict(idev->dev, &aca->aca_addr); ··· 365 325 else 366 326 idev->ac_list = aca->aca_next; 367 327 write_unlock_bh(&idev->lock); 328 + ipv6_del_acaddr_hash(aca); 368 329 addrconf_leave_solict(idev, &aca->aca_addr); 369 330 370 331 ip6_del_rt(dev_net(idev->dev), aca->aca_rt); ··· 392 351 while ((aca = idev->ac_list) != NULL) { 393 352 idev->ac_list = aca->aca_next; 394 353 write_unlock_bh(&idev->lock); 354 + 355 + ipv6_del_acaddr_hash(aca); 395 356 396 357 addrconf_leave_solict(idev, &aca->aca_addr); 397 358 ··· 433 390 bool ipv6_chk_acast_addr(struct net *net, struct net_device *dev, 434 391 const struct in6_addr *addr) 435 392 { 393 + unsigned int hash = inet6_acaddr_hash(net, addr); 394 + struct net_device *nh_dev; 395 + struct ifacaddr6 *aca; 436 396 bool found = false; 437 397 438 398 rcu_read_lock(); 439 399 if (dev) 440 400 found = ipv6_chk_acast_dev(dev, addr); 441 401 else 442 - for_each_netdev_rcu(net, dev) 443 - if (ipv6_chk_acast_dev(dev, addr)) { 402 + hlist_for_each_entry_rcu(aca, &inet6_acaddr_lst[hash], 403 + aca_addr_lst) { 404 + nh_dev = fib6_info_nh_dev(aca->aca_rt); 405 + if (!nh_dev || !net_eq(dev_net(nh_dev), net)) 406 + continue; 407 + if (ipv6_addr_equal(&aca->aca_addr, addr)) { 444 408 found = true; 445 409 break; 446 410 } 411 + } 447 412 rcu_read_unlock(); 448 413 return found; 449 414 } ··· 591 540 remove_proc_entry("anycast6", net->proc_net); 592 541 } 593 542 #endif 543 + 544 + /* Init / cleanup code 545 + */ 546 + int __init ipv6_anycast_init(void) 547 + { 548 + int i; 549 + 550 + for (i = 0; i < IN6_ADDR_HSIZE; i++) 551 + INIT_HLIST_HEAD(&inet6_acaddr_lst[i]); 552 + return 0; 553 + } 554 + 555 + void ipv6_anycast_cleanup(void) 556 + { 557 + int i; 558 + 559 + spin_lock(&acaddr_hash_lock); 560 + for (i = 0; i < 
IN6_ADDR_HSIZE; i++) 561 + WARN_ON(!hlist_empty(&inet6_acaddr_lst[i])); 562 + spin_unlock(&acaddr_hash_lock); 563 + }
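The anycast patch replaces a full `for_each_netdev_rcu()` scan with a 256-bucket hash table keyed by address and netns. Bucket selection follows the kernel's `hash_32()`: a golden-ratio multiply whose top bits are taken as the index. A sketch with the patch's constants (the fold of the 128-bit address into 32 bits is simplified here compared to `ipv6_addr_hash()`):

```c
#include <assert.h>
#include <stdint.h>

#define IN6_ADDR_HSIZE_SHIFT 8
#define IN6_ADDR_HSIZE (1U << IN6_ADDR_HSIZE_SHIFT)
#define GOLDEN_RATIO_32 0x61C88647U	/* as in the kernel's hash.h */

static uint32_t hash_32(uint32_t val, unsigned int bits)
{
	/* high bits of the product are the best mixed */
	return (val * GOLDEN_RATIO_32) >> (32 - bits);
}

static uint32_t acaddr_hash(const uint32_t addr[4], uint32_t net_mix)
{
	/* simplified fold of the four address words plus netns salt */
	uint32_t val = (addr[0] ^ addr[1] ^ addr[2] ^ addr[3]) ^ net_mix;

	return hash_32(val, IN6_ADDR_HSIZE_SHIFT);
}
```

Mixing in the per-netns salt (`net_hash_mix()` in the patch) keeps one namespace's addresses from clustering predictably in another's buckets.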
+2 -2
net/ipv6/ip6_fib.c
··· 591 591 592 592 /* fib entries are never clones */ 593 593 if (arg.filter.flags & RTM_F_CLONED) 594 - return skb->len; 594 + goto out; 595 595 596 596 w = (void *)cb->args[2]; 597 597 if (!w) { ··· 621 621 tb = fib6_get_table(net, arg.filter.table_id); 622 622 if (!tb) { 623 623 if (arg.filter.dump_all_families) 624 - return skb->len; 624 + goto out; 625 625 626 626 NL_SET_ERR_MSG_MOD(cb->extack, "FIB table does not exist"); 627 627 return -ENOENT;
+2 -1
net/ipv6/ip6_output.c
··· 1354 1354 unsigned int fraglen; 1355 1355 unsigned int fraggap; 1356 1356 unsigned int alloclen; 1357 - unsigned int pagedlen = 0; 1357 + unsigned int pagedlen; 1358 1358 alloc_new_skb: 1359 1359 /* There's no room in the current skb */ 1360 1360 if (skb) ··· 1378 1378 if (datalen > (cork->length <= mtu && !(cork->flags & IPCORK_ALLFRAG) ? mtu : maxfraglen) - fragheaderlen) 1379 1379 datalen = maxfraglen - fragheaderlen - rt->dst.trailer_len; 1380 1380 fraglen = datalen + fragheaderlen; 1381 + pagedlen = 0; 1381 1382 1382 1383 if ((flags & MSG_MORE) && 1383 1384 !(rt->dst.dev->features&NETIF_F_SG))
+2 -1
net/ipv6/netfilter.c
··· 24 24 unsigned int hh_len; 25 25 struct dst_entry *dst; 26 26 struct flowi6 fl6 = { 27 - .flowi6_oif = sk ? sk->sk_bound_dev_if : 0, 27 + .flowi6_oif = sk && sk->sk_bound_dev_if ? sk->sk_bound_dev_if : 28 + rt6_need_strict(&iph->daddr) ? skb_dst(skb)->dev->ifindex : 0, 28 29 .flowi6_mark = skb->mark, 29 30 .flowi6_uid = sock_net_uid(net, sk), 30 31 .daddr = iph->daddr,
+6 -2
net/ipv6/netfilter/ip6t_MASQUERADE.c
··· 58 58 int err; 59 59 60 60 err = xt_register_target(&masquerade_tg6_reg); 61 - if (err == 0) 62 - nf_nat_masquerade_ipv6_register_notifier(); 61 + if (err) 62 + return err; 63 + 64 + err = nf_nat_masquerade_ipv6_register_notifier(); 65 + if (err) 66 + xt_unregister_target(&masquerade_tg6_reg); 63 67 64 68 return err; 65 69 }
+9 -4
net/ipv6/netfilter/nf_conntrack_reasm.c
··· 587 587 */ 588 588 ret = -EINPROGRESS; 589 589 if (fq->q.flags == (INET_FRAG_FIRST_IN | INET_FRAG_LAST_IN) && 590 - fq->q.meat == fq->q.len && 591 - nf_ct_frag6_reasm(fq, skb, dev)) 592 - ret = 0; 593 - else 590 + fq->q.meat == fq->q.len) { 591 + unsigned long orefdst = skb->_skb_refdst; 592 + 593 + skb->_skb_refdst = 0UL; 594 + if (nf_ct_frag6_reasm(fq, skb, dev)) 595 + ret = 0; 596 + skb->_skb_refdst = orefdst; 597 + } else { 594 598 skb_dst_drop(skb); 599 + } 595 600 596 601 out_unlock: 597 602 spin_unlock_bh(&fq->q.lock);
+37 -14
net/ipv6/netfilter/nf_nat_masquerade_ipv6.c
··· 132 132 * of ipv6 addresses being deleted), we also need to add an upper 133 133 * limit to the number of queued work items. 134 134 */ 135 - static int masq_inet_event(struct notifier_block *this, 136 - unsigned long event, void *ptr) 135 + static int masq_inet6_event(struct notifier_block *this, 136 + unsigned long event, void *ptr) 137 137 { 138 138 struct inet6_ifaddr *ifa = ptr; 139 139 const struct net_device *dev;
··· 171 171 return NOTIFY_DONE; 172 172 } 173 173 174 - static struct notifier_block masq_inet_notifier = { 175 - .notifier_call = masq_inet_event, 174 + static struct notifier_block masq_inet6_notifier = { 175 + .notifier_call = masq_inet6_event, 176 176 }; 177 177 178 - static atomic_t masquerade_notifier_refcount = ATOMIC_INIT(0); 178 + static int masq_refcnt; 179 + static DEFINE_MUTEX(masq_mutex); 179 180 180 - void nf_nat_masquerade_ipv6_register_notifier(void) 181 + int nf_nat_masquerade_ipv6_register_notifier(void) 181 182 { 182 - /* check if the notifier is already set */ 183 - if (atomic_inc_return(&masquerade_notifier_refcount) > 1) 184 - return; 183 + int ret = 0; 185 184 186 - register_netdevice_notifier(&masq_dev_notifier); 187 - register_inet6addr_notifier(&masq_inet_notifier); 185 + mutex_lock(&masq_mutex); 186 + /* check if the notifier is already set */ 187 + if (++masq_refcnt > 1) 188 + goto out_unlock; 189 + 190 + ret = register_netdevice_notifier(&masq_dev_notifier); 191 + if (ret) 192 + goto err_dec; 193 + 194 + ret = register_inet6addr_notifier(&masq_inet6_notifier); 195 + if (ret) 196 + goto err_unregister; 197 + 198 + mutex_unlock(&masq_mutex); 199 + return ret; 200 + 201 + err_unregister: 202 + unregister_netdevice_notifier(&masq_dev_notifier); 203 + err_dec: 204 + masq_refcnt--; 205 + out_unlock: 206 + mutex_unlock(&masq_mutex); 207 + return ret; 188 208 } 189 209 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6_register_notifier); 190 210 191 211 void nf_nat_masquerade_ipv6_unregister_notifier(void) 192 212 { 213 + mutex_lock(&masq_mutex); 193 214 /* check if the notifier still has clients */ 194 - if (atomic_dec_return(&masquerade_notifier_refcount) > 0) 195 - return; 215 + if (--masq_refcnt > 0) 216 + goto out_unlock; 196 217 197 - unregister_inet6addr_notifier(&masq_inet_notifier); 218 + unregister_inet6addr_notifier(&masq_inet6_notifier); 198 219 unregister_netdevice_notifier(&masq_dev_notifier); 220 + out_unlock: 221 + mutex_unlock(&masq_mutex); 199 222 } 200 223 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6_unregister_notifier);
+3 -1
net/ipv6/netfilter/nft_masq_ipv6.c
··· 70 70 if (ret < 0) 71 71 return ret; 72 72 73 - nf_nat_masquerade_ipv6_register_notifier(); 73 + ret = nf_nat_masquerade_ipv6_register_notifier(); 74 + if (ret) 75 + nft_unregister_expr(&nft_masq_ipv6_type); 74 76 75 77 return ret; 76 78 }
+8 -6
net/ipv6/route.c
··· 2232 2232 if (rt) { 2233 2233 rcu_read_lock(); 2234 2234 if (rt->rt6i_flags & RTF_CACHE) { 2235 - if (dst_hold_safe(&rt->dst)) 2236 - rt6_remove_exception_rt(rt); 2235 + rt6_remove_exception_rt(rt); 2237 2236 } else { 2238 2237 struct fib6_info *from; 2239 2238 struct fib6_node *fn; ··· 2359 2360 2360 2361 void ip6_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, __be32 mtu) 2361 2362 { 2363 + int oif = sk->sk_bound_dev_if; 2362 2364 struct dst_entry *dst; 2363 2365 2364 - ip6_update_pmtu(skb, sock_net(sk), mtu, 2365 - sk->sk_bound_dev_if, sk->sk_mark, sk->sk_uid); 2366 + if (!oif && skb->dev) 2367 + oif = l3mdev_master_ifindex(skb->dev); 2368 + 2369 + ip6_update_pmtu(skb, sock_net(sk), mtu, oif, sk->sk_mark, sk->sk_uid); 2366 2370 2367 2371 dst = __sk_dst_get(sk); 2368 2372 if (!dst || !dst->obsolete || ··· 3216 3214 if (cfg->fc_flags & RTF_GATEWAY && 3217 3215 !ipv6_addr_equal(&cfg->fc_gateway, &rt->rt6i_gateway)) 3218 3216 goto out; 3219 - if (dst_hold_safe(&rt->dst)) 3220 - rc = rt6_remove_exception_rt(rt); 3217 + 3218 + rc = rt6_remove_exception_rt(rt); 3221 3219 out: 3222 3220 return rc; 3223 3221 }
+4 -5
net/l2tp/l2tp_core.c
··· 1490 1490 goto err_sock; 1491 1491 } 1492 1492 1493 - sk = sock->sk; 1494 - 1495 - sock_hold(sk); 1496 - tunnel->sock = sk; 1497 1493 tunnel->l2tp_net = net; 1498 - 1499 1494 pn = l2tp_pernet(net); 1500 1495 1501 1496 spin_lock_bh(&pn->l2tp_tunnel_list_lock); ··· 1504 1509 } 1505 1510 list_add_rcu(&tunnel->list, &pn->l2tp_tunnel_list); 1506 1511 spin_unlock_bh(&pn->l2tp_tunnel_list_lock); 1512 + 1513 + sk = sock->sk; 1514 + sock_hold(sk); 1515 + tunnel->sock = sk; 1507 1516 1508 1517 if (tunnel->encap == L2TP_ENCAPTYPE_UDP) { 1509 1518 struct udp_tunnel_sock_cfg udp_cfg = {
+23 -20
net/netfilter/ipset/ip_set_core.c
··· 55 55 MODULE_DESCRIPTION("core IP set support"); 56 56 MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_IPSET); 57 57 58 - /* When the nfnl mutex is held: */ 58 + /* When the nfnl mutex or ip_set_ref_lock is held: */ 59 59 #define ip_set_dereference(p) \ 60 - rcu_dereference_protected(p, lockdep_nfnl_is_held(NFNL_SUBSYS_IPSET)) 60 + rcu_dereference_protected(p, \ 61 + lockdep_nfnl_is_held(NFNL_SUBSYS_IPSET) || \ 62 + lockdep_is_held(&ip_set_ref_lock)) 61 63 #define ip_set(inst, id) \ 62 64 ip_set_dereference((inst)->ip_set_list)[id] 65 + #define ip_set_ref_netlink(inst,id) \ 66 + rcu_dereference_raw((inst)->ip_set_list)[id] 63 67 64 68 /* The set types are implemented in modules and registered set types 65 69 * can be found in ip_set_type_list. Adding/deleting types is
··· 697 693 EXPORT_SYMBOL_GPL(ip_set_put_byindex); 698 694 699 695 /* Get the name of a set behind a set index. 700 - * We assume the set is referenced, so it does exist and 701 - * can't be destroyed. The set cannot be renamed due to 702 - * the referencing either. 703 - * 696 + * Set itself is protected by RCU, but its name isn't: to protect against 697 + * renaming, grab ip_set_ref_lock as reader (see ip_set_rename()) and copy the 698 + * name. 704 699 */ 705 - const char * 706 - ip_set_name_byindex(struct net *net, ip_set_id_t index) 700 + void 701 + ip_set_name_byindex(struct net *net, ip_set_id_t index, char *name) 707 702 { 708 - const struct ip_set *set = ip_set_rcu_get(net, index); 703 + struct ip_set *set = ip_set_rcu_get(net, index); 709 704 710 705 BUG_ON(!set); 711 - BUG_ON(set->ref == 0); 712 706 713 - /* Referenced, so it's safe */ 714 - return set->name; 707 + read_lock_bh(&ip_set_ref_lock); 708 + strncpy(name, set->name, IPSET_MAXNAMELEN); 709 + read_unlock_bh(&ip_set_ref_lock); 715 710 } 716 711 EXPORT_SYMBOL_GPL(ip_set_name_byindex); 717 712
··· 964 961 /* Wraparound */ 965 962 goto cleanup; 966 963 967 - list = kcalloc(i, sizeof(struct ip_set *), GFP_KERNEL); 964 + list = kvcalloc(i, sizeof(struct ip_set *), GFP_KERNEL); 968 965 if (!list) 969 966 goto cleanup; 970 967 /* nfnl mutex is held, both lists are valid */
··· 976 973 /* Use new list */ 977 974 index = inst->ip_set_max; 978 975 inst->ip_set_max = i; 979 - kfree(tmp); 976 + kvfree(tmp); 980 977 ret = 0; 981 978 } else if (ret) { 982 979 goto cleanup;
··· 1156 1153 if (!set) 1157 1154 return -ENOENT; 1158 1155 1159 - read_lock_bh(&ip_set_ref_lock); 1156 + write_lock_bh(&ip_set_ref_lock); 1160 1157 if (set->ref != 0) { 1161 1158 ret = -IPSET_ERR_REFERENCED; 1162 1159 goto out;
··· 1173 1170 strncpy(set->name, name2, IPSET_MAXNAMELEN); 1174 1171 1175 1172 out: 1176 - read_unlock_bh(&ip_set_ref_lock); 1173 + write_unlock_bh(&ip_set_ref_lock); 1177 1174 return ret; 1178 1175 }
··· 1255 1252 struct ip_set_net *inst = 1256 1253 (struct ip_set_net *)cb->args[IPSET_CB_NET]; 1257 1254 ip_set_id_t index = (ip_set_id_t)cb->args[IPSET_CB_INDEX]; 1258 - struct ip_set *set = ip_set(inst, index); 1255 + struct ip_set *set = ip_set_ref_netlink(inst, index); 1259 1256 1260 1257 if (set->variant->uref) 1261 1258 set->variant->uref(set, cb, false);
··· 1444 1441 release_refcount: 1445 1442 /* If there was an error or set is done, release set */ 1446 1443 if (ret || !cb->args[IPSET_CB_ARG0]) { 1447 - set = ip_set(inst, index); 1444 + set = ip_set_ref_netlink(inst, index); 1448 1445 if (set->variant->uref) 1449 1446 set->variant->uref(set, cb, false); 1450 1447 pr_debug("release set %s\n", set->name);
··· 2062 2059 if (inst->ip_set_max >= IPSET_INVALID_ID) 2063 2060 inst->ip_set_max = IPSET_INVALID_ID - 1; 2064 2061 2065 - list = kcalloc(inst->ip_set_max, sizeof(struct ip_set *), GFP_KERNEL); 2062 + list = kvcalloc(inst->ip_set_max, sizeof(struct ip_set *), GFP_KERNEL); 2066 2063 if (!list) 2067 2064 return -ENOMEM; 2068 2065 inst->is_deleted = false;
··· 2090 2087 } 2091 2088 } 2092 2089 nfnl_unlock(NFNL_SUBSYS_IPSET); 2093 - kfree(rcu_dereference_protected(inst->ip_set_list, 1)); 2090 + kvfree(rcu_dereference_protected(inst->ip_set_list, 1)); 2094 2091 } 2095 2092 2096 2093 static struct pernet_operations ip_set_net_ops = {
+4 -4
net/netfilter/ipset/ip_set_hash_netportnet.c
··· 213 213 214 214 if (tb[IPSET_ATTR_CIDR]) { 215 215 e.cidr[0] = nla_get_u8(tb[IPSET_ATTR_CIDR]); 216 - if (!e.cidr[0] || e.cidr[0] > HOST_MASK) 216 + if (e.cidr[0] > HOST_MASK) 217 217 return -IPSET_ERR_INVALID_CIDR; 218 218 } 219 219 220 220 if (tb[IPSET_ATTR_CIDR2]) { 221 221 e.cidr[1] = nla_get_u8(tb[IPSET_ATTR_CIDR2]); 222 - if (!e.cidr[1] || e.cidr[1] > HOST_MASK) 222 + if (e.cidr[1] > HOST_MASK) 223 223 return -IPSET_ERR_INVALID_CIDR; 224 224 } 225 225 ··· 493 493 494 494 if (tb[IPSET_ATTR_CIDR]) { 495 495 e.cidr[0] = nla_get_u8(tb[IPSET_ATTR_CIDR]); 496 - if (!e.cidr[0] || e.cidr[0] > HOST_MASK) 496 + if (e.cidr[0] > HOST_MASK) 497 497 return -IPSET_ERR_INVALID_CIDR; 498 498 } 499 499 500 500 if (tb[IPSET_ATTR_CIDR2]) { 501 501 e.cidr[1] = nla_get_u8(tb[IPSET_ATTR_CIDR2]); 502 - if (!e.cidr[1] || e.cidr[1] > HOST_MASK) 502 + if (e.cidr[1] > HOST_MASK) 503 503 return -IPSET_ERR_INVALID_CIDR; 504 504 } 505 505
+11 -6
net/netfilter/ipset/ip_set_list_set.c
··· 148 148 { 149 149 struct set_elem *e = container_of(rcu, struct set_elem, rcu); 150 150 struct ip_set *set = e->set; 151 - struct list_set *map = set->data; 152 151 153 - ip_set_put_byindex(map->net, e->id); 154 152 ip_set_ext_destroy(set, e); 155 153 kfree(e); 156 154 } ··· 156 158 static inline void 157 159 list_set_del(struct ip_set *set, struct set_elem *e) 158 160 { 161 + struct list_set *map = set->data; 162 + 159 163 set->elements--; 160 164 list_del_rcu(&e->list); 165 + ip_set_put_byindex(map->net, e->id); 161 166 call_rcu(&e->rcu, __list_set_del_rcu); 162 167 } 163 168 164 169 static inline void 165 - list_set_replace(struct set_elem *e, struct set_elem *old) 170 + list_set_replace(struct ip_set *set, struct set_elem *e, struct set_elem *old) 166 171 { 172 + struct list_set *map = set->data; 173 + 167 174 list_replace_rcu(&old->list, &e->list); 175 + ip_set_put_byindex(map->net, old->id); 168 176 call_rcu(&old->rcu, __list_set_del_rcu); 169 177 } 170 178 ··· 302 298 INIT_LIST_HEAD(&e->list); 303 299 list_set_init_extensions(set, ext, e); 304 300 if (n) 305 - list_set_replace(e, n); 301 + list_set_replace(set, e, n); 306 302 else if (next) 307 303 list_add_tail_rcu(&e->list, &next->list); 308 304 else if (prev) ··· 490 486 const struct list_set *map = set->data; 491 487 struct nlattr *atd, *nested; 492 488 u32 i = 0, first = cb->args[IPSET_CB_ARG0]; 489 + char name[IPSET_MAXNAMELEN]; 493 490 struct set_elem *e; 494 491 int ret = 0; 495 492 ··· 509 504 nested = ipset_nest_start(skb, IPSET_ATTR_DATA); 510 505 if (!nested) 511 506 goto nla_put_failure; 512 - if (nla_put_string(skb, IPSET_ATTR_NAME, 513 - ip_set_name_byindex(map->net, e->id))) 507 + ip_set_name_byindex(map->net, e->id, name); 508 + if (nla_put_string(skb, IPSET_ATTR_NAME, name)) 514 509 goto nla_put_failure; 515 510 if (ip_set_put_extensions(skb, set, e, true)) 516 511 goto nla_put_failure;
+3
net/netfilter/ipvs/ip_vs_ctl.c
··· 3980 3980 3981 3981 static struct notifier_block ip_vs_dst_notifier = { 3982 3982 .notifier_call = ip_vs_dst_event, 3983 + #ifdef CONFIG_IP_VS_IPV6 3984 + .priority = ADDRCONF_NOTIFY_PRIORITY + 5, 3985 + #endif 3983 3986 }; 3984 3987 3985 3988 int __net_init ip_vs_control_net_init(struct netns_ipvs *ipvs)
+28 -16
net/netfilter/nf_conncount.c
··· 49 49 struct nf_conntrack_zone zone; 50 50 int cpu; 51 51 u32 jiffies32; 52 + bool dead; 52 53 struct rcu_head rcu_head; 53 54 }; 54 55
··· 107 106 conn->zone = *zone; 108 107 conn->cpu = raw_smp_processor_id(); 109 108 conn->jiffies32 = (u32)jiffies; 110 - spin_lock(&list->list_lock); 109 + conn->dead = false; 110 + spin_lock_bh(&list->list_lock); 111 111 if (list->dead == true) { 112 112 kmem_cache_free(conncount_conn_cachep, conn); 113 - spin_unlock(&list->list_lock); 113 + spin_unlock_bh(&list->list_lock); 114 114 return NF_CONNCOUNT_SKIP; 115 115 } 116 116 list_add_tail(&conn->node, &list->head); 117 117 list->count++; 118 - spin_unlock(&list->list_lock); 118 + spin_unlock_bh(&list->list_lock); 119 119 return NF_CONNCOUNT_ADDED; 120 120 } 121 121 EXPORT_SYMBOL_GPL(nf_conncount_add);
··· 134 132 { 135 133 bool free_entry = false; 136 134 137 - spin_lock(&list->list_lock); 135 + spin_lock_bh(&list->list_lock); 138 136 139 - if (list->count == 0) { 140 - spin_unlock(&list->list_lock); 141 - return free_entry; 137 + if (conn->dead) { 138 + spin_unlock_bh(&list->list_lock); 139 + return free_entry; 142 140 } 143 141 144 142 list->count--; 143 + conn->dead = true; 145 144 list_del_rcu(&conn->node); 146 - if (list->count == 0) 145 + if (list->count == 0) { 146 + list->dead = true; 147 147 free_entry = true; 148 + } 148 149 149 - spin_unlock(&list->list_lock); 150 + spin_unlock_bh(&list->list_lock); 150 151 call_rcu(&conn->rcu_head, __conn_free); 151 152 return free_entry; 152 153 }
··· 250 245 { 251 246 spin_lock_init(&list->list_lock); 252 247 INIT_LIST_HEAD(&list->head); 253 - list->count = 1; 248 + list->count = 0; 254 249 list->dead = false; 255 250 } 256 251 EXPORT_SYMBOL_GPL(nf_conncount_list_init);
··· 264 259 struct nf_conn *found_ct; 265 260 unsigned int collected = 0; 266 261 bool free_entry = false; 262 + bool ret = false; 267 263 268 264 list_for_each_entry_safe(conn, conn_n, &list->head, node) { 269 265 found = find_or_evict(net, list, conn, &free_entry);
··· 294 288 if (collected > CONNCOUNT_GC_MAX_NODES) 295 289 return false; 296 290 } 297 - return false; 291 + 292 + spin_lock_bh(&list->list_lock); 293 + if (!list->count) { 294 + list->dead = true; 295 + ret = true; 296 + } 297 + spin_unlock_bh(&list->list_lock); 298 + 299 + return ret; 298 300 } 299 301 EXPORT_SYMBOL_GPL(nf_conncount_gc_list); 300 302
··· 323 309 while (gc_count) { 324 310 rbconn = gc_nodes[--gc_count]; 325 311 spin_lock(&rbconn->list.list_lock); 326 - if (rbconn->list.count == 0 && rbconn->list.dead == false) { 327 - rbconn->list.dead = true; 328 - rb_erase(&rbconn->node, root); 329 - call_rcu(&rbconn->rcu_head, __tree_nodes_free); 330 - } 312 + rb_erase(&rbconn->node, root); 313 + call_rcu(&rbconn->rcu_head, __tree_nodes_free); 331 314 spin_unlock(&rbconn->list.list_lock); 332 315 } 333 316 }
··· 425 414 nf_conncount_list_init(&rbconn->list); 426 415 list_add(&conn->node, &rbconn->list.head); 427 416 count = 1; 417 + rbconn->list.count = count; 428 418 429 419 rb_link_node(&rbconn->node, parent, rbnode); 430 420 rb_insert_color(&rbconn->node, root);
+8 -5
net/netfilter/nf_conntrack_core.c
··· 1073 1073 return drops; 1074 1074 } 1075 1075 1076 - static noinline int early_drop(struct net *net, unsigned int _hash) 1076 + static noinline int early_drop(struct net *net, unsigned int hash) 1077 1077 { 1078 - unsigned int i; 1078 + unsigned int i, bucket; 1079 1079 1080 1080 for (i = 0; i < NF_CT_EVICTION_RANGE; i++) { 1081 1081 struct hlist_nulls_head *ct_hash; 1082 - unsigned int hash, hsize, drops; 1082 + unsigned int hsize, drops; 1083 1083 1084 1084 rcu_read_lock(); 1085 1085 nf_conntrack_get_ht(&ct_hash, &hsize); 1086 - hash = reciprocal_scale(_hash++, hsize); 1086 + if (!i) 1087 + bucket = reciprocal_scale(hash, hsize); 1088 + else 1089 + bucket = (bucket + 1) % hsize; 1087 1090 1088 - drops = early_drop_list(net, &ct_hash[hash]); 1091 + drops = early_drop_list(net, &ct_hash[bucket]); 1089 1092 rcu_read_unlock(); 1090 1093 1091 1094 if (drops) {
+4 -9
net/netfilter/nf_conntrack_proto_dccp.c
··· 384 384 }, 385 385 }; 386 386 387 - static inline struct nf_dccp_net *dccp_pernet(struct net *net) 388 - { 389 - return &net->ct.nf_ct_proto.dccp; 390 - } 391 - 392 387 static noinline bool 393 388 dccp_new(struct nf_conn *ct, const struct sk_buff *skb, 394 389 const struct dccp_hdr *dh) ··· 396 401 state = dccp_state_table[CT_DCCP_ROLE_CLIENT][dh->dccph_type][CT_DCCP_NONE]; 397 402 switch (state) { 398 403 default: 399 - dn = dccp_pernet(net); 404 + dn = nf_dccp_pernet(net); 400 405 if (dn->dccp_loose == 0) { 401 406 msg = "not picking up existing connection "; 402 407 goto out_invalid; ··· 563 568 564 569 timeouts = nf_ct_timeout_lookup(ct); 565 570 if (!timeouts) 566 - timeouts = dccp_pernet(nf_ct_net(ct))->dccp_timeout; 571 + timeouts = nf_dccp_pernet(nf_ct_net(ct))->dccp_timeout; 567 572 nf_ct_refresh_acct(ct, ctinfo, skb, timeouts[new_state]); 568 573 569 574 return NF_ACCEPT; ··· 676 681 static int dccp_timeout_nlattr_to_obj(struct nlattr *tb[], 677 682 struct net *net, void *data) 678 683 { 679 - struct nf_dccp_net *dn = dccp_pernet(net); 684 + struct nf_dccp_net *dn = nf_dccp_pernet(net); 680 685 unsigned int *timeouts = data; 681 686 int i; 682 687 ··· 809 814 810 815 static int dccp_init_net(struct net *net) 811 816 { 812 - struct nf_dccp_net *dn = dccp_pernet(net); 817 + struct nf_dccp_net *dn = nf_dccp_pernet(net); 813 818 struct nf_proto_net *pn = &dn->pn; 814 819 815 820 if (!pn->users) {
+3 -8
net/netfilter/nf_conntrack_proto_generic.c
··· 27 27 } 28 28 } 29 29 30 - static inline struct nf_generic_net *generic_pernet(struct net *net) 31 - { 32 - return &net->ct.nf_ct_proto.generic; 33 - } 34 - 35 30 static bool generic_pkt_to_tuple(const struct sk_buff *skb, 36 31 unsigned int dataoff, 37 32 struct net *net, struct nf_conntrack_tuple *tuple) ··· 53 58 } 54 59 55 60 if (!timeout) 56 - timeout = &generic_pernet(nf_ct_net(ct))->timeout; 61 + timeout = &nf_generic_pernet(nf_ct_net(ct))->timeout; 57 62 58 63 nf_ct_refresh_acct(ct, ctinfo, skb, *timeout); 59 64 return NF_ACCEPT; ··· 67 72 static int generic_timeout_nlattr_to_obj(struct nlattr *tb[], 68 73 struct net *net, void *data) 69 74 { 70 - struct nf_generic_net *gn = generic_pernet(net); 75 + struct nf_generic_net *gn = nf_generic_pernet(net); 71 76 unsigned int *timeout = data; 72 77 73 78 if (!timeout) ··· 133 138 134 139 static int generic_init_net(struct net *net) 135 140 { 136 - struct nf_generic_net *gn = generic_pernet(net); 141 + struct nf_generic_net *gn = nf_generic_pernet(net); 137 142 struct nf_proto_net *pn = &gn->pn; 138 143 139 144 gn->timeout = nf_ct_generic_timeout;
+2 -12
net/netfilter/nf_conntrack_proto_gre.c
··· 43 43 #include <linux/netfilter/nf_conntrack_proto_gre.h> 44 44 #include <linux/netfilter/nf_conntrack_pptp.h> 45 45 46 - enum grep_conntrack { 47 - GRE_CT_UNREPLIED, 48 - GRE_CT_REPLIED, 49 - GRE_CT_MAX 50 - }; 51 - 52 46 static const unsigned int gre_timeouts[GRE_CT_MAX] = { 53 47 [GRE_CT_UNREPLIED] = 30*HZ, 54 48 [GRE_CT_REPLIED] = 180*HZ, 55 49 }; 56 50 57 51 static unsigned int proto_gre_net_id __read_mostly; 58 - struct netns_proto_gre { 59 - struct nf_proto_net nf; 60 - rwlock_t keymap_lock; 61 - struct list_head keymap_list; 62 - unsigned int gre_timeouts[GRE_CT_MAX]; 63 - }; 64 52 65 53 static inline struct netns_proto_gre *gre_pernet(struct net *net) 66 54 { ··· 389 401 static int __init nf_ct_proto_gre_init(void) 390 402 { 391 403 int ret; 404 + 405 + BUILD_BUG_ON(offsetof(struct netns_proto_gre, nf) != 0); 392 406 393 407 ret = register_pernet_subsys(&proto_gre_net_ops); 394 408 if (ret < 0)
+3 -8
net/netfilter/nf_conntrack_proto_icmp.c
··· 25 25 26 26 static const unsigned int nf_ct_icmp_timeout = 30*HZ; 27 27 28 - static inline struct nf_icmp_net *icmp_pernet(struct net *net) 29 - { 30 - return &net->ct.nf_ct_proto.icmp; 31 - } 32 - 33 28 static bool icmp_pkt_to_tuple(const struct sk_buff *skb, unsigned int dataoff, 34 29 struct net *net, struct nf_conntrack_tuple *tuple) 35 30 { ··· 98 103 } 99 104 100 105 if (!timeout) 101 - timeout = &icmp_pernet(nf_ct_net(ct))->timeout; 106 + timeout = &nf_icmp_pernet(nf_ct_net(ct))->timeout; 102 107 103 108 nf_ct_refresh_acct(ct, ctinfo, skb, *timeout); 104 109 return NF_ACCEPT; ··· 270 275 struct net *net, void *data) 271 276 { 272 277 unsigned int *timeout = data; 273 - struct nf_icmp_net *in = icmp_pernet(net); 278 + struct nf_icmp_net *in = nf_icmp_pernet(net); 274 279 275 280 if (tb[CTA_TIMEOUT_ICMP_TIMEOUT]) { 276 281 if (!timeout) ··· 332 337 333 338 static int icmp_init_net(struct net *net) 334 339 { 335 - struct nf_icmp_net *in = icmp_pernet(net); 340 + struct nf_icmp_net *in = nf_icmp_pernet(net); 336 341 struct nf_proto_net *pn = &in->pn; 337 342 338 343 in->timeout = nf_ct_icmp_timeout;
+3 -8
net/netfilter/nf_conntrack_proto_icmpv6.c
··· 30 30 31 31 static const unsigned int nf_ct_icmpv6_timeout = 30*HZ; 32 32 33 - static inline struct nf_icmp_net *icmpv6_pernet(struct net *net) 34 - { 35 - return &net->ct.nf_ct_proto.icmpv6; 36 - } 37 - 38 33 static bool icmpv6_pkt_to_tuple(const struct sk_buff *skb, 39 34 unsigned int dataoff, 40 35 struct net *net, ··· 82 87 83 88 static unsigned int *icmpv6_get_timeouts(struct net *net) 84 89 { 85 - return &icmpv6_pernet(net)->timeout; 90 + return &nf_icmpv6_pernet(net)->timeout; 86 91 } 87 92 88 93 /* Returns verdict for packet, or -1 for invalid. */ ··· 281 286 struct net *net, void *data) 282 287 { 283 288 unsigned int *timeout = data; 284 - struct nf_icmp_net *in = icmpv6_pernet(net); 289 + struct nf_icmp_net *in = nf_icmpv6_pernet(net); 285 290 286 291 if (!timeout) 287 292 timeout = icmpv6_get_timeouts(net); ··· 343 348 344 349 static int icmpv6_init_net(struct net *net) 345 350 { 346 - struct nf_icmp_net *in = icmpv6_pernet(net); 351 + struct nf_icmp_net *in = nf_icmpv6_pernet(net); 347 352 struct nf_proto_net *pn = &in->pn; 348 353 349 354 in->timeout = nf_ct_icmpv6_timeout;
+3 -8
net/netfilter/nf_conntrack_proto_sctp.c
··· 146 146 } 147 147 }; 148 148 149 - static inline struct nf_sctp_net *sctp_pernet(struct net *net) 150 - { 151 - return &net->ct.nf_ct_proto.sctp; 152 - } 153 - 154 149 #ifdef CONFIG_NF_CONNTRACK_PROCFS 155 150 /* Print out the private part of the conntrack. */ 156 151 static void sctp_print_conntrack(struct seq_file *s, struct nf_conn *ct) ··· 475 480 476 481 timeouts = nf_ct_timeout_lookup(ct); 477 482 if (!timeouts) 478 - timeouts = sctp_pernet(nf_ct_net(ct))->timeouts; 483 + timeouts = nf_sctp_pernet(nf_ct_net(ct))->timeouts; 479 484 480 485 nf_ct_refresh_acct(ct, ctinfo, skb, timeouts[new_state]); 481 486 ··· 594 599 struct net *net, void *data) 595 600 { 596 601 unsigned int *timeouts = data; 597 - struct nf_sctp_net *sn = sctp_pernet(net); 602 + struct nf_sctp_net *sn = nf_sctp_pernet(net); 598 603 int i; 599 604 600 605 /* set default SCTP timeouts. */ ··· 731 736 732 737 static int sctp_init_net(struct net *net) 733 738 { 734 - struct nf_sctp_net *sn = sctp_pernet(net); 739 + struct nf_sctp_net *sn = nf_sctp_pernet(net); 735 740 struct nf_proto_net *pn = &sn->pn; 736 741 737 742 if (!pn->users) {
+5 -10
net/netfilter/nf_conntrack_proto_tcp.c
··· 272 272 } 273 273 }; 274 274 275 - static inline struct nf_tcp_net *tcp_pernet(struct net *net) 276 - { 277 - return &net->ct.nf_ct_proto.tcp; 278 - } 279 - 280 275 #ifdef CONFIG_NF_CONNTRACK_PROCFS 281 276 /* Print out the private part of the conntrack. */ 282 277 static void tcp_print_conntrack(struct seq_file *s, struct nf_conn *ct) ··· 470 475 const struct tcphdr *tcph) 471 476 { 472 477 struct net *net = nf_ct_net(ct); 473 - struct nf_tcp_net *tn = tcp_pernet(net); 478 + struct nf_tcp_net *tn = nf_tcp_pernet(net); 474 479 struct ip_ct_tcp_state *sender = &state->seen[dir]; 475 480 struct ip_ct_tcp_state *receiver = &state->seen[!dir]; 476 481 const struct nf_conntrack_tuple *tuple = &ct->tuplehash[dir].tuple; ··· 762 767 { 763 768 enum tcp_conntrack new_state; 764 769 struct net *net = nf_ct_net(ct); 765 - const struct nf_tcp_net *tn = tcp_pernet(net); 770 + const struct nf_tcp_net *tn = nf_tcp_pernet(net); 766 771 const struct ip_ct_tcp_state *sender = &ct->proto.tcp.seen[0]; 767 772 const struct ip_ct_tcp_state *receiver = &ct->proto.tcp.seen[1]; 768 773 ··· 836 841 const struct nf_hook_state *state) 837 842 { 838 843 struct net *net = nf_ct_net(ct); 839 - struct nf_tcp_net *tn = tcp_pernet(net); 844 + struct nf_tcp_net *tn = nf_tcp_pernet(net); 840 845 struct nf_conntrack_tuple *tuple; 841 846 enum tcp_conntrack new_state, old_state; 842 847 unsigned int index, *timeouts; ··· 1278 1283 static int tcp_timeout_nlattr_to_obj(struct nlattr *tb[], 1279 1284 struct net *net, void *data) 1280 1285 { 1281 - struct nf_tcp_net *tn = tcp_pernet(net); 1286 + struct nf_tcp_net *tn = nf_tcp_pernet(net); 1282 1287 unsigned int *timeouts = data; 1283 1288 int i; 1284 1289 ··· 1503 1508 1504 1509 static int tcp_init_net(struct net *net) 1505 1510 { 1506 - struct nf_tcp_net *tn = tcp_pernet(net); 1511 + struct nf_tcp_net *tn = nf_tcp_pernet(net); 1507 1512 struct nf_proto_net *pn = &tn->pn; 1508 1513 1509 1514 if (!pn->users) {
+3 -8
net/netfilter/nf_conntrack_proto_udp.c
··· 32 32 [UDP_CT_REPLIED] = 180*HZ, 33 33 }; 34 34 35 - static inline struct nf_udp_net *udp_pernet(struct net *net) 36 - { 37 - return &net->ct.nf_ct_proto.udp; 38 - } 39 - 40 35 static unsigned int *udp_get_timeouts(struct net *net) 41 36 { 42 - return udp_pernet(net)->timeouts; 37 + return nf_udp_pernet(net)->timeouts; 43 38 } 44 39 45 40 static void udp_error_log(const struct sk_buff *skb, ··· 207 212 struct net *net, void *data) 208 213 { 209 214 unsigned int *timeouts = data; 210 - struct nf_udp_net *un = udp_pernet(net); 215 + struct nf_udp_net *un = nf_udp_pernet(net); 211 216 212 217 if (!timeouts) 213 218 timeouts = un->timeouts; ··· 287 292 288 293 static int udp_init_net(struct net *net) 289 294 { 290 - struct nf_udp_net *un = udp_pernet(net); 295 + struct nf_udp_net *un = nf_udp_pernet(net); 291 296 struct nf_proto_net *pn = &un->pn; 292 297 293 298 if (!pn->users) {
+17 -29
net/netfilter/nf_tables_api.c
··· 2457 2457 static void nf_tables_rule_destroy(const struct nft_ctx *ctx, 2458 2458 struct nft_rule *rule) 2459 2459 { 2460 - struct nft_expr *expr; 2460 + struct nft_expr *expr, *next; 2461 2461 2462 2462 /* 2463 2463 * Careful: some expressions might not be initialized in case this
··· 2465 2465 */ 2466 2466 expr = nft_expr_first(rule); 2467 2467 while (expr != nft_expr_last(rule) && expr->ops) { 2468 + next = nft_expr_next(expr); 2468 2469 nf_tables_expr_destroy(ctx, expr); 2469 - expr = nft_expr_next(expr); 2470 + expr = next; 2470 2471 } 2471 2472 kfree(rule); 2472 2473 }
··· 2590 2589 2591 2590 if (chain->use == UINT_MAX) 2592 2591 return -EOVERFLOW; 2593 2592 } 2594 2593 2595 - if (nla[NFTA_RULE_POSITION]) { 2596 - if (!(nlh->nlmsg_flags & NLM_F_CREATE)) 2597 - return -EOPNOTSUPP; 2598 - 2599 - pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION])); 2600 - old_rule = __nft_rule_lookup(chain, pos_handle); 2601 - if (IS_ERR(old_rule)) { 2602 - NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION]); 2603 - return PTR_ERR(old_rule); 2593 + if (nla[NFTA_RULE_POSITION]) { 2594 + pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION])); 2595 + old_rule = __nft_rule_lookup(chain, pos_handle); 2596 + if (IS_ERR(old_rule)) { 2597 + NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION]); 2598 + return PTR_ERR(old_rule); 2599 + } 2604 2600 } 2605 2601 } 2606 2602
··· 2667 2669 } 2668 2670 2669 2671 if (nlh->nlmsg_flags & NLM_F_REPLACE) { 2670 - if (!nft_is_active_next(net, old_rule)) { 2671 - err = -ENOENT; 2672 - goto err2; 2673 - } 2674 - trans = nft_trans_rule_add(&ctx, NFT_MSG_DELRULE, 2675 - old_rule); 2672 + trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule); 2676 2673 if (trans == NULL) { 2677 2674 err = -ENOMEM; 2678 2675 goto err2; 2679 2676 } 2680 - nft_deactivate_next(net, old_rule); 2681 - chain->use--; 2682 - 2683 - if (nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule) == NULL) { 2684 - err = -ENOMEM; 2677 + err = nft_delrule(&ctx, old_rule); 2678 + if (err < 0) { 2679 + nft_trans_destroy(trans); 2685 2680 goto err2; 2686 2681 }
··· 6315 6324 call_rcu(&old->h, __nf_tables_commit_chain_free_rules_old); 6316 6325 } 6317 6326 6318 - static void nf_tables_commit_chain_active(struct net *net, struct nft_chain *chain) 6327 + static void nf_tables_commit_chain(struct net *net, struct nft_chain *chain) 6319 6328 { 6320 6329 struct nft_rule **g0, **g1; 6321 6330 bool next_genbit;
··· 6432 6441 6433 6442 /* step 2. Make rules_gen_X visible to packet path */ 6434 6443 list_for_each_entry(table, &net->nft.tables, list) { 6435 - list_for_each_entry(chain, &table->chains, list) { 6436 - if (!nft_is_active_next(net, chain)) 6437 - continue; 6438 - nf_tables_commit_chain_active(net, chain); 6439 - } 6444 + list_for_each_entry(chain, &table->chains, list) 6445 + nf_tables_commit_chain(net, chain); 6440 6446 } 6441 6447 6442 6448 /*
+14 -10
net/netfilter/nft_compat.c
··· 54 54 return false; 55 55 } 56 56 57 - static int nft_compat_chain_validate_dependency(const char *tablename, 58 - const struct nft_chain *chain) 57 + static int nft_compat_chain_validate_dependency(const struct nft_ctx *ctx, 58 + const char *tablename) 59 59 { 60 + enum nft_chain_types type = NFT_CHAIN_T_DEFAULT; 61 + const struct nft_chain *chain = ctx->chain; 60 62 const struct nft_base_chain *basechain; 61 63 62 64 if (!tablename || ··· 66 64 return 0; 67 65 68 66 basechain = nft_base_chain(chain); 69 - if (strcmp(tablename, "nat") == 0 && 70 - basechain->type->type != NFT_CHAIN_T_NAT) 71 - return -EINVAL; 67 + if (strcmp(tablename, "nat") == 0) { 68 + if (ctx->family != NFPROTO_BRIDGE) 69 + type = NFT_CHAIN_T_NAT; 70 + if (basechain->type->type != type) 71 + return -EINVAL; 72 + } 72 73 73 74 return 0; 74 75 } ··· 347 342 if (target->hooks && !(hook_mask & target->hooks)) 348 343 return -EINVAL; 349 344 350 - ret = nft_compat_chain_validate_dependency(target->table, 351 - ctx->chain); 345 + ret = nft_compat_chain_validate_dependency(ctx, target->table); 352 346 if (ret < 0) 353 347 return ret; 354 348 } ··· 520 516 void *info) 521 517 { 522 518 struct xt_match *match = expr->ops->data; 519 + struct module *me = match->me; 523 520 struct xt_mtdtor_param par; 524 521 525 522 par.net = ctx->net; ··· 531 526 par.match->destroy(&par); 532 527 533 528 if (nft_xt_put(container_of(expr->ops, struct nft_xt, ops))) 534 - module_put(match->me); 529 + module_put(me); 535 530 } 536 531 537 532 static void ··· 595 590 if (match->hooks && !(hook_mask & match->hooks)) 596 591 return -EINVAL; 597 592 598 - ret = nft_compat_chain_validate_dependency(match->table, 599 - ctx->chain); 593 + ret = nft_compat_chain_validate_dependency(ctx, match->table); 600 594 if (ret < 0) 601 595 return ret; 602 596 }
+4 -1
net/netfilter/nft_flow_offload.c
··· 214 214 { 215 215 int err; 216 216 217 - register_netdevice_notifier(&flow_offload_netdev_notifier); 217 + err = register_netdevice_notifier(&flow_offload_netdev_notifier); 218 + if (err) 219 + goto err; 218 220 219 221 err = nft_register_expr(&nft_flow_offload_type); 220 222 if (err < 0) ··· 226 224 227 225 register_expr: 228 226 unregister_netdevice_notifier(&flow_offload_netdev_notifier); 227 + err: 229 228 return err; 230 229 } 231 230
-127
net/netfilter/nft_numgen.c
··· 24 24 u32 modulus; 25 25 atomic_t counter; 26 26 u32 offset; 27 - struct nft_set *map; 28 27 }; 29 28 30 29 static u32 nft_ng_inc_gen(struct nft_ng_inc *priv)
··· 47 48 regs->data[priv->dreg] = nft_ng_inc_gen(priv); 48 49 } 49 50 50 - static void nft_ng_inc_map_eval(const struct nft_expr *expr, 51 - struct nft_regs *regs, 52 - const struct nft_pktinfo *pkt) 53 - { 54 - struct nft_ng_inc *priv = nft_expr_priv(expr); 55 - const struct nft_set *map = priv->map; 56 - const struct nft_set_ext *ext; 57 - u32 result; 58 - bool found; 59 - 60 - result = nft_ng_inc_gen(priv); 61 - found = map->ops->lookup(nft_net(pkt), map, &result, &ext); 62 - 63 - if (!found) 64 - return; 65 - 66 - nft_data_copy(&regs->data[priv->dreg], 67 - nft_set_ext_data(ext), map->dlen); 68 - } 69 - 70 51 static const struct nla_policy nft_ng_policy[NFTA_NG_MAX + 1] = { 71 52 [NFTA_NG_DREG] = { .type = NLA_U32 }, 72 53 [NFTA_NG_MODULUS] = { .type = NLA_U32 }, 73 54 [NFTA_NG_TYPE] = { .type = NLA_U32 }, 74 55 [NFTA_NG_OFFSET] = { .type = NLA_U32 }, 75 - [NFTA_NG_SET_NAME] = { .type = NLA_STRING, 76 - .len = NFT_SET_MAXNAMELEN - 1 }, 77 - [NFTA_NG_SET_ID] = { .type = NLA_U32 }, 78 56 }; 79 57 80 58 static int nft_ng_inc_init(const struct nft_ctx *ctx,
··· 75 99 76 100 return nft_validate_register_store(ctx, priv->dreg, NULL, 77 101 NFT_DATA_VALUE, sizeof(u32)); 78 - } 79 - 80 - static int nft_ng_inc_map_init(const struct nft_ctx *ctx, 81 - const struct nft_expr *expr, 82 - const struct nlattr * const tb[]) 83 - { 84 - struct nft_ng_inc *priv = nft_expr_priv(expr); 85 - u8 genmask = nft_genmask_next(ctx->net); 86 - 87 - nft_ng_inc_init(ctx, expr, tb); 88 - 89 - priv->map = nft_set_lookup_global(ctx->net, ctx->table, 90 - tb[NFTA_NG_SET_NAME], 91 - tb[NFTA_NG_SET_ID], genmask); 92 - 93 - return PTR_ERR_OR_ZERO(priv->map); 94 102 } 95 103 96 104 static int nft_ng_dump(struct sk_buff *skb, enum nft_registers dreg,
··· 103 143 priv->offset); 104 144 } 105 145 106 - static int nft_ng_inc_map_dump(struct sk_buff *skb, 107 - const struct nft_expr *expr) 108 - { 109 - const struct nft_ng_inc *priv = nft_expr_priv(expr); 110 - 111 - if (nft_ng_dump(skb, priv->dreg, priv->modulus, 112 - NFT_NG_INCREMENTAL, priv->offset) || 113 - nla_put_string(skb, NFTA_NG_SET_NAME, priv->map->name)) 114 - goto nla_put_failure; 115 - 116 - return 0; 117 - 118 - nla_put_failure: 119 - return -1; 120 - } 121 - 122 146 struct nft_ng_random { 123 147 enum nft_registers dreg:8; 124 148 u32 modulus; 125 149 u32 offset; 126 - struct nft_set *map; 127 150 }; 128 151 129 152 static u32 nft_ng_random_gen(struct nft_ng_random *priv)
··· 124 181 struct nft_ng_random *priv = nft_expr_priv(expr); 125 182 126 183 regs->data[priv->dreg] = nft_ng_random_gen(priv); 127 - } 128 - 129 - static void nft_ng_random_map_eval(const struct nft_expr *expr, 130 - struct nft_regs *regs, 131 - const struct nft_pktinfo *pkt) 132 - { 133 - struct nft_ng_random *priv = nft_expr_priv(expr); 134 - const struct nft_set *map = priv->map; 135 - const struct nft_set_ext *ext; 136 - u32 result; 137 - bool found; 138 - 139 - result = nft_ng_random_gen(priv); 140 - found = map->ops->lookup(nft_net(pkt), map, &result, &ext); 141 - if (!found) 142 - return; 143 - 144 - nft_data_copy(&regs->data[priv->dreg], 145 - nft_set_ext_data(ext), map->dlen); 146 184 } 147 185 148 186 static int nft_ng_random_init(const struct nft_ctx *ctx,
··· 150 226 NFT_DATA_VALUE, sizeof(u32)); 151 227 } 152 228 153 - static int nft_ng_random_map_init(const struct nft_ctx *ctx, 154 - const struct nft_expr *expr, 155 - const struct nlattr * const tb[]) 156 - { 157 - struct nft_ng_random *priv = nft_expr_priv(expr); 158 - u8 genmask = nft_genmask_next(ctx->net); 159 - 160 - nft_ng_random_init(ctx, expr, tb); 161 - priv->map = nft_set_lookup_global(ctx->net, ctx->table, 162 - tb[NFTA_NG_SET_NAME], 163 - tb[NFTA_NG_SET_ID], genmask); 164 - 165 - return PTR_ERR_OR_ZERO(priv->map); 166 - } 167 - 168 229 static int nft_ng_random_dump(struct sk_buff *skb, const struct nft_expr *expr) 169 230 { 170 231 const struct nft_ng_random *priv = nft_expr_priv(expr); 171 232 172 233 return nft_ng_dump(skb, priv->dreg, priv->modulus, NFT_NG_RANDOM, 173 234 priv->offset); 174 - } 175 - 176 - static int nft_ng_random_map_dump(struct sk_buff *skb, 177 - const struct nft_expr *expr) 178 - { 179 - const struct nft_ng_random *priv = nft_expr_priv(expr); 180 - 181 - if (nft_ng_dump(skb, priv->dreg, priv->modulus, 182 - NFT_NG_RANDOM, priv->offset) || 183 - nla_put_string(skb, NFTA_NG_SET_NAME, priv->map->name)) 184 - goto nla_put_failure; 185 - 186 - return 0; 187 - 188 - nla_put_failure: 189 - return -1; 190 235 } 191 236 192 237 static struct nft_expr_type nft_ng_type;
··· 167 274 .dump = nft_ng_inc_dump, 168 275 }; 169 276 170 - static const struct nft_expr_ops nft_ng_inc_map_ops = { 171 - .type = &nft_ng_type, 172 - .size = NFT_EXPR_SIZE(sizeof(struct nft_ng_inc)), 173 - .eval = nft_ng_inc_map_eval, 174 - .init = nft_ng_inc_map_init, 175 - .dump = nft_ng_inc_map_dump, 176 - }; 177 - 178 277 static const struct nft_expr_ops nft_ng_random_ops = { 179 278 .type = &nft_ng_type, 180 279 .size = NFT_EXPR_SIZE(sizeof(struct nft_ng_random)), 181 280 .eval = nft_ng_random_eval, 182 281 .init = nft_ng_random_init, 183 282 .dump = nft_ng_random_dump, 184 - }; 185 - 186 - static const struct nft_expr_ops nft_ng_random_map_ops = { 187 - .type = &nft_ng_type, 188 - .size = NFT_EXPR_SIZE(sizeof(struct nft_ng_random)), 189 - .eval = nft_ng_random_map_eval, 190 - .init = nft_ng_random_map_init, 191 - .dump = nft_ng_random_map_dump, 192 283 }; 193 284 194 285 static const struct nft_expr_ops *
··· 189 312 190 313 switch (type) { 191 314 case NFT_NG_INCREMENTAL: 192 - if (tb[NFTA_NG_SET_NAME]) 193 - return &nft_ng_inc_map_ops; 194 315 return &nft_ng_inc_ops; 195 316 case NFT_NG_RANDOM: 196 - if (tb[NFTA_NG_SET_NAME]) 197 - return &nft_ng_random_map_ops; 198 317 return &nft_ng_random_ops; 199 318 } 200 319
+1 -1
net/netfilter/nft_osf.c
··· 50 50 int err; 51 51 u8 ttl; 52 52 53 - if (nla_get_u8(tb[NFTA_OSF_TTL])) { 53 + if (tb[NFTA_OSF_TTL]) { 54 54 ttl = nla_get_u8(tb[NFTA_OSF_TTL]); 55 55 if (ttl > 2) 56 56 return -EINVAL;
+20
net/netfilter/xt_IDLETIMER.c
··· 114 114 schedule_work(&timer->work); 115 115 } 116 116 117 + static int idletimer_check_sysfs_name(const char *name, unsigned int size) 118 + { 119 + int ret; 120 + 121 + ret = xt_check_proc_name(name, size); 122 + if (ret < 0) 123 + return ret; 124 + 125 + if (!strcmp(name, "power") || 126 + !strcmp(name, "subsystem") || 127 + !strcmp(name, "uevent")) 128 + return -EINVAL; 129 + 130 + return 0; 131 + } 132 + 117 133 static int idletimer_tg_create(struct idletimer_tg_info *info) 118 134 { 119 135 int ret; ··· 139 123 ret = -ENOMEM; 140 124 goto out; 141 125 } 126 + 127 + ret = idletimer_check_sysfs_name(info->label, sizeof(info->label)); 128 + if (ret < 0) 129 + goto out_free_timer; 142 130 143 131 sysfs_attr_init(&info->timer->attr.attr); 144 132 info->timer->attr.attr.name = kstrdup(info->label, GFP_KERNEL);
-10
net/netfilter/xt_RATEEST.c
··· 201 201 return 0; 202 202 } 203 203 204 - static void __net_exit xt_rateest_net_exit(struct net *net) 205 - { 206 - struct xt_rateest_net *xn = net_generic(net, xt_rateest_id); 207 - int i; 208 - 209 - for (i = 0; i < ARRAY_SIZE(xn->hash); i++) 210 - WARN_ON_ONCE(!hlist_empty(&xn->hash[i])); 211 - } 212 - 213 204 static struct pernet_operations xt_rateest_net_ops = { 214 205 .init = xt_rateest_net_init, 215 - .exit = xt_rateest_net_exit, 216 206 .id = &xt_rateest_id, 217 207 .size = sizeof(struct xt_rateest_net), 218 208 };
+3 -6
net/netfilter/xt_hashlimit.c
··· 295 295 296 296 /* copy match config into hashtable config */ 297 297 ret = cfg_copy(&hinfo->cfg, (void *)cfg, 3); 298 - 299 - if (ret) 298 + if (ret) { 299 + vfree(hinfo); 300 300 return ret; 301 + } 301 302 302 303 hinfo->cfg.size = size; 303 304 if (hinfo->cfg.max == 0) ··· 815 814 int ret; 816 815 817 816 ret = cfg_copy(&cfg, (void *)&info->cfg, 1); 818 - 819 817 if (ret) 820 818 return ret; 821 819 ··· 830 830 int ret; 831 831 832 832 ret = cfg_copy(&cfg, (void *)&info->cfg, 2); 833 - 834 833 if (ret) 835 834 return ret; 836 835 ··· 920 921 return ret; 921 922 922 923 ret = cfg_copy(&cfg, (void *)&info->cfg, 1); 923 - 924 924 if (ret) 925 925 return ret; 926 926 ··· 938 940 return ret; 939 941 940 942 ret = cfg_copy(&cfg, (void *)&info->cfg, 2); 941 - 942 943 if (ret) 943 944 return ret; 944 945
+2 -1
net/openvswitch/conntrack.c
··· 1203 1203 &info->labels.mask); 1204 1204 if (err) 1205 1205 return err; 1206 - } else if (labels_nonzero(&info->labels.mask)) { 1206 + } else if (IS_ENABLED(CONFIG_NF_CONNTRACK_LABELS) && 1207 + labels_nonzero(&info->labels.mask)) { 1207 1208 err = ovs_ct_set_labels(ct, key, &info->labels.value, 1208 1209 &info->labels.mask); 1209 1210 if (err)
+2 -2
net/packet/af_packet.c
··· 2394 2394 void *ph; 2395 2395 __u32 ts; 2396 2396 2397 - ph = skb_shinfo(skb)->destructor_arg; 2397 + ph = skb_zcopy_get_nouarg(skb); 2398 2398 packet_dec_pending(&po->tx_ring); 2399 2399 2400 2400 ts = __packet_set_timestamp(po, ph, skb); ··· 2461 2461 skb->mark = po->sk.sk_mark; 2462 2462 skb->tstamp = sockc->transmit_time; 2463 2463 sock_tx_timestamp(&po->sk, sockc->tsflags, &skb_shinfo(skb)->tx_flags); 2464 - skb_shinfo(skb)->destructor_arg = ph.raw; 2464 + skb_zcopy_set_nouarg(skb, ph.raw); 2465 2465 2466 2466 skb_reserve(skb, hlen); 2467 2467 skb_reset_network_header(skb);
+23 -4
net/rxrpc/af_rxrpc.c
··· 375 375 * getting ACKs from the server. Returns a number representing the life state 376 376 * which can be compared to that returned by a previous call. 377 377 * 378 - * If this is a client call, ping ACKs will be sent to the server to find out 379 - * whether it's still responsive and whether the call is still alive on the 380 - * server. 378 + * If the life state stalls, rxrpc_kernel_probe_life() should be called and 379 + * then 2*RTT waited. 381 380 */ 382 - u32 rxrpc_kernel_check_life(struct socket *sock, struct rxrpc_call *call) 381 + u32 rxrpc_kernel_check_life(const struct socket *sock, 382 + const struct rxrpc_call *call) 383 383 { 384 384 return call->acks_latest; 385 385 } 386 386 EXPORT_SYMBOL(rxrpc_kernel_check_life); 387 + 388 + /** 389 + * rxrpc_kernel_probe_life - Poke the peer to see if it's still alive 390 + * @sock: The socket the call is on 391 + * @call: The call to check 392 + * 393 + * In conjunction with rxrpc_kernel_check_life(), allow a kernel service to 394 + * find out whether a call is still alive by pinging it. This should cause the 395 + * life state to be bumped in about 2*RTT. 396 + * 397 + * This must be called in TASK_RUNNING state on pain of might_sleep() objecting. 398 + */ 399 + void rxrpc_kernel_probe_life(struct socket *sock, struct rxrpc_call *call) 400 + { 401 + rxrpc_propose_ACK(call, RXRPC_ACK_PING, 0, 0, true, false, 402 + rxrpc_propose_ack_ping_for_check_life); 403 + rxrpc_send_ack_packet(call, true, NULL); 404 + } 405 + EXPORT_SYMBOL(rxrpc_kernel_probe_life); 387 406 388 407 /** 389 408 * rxrpc_kernel_get_epoch - Retrieve the epoch value from a call.
+1
net/rxrpc/ar-internal.h
··· 611 611 * not hard-ACK'd packet follows this. 612 612 */ 613 613 rxrpc_seq_t tx_top; /* Highest Tx slot allocated. */ 614 + u16 tx_backoff; /* Delay to insert due to Tx failure */ 614 615 615 616 /* TCP-style slow-start congestion control [RFC5681]. Since the SMSS 616 617 * is fixed, we keep these numbers in terms of segments (ie. DATA
+14 -4
net/rxrpc/call_event.c
··· 123 123 else 124 124 ack_at = expiry; 125 125 126 + ack_at += READ_ONCE(call->tx_backoff); 126 127 ack_at += now; 127 128 if (time_before(ack_at, call->ack_at)) { 128 129 WRITE_ONCE(call->ack_at, ack_at); ··· 312 311 container_of(work, struct rxrpc_call, processor); 313 312 rxrpc_serial_t *send_ack; 314 313 unsigned long now, next, t; 314 + unsigned int iterations = 0; 315 315 316 316 rxrpc_see_call(call); 317 317 ··· 321 319 call->debug_id, rxrpc_call_states[call->state], call->events); 322 320 323 321 recheck_state: 322 + /* Limit the number of times we do this before returning to the manager */ 323 + iterations++; 324 + if (iterations > 5) 325 + goto requeue; 326 + 324 327 if (test_and_clear_bit(RXRPC_CALL_EV_ABORT, &call->events)) { 325 328 rxrpc_send_abort_packet(call); 326 329 goto recheck_state; ··· 454 447 rxrpc_reduce_call_timer(call, next, now, rxrpc_timer_restart); 455 448 456 449 /* other events may have been raised since we started checking */ 457 - if (call->events && call->state < RXRPC_CALL_COMPLETE) { 458 - __rxrpc_queue_call(call); 459 - goto out; 460 - } 450 + if (call->events && call->state < RXRPC_CALL_COMPLETE) 451 + goto requeue; 461 452 462 453 out_put: 463 454 rxrpc_put_call(call, rxrpc_call_put); 464 455 out: 465 456 _leave(""); 457 + return; 458 + 459 + requeue: 460 + __rxrpc_queue_call(call); 461 + goto out; 466 462 }
+31 -4
net/rxrpc/output.c
··· 35 35 static const char rxrpc_keepalive_string[] = ""; 36 36 37 37 /* 38 + * Increase Tx backoff on transmission failure and clear it on success. 39 + */ 40 + static void rxrpc_tx_backoff(struct rxrpc_call *call, int ret) 41 + { 42 + if (ret < 0) { 43 + u16 tx_backoff = READ_ONCE(call->tx_backoff); 44 + 45 + if (tx_backoff < HZ) 46 + WRITE_ONCE(call->tx_backoff, tx_backoff + 1); 47 + } else { 48 + WRITE_ONCE(call->tx_backoff, 0); 49 + } 50 + } 51 + 52 + /* 38 53 * Arrange for a keepalive ping a certain time after we last transmitted. This 39 54 * lets the far side know we're still interested in this call and helps keep 40 55 * the route through any intervening firewall open. ··· 225 210 else 226 211 trace_rxrpc_tx_packet(call->debug_id, &pkt->whdr, 227 212 rxrpc_tx_point_call_ack); 213 + rxrpc_tx_backoff(call, ret); 228 214 229 215 if (call->state < RXRPC_CALL_COMPLETE) { 230 216 if (ret < 0) { ··· 234 218 rxrpc_propose_ACK(call, pkt->ack.reason, 235 219 ntohs(pkt->ack.maxSkew), 236 220 ntohl(pkt->ack.serial), 237 - true, true, 221 + false, true, 238 222 rxrpc_propose_ack_retry_tx); 239 223 } else { 240 224 spin_lock_bh(&call->lock); ··· 316 300 else 317 301 trace_rxrpc_tx_packet(call->debug_id, &pkt.whdr, 318 302 rxrpc_tx_point_call_abort); 319 - 303 + rxrpc_tx_backoff(call, ret); 320 304 321 305 rxrpc_put_connection(conn); 322 306 return ret; ··· 429 413 else 430 414 trace_rxrpc_tx_packet(call->debug_id, &whdr, 431 415 rxrpc_tx_point_call_data_nofrag); 416 + rxrpc_tx_backoff(call, ret); 432 417 if (ret == -EMSGSIZE) 433 418 goto send_fragmentable; 434 419 ··· 462 445 rxrpc_reduce_call_timer(call, expect_rx_by, nowj, 463 446 rxrpc_timer_set_for_normal); 464 447 } 465 - } 466 448 467 - rxrpc_set_keepalive(call); 449 + rxrpc_set_keepalive(call); 450 + } else { 451 + /* Cancel the call if the initial transmission fails, 452 + * particularly if that's due to network routing issues that 453 + * aren't going away anytime soon. 
The layer above can arrange 454 + * the retransmission. 455 + */ 456 + if (!test_and_set_bit(RXRPC_CALL_BEGAN_RX_TIMER, &call->flags)) 457 + rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, 458 + RX_USER_ABORT, ret); 459 + } 468 460 469 461 _leave(" = %d [%u]", ret, call->peer->maxdata); 470 462 return ret; ··· 532 506 else 533 507 trace_rxrpc_tx_packet(call->debug_id, &whdr, 534 508 rxrpc_tx_point_call_data_frag); 509 + rxrpc_tx_backoff(call, ret); 535 510 536 511 up_write(&conn->params.local->defrag_sem); 537 512 goto done;
+2 -1
net/sched/act_mirred.c
··· 258 258 if (is_redirect) { 259 259 skb2->tc_redirected = 1; 260 260 skb2->tc_from_ingress = skb2->tc_at_ingress; 261 - 261 + if (skb2->tc_from_ingress) 262 + skb2->tstamp = 0; 262 263 /* let the caller reinsert the packet, if possible */ 263 264 if (use_reinsert) { 264 265 res->ingress = want_ingress;
+2 -1
net/sched/act_pedit.c
··· 201 201 goto out_release; 202 202 } 203 203 } else { 204 - return err; 204 + ret = err; 205 + goto out_free; 205 206 } 206 207 207 208 p = to_pedit(*a);
+22 -14
net/sched/act_police.c
··· 27 27 u32 tcfp_ewma_rate; 28 28 s64 tcfp_burst; 29 29 u32 tcfp_mtu; 30 - s64 tcfp_toks; 31 - s64 tcfp_ptoks; 32 30 s64 tcfp_mtu_ptoks; 33 - s64 tcfp_t_c; 34 31 struct psched_ratecfg rate; 35 32 bool rate_present; 36 33 struct psched_ratecfg peak; ··· 38 41 struct tcf_police { 39 42 struct tc_action common; 40 43 struct tcf_police_params __rcu *params; 44 + 45 + spinlock_t tcfp_lock ____cacheline_aligned_in_smp; 46 + s64 tcfp_toks; 47 + s64 tcfp_ptoks; 48 + s64 tcfp_t_c; 41 49 }; 42 50 43 51 #define to_police(pc) ((struct tcf_police *)pc) ··· 124 122 return ret; 125 123 } 126 124 ret = ACT_P_CREATED; 125 + spin_lock_init(&(to_police(*a)->tcfp_lock)); 127 126 } else if (!ovr) { 128 127 tcf_idr_release(*a, bind); 129 128 return -EEXIST; ··· 189 186 } 190 187 191 188 new->tcfp_burst = PSCHED_TICKS2NS(parm->burst); 192 - new->tcfp_toks = new->tcfp_burst; 193 - if (new->peak_present) { 189 + if (new->peak_present) 194 190 new->tcfp_mtu_ptoks = (s64)psched_l2t_ns(&new->peak, 195 191 new->tcfp_mtu); 196 - new->tcfp_ptoks = new->tcfp_mtu_ptoks; 197 - } 198 192 199 193 if (tb[TCA_POLICE_AVRATE]) 200 194 new->tcfp_ewma_rate = nla_get_u32(tb[TCA_POLICE_AVRATE]); ··· 207 207 } 208 208 209 209 spin_lock_bh(&police->tcf_lock); 210 - new->tcfp_t_c = ktime_get_ns(); 210 + spin_lock_bh(&police->tcfp_lock); 211 + police->tcfp_t_c = ktime_get_ns(); 212 + police->tcfp_toks = new->tcfp_burst; 213 + if (new->peak_present) 214 + police->tcfp_ptoks = new->tcfp_mtu_ptoks; 215 + spin_unlock_bh(&police->tcfp_lock); 211 216 police->tcf_action = parm->action; 212 217 rcu_swap_protected(police->params, 213 218 new, ··· 262 257 } 263 258 264 259 now = ktime_get_ns(); 265 - toks = min_t(s64, now - p->tcfp_t_c, p->tcfp_burst); 260 + spin_lock_bh(&police->tcfp_lock); 261 + toks = min_t(s64, now - police->tcfp_t_c, p->tcfp_burst); 266 262 if (p->peak_present) { 267 - ptoks = toks + p->tcfp_ptoks; 263 + ptoks = toks + police->tcfp_ptoks; 268 264 if (ptoks > p->tcfp_mtu_ptoks) 269 265 ptoks = 
p->tcfp_mtu_ptoks; 270 266 ptoks -= (s64)psched_l2t_ns(&p->peak, 271 267 qdisc_pkt_len(skb)); 272 268 } 273 - toks += p->tcfp_toks; 269 + toks += police->tcfp_toks; 274 270 if (toks > p->tcfp_burst) 275 271 toks = p->tcfp_burst; 276 272 toks -= (s64)psched_l2t_ns(&p->rate, qdisc_pkt_len(skb)); 277 273 if ((toks|ptoks) >= 0) { 278 - p->tcfp_t_c = now; 279 - p->tcfp_toks = toks; 280 - p->tcfp_ptoks = ptoks; 274 + police->tcfp_t_c = now; 275 + police->tcfp_toks = toks; 276 + police->tcfp_ptoks = ptoks; 277 + spin_unlock_bh(&police->tcfp_lock); 281 278 ret = p->tcfp_result; 282 279 goto inc_drops; 283 280 } 281 + spin_unlock_bh(&police->tcfp_lock); 284 282 } 285 283 286 284 inc_overlimits:
+13 -1
net/sched/cls_flower.c
··· 709 709 struct netlink_ext_ack *extack) 710 710 { 711 711 const struct nlattr *nla_enc_key, *nla_opt_key, *nla_opt_msk = NULL; 712 - int option_len, key_depth, msk_depth = 0; 712 + int err, option_len, key_depth, msk_depth = 0; 713 + 714 + err = nla_validate_nested(tb[TCA_FLOWER_KEY_ENC_OPTS], 715 + TCA_FLOWER_KEY_ENC_OPTS_MAX, 716 + enc_opts_policy, extack); 717 + if (err) 718 + return err; 713 719 714 720 nla_enc_key = nla_data(tb[TCA_FLOWER_KEY_ENC_OPTS]); 715 721 716 722 if (tb[TCA_FLOWER_KEY_ENC_OPTS_MASK]) { 723 + err = nla_validate_nested(tb[TCA_FLOWER_KEY_ENC_OPTS_MASK], 724 + TCA_FLOWER_KEY_ENC_OPTS_MAX, 725 + enc_opts_policy, extack); 726 + if (err) 727 + return err; 728 + 717 729 nla_opt_msk = nla_data(tb[TCA_FLOWER_KEY_ENC_OPTS_MASK]); 718 730 msk_depth = nla_len(tb[TCA_FLOWER_KEY_ENC_OPTS_MASK]); 719 731 }
+18 -11
net/sched/sch_fq.c
··· 469 469 goto begin; 470 470 } 471 471 prefetch(&skb->end); 472 - f->credit -= qdisc_pkt_len(skb); 472 + plen = qdisc_pkt_len(skb); 473 + f->credit -= plen; 473 474 474 - if (ktime_to_ns(skb->tstamp) || !q->rate_enable) 475 + if (!q->rate_enable) 475 476 goto out; 476 477 477 478 rate = q->flow_max_rate; 478 - if (skb->sk) 479 - rate = min(skb->sk->sk_pacing_rate, rate); 480 479 481 - if (rate <= q->low_rate_threshold) { 482 - f->credit = 0; 483 - plen = qdisc_pkt_len(skb); 484 - } else { 485 - plen = max(qdisc_pkt_len(skb), q->quantum); 486 - if (f->credit > 0) 487 - goto out; 480 + /* If EDT time was provided for this skb, we need to 481 + * update f->time_next_packet only if this qdisc enforces 482 + * a flow max rate. 483 + */ 484 + if (!skb->tstamp) { 485 + if (skb->sk) 486 + rate = min(skb->sk->sk_pacing_rate, rate); 487 + 488 + if (rate <= q->low_rate_threshold) { 489 + f->credit = 0; 490 + } else { 491 + plen = max(plen, q->quantum); 492 + if (f->credit > 0) 493 + goto out; 494 + } 488 495 } 489 496 if (rate != ~0UL) { 490 497 u64 len = (u64)plen * NSEC_PER_SEC;
-9
net/sched/sch_netem.c
··· 648 648 */ 649 649 skb->dev = qdisc_dev(sch); 650 650 651 - #ifdef CONFIG_NET_CLS_ACT 652 - /* 653 - * If it's at ingress let's pretend the delay is 654 - * from the network (tstamp will be updated). 655 - */ 656 - if (skb->tc_redirected && skb->tc_from_ingress) 657 - skb->tstamp = 0; 658 - #endif 659 - 660 651 if (q->slot.slot_next) { 661 652 q->slot.packets_left--; 662 653 q->slot.bytes_left -= qdisc_pkt_len(skb);
+5 -20
net/sctp/output.c
··· 118 118 sctp_transport_route(tp, NULL, sp); 119 119 if (asoc->param_flags & SPP_PMTUD_ENABLE) 120 120 sctp_assoc_sync_pmtu(asoc); 121 + } else if (!sctp_transport_pmtu_check(tp)) { 122 + if (asoc->param_flags & SPP_PMTUD_ENABLE) 123 + sctp_assoc_sync_pmtu(asoc); 121 124 } 122 125 123 126 if (asoc->pmtu_pending) { ··· 399 396 return retval; 400 397 } 401 398 402 - static void sctp_packet_release_owner(struct sk_buff *skb) 403 - { 404 - sk_free(skb->sk); 405 - } 406 - 407 - static void sctp_packet_set_owner_w(struct sk_buff *skb, struct sock *sk) 408 - { 409 - skb_orphan(skb); 410 - skb->sk = sk; 411 - skb->destructor = sctp_packet_release_owner; 412 - 413 - /* 414 - * The data chunks have already been accounted for in sctp_sendmsg(), 415 - * therefore only reserve a single byte to keep socket around until 416 - * the packet has been transmitted. 417 - */ 418 - refcount_inc(&sk->sk_wmem_alloc); 419 - } 420 - 421 399 static void sctp_packet_gso_append(struct sk_buff *head, struct sk_buff *skb) 422 400 { 423 401 if (SCTP_OUTPUT_CB(head)->last == head) ··· 410 426 head->truesize += skb->truesize; 411 427 head->data_len += skb->len; 412 428 head->len += skb->len; 429 + refcount_add(skb->truesize, &head->sk->sk_wmem_alloc); 413 430 414 431 __skb_header_release(skb); 415 432 } ··· 586 601 if (!head) 587 602 goto out; 588 603 skb_reserve(head, packet->overhead + MAX_HEADER); 589 - sctp_packet_set_owner_w(head, sk); 604 + skb_set_owner_w(head, sk); 590 605 591 606 /* set sctp header */ 592 607 sh = skb_push(head, sizeof(struct sctphdr));
+1 -1
net/sctp/outqueue.c
··· 212 212 INIT_LIST_HEAD(&q->retransmit); 213 213 INIT_LIST_HEAD(&q->sacked); 214 214 INIT_LIST_HEAD(&q->abandoned); 215 - sctp_sched_set_sched(asoc, SCTP_SS_FCFS); 215 + sctp_sched_set_sched(asoc, SCTP_SS_DEFAULT); 216 216 } 217 217 218 218 /* Free the outqueue structure and any related pending chunks.
+5 -21
net/sctp/socket.c
··· 3940 3940 unsigned int optlen) 3941 3941 { 3942 3942 struct sctp_assoc_value params; 3943 - struct sctp_association *asoc; 3944 - int retval = -EINVAL; 3945 3943 3946 3944 if (optlen != sizeof(params)) 3947 - goto out; 3945 + return -EINVAL; 3948 3946 3949 - if (copy_from_user(&params, optval, optlen)) { 3950 - retval = -EFAULT; 3951 - goto out; 3952 - } 3947 + if (copy_from_user(&params, optval, optlen)) 3948 + return -EFAULT; 3953 3949 3954 - asoc = sctp_id2assoc(sk, params.assoc_id); 3955 - if (asoc) { 3956 - asoc->prsctp_enable = !!params.assoc_value; 3957 - } else if (!params.assoc_id) { 3958 - struct sctp_sock *sp = sctp_sk(sk); 3950 + sctp_sk(sk)->ep->prsctp_enable = !!params.assoc_value; 3959 3951 3960 - sp->ep->prsctp_enable = !!params.assoc_value; 3961 - } else { 3962 - goto out; 3963 - } 3964 - 3965 - retval = 0; 3966 - 3967 - out: 3968 - return retval; 3952 + return 0; 3969 3953 } 3970 3954 3971 3955 static int sctp_setsockopt_default_prinfo(struct sock *sk,
-1
net/sctp/stream.c
··· 535 535 goto out; 536 536 } 537 537 538 - stream->incnt = incnt; 539 538 stream->outcnt = outcnt; 540 539 541 540 asoc->strreset_outstanding = !!out + !!in;
+7 -4
net/smc/af_smc.c
··· 127 127 smc = smc_sk(sk); 128 128 129 129 /* cleanup for a dangling non-blocking connect */ 130 + if (smc->connect_info && sk->sk_state == SMC_INIT) 131 + tcp_abort(smc->clcsock->sk, ECONNABORTED); 130 132 flush_work(&smc->connect_work); 131 133 kfree(smc->connect_info); 132 134 smc->connect_info = NULL; ··· 549 547 550 548 mutex_lock(&smc_create_lgr_pending); 551 549 local_contact = smc_conn_create(smc, false, aclc->hdr.flag, ibdev, 552 - ibport, &aclc->lcl, NULL, 0); 550 + ibport, ntoh24(aclc->qpn), &aclc->lcl, 551 + NULL, 0); 553 552 if (local_contact < 0) { 554 553 if (local_contact == -ENOMEM) 555 554 reason_code = SMC_CLC_DECL_MEM;/* insufficient memory*/ ··· 621 618 int rc = 0; 622 619 623 620 mutex_lock(&smc_create_lgr_pending); 624 - local_contact = smc_conn_create(smc, true, aclc->hdr.flag, NULL, 0, 621 + local_contact = smc_conn_create(smc, true, aclc->hdr.flag, NULL, 0, 0, 625 622 NULL, ismdev, aclc->gid); 626 623 if (local_contact < 0) 627 624 return smc_connect_abort(smc, SMC_CLC_DECL_MEM, 0); ··· 1086 1083 int *local_contact) 1087 1084 { 1088 1085 /* allocate connection / link group */ 1089 - *local_contact = smc_conn_create(new_smc, false, 0, ibdev, ibport, 1086 + *local_contact = smc_conn_create(new_smc, false, 0, ibdev, ibport, 0, 1090 1087 &pclc->lcl, NULL, 0); 1091 1088 if (*local_contact < 0) { 1092 1089 if (*local_contact == -ENOMEM) ··· 1110 1107 struct smc_clc_msg_smcd *pclc_smcd; 1111 1108 1112 1109 pclc_smcd = smc_get_clc_msg_smcd(pclc); 1113 - *local_contact = smc_conn_create(new_smc, true, 0, NULL, 0, NULL, 1110 + *local_contact = smc_conn_create(new_smc, true, 0, NULL, 0, 0, NULL, 1114 1111 ismdev, pclc_smcd->gid); 1115 1112 if (*local_contact < 0) { 1116 1113 if (*local_contact == -ENOMEM)
+15 -11
net/smc/smc_cdc.c
··· 81 81 sizeof(struct smc_cdc_msg) > SMC_WR_BUF_SIZE, 82 82 "must increase SMC_WR_BUF_SIZE to at least sizeof(struct smc_cdc_msg)"); 83 83 BUILD_BUG_ON_MSG( 84 - sizeof(struct smc_cdc_msg) != SMC_WR_TX_SIZE, 84 + offsetofend(struct smc_cdc_msg, reserved) > SMC_WR_TX_SIZE, 85 85 "must adapt SMC_WR_TX_SIZE to sizeof(struct smc_cdc_msg); if not all smc_wr upper layer protocols use the same message size any more, must start to set link->wr_tx_sges[i].length on each individual smc_wr_tx_send()"); 86 86 BUILD_BUG_ON_MSG( 87 87 sizeof(struct smc_cdc_tx_pend) > SMC_WR_TX_PEND_PRIV_SIZE, ··· 177 177 int smcd_cdc_msg_send(struct smc_connection *conn) 178 178 { 179 179 struct smc_sock *smc = container_of(conn, struct smc_sock, conn); 180 + union smc_host_cursor curs; 180 181 struct smcd_cdc_msg cdc; 181 182 int rc, diff; 182 183 183 184 memset(&cdc, 0, sizeof(cdc)); 184 185 cdc.common.type = SMC_CDC_MSG_TYPE; 185 - cdc.prod_wrap = conn->local_tx_ctrl.prod.wrap; 186 - cdc.prod_count = conn->local_tx_ctrl.prod.count; 187 - 188 - cdc.cons_wrap = conn->local_tx_ctrl.cons.wrap; 189 - cdc.cons_count = conn->local_tx_ctrl.cons.count; 190 - cdc.prod_flags = conn->local_tx_ctrl.prod_flags; 191 - cdc.conn_state_flags = conn->local_tx_ctrl.conn_state_flags; 186 + curs.acurs.counter = atomic64_read(&conn->local_tx_ctrl.prod.acurs); 187 + cdc.prod.wrap = curs.wrap; 188 + cdc.prod.count = curs.count; 189 + curs.acurs.counter = atomic64_read(&conn->local_tx_ctrl.cons.acurs); 190 + cdc.cons.wrap = curs.wrap; 191 + cdc.cons.count = curs.count; 192 + cdc.cons.prod_flags = conn->local_tx_ctrl.prod_flags; 193 + cdc.cons.conn_state_flags = conn->local_tx_ctrl.conn_state_flags; 192 194 rc = smcd_tx_ism_write(conn, &cdc, sizeof(cdc), 0, 1); 193 195 if (rc) 194 196 return rc; 195 - smc_curs_copy(&conn->rx_curs_confirmed, &conn->local_tx_ctrl.cons, 196 - conn); 197 + smc_curs_copy(&conn->rx_curs_confirmed, &curs, conn); 197 198 /* Calculate transmitted data and increment free send buffer space */ 
198 199 diff = smc_curs_diff(conn->sndbuf_desc->len, &conn->tx_curs_fin, 199 200 &conn->tx_curs_sent); ··· 332 331 static void smcd_cdc_rx_tsklet(unsigned long data) 333 332 { 334 333 struct smc_connection *conn = (struct smc_connection *)data; 334 + struct smcd_cdc_msg *data_cdc; 335 335 struct smcd_cdc_msg cdc; 336 336 struct smc_sock *smc; 337 337 338 338 if (!conn) 339 339 return; 340 340 341 - memcpy(&cdc, conn->rmb_desc->cpu_addr, sizeof(cdc)); 341 + data_cdc = (struct smcd_cdc_msg *)conn->rmb_desc->cpu_addr; 342 + smcd_curs_copy(&cdc.prod, &data_cdc->prod, conn); 343 + smcd_curs_copy(&cdc.cons, &data_cdc->cons, conn); 342 344 smc = container_of(conn, struct smc_sock, conn); 343 345 smc_cdc_msg_recv(smc, (struct smc_cdc_msg *)&cdc); 344 346 }
+45 -15
net/smc/smc_cdc.h
··· 48 48 struct smc_cdc_producer_flags prod_flags; 49 49 struct smc_cdc_conn_state_flags conn_state_flags; 50 50 u8 reserved[18]; 51 - } __packed; /* format defined in RFC7609 */ 51 + }; 52 + 53 + /* SMC-D cursor format */ 54 + union smcd_cdc_cursor { 55 + struct { 56 + u16 wrap; 57 + u32 count; 58 + struct smc_cdc_producer_flags prod_flags; 59 + struct smc_cdc_conn_state_flags conn_state_flags; 60 + } __packed; 61 + #ifdef KERNEL_HAS_ATOMIC64 62 + atomic64_t acurs; /* for atomic processing */ 63 + #else 64 + u64 acurs; /* for atomic processing */ 65 + #endif 66 + } __aligned(8); 52 67 53 68 /* CDC message for SMC-D */ 54 69 struct smcd_cdc_msg { 55 70 struct smc_wr_rx_hdr common; /* Type = 0xFE */ 56 71 u8 res1[7]; 57 - u16 prod_wrap; 58 - u32 prod_count; 59 - u8 res2[2]; 60 - u16 cons_wrap; 61 - u32 cons_count; 62 - struct smc_cdc_producer_flags prod_flags; 63 - struct smc_cdc_conn_state_flags conn_state_flags; 72 + union smcd_cdc_cursor prod; 73 + union smcd_cdc_cursor cons; 64 74 u8 res3[8]; 65 - } __packed; 75 + } __aligned(8); 66 76 67 77 static inline bool smc_cdc_rxed_any_close(struct smc_connection *conn) 68 78 { ··· 133 123 static inline void smc_curs_copy_net(union smc_cdc_cursor *tgt, 134 124 union smc_cdc_cursor *src, 135 125 struct smc_connection *conn) 126 + { 127 + #ifndef KERNEL_HAS_ATOMIC64 128 + unsigned long flags; 129 + 130 + spin_lock_irqsave(&conn->acurs_lock, flags); 131 + tgt->acurs = src->acurs; 132 + spin_unlock_irqrestore(&conn->acurs_lock, flags); 133 + #else 134 + atomic64_set(&tgt->acurs, atomic64_read(&src->acurs)); 135 + #endif 136 + } 137 + 138 + static inline void smcd_curs_copy(union smcd_cdc_cursor *tgt, 139 + union smcd_cdc_cursor *src, 140 + struct smc_connection *conn) 136 141 { 137 142 #ifndef KERNEL_HAS_ATOMIC64 138 143 unsigned long flags; ··· 247 222 static inline void smcd_cdc_msg_to_host(struct smc_host_cdc_msg *local, 248 223 struct smcd_cdc_msg *peer) 249 224 { 250 - local->prod.wrap = peer->prod_wrap; 251 - 
local->prod.count = peer->prod_count; 252 - local->cons.wrap = peer->cons_wrap; 253 - local->cons.count = peer->cons_count; 254 - local->prod_flags = peer->prod_flags; 255 - local->conn_state_flags = peer->conn_state_flags; 225 + union smc_host_cursor temp; 226 + 227 + temp.wrap = peer->prod.wrap; 228 + temp.count = peer->prod.count; 229 + atomic64_set(&local->prod.acurs, atomic64_read(&temp.acurs)); 230 + 231 + temp.wrap = peer->cons.wrap; 232 + temp.count = peer->cons.count; 233 + atomic64_set(&local->cons.acurs, atomic64_read(&temp.acurs)); 234 + local->prod_flags = peer->cons.prod_flags; 235 + local->conn_state_flags = peer->cons.conn_state_flags; 256 236 } 257 237 258 238 static inline void smc_cdc_msg_to_host(struct smc_host_cdc_msg *local,
+14 -6
net/smc/smc_core.c
··· 184 184 185 185 if (!lgr->is_smcd && lnk->state != SMC_LNK_INACTIVE) 186 186 smc_llc_link_inactive(lnk); 187 + if (lgr->is_smcd) 188 + smc_ism_signal_shutdown(lgr); 187 189 smc_lgr_free(lgr); 188 190 } 189 191 } ··· 487 485 } 488 486 489 487 /* Called when SMC-D device is terminated or peer is lost */ 490 - void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid) 488 + void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid, unsigned short vlan) 491 489 { 492 490 struct smc_link_group *lgr, *l; 493 491 LIST_HEAD(lgr_free_list); ··· 497 495 list_for_each_entry_safe(lgr, l, &smc_lgr_list.list, list) { 498 496 if (lgr->is_smcd && lgr->smcd == dev && 499 497 (!peer_gid || lgr->peer_gid == peer_gid) && 500 - !list_empty(&lgr->list)) { 498 + (vlan == VLAN_VID_MASK || lgr->vlan_id == vlan)) { 501 499 __smc_lgr_terminate(lgr); 502 500 list_move(&lgr->list, &lgr_free_list); 503 501 } ··· 508 506 list_for_each_entry_safe(lgr, l, &lgr_free_list, list) { 509 507 list_del_init(&lgr->list); 510 508 cancel_delayed_work_sync(&lgr->free_work); 509 + if (!peer_gid && vlan == VLAN_VID_MASK) /* dev terminated? 
*/ 510 + smc_ism_signal_shutdown(lgr); 511 511 smc_lgr_free(lgr); 512 512 } 513 513 } ··· 563 559 564 560 static bool smcr_lgr_match(struct smc_link_group *lgr, 565 561 struct smc_clc_msg_local *lcl, 566 - enum smc_lgr_role role) 562 + enum smc_lgr_role role, u32 clcqpn) 567 563 { 568 564 return !memcmp(lgr->peer_systemid, lcl->id_for_peer, 569 565 SMC_SYSTEMID_LEN) && ··· 571 567 SMC_GID_SIZE) && 572 568 !memcmp(lgr->lnk[SMC_SINGLE_LINK].peer_mac, lcl->mac, 573 569 sizeof(lcl->mac)) && 574 - lgr->role == role; 570 + lgr->role == role && 571 + (lgr->role == SMC_SERV || 572 + lgr->lnk[SMC_SINGLE_LINK].peer_qpn == clcqpn); 575 573 } 576 574 577 575 static bool smcd_lgr_match(struct smc_link_group *lgr, ··· 584 578 585 579 /* create a new SMC connection (and a new link group if necessary) */ 586 580 int smc_conn_create(struct smc_sock *smc, bool is_smcd, int srv_first_contact, 587 - struct smc_ib_device *smcibdev, u8 ibport, 581 + struct smc_ib_device *smcibdev, u8 ibport, u32 clcqpn, 588 582 struct smc_clc_msg_local *lcl, struct smcd_dev *smcd, 589 583 u64 peer_gid) 590 584 { ··· 609 603 list_for_each_entry(lgr, &smc_lgr_list.list, list) { 610 604 write_lock_bh(&lgr->conns_lock); 611 605 if ((is_smcd ? smcd_lgr_match(lgr, smcd, peer_gid) : 612 - smcr_lgr_match(lgr, lcl, role)) && 606 + smcr_lgr_match(lgr, lcl, role, clcqpn)) && 613 607 !lgr->sync_err && 614 608 lgr->vlan_id == vlan_id && 615 609 (role == SMC_CLNT || ··· 1030 1024 smc_llc_link_inactive(lnk); 1031 1025 } 1032 1026 cancel_delayed_work_sync(&lgr->free_work); 1027 + if (lgr->is_smcd) 1028 + smc_ism_signal_shutdown(lgr); 1033 1029 smc_lgr_free(lgr); /* free link group */ 1034 1030 } 1035 1031 }
+3 -2
net/smc/smc_core.h
··· 247 247 void smc_lgr_forget(struct smc_link_group *lgr); 248 248 void smc_lgr_terminate(struct smc_link_group *lgr); 249 249 void smc_port_terminate(struct smc_ib_device *smcibdev, u8 ibport); 250 - void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid); 250 + void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid, 251 + unsigned short vlan); 251 252 int smc_buf_create(struct smc_sock *smc, bool is_smcd); 252 253 int smc_uncompress_bufsize(u8 compressed); 253 254 int smc_rmb_rtoken_handling(struct smc_connection *conn, ··· 263 262 264 263 void smc_conn_free(struct smc_connection *conn); 265 264 int smc_conn_create(struct smc_sock *smc, bool is_smcd, int srv_first_contact, 266 - struct smc_ib_device *smcibdev, u8 ibport, 265 + struct smc_ib_device *smcibdev, u8 ibport, u32 clcqpn, 267 266 struct smc_clc_msg_local *lcl, struct smcd_dev *smcd, 268 267 u64 peer_gid); 269 268 void smcd_conn_free(struct smc_connection *conn);
+32 -11
net/smc/smc_ism.c
··· 187 187 #define ISM_EVENT_REQUEST 0x0001 188 188 #define ISM_EVENT_RESPONSE 0x0002 189 189 #define ISM_EVENT_REQUEST_IR 0x00000001 190 + #define ISM_EVENT_CODE_SHUTDOWN 0x80 190 191 #define ISM_EVENT_CODE_TESTLINK 0x83 192 + 193 + union smcd_sw_event_info { 194 + u64 info; 195 + struct { 196 + u8 uid[SMC_LGR_ID_SIZE]; 197 + unsigned short vlan_id; 198 + u16 code; 199 + }; 200 + }; 191 201 192 202 static void smcd_handle_sw_event(struct smc_ism_event_work *wrk) 193 203 { 194 - union { 195 - u64 info; 196 - struct { 197 - u32 uid; 198 - unsigned short vlanid; 199 - u16 code; 200 - }; 201 - } ev_info; 204 + union smcd_sw_event_info ev_info; 202 205 206 + ev_info.info = wrk->event.info; 203 207 switch (wrk->event.code) { 208 + case ISM_EVENT_CODE_SHUTDOWN: /* Peer shut down DMBs */ 209 + smc_smcd_terminate(wrk->smcd, wrk->event.tok, ev_info.vlan_id); 210 + break; 204 211 case ISM_EVENT_CODE_TESTLINK: /* Activity timer */ 205 - ev_info.info = wrk->event.info; 206 212 if (ev_info.code == ISM_EVENT_REQUEST) { 207 213 ev_info.code = ISM_EVENT_RESPONSE; 208 214 wrk->smcd->ops->signal_event(wrk->smcd, ··· 221 215 } 222 216 } 223 217 218 + int smc_ism_signal_shutdown(struct smc_link_group *lgr) 219 + { 220 + int rc; 221 + union smcd_sw_event_info ev_info; 222 + 223 + memcpy(ev_info.uid, lgr->id, SMC_LGR_ID_SIZE); 224 + ev_info.vlan_id = lgr->vlan_id; 225 + ev_info.code = ISM_EVENT_REQUEST; 226 + rc = lgr->smcd->ops->signal_event(lgr->smcd, lgr->peer_gid, 227 + ISM_EVENT_REQUEST_IR, 228 + ISM_EVENT_CODE_SHUTDOWN, 229 + ev_info.info); 230 + return rc; 231 + } 232 + 224 233 /* worker for SMC-D events */ 225 234 static void smc_ism_event_work(struct work_struct *work) 226 235 { ··· 244 223 245 224 switch (wrk->event.type) { 246 225 case ISM_EVENT_GID: /* GID event, token is peer GID */ 247 - smc_smcd_terminate(wrk->smcd, wrk->event.tok); 226 + smc_smcd_terminate(wrk->smcd, wrk->event.tok, VLAN_VID_MASK); 248 227 break; 249 228 case ISM_EVENT_DMB: 250 229 break; ··· 310 289 
spin_unlock(&smcd_dev_list.lock); 311 290 flush_workqueue(smcd->event_wq); 312 291 destroy_workqueue(smcd->event_wq); 313 - smc_smcd_terminate(smcd, 0); 292 + smc_smcd_terminate(smcd, 0, VLAN_VID_MASK); 314 293 315 294 device_del(&smcd->dev); 316 295 }
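The smc_ism.c change above lifts the anonymous union into a named `union smcd_sw_event_info` so that both the event handler and the new `smc_ism_signal_shutdown()` can pack a 4-byte link-group id, a VLAN id and an event code into the single `u64` that the ISM `signal_event()` op carries. A minimal userspace sketch of that packing trick (field sizes mirror the kernel struct; `SMC_LGR_ID_SIZE` is assumed to be 4, and all names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define LGR_ID_SIZE 4	/* stands in for SMC_LGR_ID_SIZE */

/* Same shape as union smcd_sw_event_info: one u64 overlaid with fields. */
union sw_event_info {
	uint64_t info;
	struct {
		uint8_t  uid[LGR_ID_SIZE];
		uint16_t vlan_id;
		uint16_t code;
	};
};

/* Pack the fields for transport as a single u64, as the sender side does. */
uint64_t pack_event(const uint8_t *uid, uint16_t vlan_id, uint16_t code)
{
	union sw_event_info ev;

	memcpy(ev.uid, uid, LGR_ID_SIZE);
	ev.vlan_id = vlan_id;
	ev.code = code;
	return ev.info;
}

/* Unpack on the receive side, as the event handler does. */
uint16_t event_code(uint64_t info)
{
	union sw_event_info ev = { .info = info };

	return ev.code;
}
```

The struct members total exactly 8 bytes with no padding, so the overlay with the `u64` is lossless on both ends.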
+1
net/smc/smc_ism.h
··· 45 45 int smc_ism_unregister_dmb(struct smcd_dev *dev, struct smc_buf_desc *dmb_desc); 46 46 int smc_ism_write(struct smcd_dev *dev, const struct smc_ism_position *pos, 47 47 void *data, size_t len); 48 + int smc_ism_signal_shutdown(struct smc_link_group *lgr); 48 49 #endif
+3 -1
net/smc/smc_wr.c
··· 215 215 216 216 pend = container_of(wr_pend_priv, struct smc_wr_tx_pend, priv); 217 217 if (pend->idx < link->wr_tx_cnt) { 218 + u32 idx = pend->idx; 219 + 218 220 /* clear the full struct smc_wr_tx_pend including .priv */ 219 221 memset(&link->wr_tx_pends[pend->idx], 0, 220 222 sizeof(link->wr_tx_pends[pend->idx])); 221 223 memset(&link->wr_tx_bufs[pend->idx], 0, 222 224 sizeof(link->wr_tx_bufs[pend->idx])); 223 - test_and_clear_bit(pend->idx, link->wr_tx_mask); 225 + test_and_clear_bit(idx, link->wr_tx_mask); 224 226 return 1; 225 227 } 226 228
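The smc_wr.c fix above snapshots `pend->idx` into a local before the `memset()` calls: `pend` points into `link->wr_tx_pends[]` itself, so clearing that slot also zeroes the index the old code then passed to `test_and_clear_bit()`. A toy reproduction of the aliasing hazard (names invented):

```c
#include <assert.h>
#include <string.h>

struct pend { unsigned int idx; };

/* Buggy shape: reads slot->idx after clearing the slot it points into. */
unsigned int release_buggy(struct pend *slots, struct pend *slot)
{
	memset(&slots[slot->idx], 0, sizeof(slots[slot->idx]));
	return slot->idx;	/* already zeroed when slot aliases slots[idx] */
}

/* Fixed shape: snapshot the index first, as the smc_wr.c change does. */
unsigned int release_fixed(struct pend *slots, struct pend *slot)
{
	unsigned int idx = slot->idx;

	memset(&slots[idx], 0, sizeof(slots[idx]));
	return idx;
}
```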
+1 -1
net/socket.c
··· 853 853 struct socket *sock = file->private_data; 854 854 855 855 if (unlikely(!sock->ops->splice_read)) 856 - return -EINVAL; 856 + return generic_file_splice_read(file, ppos, pipe, len, flags); 857 857 858 858 return sock->ops->splice_read(sock, ppos, pipe, len, flags); 859 859 }
+1 -7
net/sunrpc/auth_generic.c
··· 281 281 { 282 282 struct auth_cred *acred = &container_of(cred, struct generic_cred, 283 283 gc_base)->acred; 284 - bool ret; 285 - 286 - get_rpccred(cred); 287 - ret = test_bit(RPC_CRED_KEY_EXPIRE_SOON, &acred->ac_flags); 288 - put_rpccred(cred); 289 - 290 - return ret; 284 + return test_bit(RPC_CRED_KEY_EXPIRE_SOON, &acred->ac_flags); 291 285 } 292 286 293 287 static const struct rpc_credops generic_credops = {
+42 -19
net/sunrpc/auth_gss/auth_gss.c
··· 1239 1239 return &gss_auth->rpc_auth; 1240 1240 } 1241 1241 1242 + static struct gss_cred * 1243 + gss_dup_cred(struct gss_auth *gss_auth, struct gss_cred *gss_cred) 1244 + { 1245 + struct gss_cred *new; 1246 + 1247 + /* Make a copy of the cred so that we can reference count it */ 1248 + new = kzalloc(sizeof(*gss_cred), GFP_NOIO); 1249 + if (new) { 1250 + struct auth_cred acred = { 1251 + .uid = gss_cred->gc_base.cr_uid, 1252 + }; 1253 + struct gss_cl_ctx *ctx = 1254 + rcu_dereference_protected(gss_cred->gc_ctx, 1); 1255 + 1256 + rpcauth_init_cred(&new->gc_base, &acred, 1257 + &gss_auth->rpc_auth, 1258 + &gss_nullops); 1259 + new->gc_base.cr_flags = 1UL << RPCAUTH_CRED_UPTODATE; 1260 + new->gc_service = gss_cred->gc_service; 1261 + new->gc_principal = gss_cred->gc_principal; 1262 + kref_get(&gss_auth->kref); 1263 + rcu_assign_pointer(new->gc_ctx, ctx); 1264 + gss_get_ctx(ctx); 1265 + } 1266 + return new; 1267 + } 1268 + 1242 1269 /* 1243 - * gss_destroying_context will cause the RPCSEC_GSS to send a NULL RPC call 1270 + * gss_send_destroy_context will cause the RPCSEC_GSS to send a NULL RPC call 1244 1271 * to the server with the GSS control procedure field set to 1245 1272 * RPC_GSS_PROC_DESTROY. This should normally cause the server to release 1246 1273 * all RPCSEC_GSS state associated with that context. 
1247 1274 */ 1248 - static int 1249 - gss_destroying_context(struct rpc_cred *cred) 1275 + static void 1276 + gss_send_destroy_context(struct rpc_cred *cred) 1250 1277 { 1251 1278 struct gss_cred *gss_cred = container_of(cred, struct gss_cred, gc_base); 1252 1279 struct gss_auth *gss_auth = container_of(cred->cr_auth, struct gss_auth, rpc_auth); 1253 1280 struct gss_cl_ctx *ctx = rcu_dereference_protected(gss_cred->gc_ctx, 1); 1281 + struct gss_cred *new; 1254 1282 struct rpc_task *task; 1255 1283 1256 - if (test_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags) == 0) 1257 - return 0; 1284 + new = gss_dup_cred(gss_auth, gss_cred); 1285 + if (new) { 1286 + ctx->gc_proc = RPC_GSS_PROC_DESTROY; 1258 1287 1259 - ctx->gc_proc = RPC_GSS_PROC_DESTROY; 1260 - cred->cr_ops = &gss_nullops; 1288 + task = rpc_call_null(gss_auth->client, &new->gc_base, 1289 + RPC_TASK_ASYNC|RPC_TASK_SOFT); 1290 + if (!IS_ERR(task)) 1291 + rpc_put_task(task); 1261 1292 1262 - /* Take a reference to ensure the cred will be destroyed either 1263 - * by the RPC call or by the put_rpccred() below */ 1264 - get_rpccred(cred); 1265 - 1266 - task = rpc_call_null(gss_auth->client, cred, RPC_TASK_ASYNC|RPC_TASK_SOFT); 1267 - if (!IS_ERR(task)) 1268 - rpc_put_task(task); 1269 - 1270 - put_rpccred(cred); 1271 - return 1; 1293 + put_rpccred(&new->gc_base); 1294 + } 1272 1295 } 1273 1296 1274 1297 /* gss_destroy_cred (and gss_free_ctx) are used to clean up after failure ··· 1353 1330 gss_destroy_cred(struct rpc_cred *cred) 1354 1331 { 1355 1332 1356 - if (gss_destroying_context(cred)) 1357 - return; 1333 + if (test_and_clear_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags) != 0) 1334 + gss_send_destroy_context(cred); 1358 1335 gss_destroy_nullcred(cred); 1359 1336 } 1360 1337
+3 -4
net/sunrpc/xdr.c
··· 546 546 static __be32 *xdr_get_next_encode_buffer(struct xdr_stream *xdr, 547 547 size_t nbytes) 548 548 { 549 - static __be32 *p; 549 + __be32 *p; 550 550 int space_left; 551 551 int frag1bytes, frag2bytes; 552 552 ··· 673 673 WARN_ON_ONCE(xdr->iov); 674 674 return; 675 675 } 676 - if (fraglen) { 676 + if (fraglen) 677 677 xdr->end = head->iov_base + head->iov_len; 678 - xdr->page_ptr--; 679 - } 680 678 /* (otherwise assume xdr->end is already set) */ 679 + xdr->page_ptr--; 681 680 head->iov_len = len; 682 681 buf->len = len; 683 682 xdr->p = head->iov_base + head->iov_len;
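The first xdr.c hunk drops a stray `static` from a function-local pointer: a static local is one shared slot that persists across calls, which is wrong for per-call scratch state in a reentrant function. A toy illustration of the difference (names invented):

```c
#include <assert.h>
#include <stddef.h>

/* Buggy shape: the pointer survives across calls, so a call that skips
 * the assignment returns whatever the previous call stored. */
const int *next_buffer_buggy(const int *fresh, int update)
{
	static const int *p;	/* BUG: one shared slot for all callers */

	if (update)
		p = fresh;
	return p;
}

/* Fixed shape: an ordinary automatic variable, computed per call. */
const int *next_buffer_fixed(const int *fresh, int update)
{
	const int *p = NULL;

	if (update)
		p = fresh;
	return p;
}
```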
+10 -9
net/tipc/discover.c
··· 166 166 167 167 /* Apply trial address if we just left trial period */ 168 168 if (!trial && !self) { 169 - tipc_net_finalize(net, tn->trial_addr); 169 + tipc_sched_net_finalize(net, tn->trial_addr); 170 + msg_set_prevnode(buf_msg(d->skb), tn->trial_addr); 170 171 msg_set_type(buf_msg(d->skb), DSC_REQ_MSG); 171 172 } 172 173 ··· 301 300 goto exit; 302 301 } 303 302 304 - /* Trial period over ? */ 305 - if (!time_before(jiffies, tn->addr_trial_end)) { 306 - /* Did we just leave it ? */ 307 - if (!tipc_own_addr(net)) 308 - tipc_net_finalize(net, tn->trial_addr); 309 - 310 - msg_set_type(buf_msg(d->skb), DSC_REQ_MSG); 311 - msg_set_prevnode(buf_msg(d->skb), tipc_own_addr(net)); 303 + /* Did we just leave trial period ? */ 304 + if (!time_before(jiffies, tn->addr_trial_end) && !tipc_own_addr(net)) { 305 + mod_timer(&d->timer, jiffies + TIPC_DISC_INIT); 306 + spin_unlock_bh(&d->lock); 307 + tipc_sched_net_finalize(net, tn->trial_addr); 308 + return; 312 309 } 313 310 314 311 /* Adjust timeout interval according to discovery phase */ ··· 318 319 d->timer_intv = TIPC_DISC_SLOW; 319 320 else if (!d->num_nodes && d->timer_intv > TIPC_DISC_FAST) 320 321 d->timer_intv = TIPC_DISC_FAST; 322 + msg_set_type(buf_msg(d->skb), DSC_REQ_MSG); 323 + msg_set_prevnode(buf_msg(d->skb), tn->trial_addr); 321 324 } 322 325 323 326 mod_timer(&d->timer, jiffies + d->timer_intv);
+7 -4
net/tipc/link.c
··· 1594 1594 if (in_range(peers_prio, l->priority + 1, TIPC_MAX_LINK_PRI)) 1595 1595 l->priority = peers_prio; 1596 1596 1597 - /* ACTIVATE_MSG serves as PEER_RESET if link is already down */ 1598 - if (msg_peer_stopping(hdr)) 1597 + /* If peer is going down we want full re-establish cycle */ 1598 + if (msg_peer_stopping(hdr)) { 1599 1599 rc = tipc_link_fsm_evt(l, LINK_FAILURE_EVT); 1600 - else if ((mtyp == RESET_MSG) || !link_is_up(l)) 1600 + break; 1601 + } 1602 + /* ACTIVATE_MSG serves as PEER_RESET if link is already down */ 1603 + if (mtyp == RESET_MSG || !link_is_up(l)) 1601 1604 rc = tipc_link_fsm_evt(l, LINK_PEER_RESET_EVT); 1602 1605 1603 1606 /* ACTIVATE_MSG takes up link if it was already locally reset */ 1604 - if ((mtyp == ACTIVATE_MSG) && (l->state == LINK_ESTABLISHING)) 1607 + if (mtyp == ACTIVATE_MSG && l->state == LINK_ESTABLISHING) 1605 1608 rc = TIPC_LINK_UP_EVT; 1606 1609 1607 1610 l->peer_session = msg_session(hdr);
+37 -8
net/tipc/net.c
··· 104 104 * - A local spin_lock protecting the queue of subscriber events. 105 105 */ 106 106 107 + struct tipc_net_work { 108 + struct work_struct work; 109 + struct net *net; 110 + u32 addr; 111 + }; 112 + 113 + static void tipc_net_finalize(struct net *net, u32 addr); 114 + 107 115 int tipc_net_init(struct net *net, u8 *node_id, u32 addr) 108 116 { 109 117 if (tipc_own_id(net)) { ··· 127 119 return 0; 128 120 } 129 121 130 - void tipc_net_finalize(struct net *net, u32 addr) 122 + static void tipc_net_finalize(struct net *net, u32 addr) 131 123 { 132 124 struct tipc_net *tn = tipc_net(net); 133 125 134 - if (!cmpxchg(&tn->node_addr, 0, addr)) { 135 - tipc_set_node_addr(net, addr); 136 - tipc_named_reinit(net); 137 - tipc_sk_reinit(net); 138 - tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr, 139 - TIPC_CLUSTER_SCOPE, 0, addr); 140 - } 126 + if (cmpxchg(&tn->node_addr, 0, addr)) 127 + return; 128 + tipc_set_node_addr(net, addr); 129 + tipc_named_reinit(net); 130 + tipc_sk_reinit(net); 131 + tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr, 132 + TIPC_CLUSTER_SCOPE, 0, addr); 133 + } 134 + 135 + static void tipc_net_finalize_work(struct work_struct *work) 136 + { 137 + struct tipc_net_work *fwork; 138 + 139 + fwork = container_of(work, struct tipc_net_work, work); 140 + tipc_net_finalize(fwork->net, fwork->addr); 141 + kfree(fwork); 142 + } 143 + 144 + void tipc_sched_net_finalize(struct net *net, u32 addr) 145 + { 146 + struct tipc_net_work *fwork = kzalloc(sizeof(*fwork), GFP_ATOMIC); 147 + 148 + if (!fwork) 149 + return; 150 + INIT_WORK(&fwork->work, tipc_net_finalize_work); 151 + fwork->net = net; 152 + fwork->addr = addr; 153 + schedule_work(&fwork->work); 141 154 } 142 155 143 156 void tipc_net_stop(struct net *net)
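`tipc_net_finalize()` keeps its `cmpxchg(&tn->node_addr, 0, addr)` guard, now restructured as an early return, so only the first caller that moves the address off 0 performs the one-shot setup; the new `tipc_sched_net_finalize()` then defers that work to a work item so it runs in process context. A userspace sketch of the same first-caller-wins idiom using C11 atomics (names invented):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static atomic_uint_fast32_t node_addr = 0;
static int init_count = 0;

/* Returns true only for the single caller that actually finalized the
 * address, mirroring the cmpxchg(&tn->node_addr, 0, addr) guard above. */
bool net_finalize(uint32_t addr)
{
	uint_fast32_t expected = 0;

	if (!atomic_compare_exchange_strong(&node_addr, &expected, addr))
		return false;	/* someone else already set it */
	init_count++;		/* stands in for the one-time setup work */
	return true;
}
```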
+1 -1
net/tipc/net.h
··· 42 42 extern const struct nla_policy tipc_nl_net_policy[]; 43 43 44 44 int tipc_net_init(struct net *net, u8 *node_id, u32 addr); 45 - void tipc_net_finalize(struct net *net, u32 addr); 45 + void tipc_sched_net_finalize(struct net *net, u32 addr); 46 46 void tipc_net_stop(struct net *net); 47 47 int tipc_nl_net_dump(struct sk_buff *skb, struct netlink_callback *cb); 48 48 int tipc_nl_net_set(struct sk_buff *skb, struct genl_info *info);
+5 -2
net/tipc/node.c
··· 584 584 /* tipc_node_cleanup - delete nodes that does not 585 585 * have active links for NODE_CLEANUP_AFTER time 586 586 */ 587 - static int tipc_node_cleanup(struct tipc_node *peer) 587 + static bool tipc_node_cleanup(struct tipc_node *peer) 588 588 { 589 589 struct tipc_net *tn = tipc_net(peer->net); 590 590 bool deleted = false; 591 591 592 - spin_lock_bh(&tn->node_list_lock); 592 + /* If lock held by tipc_node_stop() the node will be deleted anyway */ 593 + if (!spin_trylock_bh(&tn->node_list_lock)) 594 + return false; 595 + 593 596 tipc_node_write_lock(peer); 594 597 595 598 if (!node_is_up(peer) && time_after(jiffies, peer->delete_at)) {
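The node.c change swaps `spin_lock_bh()` for `spin_trylock_bh()` in the timer-driven cleanup path: if `node_list_lock` is already held, the comment notes the holder (`tipc_node_stop()`) will delete the node anyway, so the timer can simply back off instead of blocking. The trylock shape, sketched with a C11 atomic flag standing in for the spinlock (names invented):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag list_lock = ATOMIC_FLAG_INIT;

bool try_lock(void) { return !atomic_flag_test_and_set(&list_lock); }
void unlock(void)   { atomic_flag_clear(&list_lock); }

/* Mirrors tipc_node_cleanup(): bail out if the lock is contended and
 * report whether cleanup actually ran. */
bool node_cleanup(void)
{
	if (!try_lock())
		return false;	/* holder will delete the node anyway */
	/* ... delete stale node here ... */
	unlock();
	return true;
}
```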
+11 -4
net/tipc/socket.c
··· 1555 1555 /** 1556 1556 * tipc_sk_anc_data_recv - optionally capture ancillary data for received message 1557 1557 * @m: descriptor for message info 1558 - * @msg: received message header 1558 + * @skb: received message buffer 1559 1559 * @tsk: TIPC port associated with message 1560 1560 * 1561 1561 * Note: Ancillary data is not captured if not requested by receiver. 1562 1562 * 1563 1563 * Returns 0 if successful, otherwise errno 1564 1564 */ 1565 - static int tipc_sk_anc_data_recv(struct msghdr *m, struct tipc_msg *msg, 1565 + static int tipc_sk_anc_data_recv(struct msghdr *m, struct sk_buff *skb, 1566 1566 struct tipc_sock *tsk) 1567 1567 { 1568 + struct tipc_msg *msg; 1568 1569 u32 anc_data[3]; 1569 1570 u32 err; 1570 1571 u32 dest_type; ··· 1574 1573 1575 1574 if (likely(m->msg_controllen == 0)) 1576 1575 return 0; 1576 + msg = buf_msg(skb); 1577 1577 1578 1578 /* Optionally capture errored message object(s) */ 1579 1579 err = msg ? msg_errcode(msg) : 0; ··· 1585 1583 if (res) 1586 1584 return res; 1587 1585 if (anc_data[1]) { 1586 + if (skb_linearize(skb)) 1587 + return -ENOMEM; 1588 + msg = buf_msg(skb); 1588 1589 res = put_cmsg(m, SOL_TIPC, TIPC_RETDATA, anc_data[1], 1589 1590 msg_data(msg)); 1590 1591 if (res) ··· 1749 1744 1750 1745 /* Collect msg meta data, including error code and rejected data */ 1751 1746 tipc_sk_set_orig_addr(m, skb); 1752 - rc = tipc_sk_anc_data_recv(m, hdr, tsk); 1747 + rc = tipc_sk_anc_data_recv(m, skb, tsk); 1753 1748 if (unlikely(rc)) 1754 1749 goto exit; 1750 + hdr = buf_msg(skb); 1755 1751 1756 1752 /* Capture data if non-error msg, otherwise just set return value */ 1757 1753 if (likely(!err)) { ··· 1862 1856 /* Collect msg meta data, incl. 
error code and rejected data */ 1863 1857 if (!copied) { 1864 1858 tipc_sk_set_orig_addr(m, skb); 1865 - rc = tipc_sk_anc_data_recv(m, hdr, tsk); 1859 + rc = tipc_sk_anc_data_recv(m, skb, tsk); 1866 1860 if (rc) 1867 1861 break; 1862 + hdr = buf_msg(skb); 1868 1863 } 1869 1864 1870 1865 /* Copy data if msg ok, otherwise return error/partial data */
-2
scripts/Makefile.build
··· 236 236 objtool_args += --no-unreachable 237 237 endif 238 238 ifdef CONFIG_RETPOLINE 239 - ifneq ($(RETPOLINE_CFLAGS),) 240 239 objtool_args += --retpoline 241 - endif 242 240 endif 243 241 244 242
+1 -1
scripts/faddr2line
··· 71 71 72 72 # Try to figure out the source directory prefix so we can remove it from the 73 73 # addr2line output. HACK ALERT: This assumes that start_kernel() is in 74 - # kernel/init.c! This only works for vmlinux. Otherwise it falls back to 74 + # init/main.c! This only works for vmlinux. Otherwise it falls back to 75 75 # printing the absolute path. 76 76 find_dir_prefix() { 77 77 local objfile=$1
+4 -3
scripts/kconfig/merge_config.sh
··· 102 102 fi 103 103 104 104 MERGE_LIST=$* 105 - SED_CONFIG_EXP="s/^\(# \)\{0,1\}\(${CONFIG_PREFIX}[a-zA-Z0-9_]*\)[= ].*/\2/p" 105 + SED_CONFIG_EXP1="s/^\(${CONFIG_PREFIX}[a-zA-Z0-9_]*\)=.*/\1/p" 106 + SED_CONFIG_EXP2="s/^# \(${CONFIG_PREFIX}[a-zA-Z0-9_]*\) is not set$/\1/p" 106 107 107 108 TMP_FILE=$(mktemp ./.tmp.config.XXXXXXXXXX) 108 109 ··· 117 116 echo "The merge file '$MERGE_FILE' does not exist. Exit." >&2 118 117 exit 1 119 118 fi 120 - CFG_LIST=$(sed -n "$SED_CONFIG_EXP" $MERGE_FILE) 119 + CFG_LIST=$(sed -n -e "$SED_CONFIG_EXP1" -e "$SED_CONFIG_EXP2" $MERGE_FILE) 121 120 122 121 for CFG in $CFG_LIST ; do 123 122 grep -q -w $CFG $TMP_FILE || continue ··· 160 159 161 160 162 161 # Check all specified config values took (might have missed-dependency issues) 163 - for CFG in $(sed -n "$SED_CONFIG_EXP" $TMP_FILE); do 162 + for CFG in $(sed -n -e "$SED_CONFIG_EXP1" -e "$SED_CONFIG_EXP2" $TMP_FILE); do 164 163 165 164 REQUESTED_VAL=$(grep -w -e "$CFG" $TMP_FILE) 166 165 ACTUAL_VAL=$(grep -w -e "$CFG" "$KCONFIG_CONFIG")
+3 -3
scripts/package/builddeb
··· 81 81 cp System.map "$tmpdir/boot/System.map-$version" 82 82 cp $KCONFIG_CONFIG "$tmpdir/boot/config-$version" 83 83 fi 84 - cp "$($MAKE -s image_name)" "$tmpdir/$installed_image_path" 84 + cp "$($MAKE -s -f $srctree/Makefile image_name)" "$tmpdir/$installed_image_path" 85 85 86 - if grep -q "^CONFIG_OF=y" $KCONFIG_CONFIG ; then 86 + if grep -q "^CONFIG_OF_EARLY_FLATTREE=y" $KCONFIG_CONFIG ; then 87 87 # Only some architectures with OF support have this target 88 - if grep -q dtbs_install "${srctree}/arch/$SRCARCH/Makefile"; then 88 + if [ -d "${srctree}/arch/$SRCARCH/boot/dts" ]; then 89 89 $MAKE KBUILD_SRC= INSTALL_DTBS_PATH="$tmpdir/usr/lib/$packagename" dtbs_install 90 90 fi 91 91 fi
+5 -2
scripts/package/mkdebian
··· 88 88 version=$KERNELRELEASE 89 89 if [ -n "$KDEB_PKGVERSION" ]; then 90 90 packageversion=$KDEB_PKGVERSION 91 + revision=${packageversion##*-} 91 92 else 92 93 revision=$(cat .version 2>/dev/null||echo 1) 93 94 packageversion=$version-$revision ··· 206 205 #!$(command -v $MAKE) -f 207 206 208 207 build: 209 - \$(MAKE) KERNELRELEASE=${version} ARCH=${ARCH} KBUILD_SRC= 208 + \$(MAKE) KERNELRELEASE=${version} ARCH=${ARCH} \ 209 + KBUILD_BUILD_VERSION=${revision} KBUILD_SRC= 210 210 211 211 binary-arch: 212 - \$(MAKE) KERNELRELEASE=${version} ARCH=${ARCH} KBUILD_SRC= intdeb-pkg 212 + \$(MAKE) KERNELRELEASE=${version} ARCH=${ARCH} \ 213 + KBUILD_BUILD_VERSION=${revision} KBUILD_SRC= intdeb-pkg 213 214 214 215 clean: 215 216 rm -rf debian/*tmp debian/files
+6 -5
scripts/package/mkspec
··· 12 12 # how we were called determines which rpms we build and how we build them 13 13 if [ "$1" = prebuilt ]; then 14 14 S=DEL 15 + MAKE="$MAKE -f $srctree/Makefile" 15 16 else 16 17 S= 17 18 fi ··· 79 78 $S %setup -q 80 79 $S 81 80 $S %build 82 - $S make %{?_smp_mflags} KBUILD_BUILD_VERSION=%{release} 81 + $S $MAKE %{?_smp_mflags} KBUILD_BUILD_VERSION=%{release} 83 82 $S 84 83 %install 85 84 mkdir -p %{buildroot}/boot 86 85 %ifarch ia64 87 86 mkdir -p %{buildroot}/boot/efi 88 - cp \$(make image_name) %{buildroot}/boot/efi/vmlinuz-$KERNELRELEASE 87 + cp \$($MAKE image_name) %{buildroot}/boot/efi/vmlinuz-$KERNELRELEASE 89 88 ln -s efi/vmlinuz-$KERNELRELEASE %{buildroot}/boot/ 90 89 %else 91 - cp \$(make image_name) %{buildroot}/boot/vmlinuz-$KERNELRELEASE 90 + cp \$($MAKE image_name) %{buildroot}/boot/vmlinuz-$KERNELRELEASE 92 91 %endif 93 - $M make %{?_smp_mflags} INSTALL_MOD_PATH=%{buildroot} KBUILD_SRC= modules_install 94 - make %{?_smp_mflags} INSTALL_HDR_PATH=%{buildroot}/usr KBUILD_SRC= headers_install 92 + $M $MAKE %{?_smp_mflags} INSTALL_MOD_PATH=%{buildroot} modules_install 93 + $MAKE %{?_smp_mflags} INSTALL_HDR_PATH=%{buildroot}/usr headers_install 95 94 cp System.map %{buildroot}/boot/System.map-$KERNELRELEASE 96 95 cp .config %{buildroot}/boot/config-$KERNELRELEASE 97 96 bzip2 -9 --keep vmlinux
+1 -1
scripts/setlocalversion
··· 74 74 fi 75 75 76 76 # Check for uncommitted changes 77 - if git status -uno --porcelain | grep -qv '^.. scripts/package'; then 77 + if git diff-index --name-only HEAD | grep -qv "^scripts/package"; then 78 78 printf '%s' -dirty 79 79 fi 80 80
-1
scripts/spdxcheck.py
··· 168 168 self.curline = 0 169 169 try: 170 170 for line in fd: 171 - line = line.decode(locale.getpreferredencoding(False), errors='ignore') 172 171 self.curline += 1 173 172 if self.curline > maxlines: 174 173 break
+2 -2
scripts/unifdef.c
··· 395 395 * When we have processed a group that starts off with a known-false 396 396 * #if/#elif sequence (which has therefore been deleted) followed by a 397 397 * #elif that we don't understand and therefore must keep, we edit the 398 - * latter into a #if to keep the nesting correct. We use strncpy() to 398 + * latter into a #if to keep the nesting correct. We use memcpy() to 399 399 * overwrite the 4 byte token "elif" with "if " without a '\0' byte. 400 400 * 401 401 * When we find a true #elif in a group, the following block will ··· 450 450 static void Itrue (void) { Ftrue(); ignoreon(); } 451 451 static void Ifalse(void) { Ffalse(); ignoreon(); } 452 452 /* modify this line */ 453 - static void Mpass (void) { strncpy(keyword, "if ", 4); Pelif(); } 453 + static void Mpass (void) { memcpy(keyword, "if ", 4); Pelif(); } 454 454 static void Mtrue (void) { keywordedit("else"); state(IS_TRUE_MIDDLE); } 455 455 static void Melif (void) { keywordedit("endif"); state(IS_FALSE_TRAILER); } 456 456 static void Melse (void) { keywordedit("endif"); state(IS_FALSE_ELSE); }
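The unifdef.c change is about intent: as its comment says, `Mpass()` deliberately overwrites the 4-byte token `"elif"` with `"if  "` in place and must not write a terminating NUL, which is `memcpy()`'s contract rather than `strncpy()`'s (and sidesteps compiler warnings about non-terminated `strncpy` output). A sketch of the edit on a hypothetical line buffer:

```c
#include <assert.h>
#include <string.h>

/* Overwrite the 4-byte token "elif" with "if  " in place, preserving the
 * rest of the line and its existing terminator, as unifdef's Mpass() does. */
void edit_elif_to_if(char *keyword)
{
	memcpy(keyword, "if  ", 4);	/* deliberately no NUL copied */
}
```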
+1
security/integrity/digsig_asymmetric.c
··· 106 106 107 107 pks.pkey_algo = "rsa"; 108 108 pks.hash_algo = hash_algo_name[hdr->hash_algo]; 109 + pks.encoding = "pkcs1"; 109 110 pks.digest = (u8 *)data; 110 111 pks.digest_size = datalen; 111 112 pks.s = hdr->sig;
+3
security/selinux/hooks.c
··· 5318 5318 addr_buf = address; 5319 5319 5320 5320 while (walk_size < addrlen) { 5321 + if (walk_size + sizeof(sa_family_t) > addrlen) 5322 + return -EINVAL; 5323 + 5321 5324 addr = addr_buf; 5322 5325 switch (addr->sa_family) { 5323 5326 case AF_UNSPEC:
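The selinux hooks.c guard makes the packed-sockaddr walk verify that at least `sizeof(sa_family_t)` bytes remain before dereferencing `addr->sa_family`, so a truncated trailing record fails with `-EINVAL` instead of reading past the caller-supplied buffer. A simplified walk with the same guard (the record format here is invented for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint16_t family_t;	/* stands in for sa_family_t */

/* Count records in a packed buffer where each record is a 2-byte family
 * followed by a 2-byte body; returns -1 on a truncated record, mirroring
 * the -EINVAL added above. */
int count_records(const uint8_t *buf, size_t len)
{
	size_t walk = 0;
	int n = 0;

	while (walk < len) {
		if (walk + sizeof(family_t) > len)
			return -1;	/* can't even read the family field */
		walk += sizeof(family_t) + 2;	/* family + fixed body */
		n++;
	}
	return n;
}
```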
+12 -1
security/selinux/nlmsgtab.c
··· 80 80 { RTM_NEWSTATS, NETLINK_ROUTE_SOCKET__NLMSG_READ }, 81 81 { RTM_GETSTATS, NETLINK_ROUTE_SOCKET__NLMSG_READ }, 82 82 { RTM_NEWCACHEREPORT, NETLINK_ROUTE_SOCKET__NLMSG_READ }, 83 + { RTM_NEWCHAIN, NETLINK_ROUTE_SOCKET__NLMSG_WRITE }, 84 + { RTM_DELCHAIN, NETLINK_ROUTE_SOCKET__NLMSG_WRITE }, 85 + { RTM_GETCHAIN, NETLINK_ROUTE_SOCKET__NLMSG_READ }, 83 86 }; 84 87 85 88 static const struct nlmsg_perm nlmsg_tcpdiag_perms[] = ··· 161 158 162 159 switch (sclass) { 163 160 case SECCLASS_NETLINK_ROUTE_SOCKET: 164 - /* RTM_MAX always point to RTM_SETxxxx, ie RTM_NEWxxx + 3 */ 161 + /* RTM_MAX always points to RTM_SETxxxx, ie RTM_NEWxxx + 3. 162 + * If the BUILD_BUG_ON() below fails you must update the 163 + * structures at the top of this file with the new mappings 164 + * before updating the BUILD_BUG_ON() macro! 165 + */ 165 166 BUILD_BUG_ON(RTM_MAX != (RTM_NEWCHAIN + 3)); 166 167 err = nlmsg_perm(nlmsg_type, perm, nlmsg_route_perms, 167 168 sizeof(nlmsg_route_perms)); ··· 177 170 break; 178 171 179 172 case SECCLASS_NETLINK_XFRM_SOCKET: 173 + /* If the BUILD_BUG_ON() below fails you must update the 174 + * structures at the top of this file with the new mappings 175 + * before updating the BUILD_BUG_ON() macro! 176 + */ 180 177 BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_MAPPING); 181 178 err = nlmsg_perm(nlmsg_type, perm, nlmsg_xfrm_perms, 182 179 sizeof(nlmsg_xfrm_perms));
+7 -3
security/selinux/ss/mls.c
··· 245 245 char *rangep[2]; 246 246 247 247 if (!pol->mls_enabled) { 248 - if ((def_sid != SECSID_NULL && oldc) || (*scontext) == '\0') 249 - return 0; 250 - return -EINVAL; 248 + /* 249 + * With no MLS, only return -EINVAL if there is a MLS field 250 + * and it did not come from an xattr. 251 + */ 252 + if (oldc && def_sid == SECSID_NULL) 253 + return -EINVAL; 254 + return 0; 251 255 } 252 256 253 257 /*
+45 -35
sound/core/control.c
··· 348 348 return 0; 349 349 } 350 350 351 + /* add a new kcontrol object; call with card->controls_rwsem locked */ 352 + static int __snd_ctl_add(struct snd_card *card, struct snd_kcontrol *kcontrol) 353 + { 354 + struct snd_ctl_elem_id id; 355 + unsigned int idx; 356 + unsigned int count; 357 + 358 + id = kcontrol->id; 359 + if (id.index > UINT_MAX - kcontrol->count) 360 + return -EINVAL; 361 + 362 + if (snd_ctl_find_id(card, &id)) { 363 + dev_err(card->dev, 364 + "control %i:%i:%i:%s:%i is already present\n", 365 + id.iface, id.device, id.subdevice, id.name, id.index); 366 + return -EBUSY; 367 + } 368 + 369 + if (snd_ctl_find_hole(card, kcontrol->count) < 0) 370 + return -ENOMEM; 371 + 372 + list_add_tail(&kcontrol->list, &card->controls); 373 + card->controls_count += kcontrol->count; 374 + kcontrol->id.numid = card->last_numid + 1; 375 + card->last_numid += kcontrol->count; 376 + 377 + id = kcontrol->id; 378 + count = kcontrol->count; 379 + for (idx = 0; idx < count; idx++, id.index++, id.numid++) 380 + snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_ADD, &id); 381 + 382 + return 0; 383 + } 384 + 351 385 /** 352 386 * snd_ctl_add - add the control instance to the card 353 387 * @card: the card instance ··· 398 364 */ 399 365 int snd_ctl_add(struct snd_card *card, struct snd_kcontrol *kcontrol) 400 366 { 401 - struct snd_ctl_elem_id id; 402 - unsigned int idx; 403 - unsigned int count; 404 367 int err = -EINVAL; 405 368 406 369 if (! 
kcontrol) 407 370 return err; 408 371 if (snd_BUG_ON(!card || !kcontrol->info)) 409 372 goto error; 410 - id = kcontrol->id; 411 - if (id.index > UINT_MAX - kcontrol->count) 412 - goto error; 413 373 414 374 down_write(&card->controls_rwsem); 415 - if (snd_ctl_find_id(card, &id)) { 416 - up_write(&card->controls_rwsem); 417 - dev_err(card->dev, "control %i:%i:%i:%s:%i is already present\n", 418 - id.iface, 419 - id.device, 420 - id.subdevice, 421 - id.name, 422 - id.index); 423 - err = -EBUSY; 424 - goto error; 425 - } 426 - if (snd_ctl_find_hole(card, kcontrol->count) < 0) { 427 - up_write(&card->controls_rwsem); 428 - err = -ENOMEM; 429 - goto error; 430 - } 431 - list_add_tail(&kcontrol->list, &card->controls); 432 - card->controls_count += kcontrol->count; 433 - kcontrol->id.numid = card->last_numid + 1; 434 - card->last_numid += kcontrol->count; 435 - id = kcontrol->id; 436 - count = kcontrol->count; 375 + err = __snd_ctl_add(card, kcontrol); 437 376 up_write(&card->controls_rwsem); 438 - for (idx = 0; idx < count; idx++, id.index++, id.numid++) 439 - snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_ADD, &id); 377 + if (err < 0) 378 + goto error; 440 379 return 0; 441 380 442 381 error: ··· 1368 1361 kctl->tlv.c = snd_ctl_elem_user_tlv; 1369 1362 1370 1363 /* This function manage to free the instance on failure. */ 1371 - err = snd_ctl_add(card, kctl); 1372 - if (err < 0) 1373 - return err; 1364 + down_write(&card->controls_rwsem); 1365 + err = __snd_ctl_add(card, kctl); 1366 + if (err < 0) { 1367 + snd_ctl_free_one(kctl); 1368 + goto unlock; 1369 + } 1374 1370 offset = snd_ctl_get_ioff(kctl, &info->id); 1375 1371 snd_ctl_build_ioff(&info->id, kctl, offset); 1376 1372 /* ··· 1384 1374 * which locks the element. 1385 1375 */ 1386 1376 1387 - down_write(&card->controls_rwsem); 1388 1377 card->user_ctl_count++; 1389 - up_write(&card->controls_rwsem); 1390 1378 1379 + unlock: 1380 + up_write(&card->controls_rwsem); 1391 1381 return 0; 1392 1382 } 1393 1383
+3 -3
sound/core/oss/pcm_oss.c
··· 1062 1062 runtime->oss.channels = params_channels(params); 1063 1063 runtime->oss.rate = params_rate(params); 1064 1064 1065 - vfree(runtime->oss.buffer); 1066 - runtime->oss.buffer = vmalloc(runtime->oss.period_bytes); 1065 + kvfree(runtime->oss.buffer); 1066 + runtime->oss.buffer = kvzalloc(runtime->oss.period_bytes, GFP_KERNEL); 1067 1067 if (!runtime->oss.buffer) { 1068 1068 err = -ENOMEM; 1069 1069 goto failure; ··· 2328 2328 { 2329 2329 struct snd_pcm_runtime *runtime; 2330 2330 runtime = substream->runtime; 2331 - vfree(runtime->oss.buffer); 2331 + kvfree(runtime->oss.buffer); 2332 2332 runtime->oss.buffer = NULL; 2333 2333 #ifdef CONFIG_SND_PCM_OSS_PLUGINS 2334 2334 snd_pcm_oss_plugin_clear(substream);
+3 -3
sound/core/oss/pcm_plugin.c
··· 66 66 return -ENXIO; 67 67 size /= 8; 68 68 if (plugin->buf_frames < frames) { 69 - vfree(plugin->buf); 70 - plugin->buf = vmalloc(size); 69 + kvfree(plugin->buf); 70 + plugin->buf = kvzalloc(size, GFP_KERNEL); 71 71 plugin->buf_frames = frames; 72 72 } 73 73 if (!plugin->buf) { ··· 191 191 if (plugin->private_free) 192 192 plugin->private_free(plugin); 193 193 kfree(plugin->buf_channels); 194 - vfree(plugin->buf); 194 + kvfree(plugin->buf); 195 195 kfree(plugin); 196 196 return 0; 197 197 }
-2
sound/isa/wss/wss_lib.c
··· 1531 1531 if (err < 0) { 1532 1532 if (chip->release_dma) 1533 1533 chip->release_dma(chip, chip->dma_private_data, chip->dma1); 1534 - snd_free_pages(runtime->dma_area, runtime->dma_bytes); 1535 1534 return err; 1536 1535 } 1537 1536 chip->playback_substream = substream; ··· 1571 1572 if (err < 0) { 1572 1573 if (chip->release_dma) 1573 1574 chip->release_dma(chip, chip->dma_private_data, chip->dma2); 1574 - snd_free_pages(runtime->dma_area, runtime->dma_bytes); 1575 1575 return err; 1576 1576 } 1577 1577 chip->capture_substream = substream;
+1 -1
sound/pci/ac97/ac97_codec.c
··· 824 824 { 825 825 struct snd_ac97 *ac97 = snd_kcontrol_chip(kcontrol); 826 826 int reg = kcontrol->private_value & 0xff; 827 - int shift = (kcontrol->private_value >> 8) & 0xff; 827 + int shift = (kcontrol->private_value >> 8) & 0x0f; 828 828 int mask = (kcontrol->private_value >> 16) & 0xff; 829 829 // int invert = (kcontrol->private_value >> 24) & 0xff; 830 830 unsigned short value, old, new;
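The ac97_codec.c fix narrows the shift mask from `0xff` to `0x0f` when decoding `kcontrol->private_value`: the value packs several fields into one word, and a bit position in a 16-bit AC'97 register can never exceed 15, so the wider mask could let adjacent packed bits corrupt the decoded shift. A pack/decode sketch (helper names are invented, and the placement of a second shift in the next nibble is an assumption of this sketch):

```c
#include <assert.h>

/* Assumed layout: reg in bits 0-7, lshift in bits 8-11, rshift in bits
 * 12-15, mask in bits 16-23. */
unsigned long pack(unsigned reg, unsigned lshift, unsigned rshift,
		   unsigned mask)
{
	return reg | (lshift << 8) | (rshift << 12) | (mask << 16);
}

unsigned shift_buggy(unsigned long pv) { return (pv >> 8) & 0xff; }
unsigned shift_fixed(unsigned long pv) { return (pv >> 8) & 0x0f; }
unsigned decode_reg(unsigned long pv)  { return pv & 0xff; }
```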
+2
sound/pci/hda/hda_intel.c
··· 2169 2169 /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */ 2170 2170 SND_PCI_QUIRK(0x1849, 0xc892, "Asrock B85M-ITX", 0), 2171 2171 /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */ 2172 + SND_PCI_QUIRK(0x1849, 0x0397, "Asrock N68C-S UCC", 0), 2173 + /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */ 2172 2174 SND_PCI_QUIRK(0x1849, 0x7662, "Asrock H81M-HDS", 0), 2173 2175 /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */ 2174 2176 SND_PCI_QUIRK(0x1043, 0x8733, "Asus Prime X370-Pro", 0),
+3 -2
sound/pci/hda/patch_ca0132.c
··· 1177 1177 SND_PCI_QUIRK(0x1028, 0x0708, "Alienware 15 R2 2016", QUIRK_ALIENWARE), 1178 1178 SND_PCI_QUIRK(0x1102, 0x0010, "Sound Blaster Z", QUIRK_SBZ), 1179 1179 SND_PCI_QUIRK(0x1102, 0x0023, "Sound Blaster Z", QUIRK_SBZ), 1180 + SND_PCI_QUIRK(0x1102, 0x0033, "Sound Blaster ZxR", QUIRK_SBZ), 1180 1181 SND_PCI_QUIRK(0x1458, 0xA016, "Recon3Di", QUIRK_R3DI), 1181 1182 SND_PCI_QUIRK(0x1458, 0xA026, "Gigabyte G1.Sniper Z97", QUIRK_R3DI), 1182 1183 SND_PCI_QUIRK(0x1458, 0xA036, "Gigabyte GA-Z170X-Gaming 7", QUIRK_R3DI), ··· 8414 8413 8415 8414 snd_hda_power_down(codec); 8416 8415 if (spec->mem_base) 8417 - iounmap(spec->mem_base); 8416 + pci_iounmap(codec->bus->pci, spec->mem_base); 8418 8417 kfree(spec->spec_init_verbs); 8419 8418 kfree(codec->spec); 8420 8419 } ··· 8489 8488 break; 8490 8489 case QUIRK_AE5: 8491 8490 codec_dbg(codec, "%s: QUIRK_AE5 applied.\n", __func__); 8492 - snd_hda_apply_pincfgs(codec, r3di_pincfgs); 8491 + snd_hda_apply_pincfgs(codec, ae5_pincfgs); 8493 8492 break; 8494 8493 } 8495 8494
+37
sound/pci/hda/patch_realtek.c
··· 388 388 case 0x10ec0285: 389 389 case 0x10ec0298: 390 390 case 0x10ec0289: 391 + case 0x10ec0300: 391 392 alc_update_coef_idx(codec, 0x10, 1<<9, 0); 392 393 break; 393 394 case 0x10ec0275: ··· 2831 2830 ALC269_TYPE_ALC215, 2832 2831 ALC269_TYPE_ALC225, 2833 2832 ALC269_TYPE_ALC294, 2833 + ALC269_TYPE_ALC300, 2834 2834 ALC269_TYPE_ALC700, 2835 2835 }; 2836 2836 ··· 2866 2864 case ALC269_TYPE_ALC215: 2867 2865 case ALC269_TYPE_ALC225: 2868 2866 case ALC269_TYPE_ALC294: 2867 + case ALC269_TYPE_ALC300: 2869 2868 case ALC269_TYPE_ALC700: 2870 2869 ssids = alc269_ssids; 2871 2870 break; ··· 5361 5358 spec->gen.preferred_dacs = preferred_pairs; 5362 5359 } 5363 5360 5361 + /* The DAC of NID 0x3 will introduce click/pop noise on headphones, so invalidate it */ 5362 + static void alc285_fixup_invalidate_dacs(struct hda_codec *codec, 5363 + const struct hda_fixup *fix, int action) 5364 + { 5365 + if (action != HDA_FIXUP_ACT_PRE_PROBE) 5366 + return; 5367 + 5368 + snd_hda_override_wcaps(codec, 0x03, 0); 5369 + } 5370 + 5364 5371 /* for hda_fixup_thinkpad_acpi() */ 5365 5372 #include "thinkpad_helper.c" 5366 5373 ··· 5508 5495 ALC255_FIXUP_DELL_HEADSET_MIC, 5509 5496 ALC295_FIXUP_HP_X360, 5510 5497 ALC221_FIXUP_HP_HEADSET_MIC, 5498 + ALC285_FIXUP_LENOVO_HEADPHONE_NOISE, 5499 + ALC295_FIXUP_HP_AUTO_MUTE, 5511 5500 }; 5512 5501 5513 5502 static const struct hda_fixup alc269_fixups[] = { ··· 5674 5659 [ALC269_FIXUP_HP_MUTE_LED_MIC3] = { 5675 5660 .type = HDA_FIXUP_FUNC, 5676 5661 .v.func = alc269_fixup_hp_mute_led_mic3, 5662 + .chained = true, 5663 + .chain_id = ALC295_FIXUP_HP_AUTO_MUTE 5677 5664 }, 5678 5665 [ALC269_FIXUP_HP_GPIO_LED] = { 5679 5666 .type = HDA_FIXUP_FUNC, ··· 6379 6362 .chained = true, 6380 6363 .chain_id = ALC269_FIXUP_HEADSET_MIC 6381 6364 }, 6365 + [ALC285_FIXUP_LENOVO_HEADPHONE_NOISE] = { 6366 + .type = HDA_FIXUP_FUNC, 6367 + .v.func = alc285_fixup_invalidate_dacs, 6368 + }, 6369 + [ALC295_FIXUP_HP_AUTO_MUTE] = { 6370 + .type = HDA_FIXUP_FUNC, 6371 + 
.v.func = alc_fixup_auto_mute_via_amp, 6372 + }, 6382 6373 }; 6383 6374 6384 6375 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 6506 6481 SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), 6507 6482 SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), 6508 6483 SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC), 6484 + SND_PCI_QUIRK(0x103c, 0x820d, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3), 6509 6485 SND_PCI_QUIRK(0x103c, 0x8256, "HP", ALC221_FIXUP_HP_FRONT_MIC), 6510 6486 SND_PCI_QUIRK(0x103c, 0x827e, "HP x360", ALC295_FIXUP_HP_X360), 6511 6487 SND_PCI_QUIRK(0x103c, 0x82bf, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE), ··· 6557 6531 SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8), 6558 6532 SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC), 6559 6533 SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC), 6534 + SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC), 6560 6535 SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC233_FIXUP_LENOVO_MULTI_CODECS), 6561 6536 SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE), 6562 6537 SND_PCI_QUIRK(0x17aa, 0x215e, "Thinkpad L512", ALC269_FIXUP_SKU_IGNORE), ··· 7060 7033 {0x12, 0x90a60130}, 7061 7034 {0x19, 0x03a11020}, 7062 7035 {0x21, 0x0321101f}), 7036 + SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_LENOVO_HEADPHONE_NOISE, 7037 + {0x12, 0x90a60130}, 7038 + {0x14, 0x90170110}, 7039 + {0x19, 0x04a11040}, 7040 + {0x21, 0x04211020}), 7063 7041 SND_HDA_PIN_QUIRK(0x10ec0288, 0x1028, "Dell", ALC288_FIXUP_DELL1_MIC_NO_PRESENCE, 7064 7042 {0x12, 0x90a60120}, 7065 7043 {0x14, 0x90170110}, ··· 7325 7293 spec->codec_variant = ALC269_TYPE_ALC294; 7326 7294 spec->gen.mixer_nid = 0; /* ALC2x4 does not have any loopback mixer path */ 7327 7295 alc_update_coef_idx(codec, 0x6b, 0x0018, (1<<4) | (1<<3)); /* 
UAJ MIC Vref control by verb */ 7296 + break; 7297 + case 0x10ec0300: 7298 + spec->codec_variant = ALC269_TYPE_ALC300; 7299 + spec->gen.mixer_nid = 0; /* no loopback on ALC300 */ 7328 7300 break; 7329 7301 case 0x10ec0700: 7330 7302 case 0x10ec0701: ··· 8440 8404 HDA_CODEC_ENTRY(0x10ec0295, "ALC295", patch_alc269), 8441 8405 HDA_CODEC_ENTRY(0x10ec0298, "ALC298", patch_alc269), 8442 8406 HDA_CODEC_ENTRY(0x10ec0299, "ALC299", patch_alc269), 8407 + HDA_CODEC_ENTRY(0x10ec0300, "ALC300", patch_alc269), 8443 8408 HDA_CODEC_REV_ENTRY(0x10ec0861, 0x100340, "ALC660", patch_alc861), 8444 8409 HDA_CODEC_ENTRY(0x10ec0660, "ALC660-VD", patch_alc861vd), 8445 8410 HDA_CODEC_ENTRY(0x10ec0861, "ALC861", patch_alc861),
+2 -2
sound/pci/hda/thinkpad_helper.c
··· 58 58 removefunc = false; 59 59 } 60 60 if (led_set_func(TPACPI_LED_MICMUTE, false) >= 0 && 61 - snd_hda_gen_add_micmute_led(codec, 62 - update_tpacpi_micmute) > 0) 61 + !snd_hda_gen_add_micmute_led(codec, 62 + update_tpacpi_micmute)) 63 63 removefunc = false; 64 64 } 65 65
+5 -6
sound/soc/codecs/hdac_hdmi.c
··· 2187 2187 */ 2188 2188 snd_hdac_codec_read(hdev, hdev->afg, 0, AC_VERB_SET_POWER_STATE, 2189 2189 AC_PWRST_D3); 2190 - err = snd_hdac_display_power(bus, false); 2191 - if (err < 0) { 2192 - dev_err(dev, "Cannot turn on display power on i915\n"); 2193 - return err; 2194 - } 2195 2190 2196 2191 hlink = snd_hdac_ext_bus_get_link(bus, dev_name(dev)); 2197 2192 if (!hlink) { ··· 2196 2201 2197 2202 snd_hdac_ext_bus_link_put(bus, hlink); 2198 2203 2199 - return 0; 2204 + err = snd_hdac_display_power(bus, false); 2205 + if (err < 0) 2206 + dev_err(dev, "Cannot turn off display power on i915\n"); 2207 + 2208 + return err; 2200 2209 } 2201 2210 2202 2211 static int hdac_hdmi_runtime_resume(struct device *dev)
+1 -1
sound/soc/codecs/pcm186x.h
··· 139 139 #define PCM186X_MAX_REGISTER PCM186X_CURR_TRIM_CTRL 140 140 141 141 /* PCM186X_PAGE */ 142 - #define PCM186X_RESET 0xff 142 + #define PCM186X_RESET 0xfe 143 143 144 144 /* PCM186X_ADCX_INPUT_SEL_X */ 145 145 #define PCM186X_ADC_INPUT_SEL_POL BIT(7)
+4 -8
sound/soc/codecs/pcm3060.c
··· 198 198 }; 199 199 200 200 static const struct snd_soc_dapm_widget pcm3060_dapm_widgets[] = { 201 - SND_SOC_DAPM_OUTPUT("OUTL+"), 202 - SND_SOC_DAPM_OUTPUT("OUTR+"), 203 - SND_SOC_DAPM_OUTPUT("OUTL-"), 204 - SND_SOC_DAPM_OUTPUT("OUTR-"), 201 + SND_SOC_DAPM_OUTPUT("OUTL"), 202 + SND_SOC_DAPM_OUTPUT("OUTR"), 205 203 206 204 SND_SOC_DAPM_INPUT("INL"), 207 205 SND_SOC_DAPM_INPUT("INR"), 208 206 }; 209 207 210 208 static const struct snd_soc_dapm_route pcm3060_dapm_map[] = { 211 - { "OUTL+", NULL, "Playback" }, 212 - { "OUTR+", NULL, "Playback" }, 213 - { "OUTL-", NULL, "Playback" }, 214 - { "OUTR-", NULL, "Playback" }, 209 + { "OUTL", NULL, "Playback" }, 210 + { "OUTR", NULL, "Playback" }, 215 211 216 212 { "Capture", NULL, "INL" }, 217 213 { "Capture", NULL, "INR" },
+20 -17
sound/soc/codecs/wm_adsp.c
··· 765 765 766 766 static void wm_adsp2_show_fw_status(struct wm_adsp *dsp) 767 767 { 768 - u16 scratch[4]; 768 + unsigned int scratch[4]; 769 + unsigned int addr = dsp->base + ADSP2_SCRATCH0; 770 + unsigned int i; 769 771 int ret; 770 772 771 - ret = regmap_raw_read(dsp->regmap, dsp->base + ADSP2_SCRATCH0, 772 - scratch, sizeof(scratch)); 773 - if (ret) { 774 - adsp_err(dsp, "Failed to read SCRATCH regs: %d\n", ret); 775 - return; 773 + for (i = 0; i < ARRAY_SIZE(scratch); ++i) { 774 + ret = regmap_read(dsp->regmap, addr + i, &scratch[i]); 775 + if (ret) { 776 + adsp_err(dsp, "Failed to read SCRATCH%u: %d\n", i, ret); 777 + return; 778 + } 776 779 } 777 780 778 781 adsp_dbg(dsp, "FW SCRATCH 0:0x%x 1:0x%x 2:0x%x 3:0x%x\n", 779 - be16_to_cpu(scratch[0]), 780 - be16_to_cpu(scratch[1]), 781 - be16_to_cpu(scratch[2]), 782 - be16_to_cpu(scratch[3])); 782 + scratch[0], scratch[1], scratch[2], scratch[3]); 783 783 } 784 784 785 785 static void wm_adsp2v2_show_fw_status(struct wm_adsp *dsp) 786 786 { 787 - u32 scratch[2]; 787 + unsigned int scratch[2]; 788 788 int ret; 789 789 790 - ret = regmap_raw_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH0_1, 791 - scratch, sizeof(scratch)); 792 - 790 + ret = regmap_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH0_1, 791 + &scratch[0]); 793 792 if (ret) { 794 - adsp_err(dsp, "Failed to read SCRATCH regs: %d\n", ret); 793 + adsp_err(dsp, "Failed to read SCRATCH0_1: %d\n", ret); 795 794 return; 796 795 } 797 796 798 - scratch[0] = be32_to_cpu(scratch[0]); 799 - scratch[1] = be32_to_cpu(scratch[1]); 797 + ret = regmap_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH2_3, 798 + &scratch[1]); 799 + if (ret) { 800 + adsp_err(dsp, "Failed to read SCRATCH2_3: %d\n", ret); 801 + return; 802 + } 800 803 801 804 adsp_dbg(dsp, "FW SCRATCH 0:0x%x 1:0x%x 2:0x%x 3:0x%x\n", 802 805 scratch[0] & 0xFFFF,
+23 -3
sound/soc/intel/Kconfig
··· 101 101 codec, then enable this option by saying Y or m. This is a 102 102 recommended option 103 103 104 - config SND_SOC_INTEL_SKYLAKE_SSP_CLK 105 - tristate 106 - 107 104 config SND_SOC_INTEL_SKYLAKE 108 105 tristate "SKL/BXT/KBL/GLK/CNL... Platforms" 109 106 depends on PCI && ACPI 107 + select SND_SOC_INTEL_SKYLAKE_COMMON 108 + help 109 + If you have a Intel Skylake/Broxton/ApolloLake/KabyLake/ 110 + GeminiLake or CannonLake platform with the DSP enabled in the BIOS 111 + then enable this option by saying Y or m. 112 + 113 + if SND_SOC_INTEL_SKYLAKE 114 + 115 + config SND_SOC_INTEL_SKYLAKE_SSP_CLK 116 + tristate 117 + 118 + config SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC 119 + bool "HDAudio codec support" 120 + help 121 + If you have a Intel Skylake/Broxton/ApolloLake/KabyLake/ 122 + GeminiLake or CannonLake platform with an HDaudio codec 123 + then enable this option by saying Y 124 + 125 + config SND_SOC_INTEL_SKYLAKE_COMMON 126 + tristate 110 127 select SND_HDA_EXT_CORE 111 128 select SND_HDA_DSP_LOADER 112 129 select SND_SOC_TOPOLOGY 113 130 select SND_SOC_INTEL_SST 131 + select SND_SOC_HDAC_HDA if SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC 114 132 select SND_SOC_ACPI_INTEL_MATCH 115 133 help 116 134 If you have a Intel Skylake/Broxton/ApolloLake/KabyLake/ 117 135 GeminiLake or CannonLake platform with the DSP enabled in the BIOS 118 136 then enable this option by saying Y or m. 137 + 138 + endif ## SND_SOC_INTEL_SKYLAKE 119 139 120 140 config SND_SOC_ACPI_INTEL_MATCH 121 141 tristate
+14 -10
sound/soc/intel/boards/Kconfig
··· 293 293 Say Y if you have such a device. 294 294 If unsure select "N". 295 295 296 - config SND_SOC_INTEL_SKL_HDA_DSP_GENERIC_MACH 297 - tristate "SKL/KBL/BXT/APL with HDA Codecs" 298 - select SND_SOC_HDAC_HDMI 299 - select SND_SOC_HDAC_HDA 300 - help 301 - This adds support for ASoC machine driver for Intel platforms 302 - SKL/KBL/BXT/APL with iDisp, HDA audio codecs. 303 - Say Y or m if you have such a device. This is a recommended option. 304 - If unsure select "N". 305 - 306 296 config SND_SOC_INTEL_GLK_RT5682_MAX98357A_MACH 307 297 tristate "GLK with RT5682 and MAX98357A in I2S Mode" 308 298 depends on MFD_INTEL_LPSS && I2C && ACPI ··· 308 318 If unsure select "N". 309 319 310 320 endif ## SND_SOC_INTEL_SKYLAKE 321 + 322 + if SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC 323 + 324 + config SND_SOC_INTEL_SKL_HDA_DSP_GENERIC_MACH 325 + tristate "SKL/KBL/BXT/APL with HDA Codecs" 326 + select SND_SOC_HDAC_HDMI 327 + # SND_SOC_HDAC_HDA is already selected 328 + help 329 + This adds support for ASoC machine driver for Intel platforms 330 + SKL/KBL/BXT/APL with iDisp, HDA audio codecs. 331 + Say Y or m if you have such a device. This is a recommended option. 332 + If unsure select "N". 333 + 334 + endif ## SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC 311 335 312 336 endif ## SND_SOC_INTEL_MACH
+29 -3
sound/soc/intel/boards/cht_bsw_max98090_ti.c
··· 19 19 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 20 20 */ 21 21 22 + #include <linux/dmi.h> 22 23 #include <linux/module.h> 23 24 #include <linux/platform_device.h> 24 25 #include <linux/slab.h> ··· 35 34 36 35 #define CHT_PLAT_CLK_3_HZ 19200000 37 36 #define CHT_CODEC_DAI "HiFi" 37 + 38 + #define QUIRK_PMC_PLT_CLK_0 0x01 38 39 39 40 struct cht_mc_private { 40 41 struct clk *mclk; ··· 388 385 .num_controls = ARRAY_SIZE(cht_mc_controls), 389 386 }; 390 387 388 + static const struct dmi_system_id cht_max98090_quirk_table[] = { 389 + { 390 + /* Swanky model Chromebook (Toshiba Chromebook 2) */ 391 + .matches = { 392 + DMI_MATCH(DMI_PRODUCT_NAME, "Swanky"), 393 + }, 394 + .driver_data = (void *)QUIRK_PMC_PLT_CLK_0, 395 + }, 396 + {} 397 + }; 398 + 391 399 static int snd_cht_mc_probe(struct platform_device *pdev) 392 400 { 401 + const struct dmi_system_id *dmi_id; 393 402 struct device *dev = &pdev->dev; 394 403 int ret_val = 0; 395 404 struct cht_mc_private *drv; 405 + const char *mclk_name; 406 + int quirks = 0; 407 + 408 + dmi_id = dmi_first_match(cht_max98090_quirk_table); 409 + if (dmi_id) 410 + quirks = (unsigned long)dmi_id->driver_data; 396 411 397 412 drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL); 398 413 if (!drv) ··· 432 411 snd_soc_card_cht.dev = &pdev->dev; 433 412 snd_soc_card_set_drvdata(&snd_soc_card_cht, drv); 434 413 435 - drv->mclk = devm_clk_get(&pdev->dev, "pmc_plt_clk_3"); 414 + if (quirks & QUIRK_PMC_PLT_CLK_0) 415 + mclk_name = "pmc_plt_clk_0"; 416 + else 417 + mclk_name = "pmc_plt_clk_3"; 418 + 419 + drv->mclk = devm_clk_get(&pdev->dev, mclk_name); 436 420 if (IS_ERR(drv->mclk)) { 437 421 dev_err(&pdev->dev, 438 - "Failed to get MCLK from pmc_plt_clk_3: %ld\n", 439 - PTR_ERR(drv->mclk)); 422 + "Failed to get MCLK from %s: %ld\n", 423 + mclk_name, PTR_ERR(drv->mclk)); 440 424 return PTR_ERR(drv->mclk); 441 425 } 442 426
+24 -8
sound/soc/intel/skylake/skl.c
··· 37 37 #include "skl.h" 38 38 #include "skl-sst-dsp.h" 39 39 #include "skl-sst-ipc.h" 40 + #if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC) 40 41 #include "../../../soc/codecs/hdac_hda.h" 42 + #endif 41 43 42 44 /* 43 45 * initialize the PCI registers ··· 660 658 platform_device_unregister(skl->clk_dev); 661 659 } 662 660 661 + #if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC) 662 + 663 663 #define IDISP_INTEL_VENDOR_ID 0x80860000 664 664 665 665 /* ··· 680 676 #endif 681 677 } 682 678 679 + #endif /* CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC */ 680 + 683 681 /* 684 682 * Probe the given codec address 685 683 */ ··· 691 685 (AC_VERB_PARAMETERS << 8) | AC_PAR_VENDOR_ID; 692 686 unsigned int res = -1; 693 687 struct skl *skl = bus_to_skl(bus); 688 + #if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC) 694 689 struct hdac_hda_priv *hda_codec; 695 - struct hdac_device *hdev; 696 690 int err; 691 + #endif 692 + struct hdac_device *hdev; 697 693 698 694 mutex_lock(&bus->cmd_mutex); 699 695 snd_hdac_bus_send_cmd(bus, cmd); ··· 705 697 return -EIO; 706 698 dev_dbg(bus->dev, "codec #%d probed OK: %x\n", addr, res); 707 699 700 + #if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC) 708 701 hda_codec = devm_kzalloc(&skl->pci->dev, sizeof(*hda_codec), 709 702 GFP_KERNEL); 710 703 if (!hda_codec) ··· 724 715 load_codec_module(&hda_codec->codec); 725 716 } 726 717 return 0; 718 + #else 719 + hdev = devm_kzalloc(&skl->pci->dev, sizeof(*hdev), GFP_KERNEL); 720 + if (!hdev) 721 + return -ENOMEM; 722 + 723 + return snd_hdac_ext_bus_device_init(bus, addr, hdev); 724 + #endif /* CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC */ 727 725 } 728 726 729 727 /* Codec initialization */ ··· 831 815 } 832 816 } 833 817 818 + /* 819 + * we are done probing so decrement link counts 820 + */ 821 + list_for_each_entry(hlink, &bus->hlink_list, list) 822 + snd_hdac_ext_bus_link_put(bus, hlink); 823 + 834 824 if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI)) { 835 825 err = 
snd_hdac_display_power(bus, false); 836 826 if (err < 0) { ··· 845 823 return; 846 824 } 847 825 } 848 - 849 - /* 850 - * we are done probing so decrement link counts 851 - */ 852 - list_for_each_entry(hlink, &bus->hlink_list, list) 853 - snd_hdac_ext_bus_link_put(bus, hlink); 854 826 855 827 /* configure PM */ 856 828 pm_runtime_put_noidle(bus->dev); ··· 886 870 hbus = skl_to_hbus(skl); 887 871 bus = skl_to_bus(skl); 888 872 889 - #if IS_ENABLED(CONFIG_SND_SOC_HDAC_HDA) 873 + #if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC) 890 874 ext_ops = snd_soc_hdac_hda_get_ops(); 891 875 #endif 892 876 snd_hdac_ext_bus_init(bus, &pci->dev, &bus_core_ops, io_ops, ext_ops);
+29 -38
sound/soc/omap/omap-abe-twl6040.c
··· 36 36 #include "../codecs/twl6040.h" 37 37 38 38 struct abe_twl6040 { 39 + struct snd_soc_card card; 40 + struct snd_soc_dai_link dai_links[2]; 39 41 int jack_detection; /* board can detect jack events */ 40 42 int mclk_freq; /* MCLK frequency speed for twl6040 */ 41 43 }; ··· 210 208 ARRAY_SIZE(dmic_audio_map)); 211 209 } 212 210 213 - /* Digital audio interface glue - connects codec <--> CPU */ 214 - static struct snd_soc_dai_link abe_twl6040_dai_links[] = { 215 - { 216 - .name = "TWL6040", 217 - .stream_name = "TWL6040", 218 - .codec_dai_name = "twl6040-legacy", 219 - .codec_name = "twl6040-codec", 220 - .init = omap_abe_twl6040_init, 221 - .ops = &omap_abe_ops, 222 - }, 223 - { 224 - .name = "DMIC", 225 - .stream_name = "DMIC Capture", 226 - .codec_dai_name = "dmic-hifi", 227 - .codec_name = "dmic-codec", 228 - .init = omap_abe_dmic_init, 229 - .ops = &omap_abe_dmic_ops, 230 - }, 231 - }; 232 - 233 - /* Audio machine driver */ 234 - static struct snd_soc_card omap_abe_card = { 235 - .owner = THIS_MODULE, 236 - 237 - .dapm_widgets = twl6040_dapm_widgets, 238 - .num_dapm_widgets = ARRAY_SIZE(twl6040_dapm_widgets), 239 - .dapm_routes = audio_map, 240 - .num_dapm_routes = ARRAY_SIZE(audio_map), 241 - }; 242 - 243 211 static int omap_abe_probe(struct platform_device *pdev) 244 212 { 245 213 struct device_node *node = pdev->dev.of_node; 246 - struct snd_soc_card *card = &omap_abe_card; 214 + struct snd_soc_card *card; 247 215 struct device_node *dai_node; 248 216 struct abe_twl6040 *priv; 249 217 int num_links = 0; ··· 224 252 return -ENODEV; 225 253 } 226 254 227 - card->dev = &pdev->dev; 228 - 229 255 priv = devm_kzalloc(&pdev->dev, sizeof(struct abe_twl6040), GFP_KERNEL); 230 256 if (priv == NULL) 231 257 return -ENOMEM; 258 + 259 + card = &priv->card; 260 + card->dev = &pdev->dev; 261 + card->owner = THIS_MODULE; 262 + card->dapm_widgets = twl6040_dapm_widgets; 263 + card->num_dapm_widgets = ARRAY_SIZE(twl6040_dapm_widgets); 264 + card->dapm_routes = 
audio_map; 265 + card->num_dapm_routes = ARRAY_SIZE(audio_map); 232 266 233 267 if (snd_soc_of_parse_card_name(card, "ti,model")) { 234 268 dev_err(&pdev->dev, "Card name is not provided\n"); ··· 252 274 dev_err(&pdev->dev, "McPDM node is not provided\n"); 253 275 return -EINVAL; 254 276 } 255 - abe_twl6040_dai_links[0].cpu_of_node = dai_node; 256 - abe_twl6040_dai_links[0].platform_of_node = dai_node; 277 + 278 + priv->dai_links[0].name = "DMIC"; 279 + priv->dai_links[0].stream_name = "TWL6040"; 280 + priv->dai_links[0].cpu_of_node = dai_node; 281 + priv->dai_links[0].platform_of_node = dai_node; 282 + priv->dai_links[0].codec_dai_name = "twl6040-legacy"; 283 + priv->dai_links[0].codec_name = "twl6040-codec"; 284 + priv->dai_links[0].init = omap_abe_twl6040_init; 285 + priv->dai_links[0].ops = &omap_abe_ops; 257 286 258 287 dai_node = of_parse_phandle(node, "ti,dmic", 0); 259 288 if (dai_node) { 260 289 num_links = 2; 261 - abe_twl6040_dai_links[1].cpu_of_node = dai_node; 262 - abe_twl6040_dai_links[1].platform_of_node = dai_node; 290 + priv->dai_links[1].name = "TWL6040"; 291 + priv->dai_links[1].stream_name = "DMIC Capture"; 292 + priv->dai_links[1].cpu_of_node = dai_node; 293 + priv->dai_links[1].platform_of_node = dai_node; 294 + priv->dai_links[1].codec_dai_name = "dmic-hifi"; 295 + priv->dai_links[1].codec_name = "dmic-codec"; 296 + priv->dai_links[1].init = omap_abe_dmic_init; 297 + priv->dai_links[1].ops = &omap_abe_dmic_ops; 263 298 } else { 264 299 num_links = 1; 265 300 } ··· 291 300 return -ENODEV; 292 301 } 293 302 294 - card->dai_link = abe_twl6040_dai_links; 303 + card->dai_link = priv->dai_links; 295 304 card->num_links = num_links; 296 305 297 306 snd_soc_card_set_drvdata(card, priv);
+9
sound/soc/omap/omap-dmic.c
··· 48 48 struct device *dev; 49 49 void __iomem *io_base; 50 50 struct clk *fclk; 51 + struct pm_qos_request pm_qos_req; 52 + int latency; 51 53 int fclk_freq; 52 54 int out_freq; 53 55 int clk_div; ··· 125 123 struct omap_dmic *dmic = snd_soc_dai_get_drvdata(dai); 126 124 127 125 mutex_lock(&dmic->mutex); 126 + 127 + pm_qos_remove_request(&dmic->pm_qos_req); 128 128 129 129 if (!dai->active) 130 130 dmic->active = 0; ··· 232 228 /* packet size is threshold * channels */ 233 229 dma_data = snd_soc_dai_get_dma_data(dai, substream); 234 230 dma_data->maxburst = dmic->threshold * channels; 231 + dmic->latency = (OMAP_DMIC_THRES_MAX - dmic->threshold) * USEC_PER_SEC / 232 + params_rate(params); 235 233 236 234 return 0; 237 235 } ··· 243 237 { 244 238 struct omap_dmic *dmic = snd_soc_dai_get_drvdata(dai); 245 239 u32 ctrl; 240 + 241 + if (pm_qos_request_active(&dmic->pm_qos_req)) 242 + pm_qos_update_request(&dmic->pm_qos_req, dmic->latency); 246 243 247 244 /* Configure uplink threshold */ 248 245 omap_dmic_write(dmic, OMAP_DMIC_FIFO_CTRL_REG, dmic->threshold);
+3 -3
sound/soc/omap/omap-mcbsp.c
··· 308 308 pkt_size = channels; 309 309 } 310 310 311 - latency = ((((buffer_size - pkt_size) / channels) * 1000) 312 - / (params->rate_num / params->rate_den)); 313 - 311 + latency = (buffer_size - pkt_size) / channels; 312 + latency = latency * USEC_PER_SEC / 313 + (params->rate_num / params->rate_den); 314 314 mcbsp->latency[substream->stream] = latency; 315 315 316 316 omap_mcbsp_set_threshold(substream, pkt_size);
+42 -1
sound/soc/omap/omap-mcpdm.c
··· 54 54 unsigned long phys_base; 55 55 void __iomem *io_base; 56 56 int irq; 57 + struct pm_qos_request pm_qos_req; 58 + int latency[2]; 57 59 58 60 struct mutex mutex; 59 61 ··· 279 277 struct snd_soc_dai *dai) 280 278 { 281 279 struct omap_mcpdm *mcpdm = snd_soc_dai_get_drvdata(dai); 280 + int tx = (substream->stream == SNDRV_PCM_STREAM_PLAYBACK); 281 + int stream1 = tx ? SNDRV_PCM_STREAM_PLAYBACK : SNDRV_PCM_STREAM_CAPTURE; 282 + int stream2 = tx ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK; 282 283 283 284 mutex_lock(&mcpdm->mutex); 284 285 ··· 294 289 } 295 290 } 296 291 292 + if (mcpdm->latency[stream2]) 293 + pm_qos_update_request(&mcpdm->pm_qos_req, 294 + mcpdm->latency[stream2]); 295 + else if (mcpdm->latency[stream1]) 296 + pm_qos_remove_request(&mcpdm->pm_qos_req); 297 + 298 + mcpdm->latency[stream1] = 0; 299 + 297 300 mutex_unlock(&mcpdm->mutex); 298 301 } 299 302 ··· 313 300 int stream = substream->stream; 314 301 struct snd_dmaengine_dai_dma_data *dma_data; 315 302 u32 threshold; 316 - int channels; 303 + int channels, latency; 317 304 int link_mask = 0; 318 305 319 306 channels = params_channels(params); ··· 357 344 358 345 dma_data->maxburst = 359 346 (MCPDM_DN_THRES_MAX - threshold) * channels; 347 + latency = threshold; 360 348 } else { 361 349 /* If playback is not running assume a stereo stream to come */ 362 350 if (!mcpdm->config[!stream].link_mask) 363 351 mcpdm->config[!stream].link_mask = (0x3 << 3); 364 352 365 353 dma_data->maxburst = threshold * channels; 354 + latency = (MCPDM_DN_THRES_MAX - threshold); 366 355 } 356 + 357 + /* 358 + * The DMA must act to a DMA request within latency time (usec) to avoid 359 + * under/overflow 360 + */ 361 + mcpdm->latency[stream] = latency * USEC_PER_SEC / params_rate(params); 362 + 363 + if (!mcpdm->latency[stream]) 364 + mcpdm->latency[stream] = 10; 367 365 368 366 /* Check if we need to restart McPDM with this stream */ 369 367 if (mcpdm->config[stream].link_mask && ··· 390 366 struct 
snd_soc_dai *dai) 391 367 { 392 368 struct omap_mcpdm *mcpdm = snd_soc_dai_get_drvdata(dai); 369 + struct pm_qos_request *pm_qos_req = &mcpdm->pm_qos_req; 370 + int tx = (substream->stream == SNDRV_PCM_STREAM_PLAYBACK); 371 + int stream1 = tx ? SNDRV_PCM_STREAM_PLAYBACK : SNDRV_PCM_STREAM_CAPTURE; 372 + int stream2 = tx ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK; 373 + int latency = mcpdm->latency[stream2]; 374 + 375 + /* Prevent omap hardware from hitting off between FIFO fills */ 376 + if (!latency || mcpdm->latency[stream1] < latency) 377 + latency = mcpdm->latency[stream1]; 378 + 379 + if (pm_qos_request_active(pm_qos_req)) 380 + pm_qos_update_request(pm_qos_req, latency); 381 + else if (latency) 382 + pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency); 393 383 394 384 if (!omap_mcpdm_active(mcpdm)) { 395 385 omap_mcpdm_start(mcpdm); ··· 464 426 465 427 free_irq(mcpdm->irq, (void *)mcpdm); 466 428 pm_runtime_disable(mcpdm->dev); 429 + 430 + if (pm_qos_request_active(&mcpdm->pm_qos_req)) 431 + pm_qos_remove_request(&mcpdm->pm_qos_req); 467 432 468 433 return 0; 469 434 }
+6 -3
sound/soc/qcom/common.c
··· 13 13 struct device_node *cpu = NULL; 14 14 struct device *dev = card->dev; 15 15 struct snd_soc_dai_link *link; 16 + struct of_phandle_args args; 16 17 int ret, num_links; 17 18 18 19 ret = snd_soc_of_parse_card_name(card, "model"); ··· 48 47 goto err; 49 48 } 50 49 51 - link->cpu_of_node = of_parse_phandle(cpu, "sound-dai", 0); 52 - if (!link->cpu_of_node) { 50 + ret = of_parse_phandle_with_args(cpu, "sound-dai", 51 + "#sound-dai-cells", 0, &args); 52 + if (ret) { 53 53 dev_err(card->dev, "error getting cpu phandle\n"); 54 - ret = -EINVAL; 55 54 goto err; 56 55 } 56 + link->cpu_of_node = args.np; 57 + link->id = args.args[0]; 57 58 58 59 ret = snd_soc_of_get_dai_name(cpu, &link->cpu_dai_name); 59 60 if (ret) {
+138 -138
sound/soc/qcom/qdsp6/q6afe-dai.c
··· 1112 1112 } 1113 1113 1114 1114 static const struct snd_soc_dapm_widget q6afe_dai_widgets[] = { 1115 - SND_SOC_DAPM_AIF_OUT("HDMI_RX", "HDMI Playback", 0, 0, 0, 0), 1116 - SND_SOC_DAPM_AIF_OUT("SLIMBUS_0_RX", "Slimbus Playback", 0, 0, 0, 0), 1117 - SND_SOC_DAPM_AIF_OUT("SLIMBUS_1_RX", "Slimbus1 Playback", 0, 0, 0, 0), 1118 - SND_SOC_DAPM_AIF_OUT("SLIMBUS_2_RX", "Slimbus2 Playback", 0, 0, 0, 0), 1119 - SND_SOC_DAPM_AIF_OUT("SLIMBUS_3_RX", "Slimbus3 Playback", 0, 0, 0, 0), 1120 - SND_SOC_DAPM_AIF_OUT("SLIMBUS_4_RX", "Slimbus4 Playback", 0, 0, 0, 0), 1121 - SND_SOC_DAPM_AIF_OUT("SLIMBUS_5_RX", "Slimbus5 Playback", 0, 0, 0, 0), 1122 - SND_SOC_DAPM_AIF_OUT("SLIMBUS_6_RX", "Slimbus6 Playback", 0, 0, 0, 0), 1123 - SND_SOC_DAPM_AIF_IN("SLIMBUS_0_TX", "Slimbus Capture", 0, 0, 0, 0), 1124 - SND_SOC_DAPM_AIF_IN("SLIMBUS_1_TX", "Slimbus1 Capture", 0, 0, 0, 0), 1125 - SND_SOC_DAPM_AIF_IN("SLIMBUS_2_TX", "Slimbus2 Capture", 0, 0, 0, 0), 1126 - SND_SOC_DAPM_AIF_IN("SLIMBUS_3_TX", "Slimbus3 Capture", 0, 0, 0, 0), 1127 - SND_SOC_DAPM_AIF_IN("SLIMBUS_4_TX", "Slimbus4 Capture", 0, 0, 0, 0), 1128 - SND_SOC_DAPM_AIF_IN("SLIMBUS_5_TX", "Slimbus5 Capture", 0, 0, 0, 0), 1129 - SND_SOC_DAPM_AIF_IN("SLIMBUS_6_TX", "Slimbus6 Capture", 0, 0, 0, 0), 1130 - SND_SOC_DAPM_AIF_OUT("QUAT_MI2S_RX", "Quaternary MI2S Playback", 1115 + SND_SOC_DAPM_AIF_IN("HDMI_RX", NULL, 0, 0, 0, 0), 1116 + SND_SOC_DAPM_AIF_IN("SLIMBUS_0_RX", NULL, 0, 0, 0, 0), 1117 + SND_SOC_DAPM_AIF_IN("SLIMBUS_1_RX", NULL, 0, 0, 0, 0), 1118 + SND_SOC_DAPM_AIF_IN("SLIMBUS_2_RX", NULL, 0, 0, 0, 0), 1119 + SND_SOC_DAPM_AIF_IN("SLIMBUS_3_RX", NULL, 0, 0, 0, 0), 1120 + SND_SOC_DAPM_AIF_IN("SLIMBUS_4_RX", NULL, 0, 0, 0, 0), 1121 + SND_SOC_DAPM_AIF_IN("SLIMBUS_5_RX", NULL, 0, 0, 0, 0), 1122 + SND_SOC_DAPM_AIF_IN("SLIMBUS_6_RX", NULL, 0, 0, 0, 0), 1123 + SND_SOC_DAPM_AIF_OUT("SLIMBUS_0_TX", NULL, 0, 0, 0, 0), 1124 + SND_SOC_DAPM_AIF_OUT("SLIMBUS_1_TX", NULL, 0, 0, 0, 0), 1125 + SND_SOC_DAPM_AIF_OUT("SLIMBUS_2_TX", NULL, 0, 0, 0, 0), 
1126 + SND_SOC_DAPM_AIF_OUT("SLIMBUS_3_TX", NULL, 0, 0, 0, 0), 1127 + SND_SOC_DAPM_AIF_OUT("SLIMBUS_4_TX", NULL, 0, 0, 0, 0), 1128 + SND_SOC_DAPM_AIF_OUT("SLIMBUS_5_TX", NULL, 0, 0, 0, 0), 1129 + SND_SOC_DAPM_AIF_OUT("SLIMBUS_6_TX", NULL, 0, 0, 0, 0), 1130 + SND_SOC_DAPM_AIF_IN("QUAT_MI2S_RX", NULL, 1131 1131 0, 0, 0, 0), 1132 - SND_SOC_DAPM_AIF_IN("QUAT_MI2S_TX", "Quaternary MI2S Capture", 1132 + SND_SOC_DAPM_AIF_OUT("QUAT_MI2S_TX", NULL, 1133 1133 0, 0, 0, 0), 1134 - SND_SOC_DAPM_AIF_OUT("TERT_MI2S_RX", "Tertiary MI2S Playback", 1134 + SND_SOC_DAPM_AIF_IN("TERT_MI2S_RX", NULL, 1135 1135 0, 0, 0, 0), 1136 - SND_SOC_DAPM_AIF_IN("TERT_MI2S_TX", "Tertiary MI2S Capture", 1136 + SND_SOC_DAPM_AIF_OUT("TERT_MI2S_TX", NULL, 1137 1137 0, 0, 0, 0), 1138 - SND_SOC_DAPM_AIF_OUT("SEC_MI2S_RX", "Secondary MI2S Playback", 1138 + SND_SOC_DAPM_AIF_IN("SEC_MI2S_RX", NULL, 1139 1139 0, 0, 0, 0), 1140 - SND_SOC_DAPM_AIF_IN("SEC_MI2S_TX", "Secondary MI2S Capture", 1140 + SND_SOC_DAPM_AIF_OUT("SEC_MI2S_TX", NULL, 1141 1141 0, 0, 0, 0), 1142 - SND_SOC_DAPM_AIF_OUT("SEC_MI2S_RX_SD1", 1142 + SND_SOC_DAPM_AIF_IN("SEC_MI2S_RX_SD1", 1143 1143 "Secondary MI2S Playback SD1", 1144 1144 0, 0, 0, 0), 1145 - SND_SOC_DAPM_AIF_OUT("PRI_MI2S_RX", "Primary MI2S Playback", 1145 + SND_SOC_DAPM_AIF_IN("PRI_MI2S_RX", NULL, 1146 1146 0, 0, 0, 0), 1147 - SND_SOC_DAPM_AIF_IN("PRI_MI2S_TX", "Primary MI2S Capture", 1147 + SND_SOC_DAPM_AIF_OUT("PRI_MI2S_TX", NULL, 1148 1148 0, 0, 0, 0), 1149 1149 1150 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_0", "Primary TDM0 Playback", 1150 + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_0", NULL, 1151 1151 0, 0, 0, 0), 1152 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_1", "Primary TDM1 Playback", 1152 + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_1", NULL, 1153 1153 0, 0, 0, 0), 1154 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_2", "Primary TDM2 Playback", 1154 + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_2", NULL, 1155 1155 0, 0, 0, 0), 1156 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_3", "Primary TDM3 Playback", 1156 
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_3", NULL, 1157 1157 0, 0, 0, 0), 1158 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_4", "Primary TDM4 Playback", 1158 + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_4", NULL, 1159 1159 0, 0, 0, 0), 1160 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_5", "Primary TDM5 Playback", 1160 + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_5", NULL, 1161 1161 0, 0, 0, 0), 1162 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_6", "Primary TDM6 Playback", 1162 + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_6", NULL, 1163 1163 0, 0, 0, 0), 1164 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_7", "Primary TDM7 Playback", 1164 + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_7", NULL, 1165 1165 0, 0, 0, 0), 1166 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_0", "Primary TDM0 Capture", 1166 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_0", NULL, 1167 1167 0, 0, 0, 0), 1168 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_1", "Primary TDM1 Capture", 1168 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_1", NULL, 1169 1169 0, 0, 0, 0), 1170 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_2", "Primary TDM2 Capture", 1170 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_2", NULL, 1171 1171 0, 0, 0, 0), 1172 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_3", "Primary TDM3 Capture", 1172 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_3", NULL, 1173 1173 0, 0, 0, 0), 1174 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_4", "Primary TDM4 Capture", 1174 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_4", NULL, 1175 1175 0, 0, 0, 0), 1176 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_5", "Primary TDM5 Capture", 1176 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_5", NULL, 1177 1177 0, 0, 0, 0), 1178 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_6", "Primary TDM6 Capture", 1178 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_6", NULL, 1179 1179 0, 0, 0, 0), 1180 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_7", "Primary TDM7 Capture", 1181 - 0, 0, 0, 0), 1182 - 1183 - SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_0", "Secondary TDM0 Playback", 1184 - 0, 0, 0, 0), 1185 - SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_1", "Secondary TDM1 Playback", 1186 - 0, 0, 0, 0), 1187 - 
SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_2", "Secondary TDM2 Playback", 1188 - 0, 0, 0, 0), 1189 - SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_3", "Secondary TDM3 Playback", 1190 - 0, 0, 0, 0), 1191 - SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_4", "Secondary TDM4 Playback", 1192 - 0, 0, 0, 0), 1193 - SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_5", "Secondary TDM5 Playback", 1194 - 0, 0, 0, 0), 1195 - SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_6", "Secondary TDM6 Playback", 1196 - 0, 0, 0, 0), 1197 - SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_7", "Secondary TDM7 Playback", 1198 - 0, 0, 0, 0), 1199 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_0", "Secondary TDM0 Capture", 1200 - 0, 0, 0, 0), 1201 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_1", "Secondary TDM1 Capture", 1202 - 0, 0, 0, 0), 1203 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_2", "Secondary TDM2 Capture", 1204 - 0, 0, 0, 0), 1205 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_3", "Secondary TDM3 Capture", 1206 - 0, 0, 0, 0), 1207 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_4", "Secondary TDM4 Capture", 1208 - 0, 0, 0, 0), 1209 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_5", "Secondary TDM5 Capture", 1210 - 0, 0, 0, 0), 1211 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_6", "Secondary TDM6 Capture", 1212 - 0, 0, 0, 0), 1213 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_7", "Secondary TDM7 Capture", 1180 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_7", NULL, 1214 1181 0, 0, 0, 0), 1215 1182 1216 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_0", "Tertiary TDM0 Playback", 1183 + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_0", NULL, 1217 1184 0, 0, 0, 0), 1218 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_1", "Tertiary TDM1 Playback", 1185 + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_1", NULL, 1219 1186 0, 0, 0, 0), 1220 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_2", "Tertiary TDM2 Playback", 1187 + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_2", NULL, 1221 1188 0, 0, 0, 0), 1222 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_3", "Tertiary TDM3 Playback", 1189 + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_3", NULL, 1223 1190 0, 0, 0, 0), 1224 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_4", "Tertiary TDM4 Playback", 1191 + 
SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_4", NULL,
1225 1192 0, 0, 0, 0),
1226 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_5", "Tertiary TDM5 Playback",
1193 + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_5", NULL,
1227 1194 0, 0, 0, 0),
1228 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_6", "Tertiary TDM6 Playback",
1195 + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_6", NULL,
1229 1196 0, 0, 0, 0),
1230 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_7", "Tertiary TDM7 Playback",
1197 + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_7", NULL,
1231 1198 0, 0, 0, 0),
1232 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_0", "Tertiary TDM0 Capture",
1199 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_0", NULL,
1233 1200 0, 0, 0, 0),
1234 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_1", "Tertiary TDM1 Capture",
1201 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_1", NULL,
1235 1202 0, 0, 0, 0),
1236 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_2", "Tertiary TDM2 Capture",
1203 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_2", NULL,
1237 1204 0, 0, 0, 0),
1238 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_3", "Tertiary TDM3 Capture",
1205 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_3", NULL,
1239 1206 0, 0, 0, 0),
1240 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_4", "Tertiary TDM4 Capture",
1207 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_4", NULL,
1241 1208 0, 0, 0, 0),
1242 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_5", "Tertiary TDM5 Capture",
1209 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_5", NULL,
1243 1210 0, 0, 0, 0),
1244 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_6", "Tertiary TDM6 Capture",
1211 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_6", NULL,
1245 1212 0, 0, 0, 0),
1246 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_7", "Tertiary TDM7 Capture",
1247 - 0, 0, 0, 0),
1248 -
1249 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_0", "Quaternary TDM0 Playback",
1250 - 0, 0, 0, 0),
1251 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_1", "Quaternary TDM1 Playback",
1252 - 0, 0, 0, 0),
1253 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_2", "Quaternary TDM2 Playback",
1254 - 0, 0, 0, 0),
1255 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_3", "Quaternary TDM3 Playback",
1256 - 0, 0, 0, 0),
1257 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_4", "Quaternary TDM4 Playback",
1258 - 0, 0, 0, 0),
1259 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_5", "Quaternary TDM5 Playback",
1260 - 0, 0, 0, 0),
1261 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_6", "Quaternary TDM6 Playback",
1262 - 0, 0, 0, 0),
1263 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_7", "Quaternary TDM7 Playback",
1264 - 0, 0, 0, 0),
1265 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_0", "Quaternary TDM0 Capture",
1266 - 0, 0, 0, 0),
1267 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_1", "Quaternary TDM1 Capture",
1268 - 0, 0, 0, 0),
1269 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_2", "Quaternary TDM2 Capture",
1270 - 0, 0, 0, 0),
1271 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_3", "Quaternary TDM3 Capture",
1272 - 0, 0, 0, 0),
1273 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_4", "Quaternary TDM4 Capture",
1274 - 0, 0, 0, 0),
1275 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_5", "Quaternary TDM5 Capture",
1276 - 0, 0, 0, 0),
1277 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_6", "Quaternary TDM6 Capture",
1278 - 0, 0, 0, 0),
1279 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_7", "Quaternary TDM7 Capture",
1213 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_7", NULL,
1280 1214 0, 0, 0, 0),
1281 1215
1282 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_0", "Quinary TDM0 Playback",
1216 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_0", NULL,
1283 1217 0, 0, 0, 0),
1284 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_1", "Quinary TDM1 Playback",
1218 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_1", NULL,
1285 1219 0, 0, 0, 0),
1286 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_2", "Quinary TDM2 Playback",
1220 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_2", NULL,
1287 1221 0, 0, 0, 0),
1288 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_3", "Quinary TDM3 Playback",
1222 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_3", NULL,
1289 1223 0, 0, 0, 0),
1290 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_4", "Quinary TDM4 Playback",
1224 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_4", NULL,
1291 1225 0, 0, 0, 0),
1292 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_5", "Quinary TDM5 Playback",
1226 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_5", NULL,
1293 1227 0, 0, 0, 0),
1294 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_6", "Quinary TDM6 Playback",
1228 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_6", NULL,
1295 1229 0, 0, 0, 0),
1296 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_7", "Quinary TDM7 Playback",
1230 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_7", NULL,
1297 1231 0, 0, 0, 0),
1298 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_0", "Quinary TDM0 Capture",
1232 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_0", NULL,
1299 1233 0, 0, 0, 0),
1300 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_1", "Quinary TDM1 Capture",
1234 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_1", NULL,
1301 1235 0, 0, 0, 0),
1302 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_2", "Quinary TDM2 Capture",
1236 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_2", NULL,
1303 1237 0, 0, 0, 0),
1304 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_3", "Quinary TDM3 Capture",
1238 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_3", NULL,
1305 1239 0, 0, 0, 0),
1306 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_4", "Quinary TDM4 Capture",
1240 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_4", NULL,
1307 1241 0, 0, 0, 0),
1308 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_5", "Quinary TDM5 Capture",
1242 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_5", NULL,
1309 1243 0, 0, 0, 0),
1310 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_6", "Quinary TDM6 Capture",
1244 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_6", NULL,
1311 1245 0, 0, 0, 0),
1312 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_7", "Quinary TDM7 Capture",
1246 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_7", NULL,
1247 + 0, 0, 0, 0),
1248 +
1249 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_0", NULL,
1250 + 0, 0, 0, 0),
1251 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_1", NULL,
1252 + 0, 0, 0, 0),
1253 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_2", NULL,
1254 + 0, 0, 0, 0),
1255 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_3", NULL,
1256 + 0, 0, 0, 0),
1257 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_4", NULL,
1258 + 0, 0, 0, 0),
1259 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_5", NULL,
1260 + 0, 0, 0, 0),
1261 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_6", NULL,
1262 + 0, 0, 0, 0),
1263 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_7", NULL,
1264 + 0, 0, 0, 0),
1265 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_0", NULL,
1266 + 0, 0, 0, 0),
1267 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_1", NULL,
1268 + 0, 0, 0, 0),
1269 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_2", NULL,
1270 + 0, 0, 0, 0),
1271 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_3", NULL,
1272 + 0, 0, 0, 0),
1273 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_4", NULL,
1274 + 0, 0, 0, 0),
1275 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_5", NULL,
1276 + 0, 0, 0, 0),
1277 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_6", NULL,
1278 + 0, 0, 0, 0),
1279 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_7", NULL,
1280 + 0, 0, 0, 0),
1281 +
1282 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_0", NULL,
1283 + 0, 0, 0, 0),
1284 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_1", NULL,
1285 + 0, 0, 0, 0),
1286 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_2", NULL,
1287 + 0, 0, 0, 0),
1288 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_3", NULL,
1289 + 0, 0, 0, 0),
1290 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_4", NULL,
1291 + 0, 0, 0, 0),
1292 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_5", NULL,
1293 + 0, 0, 0, 0),
1294 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_6", NULL,
1295 + 0, 0, 0, 0),
1296 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_7", NULL,
1297 + 0, 0, 0, 0),
1298 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_0", NULL,
1299 + 0, 0, 0, 0),
1300 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_1", NULL,
1301 + 0, 0, 0, 0),
1302 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_2", NULL,
1303 + 0, 0, 0, 0),
1304 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_3", NULL,
1305 + 0, 0, 0, 0),
1306 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_4", NULL,
1307 + 0, 0, 0, 0),
1308 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_5", NULL,
1309 + 0, 0, 0, 0),
1310 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_6", NULL,
1311 + 0, 0, 0, 0),
1312 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_7", NULL,
1313 1313 0, 0, 0, 0),
1314 1314 };
1315 1315
+8 -8
sound/soc/qcom/qdsp6/q6afe.c
···
49 49 #define AFE_PORT_I2S_SD1 0x2
50 50 #define AFE_PORT_I2S_SD2 0x3
51 51 #define AFE_PORT_I2S_SD3 0x4
52 - #define AFE_PORT_I2S_SD0_MASK BIT(0x1)
53 - #define AFE_PORT_I2S_SD1_MASK BIT(0x2)
54 - #define AFE_PORT_I2S_SD2_MASK BIT(0x3)
55 - #define AFE_PORT_I2S_SD3_MASK BIT(0x4)
56 - #define AFE_PORT_I2S_SD0_1_MASK GENMASK(2, 1)
57 - #define AFE_PORT_I2S_SD2_3_MASK GENMASK(4, 3)
58 - #define AFE_PORT_I2S_SD0_1_2_MASK GENMASK(3, 1)
59 - #define AFE_PORT_I2S_SD0_1_2_3_MASK GENMASK(4, 1)
52 + #define AFE_PORT_I2S_SD0_MASK BIT(0x0)
53 + #define AFE_PORT_I2S_SD1_MASK BIT(0x1)
54 + #define AFE_PORT_I2S_SD2_MASK BIT(0x2)
55 + #define AFE_PORT_I2S_SD3_MASK BIT(0x3)
56 + #define AFE_PORT_I2S_SD0_1_MASK GENMASK(1, 0)
57 + #define AFE_PORT_I2S_SD2_3_MASK GENMASK(3, 2)
58 + #define AFE_PORT_I2S_SD0_1_2_MASK GENMASK(2, 0)
59 + #define AFE_PORT_I2S_SD0_1_2_3_MASK GENMASK(3, 0)
60 60 #define AFE_PORT_I2S_QUAD01 0x5
61 61 #define AFE_PORT_I2S_QUAD23 0x6
62 62 #define AFE_PORT_I2S_6CHS 0x7
-33
sound/soc/qcom/qdsp6/q6asm-dai.c
···
122 122 .rate_max = 48000, \
123 123 }, \
124 124 .name = "MultiMedia"#num, \
125 - .probe = fe_dai_probe, \
126 125 .id = MSM_FRONTEND_DAI_MULTIMEDIA##num, \
127 126 }
128 127
···
509 510 }
510 511 }
511 512 }
512 -
513 - static const struct snd_soc_dapm_route afe_pcm_routes[] = {
514 - {"MM_DL1", NULL, "MultiMedia1 Playback" },
515 - {"MM_DL2", NULL, "MultiMedia2 Playback" },
516 - {"MM_DL3", NULL, "MultiMedia3 Playback" },
517 - {"MM_DL4", NULL, "MultiMedia4 Playback" },
518 - {"MM_DL5", NULL, "MultiMedia5 Playback" },
519 - {"MM_DL6", NULL, "MultiMedia6 Playback" },
520 - {"MM_DL7", NULL, "MultiMedia7 Playback" },
521 - {"MM_DL7", NULL, "MultiMedia8 Playback" },
522 - {"MultiMedia1 Capture", NULL, "MM_UL1"},
523 - {"MultiMedia2 Capture", NULL, "MM_UL2"},
524 - {"MultiMedia3 Capture", NULL, "MM_UL3"},
525 - {"MultiMedia4 Capture", NULL, "MM_UL4"},
526 - {"MultiMedia5 Capture", NULL, "MM_UL5"},
527 - {"MultiMedia6 Capture", NULL, "MM_UL6"},
528 - {"MultiMedia7 Capture", NULL, "MM_UL7"},
529 - {"MultiMedia8 Capture", NULL, "MM_UL8"},
530 -
531 - };
532 -
533 - static int fe_dai_probe(struct snd_soc_dai *dai)
534 - {
535 - struct snd_soc_dapm_context *dapm;
536 -
537 - dapm = snd_soc_component_get_dapm(dai->component);
538 - snd_soc_dapm_add_routes(dapm, afe_pcm_routes,
539 - ARRAY_SIZE(afe_pcm_routes));
540 -
541 - return 0;
542 - }
543 -
544 513
545 514 static const struct snd_soc_component_driver q6asm_fe_dai_component = {
546 515 .name = DRV_NAME,
+19
sound/soc/qcom/qdsp6/q6routing.c
···
909 909 {"MM_UL6", NULL, "MultiMedia6 Mixer"},
910 910 {"MM_UL7", NULL, "MultiMedia7 Mixer"},
911 911 {"MM_UL8", NULL, "MultiMedia8 Mixer"},
912 +
913 + {"MM_DL1", NULL, "MultiMedia1 Playback" },
914 + {"MM_DL2", NULL, "MultiMedia2 Playback" },
915 + {"MM_DL3", NULL, "MultiMedia3 Playback" },
916 + {"MM_DL4", NULL, "MultiMedia4 Playback" },
917 + {"MM_DL5", NULL, "MultiMedia5 Playback" },
918 + {"MM_DL6", NULL, "MultiMedia6 Playback" },
919 + {"MM_DL7", NULL, "MultiMedia7 Playback" },
920 + {"MM_DL8", NULL, "MultiMedia8 Playback" },
921 +
922 + {"MultiMedia1 Capture", NULL, "MM_UL1"},
923 + {"MultiMedia2 Capture", NULL, "MM_UL2"},
924 + {"MultiMedia3 Capture", NULL, "MM_UL3"},
925 + {"MultiMedia4 Capture", NULL, "MM_UL4"},
926 + {"MultiMedia5 Capture", NULL, "MM_UL5"},
927 + {"MultiMedia6 Capture", NULL, "MM_UL6"},
928 + {"MultiMedia7 Capture", NULL, "MM_UL7"},
929 + {"MultiMedia8 Capture", NULL, "MM_UL8"},
930 +
912 931 };
913 932
914 933 static int routing_hw_params(struct snd_pcm_substream *substream,
+1
sound/soc/rockchip/rockchip_pcm.c
···
33 33
34 34 static const struct snd_dmaengine_pcm_config rk_dmaengine_pcm_config = {
35 35 .pcm_hardware = &snd_rockchip_hardware,
36 + .prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config,
36 37 .prealloc_buffer_size = 32 * 1024,
37 38 };
38 39
+1 -1
sound/soc/sh/rcar/ssi.c
···
306 306 if (rsnd_ssi_is_multi_slave(mod, io))
307 307 return 0;
308 308
309 - if (ssi->rate) {
309 + if (ssi->usrcnt > 1) {
310 310 if (ssi->rate != rate) {
311 311 dev_err(dev, "SSI parent/child should use same rate\n");
312 312 return -EINVAL;
+8 -2
sound/soc/soc-acpi.c
···
10 10 snd_soc_acpi_find_machine(struct snd_soc_acpi_mach *machines)
11 11 {
12 12 struct snd_soc_acpi_mach *mach;
13 + struct snd_soc_acpi_mach *mach_alt;
13 14
14 15 for (mach = machines; mach->id[0]; mach++) {
15 16 if (acpi_dev_present(mach->id, NULL, -1)) {
16 - if (mach->machine_quirk)
17 - mach = mach->machine_quirk(mach);
17 + if (mach->machine_quirk) {
18 + mach_alt = mach->machine_quirk(mach);
19 + if (!mach_alt)
20 + continue; /* not full match, ignore */
21 + mach = mach_alt;
22 + }
23 +
18 24 return mach;
19 25 }
20 26 }
+1
sound/soc/soc-core.c
···
2131 2131 }
2132 2132
2133 2133 card->instantiated = 1;
2134 + dapm_mark_endpoints_dirty(card);
2134 2135 snd_soc_dapm_sync(&card->dapm);
2135 2136 mutex_unlock(&card->mutex);
2136 2137 mutex_unlock(&client_mutex);
+1 -1
sound/soc/stm/stm32_sai_sub.c
···
390 390 char *mclk_name, *p, *s = (char *)pname;
391 391 int ret, i = 0;
392 392
393 - mclk = devm_kzalloc(dev, sizeof(mclk), GFP_KERNEL);
393 + mclk = devm_kzalloc(dev, sizeof(*mclk), GFP_KERNEL);
394 394 if (!mclk)
395 395 return -ENOMEM;
396 396
+1 -1
sound/soc/sunxi/Kconfig
···
31 31 config SND_SUN50I_CODEC_ANALOG
32 32 tristate "Allwinner sun50i Codec Analog Controls Support"
33 33 depends on (ARM64 && ARCH_SUNXI) || COMPILE_TEST
34 - select SND_SUNXI_ADDA_PR_REGMAP
34 + select SND_SUN8I_ADDA_PR_REGMAP
35 35 help
36 36 Say Y or M if you want to add support for the analog controls for
37 37 the codec embedded in Allwinner A64 SoC.
+5 -7
sound/soc/sunxi/sun8i-codec.c
···
481 481 { "Right Digital DAC Mixer", "AIF1 Slot 0 Digital DAC Playback Switch",
482 482 "AIF1 Slot 0 Right"},
483 483
484 - /* ADC routes */
484 + /* ADC Routes */
485 + { "AIF1 Slot 0 Right ADC", NULL, "ADC" },
486 + { "AIF1 Slot 0 Left ADC", NULL, "ADC" },
487 +
488 + /* ADC Mixer Routes */
485 489 { "Left Digital ADC Mixer", "AIF1 Data Digital ADC Capture Switch",
486 490 "AIF1 Slot 0 Left ADC" },
487 491 { "Right Digital ADC Mixer", "AIF1 Data Digital ADC Capture Switch",
···
609 605
610 606 static int sun8i_codec_remove(struct platform_device *pdev)
611 607 {
612 - struct snd_soc_card *card = platform_get_drvdata(pdev);
613 - struct sun8i_codec *scodec = snd_soc_card_get_drvdata(card);
614 -
615 608 pm_runtime_disable(&pdev->dev);
616 609 if (!pm_runtime_status_suspended(&pdev->dev))
617 610 sun8i_codec_runtime_suspend(&pdev->dev);
618 -
619 - clk_disable_unprepare(scodec->clk_module);
620 - clk_disable_unprepare(scodec->clk_bus);
621 611
622 612 return 0;
623 613 }
+2 -6
sound/sparc/cs4231.c
···
1146 1146 runtime->hw = snd_cs4231_playback;
1147 1147
1148 1148 err = snd_cs4231_open(chip, CS4231_MODE_PLAY);
1149 - if (err < 0) {
1150 - snd_free_pages(runtime->dma_area, runtime->dma_bytes);
1149 + if (err < 0)
1151 1150 return err;
1152 - }
1153 1151 chip->playback_substream = substream;
1154 1152 chip->p_periods_sent = 0;
1155 1153 snd_pcm_set_sync(substream);
···
1165 1167 runtime->hw = snd_cs4231_capture;
1166 1168
1167 1169 err = snd_cs4231_open(chip, CS4231_MODE_RECORD);
1168 - if (err < 0) {
1169 - snd_free_pages(runtime->dma_area, runtime->dma_bytes);
1170 + if (err < 0)
1170 1171 return err;
1171 - }
1172 1172 chip->capture_substream = substream;
1173 1173 chip->c_periods_sent = 0;
1174 1174 snd_pcm_set_sync(substream);
+10
sound/usb/quirks-table.h
···
3382 3382 .ifnum = QUIRK_NO_INTERFACE
3383 3383 }
3384 3384 },
3385 + /* Dell WD19 Dock */
3386 + {
3387 + USB_DEVICE(0x0bda, 0x402e),
3388 + .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
3389 + .vendor_name = "Dell",
3390 + .product_name = "WD19 Dock",
3391 + .profile_name = "Dell-WD15-Dock",
3392 + .ifnum = QUIRK_NO_INTERFACE
3393 + }
3394 + },
3385 3395
3386 3396 #undef USB_DEVICE_VENDOR_SPEC
+67 -66
tools/arch/arm64/include/asm/barrier.h
···
14 14 #define wmb() asm volatile("dmb ishst" ::: "memory")
15 15 #define rmb() asm volatile("dmb ishld" ::: "memory")
16 16
17 - #define smp_store_release(p, v) \
18 - do { \
19 - union { typeof(*p) __val; char __c[1]; } __u = \
20 - { .__val = (__force typeof(*p)) (v) }; \
21 - \
22 - switch (sizeof(*p)) { \
23 - case 1: \
24 - asm volatile ("stlrb %w1, %0" \
25 - : "=Q" (*p) \
26 - : "r" (*(__u8 *)__u.__c) \
27 - : "memory"); \
28 - break; \
29 - case 2: \
30 - asm volatile ("stlrh %w1, %0" \
31 - : "=Q" (*p) \
32 - : "r" (*(__u16 *)__u.__c) \
33 - : "memory"); \
34 - break; \
35 - case 4: \
36 - asm volatile ("stlr %w1, %0" \
37 - : "=Q" (*p) \
38 - : "r" (*(__u32 *)__u.__c) \
39 - : "memory"); \
40 - break; \
41 - case 8: \
42 - asm volatile ("stlr %1, %0" \
43 - : "=Q" (*p) \
44 - : "r" (*(__u64 *)__u.__c) \
45 - : "memory"); \
46 - break; \
47 - default: \
48 - /* Only to shut up gcc ... */ \
49 - mb(); \
50 - break; \
51 - } \
17 + #define smp_store_release(p, v) \
18 + do { \
19 + union { typeof(*p) __val; char __c[1]; } __u = \
20 + { .__val = (v) }; \
21 + \
22 + switch (sizeof(*p)) { \
23 + case 1: \
24 + asm volatile ("stlrb %w1, %0" \
25 + : "=Q" (*p) \
26 + : "r" (*(__u8_alias_t *)__u.__c) \
27 + : "memory"); \
28 + break; \
29 + case 2: \
30 + asm volatile ("stlrh %w1, %0" \
31 + : "=Q" (*p) \
32 + : "r" (*(__u16_alias_t *)__u.__c) \
33 + : "memory"); \
34 + break; \
35 + case 4: \
36 + asm volatile ("stlr %w1, %0" \
37 + : "=Q" (*p) \
38 + : "r" (*(__u32_alias_t *)__u.__c) \
39 + : "memory"); \
40 + break; \
41 + case 8: \
42 + asm volatile ("stlr %1, %0" \
43 + : "=Q" (*p) \
44 + : "r" (*(__u64_alias_t *)__u.__c) \
45 + : "memory"); \
46 + break; \
47 + default: \
48 + /* Only to shut up gcc ... */ \
49 + mb(); \
50 + break; \
51 + } \
52 52 } while (0)
53 53
54 - #define smp_load_acquire(p) \
55 - ({ \
56 - union { typeof(*p) __val; char __c[1]; } __u; \
57 - \
58 - switch (sizeof(*p)) { \
59 - case 1: \
60 - asm volatile ("ldarb %w0, %1" \
61 - : "=r" (*(__u8 *)__u.__c) \
62 - : "Q" (*p) : "memory"); \
63 - break; \
64 - case 2: \
65 - asm volatile ("ldarh %w0, %1" \
66 - : "=r" (*(__u16 *)__u.__c) \
67 - : "Q" (*p) : "memory"); \
68 - break; \
69 - case 4: \
70 - asm volatile ("ldar %w0, %1" \
71 - : "=r" (*(__u32 *)__u.__c) \
72 - : "Q" (*p) : "memory"); \
73 - break; \
74 - case 8: \
75 - asm volatile ("ldar %0, %1" \
76 - : "=r" (*(__u64 *)__u.__c) \
77 - : "Q" (*p) : "memory"); \
78 - break; \
79 - default: \
80 - /* Only to shut up gcc ... */ \
81 - mb(); \
82 - break; \
83 - } \
84 - __u.__val; \
54 + #define smp_load_acquire(p) \
55 + ({ \
56 + union { typeof(*p) __val; char __c[1]; } __u = \
57 + { .__c = { 0 } }; \
58 + \
59 + switch (sizeof(*p)) { \
60 + case 1: \
61 + asm volatile ("ldarb %w0, %1" \
62 + : "=r" (*(__u8_alias_t *)__u.__c) \
63 + : "Q" (*p) : "memory"); \
64 + break; \
65 + case 2: \
66 + asm volatile ("ldarh %w0, %1" \
67 + : "=r" (*(__u16_alias_t *)__u.__c) \
68 + : "Q" (*p) : "memory"); \
69 + break; \
70 + case 4: \
71 + asm volatile ("ldar %w0, %1" \
72 + : "=r" (*(__u32_alias_t *)__u.__c) \
73 + : "Q" (*p) : "memory"); \
74 + break; \
75 + case 8: \
76 + asm volatile ("ldar %0, %1" \
77 + : "=r" (*(__u64_alias_t *)__u.__c) \
78 + : "Q" (*p) : "memory"); \
79 + break; \
80 + default: \
81 + /* Only to shut up gcc ... */ \
82 + mb(); \
83 + break; \
84 + } \
85 + __u.__val; \
85 86 })
86 87
87 88 #endif /* _TOOLS_LINUX_ASM_AARCH64_BARRIER_H */
+2
tools/arch/x86/include/asm/cpufeatures.h
···
331 331 #define X86_FEATURE_LA57 (16*32+16) /* 5-level page tables */
332 332 #define X86_FEATURE_RDPID (16*32+22) /* RDPID instruction */
333 333 #define X86_FEATURE_CLDEMOTE (16*32+25) /* CLDEMOTE instruction */
334 + #define X86_FEATURE_MOVDIRI (16*32+27) /* MOVDIRI instruction */
335 + #define X86_FEATURE_MOVDIR64B (16*32+28) /* MOVDIR64B instruction */
334 336
335 337 /* AMD-defined CPU features, CPUID level 0x80000007 (EBX), word 17 */
336 338 #define X86_FEATURE_OVERFLOW_RECOV (17*32+ 0) /* MCA overflow recovery support */
+7 -1
tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
···
137 137
138 138 SEE ALSO
139 139 ========
140 - **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-map**\ (8)
140 + **bpf**\ (2),
141 + **bpf-helpers**\ (7),
142 + **bpftool**\ (8),
143 + **bpftool-prog**\ (8),
144 + **bpftool-map**\ (8),
145 + **bpftool-net**\ (8),
146 + **bpftool-perf**\ (8)
+7 -1
tools/bpf/bpftool/Documentation/bpftool-map.rst
···
171 171
172 172 SEE ALSO
173 173 ========
174 - **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-cgroup**\ (8)
174 + **bpf**\ (2),
175 + **bpf-helpers**\ (7),
176 + **bpftool**\ (8),
177 + **bpftool-prog**\ (8),
178 + **bpftool-cgroup**\ (8),
179 + **bpftool-net**\ (8),
180 + **bpftool-perf**\ (8)
+7 -1
tools/bpf/bpftool/Documentation/bpftool-net.rst
···
136 136
137 137 SEE ALSO
138 138 ========
139 - **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-map**\ (8)
139 + **bpf**\ (2),
140 + **bpf-helpers**\ (7),
141 + **bpftool**\ (8),
142 + **bpftool-prog**\ (8),
143 + **bpftool-map**\ (8),
144 + **bpftool-cgroup**\ (8),
145 + **bpftool-perf**\ (8)
+7 -1
tools/bpf/bpftool/Documentation/bpftool-perf.rst
···
78 78
79 79 SEE ALSO
80 80 ========
81 - **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-map**\ (8)
81 + **bpf**\ (2),
82 + **bpf-helpers**\ (7),
83 + **bpftool**\ (8),
84 + **bpftool-prog**\ (8),
85 + **bpftool-map**\ (8),
86 + **bpftool-cgroup**\ (8),
87 + **bpftool-net**\ (8)
+9 -2
tools/bpf/bpftool/Documentation/bpftool-prog.rst
···
124 124 Generate human-readable JSON output. Implies **-j**.
125 125
126 126 -f, --bpffs
127 - Show file names of pinned programs.
127 + When showing BPF programs, show file names of pinned
128 + programs.
128 129
129 130 EXAMPLES
130 131 ========
···
207 206
208 207 SEE ALSO
209 208 ========
210 - **bpftool**\ (8), **bpftool-map**\ (8), **bpftool-cgroup**\ (8)
209 + **bpf**\ (2),
210 + **bpf-helpers**\ (7),
211 + **bpftool**\ (8),
212 + **bpftool-map**\ (8),
213 + **bpftool-cgroup**\ (8),
214 + **bpftool-net**\ (8),
215 + **bpftool-perf**\ (8)
+7 -2
tools/bpf/bpftool/Documentation/bpftool.rst
···
63 63
64 64 SEE ALSO
65 65 ========
66 - **bpftool-map**\ (8), **bpftool-prog**\ (8), **bpftool-cgroup**\ (8)
67 - **bpftool-perf**\ (8), **bpftool-net**\ (8)
66 + **bpf**\ (2),
67 + **bpf-helpers**\ (7),
68 + **bpftool-prog**\ (8),
69 + **bpftool-map**\ (8),
70 + **bpftool-cgroup**\ (8),
71 + **bpftool-net**\ (8),
72 + **bpftool-perf**\ (8)
+9 -8
tools/bpf/bpftool/common.c
···
130 130 return 0;
131 131 }
132 132
133 - int open_obj_pinned(char *path)
133 + int open_obj_pinned(char *path, bool quiet)
134 134 {
135 135 int fd;
136 136
137 137 fd = bpf_obj_get(path);
138 138 if (fd < 0) {
139 - p_err("bpf obj get (%s): %s", path,
140 - errno == EACCES && !is_bpffs(dirname(path)) ?
141 - "directory not in bpf file system (bpffs)" :
142 - strerror(errno));
139 + if (!quiet)
140 + p_err("bpf obj get (%s): %s", path,
141 + errno == EACCES && !is_bpffs(dirname(path)) ?
142 + "directory not in bpf file system (bpffs)" :
143 + strerror(errno));
143 144 return -1;
144 145 }
145 146
···
152 151 enum bpf_obj_type type;
153 152 int fd;
154 153
155 - fd = open_obj_pinned(path);
154 + fd = open_obj_pinned(path, false);
156 155 if (fd < 0)
157 156 return -1;
158 157
···
305 304 return NULL;
306 305 }
307 306
308 - while ((n = getline(&line, &line_n, fdi))) {
307 + while ((n = getline(&line, &line_n, fdi)) > 0) {
309 308 char *value;
310 309 int len;
311 310
···
385 384 while ((ftse = fts_read(fts))) {
386 385 if (!(ftse->fts_info & FTS_F))
387 386 continue;
388 - fd = open_obj_pinned(ftse->fts_path);
387 + fd = open_obj_pinned(ftse->fts_path, true);
389 388 if (fd < 0)
390 389 continue;
391 390
+1 -1
tools/bpf/bpftool/main.h
···
127 127 int get_fd_type(int fd);
128 128 const char *get_fd_type_name(enum bpf_obj_type type);
129 129 char *get_fdinfo(int fd, const char *key);
130 - int open_obj_pinned(char *path);
130 + int open_obj_pinned(char *path, bool quiet);
131 131 int open_obj_pinned_any(char *path, enum bpf_obj_type exp_type);
132 132 int do_pin_any(int argc, char **argv, int (*get_fd_by_id)(__u32));
133 133 int do_pin_fd(int fd, const char *name);
+8 -5
tools/bpf/bpftool/prog.c
···
357 357 if (!hash_empty(prog_table.table)) {
358 358 struct pinned_obj *obj;
359 359
360 - printf("\n");
361 360 hash_for_each_possible(prog_table.table, obj, hash, info->id) {
362 361 if (obj->id == info->id)
363 - printf("\tpinned %s\n", obj->path);
362 + printf("\n\tpinned %s", obj->path);
364 363 }
365 364 }
366 365
···
844 845 }
845 846 NEXT_ARG();
846 847 } else if (is_prefix(*argv, "map")) {
848 + void *new_map_replace;
847 849 char *endptr, *name;
848 850 int fd;
849 851
···
878 878 if (fd < 0)
879 879 goto err_free_reuse_maps;
880 880
881 - map_replace = reallocarray(map_replace, old_map_fds + 1,
882 - sizeof(*map_replace));
883 - if (!map_replace) {
881 + new_map_replace = reallocarray(map_replace,
882 + old_map_fds + 1,
883 + sizeof(*map_replace));
884 + if (!new_map_replace) {
884 885 p_err("mem alloc failed");
885 886 goto err_free_reuse_maps;
886 887 }
888 + map_replace = new_map_replace;
889 +
887 890 map_replace[old_map_fds].idx = idx;
888 891 map_replace[old_map_fds].name = name;
889 892 map_replace[old_map_fds].fd = fd;
+1
tools/build/Makefile.feature
···
33 33 dwarf_getlocations \
34 34 fortify-source \
35 35 sync-compare-and-swap \
36 + get_current_dir_name \
36 37 glibc \
37 38 gtk2 \
38 39 gtk2-infobar \
+4
tools/build/feature/Makefile
···
7 7 test-dwarf_getlocations.bin \
8 8 test-fortify-source.bin \
9 9 test-sync-compare-and-swap.bin \
10 + test-get_current_dir_name.bin \
10 11 test-glibc.bin \
11 12 test-gtk2.bin \
12 13 test-gtk2-infobar.bin \
···
101 100
102 101 $(OUTPUT)test-libelf.bin:
103 102 $(BUILD) -lelf
103 +
104 + $(OUTPUT)test-get_current_dir_name.bin:
105 + $(BUILD)
104 106
105 107 $(OUTPUT)test-glibc.bin:
106 108 $(BUILD)
+5
tools/build/feature/test-all.c
···
34 34 # include "test-libelf-mmap.c"
35 35 #undef main
36 36
37 + #define main main_test_get_current_dir_name
38 + # include "test-get_current_dir_name.c"
39 + #undef main
40 +
37 41 #define main main_test_glibc
38 42 # include "test-glibc.c"
39 43 #undef main
···
178 174 main_test_hello();
179 175 main_test_libelf();
180 176 main_test_libelf_mmap();
177 + main_test_get_current_dir_name();
181 178 main_test_glibc();
182 179 main_test_dwarf();
183 180 main_test_dwarf_getlocations();
+10
tools/build/feature/test-get_current_dir_name.c
···
1 + // SPDX-License-Identifier: GPL-2.0
2 + #define _GNU_SOURCE
3 + #include <unistd.h>
4 + #include <stdlib.h>
5 +
6 + int main(void)
7 + {
8 + free(get_current_dir_name());
9 + return 0;
10 + }
+2
tools/include/uapi/asm-generic/ioctls.h
···
79 79 #define TIOCGPTLCK _IOR('T', 0x39, int) /* Get Pty lock state */
80 80 #define TIOCGEXCL _IOR('T', 0x40, int) /* Get exclusive mode state */
81 81 #define TIOCGPTPEER _IO('T', 0x41) /* Safely open the slave */
82 + #define TIOCGISO7816 _IOR('T', 0x42, struct serial_iso7816)
83 + #define TIOCSISO7816 _IOWR('T', 0x43, struct serial_iso7816)
82 84
83 85 #define FIONCLEX 0x5450
84 86 #define FIOCLEX 0x5451
+22
tools/include/uapi/drm/i915_drm.h
···
529 529 */
530 530 #define I915_PARAM_CS_TIMESTAMP_FREQUENCY 51
531 531
532 + /*
533 + * Once upon a time we supposed that writes through the GGTT would be
534 + * immediately in physical memory (once flushed out of the CPU path). However,
535 + * on a few different processors and chipsets, this is not necessarily the case
536 + * as the writes appear to be buffered internally. Thus a read of the backing
537 + * storage (physical memory) via a different path (with different physical tags
538 + * to the indirect write via the GGTT) will see stale values from before
539 + * the GGTT write. Inside the kernel, we can for the most part keep track of
540 + * the different read/write domains in use (e.g. set-domain), but the assumption
541 + * of coherency is baked into the ABI, hence reporting its true state in this
542 + * parameter.
543 + *
544 + * Reports true when writes via mmap_gtt are immediately visible following an
545 + * lfence to flush the WCB.
546 + *
547 + * Reports false when writes via mmap_gtt are indeterminately delayed in an in
548 + * internal buffer and are _not_ immediately visible to third parties accessing
549 + * directly via mmap_cpu/mmap_wc. Use of mmap_gtt as part of an IPC
550 + * communications channel when reporting false is strongly disadvised.
551 + */
552 + #define I915_PARAM_MMAP_GTT_COHERENT 52
553 +
532 554 typedef struct drm_i915_getparam {
533 555 __s32 param;
534 556 /*
+612
tools/include/uapi/linux/pkt_cls.h
···
1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
2 + #ifndef __LINUX_PKT_CLS_H
3 + #define __LINUX_PKT_CLS_H
4 +
5 + #include <linux/types.h>
6 + #include <linux/pkt_sched.h>
7 +
8 + #define TC_COOKIE_MAX_SIZE 16
9 +
10 + /* Action attributes */
11 + enum {
12 + TCA_ACT_UNSPEC,
13 + TCA_ACT_KIND,
14 + TCA_ACT_OPTIONS,
15 + TCA_ACT_INDEX,
16 + TCA_ACT_STATS,
17 + TCA_ACT_PAD,
18 + TCA_ACT_COOKIE,
19 + __TCA_ACT_MAX
20 + };
21 +
22 + #define TCA_ACT_MAX __TCA_ACT_MAX
23 + #define TCA_OLD_COMPAT (TCA_ACT_MAX+1)
24 + #define TCA_ACT_MAX_PRIO 32
25 + #define TCA_ACT_BIND 1
26 + #define TCA_ACT_NOBIND 0
27 + #define TCA_ACT_UNBIND 1
28 + #define TCA_ACT_NOUNBIND 0
29 + #define TCA_ACT_REPLACE 1
30 + #define TCA_ACT_NOREPLACE 0
31 +
32 + #define TC_ACT_UNSPEC (-1)
33 + #define TC_ACT_OK 0
34 + #define TC_ACT_RECLASSIFY 1
35 + #define TC_ACT_SHOT 2
36 + #define TC_ACT_PIPE 3
37 + #define TC_ACT_STOLEN 4
38 + #define TC_ACT_QUEUED 5
39 + #define TC_ACT_REPEAT 6
40 + #define TC_ACT_REDIRECT 7
41 + #define TC_ACT_TRAP 8 /* For hw path, this means "trap to cpu"
42 + * and don't further process the frame
43 + * in hardware. For sw path, this is
44 + * equivalent of TC_ACT_STOLEN - drop
45 + * the skb and act like everything
46 + * is alright.
47 + */
48 + #define TC_ACT_VALUE_MAX TC_ACT_TRAP
49 +
50 + /* There is a special kind of actions called "extended actions",
51 + * which need a value parameter. These have a local opcode located in
52 + * the highest nibble, starting from 1. The rest of the bits
53 + * are used to carry the value. These two parts together make
54 + * a combined opcode.
55 + */
56 + #define __TC_ACT_EXT_SHIFT 28
57 + #define __TC_ACT_EXT(local) ((local) << __TC_ACT_EXT_SHIFT)
58 + #define TC_ACT_EXT_VAL_MASK ((1 << __TC_ACT_EXT_SHIFT) - 1)
59 + #define TC_ACT_EXT_OPCODE(combined) ((combined) & (~TC_ACT_EXT_VAL_MASK))
60 + #define TC_ACT_EXT_CMP(combined, opcode) (TC_ACT_EXT_OPCODE(combined) == opcode)
61 +
62 + #define TC_ACT_JUMP __TC_ACT_EXT(1)
63 + #define TC_ACT_GOTO_CHAIN __TC_ACT_EXT(2)
64 + #define TC_ACT_EXT_OPCODE_MAX TC_ACT_GOTO_CHAIN
65 +
66 + /* Action type identifiers*/
67 + enum {
68 + TCA_ID_UNSPEC=0,
69 + TCA_ID_POLICE=1,
70 + /* other actions go here */
71 + __TCA_ID_MAX=255
72 + };
73 +
74 + #define TCA_ID_MAX __TCA_ID_MAX
75 +
76 + struct tc_police {
77 + __u32 index;
78 + int action;
79 + #define TC_POLICE_UNSPEC TC_ACT_UNSPEC
80 + #define TC_POLICE_OK TC_ACT_OK
81 + #define TC_POLICE_RECLASSIFY TC_ACT_RECLASSIFY
82 + #define TC_POLICE_SHOT TC_ACT_SHOT
83 + #define TC_POLICE_PIPE TC_ACT_PIPE
84 +
85 + __u32 limit;
86 + __u32 burst;
87 + __u32 mtu;
88 + struct tc_ratespec rate;
89 + struct tc_ratespec peakrate;
90 + int refcnt;
91 + int bindcnt;
92 + __u32 capab;
93 + };
94 +
95 + struct tcf_t {
96 + __u64 install;
97 + __u64 lastuse;
98 + __u64 expires;
99 + __u64 firstuse;
100 + };
101 +
102 + struct tc_cnt {
103 + int refcnt;
104 + int bindcnt;
105 + };
106 +
107 + #define tc_gen \
108 + __u32 index; \
109 + __u32 capab; \
110 + int action; \
111 + int refcnt; \
112 + int bindcnt
113 +
114 + enum {
115 + TCA_POLICE_UNSPEC,
116 + TCA_POLICE_TBF,
117 + TCA_POLICE_RATE,
118 + TCA_POLICE_PEAKRATE,
119 + TCA_POLICE_AVRATE,
120 + TCA_POLICE_RESULT,
121 + TCA_POLICE_TM,
122 + TCA_POLICE_PAD,
123 + __TCA_POLICE_MAX
124 + #define TCA_POLICE_RESULT TCA_POLICE_RESULT
125 + };
126 +
127 + #define TCA_POLICE_MAX (__TCA_POLICE_MAX - 1)
128 +
129 + /* tca flags definitions */
130 + #define TCA_CLS_FLAGS_SKIP_HW (1 << 0) /* don't offload filter to HW */
131 + #define TCA_CLS_FLAGS_SKIP_SW (1 << 1) /* don't use filter in SW */
132 + #define TCA_CLS_FLAGS_IN_HW (1 << 2) /* filter is offloaded to HW */
133 + #define TCA_CLS_FLAGS_NOT_IN_HW (1 << 3) /* filter isn't offloaded to HW */
134 + #define TCA_CLS_FLAGS_VERBOSE (1 << 4) /* verbose logging */
135 +
136 + /* U32 filters */
137 +
138 + #define TC_U32_HTID(h) ((h)&0xFFF00000)
139 + #define TC_U32_USERHTID(h) (TC_U32_HTID(h)>>20)
140 + #define TC_U32_HASH(h) (((h)>>12)&0xFF)
141 + #define TC_U32_NODE(h) ((h)&0xFFF)
142 + #define TC_U32_KEY(h) ((h)&0xFFFFF)
143 + #define TC_U32_UNSPEC 0
144 + #define TC_U32_ROOT (0xFFF00000)
145 +
146 + enum {
147 + TCA_U32_UNSPEC,
148 + TCA_U32_CLASSID,
149 + TCA_U32_HASH,
150 + TCA_U32_LINK,
151 + TCA_U32_DIVISOR,
152 + TCA_U32_SEL,
153 + TCA_U32_POLICE,
154 + TCA_U32_ACT,
155 + TCA_U32_INDEV,
156 + TCA_U32_PCNT,
157 + TCA_U32_MARK,
158 + TCA_U32_FLAGS,
159 + TCA_U32_PAD,
160 + __TCA_U32_MAX
161 + };
162 +
163 + #define TCA_U32_MAX (__TCA_U32_MAX - 1)
164 +
165 + struct tc_u32_key {
166 + __be32 mask;
167 + __be32 val;
168 + int off;
169 + int offmask;
170 + };
171 +
172 + struct tc_u32_sel {
173 + unsigned char flags;
174 + unsigned char offshift;
175 + unsigned char nkeys;
176 +
177 + __be16 offmask;
178 + __u16 off;
179 + short offoff;
180 +
181 + short hoff;
182 + __be32 hmask;
183 + struct tc_u32_key keys[0];
184 + };
185 +
186 + struct tc_u32_mark {
187 + __u32 val;
188 + __u32 mask;
189 + __u32 success;
190 + };
191 +
192 + struct tc_u32_pcnt {
193 + __u64 rcnt;
194 + __u64 rhit;
195 + __u64 kcnts[0];
196 + };
197 +
198 + /* Flags */
199 +
200 + #define TC_U32_TERMINAL 1
201 + #define TC_U32_OFFSET 2
202 + #define TC_U32_VAROFFSET 4
203 + #define TC_U32_EAT 8
204 +
205 + #define TC_U32_MAXDEPTH 8
206 +
207 +
208 + /* RSVP filter */
209 +
210 + enum {
211 + TCA_RSVP_UNSPEC,
212 + TCA_RSVP_CLASSID,
213 + TCA_RSVP_DST,
214 + TCA_RSVP_SRC,
215 + TCA_RSVP_PINFO,
216 + TCA_RSVP_POLICE,
217 + TCA_RSVP_ACT,
218 + __TCA_RSVP_MAX
219 + };
220 +
221 + #define TCA_RSVP_MAX (__TCA_RSVP_MAX - 1 )
222 +
223 + struct tc_rsvp_gpi {
224 + __u32 key;
225 + __u32 mask;
226 + int offset;
227 + };
228 +
229 + struct tc_rsvp_pinfo {
230 + struct tc_rsvp_gpi dpi;
231 + struct tc_rsvp_gpi spi;
232 + __u8 protocol;
233 + __u8 tunnelid;
234 + __u8 tunnelhdr;
235 + __u8 pad;
236 + };
237 +
238 + /* ROUTE filter */
239 +
240 + enum {
241 + TCA_ROUTE4_UNSPEC,
242 + TCA_ROUTE4_CLASSID,
243 + TCA_ROUTE4_TO,
244 + TCA_ROUTE4_FROM,
245 + TCA_ROUTE4_IIF,
246 + TCA_ROUTE4_POLICE,
247 + TCA_ROUTE4_ACT,
248 + __TCA_ROUTE4_MAX
249 + };
250 +
251 + #define TCA_ROUTE4_MAX (__TCA_ROUTE4_MAX - 1)
252 +
253 +
254 + /* FW filter */
255 +
256 + enum {
257 + TCA_FW_UNSPEC,
258 + TCA_FW_CLASSID,
259 + TCA_FW_POLICE,
260 + TCA_FW_INDEV, /* used by CONFIG_NET_CLS_IND */
261 + TCA_FW_ACT, /* used by CONFIG_NET_CLS_ACT */
262 + TCA_FW_MASK,
263 + __TCA_FW_MAX
264 + };
265 +
266 + #define TCA_FW_MAX (__TCA_FW_MAX - 1)
267 +
268 + /* TC index filter */
269 +
270 + enum {
271 + TCA_TCINDEX_UNSPEC,
272 + TCA_TCINDEX_HASH,
273 + TCA_TCINDEX_MASK,
274 + TCA_TCINDEX_SHIFT,
275 + TCA_TCINDEX_FALL_THROUGH,
276 + TCA_TCINDEX_CLASSID,
277 + TCA_TCINDEX_POLICE,
278 + TCA_TCINDEX_ACT,
279 + __TCA_TCINDEX_MAX
280 + };
281 +
282 + #define TCA_TCINDEX_MAX (__TCA_TCINDEX_MAX - 1)
283 +
284 + /* Flow filter */
285 +
286 + enum {
287 + FLOW_KEY_SRC,
288 + FLOW_KEY_DST,
289 + FLOW_KEY_PROTO,
290 + FLOW_KEY_PROTO_SRC,
291 + FLOW_KEY_PROTO_DST,
292 + FLOW_KEY_IIF,
293 + FLOW_KEY_PRIORITY,
294 + FLOW_KEY_MARK,
295 + FLOW_KEY_NFCT,
296 + FLOW_KEY_NFCT_SRC,
297 + FLOW_KEY_NFCT_DST,
298 + FLOW_KEY_NFCT_PROTO_SRC,
299 + FLOW_KEY_NFCT_PROTO_DST,
300 + FLOW_KEY_RTCLASSID,
301 + FLOW_KEY_SKUID,
302 + FLOW_KEY_SKGID,
303 + FLOW_KEY_VLAN_TAG,
304 + FLOW_KEY_RXHASH,
305 + __FLOW_KEY_MAX,
306 + };
307 +
308 + #define FLOW_KEY_MAX (__FLOW_KEY_MAX - 1)
309 +
310 + enum {
311 + FLOW_MODE_MAP,
312 + FLOW_MODE_HASH,
313 + };
314 +
315 + enum {
316 + TCA_FLOW_UNSPEC,
317 + TCA_FLOW_KEYS,
318 + TCA_FLOW_MODE,
319 + TCA_FLOW_BASECLASS,
320 + TCA_FLOW_RSHIFT,
321 + TCA_FLOW_ADDEND,
322 + TCA_FLOW_MASK,
323 + TCA_FLOW_XOR,
324 + TCA_FLOW_DIVISOR,
325 + TCA_FLOW_ACT,
326 + TCA_FLOW_POLICE,
327 + TCA_FLOW_EMATCHES,
328 + TCA_FLOW_PERTURB,
329 + __TCA_FLOW_MAX
330 + };
331 +
332 + #define TCA_FLOW_MAX (__TCA_FLOW_MAX - 1)
333 +
334 + /* Basic filter */
335 +
336 + enum {
337 + TCA_BASIC_UNSPEC,
338 + TCA_BASIC_CLASSID,
339 + TCA_BASIC_EMATCHES,
340 + TCA_BASIC_ACT,
341 + TCA_BASIC_POLICE,
342 + __TCA_BASIC_MAX
343 + };
344 +
345 + #define TCA_BASIC_MAX (__TCA_BASIC_MAX - 1)
346 +
347 +
348 + /* Cgroup classifier */
349 +
350 + enum {
351 + TCA_CGROUP_UNSPEC,
352 + TCA_CGROUP_ACT,
353 + TCA_CGROUP_POLICE,
354 + TCA_CGROUP_EMATCHES,
355 + __TCA_CGROUP_MAX,
356 + };
357 +
358 + #define TCA_CGROUP_MAX (__TCA_CGROUP_MAX - 1)
359 +
360 + /* BPF classifier */
361 +
362 + #define TCA_BPF_FLAG_ACT_DIRECT (1 << 0)
363 +
364 + enum {
365 + TCA_BPF_UNSPEC,
366 + TCA_BPF_ACT,
367 + TCA_BPF_POLICE,
368 + TCA_BPF_CLASSID,
369 + TCA_BPF_OPS_LEN,
370 + TCA_BPF_OPS,
371 + TCA_BPF_FD,
372 + TCA_BPF_NAME,
373 + TCA_BPF_FLAGS,
374 + TCA_BPF_FLAGS_GEN,
375 + TCA_BPF_TAG,
376 + TCA_BPF_ID,
377 + __TCA_BPF_MAX,
378 + };
379 +
380 + #define TCA_BPF_MAX (__TCA_BPF_MAX - 1)
381 +
382 + /* Flower classifier */
383 +
384 + enum {
385 + TCA_FLOWER_UNSPEC,
386 + TCA_FLOWER_CLASSID,
387 + TCA_FLOWER_INDEV,
388 + TCA_FLOWER_ACT,
389 + TCA_FLOWER_KEY_ETH_DST, /* ETH_ALEN */
390 + TCA_FLOWER_KEY_ETH_DST_MASK, /* ETH_ALEN */
391 + TCA_FLOWER_KEY_ETH_SRC, /* ETH_ALEN */
392 + TCA_FLOWER_KEY_ETH_SRC_MASK, /* ETH_ALEN */
393 + TCA_FLOWER_KEY_ETH_TYPE, /* be16 */
394 + TCA_FLOWER_KEY_IP_PROTO, /* u8 */
395 + TCA_FLOWER_KEY_IPV4_SRC, /* be32 */
396 + TCA_FLOWER_KEY_IPV4_SRC_MASK, /* be32 */
397 + TCA_FLOWER_KEY_IPV4_DST, /* be32 */
398 + TCA_FLOWER_KEY_IPV4_DST_MASK, /* be32 */
399 + TCA_FLOWER_KEY_IPV6_SRC, /* struct in6_addr */
400 + TCA_FLOWER_KEY_IPV6_SRC_MASK, /* struct in6_addr */
401 + TCA_FLOWER_KEY_IPV6_DST, /* struct in6_addr */
402 + TCA_FLOWER_KEY_IPV6_DST_MASK, /* struct in6_addr */
403 + TCA_FLOWER_KEY_TCP_SRC, /* be16 */
404 + TCA_FLOWER_KEY_TCP_DST, /* be16 */
405 + TCA_FLOWER_KEY_UDP_SRC, /* be16 */
406 + TCA_FLOWER_KEY_UDP_DST, /* be16 */
407 +
408 + TCA_FLOWER_FLAGS,
409 + TCA_FLOWER_KEY_VLAN_ID, /* be16 */
410 + TCA_FLOWER_KEY_VLAN_PRIO, /* u8 */
411 + TCA_FLOWER_KEY_VLAN_ETH_TYPE, /* be16 */
412 +
413 + TCA_FLOWER_KEY_ENC_KEY_ID, /* be32 */
414 + TCA_FLOWER_KEY_ENC_IPV4_SRC, /* be32 */
415 + TCA_FLOWER_KEY_ENC_IPV4_SRC_MASK,/* be32 */
416 + TCA_FLOWER_KEY_ENC_IPV4_DST, /* be32 */
417 + TCA_FLOWER_KEY_ENC_IPV4_DST_MASK,/* be32 */
418 + TCA_FLOWER_KEY_ENC_IPV6_SRC, /* struct in6_addr */
419 + TCA_FLOWER_KEY_ENC_IPV6_SRC_MASK,/* struct in6_addr */
420 + TCA_FLOWER_KEY_ENC_IPV6_DST, /* struct in6_addr */
421 + TCA_FLOWER_KEY_ENC_IPV6_DST_MASK,/* struct in6_addr */
422 +
423 + TCA_FLOWER_KEY_TCP_SRC_MASK, /* be16 */
424 + TCA_FLOWER_KEY_TCP_DST_MASK, /* be16 */
425 + TCA_FLOWER_KEY_UDP_SRC_MASK, /* be16 */
426 + TCA_FLOWER_KEY_UDP_DST_MASK, /* be16 */
427 + TCA_FLOWER_KEY_SCTP_SRC_MASK, /* be16 */
428 + TCA_FLOWER_KEY_SCTP_DST_MASK, /* be16 */
429 +
430 + TCA_FLOWER_KEY_SCTP_SRC, /* be16 */
431 + TCA_FLOWER_KEY_SCTP_DST, /* be16 */
432 +
433 + TCA_FLOWER_KEY_ENC_UDP_SRC_PORT, /* be16 */
434 + TCA_FLOWER_KEY_ENC_UDP_SRC_PORT_MASK, /* be16 */
435 + TCA_FLOWER_KEY_ENC_UDP_DST_PORT, /* be16 */
436 + TCA_FLOWER_KEY_ENC_UDP_DST_PORT_MASK, /* be16 */
437 +
438 + TCA_FLOWER_KEY_FLAGS, /* be32 */
439 + TCA_FLOWER_KEY_FLAGS_MASK, /* be32 */
440 +
441 + TCA_FLOWER_KEY_ICMPV4_CODE, /* u8 */
442 + TCA_FLOWER_KEY_ICMPV4_CODE_MASK,/* u8 */
443 + TCA_FLOWER_KEY_ICMPV4_TYPE, /* u8 */
444 + TCA_FLOWER_KEY_ICMPV4_TYPE_MASK,/* u8 */
445 + TCA_FLOWER_KEY_ICMPV6_CODE, /* u8 */
446 + TCA_FLOWER_KEY_ICMPV6_CODE_MASK,/* u8 */
447 + TCA_FLOWER_KEY_ICMPV6_TYPE, /* u8 */
448 + TCA_FLOWER_KEY_ICMPV6_TYPE_MASK,/* u8 */
449 +
450 + TCA_FLOWER_KEY_ARP_SIP, /* be32 */
451 + TCA_FLOWER_KEY_ARP_SIP_MASK, /* be32 */
452 +
TCA_FLOWER_KEY_ARP_TIP, /* be32 */ 453 + TCA_FLOWER_KEY_ARP_TIP_MASK, /* be32 */ 454 + TCA_FLOWER_KEY_ARP_OP, /* u8 */ 455 + TCA_FLOWER_KEY_ARP_OP_MASK, /* u8 */ 456 + TCA_FLOWER_KEY_ARP_SHA, /* ETH_ALEN */ 457 + TCA_FLOWER_KEY_ARP_SHA_MASK, /* ETH_ALEN */ 458 + TCA_FLOWER_KEY_ARP_THA, /* ETH_ALEN */ 459 + TCA_FLOWER_KEY_ARP_THA_MASK, /* ETH_ALEN */ 460 + 461 + TCA_FLOWER_KEY_MPLS_TTL, /* u8 - 8 bits */ 462 + TCA_FLOWER_KEY_MPLS_BOS, /* u8 - 1 bit */ 463 + TCA_FLOWER_KEY_MPLS_TC, /* u8 - 3 bits */ 464 + TCA_FLOWER_KEY_MPLS_LABEL, /* be32 - 20 bits */ 465 + 466 + TCA_FLOWER_KEY_TCP_FLAGS, /* be16 */ 467 + TCA_FLOWER_KEY_TCP_FLAGS_MASK, /* be16 */ 468 + 469 + TCA_FLOWER_KEY_IP_TOS, /* u8 */ 470 + TCA_FLOWER_KEY_IP_TOS_MASK, /* u8 */ 471 + TCA_FLOWER_KEY_IP_TTL, /* u8 */ 472 + TCA_FLOWER_KEY_IP_TTL_MASK, /* u8 */ 473 + 474 + TCA_FLOWER_KEY_CVLAN_ID, /* be16 */ 475 + TCA_FLOWER_KEY_CVLAN_PRIO, /* u8 */ 476 + TCA_FLOWER_KEY_CVLAN_ETH_TYPE, /* be16 */ 477 + 478 + TCA_FLOWER_KEY_ENC_IP_TOS, /* u8 */ 479 + TCA_FLOWER_KEY_ENC_IP_TOS_MASK, /* u8 */ 480 + TCA_FLOWER_KEY_ENC_IP_TTL, /* u8 */ 481 + TCA_FLOWER_KEY_ENC_IP_TTL_MASK, /* u8 */ 482 + 483 + TCA_FLOWER_KEY_ENC_OPTS, 484 + TCA_FLOWER_KEY_ENC_OPTS_MASK, 485 + 486 + TCA_FLOWER_IN_HW_COUNT, 487 + 488 + __TCA_FLOWER_MAX, 489 + }; 490 + 491 + #define TCA_FLOWER_MAX (__TCA_FLOWER_MAX - 1) 492 + 493 + enum { 494 + TCA_FLOWER_KEY_ENC_OPTS_UNSPEC, 495 + TCA_FLOWER_KEY_ENC_OPTS_GENEVE, /* Nested 496 + * TCA_FLOWER_KEY_ENC_OPT_GENEVE_ 497 + * attributes 498 + */ 499 + __TCA_FLOWER_KEY_ENC_OPTS_MAX, 500 + }; 501 + 502 + #define TCA_FLOWER_KEY_ENC_OPTS_MAX (__TCA_FLOWER_KEY_ENC_OPTS_MAX - 1) 503 + 504 + enum { 505 + TCA_FLOWER_KEY_ENC_OPT_GENEVE_UNSPEC, 506 + TCA_FLOWER_KEY_ENC_OPT_GENEVE_CLASS, /* u16 */ 507 + TCA_FLOWER_KEY_ENC_OPT_GENEVE_TYPE, /* u8 */ 508 + TCA_FLOWER_KEY_ENC_OPT_GENEVE_DATA, /* 4 to 128 bytes */ 509 + 510 + __TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX, 511 + }; 512 + 513 + #define TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX \ 514 
+ (__TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX - 1) 515 + 516 + enum { 517 + TCA_FLOWER_KEY_FLAGS_IS_FRAGMENT = (1 << 0), 518 + TCA_FLOWER_KEY_FLAGS_FRAG_IS_FIRST = (1 << 1), 519 + }; 520 + 521 + /* Match-all classifier */ 522 + 523 + enum { 524 + TCA_MATCHALL_UNSPEC, 525 + TCA_MATCHALL_CLASSID, 526 + TCA_MATCHALL_ACT, 527 + TCA_MATCHALL_FLAGS, 528 + __TCA_MATCHALL_MAX, 529 + }; 530 + 531 + #define TCA_MATCHALL_MAX (__TCA_MATCHALL_MAX - 1) 532 + 533 + /* Extended Matches */ 534 + 535 + struct tcf_ematch_tree_hdr { 536 + __u16 nmatches; 537 + __u16 progid; 538 + }; 539 + 540 + enum { 541 + TCA_EMATCH_TREE_UNSPEC, 542 + TCA_EMATCH_TREE_HDR, 543 + TCA_EMATCH_TREE_LIST, 544 + __TCA_EMATCH_TREE_MAX 545 + }; 546 + #define TCA_EMATCH_TREE_MAX (__TCA_EMATCH_TREE_MAX - 1) 547 + 548 + struct tcf_ematch_hdr { 549 + __u16 matchid; 550 + __u16 kind; 551 + __u16 flags; 552 + __u16 pad; /* currently unused */ 553 + }; 554 + 555 + /* 0 1 556 + * 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 557 + * +-----------------------+-+-+---+ 558 + * | Unused |S|I| R | 559 + * +-----------------------+-+-+---+ 560 + * 561 + * R(2) ::= relation to next ematch 562 + * where: 0 0 END (last ematch) 563 + * 0 1 AND 564 + * 1 0 OR 565 + * 1 1 Unused (invalid) 566 + * I(1) ::= invert result 567 + * S(1) ::= simple payload 568 + */ 569 + #define TCF_EM_REL_END 0 570 + #define TCF_EM_REL_AND (1<<0) 571 + #define TCF_EM_REL_OR (1<<1) 572 + #define TCF_EM_INVERT (1<<2) 573 + #define TCF_EM_SIMPLE (1<<3) 574 + 575 + #define TCF_EM_REL_MASK 3 576 + #define TCF_EM_REL_VALID(v) (((v) & TCF_EM_REL_MASK) != TCF_EM_REL_MASK) 577 + 578 + enum { 579 + TCF_LAYER_LINK, 580 + TCF_LAYER_NETWORK, 581 + TCF_LAYER_TRANSPORT, 582 + __TCF_LAYER_MAX 583 + }; 584 + #define TCF_LAYER_MAX (__TCF_LAYER_MAX - 1) 585 + 586 + /* Ematch type assignments 587 + * 1..32767 Reserved for ematches inside kernel tree 588 + * 32768..65535 Free to use, not reliable 589 + */ 590 + #define TCF_EM_CONTAINER 0 591 + #define TCF_EM_CMP 1 592 + #define TCF_EM_NBYTE 
2 593 + #define TCF_EM_U32 3 594 + #define TCF_EM_META 4 595 + #define TCF_EM_TEXT 5 596 + #define TCF_EM_VLAN 6 597 + #define TCF_EM_CANID 7 598 + #define TCF_EM_IPSET 8 599 + #define TCF_EM_IPT 9 600 + #define TCF_EM_MAX 9 601 + 602 + enum { 603 + TCF_EM_PROG_TC 604 + }; 605 + 606 + enum { 607 + TCF_EM_OPND_EQ, 608 + TCF_EM_OPND_GT, 609 + TCF_EM_OPND_LT 610 + }; 611 + 612 + #endif
+1
tools/include/uapi/linux/prctl.h
··· 212 212 #define PR_SET_SPECULATION_CTRL 53 213 213 /* Speculation control variants */ 214 214 # define PR_SPEC_STORE_BYPASS 0 215 + # define PR_SPEC_INDIRECT_BRANCH 1 215 216 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */ 216 217 # define PR_SPEC_NOT_AFFECTED 0 217 218 # define PR_SPEC_PRCTL (1UL << 0)
+37
tools/include/uapi/linux/tc_act/tc_bpf.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */ 2 + /* 3 + * Copyright (c) 2015 Jiri Pirko <jiri@resnulli.us> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License as published by 7 + * the Free Software Foundation; either version 2 of the License, or 8 + * (at your option) any later version. 9 + */ 10 + 11 + #ifndef __LINUX_TC_BPF_H 12 + #define __LINUX_TC_BPF_H 13 + 14 + #include <linux/pkt_cls.h> 15 + 16 + #define TCA_ACT_BPF 13 17 + 18 + struct tc_act_bpf { 19 + tc_gen; 20 + }; 21 + 22 + enum { 23 + TCA_ACT_BPF_UNSPEC, 24 + TCA_ACT_BPF_TM, 25 + TCA_ACT_BPF_PARMS, 26 + TCA_ACT_BPF_OPS_LEN, 27 + TCA_ACT_BPF_OPS, 28 + TCA_ACT_BPF_FD, 29 + TCA_ACT_BPF_NAME, 30 + TCA_ACT_BPF_PAD, 31 + TCA_ACT_BPF_TAG, 32 + TCA_ACT_BPF_ID, 33 + __TCA_ACT_BPF_MAX, 34 + }; 35 + #define TCA_ACT_BPF_MAX (__TCA_ACT_BPF_MAX - 1) 36 + 37 + #endif
+15 -4
tools/objtool/elf.c
··· 31 31 #include "elf.h" 32 32 #include "warn.h" 33 33 34 + #define MAX_NAME_LEN 128 35 + 34 36 struct section *find_section_by_name(struct elf *elf, const char *name) 35 37 { 36 38 struct section *sec; ··· 300 298 /* Create parent/child links for any cold subfunctions */ 301 299 list_for_each_entry(sec, &elf->sections, list) { 302 300 list_for_each_entry(sym, &sec->symbol_list, list) { 301 + char pname[MAX_NAME_LEN + 1]; 302 + size_t pnamelen; 303 303 if (sym->type != STT_FUNC) 304 304 continue; 305 305 sym->pfunc = sym->cfunc = sym; ··· 309 305 if (!coldstr) 310 306 continue; 311 307 312 - coldstr[0] = '\0'; 313 - pfunc = find_symbol_by_name(elf, sym->name); 314 - coldstr[0] = '.'; 308 + pnamelen = coldstr - sym->name; 309 + if (pnamelen > MAX_NAME_LEN) { 310 + WARN("%s(): parent function name exceeds maximum length of %d characters", 311 + sym->name, MAX_NAME_LEN); 312 + return -1; 313 + } 314 + 315 + strncpy(pname, sym->name, pnamelen); 316 + pname[pnamelen] = '\0'; 317 + pfunc = find_symbol_by_name(elf, pname); 315 318 316 319 if (!pfunc) { 317 320 WARN("%s(): can't find parent function", 318 321 sym->name); 319 - goto err; 322 + return -1; 320 323 } 321 324 322 325 sym->pfunc = pfunc;
-1
tools/perf/Documentation/perf-list.txt
··· 55 55 S - read sample value (PERF_SAMPLE_READ) 56 56 D - pin the event to the PMU 57 57 W - group is weak and will fallback to non-group if not schedulable, 58 - only supported in 'perf stat' for now. 59 58 60 59 The 'p' modifier can be used for specifying how precise the instruction 61 60 address should be. The 'p' modifier can be specified multiple times:
+5
tools/perf/Makefile.config
··· 299 299 endif 300 300 endif 301 301 302 + ifeq ($(feature-get_current_dir_name), 1) 303 + CFLAGS += -DHAVE_GET_CURRENT_DIR_NAME 304 + endif 305 + 306 + 302 307 ifdef NO_LIBELF 303 308 NO_DWARF := 1 304 309 NO_DEMANGLE := 1
+1 -1
tools/perf/Makefile.perf
··· 387 387 388 388 linux_uapi_dir := $(srctree)/tools/include/uapi/linux 389 389 asm_generic_uapi_dir := $(srctree)/tools/include/uapi/asm-generic 390 - arch_asm_uapi_dir := $(srctree)/tools/arch/$(ARCH)/include/uapi/asm/ 390 + arch_asm_uapi_dir := $(srctree)/tools/arch/$(SRCARCH)/include/uapi/asm/ 391 391 392 392 beauty_outdir := $(OUTPUT)trace/beauty/generated 393 393 beauty_ioctl_outdir := $(beauty_outdir)/ioctl
+6 -1
tools/perf/builtin-record.c
··· 391 391 ui__warning("%s\n", msg); 392 392 goto try_again; 393 393 } 394 - 394 + if ((errno == EINVAL || errno == EBADF) && 395 + pos->leader != pos && 396 + pos->weak_group) { 397 + pos = perf_evlist__reset_weak_group(evlist, pos); 398 + goto try_again; 399 + } 395 400 rc = -errno; 396 401 perf_evsel__open_strerror(pos, &opts->target, 397 402 errno, msg, sizeof(msg));
+1 -27
tools/perf/builtin-stat.c
··· 383 383 return STAT_RECORD || counter->attr.read_format & PERF_FORMAT_ID; 384 384 } 385 385 386 - static struct perf_evsel *perf_evsel__reset_weak_group(struct perf_evsel *evsel) 387 - { 388 - struct perf_evsel *c2, *leader; 389 - bool is_open = true; 390 - 391 - leader = evsel->leader; 392 - pr_debug("Weak group for %s/%d failed\n", 393 - leader->name, leader->nr_members); 394 - 395 - /* 396 - * for_each_group_member doesn't work here because it doesn't 397 - * include the first entry. 398 - */ 399 - evlist__for_each_entry(evsel_list, c2) { 400 - if (c2 == evsel) 401 - is_open = false; 402 - if (c2->leader == leader) { 403 - if (is_open) 404 - perf_evsel__close(c2); 405 - c2->leader = c2; 406 - c2->nr_members = 0; 407 - } 408 - } 409 - return leader; 410 - } 411 - 412 386 static bool is_target_alive(struct target *_target, 413 387 struct thread_map *threads) 414 388 { ··· 451 477 if ((errno == EINVAL || errno == EBADF) && 452 478 counter->leader != counter && 453 479 counter->weak_group) { 454 - counter = perf_evsel__reset_weak_group(counter); 480 + counter = perf_evlist__reset_weak_group(evsel_list, counter); 455 481 goto try_again; 456 482 } 457 483
+3
tools/perf/builtin-top.c
··· 1429 1429 } 1430 1430 } 1431 1431 1432 + if (opts->branch_stack && callchain_param.enabled) 1433 + symbol_conf.show_branchflag_count = true; 1434 + 1432 1435 sort__mode = SORT_MODE__TOP; 1433 1436 /* display thread wants entries to be collapsed in a different tree */ 1434 1437 perf_hpp_list.need_collapse = 1;
+29 -5
tools/perf/builtin-trace.c
108 108 } stats; 109 109 unsigned int max_stack; 110 110 unsigned int min_stack; 111 + bool raw_augmented_syscalls; 111 112 bool not_ev_qualifier; 112 113 bool live; 113 114 bool full_time; ··· 1725 1724 return printed; 1726 1725 } 1727 1726 1728 - static void *syscall__augmented_args(struct syscall *sc, struct perf_sample *sample, int *augmented_args_size) 1727 + static void *syscall__augmented_args(struct syscall *sc, struct perf_sample *sample, int *augmented_args_size, bool raw_augmented) 1729 1728 { 1730 1729 void *augmented_args = NULL; 1730 + /* 1731 + * For now with BPF raw_augmented we hook into raw_syscalls:sys_enter 1732 + * and there we get all 6 syscall args plus the tracepoint common 1733 + * fields (sizeof(long)) and the syscall_nr (another long). So we check 1734 + * if that is the case and if so don't look after the sc->args_size, 1735 + * but always after the full raw_syscalls:sys_enter payload, which is 1736 + * fixed. 1737 + * 1738 + * We'll revisit this later to pass sc->args_size to the BPF augmenter 1739 + * (now tools/perf/examples/bpf/augmented_raw_syscalls.c), so that it 1740 + * copies only what we need for each syscall, like what happens when we 1741 + * use syscalls:sys_enter_NAME, so that we reduce the kernel/userspace 1742 + * traffic to just what is needed for each syscall. 1743 + */ 1744 + int args_size = raw_augmented ? (8 * (int)sizeof(long)) : sc->args_size; 1731 1745 1732 - *augmented_args_size = sample->raw_size - sc->args_size; 1746 + *augmented_args_size = sample->raw_size - args_size; 1733 1747 if (*augmented_args_size > 0) 1734 - augmented_args = sample->raw_data + sc->args_size; 1748 + augmented_args = sample->raw_data + args_size; 1735 1749 1736 1750 return augmented_args; 1737 1751 } ··· 1796 1780 * here and avoid using augmented syscalls when the evsel is the raw_syscalls one. 
1797 1781 */ 1798 1782 if (evsel != trace->syscalls.events.sys_enter) 1799 - augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size); 1783 + augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size, trace->raw_augmented_syscalls); 1800 1784 ttrace->entry_time = sample->time; 1801 1785 msg = ttrace->entry_str; 1802 1786 printed += scnprintf(msg + printed, trace__entry_str_size - printed, "%s(", sc->name); ··· 1849 1833 goto out_put; 1850 1834 1851 1835 args = perf_evsel__sc_tp_ptr(evsel, args, sample); 1852 - augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size); 1836 + augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size, trace->raw_augmented_syscalls); 1853 1837 syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread); 1854 1838 fprintf(trace->output, "%s", msg); 1855 1839 err = 0; ··· 3517 3501 evsel->handler = trace__sys_enter; 3518 3502 3519 3503 evlist__for_each_entry(trace.evlist, evsel) { 3504 + bool raw_syscalls_sys_exit = strcmp(perf_evsel__name(evsel), "raw_syscalls:sys_exit") == 0; 3505 + 3506 + if (raw_syscalls_sys_exit) { 3507 + trace.raw_augmented_syscalls = true; 3508 + goto init_augmented_syscall_tp; 3509 + } 3510 + 3520 3511 if (strstarts(perf_evsel__name(evsel), "syscalls:sys_exit_")) { 3512 + init_augmented_syscall_tp: 3521 3513 perf_evsel__init_augmented_syscall_tp(evsel); 3522 3514 perf_evsel__init_augmented_syscall_tp_ret(evsel); 3523 3515 evsel->handler = trace__sys_exit;
+131
tools/perf/examples/bpf/augmented_raw_syscalls.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Augment the raw_syscalls tracepoints with the contents of the pointer arguments. 4 + * 5 + * Test it with: 6 + * 7 + * perf trace -e tools/perf/examples/bpf/augmented_raw_syscalls.c cat /etc/passwd > /dev/null 8 + * 9 + * This exactly matches what is marshalled into the raw_syscalls:sys_enter 10 + * payload expected by the 'perf trace' beautifiers. 11 + * 12 + * For now it just uses the existing tracepoint augmentation code in 'perf 13 + * trace', in the next csets we'll hook these up with the sys_enter/sys_exit 14 + * code that will combine entry/exit in a strace-like way. 15 + */ 16 + 17 + #include <stdio.h> 18 + #include <linux/socket.h> 19 + 20 + /* bpf-output associated map */ 21 + struct bpf_map SEC("maps") __augmented_syscalls__ = { 22 + .type = BPF_MAP_TYPE_PERF_EVENT_ARRAY, 23 + .key_size = sizeof(int), 24 + .value_size = sizeof(u32), 25 + .max_entries = __NR_CPUS__, 26 + }; 27 + 28 + struct syscall_enter_args { 29 + unsigned long long common_tp_fields; 30 + long syscall_nr; 31 + unsigned long args[6]; 32 + }; 33 + 34 + struct syscall_exit_args { 35 + unsigned long long common_tp_fields; 36 + long syscall_nr; 37 + long ret; 38 + }; 39 + 40 + struct augmented_filename { 41 + unsigned int size; 42 + int reserved; 43 + char value[256]; 44 + }; 45 + 46 + #define SYS_OPEN 2 47 + #define SYS_OPENAT 257 48 + 49 + SEC("raw_syscalls:sys_enter") 50 + int sys_enter(struct syscall_enter_args *args) 51 + { 52 + struct { 53 + struct syscall_enter_args args; 54 + struct augmented_filename filename; 55 + } augmented_args; 56 + unsigned int len = sizeof(augmented_args); 57 + const void *filename_arg = NULL; 58 + 59 + probe_read(&augmented_args.args, sizeof(augmented_args.args), args); 60 + /* 61 + * Yonghong and Edward Cree sayz: 62 + * 63 + * https://www.spinics.net/lists/netdev/msg531645.html 64 + * 65 + * >> R0=inv(id=0) R1=inv2 R6=ctx(id=0,off=0,imm=0) R7=inv64 R10=fp0,call_-1 66 + * >> 10: (bf) r1 = r6 67 + * >> 
11: (07) r1 += 16 68 + * >> 12: (05) goto pc+2 69 + * >> 15: (79) r3 = *(u64 *)(r1 +0) 70 + * >> dereference of modified ctx ptr R1 off=16 disallowed 71 + * > Aha, we at least got a different error message this time. 72 + * > And indeed llvm has done that optimisation, rather than the more obvious 73 + * > 11: r3 = *(u64 *)(r1 +16) 74 + * > because it wants to have lots of reads share a single insn. You may be able 75 + * > to defeat that optimisation by adding compiler barriers, idk. Maybe someone 76 + * > with llvm knowledge can figure out how to stop it (ideally, llvm would know 77 + * > when it's generating for bpf backend and not do that). -O0? ¯\_(ツ)_/¯ 78 + * 79 + * The optimization mostly looks like below: 80 + * 81 + * br1: 82 + * ... 83 + * r1 += 16 84 + * goto merge 85 + * br2: 86 + * ... 87 + * r1 += 20 88 + * goto merge 89 + * merge: 90 + * *(u64 *)(r1 + 0) 91 + * 92 + * The compiler tries to merge common loads. There is no easy way to 93 + * stop this compiler optimization without turning off a lot of other 94 + * optimizations. The easiest way is to add barriers: 95 + * 96 + * __asm__ __volatile__("": : :"memory") 97 + * 98 + * after the ctx memory access to prevent their downstream merging. 
99 + */ 100 + switch (augmented_args.args.syscall_nr) { 101 + case SYS_OPEN: filename_arg = (const void *)args->args[0]; 102 + __asm__ __volatile__("": : :"memory"); 103 + break; 104 + case SYS_OPENAT: filename_arg = (const void *)args->args[1]; 105 + break; 106 + } 107 + 108 + if (filename_arg != NULL) { 109 + augmented_args.filename.reserved = 0; 110 + augmented_args.filename.size = probe_read_str(&augmented_args.filename.value, 111 + sizeof(augmented_args.filename.value), 112 + filename_arg); 113 + if (augmented_args.filename.size < sizeof(augmented_args.filename.value)) { 114 + len -= sizeof(augmented_args.filename.value) - augmented_args.filename.size; 115 + len &= sizeof(augmented_args.filename.value) - 1; 116 + } 117 + } else { 118 + len = sizeof(augmented_args.args); 119 + } 120 + 121 + perf_event_output(args, &__augmented_syscalls__, BPF_F_CURRENT_CPU, &augmented_args, len); 122 + return 0; 123 + } 124 + 125 + SEC("raw_syscalls:sys_exit") 126 + int sys_exit(struct syscall_exit_args *args) 127 + { 128 + return 1; /* 0 as soon as we start copying data returned by the kernel, e.g. 'read' */ 129 + } 130 + 131 + license(GPL);
+38 -11
tools/perf/jvmti/jvmti_agent.c
··· 125 125 } 126 126 127 127 static int 128 - debug_cache_init(void) 128 + create_jit_cache_dir(void) 129 129 { 130 130 char str[32]; 131 131 char *base, *p; ··· 144 144 145 145 strftime(str, sizeof(str), JIT_LANG"-jit-%Y%m%d", &tm); 146 146 147 - snprintf(jit_path, PATH_MAX - 1, "%s/.debug/", base); 148 - 147 + ret = snprintf(jit_path, PATH_MAX, "%s/.debug/", base); 148 + if (ret >= PATH_MAX) { 149 + warnx("jvmti: cannot generate jit cache dir because %s/.debug/" 150 + " is too long, please check the cwd, JITDUMPDIR, and" 151 + " HOME variables", base); 152 + return -1; 153 + } 149 154 ret = mkdir(jit_path, 0755); 150 155 if (ret == -1) { 151 156 if (errno != EEXIST) { ··· 159 154 } 160 155 } 161 156 162 - snprintf(jit_path, PATH_MAX - 1, "%s/.debug/jit", base); 157 + ret = snprintf(jit_path, PATH_MAX, "%s/.debug/jit", base); 158 + if (ret >= PATH_MAX) { 159 + warnx("jvmti: cannot generate jit cache dir because" 160 + " %s/.debug/jit is too long, please check the cwd," 161 + " JITDUMPDIR, and HOME variables", base); 162 + return -1; 163 + } 163 164 ret = mkdir(jit_path, 0755); 164 165 if (ret == -1) { 165 166 if (errno != EEXIST) { 166 - warn("cannot create jit cache dir %s", jit_path); 167 + warn("jvmti: cannot create jit cache dir %s", jit_path); 167 168 return -1; 168 169 } 169 170 } 170 171 171 - snprintf(jit_path, PATH_MAX - 1, "%s/.debug/jit/%s.XXXXXXXX", base, str); 172 - 172 + ret = snprintf(jit_path, PATH_MAX, "%s/.debug/jit/%s.XXXXXXXX", base, str); 173 + if (ret >= PATH_MAX) { 174 + warnx("jvmti: cannot generate jit cache dir because" 175 + " %s/.debug/jit/%s.XXXXXXXX is too long, please check" 176 + " the cwd, JITDUMPDIR, and HOME variables", 177 + base, str); 178 + return -1; 179 + } 173 180 p = mkdtemp(jit_path); 174 181 if (p != jit_path) { 175 - warn("cannot create jit cache dir %s", jit_path); 182 + warn("jvmti: cannot create jit cache dir %s", jit_path); 176 183 return -1; 177 184 } 178 185 ··· 245 228 { 246 229 char dump_path[PATH_MAX]; 247 230 
struct jitheader header; 248 - int fd; 231 + int fd, ret; 249 232 FILE *fp; 250 233 251 234 init_arch_timestamp(); ··· 262 245 263 246 memset(&header, 0, sizeof(header)); 264 247 265 - debug_cache_init(); 248 + /* 249 + * jitdump file dir 250 + */ 251 + if (create_jit_cache_dir() < 0) 252 + return NULL; 266 253 267 254 /* 268 255 * jitdump file name 269 256 */ 270 - scnprintf(dump_path, PATH_MAX, "%s/jit-%i.dump", jit_path, getpid()); 257 + ret = snprintf(dump_path, PATH_MAX, "%s/jit-%i.dump", jit_path, getpid()); 258 + if (ret >= PATH_MAX) { 259 + warnx("jvmti: cannot generate jitdump file full path because" 260 + " %s/jit-%i.dump is too long, please check the cwd," 261 + " JITDUMPDIR, and HOME variables", jit_path, getpid()); 262 + return NULL; 263 + } 271 264 272 265 fd = open(dump_path, O_CREAT|O_TRUNC|O_RDWR, 0666); 273 266 if (fd == -1)
+490 -3
tools/perf/scripts/python/exported-sql-viewer.py
··· 119 119 return "[kernel]" 120 120 return name 121 121 122 + def findnth(s, sub, n, offs=0): 123 + pos = s.find(sub) 124 + if pos < 0: 125 + return pos 126 + if n <= 1: 127 + return offs + pos 128 + return findnth(s[pos + 1:], sub, n - 1, offs + pos + 1) 129 + 122 130 # Percent to one decimal place 123 131 124 132 def PercentToOneDP(n, d): ··· 1472 1464 else: 1473 1465 self.find_bar.NotFound() 1474 1466 1467 + # Dialog data item converted and validated using a SQL table 1468 + 1469 + class SQLTableDialogDataItem(): 1470 + 1471 + def __init__(self, glb, label, placeholder_text, table_name, match_column, column_name1, column_name2, parent): 1472 + self.glb = glb 1473 + self.label = label 1474 + self.placeholder_text = placeholder_text 1475 + self.table_name = table_name 1476 + self.match_column = match_column 1477 + self.column_name1 = column_name1 1478 + self.column_name2 = column_name2 1479 + self.parent = parent 1480 + 1481 + self.value = "" 1482 + 1483 + self.widget = QLineEdit() 1484 + self.widget.editingFinished.connect(self.Validate) 1485 + self.widget.textChanged.connect(self.Invalidate) 1486 + self.red = False 1487 + self.error = "" 1488 + self.validated = True 1489 + 1490 + self.last_id = 0 1491 + self.first_time = 0 1492 + self.last_time = 2 ** 64 1493 + if self.table_name == "<timeranges>": 1494 + query = QSqlQuery(self.glb.db) 1495 + QueryExec(query, "SELECT id, time FROM samples ORDER BY id DESC LIMIT 1") 1496 + if query.next(): 1497 + self.last_id = int(query.value(0)) 1498 + self.last_time = int(query.value(1)) 1499 + QueryExec(query, "SELECT time FROM samples WHERE time != 0 ORDER BY id LIMIT 1") 1500 + if query.next(): 1501 + self.first_time = int(query.value(0)) 1502 + if placeholder_text: 1503 + placeholder_text += ", between " + str(self.first_time) + " and " + str(self.last_time) 1504 + 1505 + if placeholder_text: 1506 + self.widget.setPlaceholderText(placeholder_text) 1507 + 1508 + def ValueToIds(self, value): 1509 + ids = [] 1510 + query = 
QSqlQuery(self.glb.db) 1511 + stmt = "SELECT id FROM " + self.table_name + " WHERE " + self.match_column + " = '" + value + "'" 1512 + ret = query.exec_(stmt) 1513 + if ret: 1514 + while query.next(): 1515 + ids.append(str(query.value(0))) 1516 + return ids 1517 + 1518 + def IdBetween(self, query, lower_id, higher_id, order): 1519 + QueryExec(query, "SELECT id FROM samples WHERE id > " + str(lower_id) + " AND id < " + str(higher_id) + " ORDER BY id " + order + " LIMIT 1") 1520 + if query.next(): 1521 + return True, int(query.value(0)) 1522 + else: 1523 + return False, 0 1524 + 1525 + def BinarySearchTime(self, lower_id, higher_id, target_time, get_floor): 1526 + query = QSqlQuery(self.glb.db) 1527 + while True: 1528 + next_id = int((lower_id + higher_id) / 2) 1529 + QueryExec(query, "SELECT time FROM samples WHERE id = " + str(next_id)) 1530 + if not query.next(): 1531 + ok, dbid = self.IdBetween(query, lower_id, next_id, "DESC") 1532 + if not ok: 1533 + ok, dbid = self.IdBetween(query, next_id, higher_id, "") 1534 + if not ok: 1535 + return str(higher_id) 1536 + next_id = dbid 1537 + QueryExec(query, "SELECT time FROM samples WHERE id = " + str(next_id)) 1538 + next_time = int(query.value(0)) 1539 + if get_floor: 1540 + if target_time > next_time: 1541 + lower_id = next_id 1542 + else: 1543 + higher_id = next_id 1544 + if higher_id <= lower_id + 1: 1545 + return str(higher_id) 1546 + else: 1547 + if target_time >= next_time: 1548 + lower_id = next_id 1549 + else: 1550 + higher_id = next_id 1551 + if higher_id <= lower_id + 1: 1552 + return str(lower_id) 1553 + 1554 + def ConvertRelativeTime(self, val): 1555 + print "val ", val 1556 + mult = 1 1557 + suffix = val[-2:] 1558 + if suffix == "ms": 1559 + mult = 1000000 1560 + elif suffix == "us": 1561 + mult = 1000 1562 + elif suffix == "ns": 1563 + mult = 1 1564 + else: 1565 + return val 1566 + val = val[:-2].strip() 1567 + if not self.IsNumber(val): 1568 + return val 1569 + val = int(val) * mult 1570 + if val >= 0: 
1571 + val += self.first_time 1572 + else: 1573 + val += self.last_time 1574 + return str(val) 1575 + 1576 + def ConvertTimeRange(self, vrange): 1577 + print "vrange ", vrange 1578 + if vrange[0] == "": 1579 + vrange[0] = str(self.first_time) 1580 + if vrange[1] == "": 1581 + vrange[1] = str(self.last_time) 1582 + vrange[0] = self.ConvertRelativeTime(vrange[0]) 1583 + vrange[1] = self.ConvertRelativeTime(vrange[1]) 1584 + print "vrange2 ", vrange 1585 + if not self.IsNumber(vrange[0]) or not self.IsNumber(vrange[1]): 1586 + return False 1587 + print "ok1" 1588 + beg_range = max(int(vrange[0]), self.first_time) 1589 + end_range = min(int(vrange[1]), self.last_time) 1590 + if beg_range > self.last_time or end_range < self.first_time: 1591 + return False 1592 + print "ok2" 1593 + vrange[0] = self.BinarySearchTime(0, self.last_id, beg_range, True) 1594 + vrange[1] = self.BinarySearchTime(1, self.last_id + 1, end_range, False) 1595 + print "vrange3 ", vrange 1596 + return True 1597 + 1598 + def AddTimeRange(self, value, ranges): 1599 + print "value ", value 1600 + n = value.count("-") 1601 + if n == 1: 1602 + pass 1603 + elif n == 2: 1604 + if value.split("-")[1].strip() == "": 1605 + n = 1 1606 + elif n == 3: 1607 + n = 2 1608 + else: 1609 + return False 1610 + pos = findnth(value, "-", n) 1611 + vrange = [value[:pos].strip() ,value[pos+1:].strip()] 1612 + if self.ConvertTimeRange(vrange): 1613 + ranges.append(vrange) 1614 + return True 1615 + return False 1616 + 1617 + def InvalidValue(self, value): 1618 + self.value = "" 1619 + palette = QPalette() 1620 + palette.setColor(QPalette.Text,Qt.red) 1621 + self.widget.setPalette(palette) 1622 + self.red = True 1623 + self.error = self.label + " invalid value '" + value + "'" 1624 + self.parent.ShowMessage(self.error) 1625 + 1626 + def IsNumber(self, value): 1627 + try: 1628 + x = int(value) 1629 + except: 1630 + x = 0 1631 + return str(x) == value 1632 + 1633 + def Invalidate(self): 1634 + self.validated = False 1635 + 
1636 + def Validate(self): 1637 + input_string = self.widget.text() 1638 + self.validated = True 1639 + if self.red: 1640 + palette = QPalette() 1641 + self.widget.setPalette(palette) 1642 + self.red = False 1643 + if not len(input_string.strip()): 1644 + self.error = "" 1645 + self.value = "" 1646 + return 1647 + if self.table_name == "<timeranges>": 1648 + ranges = [] 1649 + for value in [x.strip() for x in input_string.split(",")]: 1650 + if not self.AddTimeRange(value, ranges): 1651 + return self.InvalidValue(value) 1652 + ranges = [("(" + self.column_name1 + " >= " + r[0] + " AND " + self.column_name1 + " <= " + r[1] + ")") for r in ranges] 1653 + self.value = " OR ".join(ranges) 1654 + elif self.table_name == "<ranges>": 1655 + singles = [] 1656 + ranges = [] 1657 + for value in [x.strip() for x in input_string.split(",")]: 1658 + if "-" in value: 1659 + vrange = value.split("-") 1660 + if len(vrange) != 2 or not self.IsNumber(vrange[0]) or not self.IsNumber(vrange[1]): 1661 + return self.InvalidValue(value) 1662 + ranges.append(vrange) 1663 + else: 1664 + if not self.IsNumber(value): 1665 + return self.InvalidValue(value) 1666 + singles.append(value) 1667 + ranges = [("(" + self.column_name1 + " >= " + r[0] + " AND " + self.column_name1 + " <= " + r[1] + ")") for r in ranges] 1668 + if len(singles): 1669 + ranges.append(self.column_name1 + " IN (" + ",".join(singles) + ")") 1670 + self.value = " OR ".join(ranges) 1671 + elif self.table_name: 1672 + all_ids = [] 1673 + for value in [x.strip() for x in input_string.split(",")]: 1674 + ids = self.ValueToIds(value) 1675 + if len(ids): 1676 + all_ids.extend(ids) 1677 + else: 1678 + return self.InvalidValue(value) 1679 + self.value = self.column_name1 + " IN (" + ",".join(all_ids) + ")" 1680 + if self.column_name2: 1681 + self.value = "( " + self.value + " OR " + self.column_name2 + " IN (" + ",".join(all_ids) + ") )" 1682 + else: 1683 + self.value = input_string.strip() 1684 + self.error = "" 1685 + 
self.parent.ClearMessage() 1686 + 1687 + def IsValid(self): 1688 + if not self.validated: 1689 + self.Validate() 1690 + if len(self.error): 1691 + self.parent.ShowMessage(self.error) 1692 + return False 1693 + return True 1694 + 1695 + # Selected branch report creation dialog 1696 + 1697 + class SelectedBranchDialog(QDialog): 1698 + 1699 + def __init__(self, glb, parent=None): 1700 + super(SelectedBranchDialog, self).__init__(parent) 1701 + 1702 + self.glb = glb 1703 + 1704 + self.name = "" 1705 + self.where_clause = "" 1706 + 1707 + self.setWindowTitle("Selected Branches") 1708 + self.setMinimumWidth(600) 1709 + 1710 + items = ( 1711 + ("Report name:", "Enter a name to appear in the window title bar", "", "", "", ""), 1712 + ("Time ranges:", "Enter time ranges", "<timeranges>", "", "samples.id", ""), 1713 + ("CPUs:", "Enter CPUs or ranges e.g. 0,5-6", "<ranges>", "", "cpu", ""), 1714 + ("Commands:", "Only branches with these commands will be included", "comms", "comm", "comm_id", ""), 1715 + ("PIDs:", "Only branches with these process IDs will be included", "threads", "pid", "thread_id", ""), 1716 + ("TIDs:", "Only branches with these thread IDs will be included", "threads", "tid", "thread_id", ""), 1717 + ("DSOs:", "Only branches with these DSOs will be included", "dsos", "short_name", "samples.dso_id", "to_dso_id"), 1718 + ("Symbols:", "Only branches with these symbols will be included", "symbols", "name", "symbol_id", "to_symbol_id"), 1719 + ("Raw SQL clause: ", "Enter a raw SQL WHERE clause", "", "", "", ""), 1720 + ) 1721 + self.data_items = [SQLTableDialogDataItem(glb, *x, parent=self) for x in items] 1722 + 1723 + self.grid = QGridLayout() 1724 + 1725 + for row in xrange(len(self.data_items)): 1726 + self.grid.addWidget(QLabel(self.data_items[row].label), row, 0) 1727 + self.grid.addWidget(self.data_items[row].widget, row, 1) 1728 + 1729 + self.status = QLabel() 1730 + 1731 + self.ok_button = QPushButton("Ok", self) 1732 + self.ok_button.setDefault(True) 
1733 + self.ok_button.released.connect(self.Ok) 1734 + self.ok_button.setSizePolicy(QSizePolicy.Fixed, QSizePolicy.Fixed) 1735 + 1736 + self.cancel_button = QPushButton("Cancel", self) 1737 + self.cancel_button.released.connect(self.reject) 1738 + self.cancel_button.setSizePolicy(QSizePolicy.Fixed, QSizePolicy.Fixed) 1739 + 1740 + self.hbox = QHBoxLayout() 1741 + #self.hbox.addStretch() 1742 + self.hbox.addWidget(self.status) 1743 + self.hbox.addWidget(self.ok_button) 1744 + self.hbox.addWidget(self.cancel_button) 1745 + 1746 + self.vbox = QVBoxLayout() 1747 + self.vbox.addLayout(self.grid) 1748 + self.vbox.addLayout(self.hbox) 1749 + 1750 + self.setLayout(self.vbox); 1751 + 1752 + def Ok(self): 1753 + self.name = self.data_items[0].value 1754 + if not self.name: 1755 + self.ShowMessage("Report name is required") 1756 + return 1757 + for d in self.data_items: 1758 + if not d.IsValid(): 1759 + return 1760 + for d in self.data_items[1:]: 1761 + if len(d.value): 1762 + if len(self.where_clause): 1763 + self.where_clause += " AND " 1764 + self.where_clause += d.value 1765 + if len(self.where_clause): 1766 + self.where_clause = " AND ( " + self.where_clause + " ) " 1767 + else: 1768 + self.ShowMessage("No selection") 1769 + return 1770 + self.accept() 1771 + 1772 + def ShowMessage(self, msg): 1773 + self.status.setText("<font color=#FF0000>" + msg) 1774 + 1775 + def ClearMessage(self): 1776 + self.status.setText("") 1777 + 1475 1778 # Event list 1476 1779 1477 1780 def GetEventList(db): ··· 1975 1656 def FindDone(self, row): 1976 1657 self.find_bar.Idle() 1977 1658 if row >= 0: 1978 - self.view.setCurrentIndex(self.model.index(row, 0, QModelIndex())) 1659 + self.view.setCurrentIndex(self.model.mapFromSource(self.data_model.index(row, 0, QModelIndex()))) 1979 1660 else: 1980 1661 self.find_bar.NotFound() 1981 1662 ··· 2084 1765 def setActiveSubWindow(self, nr): 2085 1766 self.mdi_area.setActiveSubWindow(self.mdi_area.subWindowList()[nr - 1]) 2086 1767 1768 + # Help text 
1769 + 1770 + glb_help_text = """ 1771 + <h1>Contents</h1> 1772 + <style> 1773 + p.c1 { 1774 + text-indent: 40px; 1775 + } 1776 + p.c2 { 1777 + text-indent: 80px; 1778 + } 1779 + } 1780 + </style> 1781 + <p class=c1><a href=#reports>1. Reports</a></p> 1782 + <p class=c2><a href=#callgraph>1.1 Context-Sensitive Call Graph</a></p> 1783 + <p class=c2><a href=#allbranches>1.2 All branches</a></p> 1784 + <p class=c2><a href=#selectedbranches>1.3 Selected branches</a></p> 1785 + <p class=c1><a href=#tables>2. Tables</a></p> 1786 + <h1 id=reports>1. Reports</h1> 1787 + <h2 id=callgraph>1.1 Context-Sensitive Call Graph</h2> 1788 + The result is a GUI window with a tree representing a context-sensitive 1789 + call-graph. Expanding a couple of levels of the tree and adjusting column 1790 + widths to suit will display something like: 1791 + <pre> 1792 + Call Graph: pt_example 1793 + Call Path Object Count Time(ns) Time(%) Branch Count Branch Count(%) 1794 + v- ls 1795 + v- 2638:2638 1796 + v- _start ld-2.19.so 1 10074071 100.0 211135 100.0 1797 + |- unknown unknown 1 13198 0.1 1 0.0 1798 + >- _dl_start ld-2.19.so 1 1400980 13.9 19637 9.3 1799 + >- _d_linit_internal ld-2.19.so 1 448152 4.4 11094 5.3 1800 + v-__libc_start_main@plt ls 1 8211741 81.5 180397 85.4 1801 + >- _dl_fixup ld-2.19.so 1 7607 0.1 108 0.1 1802 + >- __cxa_atexit libc-2.19.so 1 11737 0.1 10 0.0 1803 + >- __libc_csu_init ls 1 10354 0.1 10 0.0 1804 + |- _setjmp libc-2.19.so 1 0 0.0 4 0.0 1805 + v- main ls 1 8182043 99.6 180254 99.9 1806 + </pre> 1807 + <h3>Points to note:</h3> 1808 + <ul> 1809 + <li>The top level is a command name (comm)</li> 1810 + <li>The next level is a thread (pid:tid)</li> 1811 + <li>Subsequent levels are functions</li> 1812 + <li>'Count' is the number of calls</li> 1813 + <li>'Time' is the elapsed time until the function returns</li> 1814 + <li>Percentages are relative to the level above</li> 1815 + <li>'Branch Count' is the total number of branches for that function and all functions 
that it calls 1816 + </ul> 1817 + <h3>Find</h3> 1818 + Ctrl-F displays a Find bar which finds function names by either an exact match or a pattern match. 1819 + The pattern matching symbols are ? for any character and * for zero or more characters. 1820 + <h2 id=allbranches>1.2 All branches</h2> 1821 + The All branches report displays all branches in chronological order. 1822 + Not all data is fetched immediately. More records can be fetched using the Fetch bar provided. 1823 + <h3>Disassembly</h3> 1824 + Open a branch to display disassembly. This only works if: 1825 + <ol> 1826 + <li>The disassembler is available. Currently, only Intel XED is supported - see <a href=#xed>Intel XED Setup</a></li> 1827 + <li>The object code is available. Currently, only the perf build ID cache is searched for object code. 1828 + The default directory ~/.debug can be overridden by setting environment variable PERF_BUILDID_DIR. 1829 + One exception is kcore where the DSO long name is used (refer dsos_view on the Tables menu), 1830 + or alternatively, set environment variable PERF_KCORE to the kcore file name.</li> 1831 + </ol> 1832 + <h4 id=xed>Intel XED Setup</h4> 1833 + To use Intel XED, libxed.so must be present. To build and install libxed.so: 1834 + <pre> 1835 + git clone https://github.com/intelxed/mbuild.git mbuild 1836 + git clone https://github.com/intelxed/xed 1837 + cd xed 1838 + ./mfile.py --share 1839 + sudo ./mfile.py --prefix=/usr/local install 1840 + sudo ldconfig 1841 + </pre> 1842 + <h3>Find</h3> 1843 + Ctrl-F displays a Find bar which finds substrings by either an exact match or a regular expression match. 1844 + Refer to Python documentation for the regular expression syntax. 1845 + All columns are searched, but only currently fetched rows are searched. 1846 + <h2 id=selectedbranches>1.3 Selected branches</h2> 1847 + This is the same as the <a href=#allbranches>All branches</a> report but with the data reduced 1848 + by various selection criteria. 
A dialog box displays available criteria which are AND'ed together. 1849 + <h3>1.3.1 Time ranges</h3> 1850 + The time ranges hint text shows the total time range. Relative time ranges can also be entered in 1851 + ms, us or ns. Also, negative values are relative to the end of trace. Examples: 1852 + <pre> 1853 + 81073085947329-81073085958238 From 81073085947329 to 81073085958238 1854 + 100us-200us From 100us to 200us 1855 + 10ms- From 10ms to the end 1856 + -100ns The first 100ns 1857 + -10ms- The last 10ms 1858 + </pre> 1859 + N.B. Due to the granularity of timestamps, there could be no branches in any given time range. 1860 + <h1 id=tables>2. Tables</h1> 1861 + The Tables menu shows all tables and views in the database. Most tables have an associated view 1862 + which displays the information in a more friendly way. Not all data for large tables is fetched 1863 + immediately. More records can be fetched using the Fetch bar provided. Columns can be sorted, 1864 + but that can be slow for large tables. 1865 + <p>There are also tables of database meta-information. 1866 + For SQLite3 databases, the sqlite_master table is included. 1867 + For PostgreSQL databases, information_schema.tables/views/columns are included. 1868 + <h3>Find</h3> 1869 + Ctrl-F displays a Find bar which finds substrings by either an exact match or a regular expression match. 1870 + Refer to Python documentation for the regular expression syntax. 1871 + All columns are searched, but only currently fetched rows are searched. 1872 + <p>N.B. Results are found in id order, so if the table is re-ordered, find-next and find-previous 1873 + will go to the next/previous result in id order, instead of display order. 
1874 + """ 1875 + 1876 + # Help window 1877 + 1878 + class HelpWindow(QMdiSubWindow): 1879 + 1880 + def __init__(self, glb, parent=None): 1881 + super(HelpWindow, self).__init__(parent) 1882 + 1883 + self.text = QTextBrowser() 1884 + self.text.setHtml(glb_help_text) 1885 + self.text.setReadOnly(True) 1886 + self.text.setOpenExternalLinks(True) 1887 + 1888 + self.setWidget(self.text) 1889 + 1890 + AddSubWindow(glb.mainwindow.mdi_area, self, "Exported SQL Viewer Help") 1891 + 1892 + # Main window that only displays the help text 1893 + 1894 + class HelpOnlyWindow(QMainWindow): 1895 + 1896 + def __init__(self, parent=None): 1897 + super(HelpOnlyWindow, self).__init__(parent) 1898 + 1899 + self.setMinimumSize(200, 100) 1900 + self.resize(800, 600) 1901 + self.setWindowTitle("Exported SQL Viewer Help") 1902 + self.setWindowIcon(self.style().standardIcon(QStyle.SP_MessageBoxInformation)) 1903 + 1904 + self.text = QTextBrowser() 1905 + self.text.setHtml(glb_help_text) 1906 + self.text.setReadOnly(True) 1907 + self.text.setOpenExternalLinks(True) 1908 + 1909 + self.setCentralWidget(self.text) 1910 + 2087 1911 # Font resize 2088 1912 2089 1913 def ResizeFont(widget, diff): ··· 2313 1851 2314 1852 self.window_menu = WindowMenu(self.mdi_area, menu) 2315 1853 1854 + help_menu = menu.addMenu("&Help") 1855 + help_menu.addAction(CreateAction("&Exported SQL Viewer Help", "Helpful information", self.Help, self, QKeySequence.HelpContents)) 1856 + 2316 1857 def Find(self): 2317 1858 win = self.mdi_area.activeSubWindow() 2318 1859 if win: ··· 2353 1888 if event == "branches": 2354 1889 label = "All branches" if branches_events == 1 else "All branches " + "(id=" + dbid + ")" 2355 1890 reports_menu.addAction(CreateAction(label, "Create a new window displaying branch events", lambda x=dbid: self.NewBranchView(x), self)) 1891 + label = "Selected branches" if branches_events == 1 else "Selected branches " + "(id=" + dbid + ")" 1892 + reports_menu.addAction(CreateAction(label, "Create a new 
window displaying branch events", lambda x=dbid: self.NewSelectedBranchView(x), self)) 2356 1893 2357 1894 def TableMenu(self, tables, menu): 2358 1895 table_menu = menu.addMenu("&Tables") ··· 2367 1900 def NewBranchView(self, event_id): 2368 1901 BranchWindow(self.glb, event_id, "", "", self) 2369 1902 1903 + def NewSelectedBranchView(self, event_id): 1904 + dialog = SelectedBranchDialog(self.glb, self) 1905 + ret = dialog.exec_() 1906 + if ret: 1907 + BranchWindow(self.glb, event_id, dialog.name, dialog.where_clause, self) 1908 + 2370 1909 def NewTableView(self, table_name): 2371 1910 TableWindow(self.glb, table_name, self) 1911 + 1912 + def Help(self): 1913 + HelpWindow(self.glb, self) 2372 1914 2373 1915 # XED Disassembler 2374 1916 ··· 2405 1929 class LibXED(): 2406 1930 2407 1931 def __init__(self): 2408 - self.libxed = CDLL("libxed.so") 1932 + try: 1933 + self.libxed = CDLL("libxed.so") 1934 + except: 1935 + self.libxed = None 1936 + if not self.libxed: 1937 + self.libxed = CDLL("/usr/local/lib/libxed.so") 2409 1938 2410 1939 self.xed_tables_init = self.libxed.xed_tables_init 2411 1940 self.xed_tables_init.restype = None ··· 2578 2097 2579 2098 def Main(): 2580 2099 if (len(sys.argv) < 2): 2581 - print >> sys.stderr, "Usage is: exported-sql-viewer.py <database name>" 2100 + print >> sys.stderr, "Usage is: exported-sql-viewer.py {<database name> | --help-only}" 2582 2101 raise Exception("Too few arguments") 2583 2102 2584 2103 dbname = sys.argv[1] 2104 + if dbname == "--help-only": 2105 + app = QApplication(sys.argv) 2106 + mainwindow = HelpOnlyWindow() 2107 + mainwindow.show() 2108 + err = app.exec_() 2109 + sys.exit(err) 2585 2110 2586 2111 is_sqlite3 = False 2587 2112 try:
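The Selected Branches dialog above accepts time ranges such as `100us-200us` or `-10ms-`: values carrying a unit suffix are relative (positive from the start of the trace, negative from the end), while bare numbers are absolute timestamps. A minimal Python 3 sketch of that conversion rule (not the perf script itself; `resolve_time` and its arguments are illustrative names):

```python
# Illustrative sketch of the time-range interpretation described in the
# help text: suffixed values are offsets in ns, bare values are absolute.
UNIT_NS = {"ms": 1_000_000, "us": 1_000, "ns": 1}

def resolve_time(value, first_time, last_time):
    """Map one endpoint of a range to an absolute nanosecond timestamp."""
    suffix = value[-2:]
    if suffix not in UNIT_NS:
        return int(value)                 # bare number: absolute timestamp
    ns = int(value[:-2].strip()) * UNIT_NS[suffix]
    # positive offsets anchor at the trace start, negative at the end
    return first_time + ns if ns >= 0 else last_time + ns
```

With a trace spanning 0..10^9 ns, `"100us"` resolves to 100000 and `"-10ms"` to 990000000.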
+1 -1
tools/perf/tests/attr/base-record
··· 9 9 config=0 10 10 sample_period=* 11 11 sample_type=263 12 - read_format=0 12 + read_format=0|4 13 13 disabled=1 14 14 inherit=1 15 15 pinned=0
-1
tools/perf/tests/attr/test-record-group-sampling
··· 37 37 sample_period=0 38 38 freq=0 39 39 write_backward=0 40 - sample_id_all=0
+1
tools/perf/trace/beauty/ioctl.c
··· 31 31 "TCSETSW2", "TCSETSF2", "TIOCGRS48", "TIOCSRS485", "TIOCGPTN", "TIOCSPTLCK", 32 32 "TIOCGDEV", "TCSETX", "TCSETXF", "TCSETXW", "TIOCSIG", "TIOCVHANGUP", "TIOCGPKT", 33 33 "TIOCGPTLCK", [_IOC_NR(TIOCGEXCL)] = "TIOCGEXCL", "TIOCGPTPEER", 34 + "TIOCGISO7816", "TIOCSISO7816", 34 35 [_IOC_NR(FIONCLEX)] = "FIONCLEX", "FIOCLEX", "FIOASYNC", "TIOCSERCONFIG", 35 36 "TIOCSERGWILD", "TIOCSERSWILD", "TIOCGLCKTRMIOS", "TIOCSLCKTRMIOS", 36 37 "TIOCSERGSTRUCT", "TIOCSERGETLSR", "TIOCSERGETMULTI", "TIOCSERSETMULTI",
+1
tools/perf/util/Build
··· 10 10 libperf-y += evsel.o 11 11 libperf-y += evsel_fprintf.o 12 12 libperf-y += find_bit.o 13 + libperf-y += get_current_dir_name.o 13 14 libperf-y += kallsyms.o 14 15 libperf-y += levenshtein.o 15 16 libperf-y += llvm-utils.o
+27
tools/perf/util/evlist.c
··· 1810 1810 leader->forced_leader = true; 1811 1811 } 1812 1812 } 1813 + 1814 + struct perf_evsel *perf_evlist__reset_weak_group(struct perf_evlist *evsel_list, 1815 + struct perf_evsel *evsel) 1816 + { 1817 + struct perf_evsel *c2, *leader; 1818 + bool is_open = true; 1819 + 1820 + leader = evsel->leader; 1821 + pr_debug("Weak group for %s/%d failed\n", 1822 + leader->name, leader->nr_members); 1823 + 1824 + /* 1825 + * for_each_group_member doesn't work here because it doesn't 1826 + * include the first entry. 1827 + */ 1828 + evlist__for_each_entry(evsel_list, c2) { 1829 + if (c2 == evsel) 1830 + is_open = false; 1831 + if (c2->leader == leader) { 1832 + if (is_open) 1833 + perf_evsel__close(c2); 1834 + c2->leader = c2; 1835 + c2->nr_members = 0; 1836 + } 1837 + } 1838 + return leader; 1839 + }
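The `perf_evlist__reset_weak_group()` helper above walks the whole event list because `for_each_group_member` skips the leader: members seen before the failing event were already opened and must be closed, and every member of the group becomes its own leader. An illustrative Python model of that loop (dicts stand in for `struct perf_evsel`; this is not the C code):

```python
# Model of the weak-group fallback: close already-opened members of the
# failing event's group, then dissolve the group.
def reset_weak_group(evlist, failed):
    leader = failed["leader"]
    is_open = True
    closed = []
    for ev in evlist:
        if ev is failed:
            is_open = False               # events from here on were never opened
        if ev["leader"] is leader:        # includes the leader itself
            if is_open:
                closed.append(ev["name"])  # stand-in for perf_evsel__close()
            ev["leader"] = ev
            ev["nr_members"] = 0
    return leader, closed
```

If a group {a, b, c} with leader `a` fails while opening `b`, only `a` is closed and all three end up as their own leaders.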
+3
tools/perf/util/evlist.h
··· 312 312 313 313 void perf_evlist__force_leader(struct perf_evlist *evlist); 314 314 315 + struct perf_evsel *perf_evlist__reset_weak_group(struct perf_evlist *evlist, 316 + struct perf_evsel *evsel); 317 + 315 318 #endif /* __PERF_EVLIST_H */
+1 -2
tools/perf/util/evsel.c
··· 956 956 attr->sample_freq = 0; 957 957 attr->sample_period = 0; 958 958 attr->write_backward = 0; 959 - attr->sample_id_all = 0; 960 959 } 961 960 962 961 if (opts->no_samples) ··· 1092 1093 attr->exclude_user = 1; 1093 1094 } 1094 1095 1095 - if (evsel->own_cpus) 1096 + if (evsel->own_cpus || evsel->unit) 1096 1097 evsel->attr.read_format |= PERF_FORMAT_ID; 1097 1098 1098 1099 /*
+18
tools/perf/util/get_current_dir_name.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (C) 2018, Red Hat Inc, Arnaldo Carvalho de Melo <acme@redhat.com> 3 + // 4 + #ifndef HAVE_GET_CURRENT_DIR_NAME 5 + #include "util.h" 6 + #include <unistd.h> 7 + #include <stdlib.h> 8 + #include <stdlib.h> 9 + 10 + /* Android's 'bionic' library, for one, doesn't have this */ 11 + 12 + char *get_current_dir_name(void) 13 + { 14 + char pwd[PATH_MAX]; 15 + 16 + return getcwd(pwd, sizeof(pwd)) == NULL ? NULL : strdup(pwd); 17 + } 18 + #endif // HAVE_GET_CURRENT_DIR_NAME
+4
tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
··· 1474 1474 decoder->have_calc_cyc_to_tsc = false; 1475 1475 intel_pt_calc_cyc_to_tsc(decoder, true); 1476 1476 } 1477 + 1478 + intel_pt_log_to("Setting timestamp", decoder->timestamp); 1477 1479 } 1478 1480 1479 1481 static void intel_pt_calc_cbr(struct intel_pt_decoder *decoder) ··· 1516 1514 decoder->timestamp = timestamp; 1517 1515 1518 1516 decoder->timestamp_insn_cnt = 0; 1517 + 1518 + intel_pt_log_to("Setting timestamp", decoder->timestamp); 1519 1519 } 1520 1520 1521 1521 /* Walk PSB+ packets when already in sync. */
+5
tools/perf/util/intel-pt-decoder/intel-pt-log.c
··· 31 31 static char log_name[MAX_LOG_NAME]; 32 32 bool intel_pt_enable_logging; 33 33 34 + void *intel_pt_log_fp(void) 35 + { 36 + return f; 37 + } 38 + 34 39 void intel_pt_log_enable(void) 35 40 { 36 41 intel_pt_enable_logging = true;
+1
tools/perf/util/intel-pt-decoder/intel-pt-log.h
··· 22 22 23 23 struct intel_pt_pkt; 24 24 25 + void *intel_pt_log_fp(void); 25 26 void intel_pt_log_enable(void); 26 27 void intel_pt_log_disable(void); 27 28 void intel_pt_log_set_name(const char *name);
+13 -3
tools/perf/util/intel-pt.c
··· 206 206 intel_pt_dump(pt, buf, len); 207 207 } 208 208 209 + static void intel_pt_log_event(union perf_event *event) 210 + { 211 + FILE *f = intel_pt_log_fp(); 212 + 213 + if (!intel_pt_enable_logging || !f) 214 + return; 215 + 216 + perf_event__fprintf(event, f); 217 + } 218 + 209 219 static int intel_pt_do_fix_overlap(struct intel_pt *pt, struct auxtrace_buffer *a, 210 220 struct auxtrace_buffer *b) 211 221 { ··· 2020 2010 event->header.type == PERF_RECORD_SWITCH_CPU_WIDE) 2021 2011 err = intel_pt_context_switch(pt, event, sample); 2022 2012 2023 - intel_pt_log("event %s (%u): cpu %d time %"PRIu64" tsc %#"PRIx64"\n", 2024 - perf_event__name(event->header.type), event->header.type, 2025 - sample->cpu, sample->time, timestamp); 2013 + intel_pt_log("event %u: cpu %d time %"PRIu64" tsc %#"PRIx64" ", 2014 + event->header.type, sample->cpu, sample->time, timestamp); 2015 + intel_pt_log_event(event); 2026 2016 2027 2017 return err; 2028 2018 }
+15 -2
tools/perf/util/namespaces.c
··· 18 18 #include <stdio.h> 19 19 #include <string.h> 20 20 #include <unistd.h> 21 + #include <asm/bug.h> 21 22 22 23 struct namespaces *namespaces__new(struct namespaces_event *event) 23 24 { ··· 187 186 char curpath[PATH_MAX]; 188 187 int oldns = -1; 189 188 int newns = -1; 189 + char *oldcwd = NULL; 190 190 191 191 if (nc == NULL) 192 192 return; ··· 201 199 if (snprintf(curpath, PATH_MAX, "/proc/self/ns/mnt") >= PATH_MAX) 202 200 return; 203 201 202 + oldcwd = get_current_dir_name(); 203 + if (!oldcwd) 204 + return; 205 + 204 206 oldns = open(curpath, O_RDONLY); 205 207 if (oldns < 0) 206 - return; 208 + goto errout; 207 209 208 210 newns = open(nsi->mntns_path, O_RDONLY); 209 211 if (newns < 0) ··· 216 210 if (setns(newns, CLONE_NEWNS) < 0) 217 211 goto errout; 218 212 213 + nc->oldcwd = oldcwd; 219 214 nc->oldns = oldns; 220 215 nc->newns = newns; 221 216 return; 222 217 223 218 errout: 219 + free(oldcwd); 224 220 if (oldns > -1) 225 221 close(oldns); 226 222 if (newns > -1) ··· 231 223 232 224 void nsinfo__mountns_exit(struct nscookie *nc) 233 225 { 234 - if (nc == NULL || nc->oldns == -1 || nc->newns == -1) 226 + if (nc == NULL || nc->oldns == -1 || nc->newns == -1 || !nc->oldcwd) 235 227 return; 236 228 237 229 setns(nc->oldns, CLONE_NEWNS); 230 + 231 + if (nc->oldcwd) { 232 + WARN_ON_ONCE(chdir(nc->oldcwd)); 233 + zfree(&nc->oldcwd); 234 + } 238 235 239 236 if (nc->oldns > -1) { 240 237 close(nc->oldns);
+1
tools/perf/util/namespaces.h
··· 38 38 struct nscookie { 39 39 int oldns; 40 40 int newns; 41 + char *oldcwd; 41 42 }; 42 43 43 44 int nsinfo__init(struct nsinfo *nsi);
+1 -1
tools/perf/util/pmu.c
··· 773 773 774 774 if (!is_arm_pmu_core(name)) { 775 775 pname = pe->pmu ? pe->pmu : "cpu"; 776 - if (strncmp(pname, name, strlen(pname))) 776 + if (strcmp(pname, name)) 777 777 continue; 778 778 } 779 779
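The pmu.c fix matters because `strncmp(pname, name, strlen(pname))` is a prefix comparison, not an equality test: a table entry for the "cpu" PMU would also match any PMU whose name merely starts with "cpu". A small Python illustration of the two behaviours (the PMU names are hypothetical):

```python
def strncmp_style_match(pname, name):
    """Old behaviour: compare only the first len(pname) chars (prefix match)."""
    return name.startswith(pname)

def strcmp_style_match(pname, name):
    """New behaviour: the full names must be equal."""
    return pname == name
```

So for `pname = "cpu"`, the old test accepts a hypothetical `"cpufoo"` PMU while the new one does not; both accept `"cpu"` itself.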
+4
tools/perf/util/util.h
··· 59 59 60 60 const char *perf_tip(const char *dirpath); 61 61 62 + #ifndef HAVE_GET_CURRENT_DIR_NAME 63 + char *get_current_dir_name(void); 64 + #endif 65 + 62 66 #ifndef HAVE_SCHED_GETCPU_SUPPORT 63 67 int sched_getcpu(void); 64 68 #endif
+6 -6
tools/power/cpupower/Makefile
··· 129 129 WARNINGS += $(call cc-supports,-Wdeclaration-after-statement) 130 130 WARNINGS += -Wshadow 131 131 132 - CFLAGS += -DVERSION=\"$(VERSION)\" -DPACKAGE=\"$(PACKAGE)\" \ 132 + override CFLAGS += -DVERSION=\"$(VERSION)\" -DPACKAGE=\"$(PACKAGE)\" \ 133 133 -DPACKAGE_BUGREPORT=\"$(PACKAGE_BUGREPORT)\" -D_GNU_SOURCE 134 134 135 135 UTIL_OBJS = utils/helpers/amd.o utils/helpers/msr.o \ ··· 156 156 LIB_OBJS = lib/cpufreq.o lib/cpupower.o lib/cpuidle.o 157 157 LIB_OBJS := $(addprefix $(OUTPUT),$(LIB_OBJS)) 158 158 159 - CFLAGS += -pipe 159 + override CFLAGS += -pipe 160 160 161 161 ifeq ($(strip $(NLS)),true) 162 162 INSTALL_NLS += install-gmo 163 163 COMPILE_NLS += create-gmo 164 - CFLAGS += -DNLS 164 + override CFLAGS += -DNLS 165 165 endif 166 166 167 167 ifeq ($(strip $(CPUFREQ_BENCH)),true) ··· 175 175 UTIL_SRC += $(LIB_SRC) 176 176 endif 177 177 178 - CFLAGS += $(WARNINGS) 178 + override CFLAGS += $(WARNINGS) 179 179 180 180 ifeq ($(strip $(V)),false) 181 181 QUIET=@ ··· 188 188 189 189 # if DEBUG is enabled, then we do not strip or optimize 190 190 ifeq ($(strip $(DEBUG)),true) 191 - CFLAGS += -O1 -g -DDEBUG 191 + override CFLAGS += -O1 -g -DDEBUG 192 192 STRIPCMD = /bin/true -Since_we_are_debugging 193 193 else 194 - CFLAGS += $(OPTIMIZATION) -fomit-frame-pointer 194 + override CFLAGS += $(OPTIMIZATION) -fomit-frame-pointer 195 195 STRIPCMD = $(STRIP) -s --remove-section=.note --remove-section=.comment 196 196 endif 197 197
+1 -1
tools/power/cpupower/bench/Makefile
··· 9 9 ifeq ($(strip $(STATIC)),true) 10 10 LIBS = -L../ -L$(OUTPUT) -lm 11 11 OBJS = $(OUTPUT)main.o $(OUTPUT)parse.o $(OUTPUT)system.o $(OUTPUT)benchmark.o \ 12 - $(OUTPUT)../lib/cpufreq.o $(OUTPUT)../lib/sysfs.o 12 + $(OUTPUT)../lib/cpufreq.o $(OUTPUT)../lib/cpupower.o 13 13 else 14 14 LIBS = -L../ -L$(OUTPUT) -lm -lcpupower 15 15 OBJS = $(OUTPUT)main.o $(OUTPUT)parse.o $(OUTPUT)system.o $(OUTPUT)benchmark.o
+2 -2
tools/power/cpupower/debug/x86_64/Makefile
··· 13 13 default: all 14 14 15 15 $(OUTPUT)centrino-decode: ../i386/centrino-decode.c 16 - $(CC) $(CFLAGS) -o $@ $< 16 + $(CC) $(CFLAGS) -o $@ $(LDFLAGS) $< 17 17 18 18 $(OUTPUT)powernow-k8-decode: ../i386/powernow-k8-decode.c 19 - $(CC) $(CFLAGS) -o $@ $< 19 + $(CC) $(CFLAGS) -o $@ $(LDFLAGS) $< 20 20 21 21 all: $(OUTPUT)centrino-decode $(OUTPUT)powernow-k8-decode 22 22
+1 -1
tools/power/cpupower/lib/cpufreq.c
··· 28 28 29 29 snprintf(path, sizeof(path), PATH_TO_CPU "cpu%u/cpufreq/%s", 30 30 cpu, fname); 31 - return sysfs_read_file(path, buf, buflen); 31 + return cpupower_read_sysfs(path, buf, buflen); 32 32 } 33 33 34 34 /* helper function to write a new value to a /sys file */
+1 -1
tools/power/cpupower/lib/cpuidle.c
··· 319 319 320 320 snprintf(path, sizeof(path), PATH_TO_CPU "cpuidle/%s", fname); 321 321 322 - return sysfs_read_file(path, buf, buflen); 322 + return cpupower_read_sysfs(path, buf, buflen); 323 323 } 324 324 325 325
+2 -2
tools/power/cpupower/lib/cpupower.c
··· 15 15 #include "cpupower.h" 16 16 #include "cpupower_intern.h" 17 17 18 - unsigned int sysfs_read_file(const char *path, char *buf, size_t buflen) 18 + unsigned int cpupower_read_sysfs(const char *path, char *buf, size_t buflen) 19 19 { 20 20 int fd; 21 21 ssize_t numread; ··· 95 95 96 96 snprintf(path, sizeof(path), PATH_TO_CPU "cpu%u/topology/%s", 97 97 cpu, fname); 98 - if (sysfs_read_file(path, linebuf, MAX_LINE_LEN) == 0) 98 + if (cpupower_read_sysfs(path, linebuf, MAX_LINE_LEN) == 0) 99 99 return -1; 100 100 *result = strtol(linebuf, &endp, 0); 101 101 if (endp == linebuf || errno == ERANGE)
+1 -1
tools/power/cpupower/lib/cpupower_intern.h
··· 3 3 #define MAX_LINE_LEN 4096 4 4 #define SYSFS_PATH_MAX 255 5 5 6 - unsigned int sysfs_read_file(const char *path, char *buf, size_t buflen); 6 + unsigned int cpupower_read_sysfs(const char *path, char *buf, size_t buflen);
+4 -4
tools/testing/nvdimm/test/nfit.c
··· 140 140 [6] = NFIT_DIMM_HANDLE(1, 0, 0, 0, 1), 141 141 }; 142 142 143 - static unsigned long dimm_fail_cmd_flags[NUM_DCR]; 144 - static int dimm_fail_cmd_code[NUM_DCR]; 143 + static unsigned long dimm_fail_cmd_flags[ARRAY_SIZE(handle)]; 144 + static int dimm_fail_cmd_code[ARRAY_SIZE(handle)]; 145 145 146 146 static const struct nd_intel_smart smart_def = { 147 147 .flags = ND_INTEL_SMART_HEALTH_VALID ··· 205 205 unsigned long deadline; 206 206 spinlock_t lock; 207 207 } ars_state; 208 - struct device *dimm_dev[NUM_DCR]; 208 + struct device *dimm_dev[ARRAY_SIZE(handle)]; 209 209 struct nd_intel_smart *smart; 210 210 struct nd_intel_smart_threshold *smart_threshold; 211 211 struct badrange badrange; ··· 2680 2680 u32 nfit_handle = __to_nfit_memdev(nfit_mem)->device_handle; 2681 2681 int i; 2682 2682 2683 - for (i = 0; i < NUM_DCR; i++) 2683 + for (i = 0; i < ARRAY_SIZE(handle); i++) 2684 2684 if (nfit_handle == handle[i]) 2685 2685 dev_set_drvdata(nfit_test->dimm_dev[i], 2686 2686 nfit_mem);
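The nfit.c change sizes the per-DIMM bookkeeping arrays from the `handle[]` table they are indexed by (`ARRAY_SIZE(handle)`) rather than the unrelated constant `NUM_DCR`, which was smaller than the table. The same principle in Python, with illustrative values standing in for the real handles:

```python
# Derive a parallel structure's length from the data it mirrors, not
# from an independent constant that can drift out of sync.
handles = [0x0001, 0x0002, 0x0101, 0x0102, 0x0201, 0x0202, 0x1001]
NUM_DCR = 5  # stand-in for the old, too-small constant

fail_flags_old = [0] * NUM_DCR        # indexing handles[5:] would overflow
fail_flags_new = [0] * len(handles)   # always matches the table it tracks
```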
+1
tools/testing/selftests/Makefile
··· 24 24 TARGETS += mount 25 25 TARGETS += mqueue 26 26 TARGETS += net 27 + TARGETS += netfilter 27 28 TARGETS += nsfs 28 29 TARGETS += powerpc 29 30 TARGETS += proc
+4 -1
tools/testing/selftests/bpf/test_netcnt.c
··· 81 81 goto err; 82 82 } 83 83 84 - assert(system("ping localhost -6 -c 10000 -f -q > /dev/null") == 0); 84 + if (system("which ping6 &>/dev/null") == 0) 85 + assert(!system("ping6 localhost -c 10000 -f -q > /dev/null")); 86 + else 87 + assert(!system("ping -6 localhost -c 10000 -f -q > /dev/null")); 85 88 86 89 if (bpf_prog_query(cgroup_fd, BPF_CGROUP_INET_EGRESS, 0, NULL, NULL, 87 90 &prog_cnt)) {
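The test_netcnt.c fix probes for a standalone `ping6` binary and falls back to `ping -6` when it is absent. A sketch of the same probe in Python, parameterised on the lookup function so it can be exercised without either binary installed (`ipv6_ping_argv` is an illustrative name):

```python
import shutil

def ipv6_ping_argv(which=shutil.which):
    """Pick the IPv6 ping invocation available on this system."""
    if which("ping6"):
        return ["ping6", "localhost", "-c", "1"]
    return ["ping", "-6", "localhost", "-c", "1"]
```

Passing a fake `which` makes both branches testable deterministically.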
+19
tools/testing/selftests/bpf/test_verifier.c
··· 13896 13896 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 13897 13897 .result = ACCEPT, 13898 13898 }, 13899 + { 13900 + "calls: ctx read at start of subprog", 13901 + .insns = { 13902 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 13903 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 5), 13904 + BPF_JMP_REG(BPF_JSGT, BPF_REG_0, BPF_REG_0, 0), 13905 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), 13906 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2), 13907 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), 13908 + BPF_EXIT_INSN(), 13909 + BPF_LDX_MEM(BPF_B, BPF_REG_9, BPF_REG_1, 0), 13910 + BPF_MOV64_IMM(BPF_REG_0, 0), 13911 + BPF_EXIT_INSN(), 13912 + }, 13913 + .prog_type = BPF_PROG_TYPE_SOCKET_FILTER, 13914 + .errstr_unpriv = "function calls to other bpf functions are allowed for root only", 13915 + .result_unpriv = REJECT, 13916 + .result = ACCEPT, 13917 + }, 13899 13918 }; 13900 13919 13901 13920 static int probe_filter_length(const struct bpf_insn *fp)
+6
tools/testing/selftests/netfilter/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + # Makefile for netfilter selftests 3 + 4 + TEST_PROGS := nft_trans_stress.sh 5 + 6 + include ../lib.mk
+2
tools/testing/selftests/netfilter/config
··· 1 + CONFIG_NET_NS=y 2 + CONFIG_NF_TABLES_INET=y

+78
tools/testing/selftests/netfilter/nft_trans_stress.sh
··· 1 + #!/bin/bash 2 + # 3 + # This test is for stress-testing the nf_tables config plane path vs. 4 + # packet path processing: Make sure we never release rules that are 5 + # still visible to other cpus. 6 + # 7 + # set -e 8 + 9 + # Kselftest framework requirement - SKIP code is 4. 10 + ksft_skip=4 11 + 12 + testns=testns1 13 + tables="foo bar baz quux" 14 + 15 + nft --version > /dev/null 2>&1 16 + if [ $? -ne 0 ];then 17 + echo "SKIP: Could not run test without nft tool" 18 + exit $ksft_skip 19 + fi 20 + 21 + ip -Version > /dev/null 2>&1 22 + if [ $? -ne 0 ];then 23 + echo "SKIP: Could not run test without ip tool" 24 + exit $ksft_skip 25 + fi 26 + 27 + tmp=$(mktemp) 28 + 29 + for table in $tables; do 30 + echo add table inet "$table" >> "$tmp" 31 + echo flush table inet "$table" >> "$tmp" 32 + 33 + echo "add chain inet $table INPUT { type filter hook input priority 0; }" >> "$tmp" 34 + echo "add chain inet $table OUTPUT { type filter hook output priority 0; }" >> "$tmp" 35 + for c in $(seq 1 400); do 36 + chain=$(printf "chain%03u" "$c") 37 + echo "add chain inet $table $chain" >> "$tmp" 38 + done 39 + 40 + for c in $(seq 1 400); do 41 + chain=$(printf "chain%03u" "$c") 42 + for BASE in INPUT OUTPUT; do 43 + echo "add rule inet $table $BASE counter jump $chain" >> "$tmp" 44 + done 45 + echo "add rule inet $table $chain counter return" >> "$tmp" 46 + done 47 + done 48 + 49 + ip netns add "$testns" 50 + ip -netns "$testns" link set lo up 51 + 52 + lscpu | grep ^CPU\(s\): | ( read cpu cpunum ; 53 + cpunum=$((cpunum-1)) 54 + for i in $(seq 0 $cpunum);do 55 + mask=$(printf 0x%x $((1<<$i))) 56 + ip netns exec "$testns" taskset $mask ping -4 127.0.0.1 -fq > /dev/null & 57 + ip netns exec "$testns" taskset $mask ping -6 ::1 -fq > /dev/null & 58 + done) 59 + 60 + sleep 1 61 + 62 + for i in $(seq 1 10) ; do ip netns exec "$testns" nft -f "$tmp" & done 63 + 64 + for table in $tables;do 65 + randsleep=$((RANDOM%10)) 66 + sleep $randsleep 67 + ip netns exec "$testns" nft 
delete table inet $table 2>/dev/null 68 + done 69 + 70 + randsleep=$((RANDOM%10)) 71 + sleep $randsleep 72 + 73 + pkill -9 ping 74 + 75 + wait 76 + 77 + rm -f "$tmp" 78 + ip netns del "$testns"
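The stress script above pins one flood-ping pair per CPU with `taskset`, computing a single-bit affinity mask per CPU via `printf 0x%x $((1<<i))`. The same mask computation expressed in Python:

```python
def cpu_mask(cpu):
    """Hex affinity mask selecting only the given CPU, as taskset expects."""
    return "0x%x" % (1 << cpu)
```

CPU 0 gets `0x1`, CPU 3 gets `0x8`, CPU 4 gets `0x10`, and so on.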
+18 -3
tools/testing/selftests/powerpc/mm/wild_bctr.c
··· 47 47 return 0; 48 48 } 49 49 50 - #define REG_POISON 0x5a5aUL 51 - #define POISONED_REG(n) ((REG_POISON << 48) | ((n) << 32) | (REG_POISON << 16) | (n)) 50 + #define REG_POISON 0x5a5a 51 + #define POISONED_REG(n) ((((unsigned long)REG_POISON) << 48) | ((n) << 32) | \ 52 + (((unsigned long)REG_POISON) << 16) | (n)) 52 53 53 54 static inline void poison_regs(void) 54 55 { ··· 106 105 } 107 106 } 108 107 108 + #ifdef _CALL_AIXDESC 109 + struct opd { 110 + unsigned long ip; 111 + unsigned long toc; 112 + unsigned long env; 113 + }; 114 + static struct opd bad_opd = { 115 + .ip = BAD_NIP, 116 + }; 117 + #define BAD_FUNC (&bad_opd) 118 + #else 119 + #define BAD_FUNC BAD_NIP 120 + #endif 121 + 109 122 int test_wild_bctr(void) 110 123 { 111 124 int (*func_ptr)(void); ··· 148 133 149 134 poison_regs(); 150 135 151 - func_ptr = (int (*)(void))BAD_NIP; 136 + func_ptr = (int (*)(void))BAD_FUNC; 152 137 func_ptr(); 153 138 154 139 FAIL_IF(1); /* we didn't segv? */
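The wild_bctr.c fix keeps `REG_POISON` a plain int and widens it to `unsigned long` at each shift, so the macro no longer relies on a constant that overflows when `unsigned long` is 32 bits. The 64-bit pattern the corrected macro builds, recomputed in Python:

```python
REG_POISON = 0x5a5a

def poisoned_reg(n):
    """Pattern with 0x5a5a in bits 63:48 and 31:16, and n in bits 47:32 and 15:0."""
    return (REG_POISON << 48) | (n << 32) | (REG_POISON << 16) | n
```

For GPR 1 this yields `0x5a5a00015a5a0001`, making it obvious in a crash dump which register a wild branch consumed.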
+7 -2
tools/testing/selftests/proc/proc-self-map-files-002.c
··· 13 13 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 14 14 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 15 15 */ 16 - /* Test readlink /proc/self/map_files/... with address 0. */ 16 + /* Test readlink /proc/self/map_files/... with minimum address. */ 17 17 #include <errno.h> 18 18 #include <sys/types.h> 19 19 #include <sys/stat.h> ··· 47 47 int main(void) 48 48 { 49 49 const unsigned int PAGE_SIZE = sysconf(_SC_PAGESIZE); 50 + #ifdef __arm__ 51 + unsigned long va = 2 * PAGE_SIZE; 52 + #else 53 + unsigned long va = 0; 54 + #endif 50 55 void *p; 51 56 int fd; 52 57 unsigned long a, b; ··· 60 55 if (fd == -1) 61 56 return 1; 62 57 63 - p = mmap(NULL, PAGE_SIZE, PROT_NONE, MAP_PRIVATE|MAP_FILE|MAP_FIXED, fd, 0); 58 + p = mmap((void *)va, PAGE_SIZE, PROT_NONE, MAP_PRIVATE|MAP_FILE|MAP_FIXED, fd, 0); 64 59 if (p == MAP_FAILED) { 65 60 if (errno == EPERM) 66 61 return 2;
+13 -5
tools/testing/selftests/tc-testing/tdc.py
··· 134 134 (rawout, serr) = proc.communicate() 135 135 136 136 if proc.returncode != 0 and len(serr) > 0: 137 - foutput = serr.decode("utf-8") 137 + foutput = serr.decode("utf-8", errors="ignore") 138 138 else: 139 - foutput = rawout.decode("utf-8") 139 + foutput = rawout.decode("utf-8", errors="ignore") 140 140 141 141 proc.stdout.close() 142 142 proc.stderr.close() ··· 169 169 file=sys.stderr) 170 170 print("\n{} *** Error message: \"{}\"".format(prefix, foutput), 171 171 file=sys.stderr) 172 + print("returncode {}; expected {}".format(proc.returncode, 173 + exit_codes)) 172 174 print("\n{} *** Aborting test run.".format(prefix), file=sys.stderr) 173 175 print("\n\n{} *** stdout ***".format(proc.stdout), file=sys.stderr) 174 176 print("\n\n{} *** stderr ***".format(proc.stderr), file=sys.stderr) ··· 197 195 print('-----> execute stage') 198 196 pm.call_pre_execute() 199 197 (p, procout) = exec_cmd(args, pm, 'execute', tidx["cmdUnderTest"]) 200 - exit_code = p.returncode 198 + if p: 199 + exit_code = p.returncode 200 + else: 201 + exit_code = None 202 + 201 203 pm.call_post_execute() 202 204 203 - if (exit_code != int(tidx["expExitCode"])): 205 + if (exit_code is None or exit_code != int(tidx["expExitCode"])): 204 206 result = False 205 - print("exit:", exit_code, int(tidx["expExitCode"])) 207 + print("exit: {!r}".format(exit_code)) 208 + print("exit: {}".format(int(tidx["expExitCode"]))) 209 + #print("exit: {!r} {}".format(exit_code, int(tidx["expExitCode"]))) 206 210 print(procout) 207 211 else: 208 212 if args.verbose > 0:
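The tdc.py change decodes subprocess output with `errors="ignore"` so that a stray non-UTF-8 byte in a command's output cannot raise `UnicodeDecodeError` and abort the test run. A minimal illustration of the decode behaviour:

```python
# An invalid UTF-8 byte embedded in otherwise-valid output is simply
# dropped instead of crashing the decoder.
raw = b"qdisc added \xff ok"
text = raw.decode("utf-8", errors="ignore")
```

The invalid `\xff` byte disappears from `text`; a strict decode of the same bytes would raise instead.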