Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.20-rc4 into tty-next

We need the gnss fixes in here for some other serdev patches we will be
merging into this branch.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+4251 -2543
+8
CREDITS
···
2138 2138 D: Soundblaster driver fixes, ISAPnP quirk
2139 2139 S: California, USA
2140 2140
2141 + N: Jarkko Lavinen
2142 + E: jarkko.lavinen@nokia.com
2143 + D: OMAP MMC support
2144 +
2141 2145 N: Jonathan Layes
2142 2146 D: ARPD support
2143 2147
···
2203 2199 S: Post Office Box 371
2204 2200 S: North Little Rock, Arkansas 72115
2205 2201 S: USA
2202 +
2203 + N: Christopher Li
2204 + E: sparse@chrisli.org
2205 + D: Sparse maintainer 2009 - 2018
2206 2206
2207 2207 N: Stephan Linz
2208 2208 E: linz@mazet.de
+2
Documentation/admin-guide/kernel-parameters.txt
···
4713 4713 prevent spurious wakeup);
4714 4714 n = USB_QUIRK_DELAY_CTRL_MSG (Device needs a
4715 4715 pause after every control message);
4716 + o = USB_QUIRK_HUB_SLOW_RESET (Hub needs extra
4717 + delay after resetting its port);
4716 4718 Example: quirks=0781:5580:bk,0a5c:5834:gij
4717 4719
4718 4720 usbhid.mousepoll=
+1 -1
Documentation/admin-guide/pm/cpufreq.rst
···
150 150 a governor ``sysfs`` interface to it. Next, the governor is started by
151 151 invoking its ``->start()`` callback.
152 152
153 - That callback it expected to register per-CPU utilization update callbacks for
153 + That callback is expected to register per-CPU utilization update callbacks for
154 154 all of the online CPUs belonging to the given policy with the CPU scheduler.
155 155 The utilization update callbacks will be invoked by the CPU scheduler on
156 156 important events, like task enqueue and dequeue, on every iteration of the
+10 -9
Documentation/admin-guide/security-bugs.rst
···
32 32 The security list is not a disclosure channel. For that, see Coordination
33 33 below.
34 34
35 - Once a robust fix has been developed, our preference is to release the
36 - fix in a timely fashion, treating it no differently than any of the other
37 - thousands of changes and fixes the Linux kernel project releases every
38 - month.
35 + Once a robust fix has been developed, the release process starts. Fixes
36 + for publicly known bugs are released immediately.
39 37
40 - However, at the request of the reporter, we will postpone releasing the
41 - fix for up to 5 business days after the date of the report or after the
42 - embargo has lifted; whichever comes first. The only exception to that
43 - rule is if the bug is publicly known, in which case the preference is to
44 - release the fix as soon as it's available.
38 + Although our preference is to release fixes for publicly undisclosed bugs
39 + as soon as they become available, this may be postponed at the request of
40 + the reporter or an affected party for up to 7 calendar days from the start
41 + of the release process, with an exceptional extension to 14 calendar days
42 + if it is agreed that the criticality of the bug requires more time. The
43 + only valid reason for deferring the publication of a fix is to accommodate
44 + the logistics of QA and large scale rollouts which require release
45 + coordination.
45 46
46 47 Whilst embargoed information may be shared with trusted individuals in
47 48 order to develop a fix, such information will not be published alongside
+41 -11
Documentation/core-api/xarray.rst
···
74 74 new entry and return the previous entry stored at that index. You can
75 75 use :c:func:`xa_erase` instead of calling :c:func:`xa_store` with a
76 76 ``NULL`` entry. There is no difference between an entry that has never
77 - been stored to and one that has most recently had ``NULL`` stored to it.
77 + been stored to, one that has been erased and one that has most recently
78 + had ``NULL`` stored to it.
78 79
79 80 You can conditionally replace an entry at an index by using
80 81 :c:func:`xa_cmpxchg`. Like :c:func:`cmpxchg`, it will only succeed if
···
106 105 indices. Storing into one index may result in the entry retrieved by
107 106 some, but not all of the other indices changing.
108 107
108 + Sometimes you need to ensure that a subsequent call to :c:func:`xa_store`
109 + will not need to allocate memory. The :c:func:`xa_reserve` function
110 + will store a reserved entry at the indicated index. Users of the normal
111 + API will see this entry as containing ``NULL``. If you do not need to
112 + use the reserved entry, you can call :c:func:`xa_release` to remove the
113 + unused entry. If another user has stored to the entry in the meantime,
114 + :c:func:`xa_release` will do nothing; if instead you want the entry to
115 + become ``NULL``, you should use :c:func:`xa_erase`.
116 +
117 + If all entries in the array are ``NULL``, the :c:func:`xa_empty` function
118 + will return ``true``.
119 +
109 120 Finally, you can remove all entries from an XArray by calling
110 121 :c:func:`xa_destroy`. If the XArray entries are pointers, you may wish
111 122 to free the entries first. You can do this by iterating over all present
112 123 entries in the XArray using the :c:func:`xa_for_each` iterator.
113 124
114 - ID assignment
115 - -------------
125 + Allocating XArrays
126 + ------------------
127 +
128 + If you use :c:func:`DEFINE_XARRAY_ALLOC` to define the XArray, or
129 + initialise it by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
130 + the XArray changes to track whether entries are in use or not.
116 131
117 132 You can call :c:func:`xa_alloc` to store the entry at any unused index
118 133 in the XArray. If you need to modify the array from interrupt context,
119 134 you can use :c:func:`xa_alloc_bh` or :c:func:`xa_alloc_irq` to disable
120 - interrupts while allocating the ID. Unlike :c:func:`xa_store`, allocating
121 - a ``NULL`` pointer does not delete an entry. Instead it reserves an
122 - entry like :c:func:`xa_reserve` and you can release it using either
123 - :c:func:`xa_erase` or :c:func:`xa_release`. To use ID assignment, the
124 - XArray must be defined with :c:func:`DEFINE_XARRAY_ALLOC`, or initialised
125 - by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
135 + interrupts while allocating the ID.
136 +
137 + Using :c:func:`xa_store`, :c:func:`xa_cmpxchg` or :c:func:`xa_insert`
138 + will mark the entry as being allocated. Unlike a normal XArray, storing
139 + ``NULL`` will mark the entry as being in use, like :c:func:`xa_reserve`.
140 + To free an entry, use :c:func:`xa_erase` (or :c:func:`xa_release` if
141 + you only want to free the entry if it's ``NULL``).
142 +
143 + You cannot use ``XA_MARK_0`` with an allocating XArray as this mark
144 + is used to track whether an entry is free or not. The other marks are
145 + available for your use.
126 146
127 147 Memory allocation
128 148 -----------------
···
180 158
181 159 Takes xa_lock internally:
182 160 * :c:func:`xa_store`
161 + * :c:func:`xa_store_bh`
162 + * :c:func:`xa_store_irq`
183 163 * :c:func:`xa_insert`
184 164 * :c:func:`xa_erase`
185 165 * :c:func:`xa_erase_bh`
···
191 167 * :c:func:`xa_alloc`
192 168 * :c:func:`xa_alloc_bh`
193 169 * :c:func:`xa_alloc_irq`
170 + * :c:func:`xa_reserve`
171 + * :c:func:`xa_reserve_bh`
172 + * :c:func:`xa_reserve_irq`
194 173 * :c:func:`xa_destroy`
195 174 * :c:func:`xa_set_mark`
196 175 * :c:func:`xa_clear_mark`
···
204 177 * :c:func:`__xa_erase`
205 178 * :c:func:`__xa_cmpxchg`
206 179 * :c:func:`__xa_alloc`
180 + * :c:func:`__xa_reserve`
207 181 * :c:func:`__xa_set_mark`
208 182 * :c:func:`__xa_clear_mark`
209 183
···
262 234 using :c:func:`xa_lock_irqsave` in both the interrupt handler and process
263 235 context, or :c:func:`xa_lock_irq` in process context and :c:func:`xa_lock`
264 236 in the interrupt handler. Some of the more common patterns have helper
265 - functions such as :c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
237 + functions such as :c:func:`xa_store_bh`, :c:func:`xa_store_irq`,
238 + :c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
266 239
267 240 Sometimes you need to protect access to the XArray with a mutex because
268 241 that lock sits above another mutex in the locking hierarchy. That does
···
351 322 - :c:func:`xa_is_zero`
352 323 - Zero entries appear as ``NULL`` through the Normal API, but occupy
353 324 an entry in the XArray which can be used to reserve the index for
354 - future use.
325 + future use. This is used by allocating XArrays for allocated entries
326 + which are ``NULL``.
355 327
356 328 Other internal entries may be added in the future. As far as possible, they
357 329 will be handled by :c:func:`xas_retry`.
+5 -3
Documentation/cpu-freq/cpufreq-stats.txt
···
86 86 This will give a fine grained information about all the CPU frequency
87 87 transitions. The cat output here is a two dimensional matrix, where an entry
88 88 <i,j> (row i, column j) represents the count of number of transitions from
89 - Freq_i to Freq_j. Freq_i is in descending order with increasing rows and
90 - Freq_j is in descending order with increasing columns. The output here also
91 - contains the actual freq values for each row and column for better readability.
89 + Freq_i to Freq_j. Freq_i rows and Freq_j columns follow the sorting order in
90 + which the driver has provided the frequency table initially to the cpufreq core
91 + and so can be sorted (ascending or descending) or unsorted. The output here
92 + also contains the actual freq values for each row and column for better
93 + readability.
92 94
93 95 If the transition table is bigger than PAGE_SIZE, reading this will
94 96 return an -EFBIG error.
-65
Documentation/devicetree/bindings/cpufreq/arm_big_little_dt.txt
···
1 - Generic ARM big LITTLE cpufreq driver's DT glue
2 - -----------------------------------------------
3 -
4 - This is DT specific glue layer for generic cpufreq driver for big LITTLE
5 - systems.
6 -
7 - Both required and optional properties listed below must be defined
8 - under node /cpus/cpu@x. Where x is the first cpu inside a cluster.
9 -
10 - FIXME: Cpus should boot in the order specified in DT and all cpus for a cluster
11 - must be present contiguously. Generic DT driver will check only node 'x' for
12 - cpu:x.
13 -
14 - Required properties:
15 - - operating-points: Refer to Documentation/devicetree/bindings/opp/opp.txt
16 - for details
17 -
18 - Optional properties:
19 - - clock-latency: Specify the possible maximum transition latency for clock,
20 - in unit of nanoseconds.
21 -
22 - Examples:
23 -
24 - cpus {
25 - #address-cells = <1>;
26 - #size-cells = <0>;
27 -
28 - cpu@0 {
29 - compatible = "arm,cortex-a15";
30 - reg = <0>;
31 - next-level-cache = <&L2>;
32 - operating-points = <
33 - /* kHz uV */
34 - 792000 1100000
35 - 396000 950000
36 - 198000 850000
37 - >;
38 - clock-latency = <61036>; /* two CLK32 periods */
39 - };
40 -
41 - cpu@1 {
42 - compatible = "arm,cortex-a15";
43 - reg = <1>;
44 - next-level-cache = <&L2>;
45 - };
46 -
47 - cpu@100 {
48 - compatible = "arm,cortex-a7";
49 - reg = <100>;
50 - next-level-cache = <&L2>;
51 - operating-points = <
52 - /* kHz uV */
53 - 792000 950000
54 - 396000 750000
55 - 198000 450000
56 - >;
57 - clock-latency = <61036>; /* two CLK32 periods */
58 - };
59 -
60 - cpu@101 {
61 - compatible = "arm,cortex-a7";
62 - reg = <101>;
63 - next-level-cache = <&L2>;
64 - };
65 - };
+1 -1
Documentation/devicetree/bindings/net/can/holt_hi311x.txt
···
17 17 reg = <1>;
18 18 clocks = <&clk32m>;
19 19 interrupt-parent = <&gpio4>;
20 - interrupts = <13 IRQ_TYPE_EDGE_RISING>;
20 + interrupts = <13 IRQ_TYPE_LEVEL_HIGH>;
21 21 vdd-supply = <&reg5v0>;
22 22 xceiver-supply = <&reg5v0>;
23 23 };
+18 -10
Documentation/devicetree/bindings/net/can/rcar_can.txt
···
5 5 - compatible: "renesas,can-r8a7743" if CAN controller is a part of R8A7743 SoC.
6 6 "renesas,can-r8a7744" if CAN controller is a part of R8A7744 SoC.
7 7 "renesas,can-r8a7745" if CAN controller is a part of R8A7745 SoC.
8 + "renesas,can-r8a774a1" if CAN controller is a part of R8A774A1 SoC.
8 9 "renesas,can-r8a7778" if CAN controller is a part of R8A7778 SoC.
9 10 "renesas,can-r8a7779" if CAN controller is a part of R8A7779 SoC.
10 11 "renesas,can-r8a7790" if CAN controller is a part of R8A7790 SoC.
···
15 14 "renesas,can-r8a7794" if CAN controller is a part of R8A7794 SoC.
16 15 "renesas,can-r8a7795" if CAN controller is a part of R8A7795 SoC.
17 16 "renesas,can-r8a7796" if CAN controller is a part of R8A7796 SoC.
17 + "renesas,can-r8a77965" if CAN controller is a part of R8A77965 SoC.
18 18 "renesas,rcar-gen1-can" for a generic R-Car Gen1 compatible device.
19 19 "renesas,rcar-gen2-can" for a generic R-Car Gen2 or RZ/G1
20 20 compatible device.
21 - "renesas,rcar-gen3-can" for a generic R-Car Gen3 compatible device.
21 + "renesas,rcar-gen3-can" for a generic R-Car Gen3 or RZ/G2
22 + compatible device.
22 23 When compatible with the generic version, nodes must list the
23 24 SoC-specific version corresponding to the platform first
24 25 followed by the generic version.
25 26
26 27 - reg: physical base address and size of the R-Car CAN register map.
27 28 - interrupts: interrupt specifier for the sole interrupt.
28 - - clocks: phandles and clock specifiers for 3 CAN clock inputs.
29 - - clock-names: 3 clock input name strings: "clkp1", "clkp2", "can_clk".
29 + - clocks: phandles and clock specifiers for 2 CAN clock inputs for RZ/G2
30 + devices.
31 + phandles and clock specifiers for 3 CAN clock inputs for every other
32 + SoC.
33 + - clock-names: 2 clock input name strings for RZ/G2: "clkp1", "can_clk".
34 + 3 clock input name strings for every other SoC: "clkp1", "clkp2",
35 + "can_clk".
30 36 - pinctrl-0: pin control group to be used for this controller.
31 37 - pinctrl-names: must be "default".
32 38
33 - Required properties for "renesas,can-r8a7795" and "renesas,can-r8a7796"
34 - compatible:
35 - In R8A7795 and R8A7796 SoCs, "clkp2" can be CANFD clock. This is a div6 clock
36 - and can be used by both CAN and CAN FD controller at the same time. It needs to
37 - be scaled to maximum frequency if any of these controllers use it. This is done
39 + Required properties for R8A7795, R8A7796 and R8A77965:
40 + For the denoted SoCs, "clkp2" can be CANFD clock. This is a div6 clock and can
41 + be used by both CAN and CAN FD controller at the same time. It needs to be
42 + scaled to maximum frequency if any of these controllers use it. This is done
38 43 using the below properties:
39 44
40 45 - assigned-clocks: phandle of clkp2(CANFD) clock.
···
49 42 Optional properties:
50 43 - renesas,can-clock-select: R-Car CAN Clock Source Select. Valid values are:
51 44 <0x0> (default) : Peripheral clock (clkp1)
52 - <0x1> : Peripheral clock (clkp2)
53 - <0x3> : Externally input clock
45 + <0x1> : Peripheral clock (clkp2) (not supported by
46 + RZ/G2 devices)
47 + <0x3> : External input clock
54 48
55 49 Example
56 50 --------
+1 -1
Documentation/devicetree/bindings/net/dsa/dsa.txt
···
7 7 Current Binding
8 8 ---------------
9 9
10 - Switches are true Linux devices and can be probes by any means. Once
10 + Switches are true Linux devices and can be probed by any means. Once
11 11 probed, they register to the DSA framework, passing a node
12 12 pointer. This node is expected to fulfil the following binding, and
13 13 may contain additional properties as required by the device it is
+1 -10
Documentation/input/event-codes.rst
···
190 190 * REL_WHEEL, REL_HWHEEL:
191 191
192 192 - These codes are used for vertical and horizontal scroll wheels,
193 - respectively. The value is the number of "notches" moved on the wheel, the
194 - physical size of which varies by device. For high-resolution wheels (which
195 - report multiple events for each notch of movement, or do not have notches)
196 - this may be an approximation based on the high-resolution scroll events.
197 -
198 - * REL_WHEEL_HI_RES:
199 -
200 - - If a vertical scroll wheel supports high-resolution scrolling, this code
201 - will be emitted in addition to REL_WHEEL. The value is the (approximate)
202 - distance travelled by the user's finger, in microns.
193 + respectively.
203 194
204 195 EV_ABS
205 196 ------
+1 -1
Documentation/media/uapi/v4l/dev-meta.rst
···
40 40 the desired operation. Both drivers and applications must set the remainder of
41 41 the :c:type:`v4l2_format` structure to 0.
42 42
43 - .. _v4l2-meta-format:
43 + .. c:type:: v4l2_meta_format
44 44
45 45 .. tabularcolumns:: |p{1.4cm}|p{2.2cm}|p{13.9cm}|
46 46
+5
Documentation/media/uapi/v4l/vidioc-g-fmt.rst
···
133 133 - Definition of a data format, see :ref:`pixfmt`, used by SDR
134 134 capture and output devices.
135 135 * -
136 + - struct :c:type:`v4l2_meta_format`
137 + - ``meta``
138 + - Definition of a metadata format, see :ref:`meta-formats`, used by
139 + metadata capture devices.
140 + * -
136 141 - __u8
137 142 - ``raw_data``\ [200]
138 143 - Place holder for future extensions.
+11 -6
Documentation/networking/rxrpc.txt
···
1056 1056
1057 1057 u32 rxrpc_kernel_check_life(struct socket *sock,
1058 1058 struct rxrpc_call *call);
1059 + void rxrpc_kernel_probe_life(struct socket *sock,
1060 + struct rxrpc_call *call);
1059 1061
1060 - This returns a number that is updated when ACKs are received from the peer
1061 - (notably including PING RESPONSE ACKs which we can elicit by sending PING
1062 - ACKs to see if the call still exists on the server). The caller should
1063 - compare the numbers of two calls to see if the call is still alive after
1064 - waiting for a suitable interval.
1062 + The first function returns a number that is updated when ACKs are received
1063 + from the peer (notably including PING RESPONSE ACKs which we can elicit by
1064 + sending PING ACKs to see if the call still exists on the server). The
1065 + caller should compare the numbers of two calls to see if the call is still
1066 + alive after waiting for a suitable interval.
1065 1067
1066 1068 This allows the caller to work out if the server is still contactable and
1067 1069 if the call is still alive on the server whilst waiting for the server to
1068 1070 process a client operation.
1069 1071
1070 - This function may transmit a PING ACK.
1072 + The second function causes a ping ACK to be transmitted to try to provoke
1073 + the peer into responding, which would then cause the value returned by the
1074 + first function to change. Note that this must be called in TASK_RUNNING
1075 + state.
1071 1076
1072 1077 (*) Get reply timestamp.
+36 -17
MAINTAINERS
···
180 180
181 181 8169 10/100/1000 GIGABIT ETHERNET DRIVER
182 182 M: Realtek linux nic maintainers <nic_swsd@realtek.com>
183 + M: Heiner Kallweit <hkallweit1@gmail.com>
183 184 L: netdev@vger.kernel.org
184 185 S: Maintained
185 186 F: drivers/net/ethernet/realtek/r8169.c
···
718 717 F: include/dt-bindings/reset/altr,rst-mgr-a10sr.h
719 718
720 719 ALTERA TRIPLE SPEED ETHERNET DRIVER
721 - M: Vince Bridgers <vbridger@opensource.altera.com>
720 + M: Thor Thayer <thor.thayer@linux.intel.com>
722 721 L: netdev@vger.kernel.org
723 722 L: nios2-dev@lists.rocketboards.org (moderated for non-subscribers)
724 723 S: Maintained
···
3277 3276 F: include/net/caif/
3278 3277 F: net/caif/
3279 3278
3279 + CAKE QDISC
3280 + M: Toke Høiland-Jørgensen <toke@toke.dk>
3281 + L: cake@lists.bufferbloat.net (moderated for non-subscribers)
3282 + S: Maintained
3283 + F: net/sched/sch_cake.c
3284 +
3280 3285 CALGARY x86-64 IOMMU
3281 3286 M: Muli Ben-Yehuda <mulix@mulix.org>
3282 3287 M: Jon Mason <jdmason@kudzu.us>
···
5535 5528 ETHERNET PHY LIBRARY
5536 5529 M: Andrew Lunn <andrew@lunn.ch>
5537 5530 M: Florian Fainelli <f.fainelli@gmail.com>
5531 + M: Heiner Kallweit <hkallweit1@gmail.com>
5538 5532 L: netdev@vger.kernel.org
5539 5533 S: Maintained
5540 5534 F: Documentation/ABI/testing/sysfs-bus-mdio
···
6307 6299
6308 6300 GPIO SUBSYSTEM
6309 6301 M: Linus Walleij <linus.walleij@linaro.org>
6302 + M: Bartosz Golaszewski <bgolaszewski@baylibre.com>
6310 6303 L: linux-gpio@vger.kernel.org
6311 6304 T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git
6312 6305 S: Maintained
···
7445 7436 F: Documentation/fb/intelfb.txt
7446 7437 F: drivers/video/fbdev/intelfb/
7447 7438
7439 + INTEL GPIO DRIVERS
7440 + M: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
7441 + L: linux-gpio@vger.kernel.org
7442 + S: Maintained
7443 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/andy/linux-gpio-intel.git
7444 + F: drivers/gpio/gpio-ich.c
7445 + F: drivers/gpio/gpio-intel-mid.c
7446 + F: drivers/gpio/gpio-lynxpoint.c
7447 + F: drivers/gpio/gpio-merrifield.c
7448 + F: drivers/gpio/gpio-ml-ioh.c
7449 + F: drivers/gpio/gpio-pch.c
7450 + F: drivers/gpio/gpio-sch.c
7451 + F: drivers/gpio/gpio-sodaville.c
7452 +
7448 7453 INTEL GVT-g DRIVERS (Intel GPU Virtualization)
7449 7454 M: Zhenyu Wang <zhenyuw@linux.intel.com>
7450 7455 M: Zhi Wang <zhi.a.wang@intel.com>
···
7468 7445 T: git https://github.com/intel/gvt-linux.git
7469 7446 S: Supported
7470 7447 F: drivers/gpu/drm/i915/gvt/
7471 -
7472 - INTEL PMIC GPIO DRIVER
7473 - R: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
7474 - S: Maintained
7475 - F: drivers/gpio/gpio-*cove.c
7476 - F: drivers/gpio/gpio-msic.c
7477 7448
7478 7449 INTEL HID EVENT DRIVER
7479 7450 M: Alex Hung <alex.hung@canonical.com>
···
7556 7539 S: Supported
7557 7540 F: drivers/platform/x86/intel_menlow.c
7558 7541
7559 - INTEL MERRIFIELD GPIO DRIVER
7560 - M: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
7561 - L: linux-gpio@vger.kernel.org
7562 - S: Maintained
7563 - F: drivers/gpio/gpio-merrifield.c
7564 -
7565 7542 INTEL MIC DRIVERS (mic)
7566 7543 M: Sudeep Dutt <sudeep.dutt@intel.com>
7567 7544 M: Ashutosh Dixit <ashutosh.dixit@intel.com>
···
7587 7576 F: drivers/platform/x86/intel_punit_ipc.c
7588 7577 F: arch/x86/include/asm/intel_pmc_ipc.h
7589 7578 F: arch/x86/include/asm/intel_punit_ipc.h
7579 +
7580 + INTEL PMIC GPIO DRIVERS
7581 + M: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
7582 + S: Maintained
7583 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/andy/linux-gpio-intel.git
7584 + F: drivers/gpio/gpio-*cove.c
7585 + F: drivers/gpio/gpio-msic.c
7590 7586
7591 7587 INTEL MULTIFUNCTION PMIC DEVICE DRIVERS
7592 7588 R: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
···
10826 10808 F: drivers/staging/media/omap4iss/
10827 10809
10828 10810 OMAP MMC SUPPORT
10829 - M: Jarkko Lavinen <jarkko.lavinen@nokia.com>
10811 + M: Aaro Koskinen <aaro.koskinen@iki.fi>
10830 10812 L: linux-omap@vger.kernel.org
10831 - S: Maintained
10813 + S: Odd Fixes
10832 10814 F: drivers/mmc/host/omap.c
10833 10815
10834 10816 OMAP POWER MANAGEMENT SUPPORT
···
11763 11745 PIN CONTROLLER - INTEL
11764 11746 M: Mika Westerberg <mika.westerberg@linux.intel.com>
11765 11747 M: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
11748 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/pinctrl/intel.git
11766 11749 S: Maintained
11767 11750 F: drivers/pinctrl/intel/
···
13996 13977 F: drivers/tty/vcc.c
13997 13978
13998 13979 SPARSE CHECKER
13999 - M: "Christopher Li" <sparse@chrisli.org>
13980 + M: "Luc Van Oostenryck" <luc.vanoostenryck@gmail.com>
14000 13981 L: linux-sparse@vger.kernel.org
14001 13982 W: https://sparse.wiki.kernel.org/
14002 13983 T: git git://git.kernel.org/pub/scm/devel/sparse/sparse.git
14003 - T: git git://git.kernel.org/pub/scm/devel/sparse/chrisl/sparse.git
14004 13984 S: Maintained
14005 13985 F: include/linux/compiler.h
···
14096 14078
14097 14079 STABLE BRANCH
14098 14080 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
14081 + M: Sasha Levin <sashal@kernel.org>
14099 14082 L: stable@vger.kernel.org
14100 14083 S: Supported
14101 14084 F: Documentation/process/stable-kernel-rules.rst
+2 -2
Makefile
···
2 2 VERSION = 4
3 3 PATCHLEVEL = 20
4 4 SUBLEVEL = 0
5 - EXTRAVERSION = -rc2
6 - NAME = "People's Front"
5 + EXTRAVERSION = -rc4
6 + NAME = Shy Crocodile
7 7
8 8 # *DOCUMENTATION*
9 9 # To see a list of typical targets execute "make help"
+1
arch/arm/include/asm/cputype.h
···
111 111 #include <linux/kernel.h>
112 112
113 113 extern unsigned int processor_id;
114 + struct proc_info_list *lookup_processor(u32 midr);
114 115
115 116 #ifdef CONFIG_CPU_CP15
116 117 #define read_cpuid(reg) \
+49 -12
arch/arm/include/asm/proc-fns.h
···
23 23 /*
24 24 * Don't change this structure - ASM code relies on it.
25 25 */
26 - extern struct processor {
26 + struct processor {
27 27 /* MISC
28 28 * get data abort address/flags
29 29 */
···
79 79 unsigned int suspend_size;
80 80 void (*do_suspend)(void *);
81 81 void (*do_resume)(void *);
82 - } processor;
82 + };
83 83
84 84 #ifndef MULTI_CPU
85 + static inline void init_proc_vtable(const struct processor *p)
86 + {
87 + }
88 +
85 89 extern void cpu_proc_init(void);
86 90 extern void cpu_proc_fin(void);
87 91 extern int cpu_do_idle(void);
···
102 98 extern void cpu_do_suspend(void *);
103 99 extern void cpu_do_resume(void *);
104 100 #else
105 - #define cpu_proc_init processor._proc_init
106 - #define cpu_proc_fin processor._proc_fin
107 - #define cpu_reset processor.reset
108 - #define cpu_do_idle processor._do_idle
109 - #define cpu_dcache_clean_area processor.dcache_clean_area
110 - #define cpu_set_pte_ext processor.set_pte_ext
111 - #define cpu_do_switch_mm processor.switch_mm
112 101
113 - /* These three are private to arch/arm/kernel/suspend.c */
114 - #define cpu_do_suspend processor.do_suspend
115 - #define cpu_do_resume processor.do_resume
102 + extern struct processor processor;
103 + #if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
104 + #include <linux/smp.h>
105 + /*
106 + * This can't be a per-cpu variable because we need to access it before
107 + * per-cpu has been initialised. We have a couple of functions that are
108 + * called in a pre-emptible context, and so can't use smp_processor_id()
109 + * there, hence PROC_TABLE(). We insist in init_proc_vtable() that the
110 + * function pointers for these are identical across all CPUs.
111 + */
112 + extern struct processor *cpu_vtable[];
113 + #define PROC_VTABLE(f) cpu_vtable[smp_processor_id()]->f
114 + #define PROC_TABLE(f) cpu_vtable[0]->f
115 + static inline void init_proc_vtable(const struct processor *p)
116 + {
117 + unsigned int cpu = smp_processor_id();
118 + *cpu_vtable[cpu] = *p;
119 + WARN_ON_ONCE(cpu_vtable[cpu]->dcache_clean_area !=
120 + cpu_vtable[0]->dcache_clean_area);
121 + WARN_ON_ONCE(cpu_vtable[cpu]->set_pte_ext !=
122 + cpu_vtable[0]->set_pte_ext);
123 + }
124 + #else
125 + #define PROC_VTABLE(f) processor.f
126 + #define PROC_TABLE(f) processor.f
127 + static inline void init_proc_vtable(const struct processor *p)
128 + {
129 + processor = *p;
130 + }
131 + #endif
132 +
133 + #define cpu_proc_init PROC_VTABLE(_proc_init)
134 + #define cpu_check_bugs PROC_VTABLE(check_bugs)
135 + #define cpu_proc_fin PROC_VTABLE(_proc_fin)
136 + #define cpu_reset PROC_VTABLE(reset)
137 + #define cpu_do_idle PROC_VTABLE(_do_idle)
138 + #define cpu_dcache_clean_area PROC_TABLE(dcache_clean_area)
139 + #define cpu_set_pte_ext PROC_TABLE(set_pte_ext)
140 + #define cpu_do_switch_mm PROC_VTABLE(switch_mm)
141 +
142 + /* These two are private to arch/arm/kernel/suspend.c */
143 + #define cpu_do_suspend PROC_VTABLE(do_suspend)
144 + #define cpu_do_resume PROC_VTABLE(do_resume)
116 145 #endif
117 146
118 147 extern void cpu_resume(void);
+2 -2
arch/arm/kernel/bugs.c
···
6 6 void check_other_bugs(void)
7 7 {
8 8 #ifdef MULTI_CPU
9 - if (processor.check_bugs)
10 - processor.check_bugs();
9 + if (cpu_check_bugs)
10 + cpu_check_bugs();
11 11 #endif
12 12 }
13 13
+3 -3
arch/arm/kernel/head-common.S
···
145 145 #endif
146 146 .size __mmap_switched_data, . - __mmap_switched_data
147 147
148 + __FINIT
149 + .text
150 +
148 151 /*
149 152 * This provides a C-API version of __lookup_processor_type
150 153 */
···
158 155 mov r0, r5
159 156 ldmfd sp!, {r4 - r6, r9, pc}
160 157 ENDPROC(lookup_processor_type)
161 -
162 - __FINIT
163 - .text
164 158
165 159 /*
166 160 * Read processor ID register (CP#15, CR0), and look up in the linker-built
+27 -17
arch/arm/kernel/setup.c
···
114 114
115 115 #ifdef MULTI_CPU
116 116 struct processor processor __ro_after_init;
117 + #if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
118 + struct processor *cpu_vtable[NR_CPUS] = {
119 + [0] = &processor,
120 + };
121 + #endif
117 122 #endif
118 123 #ifdef MULTI_TLB
119 124 struct cpu_tlb_fns cpu_tlb __ro_after_init;
···
671 666 }
672 667 #endif
673 668
669 + /*
670 + * locate processor in the list of supported processor types. The linker
671 + * builds this table for us from the entries in arch/arm/mm/proc-*.S
672 + */
673 + struct proc_info_list *lookup_processor(u32 midr)
674 + {
675 + struct proc_info_list *list = lookup_processor_type(midr);
676 +
677 + if (!list) {
678 + pr_err("CPU%u: configuration botched (ID %08x), CPU halted\n",
679 + smp_processor_id(), midr);
680 + while (1)
681 + /* can't use cpu_relax() here as it may require MMU setup */;
682 + }
683 +
684 + return list;
685 + }
686 +
674 687 static void __init setup_processor(void)
675 688 {
676 - struct proc_info_list *list;
677 -
678 - /*
679 - * locate processor in the list of supported processor
680 - * types. The linker builds this table for us from the
681 - * entries in arch/arm/mm/proc-*.S
682 - */
683 - list = lookup_processor_type(read_cpuid_id());
684 - if (!list) {
685 - pr_err("CPU configuration botched (ID %08x), unable to continue.\n",
686 - read_cpuid_id());
687 - while (1);
688 - }
689 + unsigned int midr = read_cpuid_id();
690 + struct proc_info_list *list = lookup_processor(midr);
689 691
690 692 cpu_name = list->cpu_name;
691 693 __cpu_architecture = __get_cpu_architecture();
692 694
693 - #ifdef MULTI_CPU
694 - processor = *list->proc;
695 - #endif
695 + init_proc_vtable(list->proc);
696 696 #ifdef MULTI_TLB
697 697 cpu_tlb = *list->tlb;
698 698 #endif
···
709 699 #endif
710 700
711 701 pr_info("CPU: %s [%08x] revision %d (ARMv%s), cr=%08lx\n",
712 - cpu_name, read_cpuid_id(), read_cpuid_id() & 15,
702 + list->cpu_name, midr, midr & 15,
713 703 proc_arch[cpu_architecture()], get_cr());
714 704
715 705 snprintf(init_utsname()->machine, __NEW_UTS_LEN + 1, "%s%c",
+31
arch/arm/kernel/smp.c
···
42 42 #include <asm/mmu_context.h>
43 43 #include <asm/pgtable.h>
44 44 #include <asm/pgalloc.h>
45 + #include <asm/procinfo.h>
45 46 #include <asm/processor.h>
46 47 #include <asm/sections.h>
47 48 #include <asm/tlbflush.h>
···
103 102 #endif
104 103 }
105 104
105 + #if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
106 + static int secondary_biglittle_prepare(unsigned int cpu)
107 + {
108 + if (!cpu_vtable[cpu])
109 + cpu_vtable[cpu] = kzalloc(sizeof(*cpu_vtable[cpu]), GFP_KERNEL);
110 +
111 + return cpu_vtable[cpu] ? 0 : -ENOMEM;
112 + }
113 +
114 + static void secondary_biglittle_init(void)
115 + {
116 + init_proc_vtable(lookup_processor(read_cpuid_id())->proc);
117 + }
118 + #else
119 + static int secondary_biglittle_prepare(unsigned int cpu)
120 + {
121 + return 0;
122 + }
123 +
124 + static void secondary_biglittle_init(void)
125 + {
126 + }
127 + #endif
128 +
106 129 int __cpu_up(unsigned int cpu, struct task_struct *idle)
107 130 {
108 131 int ret;
109 132
110 133 if (!smp_ops.smp_boot_secondary)
111 134 return -ENOSYS;
135 +
136 + ret = secondary_biglittle_prepare(cpu);
137 + if (ret)
138 + return ret;
112 139
113 140 /*
114 141 * We need to tell the secondary core where to find
···
387 358 {
388 359 struct mm_struct *mm = &init_mm;
389 360 unsigned int cpu;
361 +
362 + secondary_biglittle_init();
390 363
391 364 /*
392 365 * The identity mapping is uncached (strongly ordered), so
+53 -58
arch/arm/mach-omap2/display.c
···
209 209
210 210 return 0;
211 211 }
212 - #else
213 - static inline int omapdss_init_fbdev(void)
212 +
213 + static const char * const omapdss_compat_names[] __initconst = {
214 + "ti,omap2-dss",
215 + "ti,omap3-dss",
216 + "ti,omap4-dss",
217 + "ti,omap5-dss",
218 + "ti,dra7-dss",
219 + };
220 +
221 + static struct device_node * __init omapdss_find_dss_of_node(void)
214 222 {
215 - return 0;
223 + struct device_node *node;
224 + int i;
225 +
226 + for (i = 0; i < ARRAY_SIZE(omapdss_compat_names); ++i) {
227 + node = of_find_compatible_node(NULL, NULL,
228 + omapdss_compat_names[i]);
229 + if (node)
230 + return node;
231 + }
232 +
233 + return NULL;
216 234 }
235 +
236 + static int __init omapdss_init_of(void)
237 + {
238 + int r;
239 + struct device_node *node;
240 + struct platform_device *pdev;
241 +
242 + /* only create dss helper devices if dss is enabled in the .dts */
243 +
244 + node = omapdss_find_dss_of_node();
245 + if (!node)
246 + return 0;
247 +
248 + if (!of_device_is_available(node))
249 + return 0;
250 +
251 + pdev = of_find_device_by_node(node);
252 +
253 + if (!pdev) {
254 + pr_err("Unable to find DSS platform device\n");
255 + return -ENODEV;
256 + }
257 +
258 + r = of_platform_populate(node, NULL, NULL, &pdev->dev);
259 + if (r) {
260 + pr_err("Unable to populate DSS submodule devices\n");
261 + return r;
262 + }
263 +
264 + return omapdss_init_fbdev();
265 + }
266 + omap_device_initcall(omapdss_init_of);
217 267 #endif /* CONFIG_FB_OMAP2 */
218 268
219 269 static void dispc_disable_outputs(void)
···
411 361
412 362 return r;
413 363 }
414 -
415 - static const char * const omapdss_compat_names[] __initconst = {
416 - "ti,omap2-dss",
417 - "ti,omap3-dss",
418 - "ti,omap4-dss",
419 - "ti,omap5-dss",
420 - "ti,dra7-dss",
421 - };
422 -
423 - static struct device_node * __init omapdss_find_dss_of_node(void)
424 - {
425 - struct device_node *node;
426 - int i;
427 -
428 - for (i = 0; i < ARRAY_SIZE(omapdss_compat_names); ++i) {
429 - node = of_find_compatible_node(NULL, NULL,
430 - omapdss_compat_names[i]);
431 - if (node)
432 - return node;
433 - }
434 -
435 - return NULL;
436 - }
437 -
438 - static int __init omapdss_init_of(void)
439 - {
440 - int r;
441 - struct device_node *node;
442 - struct platform_device *pdev;
443 -
444 - /* only create dss helper devices if dss is enabled in the .dts */
445 -
446 - node = omapdss_find_dss_of_node();
447 - if (!node)
448 - return 0;
449 -
450 - if (!of_device_is_available(node))
451 - return 0;
452 -
453 - pdev = of_find_device_by_node(node);
454 -
455 - if (!pdev) {
456 - pr_err("Unable to find DSS platform device\n");
457 - return -ENODEV;
458 - }
459 -
460 - r = of_platform_populate(node, NULL, NULL, &pdev->dev);
461 - if (r) {
462 - pr_err("Unable to populate DSS submodule devices\n");
463 - return r;
464 - }
465 -
466 - return omapdss_init_fbdev();
467 - }
468 - omap_device_initcall(omapdss_init_of);
+2 -15
arch/arm/mm/proc-v7-bugs.c
··· 52 52 case ARM_CPU_PART_CORTEX_A17: 53 53 case ARM_CPU_PART_CORTEX_A73: 54 54 case ARM_CPU_PART_CORTEX_A75: 55 - if (processor.switch_mm != cpu_v7_bpiall_switch_mm) 56 - goto bl_error; 57 55 per_cpu(harden_branch_predictor_fn, cpu) = 58 56 harden_branch_predictor_bpiall; 59 57 spectre_v2_method = "BPIALL"; ··· 59 61 60 62 case ARM_CPU_PART_CORTEX_A15: 61 63 case ARM_CPU_PART_BRAHMA_B15: 62 - if (processor.switch_mm != cpu_v7_iciallu_switch_mm) 63 - goto bl_error; 64 64 per_cpu(harden_branch_predictor_fn, cpu) = 65 65 harden_branch_predictor_iciallu; 66 66 spectre_v2_method = "ICIALLU"; ··· 84 88 ARM_SMCCC_ARCH_WORKAROUND_1, &res); 85 89 if ((int)res.a0 != 0) 86 90 break; 87 - if (processor.switch_mm != cpu_v7_hvc_switch_mm && cpu) 88 - goto bl_error; 89 91 per_cpu(harden_branch_predictor_fn, cpu) = 90 92 call_hvc_arch_workaround_1; 91 - processor.switch_mm = cpu_v7_hvc_switch_mm; 93 + cpu_do_switch_mm = cpu_v7_hvc_switch_mm; 92 94 spectre_v2_method = "hypervisor"; 93 95 break; 94 96 ··· 95 101 ARM_SMCCC_ARCH_WORKAROUND_1, &res); 96 102 if ((int)res.a0 != 0) 97 103 break; 98 - if (processor.switch_mm != cpu_v7_smc_switch_mm && cpu) 99 - goto bl_error; 100 104 per_cpu(harden_branch_predictor_fn, cpu) = 101 105 call_smc_arch_workaround_1; 102 - processor.switch_mm = cpu_v7_smc_switch_mm; 106 + cpu_do_switch_mm = cpu_v7_smc_switch_mm; 103 107 spectre_v2_method = "firmware"; 104 108 break; 105 109 ··· 111 119 if (spectre_v2_method) 112 120 pr_info("CPU%u: Spectre v2: using %s workaround\n", 113 121 smp_processor_id(), spectre_v2_method); 114 - return; 115 - 116 - bl_error: 117 - pr_err("CPU%u: Spectre v2: incorrect context switching function, system vulnerable\n", 118 - cpu); 119 122 } 120 123 #else 121 124 static void cpu_v7_spectre_init(void)
+1 -1
arch/arm/vfp/vfpmodule.c
··· 573 573 */ 574 574 ufp_exc->fpexc = hwstate->fpexc; 575 575 ufp_exc->fpinst = hwstate->fpinst; 576 - ufp_exc->fpinst2 = ufp_exc->fpinst2; 576 + ufp_exc->fpinst2 = hwstate->fpinst2; 577 577 578 578 /* Ensure that VFP is disabled. */ 579 579 vfp_flush_hwstate(thread);
+2 -2
arch/arm64/include/asm/sysreg.h
··· 468 468 SCTLR_ELx_SA | SCTLR_ELx_I | SCTLR_ELx_WXN | \ 469 469 SCTLR_ELx_DSSBS | ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0) 470 470 471 - #if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffffffffffff 471 + #if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffffffffffffUL 472 472 #error "Inconsistent SCTLR_EL2 set/clear bits" 473 473 #endif 474 474 ··· 509 509 SCTLR_EL1_UMA | SCTLR_ELx_WXN | ENDIAN_CLEAR_EL1 |\ 510 510 SCTLR_ELx_DSSBS | SCTLR_EL1_NTWI | SCTLR_EL1_RES0) 511 511 512 - #if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffff 512 + #if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffffUL 513 513 #error "Inconsistent SCTLR_EL1 set/clear bits" 514 514 #endif 515 515
+1 -1
arch/arm64/kernel/cpufeature.c
··· 1333 1333 .cpu_enable = cpu_enable_hw_dbm, 1334 1334 }, 1335 1335 #endif 1336 - #ifdef CONFIG_ARM64_SSBD 1337 1336 { 1338 1337 .desc = "CRC32 instructions", 1339 1338 .capability = ARM64_HAS_CRC32, ··· 1342 1343 .field_pos = ID_AA64ISAR0_CRC32_SHIFT, 1343 1344 .min_field_value = 1, 1344 1345 }, 1346 + #ifdef CONFIG_ARM64_SSBD 1345 1347 { 1346 1348 .desc = "Speculative Store Bypassing Safe (SSBS)", 1347 1349 .capability = ARM64_SSBS,
+1
arch/arm64/kernel/setup.c
··· 313 313 arm64_memblock_init(); 314 314 315 315 paging_init(); 316 + efi_apply_persistent_mem_reservations(); 316 317 317 318 acpi_table_upgrade(); 318 319
+1
arch/mips/configs/cavium_octeon_defconfig
··· 140 140 CONFIG_RTC_DRV_DS1307=y 141 141 CONFIG_STAGING=y 142 142 CONFIG_OCTEON_ETHERNET=y 143 + CONFIG_OCTEON_USB=y 143 144 # CONFIG_IOMMU_SUPPORT is not set 144 145 CONFIG_RAS=y 145 146 CONFIG_EXT4_FS=y
+1
arch/mips/kernel/setup.c
··· 794 794 795 795 /* call board setup routine */ 796 796 plat_mem_setup(); 797 + memblock_set_bottom_up(true); 797 798 798 799 /* 799 800 * Make sure all kernel memory is in the maps. The "UP" and
+1 -2
arch/mips/kernel/traps.c
··· 2260 2260 unsigned long size = 0x200 + VECTORSPACING*64; 2261 2261 phys_addr_t ebase_pa; 2262 2262 2263 - memblock_set_bottom_up(true); 2264 2263 ebase = (unsigned long) 2265 2264 memblock_alloc_from(size, 1 << fls(size), 0); 2266 - memblock_set_bottom_up(false); 2267 2265 2268 2266 /* 2269 2267 * Try to ensure ebase resides in KSeg0 if possible. ··· 2305 2307 if (board_ebase_setup) 2306 2308 board_ebase_setup(); 2307 2309 per_cpu_trap_init(true); 2310 + memblock_set_bottom_up(false); 2308 2311 2309 2312 /* 2310 2313 * Copy the generic exception handlers to their final destination.
+2 -10
arch/mips/loongson64/loongson-3/numa.c
··· 231 231 cpumask_clear(&__node_data[(node)]->cpumask); 232 232 } 233 233 } 234 + max_low_pfn = PHYS_PFN(memblock_end_of_DRAM()); 235 + 234 236 for (cpu = 0; cpu < loongson_sysconf.nr_cpus; cpu++) { 235 237 node = cpu / loongson_sysconf.cores_per_node; 236 238 if (node >= num_online_nodes()) ··· 250 248 251 249 void __init paging_init(void) 252 250 { 253 - unsigned node; 254 251 unsigned long zones_size[MAX_NR_ZONES] = {0, }; 255 252 256 253 pagetable_init(); 257 - 258 - for_each_online_node(node) { 259 - unsigned long start_pfn, end_pfn; 260 - 261 - get_pfn_range_for_nid(node, &start_pfn, &end_pfn); 262 - 263 - if (end_pfn > max_low_pfn) 264 - max_low_pfn = end_pfn; 265 - } 266 254 #ifdef CONFIG_ZONE_DMA32 267 255 zones_size[ZONE_DMA32] = MAX_DMA32_PFN; 268 256 #endif
+1 -10
arch/mips/sgi-ip27/ip27-memory.c
··· 435 435 436 436 mlreset(); 437 437 szmem(); 438 + max_low_pfn = PHYS_PFN(memblock_end_of_DRAM()); 438 439 439 440 for (node = 0; node < MAX_COMPACT_NODES; node++) { 440 441 if (node_online(node)) { ··· 456 455 void __init paging_init(void) 457 456 { 458 457 unsigned long zones_size[MAX_NR_ZONES] = {0, }; 459 - unsigned node; 460 458 461 459 pagetable_init(); 462 - 463 - for_each_online_node(node) { 464 - unsigned long start_pfn, end_pfn; 465 - 466 - get_pfn_range_for_nid(node, &start_pfn, &end_pfn); 467 - 468 - if (end_pfn > max_low_pfn) 469 - max_low_pfn = end_pfn; 470 - } 471 460 zones_size[ZONE_NORMAL] = max_low_pfn; 472 461 free_area_init_nodes(zones_size); 473 462 }
+2 -2
arch/parisc/include/asm/spinlock.h
··· 37 37 volatile unsigned int *a; 38 38 39 39 a = __ldcw_align(x); 40 - /* Release with ordered store. */ 41 - __asm__ __volatile__("stw,ma %0,0(%1)" : : "r"(1), "r"(a) : "memory"); 40 + mb(); 41 + *a = 1; 42 42 } 43 43 44 44 static inline int arch_spin_trylock(arch_spinlock_t *x)
+8 -4
arch/parisc/kernel/syscall.S
··· 640 640 sub,<> %r28, %r25, %r0 641 641 2: stw %r24, 0(%r26) 642 642 /* Free lock */ 643 - stw,ma %r20, 0(%sr2,%r20) 643 + sync 644 + stw %r20, 0(%sr2,%r20) 644 645 #if ENABLE_LWS_DEBUG 645 646 /* Clear thread register indicator */ 646 647 stw %r0, 4(%sr2,%r20) ··· 655 654 3: 656 655 /* Error occurred on load or store */ 657 656 /* Free lock */ 658 - stw,ma %r20, 0(%sr2,%r20) 657 + sync 658 + stw %r20, 0(%sr2,%r20) 659 659 #if ENABLE_LWS_DEBUG 660 660 stw %r0, 4(%sr2,%r20) 661 661 #endif ··· 857 855 858 856 cas2_end: 859 857 /* Free lock */ 860 - stw,ma %r20, 0(%sr2,%r20) 858 + sync 859 + stw %r20, 0(%sr2,%r20) 861 860 /* Enable interrupts */ 862 861 ssm PSW_SM_I, %r0 863 862 /* Return to userspace, set no error */ ··· 868 865 22: 869 866 /* Error occurred on load or store */ 870 867 /* Free lock */ 871 - stw,ma %r20, 0(%sr2,%r20) 868 + sync 869 + stw %r20, 0(%sr2,%r20) 872 870 ssm PSW_SM_I, %r0 873 871 ldo 1(%r0),%r28 874 872 b lws_exit
+7 -13
arch/powerpc/include/asm/io.h
··· 268 268 * their hooks, a bitfield is reserved for use by the platform near the 269 269 * top of MMIO addresses (not PIO, those have to cope the hard way). 270 270 * 271 - * This bit field is 12 bits and is at the top of the IO virtual 272 - * addresses PCI_IO_INDIRECT_TOKEN_MASK. 271 + * The highest address in the kernel virtual space are: 273 272 * 274 - * The kernel virtual space is thus: 273 + * d0003fffffffffff # with Hash MMU 274 + * c00fffffffffffff # with Radix MMU 275 275 * 276 - * 0xD000000000000000 : vmalloc 277 - * 0xD000080000000000 : PCI PHB IO space 278 - * 0xD000080080000000 : ioremap 279 - * 0xD0000fffffffffff : end of ioremap region 280 - * 281 - * Since the top 4 bits are reserved as the region ID, we use thus 282 - * the next 12 bits and keep 4 bits available for the future if the 283 - * virtual address space is ever to be extended. 276 + * The top 4 bits are reserved as the region ID on hash, leaving us 8 bits 277 + * that can be used for the field. 284 278 * 285 279 * The direct IO mapping operations will then mask off those bits 286 280 * before doing the actual access, though that only happen when ··· 286 292 */ 287 293 288 294 #ifdef CONFIG_PPC_INDIRECT_MMIO 289 - #define PCI_IO_IND_TOKEN_MASK 0x0fff000000000000ul 290 - #define PCI_IO_IND_TOKEN_SHIFT 48 295 + #define PCI_IO_IND_TOKEN_SHIFT 52 296 + #define PCI_IO_IND_TOKEN_MASK (0xfful << PCI_IO_IND_TOKEN_SHIFT) 291 297 #define PCI_FIX_ADDR(addr) \ 292 298 ((PCI_IO_ADDR)(((unsigned long)(addr)) & ~PCI_IO_IND_TOKEN_MASK)) 293 299 #define PCI_GET_ADDR_TOKEN(addr) \
+2
arch/powerpc/include/asm/ppc-opcode.h
··· 493 493 __PPC_RS(t) | __PPC_RA0(a) | __PPC_RB(b)) 494 494 #define PPC_SLBFEE_DOT(t, b) stringify_in_c(.long PPC_INST_SLBFEE | \ 495 495 __PPC_RT(t) | __PPC_RB(b)) 496 + #define __PPC_SLBFEE_DOT(t, b) stringify_in_c(.long PPC_INST_SLBFEE | \ 497 + ___PPC_RT(t) | ___PPC_RB(b)) 496 498 #define PPC_ICBT(c,a,b) stringify_in_c(.long PPC_INST_ICBT | \ 497 499 __PPC_CT(c) | __PPC_RA0(a) | __PPC_RB(b)) 498 500 /* PASemi instructions */
+1
arch/powerpc/include/asm/ptrace.h
··· 54 54 55 55 #ifdef CONFIG_PPC64 56 56 unsigned long ppr; 57 + unsigned long __pad; /* Maintain 16 byte interrupt stack alignment */ 57 58 #endif 58 59 }; 59 60 #endif
+2
arch/powerpc/kernel/setup_64.c
··· 636 636 { 637 637 unsigned long pa; 638 638 639 + BUILD_BUG_ON(STACK_INT_FRAME_SIZE % 16); 640 + 639 641 pa = memblock_alloc_base_nid(THREAD_SIZE, THREAD_SIZE, limit, 640 642 early_cpu_to_node(cpu), MEMBLOCK_NONE); 641 643 if (!pa) {
+6 -2
arch/powerpc/kvm/trace.h
··· 6 6 7 7 #undef TRACE_SYSTEM 8 8 #define TRACE_SYSTEM kvm 9 - #define TRACE_INCLUDE_PATH . 10 - #define TRACE_INCLUDE_FILE trace 11 9 12 10 /* 13 11 * Tracepoint for guest mode entry. ··· 118 120 #endif /* _TRACE_KVM_H */ 119 121 120 122 /* This part must be outside protection */ 123 + #undef TRACE_INCLUDE_PATH 124 + #undef TRACE_INCLUDE_FILE 125 + 126 + #define TRACE_INCLUDE_PATH . 127 + #define TRACE_INCLUDE_FILE trace 128 + 121 129 #include <trace/define_trace.h>
+7 -2
arch/powerpc/kvm/trace_booke.h
··· 6 6 7 7 #undef TRACE_SYSTEM 8 8 #define TRACE_SYSTEM kvm_booke 9 - #define TRACE_INCLUDE_PATH . 10 - #define TRACE_INCLUDE_FILE trace_booke 11 9 12 10 #define kvm_trace_symbol_exit \ 13 11 {0, "CRITICAL"}, \ ··· 216 218 #endif 217 219 218 220 /* This part must be outside protection */ 221 + 222 + #undef TRACE_INCLUDE_PATH 223 + #undef TRACE_INCLUDE_FILE 224 + 225 + #define TRACE_INCLUDE_PATH . 226 + #define TRACE_INCLUDE_FILE trace_booke 227 + 219 228 #include <trace/define_trace.h>
+7 -2
arch/powerpc/kvm/trace_hv.h
··· 9 9 10 10 #undef TRACE_SYSTEM 11 11 #define TRACE_SYSTEM kvm_hv 12 - #define TRACE_INCLUDE_PATH . 13 - #define TRACE_INCLUDE_FILE trace_hv 14 12 15 13 #define kvm_trace_symbol_hcall \ 16 14 {H_REMOVE, "H_REMOVE"}, \ ··· 495 497 #endif /* _TRACE_KVM_HV_H */ 496 498 497 499 /* This part must be outside protection */ 500 + 501 + #undef TRACE_INCLUDE_PATH 502 + #undef TRACE_INCLUDE_FILE 503 + 504 + #define TRACE_INCLUDE_PATH . 505 + #define TRACE_INCLUDE_FILE trace_hv 506 + 498 507 #include <trace/define_trace.h>
+7 -2
arch/powerpc/kvm/trace_pr.h
··· 8 8 9 9 #undef TRACE_SYSTEM 10 10 #define TRACE_SYSTEM kvm_pr 11 - #define TRACE_INCLUDE_PATH . 12 - #define TRACE_INCLUDE_FILE trace_pr 13 11 14 12 TRACE_EVENT(kvm_book3s_reenter, 15 13 TP_PROTO(int r, struct kvm_vcpu *vcpu), ··· 255 257 #endif /* _TRACE_KVM_H */ 256 258 257 259 /* This part must be outside protection */ 260 + 261 + #undef TRACE_INCLUDE_PATH 262 + #undef TRACE_INCLUDE_FILE 263 + 264 + #define TRACE_INCLUDE_PATH . 265 + #define TRACE_INCLUDE_FILE trace_pr 266 + 258 267 #include <trace/define_trace.h>
+1 -1
arch/powerpc/mm/numa.c
··· 1178 1178 1179 1179 switch (rc) { 1180 1180 case H_FUNCTION: 1181 - printk(KERN_INFO 1181 + printk_once(KERN_INFO 1182 1182 "VPHN is not supported. Disabling polling...\n"); 1183 1183 stop_topology_update(); 1184 1184 break;
+14 -21
arch/powerpc/mm/slb.c
··· 19 19 #include <asm/mmu.h> 20 20 #include <asm/mmu_context.h> 21 21 #include <asm/paca.h> 22 + #include <asm/ppc-opcode.h> 22 23 #include <asm/cputable.h> 23 24 #include <asm/cacheflush.h> 24 25 #include <asm/smp.h> ··· 59 58 return __mk_vsid_data(get_kernel_vsid(ea, ssize), ssize, flags); 60 59 } 61 60 62 - static void assert_slb_exists(unsigned long ea) 61 + static void assert_slb_presence(bool present, unsigned long ea) 63 62 { 64 63 #ifdef CONFIG_DEBUG_VM 65 64 unsigned long tmp; 66 65 67 66 WARN_ON_ONCE(mfmsr() & MSR_EE); 68 67 69 - asm volatile("slbfee. %0, %1" : "=r"(tmp) : "r"(ea) : "cr0"); 70 - WARN_ON(tmp == 0); 71 - #endif 72 - } 68 + if (!cpu_has_feature(CPU_FTR_ARCH_206)) 69 + return; 73 70 74 - static void assert_slb_notexists(unsigned long ea) 75 - { 76 - #ifdef CONFIG_DEBUG_VM 77 - unsigned long tmp; 71 + asm volatile(__PPC_SLBFEE_DOT(%0, %1) : "=r"(tmp) : "r"(ea) : "cr0"); 78 72 79 - WARN_ON_ONCE(mfmsr() & MSR_EE); 80 - 81 - asm volatile("slbfee. %0, %1" : "=r"(tmp) : "r"(ea) : "cr0"); 82 - WARN_ON(tmp != 0); 73 + WARN_ON(present == (tmp == 0)); 83 74 #endif 84 75 } 85 76 ··· 107 114 */ 108 115 slb_shadow_update(ea, ssize, flags, index); 109 116 110 - assert_slb_notexists(ea); 117 + assert_slb_presence(false, ea); 111 118 asm volatile("slbmte %0,%1" : 112 119 : "r" (mk_vsid_data(ea, ssize, flags)), 113 120 "r" (mk_esid_data(ea, ssize, index)) ··· 130 137 "r" (be64_to_cpu(p->save_area[index].esid))); 131 138 } 132 139 133 - assert_slb_exists(local_paca->kstack); 140 + assert_slb_presence(true, local_paca->kstack); 134 141 } 135 142 136 143 /* ··· 178 185 :: "r" (be64_to_cpu(p->save_area[KSTACK_INDEX].vsid)), 179 186 "r" (be64_to_cpu(p->save_area[KSTACK_INDEX].esid)) 180 187 : "memory"); 181 - assert_slb_exists(get_paca()->kstack); 188 + assert_slb_presence(true, get_paca()->kstack); 182 189 183 190 get_paca()->slb_cache_ptr = 0; 184 191 ··· 436 443 ea = (unsigned long) 437 444 get_paca()->slb_cache[i] << SID_SHIFT; 438 445 /* 439 - * Could assert_slb_exists here, but hypervisor 440 - * or machine check could have come in and 441 - * removed the entry at this point. 446 + * Could assert_slb_presence(true) here, but 447 + * hypervisor or machine check could have come 448 + * in and removed the entry at this point. 442 449 */ 443 450 444 451 slbie_data = ea; ··· 669 676 * User preloads should add isync afterwards in case the kernel 670 677 * accesses user memory before it returns to userspace with rfid. 671 678 */ 672 - assert_slb_notexists(ea); 679 + assert_slb_presence(false, ea); 673 680 asm volatile("slbmte %0, %1" : : "r" (vsid_data), "r" (esid_data)); 674 681 675 682 barrier(); ··· 708 715 return -EFAULT; 709 716 710 717 if (ea < H_VMALLOC_END) 711 - flags = get_paca()->vmalloc_sllp; 718 + flags = local_paca->vmalloc_sllp; 712 719 else 713 720 flags = SLB_VSID_KERNEL | mmu_psize_defs[mmu_io_psize].sllp; 714 721 } else {
+4 -60
arch/powerpc/platforms/powernv/npu-dma.c
··· 102 102 } 103 103 EXPORT_SYMBOL(pnv_pci_get_npu_dev); 104 104 105 - #define NPU_DMA_OP_UNSUPPORTED() \ 106 - dev_err_once(dev, "%s operation unsupported for NVLink devices\n", \ 107 - __func__) 108 - 109 - static void *dma_npu_alloc(struct device *dev, size_t size, 110 - dma_addr_t *dma_handle, gfp_t flag, 111 - unsigned long attrs) 112 - { 113 - NPU_DMA_OP_UNSUPPORTED(); 114 - return NULL; 115 - } 116 - 117 - static void dma_npu_free(struct device *dev, size_t size, 118 - void *vaddr, dma_addr_t dma_handle, 119 - unsigned long attrs) 120 - { 121 - NPU_DMA_OP_UNSUPPORTED(); 122 - } 123 - 124 - static dma_addr_t dma_npu_map_page(struct device *dev, struct page *page, 125 - unsigned long offset, size_t size, 126 - enum dma_data_direction direction, 127 - unsigned long attrs) 128 - { 129 - NPU_DMA_OP_UNSUPPORTED(); 130 - return 0; 131 - } 132 - 133 - static int dma_npu_map_sg(struct device *dev, struct scatterlist *sglist, 134 - int nelems, enum dma_data_direction direction, 135 - unsigned long attrs) 136 - { 137 - NPU_DMA_OP_UNSUPPORTED(); 138 - return 0; 139 - } 140 - 141 - static int dma_npu_dma_supported(struct device *dev, u64 mask) 142 - { 143 - NPU_DMA_OP_UNSUPPORTED(); 144 - return 0; 145 - } 146 - 147 - static u64 dma_npu_get_required_mask(struct device *dev) 148 - { 149 - NPU_DMA_OP_UNSUPPORTED(); 150 - return 0; 151 - } 152 - 153 - static const struct dma_map_ops dma_npu_ops = { 154 - .map_page = dma_npu_map_page, 155 - .map_sg = dma_npu_map_sg, 156 - .alloc = dma_npu_alloc, 157 - .free = dma_npu_free, 158 - .dma_supported = dma_npu_dma_supported, 159 - .get_required_mask = dma_npu_get_required_mask, 160 - }; 161 - 162 105 /* 163 106 * Returns the PE assoicated with the PCI device of the given 164 107 * NPU. Returns the linked pci device if pci_dev != NULL. ··· 213 270 rc = pnv_npu_set_window(npe, 0, gpe->table_group.tables[0]); 214 271 215 272 /* 216 - * We don't initialise npu_pe->tce32_table as we always use 217 - * dma_npu_ops which are nops. 273 + * NVLink devices use the same TCE table configuration as 274 + * their parent device so drivers shouldn't be doing DMA 275 + * operations directly on these devices. 218 276 */ 219 - set_dma_ops(&npe->pdev->dev, &dma_npu_ops); 277 + set_dma_ops(&npe->pdev->dev, NULL); 220 278 } 221 279 222 280 /*
+18 -1
arch/riscv/Makefile
··· 71 71 # arch specific predefines for sparse 72 72 CHECKFLAGS += -D__riscv -D__riscv_xlen=$(BITS) 73 73 74 + # Default target when executing plain make 75 + boot := arch/riscv/boot 76 + KBUILD_IMAGE := $(boot)/Image.gz 77 + 74 78 head-y := arch/riscv/kernel/head.o 75 79 76 80 core-y += arch/riscv/kernel/ arch/riscv/mm/ 77 81 78 82 libs-y += arch/riscv/lib/ 79 83 80 - all: vmlinux 84 + PHONY += vdso_install 85 + vdso_install: 86 + $(Q)$(MAKE) $(build)=arch/riscv/kernel/vdso $@ 87 + 88 + all: Image.gz 89 + 90 + Image: vmlinux 91 + $(Q)$(MAKE) $(build)=$(boot) $(boot)/$@ 92 + 93 + Image.%: Image 94 + $(Q)$(MAKE) $(build)=$(boot) $(boot)/$@ 95 + 96 + zinstall install: 97 + $(Q)$(MAKE) $(build)=$(boot) $@
+2
arch/riscv/boot/.gitignore
··· 1 + Image 2 + Image.gz
+33
arch/riscv/boot/Makefile
··· 1 + # 2 + # arch/riscv/boot/Makefile 3 + # 4 + # This file is included by the global makefile so that you can add your own 5 + # architecture-specific flags and dependencies. 6 + # 7 + # This file is subject to the terms and conditions of the GNU General Public 8 + # License. See the file "COPYING" in the main directory of this archive 9 + # for more details. 10 + # 11 + # Copyright (C) 2018, Anup Patel. 12 + # Author: Anup Patel <anup@brainfault.org> 13 + # 14 + # Based on the ia64 and arm64 boot/Makefile. 15 + # 16 + 17 + OBJCOPYFLAGS_Image :=-O binary -R .note -R .note.gnu.build-id -R .comment -S 18 + 19 + targets := Image 20 + 21 + $(obj)/Image: vmlinux FORCE 22 + $(call if_changed,objcopy) 23 + 24 + $(obj)/Image.gz: $(obj)/Image FORCE 25 + $(call if_changed,gzip) 26 + 27 + install: 28 + $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \ 29 + $(obj)/Image System.map "$(INSTALL_PATH)" 30 + 31 + zinstall: 32 + $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \ 33 + $(obj)/Image.gz System.map "$(INSTALL_PATH)"
+60
arch/riscv/boot/install.sh
··· 1 + #!/bin/sh 2 + # 3 + # arch/riscv/boot/install.sh 4 + # 5 + # This file is subject to the terms and conditions of the GNU General Public 6 + # License. See the file "COPYING" in the main directory of this archive 7 + # for more details. 8 + # 9 + # Copyright (C) 1995 by Linus Torvalds 10 + # 11 + # Adapted from code in arch/i386/boot/Makefile by H. Peter Anvin 12 + # Adapted from code in arch/i386/boot/install.sh by Russell King 13 + # 14 + # "make install" script for the RISC-V Linux port 15 + # 16 + # Arguments: 17 + # $1 - kernel version 18 + # $2 - kernel image file 19 + # $3 - kernel map file 20 + # $4 - default install path (blank if root directory) 21 + # 22 + 23 + verify () { 24 + if [ ! -f "$1" ]; then 25 + echo "" 1>&2 26 + echo " *** Missing file: $1" 1>&2 27 + echo ' *** You need to run "make" before "make install".' 1>&2 28 + echo "" 1>&2 29 + exit 1 30 + fi 31 + } 32 + 33 + # Make sure the files actually exist 34 + verify "$2" 35 + verify "$3" 36 + 37 + # User may have a custom install script 38 + if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi 39 + if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi 40 + 41 + if [ "$(basename $2)" = "Image.gz" ]; then 42 + # Compressed install 43 + echo "Installing compressed kernel" 44 + base=vmlinuz 45 + else 46 + # Normal install 47 + echo "Installing normal kernel" 48 + base=vmlinux 49 + fi 50 + 51 + if [ -f $4/$base-$1 ]; then 52 + mv $4/$base-$1 $4/$base-$1.old 53 + fi 54 + cat $2 > $4/$base-$1 55 + 56 + # Install system map file 57 + if [ -f $4/System.map-$1 ]; then 58 + mv $4/System.map-$1 $4/System.map-$1.old 59 + fi 60 + cp $3 $4/System.map-$1
+1
arch/riscv/configs/defconfig
··· 76 76 CONFIG_NFS_V4_2=y 77 77 CONFIG_ROOT_NFS=y 78 78 CONFIG_CRYPTO_USER_API_HASH=y 79 + CONFIG_PRINTK_TIME=y 79 80 # CONFIG_RCU_TRACE is not set
+1
arch/riscv/include/asm/module.h
··· 8 8 9 9 #define MODULE_ARCH_VERMAGIC "riscv" 10 10 11 + struct module; 11 12 u64 module_emit_got_entry(struct module *mod, u64 val); 12 13 u64 module_emit_plt_entry(struct module *mod, u64 val); 13 14
+2 -2
arch/riscv/include/asm/ptrace.h
··· 56 56 unsigned long sstatus; 57 57 unsigned long sbadaddr; 58 58 unsigned long scause; 59 - /* a0 value before the syscall */ 60 - unsigned long orig_a0; 59 + /* a0 value before the syscall */ 60 + unsigned long orig_a0; 61 61 }; 62 62 63 63 #ifdef CONFIG_64BIT
+2 -2
arch/riscv/include/asm/uaccess.h
··· 400 400 static inline unsigned long 401 401 raw_copy_from_user(void *to, const void __user *from, unsigned long n) 402 402 { 403 - return __asm_copy_to_user(to, from, n); 403 + return __asm_copy_from_user(to, from, n); 404 404 } 405 405 406 406 static inline unsigned long 407 407 raw_copy_to_user(void __user *to, const void *from, unsigned long n) 408 408 { 409 - return __asm_copy_from_user(to, from, n); 409 + return __asm_copy_to_user(to, from, n); 410 410 } 411 411 412 412 extern long strncpy_from_user(char *dest, const char __user *src, long count);
+2 -3
arch/riscv/include/asm/unistd.h
··· 13 13 14 14 /* 15 15 * There is explicitly no include guard here because this file is expected to 16 - * be included multiple times. See uapi/asm/syscalls.h for more info. 16 + * be included multiple times. 17 17 */ 18 18 19 - #define __ARCH_WANT_NEW_STAT 20 19 #define __ARCH_WANT_SYS_CLONE 20 + 21 21 #include <uapi/asm/unistd.h> 22 - #include <uapi/asm/syscalls.h>
+19 -7
arch/riscv/include/uapi/asm/syscalls.h arch/riscv/include/uapi/asm/unistd.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 2 /* 3 - * Copyright (C) 2017-2018 SiFive 3 + * Copyright (C) 2018 David Abdurachmanov <david.abdurachmanov@gmail.com> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 4 16 */ 5 17 6 - /* 7 - * There is explicitly no include guard here because this file is expected to 8 - * be included multiple times in order to define the syscall macros via 9 - * __SYSCALL. 10 - */ 18 + #ifdef __LP64__ 19 + #define __ARCH_WANT_NEW_STAT 20 + #endif /* __LP64__ */ 21 + 22 + #include <asm-generic/unistd.h> 11 23 12 24 /* 13 25 * Allows the instruction cache to be flushed from userspace. Despite RISC-V
+6 -3
arch/riscv/kernel/cpu.c
··· 64 64 65 65 static void print_isa(struct seq_file *f, const char *orig_isa) 66 66 { 67 - static const char *ext = "mafdc"; 67 + static const char *ext = "mafdcsu"; 68 68 const char *isa = orig_isa; 69 69 const char *e; 70 70 ··· 88 88 /* 89 89 * Check the rest of the ISA string for valid extensions, printing those 90 90 * we find. RISC-V ISA strings define an order, so we only print the 91 - * extension bits when they're in order. 91 + * extension bits when they're in order. Hide the supervisor (S) 92 + * extension from userspace as it's not accessible from there. 92 93 */ 93 94 for (e = ext; *e != '\0'; ++e) { 94 95 if (isa[0] == e[0]) { 95 - seq_write(f, isa, 1); 96 + if (isa[0] != 's') 97 + seq_write(f, isa, 1); 98 + 96 99 isa++; 97 100 } 98 101 }
+10
arch/riscv/kernel/head.S
··· 44 44 amoadd.w a3, a2, (a3) 45 45 bnez a3, .Lsecondary_start 46 46 47 + /* Clear BSS for flat non-ELF images */ 48 + la a3, __bss_start 49 + la a4, __bss_stop 50 + ble a4, a3, clear_bss_done 51 + clear_bss: 52 + REG_S zero, (a3) 53 + add a3, a3, RISCV_SZPTR 54 + blt a3, a4, clear_bss 55 + clear_bss_done: 56 + 47 57 /* Save hart ID and DTB physical address */ 48 58 mv s0, a0 49 59 mv s1, a1
+6 -6
arch/riscv/kernel/module.c
··· 21 21 { 22 22 if (v != (u32)v) { 23 23 pr_err("%s: value %016llx out of range for 32-bit field\n", 24 - me->name, v); 24 + me->name, (long long)v); 25 25 return -EINVAL; 26 26 } 27 27 *location = v; ··· 102 102 if (offset != (s32)offset) { 103 103 pr_err( 104 104 "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n", 105 - me->name, v, location); 105 + me->name, (long long)v, location); 106 106 return -EINVAL; 107 107 } 108 108 ··· 144 144 if (IS_ENABLED(CMODEL_MEDLOW)) { 145 145 pr_err( 146 146 "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n", 147 - me->name, v, location); 147 + me->name, (long long)v, location); 148 148 return -EINVAL; 149 149 } 150 150 ··· 188 188 } else { 189 189 pr_err( 190 190 "%s: can not generate the GOT entry for symbol = %016llx from PC = %p\n", 191 - me->name, v, location); 191 + me->name, (long long)v, location); 192 192 return -EINVAL; 193 193 } 194 194 ··· 212 212 } else { 213 213 pr_err( 214 214 "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n", 215 - me->name, v, location); 215 + me->name, (long long)v, location); 216 216 return -EINVAL; 217 217 } 218 218 } ··· 234 234 if (offset != fill_v) { 235 235 pr_err( 236 236 "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n", 237 - me->name, v, location); 237 + me->name, (long long)v, location); 238 238 return -EINVAL; 239 239 } 240 240
+1 -1
arch/riscv/kernel/vmlinux.lds.S
··· 74 74 *(.sbss*) 75 75 } 76 76 77 - BSS_SECTION(0, 0, 0) 77 + BSS_SECTION(PAGE_SIZE, PAGE_SIZE, 0) 78 78 79 79 EXCEPTION_TABLE(0x10) 80 80 NOTES
+1 -1
arch/riscv/lib/Makefile
··· 3 3 lib-y += memset.o 4 4 lib-y += uaccess.o 5 5 6 - lib-(CONFIG_64BIT) += tishift.o 6 + lib-$(CONFIG_64BIT) += tishift.o 7 7 8 8 lib-$(CONFIG_32BIT) += udivdi3.o
+25 -8
arch/x86/events/intel/uncore.h
··· 129 129 struct intel_uncore_extra_reg shared_regs[0]; 130 130 }; 131 131 132 - #define UNCORE_BOX_FLAG_INITIATED 0 133 - #define UNCORE_BOX_FLAG_CTL_OFFS8 1 /* event config registers are 8-byte apart */ 132 + /* CFL uncore 8th cbox MSRs */ 133 + #define CFL_UNC_CBO_7_PERFEVTSEL0 0xf70 134 + #define CFL_UNC_CBO_7_PER_CTR0 0xf76 135 + 136 + #define UNCORE_BOX_FLAG_INITIATED 0 137 + /* event config registers are 8-byte apart */ 138 + #define UNCORE_BOX_FLAG_CTL_OFFS8 1 139 + /* CFL 8th CBOX has different MSR space */ 140 + #define UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS 2 134 141 135 142 struct uncore_event_desc { 136 143 struct kobj_attribute attr; ··· 304 297 static inline 305 298 unsigned uncore_msr_event_ctl(struct intel_uncore_box *box, int idx) 306 299 { 307 - return box->pmu->type->event_ctl + 308 - (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) + 309 - uncore_msr_box_offset(box); 300 + if (test_bit(UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS, &box->flags)) { 301 + return CFL_UNC_CBO_7_PERFEVTSEL0 + 302 + (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx); 303 + } else { 304 + return box->pmu->type->event_ctl + 305 + (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) + 306 + uncore_msr_box_offset(box); 307 + } 310 308 } 311 309 312 310 static inline 313 311 unsigned uncore_msr_perf_ctr(struct intel_uncore_box *box, int idx) 314 312 { 315 - return box->pmu->type->perf_ctr + 316 - (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) + 317 - uncore_msr_box_offset(box); 313 + if (test_bit(UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS, &box->flags)) { 314 + return CFL_UNC_CBO_7_PER_CTR0 + 315 + (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx); 316 + } else { 317 + return box->pmu->type->perf_ctr + 318 + (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) + 319 + uncore_msr_box_offset(box); 320 + } 318 321 } 319 322 320 323 static inline
+119 -2
arch/x86/events/intel/uncore_snb.c
··· 15 15 #define PCI_DEVICE_ID_INTEL_SKL_HQ_IMC 0x1910 16 16 #define PCI_DEVICE_ID_INTEL_SKL_SD_IMC 0x190f 17 17 #define PCI_DEVICE_ID_INTEL_SKL_SQ_IMC 0x191f 18 + #define PCI_DEVICE_ID_INTEL_KBL_Y_IMC 0x590c 19 + #define PCI_DEVICE_ID_INTEL_KBL_U_IMC 0x5904 20 + #define PCI_DEVICE_ID_INTEL_KBL_UQ_IMC 0x5914 21 + #define PCI_DEVICE_ID_INTEL_KBL_SD_IMC 0x590f 22 + #define PCI_DEVICE_ID_INTEL_KBL_SQ_IMC 0x591f 23 + #define PCI_DEVICE_ID_INTEL_CFL_2U_IMC 0x3ecc 24 + #define PCI_DEVICE_ID_INTEL_CFL_4U_IMC 0x3ed0 25 + #define PCI_DEVICE_ID_INTEL_CFL_4H_IMC 0x3e10 26 + #define PCI_DEVICE_ID_INTEL_CFL_6H_IMC 0x3ec4 27 + #define PCI_DEVICE_ID_INTEL_CFL_2S_D_IMC 0x3e0f 28 + #define PCI_DEVICE_ID_INTEL_CFL_4S_D_IMC 0x3e1f 29 + #define PCI_DEVICE_ID_INTEL_CFL_6S_D_IMC 0x3ec2 30 + #define PCI_DEVICE_ID_INTEL_CFL_8S_D_IMC 0x3e30 31 + #define PCI_DEVICE_ID_INTEL_CFL_4S_W_IMC 0x3e18 32 + #define PCI_DEVICE_ID_INTEL_CFL_6S_W_IMC 0x3ec6 33 + #define PCI_DEVICE_ID_INTEL_CFL_8S_W_IMC 0x3e31 34 + #define PCI_DEVICE_ID_INTEL_CFL_4S_S_IMC 0x3e33 35 + #define PCI_DEVICE_ID_INTEL_CFL_6S_S_IMC 0x3eca 36 + #define PCI_DEVICE_ID_INTEL_CFL_8S_S_IMC 0x3e32 18 37 19 38 /* SNB event control */ 20 39 #define SNB_UNC_CTL_EV_SEL_MASK 0x000000ff ··· 221 202 wrmsrl(SKL_UNC_PERF_GLOBAL_CTL, 222 203 SNB_UNC_GLOBAL_CTL_EN | SKL_UNC_GLOBAL_CTL_CORE_ALL); 223 204 } 205 + 206 + /* The 8th CBOX has different MSR space */ 207 + if (box->pmu->pmu_idx == 7) 208 + __set_bit(UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS, &box->flags); 224 209 } 225 210 226 211 static void skl_uncore_msr_enable_box(struct intel_uncore_box *box) ··· 251 228 static struct intel_uncore_type skl_uncore_cbox = { 252 229 .name = "cbox", 253 230 .num_counters = 4, 254 - .num_boxes = 5, 231 + .num_boxes = 8, 255 232 .perf_ctr_bits = 44, 256 233 .fixed_ctr_bits = 48, 257 234 .perf_ctr = SNB_UNC_CBO_0_PER_CTR0, ··· 592 569 PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_SQ_IMC), 593 570 .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 594 571 }, 595 - 572 + { /* IMC */ 573 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_Y_IMC), 574 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 575 + }, 576 + { /* IMC */ 577 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_U_IMC), 578 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 579 + }, 580 + { /* IMC */ 581 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_UQ_IMC), 582 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 583 + }, 584 + { /* IMC */ 585 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_SD_IMC), 586 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 587 + }, 588 + { /* IMC */ 589 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_SQ_IMC), 590 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 591 + }, 592 + { /* IMC */ 593 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_2U_IMC), 594 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 595 + }, 596 + { /* IMC */ 597 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4U_IMC), 598 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 599 + }, 600 + { /* IMC */ 601 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4H_IMC), 602 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 603 + }, 604 + { /* IMC */ 605 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6H_IMC), 606 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 607 + }, 608 + { /* IMC */ 609 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_2S_D_IMC), 610 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 611 + }, 612 + { /* IMC */ 613 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_D_IMC), 614 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 615 + }, 616 + { /* IMC */ 617 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_D_IMC), 618 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 619 + }, 620 + { /* IMC */
621 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_D_IMC), 622 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 623 + }, 624 + { /* IMC */ 625 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_W_IMC), 626 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 627 + }, 628 + { /* IMC */ 629 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_W_IMC), 630 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 631 + }, 632 + { /* IMC */ 633 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_W_IMC), 634 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 635 + }, 636 + { /* IMC */ 637 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_S_IMC), 638 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 639 + }, 640 + { /* IMC */ 641 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_S_IMC), 642 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 643 + }, 644 + { /* IMC */ 645 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_S_IMC), 646 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 647 + }, 596 648 { /* end: all zeroes */ }, 597 649 }; 598 650 ··· 716 618 IMC_DEV(SKL_HQ_IMC, &skl_uncore_pci_driver), /* 6th Gen Core H Quad Core */ 717 619 IMC_DEV(SKL_SD_IMC, &skl_uncore_pci_driver), /* 6th Gen Core S Dual Core */ 718 620 IMC_DEV(SKL_SQ_IMC, &skl_uncore_pci_driver), /* 6th Gen Core S Quad Core */ 621 + IMC_DEV(KBL_Y_IMC, &skl_uncore_pci_driver), /* 7th Gen Core Y */ 622 + IMC_DEV(KBL_U_IMC, &skl_uncore_pci_driver), /* 7th Gen Core U */ 623 + IMC_DEV(KBL_UQ_IMC, &skl_uncore_pci_driver), /* 7th Gen Core U Quad Core */ 624 + IMC_DEV(KBL_SD_IMC, &skl_uncore_pci_driver), /* 7th Gen Core S Dual Core */ 625 + IMC_DEV(KBL_SQ_IMC, &skl_uncore_pci_driver), /* 7th Gen Core S Quad Core */ 626 + IMC_DEV(CFL_2U_IMC, &skl_uncore_pci_driver), /* 8th Gen Core U 2 Cores */ 627 + IMC_DEV(CFL_4U_IMC, &skl_uncore_pci_driver), /* 8th Gen Core U 4 Cores */ 628 
+ IMC_DEV(CFL_4H_IMC, &skl_uncore_pci_driver), /* 8th Gen Core H 4 Cores */ 629 + IMC_DEV(CFL_6H_IMC, &skl_uncore_pci_driver), /* 8th Gen Core H 6 Cores */ 630 + IMC_DEV(CFL_2S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 2 Cores Desktop */ 631 + IMC_DEV(CFL_4S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Desktop */ 632 + IMC_DEV(CFL_6S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Desktop */ 633 + IMC_DEV(CFL_8S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Desktop */ 634 + IMC_DEV(CFL_4S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Work Station */ 635 + IMC_DEV(CFL_6S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Work Station */ 636 + IMC_DEV(CFL_8S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Work Station */ 637 + IMC_DEV(CFL_4S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Server */ 638 + IMC_DEV(CFL_6S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Server */ 639 + IMC_DEV(CFL_8S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Server */ 719 640 { /* end marker */ } 720 641 }; 721 642
+5 -1
arch/xtensa/include/asm/processor.h
··· 23 23 # error Linux requires the Xtensa Windowed Registers Option. 24 24 #endif 25 25 26 - #define ARCH_SLAB_MINALIGN XCHAL_DATA_WIDTH 26 + /* Xtensa ABI requires stack alignment to be at least 16 */ 27 + 28 + #define STACK_ALIGN (XCHAL_DATA_WIDTH > 16 ? XCHAL_DATA_WIDTH : 16) 29 + 30 + #define ARCH_SLAB_MINALIGN STACK_ALIGN 27 31 28 32 /* 29 33 * User space process size: 1 GB.
+5 -2
arch/xtensa/kernel/head.S
··· 88 88 initialize_mmu 89 89 #if defined(CONFIG_MMU) && XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY 90 90 rsr a2, excsave1 91 - movi a3, 0x08000000 91 + movi a3, XCHAL_KSEG_PADDR 92 + bltu a2, a3, 1f 93 + sub a2, a2, a3 94 + movi a3, XCHAL_KSEG_SIZE 92 95 bgeu a2, a3, 1f 93 - movi a3, 0xd0000000 96 + movi a3, XCHAL_KSEG_CACHED_VADDR 94 97 add a2, a2, a3 95 98 wsr a2, excsave1 96 99 1:
+1
block/bio.c
··· 605 605 if (bio_flagged(bio_src, BIO_THROTTLED)) 606 606 bio_set_flag(bio, BIO_THROTTLED); 607 607 bio->bi_opf = bio_src->bi_opf; 608 + bio->bi_ioprio = bio_src->bi_ioprio; 608 609 bio->bi_write_hint = bio_src->bi_write_hint; 609 610 bio->bi_iter = bio_src->bi_iter; 610 611 bio->bi_io_vec = bio_src->bi_io_vec;
+2 -3
block/blk-core.c
··· 798 798 * dispatch may still be in-progress since we dispatch requests 799 799 * from more than one contexts. 800 800 * 801 - * No need to quiesce queue if it isn't initialized yet since 802 - * blk_freeze_queue() should be enough for cases of passthrough 803 - * request. 801 + * We rely on driver to deal with the race in case that queue 802 + * initialization isn't done. 804 803 */ 805 804 if (q->mq_ops && blk_queue_init_done(q)) 806 805 blk_mq_quiesce_queue(q);
+3 -1
block/blk-lib.c
··· 55 55 return -EINVAL; 56 56 57 57 while (nr_sects) { 58 - unsigned int req_sects = min_t(unsigned int, nr_sects, 58 + sector_t req_sects = min_t(sector_t, nr_sects, 59 59 bio_allowed_max_sectors(q)); 60 + 61 + WARN_ON_ONCE((req_sects << 9) > UINT_MAX); 60 62 61 63 bio = blk_next_bio(bio, 0, gfp_mask); 62 64 bio->bi_iter.bi_sector = sector;
+1
block/bounce.c
··· 248 248 return NULL; 249 249 bio->bi_disk = bio_src->bi_disk; 250 250 bio->bi_opf = bio_src->bi_opf; 251 + bio->bi_ioprio = bio_src->bi_ioprio; 251 252 bio->bi_write_hint = bio_src->bi_write_hint; 252 253 bio->bi_iter.bi_sector = bio_src->bi_iter.bi_sector; 253 254 bio->bi_iter.bi_size = bio_src->bi_iter.bi_size;
+9 -9
crypto/crypto_user_base.c
··· 84 84 { 85 85 struct crypto_report_cipher rcipher; 86 86 87 - strlcpy(rcipher.type, "cipher", sizeof(rcipher.type)); 87 + strncpy(rcipher.type, "cipher", sizeof(rcipher.type)); 88 88 89 89 rcipher.blocksize = alg->cra_blocksize; 90 90 rcipher.min_keysize = alg->cra_cipher.cia_min_keysize; ··· 103 103 { 104 104 struct crypto_report_comp rcomp; 105 105 106 - strlcpy(rcomp.type, "compression", sizeof(rcomp.type)); 106 + strncpy(rcomp.type, "compression", sizeof(rcomp.type)); 107 107 if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS, 108 108 sizeof(struct crypto_report_comp), &rcomp)) 109 109 goto nla_put_failure; ··· 117 117 { 118 118 struct crypto_report_acomp racomp; 119 119 120 - strlcpy(racomp.type, "acomp", sizeof(racomp.type)); 120 + strncpy(racomp.type, "acomp", sizeof(racomp.type)); 121 121 122 122 if (nla_put(skb, CRYPTOCFGA_REPORT_ACOMP, 123 123 sizeof(struct crypto_report_acomp), &racomp)) ··· 132 132 { 133 133 struct crypto_report_akcipher rakcipher; 134 134 135 - strlcpy(rakcipher.type, "akcipher", sizeof(rakcipher.type)); 135 + strncpy(rakcipher.type, "akcipher", sizeof(rakcipher.type)); 136 136 137 137 if (nla_put(skb, CRYPTOCFGA_REPORT_AKCIPHER, 138 138 sizeof(struct crypto_report_akcipher), &rakcipher)) ··· 147 147 { 148 148 struct crypto_report_kpp rkpp; 149 149 150 - strlcpy(rkpp.type, "kpp", sizeof(rkpp.type)); 150 + strncpy(rkpp.type, "kpp", sizeof(rkpp.type)); 151 151 152 152 if (nla_put(skb, CRYPTOCFGA_REPORT_KPP, 153 153 sizeof(struct crypto_report_kpp), &rkpp)) ··· 161 161 static int crypto_report_one(struct crypto_alg *alg, 162 162 struct crypto_user_alg *ualg, struct sk_buff *skb) 163 163 { 164 - strlcpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name)); 165 - strlcpy(ualg->cru_driver_name, alg->cra_driver_name, 164 + strncpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name)); 165 + strncpy(ualg->cru_driver_name, alg->cra_driver_name, 166 166 sizeof(ualg->cru_driver_name)); 167 - strlcpy(ualg->cru_module_name, 
module_name(alg->cra_module), 167 + strncpy(ualg->cru_module_name, module_name(alg->cra_module), 168 168 sizeof(ualg->cru_module_name)); 169 169 170 170 ualg->cru_type = 0; ··· 177 177 if (alg->cra_flags & CRYPTO_ALG_LARVAL) { 178 178 struct crypto_report_larval rl; 179 179 180 - strlcpy(rl.type, "larval", sizeof(rl.type)); 180 + strncpy(rl.type, "larval", sizeof(rl.type)); 181 181 if (nla_put(skb, CRYPTOCFGA_REPORT_LARVAL, 182 182 sizeof(struct crypto_report_larval), &rl)) 183 183 goto nla_put_failure;
+21
crypto/crypto_user_stat.c
··· 37 37 u64 v64; 38 38 u32 v32; 39 39 40 + memset(&raead, 0, sizeof(raead)); 41 + 40 42 strncpy(raead.type, "aead", sizeof(raead.type)); 41 43 42 44 v32 = atomic_read(&alg->encrypt_cnt); ··· 66 64 struct crypto_stat rcipher; 67 65 u64 v64; 68 66 u32 v32; 67 + 68 + memset(&rcipher, 0, sizeof(rcipher)); 69 69 70 70 strlcpy(rcipher.type, "cipher", sizeof(rcipher.type)); 71 71 ··· 97 93 u64 v64; 98 94 u32 v32; 99 95 96 + memset(&rcomp, 0, sizeof(rcomp)); 97 + 100 98 strlcpy(rcomp.type, "compression", sizeof(rcomp.type)); 101 99 v32 = atomic_read(&alg->compress_cnt); 102 100 rcomp.stat_compress_cnt = v32; ··· 126 120 u64 v64; 127 121 u32 v32; 128 122 123 + memset(&racomp, 0, sizeof(racomp)); 124 + 129 125 strlcpy(racomp.type, "acomp", sizeof(racomp.type)); 130 126 v32 = atomic_read(&alg->compress_cnt); 131 127 racomp.stat_compress_cnt = v32; ··· 154 146 struct crypto_stat rakcipher; 155 147 u64 v64; 156 148 u32 v32; 149 + 150 + memset(&rakcipher, 0, sizeof(rakcipher)); 157 151 158 152 strncpy(rakcipher.type, "akcipher", sizeof(rakcipher.type)); 159 153 v32 = atomic_read(&alg->encrypt_cnt); ··· 187 177 struct crypto_stat rkpp; 188 178 u32 v; 189 179 180 + memset(&rkpp, 0, sizeof(rkpp)); 181 + 190 182 strlcpy(rkpp.type, "kpp", sizeof(rkpp.type)); 191 183 192 184 v = atomic_read(&alg->setsecret_cnt); ··· 215 203 u64 v64; 216 204 u32 v32; 217 205 206 + memset(&rhash, 0, sizeof(rhash)); 207 + 218 208 strncpy(rhash.type, "ahash", sizeof(rhash.type)); 219 209 220 210 v32 = atomic_read(&alg->hash_cnt); ··· 240 226 struct crypto_stat rhash; 241 227 u64 v64; 242 228 u32 v32; 229 + 230 + memset(&rhash, 0, sizeof(rhash)); 243 231 244 232 strncpy(rhash.type, "shash", sizeof(rhash.type)); 245 233 ··· 267 251 u64 v64; 268 252 u32 v32; 269 253 254 + memset(&rrng, 0, sizeof(rrng)); 255 + 270 256 strncpy(rrng.type, "rng", sizeof(rrng.type)); 271 257 272 258 v32 = atomic_read(&alg->generate_cnt); ··· 293 275 struct crypto_user_alg *ualg, 294 276 struct sk_buff *skb) 295 277 { 278 + 
memset(ualg, 0, sizeof(*ualg)); 279 + 296 280 strlcpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name)); 297 281 strlcpy(ualg->cru_driver_name, alg->cra_driver_name, 298 282 sizeof(ualg->cru_driver_name)); ··· 311 291 if (alg->cra_flags & CRYPTO_ALG_LARVAL) { 312 292 struct crypto_stat rl; 313 293 294 + memset(&rl, 0, sizeof(rl)); 314 295 strlcpy(rl.type, "larval", sizeof(rl.type)); 315 296 if (nla_put(skb, CRYPTOCFGA_STAT_LARVAL, 316 297 sizeof(struct crypto_stat), &rl))
+3 -2
crypto/simd.c
··· 124 124 125 125 ctx->cryptd_tfm = cryptd_tfm; 126 126 127 - reqsize = sizeof(struct skcipher_request); 128 - reqsize += crypto_skcipher_reqsize(&cryptd_tfm->base); 127 + reqsize = crypto_skcipher_reqsize(cryptd_skcipher_child(cryptd_tfm)); 128 + reqsize = max(reqsize, crypto_skcipher_reqsize(&cryptd_tfm->base)); 129 + reqsize += sizeof(struct skcipher_request); 129 130 130 131 crypto_skcipher_set_reqsize(tfm, reqsize); 131 132
+1 -1
drivers/acpi/Kconfig
··· 512 512 513 513 config XPOWER_PMIC_OPREGION 514 514 bool "ACPI operation region support for XPower AXP288 PMIC" 515 - depends on MFD_AXP20X_I2C && IOSF_MBI 515 + depends on MFD_AXP20X_I2C && IOSF_MBI=y 516 516 help 517 517 This config adds ACPI operation region support for XPower AXP288 PMIC. 518 518
+1
drivers/acpi/acpi_platform.c
··· 30 30 {"PNP0200", 0}, /* AT DMA Controller */ 31 31 {"ACPI0009", 0}, /* IOxAPIC */ 32 32 {"ACPI000A", 0}, /* IOAPIC */ 33 + {"SMB0001", 0}, /* ACPI SMBUS virtual device */ 33 34 {"", 0}, 34 35 }; 35 36
+5 -14
drivers/acpi/nfit/core.c
··· 2928 2928 return rc; 2929 2929 2930 2930 if (ars_status_process_records(acpi_desc)) 2931 - return -ENOMEM; 2931 + dev_err(acpi_desc->dev, "Failed to process ARS records\n"); 2932 2932 2933 - return 0; 2933 + return rc; 2934 2934 } 2935 2935 2936 2936 static int ars_register(struct acpi_nfit_desc *acpi_desc, ··· 3341 3341 struct nvdimm *nvdimm, unsigned int cmd) 3342 3342 { 3343 3343 struct acpi_nfit_desc *acpi_desc = to_acpi_nfit_desc(nd_desc); 3344 - struct nfit_spa *nfit_spa; 3345 - int rc = 0; 3346 3344 3347 3345 if (nvdimm) 3348 3346 return 0; ··· 3353 3355 * just needs guarantees that any ARS it initiates are not 3354 3356 * interrupted by any intervening start requests from userspace. 3355 3357 */ 3356 - mutex_lock(&acpi_desc->init_mutex); 3357 - list_for_each_entry(nfit_spa, &acpi_desc->spas, list) 3358 - if (acpi_desc->scrub_spa 3359 - || test_bit(ARS_REQ_SHORT, &nfit_spa->ars_state) 3360 - || test_bit(ARS_REQ_LONG, &nfit_spa->ars_state)) { 3361 - rc = -EBUSY; 3362 - break; 3363 - } 3364 - mutex_unlock(&acpi_desc->init_mutex); 3358 + if (work_busy(&acpi_desc->dwork.work)) 3359 + return -EBUSY; 3365 3360 3366 - return rc; 3361 + return 0; 3367 3362 } 3368 3363 3369 3364 int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
+1 -1
drivers/ata/libata-core.c
··· 4553 4553 /* These specific Samsung models/firmware-revs do not handle LPM well */ 4554 4554 { "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, }, 4555 4555 { "SAMSUNG SSD PM830 mSATA *", "CXM13D1Q", ATA_HORKAGE_NOLPM, }, 4556 - { "SAMSUNG MZ7TD256HAFV-000L9", "DXT02L5Q", ATA_HORKAGE_NOLPM, }, 4556 + { "SAMSUNG MZ7TD256HAFV-000L9", NULL, ATA_HORKAGE_NOLPM, }, 4557 4557 4558 4558 /* devices that don't properly handle queued TRIM commands */ 4559 4559 { "Micron_M500IT_*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
+2 -1
drivers/block/floppy.c
··· 4148 4148 bio.bi_end_io = floppy_rb0_cb; 4149 4149 bio_set_op_attrs(&bio, REQ_OP_READ, 0); 4150 4150 4151 + init_completion(&cbdata.complete); 4152 + 4151 4153 submit_bio(&bio); 4152 4154 process_fd_request(); 4153 4155 4154 - init_completion(&cbdata.complete); 4155 4156 wait_for_completion(&cbdata.complete); 4156 4157 4157 4158 __free_page(page);
+6 -1
drivers/cpufreq/imx6q-cpufreq.c
··· 160 160 /* Ensure the arm clock divider is what we expect */ 161 161 ret = clk_set_rate(clks[ARM].clk, new_freq * 1000); 162 162 if (ret) { 163 + int ret1; 164 + 163 165 dev_err(cpu_dev, "failed to set clock rate: %d\n", ret); 164 - regulator_set_voltage_tol(arm_reg, volt_old, 0); 166 + ret1 = regulator_set_voltage_tol(arm_reg, volt_old, 0); 167 + if (ret1) 168 + dev_warn(cpu_dev, 169 + "failed to restore vddarm voltage: %d\n", ret1); 165 170 return ret; 166 171 } 167 172
+21 -5
drivers/cpufreq/ti-cpufreq.c
··· 201 201 {}, 202 202 }; 203 203 204 + static const struct of_device_id *ti_cpufreq_match_node(void) 205 + { 206 + struct device_node *np; 207 + const struct of_device_id *match; 208 + 209 + np = of_find_node_by_path("/"); 210 + match = of_match_node(ti_cpufreq_of_match, np); 211 + of_node_put(np); 212 + 213 + return match; 214 + } 215 + 204 216 static int ti_cpufreq_probe(struct platform_device *pdev) 205 217 { 206 218 u32 version[VERSION_COUNT]; 207 - struct device_node *np; 208 219 const struct of_device_id *match; 209 220 struct opp_table *ti_opp_table; 210 221 struct ti_cpufreq_data *opp_data; 211 222 const char * const reg_names[] = {"vdd", "vbb"}; 212 223 int ret; 213 224 214 - np = of_find_node_by_path("/"); 215 - match = of_match_node(ti_cpufreq_of_match, np); 216 - of_node_put(np); 225 + match = dev_get_platdata(&pdev->dev); 217 226 if (!match) 218 227 return -ENODEV; 219 228 ··· 299 290 300 291 static int ti_cpufreq_init(void) 301 292 { 302 - platform_device_register_simple("ti-cpufreq", -1, NULL, 0); 293 + const struct of_device_id *match; 294 + 295 + /* Check to ensure we are on a compatible platform */ 296 + match = ti_cpufreq_match_node(); 297 + if (match) 298 + platform_device_register_data(NULL, "ti-cpufreq", -1, match, 299 + sizeof(*match)); 300 + 303 301 return 0; 304 302 } 305 303 module_init(ti_cpufreq_init);
+7 -33
drivers/cpuidle/cpuidle-arm.c
··· 82 82 { 83 83 int ret; 84 84 struct cpuidle_driver *drv; 85 - struct cpuidle_device *dev; 86 85 87 86 drv = kmemdup(&arm_idle_driver, sizeof(*drv), GFP_KERNEL); 88 87 if (!drv) ··· 102 103 goto out_kfree_drv; 103 104 } 104 105 105 - ret = cpuidle_register_driver(drv); 106 - if (ret) { 107 - if (ret != -EBUSY) 108 - pr_err("Failed to register cpuidle driver\n"); 109 - goto out_kfree_drv; 110 - } 111 - 112 106 /* 113 107 * Call arch CPU operations in order to initialize 114 108 * idle states suspend back-end specific data ··· 109 117 ret = arm_cpuidle_init(cpu); 110 118 111 119 /* 112 - * Skip the cpuidle device initialization if the reported 120 + * Allow the initialization to continue for other CPUs, if the reported 113 121 * failure is a HW misconfiguration/breakage (-ENXIO). 114 122 */ 115 - if (ret == -ENXIO) 116 - return 0; 117 - 118 123 if (ret) { 119 124 pr_err("CPU %d failed to init idle CPU ops\n", cpu); 120 - goto out_unregister_drv; 125 + ret = ret == -ENXIO ? 0 : ret; 126 + goto out_kfree_drv; 121 127 } 122 128 123 - dev = kzalloc(sizeof(*dev), GFP_KERNEL); 124 - if (!dev) { 125 - ret = -ENOMEM; 126 - goto out_unregister_drv; 127 - } 128 - dev->cpu = cpu; 129 - 130 - ret = cpuidle_register_device(dev); 131 - if (ret) { 132 - pr_err("Failed to register cpuidle device for CPU %d\n", 133 - cpu); 134 - goto out_kfree_dev; 135 - } 129 + ret = cpuidle_register(drv, NULL); 130 + if (ret) 131 + goto out_kfree_drv; 136 132 137 133 return 0; 138 134 139 - out_kfree_dev: 140 - kfree(dev); 141 - out_unregister_drv: 142 - cpuidle_unregister_driver(drv); 143 135 out_kfree_drv: 144 136 kfree(drv); 145 137 return ret; ··· 154 178 while (--cpu >= 0) { 155 179 dev = per_cpu(cpuidle_devices, cpu); 156 180 drv = cpuidle_get_cpu_driver(dev); 157 - cpuidle_unregister_device(dev); 158 - cpuidle_unregister_driver(drv); 159 - kfree(dev); 181 + cpuidle_unregister(drv); 160 182 kfree(drv); 161 183 } 162 184
+17 -14
drivers/crypto/hisilicon/sec/sec_algs.c
··· 732 732 int *splits_in_nents; 733 733 int *splits_out_nents = NULL; 734 734 struct sec_request_el *el, *temp; 735 + bool split = skreq->src != skreq->dst; 735 736 736 737 mutex_init(&sec_req->lock); 737 738 sec_req->req_base = &skreq->base; ··· 751 750 if (ret) 752 751 goto err_free_split_sizes; 753 752 754 - if (skreq->src != skreq->dst) { 753 + if (split) { 755 754 sec_req->len_out = sg_nents(skreq->dst); 756 755 ret = sec_map_and_split_sg(skreq->dst, split_sizes, steps, 757 756 &splits_out, &splits_out_nents, ··· 786 785 split_sizes[i], 787 786 skreq->src != skreq->dst, 788 787 splits_in[i], splits_in_nents[i], 789 - splits_out[i], 790 - splits_out_nents[i], info); 788 + split ? splits_out[i] : NULL, 789 + split ? splits_out_nents[i] : 0, 790 + info); 791 791 if (IS_ERR(el)) { 792 792 ret = PTR_ERR(el); 793 793 goto err_free_elements; ··· 808 806 * more refined but this is unlikely to happen so no need. 809 807 */ 810 808 811 - /* Cleanup - all elements in pointer arrays have been coppied */ 812 - kfree(splits_in_nents); 813 - kfree(splits_in); 814 - kfree(splits_out_nents); 815 - kfree(splits_out); 816 - kfree(split_sizes); 817 - 818 809 /* Grab a big lock for a long time to avoid concurrency issues */ 819 810 mutex_lock(&queue->queuelock); 820 811 ··· 822 827 (!queue->havesoftqueue || 823 828 kfifo_avail(&queue->softqueue) > steps)) || 824 829 !list_empty(&ctx->backlog)) { 830 + ret = -EBUSY; 825 831 if ((skreq->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) { 826 832 list_add_tail(&sec_req->backlog_head, &ctx->backlog); 827 833 mutex_unlock(&queue->queuelock); 828 - return -EBUSY; 834 + goto out; 829 835 } 830 836 831 - ret = -EBUSY; 832 837 mutex_unlock(&queue->queuelock); 833 838 goto err_free_elements; 834 839 } ··· 837 842 if (ret) 838 843 goto err_free_elements; 839 844 840 - return -EINPROGRESS; 845 + ret = -EINPROGRESS; 846 + out: 847 + /* Cleanup - all elements in pointer arrays have been copied */ 848 + kfree(splits_in_nents); 849 + kfree(splits_in); 
850 + kfree(splits_out_nents); 851 + kfree(splits_out); 852 + kfree(split_sizes); 853 + return ret; 841 854 842 855 err_free_elements: 843 856 list_for_each_entry_safe(el, temp, &sec_req->elements, head) { ··· 857 854 crypto_skcipher_ivsize(atfm), 858 855 DMA_BIDIRECTIONAL); 859 856 err_unmap_out_sg: 860 - if (skreq->src != skreq->dst) 857 + if (split) 861 858 sec_unmap_sg_on_err(skreq->dst, steps, splits_out, 862 859 splits_out_nents, sec_req->len_out, 863 860 info->dev);
+1
drivers/dma-buf/udmabuf.c
··· 184 184 exp_info.ops = &udmabuf_ops; 185 185 exp_info.size = ubuf->pagecount << PAGE_SHIFT; 186 186 exp_info.priv = ubuf; 187 + exp_info.flags = O_RDWR; 187 188 188 189 buf = dma_buf_export(&exp_info); 189 190 if (IS_ERR(buf)) {
+4
drivers/firmware/efi/arm-init.c
··· 265 265 (params.mmap & ~PAGE_MASK))); 266 266 267 267 init_screen_info(); 268 + 269 + /* ARM does not permit early mappings to persist across paging_init() */ 270 + if (IS_ENABLED(CONFIG_ARM)) 271 + efi_memmap_unmap(); 268 272 } 269 273 270 274 static int __init register_gop_device(void)
+1 -1
drivers/firmware/efi/arm-runtime.c
··· 110 110 { 111 111 u64 mapsize; 112 112 113 - if (!efi_enabled(EFI_BOOT) || !efi_enabled(EFI_MEMMAP)) { 113 + if (!efi_enabled(EFI_BOOT)) { 114 114 pr_info("EFI services will not be available.\n"); 115 115 return 0; 116 116 }
+24 -13
drivers/firmware/efi/efi.c
··· 592 592 593 593 early_memunmap(tbl, sizeof(*tbl)); 594 594 } 595 + return 0; 596 + } 595 597 598 + int __init efi_apply_persistent_mem_reservations(void) 599 + { 596 600 if (efi.mem_reserve != EFI_INVALID_TABLE_ADDR) { 597 601 unsigned long prsv = efi.mem_reserve; 598 602 ··· 967 963 } 968 964 969 965 static DEFINE_SPINLOCK(efi_mem_reserve_persistent_lock); 966 + static struct linux_efi_memreserve *efi_memreserve_root __ro_after_init; 970 967 971 968 int efi_mem_reserve_persistent(phys_addr_t addr, u64 size) 972 969 { 973 - struct linux_efi_memreserve *rsv, *parent; 970 + struct linux_efi_memreserve *rsv; 974 971 975 - if (efi.mem_reserve == EFI_INVALID_TABLE_ADDR) 972 + if (!efi_memreserve_root) 976 973 return -ENODEV; 977 974 978 - rsv = kmalloc(sizeof(*rsv), GFP_KERNEL); 975 + rsv = kmalloc(sizeof(*rsv), GFP_ATOMIC); 979 976 if (!rsv) 980 977 return -ENOMEM; 981 - 982 - parent = memremap(efi.mem_reserve, sizeof(*rsv), MEMREMAP_WB); 983 - if (!parent) { 984 - kfree(rsv); 985 - return -ENOMEM; 986 - } 987 978 988 979 rsv->base = addr; 989 980 rsv->size = size; 990 981 991 982 spin_lock(&efi_mem_reserve_persistent_lock); 992 - rsv->next = parent->next; 993 - parent->next = __pa(rsv); 983 + rsv->next = efi_memreserve_root->next; 984 + efi_memreserve_root->next = __pa(rsv); 994 985 spin_unlock(&efi_mem_reserve_persistent_lock); 995 - 996 - memunmap(parent); 997 986 998 987 return 0; 999 988 } 989 + 990 + static int __init efi_memreserve_root_init(void) 991 + { 992 + if (efi.mem_reserve == EFI_INVALID_TABLE_ADDR) 993 + return -ENODEV; 994 + 995 + efi_memreserve_root = memremap(efi.mem_reserve, 996 + sizeof(*efi_memreserve_root), 997 + MEMREMAP_WB); 998 + if (!efi_memreserve_root) 999 + return -ENOMEM; 1000 + return 0; 1001 + } 1002 + early_initcall(efi_memreserve_root_init); 1000 1003 1001 1004 #ifdef CONFIG_KEXEC 1002 1005 static int update_efi_random_seed(struct notifier_block *nb,
+3
drivers/firmware/efi/libstub/arm-stub.c
··· 75 75 efi_guid_t memreserve_table_guid = LINUX_EFI_MEMRESERVE_TABLE_GUID; 76 76 efi_status_t status; 77 77 78 + if (IS_ENABLED(CONFIG_ARM)) 79 + return; 80 + 78 81 status = efi_call_early(allocate_pool, EFI_LOADER_DATA, sizeof(*rsv), 79 82 (void **)&rsv); 80 83 if (status != EFI_SUCCESS) {
+4
drivers/firmware/efi/libstub/fdt.c
··· 158 158 return efi_status; 159 159 } 160 160 } 161 + 162 + /* shrink the FDT back to its minimum size */ 163 + fdt_pack(fdt); 164 + 161 165 return EFI_SUCCESS; 162 166 163 167 fdt_set_fail:
+3
drivers/firmware/efi/memmap.c
··· 118 118 119 119 void __init efi_memmap_unmap(void) 120 120 { 121 + if (!efi_enabled(EFI_MEMMAP)) 122 + return; 123 + 121 124 if (!efi.memmap.late) { 122 125 unsigned long size; 123 126
+1 -1
drivers/firmware/efi/runtime-wrappers.c
··· 67 67 } \ 68 68 \ 69 69 init_completion(&efi_rts_work.efi_rts_comp); \ 70 - INIT_WORK_ONSTACK(&efi_rts_work.work, efi_call_rts); \ 70 + INIT_WORK(&efi_rts_work.work, efi_call_rts); \ 71 71 efi_rts_work.arg1 = _arg1; \ 72 72 efi_rts_work.arg2 = _arg2; \ 73 73 efi_rts_work.arg3 = _arg3; \
+2 -1
drivers/gnss/serial.c
··· 13 13 #include <linux/of.h> 14 14 #include <linux/pm.h> 15 15 #include <linux/pm_runtime.h> 16 + #include <linux/sched.h> 16 17 #include <linux/serdev.h> 17 18 #include <linux/slab.h> 18 19 ··· 64 63 int ret; 65 64 66 65 /* write is only buffered synchronously */ 67 - ret = serdev_device_write(serdev, buf, count, 0); 66 + ret = serdev_device_write(serdev, buf, count, MAX_SCHEDULE_TIMEOUT); 68 67 if (ret < 0) 69 68 return ret; 70 69
+2 -1
drivers/gnss/sirf.c
··· 16 16 #include <linux/pm.h> 17 17 #include <linux/pm_runtime.h> 18 18 #include <linux/regulator/consumer.h> 19 + #include <linux/sched.h> 19 20 #include <linux/serdev.h> 20 21 #include <linux/slab.h> 21 22 #include <linux/wait.h> ··· 84 83 int ret; 85 84 86 85 /* write is only buffered synchronously */ 87 - ret = serdev_device_write(serdev, buf, count, 0); 86 + ret = serdev_device_write(serdev, buf, count, MAX_SCHEDULE_TIMEOUT); 88 87 if (ret < 0) 89 88 return ret; 90 89
+3 -3
drivers/gpio/gpio-mockup.c
··· 35 35 #define gpio_mockup_err(...) pr_err(GPIO_MOCKUP_NAME ": " __VA_ARGS__) 36 36 37 37 enum { 38 - GPIO_MOCKUP_DIR_OUT = 0, 39 - GPIO_MOCKUP_DIR_IN = 1, 38 + GPIO_MOCKUP_DIR_IN = 0, 39 + GPIO_MOCKUP_DIR_OUT = 1, 40 40 }; 41 41 42 42 /* ··· 131 131 { 132 132 struct gpio_mockup_chip *chip = gpiochip_get_data(gc); 133 133 134 - return chip->lines[offset].dir; 134 + return !chip->lines[offset].dir; 135 135 } 136 136 137 137 static int gpio_mockup_to_irq(struct gpio_chip *gc, unsigned int offset)
+2 -2
drivers/gpio/gpio-pxa.c
··· 268 268 269 269 if (pxa_gpio_has_pinctrl()) { 270 270 ret = pinctrl_gpio_direction_input(chip->base + offset); 271 - if (!ret) 272 - return 0; 271 + if (ret) 272 + return ret; 273 273 } 274 274 275 275 spin_lock_irqsave(&gpio_lock, flags);
+3 -2
drivers/gpio/gpiolib.c
··· 1295 1295 gdev->descs = kcalloc(chip->ngpio, sizeof(gdev->descs[0]), GFP_KERNEL); 1296 1296 if (!gdev->descs) { 1297 1297 status = -ENOMEM; 1298 - goto err_free_gdev; 1298 + goto err_free_ida; 1299 1299 } 1300 1300 1301 1301 if (chip->ngpio == 0) { ··· 1427 1427 kfree_const(gdev->label); 1428 1428 err_free_descs: 1429 1429 kfree(gdev->descs); 1430 - err_free_gdev: 1430 + err_free_ida: 1431 1431 ida_simple_remove(&gpio_ida, gdev->id); 1432 + err_free_gdev: 1432 1433 /* failures here can mean systems won't boot... */ 1433 1434 pr_err("%s: GPIOs %d..%d (%s) failed to register, %d\n", __func__, 1434 1435 gdev->base, gdev->base + gdev->ngpio - 1,
+5 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
··· 501 501 { 502 502 struct amdgpu_device *adev = (struct amdgpu_device *)kgd; 503 503 504 - amdgpu_dpm_switch_power_profile(adev, 505 - PP_SMC_POWER_PROFILE_COMPUTE, !idle); 504 + if (adev->powerplay.pp_funcs && 505 + adev->powerplay.pp_funcs->switch_power_profile) 506 + amdgpu_dpm_switch_power_profile(adev, 507 + PP_SMC_POWER_PROFILE_COMPUTE, 508 + !idle); 506 509 } 507 510 508 511 bool amdgpu_amdkfd_is_kfd_vmid(struct amdgpu_device *adev, u32 vmid)
+7
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
··· 626 626 "dither", 627 627 amdgpu_dither_enum_list, sz); 628 628 629 + if (amdgpu_device_has_dc_support(adev)) { 630 + adev->mode_info.max_bpc_property = 631 + drm_property_create_range(adev->ddev, 0, "max bpc", 8, 16); 632 + if (!adev->mode_info.max_bpc_property) 633 + return -ENOMEM; 634 + } 635 + 629 636 return 0; 630 637 } 631 638
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
··· 339 339 struct drm_property *audio_property; 340 340 /* FMT dithering */ 341 341 struct drm_property *dither_property; 342 + /* maximum number of bits per channel for monitor color */ 343 + struct drm_property *max_bpc_property; 342 344 /* hardcoded DFP edid from BIOS */ 343 345 struct edid *bios_hardcoded_edid; 344 346 int bios_hardcoded_edid_size;
+10 -8
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 1632 1632 continue; 1633 1633 } 1634 1634 1635 - /* First check if the entry is already handled */ 1636 - if (cursor.pfn < frag_start) { 1637 - cursor.entry->huge = true; 1638 - amdgpu_vm_pt_next(adev, &cursor); 1639 - continue; 1640 - } 1641 - 1642 1635 /* If it isn't already handled it can't be a huge page */ 1643 1636 if (cursor.entry->huge) { 1644 1637 /* Add the entry to the relocated list to update it. */ ··· 1694 1701 } 1695 1702 } while (frag_start < entry_end); 1696 1703 1697 - if (frag >= shift) 1704 + if (amdgpu_vm_pt_descendant(adev, &cursor)) { 1705 + /* Mark all child entries as huge */ 1706 + while (cursor.pfn < frag_start) { 1707 + cursor.entry->huge = true; 1708 + amdgpu_vm_pt_next(adev, &cursor); 1709 + } 1710 + 1711 + } else if (frag >= shift) { 1712 + /* or just move on to the next on the same level. */ 1698 1713 amdgpu_vm_pt_next(adev, &cursor); 1714 + } 1699 1715 } 1700 1716 1701 1717 return 0;
+3 -3
drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
··· 72 72 73 73 /* Program the system aperture low logical page number. */ 74 74 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR, 75 - min(adev->gmc.vram_start, adev->gmc.agp_start) >> 18); 75 + min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18); 76 76 77 77 if (adev->asic_type == CHIP_RAVEN && adev->rev_id >= 0x8) 78 78 /* ··· 82 82 * to get rid of the VM fault and hardware hang. 83 83 */ 84 84 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR, 85 - max((adev->gmc.vram_end >> 18) + 0x1, 85 + max((adev->gmc.fb_end >> 18) + 0x1, 86 86 adev->gmc.agp_end >> 18)); 87 87 else 88 88 WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR, 89 - max(adev->gmc.vram_end, adev->gmc.agp_end) >> 18); 89 + max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18); 90 90 91 91 /* Set default page address. */ 92 92 value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start
+1
drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
··· 46 46 MODULE_FIRMWARE("amdgpu/pitcairn_mc.bin"); 47 47 MODULE_FIRMWARE("amdgpu/verde_mc.bin"); 48 48 MODULE_FIRMWARE("amdgpu/oland_mc.bin"); 49 + MODULE_FIRMWARE("amdgpu/hainan_mc.bin"); 49 50 MODULE_FIRMWARE("amdgpu/si58_mc.bin"); 50 51 51 52 #define MC_SEQ_MISC0__MT__MASK 0xf0000000
+3 -3
drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
··· 90 90 91 91 /* Program the system aperture low logical page number. */ 92 92 WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR, 93 - min(adev->gmc.vram_start, adev->gmc.agp_start) >> 18); 93 + min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18); 94 94 95 95 if (adev->asic_type == CHIP_RAVEN && adev->rev_id >= 0x8) 96 96 /* ··· 100 100 * to get rid of the VM fault and hardware hang. 101 101 */ 102 102 WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR, 103 - max((adev->gmc.vram_end >> 18) + 0x1, 103 + max((adev->gmc.fb_end >> 18) + 0x1, 104 104 adev->gmc.agp_end >> 18)); 105 105 else 106 106 WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR, 107 - max(adev->gmc.vram_end, adev->gmc.agp_end) >> 18); 107 + max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18); 108 108 109 109 /* Set default page address. */ 110 110 value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start +
+32 -7
drivers/gpu/drm/amd/amdgpu/soc15.c
··· 65 65 #define mmMP0_MISC_LIGHT_SLEEP_CTRL 0x01ba 66 66 #define mmMP0_MISC_LIGHT_SLEEP_CTRL_BASE_IDX 0 67 67 68 + /* for Vega20 register name change */ 69 + #define mmHDP_MEM_POWER_CTRL 0x00d4 70 + #define HDP_MEM_POWER_CTRL__IPH_MEM_POWER_CTRL_EN_MASK 0x00000001L 71 + #define HDP_MEM_POWER_CTRL__IPH_MEM_POWER_LS_EN_MASK 0x00000002L 72 + #define HDP_MEM_POWER_CTRL__RC_MEM_POWER_CTRL_EN_MASK 0x00010000L 73 + #define HDP_MEM_POWER_CTRL__RC_MEM_POWER_LS_EN_MASK 0x00020000L 74 + #define mmHDP_MEM_POWER_CTRL_BASE_IDX 0 68 75 /* 69 76 * Indirect registers accessor 70 77 */ ··· 877 870 { 878 871 uint32_t def, data; 879 872 880 - def = data = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS)); 873 + if (adev->asic_type == CHIP_VEGA20) { 874 + def = data = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_CTRL)); 881 875 882 - if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS)) 883 - data |= HDP_MEM_POWER_LS__LS_ENABLE_MASK; 884 - else 885 - data &= ~HDP_MEM_POWER_LS__LS_ENABLE_MASK; 876 + if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS)) 877 + data |= HDP_MEM_POWER_CTRL__IPH_MEM_POWER_CTRL_EN_MASK | 878 + HDP_MEM_POWER_CTRL__IPH_MEM_POWER_LS_EN_MASK | 879 + HDP_MEM_POWER_CTRL__RC_MEM_POWER_CTRL_EN_MASK | 880 + HDP_MEM_POWER_CTRL__RC_MEM_POWER_LS_EN_MASK; 881 + else 882 + data &= ~(HDP_MEM_POWER_CTRL__IPH_MEM_POWER_CTRL_EN_MASK | 883 + HDP_MEM_POWER_CTRL__IPH_MEM_POWER_LS_EN_MASK | 884 + HDP_MEM_POWER_CTRL__RC_MEM_POWER_CTRL_EN_MASK | 885 + HDP_MEM_POWER_CTRL__RC_MEM_POWER_LS_EN_MASK); 886 886 887 - if (def != data) 888 - WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS), data); 887 + if (def != data) 888 + WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_CTRL), data); 889 + } else { 890 + def = data = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS)); 891 + 892 + if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS)) 893 + data |= HDP_MEM_POWER_LS__LS_ENABLE_MASK; 894 + else 895 + data &= ~HDP_MEM_POWER_LS__LS_ENABLE_MASK; 896 + 897 + if (def != data) 898 + WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS), data); 899 + } 889 900 } 890 901 891 902 static void soc15_update_drm_clock_gating(struct amdgpu_device *adev, bool enable)
+1 -1
drivers/gpu/drm/amd/amdgpu/vega10_ih.c
··· 129 129 else 130 130 wptr_off = adev->wb.gpu_addr + (adev->irq.ih.wptr_offs * 4); 131 131 WREG32_SOC15(OSSSYS, 0, mmIH_RB_WPTR_ADDR_LO, lower_32_bits(wptr_off)); 132 - WREG32_SOC15(OSSSYS, 0, mmIH_RB_WPTR_ADDR_HI, upper_32_bits(wptr_off) & 0xFF); 132 + WREG32_SOC15(OSSSYS, 0, mmIH_RB_WPTR_ADDR_HI, upper_32_bits(wptr_off) & 0xFFFF); 133 133 134 134 /* set rptr, wptr to 0 */ 135 135 WREG32_SOC15(OSSSYS, 0, mmIH_RB_RPTR, 0);
+16
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 2358 2358 static enum dc_color_depth 2359 2359 convert_color_depth_from_display_info(const struct drm_connector *connector) 2360 2360 { 2361 + struct dm_connector_state *dm_conn_state = 2362 + to_dm_connector_state(connector->state); 2361 2363 uint32_t bpc = connector->display_info.bpc; 2364 + 2365 + /* TODO: Remove this when there's support for max_bpc in drm */ 2366 + if (dm_conn_state && bpc > dm_conn_state->max_bpc) 2367 + /* Round down to nearest even number. */ 2368 + bpc = dm_conn_state->max_bpc - (dm_conn_state->max_bpc & 1); 2362 2369 2363 2370 switch (bpc) { 2364 2371 case 0: ··· 2950 2943 } else if (property == adev->mode_info.underscan_property) { 2951 2944 dm_new_state->underscan_enable = val; 2952 2945 ret = 0; 2946 + } else if (property == adev->mode_info.max_bpc_property) { 2947 + dm_new_state->max_bpc = val; 2948 + ret = 0; 2953 2949 } 2954 2950 2955 2951 return ret; ··· 2994 2984 ret = 0; 2995 2985 } else if (property == adev->mode_info.underscan_property) { 2996 2986 *val = dm_state->underscan_enable; 2987 + ret = 0; 2988 + } else if (property == adev->mode_info.max_bpc_property) { 2989 + *val = dm_state->max_bpc; 2997 2990 ret = 0; 2998 2991 } 2999 2992 return ret; ··· 3807 3794 0); 3808 3795 drm_object_attach_property(&aconnector->base.base, 3809 3796 adev->mode_info.underscan_vborder_property, 3797 + 0); 3798 + drm_object_attach_property(&aconnector->base.base, 3799 + adev->mode_info.max_bpc_property, 3810 3800 0); 3811 3801 3812 3802 }
+1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
··· 204 204 enum amdgpu_rmx_type scaling; 205 205 uint8_t underscan_vborder; 206 206 uint8_t underscan_hborder; 207 + uint8_t max_bpc; 207 208 bool underscan_enable; 208 209 bool freesync_enable; 209 210 bool freesync_capable;
+10 -10
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
··· 4525 4525 struct smu7_single_dpm_table *sclk_table = &(data->dpm_table.sclk_table); 4526 4526 struct smu7_single_dpm_table *golden_sclk_table = 4527 4527 &(data->golden_dpm_table.sclk_table); 4528 - int value; 4528 + int value = sclk_table->dpm_levels[sclk_table->count - 1].value; 4529 + int golden_value = golden_sclk_table->dpm_levels 4530 + [golden_sclk_table->count - 1].value; 4529 4531 4530 - value = (sclk_table->dpm_levels[sclk_table->count - 1].value - 4531 - golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value) * 4532 - 100 / 4533 - golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value; 4532 + value -= golden_value; 4533 + value = DIV_ROUND_UP(value * 100, golden_value); 4534 4534 4535 4535 return value; 4536 4536 } ··· 4567 4567 struct smu7_single_dpm_table *mclk_table = &(data->dpm_table.mclk_table); 4568 4568 struct smu7_single_dpm_table *golden_mclk_table = 4569 4569 &(data->golden_dpm_table.mclk_table); 4570 - int value; 4570 + int value = mclk_table->dpm_levels[mclk_table->count - 1].value; 4571 + int golden_value = golden_mclk_table->dpm_levels 4572 + [golden_mclk_table->count - 1].value; 4571 4573 4572 - value = (mclk_table->dpm_levels[mclk_table->count - 1].value - 4573 - golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value) * 4574 - 100 / 4575 - golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value; 4574 + value -= golden_value; 4575 + value = DIV_ROUND_UP(value * 100, golden_value); 4576 4576 4577 4577 return value; 4578 4578 }
+16 -16
drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c
··· 713 713 for (i = 0; i < wm_with_clock_ranges->num_wm_dmif_sets; i++) { 714 714 table->WatermarkRow[1][i].MinClock = 715 715 cpu_to_le16((uint16_t) 716 - (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_dcfclk_clk_in_khz) / 717 - 1000); 716 + (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_dcfclk_clk_in_khz / 717 + 1000)); 718 718 table->WatermarkRow[1][i].MaxClock = 719 719 cpu_to_le16((uint16_t) 720 - (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_dcfclk_clk_in_khz) / 721 - 1000); 720 + (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_dcfclk_clk_in_khz / 721 + 1000)); 722 722 table->WatermarkRow[1][i].MinUclk = 723 723 cpu_to_le16((uint16_t) 724 - (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_mem_clk_in_khz) / 725 - 1000); 724 + (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_min_mem_clk_in_khz / 725 + 1000)); 726 726 table->WatermarkRow[1][i].MaxUclk = 727 727 cpu_to_le16((uint16_t) 728 - (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_mem_clk_in_khz) / 729 - 1000); 728 + (wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_max_mem_clk_in_khz / 729 + 1000)); 730 730 table->WatermarkRow[1][i].WmSetting = (uint8_t) 731 731 wm_with_clock_ranges->wm_dmif_clocks_ranges[i].wm_set_id; 732 732 } ··· 734 734 for (i = 0; i < wm_with_clock_ranges->num_wm_mcif_sets; i++) { 735 735 table->WatermarkRow[0][i].MinClock = 736 736 cpu_to_le16((uint16_t) 737 - (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_socclk_clk_in_khz) / 738 - 1000); 737 + (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_socclk_clk_in_khz / 738 + 1000)); 739 739 table->WatermarkRow[0][i].MaxClock = 740 740 cpu_to_le16((uint16_t) 741 - (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_socclk_clk_in_khz) / 742 - 1000); 741 + (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_socclk_clk_in_khz / 742 + 1000)); 743 743 table->WatermarkRow[0][i].MinUclk = 744 744 cpu_to_le16((uint16_t) 745 - (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_mem_clk_in_khz) / 746 - 1000); 745 + (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_min_mem_clk_in_khz / 746 + 1000)); 747 747 table->WatermarkRow[0][i].MaxUclk = 748 748 cpu_to_le16((uint16_t) 749 - (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_mem_clk_in_khz) / 750 - 1000); 749 + (wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_max_mem_clk_in_khz / 750 + 1000)); 751 751 table->WatermarkRow[0][i].WmSetting = (uint8_t) 752 752 wm_with_clock_ranges->wm_mcif_clocks_ranges[i].wm_set_id; 753 753 }
+10 -15
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
··· 4522 4522 struct vega10_single_dpm_table *sclk_table = &(data->dpm_table.gfx_table); 4523 4523 struct vega10_single_dpm_table *golden_sclk_table = 4524 4524 &(data->golden_dpm_table.gfx_table); 4525 - int value; 4526 - 4527 - value = (sclk_table->dpm_levels[sclk_table->count - 1].value - 4528 - golden_sclk_table->dpm_levels 4529 - [golden_sclk_table->count - 1].value) * 4530 - 100 / 4531 - golden_sclk_table->dpm_levels 4525 + int value = sclk_table->dpm_levels[sclk_table->count - 1].value; 4526 + int golden_value = golden_sclk_table->dpm_levels 4532 4527 [golden_sclk_table->count - 1].value; 4528 + 4529 + value -= golden_value; 4530 + value = DIV_ROUND_UP(value * 100, golden_value); 4533 4531 4534 4532 return value; 4535 4533 } ··· 4573 4575 struct vega10_single_dpm_table *mclk_table = &(data->dpm_table.mem_table); 4574 4576 struct vega10_single_dpm_table *golden_mclk_table = 4575 4577 &(data->golden_dpm_table.mem_table); 4576 - int value; 4577 - 4578 - value = (mclk_table->dpm_levels 4579 - [mclk_table->count - 1].value - 4580 - golden_mclk_table->dpm_levels 4581 - [golden_mclk_table->count - 1].value) * 4582 - 100 / 4583 - golden_mclk_table->dpm_levels 4578 + int value = mclk_table->dpm_levels[mclk_table->count - 1].value; 4579 + int golden_value = golden_mclk_table->dpm_levels 4584 4580 [golden_mclk_table->count - 1].value; 4581 + 4582 + value -= golden_value; 4583 + value = DIV_ROUND_UP(value * 100, golden_value); 4585 4584 4586 4585 return value; 4587 4586 }
+10 -13
drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
··· 2243 2243 struct vega12_single_dpm_table *sclk_table = &(data->dpm_table.gfx_table); 2244 2244 struct vega12_single_dpm_table *golden_sclk_table = 2245 2245 &(data->golden_dpm_table.gfx_table); 2246 - int value; 2246 + int value = sclk_table->dpm_levels[sclk_table->count - 1].value; 2247 + int golden_value = golden_sclk_table->dpm_levels 2248 + [golden_sclk_table->count - 1].value; 2247 2249 2248 - value = (sclk_table->dpm_levels[sclk_table->count - 1].value - 2249 - golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value) * 2250 - 100 / 2251 - golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value; 2250 + value -= golden_value; 2251 + value = DIV_ROUND_UP(value * 100, golden_value); 2252 2252 2253 2253 return value; 2254 2254 } ··· 2264 2264 struct vega12_single_dpm_table *mclk_table = &(data->dpm_table.mem_table); 2265 2265 struct vega12_single_dpm_table *golden_mclk_table = 2266 2266 &(data->golden_dpm_table.mem_table); 2267 - int value; 2268 - 2269 - value = (mclk_table->dpm_levels 2270 - [mclk_table->count - 1].value - 2271 - golden_mclk_table->dpm_levels 2272 - [golden_mclk_table->count - 1].value) * 2273 - 100 / 2274 - golden_mclk_table->dpm_levels 2267 + int value = mclk_table->dpm_levels[mclk_table->count - 1].value; 2268 + int golden_value = golden_mclk_table->dpm_levels 2275 2269 [golden_mclk_table->count - 1].value; 2270 + 2271 + value -= golden_value; 2272 + value = DIV_ROUND_UP(value * 100, golden_value); 2276 2273 2277 2274 return value; 2278 2275 }
+21 -9
drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
··· 75 75 data->phy_clk_quad_eqn_b = PPREGKEY_VEGA20QUADRATICEQUATION_DFLT; 76 76 data->phy_clk_quad_eqn_c = PPREGKEY_VEGA20QUADRATICEQUATION_DFLT; 77 77 78 - data->registry_data.disallowed_features = 0x0; 78 + /* 79 + * Disable the following features for now: 80 + * GFXCLK DS 81 + * SOCLK DS 82 + * LCLK DS 83 + * DCEFCLK DS 84 + * FCLK DS 85 + * MP1CLK DS 86 + * MP0CLK DS 87 + */ 88 + data->registry_data.disallowed_features = 0xE0041C00; 79 89 data->registry_data.od_state_in_dc_support = 0; 80 90 data->registry_data.thermal_support = 1; 81 91 data->registry_data.skip_baco_hardware = 0; ··· 1323 1313 &(data->dpm_table.gfx_table); 1324 1314 struct vega20_single_dpm_table *golden_sclk_table = 1325 1315 &(data->golden_dpm_table.gfx_table); 1326 - int value; 1316 + int value = sclk_table->dpm_levels[sclk_table->count - 1].value; 1317 + int golden_value = golden_sclk_table->dpm_levels 1318 + [golden_sclk_table->count - 1].value; 1327 1319 1328 1320 /* od percentage */ 1329 - value = DIV_ROUND_UP((sclk_table->dpm_levels[sclk_table->count - 1].value - 1330 - golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value) * 100, 1331 - golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value); 1321 + value -= golden_value; 1322 + value = DIV_ROUND_UP(value * 100, golden_value); 1332 1323 1333 1324 return value; 1334 1325 } ··· 1369 1358 &(data->dpm_table.mem_table); 1370 1359 struct vega20_single_dpm_table *golden_mclk_table = 1371 1360 &(data->golden_dpm_table.mem_table); 1372 - int value; 1361 + int value = mclk_table->dpm_levels[mclk_table->count - 1].value; 1362 + int golden_value = golden_mclk_table->dpm_levels 1363 + [golden_mclk_table->count - 1].value; 1373 1364 1374 1365 /* od percentage */ 1375 - value = DIV_ROUND_UP((mclk_table->dpm_levels[mclk_table->count - 1].value - 1376 - golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value) * 100, 1377 - golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value); 1366 + value -= golden_value; 1367 + value = DIV_ROUND_UP(value * 100, golden_value); 1378 1368 1379 1369 return value; 1380 1370 }
+21
drivers/gpu/drm/ast/ast_drv.c
··· 60 60 61 61 MODULE_DEVICE_TABLE(pci, pciidlist); 62 62 63 + static void ast_kick_out_firmware_fb(struct pci_dev *pdev) 64 + { 65 + struct apertures_struct *ap; 66 + bool primary = false; 67 + 68 + ap = alloc_apertures(1); 69 + if (!ap) 70 + return; 71 + 72 + ap->ranges[0].base = pci_resource_start(pdev, 0); 73 + ap->ranges[0].size = pci_resource_len(pdev, 0); 74 + 75 + #ifdef CONFIG_X86 76 + primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW; 77 + #endif 78 + drm_fb_helper_remove_conflicting_framebuffers(ap, "astdrmfb", primary); 79 + kfree(ap); 80 + } 81 + 63 82 static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent) 64 83 { 84 + ast_kick_out_firmware_fb(pdev); 85 + 65 86 return drm_get_pci_dev(pdev, ent, &driver); 66 87 } 67 88
+2 -1
drivers/gpu/drm/ast/ast_mode.c
··· 568 568 } 569 569 ast_bo_unreserve(bo); 570 570 571 + ast_set_offset_reg(crtc); 571 572 ast_set_start_address_crt1(crtc, (u32)gpu_addr); 572 573 573 574 return 0; ··· 1255 1254 ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc7, ((y >> 8) & 0x07)); 1256 1255 1257 1256 /* dummy write to fire HWC */ 1258 - ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xCB, 0xFF, 0x00); 1257 + ast_show_cursor(crtc); 1259 1258 1260 1259 return 0; 1261 1260 }
+3
drivers/gpu/drm/drm_dp_mst_topology.c
··· 1275 1275 mutex_lock(&mgr->lock); 1276 1276 mstb = mgr->mst_primary; 1277 1277 1278 + if (!mstb) 1279 + goto out; 1280 + 1278 1281 for (i = 0; i < lct - 1; i++) { 1279 1282 int shift = (i % 2) ? 0 : 4; 1280 1283 int port_num = (rad[i / 2] >> shift) & 0xf;
+3
drivers/gpu/drm/drm_fb_helper.c
··· 219 219 mutex_lock(&fb_helper->lock); 220 220 drm_connector_list_iter_begin(dev, &conn_iter); 221 221 drm_for_each_connector_iter(connector, &conn_iter) { 222 + if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK) 223 + continue; 224 + 222 225 ret = __drm_fb_helper_add_one_connector(fb_helper, connector); 223 226 if (ret) 224 227 goto fail;
+1 -1
drivers/gpu/drm/drm_fourcc.c
··· 97 97 98 98 /** 99 99 * drm_driver_legacy_fb_format - compute drm fourcc code from legacy description 100 + * @dev: DRM device 100 101 * @bpp: bits per pixels 101 102 * @depth: bit depth per pixel 102 - * @native: use host native byte order 103 103 * 104 104 * Computes a drm fourcc pixel format code for the given @bpp/@depth values. 105 105 * Unlike drm_mode_legacy_fb_format() this looks at the drivers mode_config,
+6 -1
drivers/gpu/drm/i915/i915_gem_execbuffer.c
··· 1268 1268 else if (gen >= 4) 1269 1269 len = 4; 1270 1270 else 1271 - len = 3; 1271 + len = 6; 1272 1272 1273 1273 batch = reloc_gpu(eb, vma, len); 1274 1274 if (IS_ERR(batch)) ··· 1306 1306 *batch++ = addr; 1307 1307 *batch++ = target_offset; 1308 1308 } else { 1309 + *batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL; 1310 + *batch++ = addr; 1311 + *batch++ = target_offset; 1312 + 1313 + /* And again for good measure (blb/pnv) */ 1309 1314 *batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL; 1310 1315 *batch++ = addr; 1311 1316 *batch++ = target_offset;
+5
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 3413 3413 ggtt->vm.insert_page = bxt_vtd_ggtt_insert_page__BKL; 3414 3414 if (ggtt->vm.clear_range != nop_clear_range) 3415 3415 ggtt->vm.clear_range = bxt_vtd_ggtt_clear_range__BKL; 3416 + 3417 + /* Prevent recursively calling stop_machine() and deadlocks. */ 3418 + dev_info(dev_priv->drm.dev, 3419 + "Disabling error capture for VT-d workaround\n"); 3420 + i915_disable_error_state(dev_priv, -ENODEV); 3416 3421 } 3417 3422 3418 3423 ggtt->invalidate = gen6_ggtt_invalidate;
+14 -1
drivers/gpu/drm/i915/i915_gpu_error.c
··· 648 648 return 0; 649 649 } 650 650 651 + if (IS_ERR(error)) 652 + return PTR_ERR(error); 653 + 651 654 if (*error->error_msg) 652 655 err_printf(m, "%s\n", error->error_msg); 653 656 err_printf(m, "Kernel: " UTS_RELEASE "\n"); ··· 1862 1859 error = i915_capture_gpu_state(i915); 1863 1860 if (!error) { 1864 1861 DRM_DEBUG_DRIVER("out of memory, not capturing error state\n"); 1862 + i915_disable_error_state(i915, -ENOMEM); 1865 1863 return; 1866 1864 } 1867 1865 ··· 1918 1914 i915->gpu_error.first_error = NULL; 1919 1915 spin_unlock_irq(&i915->gpu_error.lock); 1920 1916 1921 - i915_gpu_state_put(error); 1917 + if (!IS_ERR(error)) 1918 + i915_gpu_state_put(error); 1919 + } 1920 + 1921 + void i915_disable_error_state(struct drm_i915_private *i915, int err) 1922 + { 1923 + spin_lock_irq(&i915->gpu_error.lock); 1924 + if (!i915->gpu_error.first_error) 1925 + i915->gpu_error.first_error = ERR_PTR(err); 1926 + spin_unlock_irq(&i915->gpu_error.lock); 1922 1927 }
+7 -1
drivers/gpu/drm/i915/i915_gpu_error.h
··· 343 343 344 344 struct i915_gpu_state *i915_first_error_state(struct drm_i915_private *i915); 345 345 void i915_reset_error_state(struct drm_i915_private *i915); 346 + void i915_disable_error_state(struct drm_i915_private *i915, int err); 346 347 347 348 #else 348 349 ··· 356 355 static inline struct i915_gpu_state * 357 356 i915_first_error_state(struct drm_i915_private *i915) 358 357 { 359 - return NULL; 358 + return ERR_PTR(-ENODEV); 360 359 } 361 360 362 361 static inline void i915_reset_error_state(struct drm_i915_private *i915) 362 + { 363 + } 364 + 365 + static inline void i915_disable_error_state(struct drm_i915_private *i915, 366 + int err) 363 367 { 364 368 } 365 369
+1 -1
drivers/gpu/drm/i915/intel_device_info.c
··· 474 474 u8 eu_disabled_mask; 475 475 u32 n_disabled; 476 476 477 - if (!(sseu->subslice_mask[ss] & BIT(ss))) 477 + if (!(sseu->subslice_mask[s] & BIT(ss))) 478 478 /* skip disabled subslice */ 479 479 continue; 480 480
+81 -3
drivers/gpu/drm/i915/intel_display.c
··· 2890 2890 return; 2891 2891 2892 2892 valid_fb: 2893 + intel_state->base.rotation = plane_config->rotation; 2893 2894 intel_fill_fb_ggtt_view(&intel_state->view, fb, 2894 2895 intel_state->base.rotation); 2895 2896 intel_state->color_plane[0].stride = ··· 4851 4850 * chroma samples for both of the luma samples, and thus we don't 4852 4851 * actually get the expected MPEG2 chroma siting convention :( 4853 4852 * The same behaviour is observed on pre-SKL platforms as well. 4853 + * 4854 + * Theory behind the formula (note that we ignore sub-pixel 4855 + * source coordinates): 4856 + * s = source sample position 4857 + * d = destination sample position 4858 + * 4859 + * Downscaling 4:1: 4860 + * -0.5 4861 + * | 0.0 4862 + * | | 1.5 (initial phase) 4863 + * | | | 4864 + * v v v 4865 + * | s | s | s | s | 4866 + * | d | 4867 + * 4868 + * Upscaling 1:4: 4869 + * -0.5 4870 + * | -0.375 (initial phase) 4871 + * | | 0.0 4872 + * | | | 4873 + * v v v 4874 + * | s | 4875 + * | d | d | d | d | 4854 4876 */ 4855 - u16 skl_scaler_calc_phase(int sub, bool chroma_cosited) 4877 + u16 skl_scaler_calc_phase(int sub, int scale, bool chroma_cosited) 4856 4878 { 4857 4879 int phase = -0x8000; 4858 4880 u16 trip = 0; 4859 4881 4860 4882 if (chroma_cosited) 4861 4883 phase += (sub - 1) * 0x8000 / sub; 4884 + 4885 + phase += scale / (2 * sub); 4886 + 4887 + /* 4888 + * Hardware initial phase limited to [-0.5:1.5]. 4889 + * Since the max hardware scale factor is 3.0, we 4890 + * should never actually exceed 1.0 here. 4891 + */ 4892 + WARN_ON(phase < -0x8000 || phase > 0x18000); 4862 4893 4863 4894 if (phase < 0) 4864 4895 phase = 0x10000 + phase; ··· 5100 5067 5101 5068 if (crtc->config->pch_pfit.enabled) { 5102 5069 u16 uv_rgb_hphase, uv_rgb_vphase; 5070 + int pfit_w, pfit_h, hscale, vscale; 5103 5071 int id; 5104 5072 5105 5073 if (WARN_ON(crtc->config->scaler_state.scaler_id < 0)) 5106 5074 return; 5107 5075 5108 - uv_rgb_hphase = skl_scaler_calc_phase(1, false); 5109 - uv_rgb_vphase = skl_scaler_calc_phase(1, false); 5076 + pfit_w = (crtc->config->pch_pfit.size >> 16) & 0xFFFF; 5077 + pfit_h = crtc->config->pch_pfit.size & 0xFFFF; 5078 + 5079 + hscale = (crtc->config->pipe_src_w << 16) / pfit_w; 5080 + vscale = (crtc->config->pipe_src_h << 16) / pfit_h; 5081 + 5082 + uv_rgb_hphase = skl_scaler_calc_phase(1, hscale, false); 5083 + uv_rgb_vphase = skl_scaler_calc_phase(1, vscale, false); 5110 5084 5111 5085 id = scaler_state->scaler_id; 5112 5086 I915_WRITE(SKL_PS_CTRL(pipe, id), PS_SCALER_EN | ··· 7883 7843 plane_config->tiling = I915_TILING_X; 7884 7844 fb->modifier = I915_FORMAT_MOD_X_TILED; 7885 7845 } 7846 + 7847 + if (val & DISPPLANE_ROTATE_180) 7848 + plane_config->rotation = DRM_MODE_ROTATE_180; 7886 7849 } 7850 + 7851 + if (IS_CHERRYVIEW(dev_priv) && pipe == PIPE_B && 7852 + val & DISPPLANE_MIRROR) 7853 + plane_config->rotation |= DRM_MODE_REFLECT_X; 7887 7854 7888 7855 pixel_format = val & DISPPLANE_PIXFORMAT_MASK; 7889 7856 fourcc = i9xx_format_to_fourcc(pixel_format); ··· 8959 8912 MISSING_CASE(tiling); 8960 8913 goto error; 8961 8914 } 8915 + 8916 + /* 8917 + * DRM_MODE_ROTATE_ is counter clockwise to stay compatible with Xrandr 8918 + * while i915 HW rotation is clockwise, that's why this swapping. 8919 + */ 8920 + switch (val & PLANE_CTL_ROTATE_MASK) { 8921 + case PLANE_CTL_ROTATE_0: 8922 + plane_config->rotation = DRM_MODE_ROTATE_0; 8923 + break; 8924 + case PLANE_CTL_ROTATE_90: 8925 + plane_config->rotation = DRM_MODE_ROTATE_270; 8926 + break; 8927 + case PLANE_CTL_ROTATE_180: 8928 + plane_config->rotation = DRM_MODE_ROTATE_180; 8929 + break; 8930 + case PLANE_CTL_ROTATE_270: 8931 + plane_config->rotation = DRM_MODE_ROTATE_90; 8932 + break; 8933 + } 8934 + 8935 + if (INTEL_GEN(dev_priv) >= 10 && 8936 + val & PLANE_CTL_FLIP_HORIZONTAL) 8937 + plane_config->rotation |= DRM_MODE_REFLECT_X; 8962 8938 8963 8939 base = I915_READ(PLANE_SURF(pipe, plane_id)) & 0xfffff000; 8964 8940 plane_config->base = base; ··· 15298 15228 ret = drm_atomic_add_affected_planes(state, crtc); 15299 15229 if (ret) 15300 15230 goto out; 15231 + 15232 + /* 15233 + * FIXME hack to force a LUT update to avoid the 15234 + * plane update forcing the pipe gamma on without 15235 + * having a proper LUT loaded. Remove once we 15236 + * have readout for pipe gamma enable. 15237 + */ 15238 + crtc_state->color_mgmt_changed = true; 15301 15239 } 15302 15240 } 15303 15241
+4 -4
drivers/gpu/drm/i915/intel_dp_mst.c
··· 452 452 if (!intel_connector) 453 453 return NULL; 454 454 455 + intel_connector->get_hw_state = intel_dp_mst_get_hw_state; 456 + intel_connector->mst_port = intel_dp; 457 + intel_connector->port = port; 458 + 455 459 connector = &intel_connector->base; 456 460 ret = drm_connector_init(dev, connector, &intel_dp_mst_connector_funcs, 457 461 DRM_MODE_CONNECTOR_DisplayPort); ··· 465 461 } 466 462 467 463 drm_connector_helper_add(connector, &intel_dp_mst_connector_helper_funcs); 468 - 469 - intel_connector->get_hw_state = intel_dp_mst_get_hw_state; 470 - intel_connector->mst_port = intel_dp; 471 - intel_connector->port = port; 472 464 473 465 for_each_pipe(dev_priv, pipe) { 474 466 struct drm_encoder *enc =
+2 -1
drivers/gpu/drm/i915/intel_drv.h
··· 547 547 unsigned int tiling; 548 548 int size; 549 549 u32 base; 550 + u8 rotation; 550 551 }; 551 552 552 553 #define SKL_MIN_SRC_W 8 ··· 1647 1646 void intel_crtc_arm_fifo_underrun(struct intel_crtc *crtc, 1648 1647 struct intel_crtc_state *crtc_state); 1649 1648 1650 - u16 skl_scaler_calc_phase(int sub, bool chroma_center); 1649 + u16 skl_scaler_calc_phase(int sub, int scale, bool chroma_center); 1651 1650 int skl_update_scaler_crtc(struct intel_crtc_state *crtc_state); 1652 1651 int skl_max_scale(const struct intel_crtc_state *crtc_state, 1653 1652 u32 pixel_format);
+52 -22
drivers/gpu/drm/i915/intel_hotplug.c
··· 228 228 drm_for_each_connector_iter(connector, &conn_iter) { 229 229 struct intel_connector *intel_connector = to_intel_connector(connector); 230 230 231 - if (intel_connector->encoder->hpd_pin == pin) { 231 + /* Don't check MST ports, they don't have pins */ 232 + if (!intel_connector->mst_port && 233 + intel_connector->encoder->hpd_pin == pin) { 232 234 if (connector->polled != intel_connector->polled) 233 235 DRM_DEBUG_DRIVER("Reenabling HPD on connector %s\n", 234 236 connector->name); ··· 397 395 struct intel_encoder *encoder; 398 396 bool storm_detected = false; 399 397 bool queue_dig = false, queue_hp = false; 398 + u32 long_hpd_pulse_mask = 0; 399 + u32 short_hpd_pulse_mask = 0; 400 + enum hpd_pin pin; 400 401 401 402 if (!pin_mask) 402 403 return; 403 404 404 405 spin_lock(&dev_priv->irq_lock); 405 - for_each_intel_encoder(&dev_priv->drm, encoder) { 406 - enum hpd_pin pin = encoder->hpd_pin; 407 - bool has_hpd_pulse = intel_encoder_has_hpd_pulse(encoder); 408 406 407 + /* 408 + * Determine whether ->hpd_pulse() exists for each pin, and 409 + * whether we have a short or a long pulse. This is needed 410 + * as each pin may have up to two encoders (HDMI and DP) and 411 + * only the one of them (DP) will have ->hpd_pulse(). 412 + */ 413 + for_each_intel_encoder(&dev_priv->drm, encoder) { 414 + bool has_hpd_pulse = intel_encoder_has_hpd_pulse(encoder); 415 + enum port port = encoder->port; 416 + bool long_hpd; 417 + 418 + pin = encoder->hpd_pin; 409 419 if (!(BIT(pin) & pin_mask)) 410 420 continue; 411 421 412 - if (has_hpd_pulse) { 413 - bool long_hpd = long_mask & BIT(pin); 414 - enum port port = encoder->port; 422 + if (!has_hpd_pulse) 423 + continue; 415 424 416 - DRM_DEBUG_DRIVER("digital hpd port %c - %s\n", port_name(port), 417 - long_hpd ? "long" : "short"); 418 - /* 419 - * For long HPD pulses we want to have the digital queue happen, 420 - * but we still want HPD storm detection to function. 421 - */ 422 - queue_dig = true; 423 - if (long_hpd) { 424 - dev_priv->hotplug.long_port_mask |= (1 << port); 425 - } else { 426 - /* for short HPD just trigger the digital queue */ 427 - dev_priv->hotplug.short_port_mask |= (1 << port); 428 - continue; 429 - } 425 + long_hpd = long_mask & BIT(pin); 426 + 427 + DRM_DEBUG_DRIVER("digital hpd port %c - %s\n", port_name(port), 428 + long_hpd ? "long" : "short"); 429 + queue_dig = true; 430 + 431 + if (long_hpd) { 432 + long_hpd_pulse_mask |= BIT(pin); 433 + dev_priv->hotplug.long_port_mask |= BIT(port); 434 + } else { 435 + short_hpd_pulse_mask |= BIT(pin); 436 + dev_priv->hotplug.short_port_mask |= BIT(port); 430 437 } 438 + } 439 + 440 + /* Now process each pin just once */ 441 + for_each_hpd_pin(pin) { 442 + bool long_hpd; 443 + 444 + if (!(BIT(pin) & pin_mask)) 445 + continue; 431 446 432 447 if (dev_priv->hotplug.stats[pin].state == HPD_DISABLED) { 433 448 /* ··· 461 442 if (dev_priv->hotplug.stats[pin].state != HPD_ENABLED) 462 443 continue; 463 444 464 - if (!has_hpd_pulse) { 445 + /* 446 + * Delegate to ->hpd_pulse() if one of the encoders for this 447 + * pin has it, otherwise let the hotplug_work deal with this 448 + * pin directly. 449 + */ 450 + if (((short_hpd_pulse_mask | long_hpd_pulse_mask) & BIT(pin))) { 451 + long_hpd = long_hpd_pulse_mask & BIT(pin); 452 + } else { 465 453 dev_priv->hotplug.event_bits |= BIT(pin); 454 + long_hpd = true; 466 455 queue_hp = true; 467 456 } 457 + 458 + if (!long_hpd) 459 + continue; 468 460 469 461 if (intel_hpd_irq_storm_detect(dev_priv, pin)) { 470 462 dev_priv->hotplug.event_bits &= ~BIT(pin);
+13 -1
drivers/gpu/drm/i915/intel_lrc.c
··· 424 424 425 425 reg_state[CTX_RING_TAIL+1] = intel_ring_set_tail(rq->ring, rq->tail); 426 426 427 - /* True 32b PPGTT with dynamic page allocation: update PDP 427 + /* 428 + * True 32b PPGTT with dynamic page allocation: update PDP 428 429 * registers and point the unallocated PDPs to scratch page. 429 430 * PML4 is allocated during ppgtt init, so this is not needed 430 431 * in 48-bit mode. ··· 433 432 if (ppgtt && !i915_vm_is_48bit(&ppgtt->vm)) 434 433 execlists_update_context_pdps(ppgtt, reg_state); 435 434 435 + /* 436 + * Make sure the context image is complete before we submit it to HW. 437 + * 438 + * Ostensibly, writes (including the WCB) should be flushed prior to 439 + * an uncached write such as our mmio register access, the empirical 440 + * evidence (esp. on Braswell) suggests that the WC write into memory 441 + * may not be visible to the HW prior to the completion of the UC 442 + * register write and that we may begin execution from the context 443 + * before its image is complete leading to invalid PD chasing. 444 + */ 445 + wmb(); 436 446 return ce->lrc_desc; 437 447 } 438 448
+40 -1
drivers/gpu/drm/i915/intel_pm.c
··· 2493 2493 uint32_t method1, method2; 2494 2494 int cpp; 2495 2495 2496 + if (mem_value == 0) 2497 + return U32_MAX; 2498 + 2496 2499 if (!intel_wm_plane_visible(cstate, pstate)) 2497 2500 return 0; 2498 2501 ··· 2525 2522 uint32_t method1, method2; 2526 2523 int cpp; 2527 2524 2525 + if (mem_value == 0) 2526 + return U32_MAX; 2527 + 2528 2528 if (!intel_wm_plane_visible(cstate, pstate)) 2529 2529 return 0; 2530 2530 ··· 2550 2544 uint32_t mem_value) 2551 2545 { 2552 2546 int cpp; 2547 + 2548 + if (mem_value == 0) 2549 + return U32_MAX; 2553 2550 2554 2551 if (!intel_wm_plane_visible(cstate, pstate)) 2555 2552 return 0; ··· 3017 3008 intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency); 3018 3009 } 3019 3010 3011 + static void snb_wm_lp3_irq_quirk(struct drm_i915_private *dev_priv) 3012 + { 3013 + /* 3014 + * On some SNB machines (Thinkpad X220 Tablet at least) 3015 + * LP3 usage can cause vblank interrupts to be lost. 3016 + * The DEIIR bit will go high but it looks like the CPU 3017 + * never gets interrupted. 3018 + * 3019 + * It's not clear whether other interrupt sources could 3020 + * be affected or if this is somehow limited to vblank 3021 + * interrupts only. To play it safe we disable LP3 3022 + * watermarks entirely. 3023 + */ 3024 + if (dev_priv->wm.pri_latency[3] == 0 && 3025 + dev_priv->wm.spr_latency[3] == 0 && 3026 + dev_priv->wm.cur_latency[3] == 0) 3027 + return; 3028 + 3029 + dev_priv->wm.pri_latency[3] = 0; 3030 + dev_priv->wm.spr_latency[3] = 0; 3031 + dev_priv->wm.cur_latency[3] = 0; 3032 + 3033 + DRM_DEBUG_KMS("LP3 watermarks disabled due to potential for lost interrupts\n"); 3034 + intel_print_wm_latency(dev_priv, "Primary", dev_priv->wm.pri_latency); 3035 + intel_print_wm_latency(dev_priv, "Sprite", dev_priv->wm.spr_latency); 3036 + intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency); 3037 + } 3038 + 3020 3039 static void ilk_setup_wm_latency(struct drm_i915_private *dev_priv) 3021 3040 { 3022 3041 intel_read_wm_latency(dev_priv, dev_priv->wm.pri_latency); ··· 3061 3024 intel_print_wm_latency(dev_priv, "Sprite", dev_priv->wm.spr_latency); 3062 3025 intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency); 3063 3026 3064 - if (IS_GEN6(dev_priv)) 3027 + if (IS_GEN6(dev_priv)) { 3065 3028 snb_wm_latency_quirk(dev_priv); 3029 + snb_wm_lp3_irq_quirk(dev_priv); 3030 + } 3066 3031 } 3067 3032 3068 3033 static void skl_setup_wm_latency(struct drm_i915_private *dev_priv)
+36 -2
drivers/gpu/drm/i915/intel_ringbuffer.c
··· 91 91 gen4_render_ring_flush(struct i915_request *rq, u32 mode) 92 92 { 93 93 u32 cmd, *cs; 94 + int i; 94 95 95 96 /* 96 97 * read/write caches: ··· 128 127 cmd |= MI_INVALIDATE_ISP; 129 128 } 130 129 131 - cs = intel_ring_begin(rq, 2); 130 + i = 2; 131 + if (mode & EMIT_INVALIDATE) 132 + i += 20; 133 + 134 + cs = intel_ring_begin(rq, i); 132 135 if (IS_ERR(cs)) 133 136 return PTR_ERR(cs); 134 137 135 138 *cs++ = cmd; 136 - *cs++ = MI_NOOP; 139 + 140 + /* 141 + * A random delay to let the CS invalidate take effect? Without this 142 + * delay, the GPU relocation path fails as the CS does not see 143 + * the updated contents. Just as important, if we apply the flushes 144 + * to the EMIT_FLUSH branch (i.e. immediately after the relocation 145 + * write and before the invalidate on the next batch), the relocations 146 + * still fail. This implies that it is a delay following invalidation 147 + * that is required to reset the caches as opposed to a delay to 148 + * ensure the memory is written. 149 + */ 150 + if (mode & EMIT_INVALIDATE) { 151 + *cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE; 152 + *cs++ = i915_ggtt_offset(rq->engine->scratch) | 153 + PIPE_CONTROL_GLOBAL_GTT; 154 + *cs++ = 0; 155 + *cs++ = 0; 156 + 157 + for (i = 0; i < 12; i++) 158 + *cs++ = MI_FLUSH; 159 + 160 + *cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE; 161 + *cs++ = i915_ggtt_offset(rq->engine->scratch) | 162 + PIPE_CONTROL_GLOBAL_GTT; 163 + *cs++ = 0; 164 + *cs++ = 0; 165 + } 166 + 167 + *cs++ = cmd; 168 + 137 169 intel_ring_advance(rq, cs); 138 170 139 171 return 0;
+7 -9
drivers/gpu/drm/i915/intel_runtime_pm.c
··· 2749 2749 }, 2750 2750 }, 2751 2751 { 2752 + .name = "DC off", 2753 + .domains = ICL_DISPLAY_DC_OFF_POWER_DOMAINS, 2754 + .ops = &gen9_dc_off_power_well_ops, 2755 + .id = DISP_PW_ID_NONE, 2756 + }, 2757 + { 2752 2758 .name = "power well 2", 2753 2759 .domains = ICL_PW_2_POWER_DOMAINS, 2754 2760 .ops = &hsw_power_well_ops, ··· 2764 2758 .hsw.idx = ICL_PW_CTL_IDX_PW_2, 2765 2759 .hsw.has_fuses = true, 2766 2760 }, 2767 - }, 2768 - { 2769 - .name = "DC off", 2770 - .domains = ICL_DISPLAY_DC_OFF_POWER_DOMAINS, 2771 - .ops = &gen9_dc_off_power_well_ops, 2772 - .id = DISP_PW_ID_NONE, 2773 2761 }, 2774 2762 { 2775 2763 .name = "power well 3", ··· 3176 3176 void icl_dbuf_slices_update(struct drm_i915_private *dev_priv, 3177 3177 u8 req_slices) 3178 3178 { 3179 - u8 hw_enabled_slices = dev_priv->wm.skl_hw.ddb.enabled_slices; 3180 - u32 val; 3179 + const u8 hw_enabled_slices = dev_priv->wm.skl_hw.ddb.enabled_slices; 3181 3180 bool ret; 3182 3181 3183 3182 if (req_slices > intel_dbuf_max_slices(dev_priv)) { ··· 3187 3188 if (req_slices == hw_enabled_slices || req_slices == 0) 3188 3189 return; 3189 3190 3190 - val = I915_READ(DBUF_CTL_S2); 3191 3191 if (req_slices > hw_enabled_slices) 3192 3192 ret = intel_dbuf_slice_set(dev_priv, DBUF_CTL_S2, true); 3193 3193 else
+54 -39
drivers/gpu/drm/i915/intel_sprite.c
··· 302 302 return min(8192 * cpp, 32768); 303 303 } 304 304 305 + static void 306 + skl_program_scaler(struct intel_plane *plane, 307 + const struct intel_crtc_state *crtc_state, 308 + const struct intel_plane_state *plane_state) 309 + { 310 + struct drm_i915_private *dev_priv = to_i915(plane->base.dev); 311 + enum pipe pipe = plane->pipe; 312 + int scaler_id = plane_state->scaler_id; 313 + const struct intel_scaler *scaler = 314 + &crtc_state->scaler_state.scalers[scaler_id]; 315 + int crtc_x = plane_state->base.dst.x1; 316 + int crtc_y = plane_state->base.dst.y1; 317 + uint32_t crtc_w = drm_rect_width(&plane_state->base.dst); 318 + uint32_t crtc_h = drm_rect_height(&plane_state->base.dst); 319 + u16 y_hphase, uv_rgb_hphase; 320 + u16 y_vphase, uv_rgb_vphase; 321 + int hscale, vscale; 322 + 323 + hscale = drm_rect_calc_hscale(&plane_state->base.src, 324 + &plane_state->base.dst, 325 + 0, INT_MAX); 326 + vscale = drm_rect_calc_vscale(&plane_state->base.src, 327 + &plane_state->base.dst, 328 + 0, INT_MAX); 329 + 330 + /* TODO: handle sub-pixel coordinates */ 331 + if (plane_state->base.fb->format->format == DRM_FORMAT_NV12) { 332 + y_hphase = skl_scaler_calc_phase(1, hscale, false); 333 + y_vphase = skl_scaler_calc_phase(1, vscale, false); 334 + 335 + /* MPEG2 chroma siting convention */ 336 + uv_rgb_hphase = skl_scaler_calc_phase(2, hscale, true); 337 + uv_rgb_vphase = skl_scaler_calc_phase(2, vscale, false); 338 + } else { 339 + /* not used */ 340 + y_hphase = 0; 341 + y_vphase = 0; 342 + 343 + uv_rgb_hphase = skl_scaler_calc_phase(1, hscale, false); 344 + uv_rgb_vphase = skl_scaler_calc_phase(1, vscale, false); 345 + } 346 + 347 + I915_WRITE_FW(SKL_PS_CTRL(pipe, scaler_id), 348 + PS_SCALER_EN | PS_PLANE_SEL(plane->id) | scaler->mode); 349 + I915_WRITE_FW(SKL_PS_PWR_GATE(pipe, scaler_id), 0); 350 + I915_WRITE_FW(SKL_PS_VPHASE(pipe, scaler_id), 351 + PS_Y_PHASE(y_vphase) | PS_UV_RGB_PHASE(uv_rgb_vphase)); 352 + I915_WRITE_FW(SKL_PS_HPHASE(pipe, scaler_id), 353 + 
PS_Y_PHASE(y_hphase) | PS_UV_RGB_PHASE(uv_rgb_hphase)); 354 + I915_WRITE_FW(SKL_PS_WIN_POS(pipe, scaler_id), (crtc_x << 16) | crtc_y); 355 + I915_WRITE_FW(SKL_PS_WIN_SZ(pipe, scaler_id), (crtc_w << 16) | crtc_h); 356 + } 357 + 305 358 void 306 359 skl_update_plane(struct intel_plane *plane, 307 360 const struct intel_crtc_state *crtc_state, 308 361 const struct intel_plane_state *plane_state) 309 362 { 310 363 struct drm_i915_private *dev_priv = to_i915(plane->base.dev); 311 - const struct drm_framebuffer *fb = plane_state->base.fb; 312 364 enum plane_id plane_id = plane->id; 313 365 enum pipe pipe = plane->pipe; 314 366 u32 plane_ctl = plane_state->ctl; ··· 370 318 u32 aux_stride = skl_plane_stride(plane_state, 1); 371 319 int crtc_x = plane_state->base.dst.x1; 372 320 int crtc_y = plane_state->base.dst.y1; 373 - uint32_t crtc_w = drm_rect_width(&plane_state->base.dst); 374 - uint32_t crtc_h = drm_rect_height(&plane_state->base.dst); 375 321 uint32_t x = plane_state->color_plane[0].x; 376 322 uint32_t y = plane_state->color_plane[0].y; 377 323 uint32_t src_w = drm_rect_width(&plane_state->base.src) >> 16; ··· 379 329 /* Sizes are 0 based */ 380 330 src_w--; 381 331 src_h--; 382 - crtc_w--; 383 - crtc_h--; 384 332 385 333 spin_lock_irqsave(&dev_priv->uncore.lock, irqflags); 386 334 ··· 401 353 (plane_state->color_plane[1].y << 16) | 402 354 plane_state->color_plane[1].x); 403 355 404 - /* program plane scaler */ 405 356 if (plane_state->scaler_id >= 0) { 406 - int scaler_id = plane_state->scaler_id; 407 - const struct intel_scaler *scaler = 408 - &crtc_state->scaler_state.scalers[scaler_id]; 409 - u16 y_hphase, uv_rgb_hphase; 410 - u16 y_vphase, uv_rgb_vphase; 411 - 412 - /* TODO: handle sub-pixel coordinates */ 413 - if (fb->format->format == DRM_FORMAT_NV12) { 414 - y_hphase = skl_scaler_calc_phase(1, false); 415 - y_vphase = skl_scaler_calc_phase(1, false); 416 - 417 - /* MPEG2 chroma siting convention */ 418 - uv_rgb_hphase = skl_scaler_calc_phase(2, true); 419 
- uv_rgb_vphase = skl_scaler_calc_phase(2, false); 420 - } else { 421 - /* not used */ 422 - y_hphase = 0; 423 - y_vphase = 0; 424 - 425 - uv_rgb_hphase = skl_scaler_calc_phase(1, false); 426 - uv_rgb_vphase = skl_scaler_calc_phase(1, false); 427 - } 428 - 429 - I915_WRITE_FW(SKL_PS_CTRL(pipe, scaler_id), 430 - PS_SCALER_EN | PS_PLANE_SEL(plane_id) | scaler->mode); 431 - I915_WRITE_FW(SKL_PS_PWR_GATE(pipe, scaler_id), 0); 432 - I915_WRITE_FW(SKL_PS_VPHASE(pipe, scaler_id), 433 - PS_Y_PHASE(y_vphase) | PS_UV_RGB_PHASE(uv_rgb_vphase)); 434 - I915_WRITE_FW(SKL_PS_HPHASE(pipe, scaler_id), 435 - PS_Y_PHASE(y_hphase) | PS_UV_RGB_PHASE(uv_rgb_hphase)); 436 - I915_WRITE_FW(SKL_PS_WIN_POS(pipe, scaler_id), (crtc_x << 16) | crtc_y); 437 - I915_WRITE_FW(SKL_PS_WIN_SZ(pipe, scaler_id), 438 - ((crtc_w + 1) << 16)|(crtc_h + 1)); 357 + skl_program_scaler(plane, crtc_state, plane_state); 439 358 440 359 I915_WRITE_FW(PLANE_POS(pipe, plane_id), 0); 441 360 } else {
+8 -7
drivers/gpu/drm/meson/meson_venc.c
··· 854 854 unsigned int sof_lines; 855 855 unsigned int vsync_lines; 856 856 857 + /* Use VENCI for 480i and 576i and double HDMI pixels */ 858 + if (mode->flags & DRM_MODE_FLAG_DBLCLK) { 859 + hdmi_repeat = true; 860 + use_enci = true; 861 + venc_hdmi_latency = 1; 862 + } 863 + 857 864 if (meson_venc_hdmi_supported_vic(vic)) { 858 865 vmode = meson_venc_hdmi_get_vic_vmode(vic); 859 866 if (!vmode) { ··· 872 865 } else { 873 866 meson_venc_hdmi_get_dmt_vmode(mode, &vmode_dmt); 874 867 vmode = &vmode_dmt; 875 - } 876 - 877 - /* Use VENCI for 480i and 576i and double HDMI pixels */ 878 - if (mode->flags & DRM_MODE_FLAG_DBLCLK) { 879 - hdmi_repeat = true; 880 - use_enci = true; 881 - venc_hdmi_latency = 1; 868 + use_enci = false; 882 869 } 883 870 884 871 /* Repeat VENC pixels for 480/576i/p, 720p50/60 and 1080p50/60 */
+11 -11
drivers/gpu/drm/omapdrm/dss/dsi.c
··· 5409 5409 5410 5410 /* DSI on OMAP3 doesn't have register DSI_GNQ, set number 5411 5411 * of data to 3 by default */ 5412 - if (dsi->data->quirks & DSI_QUIRK_GNQ) 5412 + if (dsi->data->quirks & DSI_QUIRK_GNQ) { 5413 + dsi_runtime_get(dsi); 5413 5414 /* NB_DATA_LANES */ 5414 5415 dsi->num_lanes_supported = 1 + REG_GET(dsi, DSI_GNQ, 11, 9); 5415 - else 5416 + dsi_runtime_put(dsi); 5417 + } else { 5416 5418 dsi->num_lanes_supported = 3; 5419 + } 5417 5420 5418 5421 r = dsi_init_output(dsi); 5419 5422 if (r) ··· 5429 5426 } 5430 5427 5431 5428 r = of_platform_populate(dev->of_node, NULL, NULL, dev); 5432 - if (r) 5429 + if (r) { 5433 5430 DSSERR("Failed to populate DSI child devices: %d\n", r); 5431 + goto err_uninit_output; 5432 + } 5434 5433 5435 5434 r = component_add(&pdev->dev, &dsi_component_ops); 5436 5435 if (r) 5437 - goto err_uninit_output; 5436 + goto err_of_depopulate; 5438 5437 5439 5438 return 0; 5440 5439 5440 + err_of_depopulate: 5441 + of_platform_depopulate(dev); 5441 5442 err_uninit_output: 5442 5443 dsi_uninit_output(dsi); 5443 5444 err_pm_disable: ··· 5477 5470 /* wait for current handler to finish before turning the DSI off */ 5478 5471 synchronize_irq(dsi->irq); 5479 5472 5480 - dispc_runtime_put(dsi->dss->dispc); 5481 - 5482 5473 return 0; 5483 5474 } 5484 5475 5485 5476 static int dsi_runtime_resume(struct device *dev) 5486 5477 { 5487 5478 struct dsi_data *dsi = dev_get_drvdata(dev); 5488 - int r; 5489 - 5490 - r = dispc_runtime_get(dsi->dss->dispc); 5491 - if (r) 5492 - return r; 5493 5479 5494 5480 dsi->is_enabled = true; 5495 5481 /* ensure the irq handler sees the is_enabled value */
+10 -1
drivers/gpu/drm/omapdrm/dss/dss.c
··· 1484 1484 dss); 1485 1485 1486 1486 /* Add all the child devices as components. */ 1487 + r = of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev); 1488 + if (r) 1489 + goto err_uninit_debugfs; 1490 + 1487 1491 omapdss_gather_components(&pdev->dev); 1488 1492 1489 1493 device_for_each_child(&pdev->dev, &match, dss_add_child_component); 1490 1494 1491 1495 r = component_master_add_with_match(&pdev->dev, &dss_component_ops, match); 1492 1496 if (r) 1493 - goto err_uninit_debugfs; 1497 + goto err_of_depopulate; 1494 1498 1495 1499 return 0; 1500 + 1501 + err_of_depopulate: 1502 + of_platform_depopulate(&pdev->dev); 1496 1503 1497 1504 err_uninit_debugfs: 1498 1505 dss_debugfs_remove_file(dss->debugfs.clk); ··· 1528 1521 static int dss_remove(struct platform_device *pdev) 1529 1522 { 1530 1523 struct dss_device *dss = platform_get_drvdata(pdev); 1524 + 1525 + of_platform_depopulate(&pdev->dev); 1531 1526 1532 1527 component_master_del(&pdev->dev, &dss_component_ops); 1533 1528
+9 -28
drivers/gpu/drm/omapdrm/dss/hdmi4.c
··· 635 635 636 636 hdmi->dss = dss; 637 637 638 - r = hdmi_pll_init(dss, hdmi->pdev, &hdmi->pll, &hdmi->wp); 638 + r = hdmi_runtime_get(hdmi); 639 639 if (r) 640 640 return r; 641 + 642 + r = hdmi_pll_init(dss, hdmi->pdev, &hdmi->pll, &hdmi->wp); 643 + if (r) 644 + goto err_runtime_put; 641 645 642 646 r = hdmi4_cec_init(hdmi->pdev, &hdmi->core, &hdmi->wp); 643 647 if (r) ··· 656 652 hdmi->debugfs = dss_debugfs_create_file(dss, "hdmi", hdmi_dump_regs, 657 653 hdmi); 658 654 655 + hdmi_runtime_put(hdmi); 656 + 659 657 return 0; 660 658 661 659 err_cec_uninit: 662 660 hdmi4_cec_uninit(&hdmi->core); 663 661 err_pll_uninit: 664 662 hdmi_pll_uninit(&hdmi->pll); 663 + err_runtime_put: 664 + hdmi_runtime_put(hdmi); 665 665 return r; 666 666 } 667 667 ··· 841 833 return 0; 842 834 } 843 835 844 - static int hdmi_runtime_suspend(struct device *dev) 845 - { 846 - struct omap_hdmi *hdmi = dev_get_drvdata(dev); 847 - 848 - dispc_runtime_put(hdmi->dss->dispc); 849 - 850 - return 0; 851 - } 852 - 853 - static int hdmi_runtime_resume(struct device *dev) 854 - { 855 - struct omap_hdmi *hdmi = dev_get_drvdata(dev); 856 - int r; 857 - 858 - r = dispc_runtime_get(hdmi->dss->dispc); 859 - if (r < 0) 860 - return r; 861 - 862 - return 0; 863 - } 864 - 865 - static const struct dev_pm_ops hdmi_pm_ops = { 866 - .runtime_suspend = hdmi_runtime_suspend, 867 - .runtime_resume = hdmi_runtime_resume, 868 - }; 869 - 870 836 static const struct of_device_id hdmi_of_match[] = { 871 837 { .compatible = "ti,omap4-hdmi", }, 872 838 {}, ··· 851 869 .remove = hdmi4_remove, 852 870 .driver = { 853 871 .name = "omapdss_hdmi", 854 - .pm = &hdmi_pm_ops, 855 872 .of_match_table = hdmi_of_match, 856 873 .suppress_bind_attrs = true, 857 874 },
-27
drivers/gpu/drm/omapdrm/dss/hdmi5.c
··· 825 825 return 0; 826 826 } 827 827 828 - static int hdmi_runtime_suspend(struct device *dev) 829 - { 830 - struct omap_hdmi *hdmi = dev_get_drvdata(dev); 831 - 832 - dispc_runtime_put(hdmi->dss->dispc); 833 - 834 - return 0; 835 - } 836 - 837 - static int hdmi_runtime_resume(struct device *dev) 838 - { 839 - struct omap_hdmi *hdmi = dev_get_drvdata(dev); 840 - int r; 841 - 842 - r = dispc_runtime_get(hdmi->dss->dispc); 843 - if (r < 0) 844 - return r; 845 - 846 - return 0; 847 - } 848 - 849 - static const struct dev_pm_ops hdmi_pm_ops = { 850 - .runtime_suspend = hdmi_runtime_suspend, 851 - .runtime_resume = hdmi_runtime_resume, 852 - }; 853 - 854 828 static const struct of_device_id hdmi_of_match[] = { 855 829 { .compatible = "ti,omap5-hdmi", }, 856 830 { .compatible = "ti,dra7-hdmi", }, ··· 836 862 .remove = hdmi5_remove, 837 863 .driver = { 838 864 .name = "omapdss_hdmi5", 839 - .pm = &hdmi_pm_ops, 840 865 .of_match_table = hdmi_of_match, 841 866 .suppress_bind_attrs = true, 842 867 },
-7
drivers/gpu/drm/omapdrm/dss/venc.c
··· 946 946 if (venc->tv_dac_clk) 947 947 clk_disable_unprepare(venc->tv_dac_clk); 948 948 949 - dispc_runtime_put(venc->dss->dispc); 950 - 951 949 return 0; 952 950 } 953 951 954 952 static int venc_runtime_resume(struct device *dev) 955 953 { 956 954 struct venc_device *venc = dev_get_drvdata(dev); 957 - int r; 958 - 959 - r = dispc_runtime_get(venc->dss->dispc); 960 - if (r < 0) 961 - return r; 962 955 963 956 if (venc->tv_dac_clk) 964 957 clk_prepare_enable(venc->tv_dac_clk);
+6
drivers/gpu/drm/omapdrm/omap_crtc.c
··· 350 350 static void omap_crtc_atomic_enable(struct drm_crtc *crtc, 351 351 struct drm_crtc_state *old_state) 352 352 { 353 + struct omap_drm_private *priv = crtc->dev->dev_private; 353 354 struct omap_crtc *omap_crtc = to_omap_crtc(crtc); 354 355 int ret; 355 356 356 357 DBG("%s", omap_crtc->name); 358 + 359 + priv->dispc_ops->runtime_get(priv->dispc); 357 360 358 361 spin_lock_irq(&crtc->dev->event_lock); 359 362 drm_crtc_vblank_on(crtc); ··· 370 367 static void omap_crtc_atomic_disable(struct drm_crtc *crtc, 371 368 struct drm_crtc_state *old_state) 372 369 { 370 + struct omap_drm_private *priv = crtc->dev->dev_private; 373 371 struct omap_crtc *omap_crtc = to_omap_crtc(crtc); 374 372 375 373 DBG("%s", omap_crtc->name); ··· 383 379 spin_unlock_irq(&crtc->dev->event_lock); 384 380 385 381 drm_crtc_vblank_off(crtc); 382 + 383 + priv->dispc_ops->runtime_put(priv->dispc); 386 384 } 387 385 388 386 static enum drm_mode_status omap_crtc_mode_valid(struct drm_crtc *crtc,
+6
drivers/gpu/drm/vc4/vc4_kms.c
··· 214 214 return 0; 215 215 } 216 216 217 + /* We know for sure we don't want an async update here. Set 218 + * state->legacy_cursor_update to false to prevent 219 + * drm_atomic_helper_setup_commit() from auto-completing 220 + * commit->flip_done. 221 + */ 222 + state->legacy_cursor_update = false; 217 223 ret = drm_atomic_helper_setup_commit(state, nonblock); 218 224 if (ret) 219 225 return ret;
+13 -2
drivers/gpu/drm/vc4/vc4_plane.c
··· 807 807 static void vc4_plane_atomic_async_update(struct drm_plane *plane, 808 808 struct drm_plane_state *state) 809 809 { 810 - struct vc4_plane_state *vc4_state = to_vc4_plane_state(plane->state); 810 + struct vc4_plane_state *vc4_state, *new_vc4_state; 811 811 812 812 if (plane->state->fb != state->fb) { 813 813 vc4_plane_async_set_fb(plane, state->fb); ··· 828 828 plane->state->src_y = state->src_y; 829 829 830 830 /* Update the display list based on the new crtc_x/y. */ 831 - vc4_plane_atomic_check(plane, plane->state); 831 + vc4_plane_atomic_check(plane, state); 832 + 833 + new_vc4_state = to_vc4_plane_state(state); 834 + vc4_state = to_vc4_plane_state(plane->state); 835 + 836 + /* Update the current vc4_state pos0, pos2 and ptr0 dlist entries. */ 837 + vc4_state->dlist[vc4_state->pos0_offset] = 838 + new_vc4_state->dlist[vc4_state->pos0_offset]; 839 + vc4_state->dlist[vc4_state->pos2_offset] = 840 + new_vc4_state->dlist[vc4_state->pos2_offset]; 841 + vc4_state->dlist[vc4_state->ptr0_offset] = 842 + new_vc4_state->dlist[vc4_state->ptr0_offset]; 832 843 833 844 /* Note that we can't just call vc4_plane_write_dlist() 834 845 * because that would smash the context data that the HVS is
+8
drivers/hid/hid-ids.h
··· 275 275 276 276 #define USB_VENDOR_ID_CIDC 0x1677 277 277 278 + #define I2C_VENDOR_ID_CIRQUE 0x0488 279 + #define I2C_PRODUCT_ID_CIRQUE_121F 0x121F 280 + 278 281 #define USB_VENDOR_ID_CJTOUCH 0x24b8 279 282 #define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0020 0x0020 280 283 #define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0040 0x0040 ··· 710 707 #define USB_VENDOR_ID_LG 0x1fd2 711 708 #define USB_DEVICE_ID_LG_MULTITOUCH 0x0064 712 709 #define USB_DEVICE_ID_LG_MELFAS_MT 0x6007 710 + #define I2C_DEVICE_ID_LG_8001 0x8001 713 711 714 712 #define USB_VENDOR_ID_LOGITECH 0x046d 715 713 #define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e ··· 809 805 #define USB_DEVICE_ID_MS_TYPE_COVER_2 0x07a9 810 806 #define USB_DEVICE_ID_MS_POWER_COVER 0x07da 811 807 #define USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER 0x02fd 808 + #define USB_DEVICE_ID_MS_PIXART_MOUSE 0x00cb 812 809 813 810 #define USB_VENDOR_ID_MOJO 0x8282 814 811 #define USB_DEVICE_ID_RETRO_ADAPTER 0x3201 ··· 1048 1043 #define USB_VENDOR_ID_SYMBOL 0x05e0 1049 1044 #define USB_DEVICE_ID_SYMBOL_SCANNER_1 0x0800 1050 1045 #define USB_DEVICE_ID_SYMBOL_SCANNER_2 0x1300 1046 + #define USB_DEVICE_ID_SYMBOL_SCANNER_3 0x1200 1051 1047 1052 1048 #define USB_VENDOR_ID_SYNAPTICS 0x06cb 1053 1049 #define USB_DEVICE_ID_SYNAPTICS_TP 0x0001 ··· 1210 1204 #define USB_DEVICE_ID_PRIMAX_MOUSE_4D22 0x4d22 1211 1205 #define USB_DEVICE_ID_PRIMAX_KEYBOARD 0x4e05 1212 1206 #define USB_DEVICE_ID_PRIMAX_REZEL 0x4e72 1207 + #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F 0x4d0f 1208 + #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22 0x4e22 1213 1209 1214 1210 1215 1211 #define USB_VENDOR_ID_RISO_KAGAKU 0x1294 /* Riso Kagaku Corp. */
+3 -44
drivers/hid/hid-input.c
··· 325 325 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, 326 326 USB_DEVICE_ID_ELECOM_BM084), 327 327 HID_BATTERY_QUIRK_IGNORE }, 328 + { HID_USB_DEVICE(USB_VENDOR_ID_SYMBOL, 329 + USB_DEVICE_ID_SYMBOL_SCANNER_3), 330 + HID_BATTERY_QUIRK_IGNORE }, 328 331 {} 329 332 }; 330 333 ··· 1841 1838 } 1842 1839 EXPORT_SYMBOL_GPL(hidinput_disconnect); 1843 1840 1844 - /** 1845 - * hid_scroll_counter_handle_scroll() - Send high- and low-resolution scroll 1846 - * events given a high-resolution wheel 1847 - * movement. 1848 - * @counter: a hid_scroll_counter struct describing the wheel. 1849 - * @hi_res_value: the movement of the wheel, in the mouse's high-resolution 1850 - * units. 1851 - * 1852 - * Given a high-resolution movement, this function converts the movement into 1853 - * microns and emits high-resolution scroll events for the input device. It also 1854 - * uses the multiplier from &struct hid_scroll_counter to emit low-resolution 1855 - * scroll events when appropriate for backwards-compatibility with userspace 1856 - * input libraries. 1857 - */ 1858 - void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter, 1859 - int hi_res_value) 1860 - { 1861 - int low_res_value, remainder, multiplier; 1862 - 1863 - input_report_rel(counter->dev, REL_WHEEL_HI_RES, 1864 - hi_res_value * counter->microns_per_hi_res_unit); 1865 - 1866 - /* 1867 - * Update the low-res remainder with the high-res value, 1868 - * but reset if the direction has changed. 1869 - */ 1870 - remainder = counter->remainder; 1871 - if ((remainder ^ hi_res_value) < 0) 1872 - remainder = 0; 1873 - remainder += hi_res_value; 1874 - 1875 - /* 1876 - * Then just use the resolution multiplier to see if 1877 - * we should send a low-res (aka regular wheel) event. 
1878 - */ 1879 - multiplier = counter->resolution_multiplier; 1880 - low_res_value = remainder / multiplier; 1881 - remainder -= low_res_value * multiplier; 1882 - counter->remainder = remainder; 1883 - 1884 - if (low_res_value) 1885 - input_report_rel(counter->dev, REL_WHEEL, low_res_value); 1886 - } 1887 - EXPORT_SYMBOL_GPL(hid_scroll_counter_handle_scroll);
+27 -282
drivers/hid/hid-logitech-hidpp.c
··· 64 64 #define HIDPP_QUIRK_NO_HIDINPUT BIT(23) 65 65 #define HIDPP_QUIRK_FORCE_OUTPUT_REPORTS BIT(24) 66 66 #define HIDPP_QUIRK_UNIFYING BIT(25) 67 - #define HIDPP_QUIRK_HI_RES_SCROLL_1P0 BIT(26) 68 - #define HIDPP_QUIRK_HI_RES_SCROLL_X2120 BIT(27) 69 - #define HIDPP_QUIRK_HI_RES_SCROLL_X2121 BIT(28) 70 - 71 - /* Convenience constant to check for any high-res support. */ 72 - #define HIDPP_QUIRK_HI_RES_SCROLL (HIDPP_QUIRK_HI_RES_SCROLL_1P0 | \ 73 - HIDPP_QUIRK_HI_RES_SCROLL_X2120 | \ 74 - HIDPP_QUIRK_HI_RES_SCROLL_X2121) 75 67 76 68 #define HIDPP_QUIRK_DELAYED_INIT HIDPP_QUIRK_NO_HIDINPUT 77 69 ··· 149 157 unsigned long capabilities; 150 158 151 159 struct hidpp_battery battery; 152 - struct hid_scroll_counter vertical_wheel_counter; 153 160 }; 154 161 155 162 /* HID++ 1.0 error codes */ ··· 400 409 #define HIDPP_SET_LONG_REGISTER 0x82 401 410 #define HIDPP_GET_LONG_REGISTER 0x83 402 411 403 - /** 404 - * hidpp10_set_register_bit() - Sets a single bit in a HID++ 1.0 register. 405 - * @hidpp_dev: the device to set the register on. 406 - * @register_address: the address of the register to modify. 407 - * @byte: the byte of the register to modify. Should be less than 3. 408 - * Return: 0 if successful, otherwise a negative error code. 
409 - */ 410 - static int hidpp10_set_register_bit(struct hidpp_device *hidpp_dev, 411 - u8 register_address, u8 byte, u8 bit) 412 + #define HIDPP_REG_GENERAL 0x00 413 + 414 + static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev) 412 415 { 413 416 struct hidpp_report response; 414 417 int ret; 415 418 u8 params[3] = { 0 }; 416 419 417 420 ret = hidpp_send_rap_command_sync(hidpp_dev, 418 - REPORT_ID_HIDPP_SHORT, 419 - HIDPP_GET_REGISTER, 420 - register_address, 421 - NULL, 0, &response); 421 + REPORT_ID_HIDPP_SHORT, 422 + HIDPP_GET_REGISTER, 423 + HIDPP_REG_GENERAL, 424 + NULL, 0, &response); 422 425 if (ret) 423 426 return ret; 424 427 425 428 memcpy(params, response.rap.params, 3); 426 429 427 - params[byte] |= BIT(bit); 430 + /* Set the battery bit */ 431 + params[0] |= BIT(4); 428 432 429 433 return hidpp_send_rap_command_sync(hidpp_dev, 430 - REPORT_ID_HIDPP_SHORT, 431 - HIDPP_SET_REGISTER, 432 - register_address, 433 - params, 3, &response); 434 - } 435 - 436 - 437 - #define HIDPP_REG_GENERAL 0x00 438 - 439 - static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev) 440 - { 441 - return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_GENERAL, 0, 4); 442 - } 443 - 444 - #define HIDPP_REG_FEATURES 0x01 445 - 446 - /* On HID++ 1.0 devices, high-res scroll was called "scrolling acceleration". 
*/ 447 - static int hidpp10_enable_scrolling_acceleration(struct hidpp_device *hidpp_dev) 448 - { 449 - return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_FEATURES, 0, 6); 434 + REPORT_ID_HIDPP_SHORT, 435 + HIDPP_SET_REGISTER, 436 + HIDPP_REG_GENERAL, 437 + params, 3, &response); 450 438 } 451 439 452 440 #define HIDPP_REG_BATTERY_STATUS 0x07 ··· 1134 1164 } 1135 1165 1136 1166 return ret; 1137 - } 1138 - 1139 - /* -------------------------------------------------------------------------- */ 1140 - /* 0x2120: Hi-resolution scrolling */ 1141 - /* -------------------------------------------------------------------------- */ 1142 - 1143 - #define HIDPP_PAGE_HI_RESOLUTION_SCROLLING 0x2120 1144 - 1145 - #define CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE 0x10 1146 - 1147 - static int hidpp_hrs_set_highres_scrolling_mode(struct hidpp_device *hidpp, 1148 - bool enabled, u8 *multiplier) 1149 - { 1150 - u8 feature_index; 1151 - u8 feature_type; 1152 - int ret; 1153 - u8 params[1]; 1154 - struct hidpp_report response; 1155 - 1156 - ret = hidpp_root_get_feature(hidpp, 1157 - HIDPP_PAGE_HI_RESOLUTION_SCROLLING, 1158 - &feature_index, 1159 - &feature_type); 1160 - if (ret) 1161 - return ret; 1162 - 1163 - params[0] = enabled ? 
BIT(0) : 0; 1164 - ret = hidpp_send_fap_command_sync(hidpp, feature_index, 1165 - CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE, 1166 - params, sizeof(params), &response); 1167 - if (ret) 1168 - return ret; 1169 - *multiplier = response.fap.params[1]; 1170 - return 0; 1171 - } 1172 - 1173 - /* -------------------------------------------------------------------------- */ 1174 - /* 0x2121: HiRes Wheel */ 1175 - /* -------------------------------------------------------------------------- */ 1176 - 1177 - #define HIDPP_PAGE_HIRES_WHEEL 0x2121 1178 - 1179 - #define CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY 0x00 1180 - #define CMD_HIRES_WHEEL_SET_WHEEL_MODE 0x20 1181 - 1182 - static int hidpp_hrw_get_wheel_capability(struct hidpp_device *hidpp, 1183 - u8 *multiplier) 1184 - { 1185 - u8 feature_index; 1186 - u8 feature_type; 1187 - int ret; 1188 - struct hidpp_report response; 1189 - 1190 - ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL, 1191 - &feature_index, &feature_type); 1192 - if (ret) 1193 - goto return_default; 1194 - 1195 - ret = hidpp_send_fap_command_sync(hidpp, feature_index, 1196 - CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY, 1197 - NULL, 0, &response); 1198 - if (ret) 1199 - goto return_default; 1200 - 1201 - *multiplier = response.fap.params[0]; 1202 - return 0; 1203 - return_default: 1204 - hid_warn(hidpp->hid_dev, 1205 - "Couldn't get wheel multiplier (error %d), assuming %d.\n", 1206 - ret, *multiplier); 1207 - return ret; 1208 - } 1209 - 1210 - static int hidpp_hrw_set_wheel_mode(struct hidpp_device *hidpp, bool invert, 1211 - bool high_resolution, bool use_hidpp) 1212 - { 1213 - u8 feature_index; 1214 - u8 feature_type; 1215 - int ret; 1216 - u8 params[1]; 1217 - struct hidpp_report response; 1218 - 1219 - ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL, 1220 - &feature_index, &feature_type); 1221 - if (ret) 1222 - return ret; 1223 - 1224 - params[0] = (invert ? BIT(2) : 0) | 1225 - (high_resolution ? 
BIT(1) : 0) | 1226 - (use_hidpp ? BIT(0) : 0); 1227 - 1228 - return hidpp_send_fap_command_sync(hidpp, feature_index, 1229 - CMD_HIRES_WHEEL_SET_WHEEL_MODE, 1230 - params, sizeof(params), &response); 1231 1167 } 1232 1168 1233 1169 /* -------------------------------------------------------------------------- */ ··· 2399 2523 input_report_rel(mydata->input, REL_Y, v); 2400 2524 2401 2525 v = hid_snto32(data[6], 8); 2402 - hid_scroll_counter_handle_scroll( 2403 - &hidpp->vertical_wheel_counter, v); 2526 + input_report_rel(mydata->input, REL_WHEEL, v); 2404 2527 2405 2528 input_sync(mydata->input); 2406 2529 } ··· 2528 2653 } 2529 2654 2530 2655 /* -------------------------------------------------------------------------- */ 2531 - /* High-resolution scroll wheels */ 2532 - /* -------------------------------------------------------------------------- */ 2533 - 2534 - /** 2535 - * struct hi_res_scroll_info - Stores info on a device's high-res scroll wheel. 2536 - * @product_id: the HID product ID of the device being described. 2537 - * @microns_per_hi_res_unit: the distance moved by the user's finger for each 2538 - * high-resolution unit reported by the device, in 2539 - * 256ths of a millimetre. 
2540 - */ 2541 - struct hi_res_scroll_info { 2542 - __u32 product_id; 2543 - int microns_per_hi_res_unit; 2544 - }; 2545 - 2546 - static struct hi_res_scroll_info hi_res_scroll_devices[] = { 2547 - { /* Anywhere MX */ 2548 - .product_id = 0x1017, .microns_per_hi_res_unit = 445 }, 2549 - { /* Performance MX */ 2550 - .product_id = 0x101a, .microns_per_hi_res_unit = 406 }, 2551 - { /* M560 */ 2552 - .product_id = 0x402d, .microns_per_hi_res_unit = 435 }, 2553 - { /* MX Master 2S */ 2554 - .product_id = 0x4069, .microns_per_hi_res_unit = 406 }, 2555 - }; 2556 - 2557 - static int hi_res_scroll_look_up_microns(__u32 product_id) 2558 - { 2559 - int i; 2560 - int num_devices = sizeof(hi_res_scroll_devices) 2561 - / sizeof(hi_res_scroll_devices[0]); 2562 - for (i = 0; i < num_devices; i++) { 2563 - if (hi_res_scroll_devices[i].product_id == product_id) 2564 - return hi_res_scroll_devices[i].microns_per_hi_res_unit; 2565 - } 2566 - /* We don't have a value for this device, so use a sensible default. 
*/ 2567 - return 406; 2568 - } 2569 - 2570 - static int hi_res_scroll_enable(struct hidpp_device *hidpp) 2571 - { 2572 - int ret; 2573 - u8 multiplier = 8; 2574 - 2575 - if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2121) { 2576 - ret = hidpp_hrw_set_wheel_mode(hidpp, false, true, false); 2577 - hidpp_hrw_get_wheel_capability(hidpp, &multiplier); 2578 - } else if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2120) { 2579 - ret = hidpp_hrs_set_highres_scrolling_mode(hidpp, true, 2580 - &multiplier); 2581 - } else /* if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_1P0) */ 2582 - ret = hidpp10_enable_scrolling_acceleration(hidpp); 2583 - 2584 - if (ret) 2585 - return ret; 2586 - 2587 - hidpp->vertical_wheel_counter.resolution_multiplier = multiplier; 2588 - hidpp->vertical_wheel_counter.microns_per_hi_res_unit = 2589 - hi_res_scroll_look_up_microns(hidpp->hid_dev->product); 2590 - hid_info(hidpp->hid_dev, "multiplier = %d, microns = %d\n", 2591 - multiplier, 2592 - hidpp->vertical_wheel_counter.microns_per_hi_res_unit); 2593 - return 0; 2594 - } 2595 - 2596 - /* -------------------------------------------------------------------------- */ 2597 2656 /* Generic HID++ devices */ 2598 2657 /* -------------------------------------------------------------------------- */ 2599 2658 ··· 2572 2763 wtp_populate_input(hidpp, input, origin_is_hid_core); 2573 2764 else if (hidpp->quirks & HIDPP_QUIRK_CLASS_M560) 2574 2765 m560_populate_input(hidpp, input, origin_is_hid_core); 2575 - 2576 - if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) { 2577 - input_set_capability(input, EV_REL, REL_WHEEL_HI_RES); 2578 - hidpp->vertical_wheel_counter.dev = input; 2579 - } 2580 2766 } 2581 2767 2582 2768 static int hidpp_input_configured(struct hid_device *hdev, ··· 2688 2884 return m560_raw_event(hdev, data, size); 2689 2885 2690 2886 return 0; 2691 - } 2692 - 2693 - static int hidpp_event(struct hid_device *hdev, struct hid_field *field, 2694 - struct hid_usage *usage, __s32 value) 2695 - { 2696 - 
/* This function will only be called for scroll events, due to the 2697 - * restriction imposed in hidpp_usages. 2698 - */ 2699 - struct hidpp_device *hidpp = hid_get_drvdata(hdev); 2700 - struct hid_scroll_counter *counter = &hidpp->vertical_wheel_counter; 2701 - /* A scroll event may occur before the multiplier has been retrieved or 2702 - * the input device set, or high-res scroll enabling may fail. In such 2703 - * cases we must return early (falling back to default behaviour) to 2704 - * avoid a crash in hid_scroll_counter_handle_scroll. 2705 - */ 2706 - if (!(hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) || value == 0 2707 - || counter->dev == NULL || counter->resolution_multiplier == 0) 2708 - return 0; 2709 - 2710 - hid_scroll_counter_handle_scroll(counter, value); 2711 - return 1; 2712 2887 } 2713 2888 2714 2889 static int hidpp_initialize_battery(struct hidpp_device *hidpp) ··· 2901 3118 if (hidpp->battery.ps) 2902 3119 power_supply_changed(hidpp->battery.ps); 2903 3120 2904 - if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) 2905 - hi_res_scroll_enable(hidpp); 2906 - 2907 3121 if (!(hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT) || hidpp->delayed_input) 2908 3122 /* if the input nodes are already created, we can stop now */ 2909 3123 return; ··· 3086 3306 mutex_destroy(&hidpp->send_mutex); 3087 3307 } 3088 3308 3089 - #define LDJ_DEVICE(product) \ 3090 - HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, \ 3091 - USB_VENDOR_ID_LOGITECH, (product)) 3092 - 3093 3309 static const struct hid_device_id hidpp_devices[] = { 3094 3310 { /* wireless touchpad */ 3095 - LDJ_DEVICE(0x4011), 3311 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3312 + USB_VENDOR_ID_LOGITECH, 0x4011), 3096 3313 .driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT | 3097 3314 HIDPP_QUIRK_WTP_PHYSICAL_BUTTONS }, 3098 3315 { /* wireless touchpad T650 */ 3099 - LDJ_DEVICE(0x4101), 3316 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3317 + USB_VENDOR_ID_LOGITECH, 0x4101), 3100 3318 
.driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT }, 3101 3319 { /* wireless touchpad T651 */ 3102 3320 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 3103 3321 USB_DEVICE_ID_LOGITECH_T651), 3104 3322 .driver_data = HIDPP_QUIRK_CLASS_WTP }, 3105 - { /* Mouse Logitech Anywhere MX */ 3106 - LDJ_DEVICE(0x1017), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 }, 3107 - { /* Mouse Logitech Cube */ 3108 - LDJ_DEVICE(0x4010), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 }, 3109 - { /* Mouse Logitech M335 */ 3110 - LDJ_DEVICE(0x4050), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3111 - { /* Mouse Logitech M515 */ 3112 - LDJ_DEVICE(0x4007), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 }, 3113 3323 { /* Mouse logitech M560 */ 3114 - LDJ_DEVICE(0x402d), 3115 - .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560 3116 - | HIDPP_QUIRK_HI_RES_SCROLL_X2120 }, 3117 - { /* Mouse Logitech M705 (firmware RQM17) */ 3118 - LDJ_DEVICE(0x101b), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 }, 3119 - { /* Mouse Logitech M705 (firmware RQM67) */ 3120 - LDJ_DEVICE(0x406d), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3121 - { /* Mouse Logitech M720 */ 3122 - LDJ_DEVICE(0x405e), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3123 - { /* Mouse Logitech MX Anywhere 2 */ 3124 - LDJ_DEVICE(0x404a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3125 - { LDJ_DEVICE(0xb013), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3126 - { LDJ_DEVICE(0xb018), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3127 - { LDJ_DEVICE(0xb01f), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3128 - { /* Mouse Logitech MX Anywhere 2S */ 3129 - LDJ_DEVICE(0x406a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3130 - { /* Mouse Logitech MX Master */ 3131 - LDJ_DEVICE(0x4041), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3132 - { LDJ_DEVICE(0x4060), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3133 - { LDJ_DEVICE(0x4071), .driver_data = 
HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3134 - { /* Mouse Logitech MX Master 2S */ 3135 - LDJ_DEVICE(0x4069), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3136 - { /* Mouse Logitech Performance MX */ 3137 - LDJ_DEVICE(0x101a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 }, 3324 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3325 + USB_VENDOR_ID_LOGITECH, 0x402d), 3326 + .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560 }, 3138 3327 { /* Keyboard logitech K400 */ 3139 - LDJ_DEVICE(0x4024), 3328 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3329 + USB_VENDOR_ID_LOGITECH, 0x4024), 3140 3330 .driver_data = HIDPP_QUIRK_CLASS_K400 }, 3141 3331 { /* Solar Keyboard Logitech K750 */ 3142 - LDJ_DEVICE(0x4002), 3332 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3333 + USB_VENDOR_ID_LOGITECH, 0x4002), 3143 3334 .driver_data = HIDPP_QUIRK_CLASS_K750 }, 3144 3335 3145 - { LDJ_DEVICE(HID_ANY_ID) }, 3336 + { HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3337 + USB_VENDOR_ID_LOGITECH, HID_ANY_ID)}, 3146 3338 3147 3339 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_G920_WHEEL), 3148 3340 .driver_data = HIDPP_QUIRK_CLASS_G920 | HIDPP_QUIRK_FORCE_OUTPUT_REPORTS}, ··· 3123 3371 3124 3372 MODULE_DEVICE_TABLE(hid, hidpp_devices); 3125 3373 3126 - static const struct hid_usage_id hidpp_usages[] = { 3127 - { HID_GD_WHEEL, EV_REL, REL_WHEEL }, 3128 - { HID_ANY_ID - 1, HID_ANY_ID - 1, HID_ANY_ID - 1} 3129 - }; 3130 - 3131 3374 static struct hid_driver hidpp_driver = { 3132 3375 .name = "logitech-hidpp-device", 3133 3376 .id_table = hidpp_devices, 3134 3377 .probe = hidpp_probe, 3135 3378 .remove = hidpp_remove, 3136 3379 .raw_event = hidpp_raw_event, 3137 - .usage_table = hidpp_usages, 3138 - .event = hidpp_event, 3139 3380 .input_configured = hidpp_input_configured, 3140 3381 .input_mapping = hidpp_input_mapping, 3141 3382 .input_mapped = hidpp_input_mapped,
+6
drivers/hid/hid-multitouch.c
··· 1814 1814 MT_USB_DEVICE(USB_VENDOR_ID_CHUNGHWAT, 1815 1815 USB_DEVICE_ID_CHUNGHWAT_MULTITOUCH) }, 1816 1816 1817 + /* Cirque devices */ 1818 + { .driver_data = MT_CLS_WIN_8_DUAL, 1819 + HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, 1820 + I2C_VENDOR_ID_CIRQUE, 1821 + I2C_PRODUCT_ID_CIRQUE_121F) }, 1822 + 1817 1823 /* CJTouch panels */ 1818 1824 { .driver_data = MT_CLS_NSMU, 1819 1825 MT_USB_DEVICE(USB_VENDOR_ID_CJTOUCH,
+3
drivers/hid/hid-quirks.c
··· 107 107 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C05A), HID_QUIRK_ALWAYS_POLL }, 108 108 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C06A), HID_QUIRK_ALWAYS_POLL }, 109 109 { HID_USB_DEVICE(USB_VENDOR_ID_MCS, USB_DEVICE_ID_MCS_GAMEPADBLOCK), HID_QUIRK_MULTI_INPUT }, 110 + { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PIXART_MOUSE), HID_QUIRK_ALWAYS_POLL }, 110 111 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_POWER_COVER), HID_QUIRK_NO_INIT_REPORTS }, 111 112 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE_PRO_2), HID_QUIRK_NO_INIT_REPORTS }, 112 113 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TOUCH_COVER_2), HID_QUIRK_NO_INIT_REPORTS }, ··· 130 129 { HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN), HID_QUIRK_NO_INIT_REPORTS }, 131 130 { HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL }, 132 131 { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_MOUSE_4D22), HID_QUIRK_ALWAYS_POLL }, 132 + { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F), HID_QUIRK_ALWAYS_POLL }, 133 + { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22), HID_QUIRK_ALWAYS_POLL }, 133 134 { HID_USB_DEVICE(USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS), HID_QUIRK_NOGET }, 134 135 { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001), HID_QUIRK_NOGET }, 135 136 { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3003), HID_QUIRK_NOGET },
+90 -64
drivers/hid/hid-steam.c
··· 23 23 * In order to avoid breaking them this driver creates a layered hidraw device, 24 24 * so it can detect when the client is running and then: 25 25 * - it will not send any command to the controller. 26 - * - this input device will be disabled, to avoid double input of the same 26 + * - this input device will be removed, to avoid double input of the same 27 27 * user action. 28 + * When the client is closed, this input device will be created again. 28 29 * 29 30 * For additional functions, such as changing the right-pad margin or switching 30 31 * the led, you can use the user-space tool at: ··· 114 113 spinlock_t lock; 115 114 struct hid_device *hdev, *client_hdev; 116 115 struct mutex mutex; 117 - bool client_opened, input_opened; 116 + bool client_opened; 118 117 struct input_dev __rcu *input; 119 118 unsigned long quirks; 120 119 struct work_struct work_connect; ··· 280 279 } 281 280 } 282 281 283 - static void steam_update_lizard_mode(struct steam_device *steam) 284 - { 285 - mutex_lock(&steam->mutex); 286 - if (!steam->client_opened) { 287 - if (steam->input_opened) 288 - steam_set_lizard_mode(steam, false); 289 - else 290 - steam_set_lizard_mode(steam, lizard_mode); 291 - } 292 - mutex_unlock(&steam->mutex); 293 - } 294 - 295 282 static int steam_input_open(struct input_dev *dev) 296 283 { 297 284 struct steam_device *steam = input_get_drvdata(dev); ··· 290 301 return ret; 291 302 292 303 mutex_lock(&steam->mutex); 293 - steam->input_opened = true; 294 304 if (!steam->client_opened && lizard_mode) 295 305 steam_set_lizard_mode(steam, false); 296 306 mutex_unlock(&steam->mutex); ··· 301 313 struct steam_device *steam = input_get_drvdata(dev); 302 314 303 315 mutex_lock(&steam->mutex); 304 - steam->input_opened = false; 305 316 if (!steam->client_opened && lizard_mode) 306 317 steam_set_lizard_mode(steam, true); 307 318 mutex_unlock(&steam->mutex); ··· 387 400 return 0; 388 401 } 389 402 390 - static int steam_register(struct steam_device *steam) 403 
+ static int steam_input_register(struct steam_device *steam) 391 404 { 392 405 struct hid_device *hdev = steam->hdev; 393 406 struct input_dev *input; ··· 400 413 dbg_hid("%s: already connected\n", __func__); 401 414 return 0; 402 415 } 403 - 404 - /* 405 - * Unlikely, but getting the serial could fail, and it is not so 406 - * important, so make up a serial number and go on. 407 - */ 408 - if (steam_get_serial(steam) < 0) 409 - strlcpy(steam->serial_no, "XXXXXXXXXX", 410 - sizeof(steam->serial_no)); 411 - 412 - hid_info(hdev, "Steam Controller '%s' connected", 413 - steam->serial_no); 414 416 415 417 input = input_allocate_device(); 416 418 if (!input) ··· 468 492 goto input_register_fail; 469 493 470 494 rcu_assign_pointer(steam->input, input); 471 - 472 - /* ignore battery errors, we can live without it */ 473 - if (steam->quirks & STEAM_QUIRK_WIRELESS) 474 - steam_battery_register(steam); 475 - 476 495 return 0; 477 496 478 497 input_register_fail: ··· 475 504 return ret; 476 505 } 477 506 478 - static void steam_unregister(struct steam_device *steam) 507 + static void steam_input_unregister(struct steam_device *steam) 479 508 { 480 509 struct input_dev *input; 510 + rcu_read_lock(); 511 + input = rcu_dereference(steam->input); 512 + rcu_read_unlock(); 513 + if (!input) 514 + return; 515 + RCU_INIT_POINTER(steam->input, NULL); 516 + synchronize_rcu(); 517 + input_unregister_device(input); 518 + } 519 + 520 + static void steam_battery_unregister(struct steam_device *steam) 521 + { 481 522 struct power_supply *battery; 482 523 483 524 rcu_read_lock(); 484 - input = rcu_dereference(steam->input); 485 525 battery = rcu_dereference(steam->battery); 486 526 rcu_read_unlock(); 487 527 488 - if (battery) { 489 - RCU_INIT_POINTER(steam->battery, NULL); 490 - synchronize_rcu(); 491 - power_supply_unregister(battery); 528 + if (!battery) 529 + return; 530 + RCU_INIT_POINTER(steam->battery, NULL); 531 + synchronize_rcu(); 532 + power_supply_unregister(battery); 533 + } 
534 + 535 + static int steam_register(struct steam_device *steam) 536 + { 537 + int ret; 538 + 539 + /* 540 + * This function can be called several times in a row with the 541 + * wireless adaptor, without steam_unregister() between them, because 542 + * another client sends a get_connection_status command, for example. 543 + * The battery and serial number are set just once per device. 544 + */ 545 + if (!steam->serial_no[0]) { 546 + /* 547 + * Unlikely, but getting the serial could fail, and it is not so 548 + * important, so make up a serial number and go on. 549 + */ 550 + if (steam_get_serial(steam) < 0) 551 + strlcpy(steam->serial_no, "XXXXXXXXXX", 552 + sizeof(steam->serial_no)); 553 + 554 + hid_info(steam->hdev, "Steam Controller '%s' connected", 555 + steam->serial_no); 556 + 557 + /* ignore battery errors, we can live without it */ 558 + if (steam->quirks & STEAM_QUIRK_WIRELESS) 559 + steam_battery_register(steam); 560 + 561 + mutex_lock(&steam_devices_lock); 562 + list_add(&steam->list, &steam_devices); 563 + mutex_unlock(&steam_devices_lock); 492 564 } 493 - if (input) { 494 - RCU_INIT_POINTER(steam->input, NULL); 495 - synchronize_rcu(); 565 + 566 + mutex_lock(&steam->mutex); 567 + if (!steam->client_opened) { 568 + steam_set_lizard_mode(steam, lizard_mode); 569 + ret = steam_input_register(steam); 570 + } else { 571 + ret = 0; 572 + } 573 + mutex_unlock(&steam->mutex); 574 + 575 + return ret; 576 + } 577 + 578 + static void steam_unregister(struct steam_device *steam) 579 + { 580 + steam_battery_unregister(steam); 581 + steam_input_unregister(steam); 582 + if (steam->serial_no[0]) { 496 583 hid_info(steam->hdev, "Steam Controller '%s' disconnected", 497 584 steam->serial_no); 498 - input_unregister_device(input); 585 + mutex_lock(&steam_devices_lock); 586 + list_del(&steam->list); 587 + mutex_unlock(&steam_devices_lock); 588 + steam->serial_no[0] = 0; 499 589 } 500 590 } 501 591 ··· 632 600 mutex_lock(&steam->mutex); 633 601 steam->client_opened = 
true; 634 602 mutex_unlock(&steam->mutex); 603 + 604 + steam_input_unregister(steam); 605 + 635 606 return ret; 636 607 } 637 608 ··· 644 609 645 610 mutex_lock(&steam->mutex); 646 611 steam->client_opened = false; 647 - if (steam->input_opened) 648 - steam_set_lizard_mode(steam, false); 649 - else 650 - steam_set_lizard_mode(steam, lizard_mode); 651 612 mutex_unlock(&steam->mutex); 652 613 653 614 hid_hw_close(steam->hdev); 615 + if (steam->connected) { 616 + steam_set_lizard_mode(steam, lizard_mode); 617 + steam_input_register(steam); 618 + } 654 619 } 655 620 656 621 static int steam_client_ll_raw_request(struct hid_device *hdev, ··· 779 744 } 780 745 } 781 746 782 - mutex_lock(&steam_devices_lock); 783 - steam_update_lizard_mode(steam); 784 - list_add(&steam->list, &steam_devices); 785 - mutex_unlock(&steam_devices_lock); 786 - 787 747 return 0; 788 748 789 749 hid_hw_open_fail: ··· 804 774 return; 805 775 } 806 776 807 - mutex_lock(&steam_devices_lock); 808 - list_del(&steam->list); 809 - mutex_unlock(&steam_devices_lock); 810 - 811 777 hid_destroy_device(steam->client_hdev); 812 778 steam->client_opened = false; 813 779 cancel_work_sync(&steam->work_connect); ··· 818 792 static void steam_do_connect_event(struct steam_device *steam, bool connected) 819 793 { 820 794 unsigned long flags; 795 + bool changed; 821 796 822 797 spin_lock_irqsave(&steam->lock, flags); 798 + changed = steam->connected != connected; 823 799 steam->connected = connected; 824 800 spin_unlock_irqrestore(&steam->lock, flags); 825 801 826 - if (schedule_work(&steam->work_connect) == 0) 802 + if (changed && schedule_work(&steam->work_connect) == 0) 827 803 dbg_hid("%s: connected=%d event already queued\n", 828 804 __func__, connected); 829 805 } ··· 1047 1019 return 0; 1048 1020 rcu_read_lock(); 1049 1021 input = rcu_dereference(steam->input); 1050 - if (likely(input)) { 1022 + if (likely(input)) 1051 1023 steam_do_input_event(steam, input, data); 1052 - } else { 1053 - dbg_hid("%s: input 
data without connect event\n", 1054 - __func__); 1055 - steam_do_connect_event(steam, true); 1056 - } 1057 1024 rcu_read_unlock(); 1058 1025 break; 1059 1026 case STEAM_EV_CONNECT: ··· 1097 1074 1098 1075 mutex_lock(&steam_devices_lock); 1099 1076 list_for_each_entry(steam, &steam_devices, list) { 1100 - steam_update_lizard_mode(steam); 1077 + mutex_lock(&steam->mutex); 1078 + if (!steam->client_opened) 1079 + steam_set_lizard_mode(steam, lizard_mode); 1080 + mutex_unlock(&steam->mutex); 1101 1081 } 1102 1082 mutex_unlock(&steam_devices_lock); 1103 1083 return 0;
+2
drivers/hid/i2c-hid/i2c-hid-core.c
··· 177 177 I2C_HID_QUIRK_NO_RUNTIME_PM }, 178 178 { I2C_VENDOR_ID_RAYDIUM, I2C_PRODUCT_ID_RAYDIUM_4B33, 179 179 I2C_HID_QUIRK_DELAY_AFTER_SLEEP }, 180 + { USB_VENDOR_ID_LG, I2C_DEVICE_ID_LG_8001, 181 + I2C_HID_QUIRK_NO_RUNTIME_PM }, 180 182 { 0, 0 } 181 183 }; 182 184
+19 -6
drivers/hid/uhid.c
··· 12 12 13 13 #include <linux/atomic.h> 14 14 #include <linux/compat.h> 15 + #include <linux/cred.h> 15 16 #include <linux/device.h> 16 17 #include <linux/fs.h> 17 18 #include <linux/hid.h> ··· 497 496 goto err_free; 498 497 } 499 498 500 - len = min(sizeof(hid->name), sizeof(ev->u.create2.name)); 501 - strlcpy(hid->name, ev->u.create2.name, len); 502 - len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)); 503 - strlcpy(hid->phys, ev->u.create2.phys, len); 504 - len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)); 505 - strlcpy(hid->uniq, ev->u.create2.uniq, len); 499 + /* @hid is zero-initialized, strncpy() is correct, strlcpy() not */ 500 + len = min(sizeof(hid->name), sizeof(ev->u.create2.name)) - 1; 501 + strncpy(hid->name, ev->u.create2.name, len); 502 + len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)) - 1; 503 + strncpy(hid->phys, ev->u.create2.phys, len); 504 + len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)) - 1; 505 + strncpy(hid->uniq, ev->u.create2.uniq, len); 506 506 507 507 hid->ll_driver = &uhid_hid_driver; 508 508 hid->bus = ev->u.create2.bus; ··· 724 722 725 723 switch (uhid->input_buf.type) { 726 724 case UHID_CREATE: 725 + /* 726 + * 'struct uhid_create_req' contains a __user pointer which is 727 + * copied from, so it's unsafe to allow this with elevated 728 + * privileges (e.g. from a setuid binary) or via kernel_write(). 729 + */ 730 + if (file->f_cred != current_cred() || uaccess_kernel()) { 731 + pr_err_once("UHID_CREATE from different security context by process %d (%s), this is not allowed.\n", 732 + task_tgid_vnr(current), current->comm); 733 + ret = -EACCES; 734 + goto unlock; 735 + } 727 736 ret = uhid_dev_create(uhid, &uhid->input_buf); 728 737 break; 729 738 case UHID_CREATE2:
+22 -4
drivers/hv/hv_kvp.c
··· 353 353 354 354 out->body.kvp_ip_val.dhcp_enabled = in->kvp_ip_val.dhcp_enabled; 355 355 356 + /* fallthrough */ 357 + 358 + case KVP_OP_GET_IP_INFO: 356 359 utf16s_to_utf8s((wchar_t *)in->kvp_ip_val.adapter_id, 357 360 MAX_ADAPTER_ID_SIZE, 358 361 UTF16_LITTLE_ENDIAN, ··· 408 405 process_ib_ipinfo(in_msg, message, KVP_OP_SET_IP_INFO); 409 406 break; 410 407 case KVP_OP_GET_IP_INFO: 411 - /* We only need to pass on message->kvp_hdr.operation. */ 408 + /* 409 + * We only need to pass on the info of operation, adapter_id 410 + * and addr_family to the userland kvp daemon. 411 + */ 412 + process_ib_ipinfo(in_msg, message, KVP_OP_GET_IP_INFO); 412 413 break; 413 414 case KVP_OP_SET: 414 415 switch (in_msg->body.kvp_set.data.value_type) { ··· 453 446 454 447 } 455 448 456 - break; 457 - 458 - case KVP_OP_GET: 449 + /* 450 + * The key is always a string - utf16 encoding. 451 + */ 459 452 message->body.kvp_set.data.key_size = 460 453 utf16s_to_utf8s( 461 454 (wchar_t *)in_msg->body.kvp_set.data.key, 462 455 in_msg->body.kvp_set.data.key_size, 463 456 UTF16_LITTLE_ENDIAN, 464 457 message->body.kvp_set.data.key, 458 + HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1; 459 + 460 + break; 461 + 462 + case KVP_OP_GET: 463 + message->body.kvp_get.data.key_size = 464 + utf16s_to_utf8s( 465 + (wchar_t *)in_msg->body.kvp_get.data.key, 466 + in_msg->body.kvp_get.data.key_size, 467 + UTF16_LITTLE_ENDIAN, 468 + message->body.kvp_get.data.key, 465 469 HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1; 466 470 break; 467 471
+2 -1
drivers/iommu/amd_iommu_init.c
··· 797 797 entry = iommu_virt_to_phys(iommu->ga_log) | GA_LOG_SIZE_512; 798 798 memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_BASE_OFFSET, 799 799 &entry, sizeof(entry)); 800 - entry = (iommu_virt_to_phys(iommu->ga_log) & 0xFFFFFFFFFFFFFULL) & ~7ULL; 800 + entry = (iommu_virt_to_phys(iommu->ga_log_tail) & 801 + (BIT_ULL(52)-1)) & ~7ULL; 801 802 memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_TAIL_OFFSET, 802 803 &entry, sizeof(entry)); 803 804 writel(0x00, iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
+1 -1
drivers/iommu/intel-iommu.c
··· 3075 3075 } 3076 3076 3077 3077 if (old_ce) 3078 - iounmap(old_ce); 3078 + memunmap(old_ce); 3079 3079 3080 3080 ret = 0; 3081 3081 if (devfn < 0x80)
+1 -1
drivers/iommu/intel-svm.c
··· 595 595 pr_err("%s: Page request without PASID: %08llx %08llx\n", 596 596 iommu->name, ((unsigned long long *)req)[0], 597 597 ((unsigned long long *)req)[1]); 598 - goto bad_req; 598 + goto no_pasid; 599 599 } 600 600 601 601 if (!svm || svm->pasid != req->pasid) {
+3
drivers/iommu/ipmmu-vmsa.c
··· 498 498 499 499 static void ipmmu_domain_destroy_context(struct ipmmu_vmsa_domain *domain) 500 500 { 501 + if (!domain->mmu) 502 + return; 503 + 501 504 /* 502 505 * Disable the context. Flush the TLB as required when modifying the 503 506 * context registers.
+38 -11
drivers/media/cec/cec-adap.c
··· 807 807 } 808 808 809 809 if (adap->transmit_queue_sz >= CEC_MAX_MSG_TX_QUEUE_SZ) { 810 - dprintk(1, "%s: transmit queue full\n", __func__); 810 + dprintk(2, "%s: transmit queue full\n", __func__); 811 811 return -EBUSY; 812 812 } 813 813 ··· 1180 1180 { 1181 1181 struct cec_log_addrs *las = &adap->log_addrs; 1182 1182 struct cec_msg msg = { }; 1183 + const unsigned int max_retries = 2; 1184 + unsigned int i; 1183 1185 int err; 1184 1186 1185 1187 if (cec_has_log_addr(adap, log_addr)) ··· 1190 1188 /* Send poll message */ 1191 1189 msg.len = 1; 1192 1190 msg.msg[0] = (log_addr << 4) | log_addr; 1193 - err = cec_transmit_msg_fh(adap, &msg, NULL, true); 1191 + 1192 + for (i = 0; i < max_retries; i++) { 1193 + err = cec_transmit_msg_fh(adap, &msg, NULL, true); 1194 + 1195 + /* 1196 + * While trying to poll the physical address was reset 1197 + * and the adapter was unconfigured, so bail out. 1198 + */ 1199 + if (!adap->is_configuring) 1200 + return -EINTR; 1201 + 1202 + if (err) 1203 + return err; 1204 + 1205 + /* 1206 + * The message was aborted due to a disconnect or 1207 + * unconfigure, just bail out. 1208 + */ 1209 + if (msg.tx_status & CEC_TX_STATUS_ABORTED) 1210 + return -EINTR; 1211 + if (msg.tx_status & CEC_TX_STATUS_OK) 1212 + return 0; 1213 + if (msg.tx_status & CEC_TX_STATUS_NACK) 1214 + break; 1215 + /* 1216 + * Retry up to max_retries times if the message was neither 1217 + * OKed nor NACKed. This can happen due to e.g. a Lost 1218 + * Arbitration condition. 1219 + */ 1220 + } 1194 1221 1195 1222 /* 1196 - * While trying to poll the physical address was reset 1197 - * and the adapter was unconfigured, so bail out. 1223 + * If we are unable to get an OK or a NACK after max_retries attempts 1224 + * (and note that each attempt already consists of four polls), then 1225 + * we assume that something is really weird and that it is not a 1226 + * good idea to try and claim this logical address. 
1198 1227 */ 1199 - if (!adap->is_configuring) 1200 - return -EINTR; 1201 - 1202 - if (err) 1203 - return err; 1204 - 1205 - if (msg.tx_status & CEC_TX_STATUS_OK) 1228 + if (i == max_retries) 1206 1229 return 0; 1207 1230 1208 1231 /*
-1
drivers/media/i2c/tc358743.c
··· 1918 1918 ret = v4l2_fwnode_endpoint_alloc_parse(of_fwnode_handle(ep), &endpoint); 1919 1919 if (ret) { 1920 1920 dev_err(dev, "failed to parse endpoint\n"); 1921 - ret = ret; 1922 1921 goto put_node; 1923 1922 } 1924 1923
+4 -6
drivers/media/pci/intel/ipu3/ipu3-cio2.c
··· 1844 1844 static void cio2_pci_remove(struct pci_dev *pci_dev) 1845 1845 { 1846 1846 struct cio2_device *cio2 = pci_get_drvdata(pci_dev); 1847 - unsigned int i; 1848 1847 1849 - cio2_notifier_exit(cio2); 1850 - cio2_fbpt_exit_dummy(cio2); 1851 - for (i = 0; i < CIO2_QUEUES; i++) 1852 - cio2_queue_exit(cio2, &cio2->queue[i]); 1853 - v4l2_device_unregister(&cio2->v4l2_dev); 1854 1848 media_device_unregister(&cio2->media_dev); 1849 + cio2_notifier_exit(cio2); 1850 + cio2_queues_exit(cio2); 1851 + cio2_fbpt_exit_dummy(cio2); 1852 + v4l2_device_unregister(&cio2->v4l2_dev); 1855 1853 media_device_cleanup(&cio2->media_dev); 1856 1854 mutex_destroy(&cio2->lock); 1857 1855 }
+2 -1
drivers/media/platform/omap3isp/isp.c
··· 1587 1587 1588 1588 static void isp_unregister_entities(struct isp_device *isp) 1589 1589 { 1590 + media_device_unregister(&isp->media_dev); 1591 + 1590 1592 omap3isp_csi2_unregister_entities(&isp->isp_csi2a); 1591 1593 omap3isp_ccp2_unregister_entities(&isp->isp_ccp2); 1592 1594 omap3isp_ccdc_unregister_entities(&isp->isp_ccdc); ··· 1599 1597 omap3isp_stat_unregister_entities(&isp->isp_hist); 1600 1598 1601 1599 v4l2_device_unregister(&isp->v4l2_dev); 1602 - media_device_unregister(&isp->media_dev); 1603 1600 media_device_cleanup(&isp->media_dev); 1604 1601 } 1605 1602
+1 -1
drivers/media/platform/vicodec/vicodec-core.c
··· 42 42 #define MAX_WIDTH 4096U 43 43 #define MIN_WIDTH 640U 44 44 #define MAX_HEIGHT 2160U 45 - #define MIN_HEIGHT 480U 45 + #define MIN_HEIGHT 360U 46 46 47 47 #define dprintk(dev, fmt, arg...) \ 48 48 v4l2_dbg(1, debug, &dev->v4l2_dev, "%s: " fmt, __func__, ## arg)
+1 -1
drivers/media/platform/vim2m.c
··· 1009 1009 1010 1010 static const struct media_device_ops m2m_media_ops = { 1011 1011 .req_validate = vb2_request_validate, 1012 - .req_queue = vb2_m2m_request_queue, 1012 + .req_queue = v4l2_m2m_request_queue, 1013 1013 }; 1014 1014 1015 1015 static int vim2m_probe(struct platform_device *pdev)
+5
drivers/media/v4l2-core/v4l2-ctrls.c
··· 1664 1664 p_mpeg2_slice_params->forward_ref_index >= VIDEO_MAX_FRAME) 1665 1665 return -EINVAL; 1666 1666 1667 + if (p_mpeg2_slice_params->pad || 1668 + p_mpeg2_slice_params->picture.pad || 1669 + p_mpeg2_slice_params->sequence.pad) 1670 + return -EINVAL; 1671 + 1667 1672 return 0; 1668 1673 1669 1674 case V4L2_CTRL_TYPE_MPEG2_QUANTIZATION:
+24 -19
drivers/media/v4l2-core/v4l2-event.c
··· 193 193 } 194 194 EXPORT_SYMBOL_GPL(v4l2_event_pending); 195 195 196 + static void __v4l2_event_unsubscribe(struct v4l2_subscribed_event *sev) 197 + { 198 + struct v4l2_fh *fh = sev->fh; 199 + unsigned int i; 200 + 201 + lockdep_assert_held(&fh->subscribe_lock); 202 + assert_spin_locked(&fh->vdev->fh_lock); 203 + 204 + /* Remove any pending events for this subscription */ 205 + for (i = 0; i < sev->in_use; i++) { 206 + list_del(&sev->events[sev_pos(sev, i)].list); 207 + fh->navailable--; 208 + } 209 + list_del(&sev->list); 210 + } 211 + 196 212 int v4l2_event_subscribe(struct v4l2_fh *fh, 197 213 const struct v4l2_event_subscription *sub, unsigned elems, 198 214 const struct v4l2_subscribed_event_ops *ops) ··· 240 224 241 225 spin_lock_irqsave(&fh->vdev->fh_lock, flags); 242 226 found_ev = v4l2_event_subscribed(fh, sub->type, sub->id); 227 + if (!found_ev) 228 + list_add(&sev->list, &fh->subscribed); 243 229 spin_unlock_irqrestore(&fh->vdev->fh_lock, flags); 244 230 245 231 if (found_ev) { 246 232 /* Already listening */ 247 233 kvfree(sev); 248 - goto out_unlock; 249 - } 250 - 251 - if (sev->ops && sev->ops->add) { 234 + } else if (sev->ops && sev->ops->add) { 252 235 ret = sev->ops->add(sev, elems); 253 236 if (ret) { 237 + spin_lock_irqsave(&fh->vdev->fh_lock, flags); 238 + __v4l2_event_unsubscribe(sev); 239 + spin_unlock_irqrestore(&fh->vdev->fh_lock, flags); 254 240 kvfree(sev); 255 - goto out_unlock; 256 241 } 257 242 } 258 243 259 - spin_lock_irqsave(&fh->vdev->fh_lock, flags); 260 - list_add(&sev->list, &fh->subscribed); 261 - spin_unlock_irqrestore(&fh->vdev->fh_lock, flags); 262 - 263 - out_unlock: 264 244 mutex_unlock(&fh->subscribe_lock); 265 245 266 246 return ret; ··· 291 279 { 292 280 struct v4l2_subscribed_event *sev; 293 281 unsigned long flags; 294 - int i; 295 282 296 283 if (sub->type == V4L2_EVENT_ALL) { 297 284 v4l2_event_unsubscribe_all(fh); ··· 302 291 spin_lock_irqsave(&fh->vdev->fh_lock, flags); 303 292 304 293 sev = 
v4l2_event_subscribed(fh, sub->type, sub->id); 305 - if (sev != NULL) { 306 - /* Remove any pending events for this subscription */ 307 - for (i = 0; i < sev->in_use; i++) { 308 - list_del(&sev->events[sev_pos(sev, i)].list); 309 - fh->navailable--; 310 - } 311 - list_del(&sev->list); 312 - } 294 + if (sev != NULL) 295 + __v4l2_event_unsubscribe(sev); 313 296 314 297 spin_unlock_irqrestore(&fh->vdev->fh_lock, flags); 315 298
+2 -2
drivers/media/v4l2-core/v4l2-mem2mem.c
··· 953 953 } 954 954 EXPORT_SYMBOL_GPL(v4l2_m2m_buf_queue); 955 955 956 - void vb2_m2m_request_queue(struct media_request *req) 956 + void v4l2_m2m_request_queue(struct media_request *req) 957 957 { 958 958 struct media_request_object *obj, *obj_safe; 959 959 struct v4l2_m2m_ctx *m2m_ctx = NULL; ··· 997 997 if (m2m_ctx) 998 998 v4l2_m2m_try_schedule(m2m_ctx); 999 999 } 1000 - EXPORT_SYMBOL_GPL(vb2_m2m_request_queue); 1000 + EXPORT_SYMBOL_GPL(v4l2_m2m_request_queue); 1001 1001 1002 1002 /* Videobuf2 ioctl helpers */ 1003 1003
+1 -1
drivers/misc/atmel-ssc.c
··· 132 132 MODULE_DEVICE_TABLE(of, atmel_ssc_dt_ids); 133 133 #endif 134 134 135 - static inline const struct atmel_ssc_platform_data * __init 135 + static inline const struct atmel_ssc_platform_data * 136 136 atmel_ssc_get_driver_data(struct platform_device *pdev) 137 137 { 138 138 if (pdev->dev.of_node) {
+4
drivers/misc/sgi-gru/grukdump.c
··· 27 27 #include <linux/delay.h> 28 28 #include <linux/bitops.h> 29 29 #include <asm/uv/uv_hub.h> 30 + 31 + #include <linux/nospec.h> 32 + 30 33 #include "gru.h" 31 34 #include "grutables.h" 32 35 #include "gruhandles.h" ··· 199 196 /* Currently, only dump by gid is implemented */ 200 197 if (req.gid >= gru_max_gids) 201 198 return -EINVAL; 199 + req.gid = array_index_nospec(req.gid, gru_max_gids); 202 200 203 201 gru = GID_TO_GRU(req.gid); 204 202 ubuf = req.buf;
+83 -3
drivers/mmc/host/sdhci-pci-core.c
··· 12 12 * - JMicron (hardware and technical support) 13 13 */ 14 14 15 + #include <linux/bitfield.h> 15 16 #include <linux/string.h> 16 17 #include <linux/delay.h> 17 18 #include <linux/highmem.h> ··· 463 462 u32 dsm_fns; 464 463 int drv_strength; 465 464 bool d3_retune; 465 + bool rpm_retune_ok; 466 + u32 glk_rx_ctrl1; 467 + u32 glk_tun_val; 466 468 }; 467 469 468 470 static const guid_t intel_dsm_guid = ··· 795 791 return ret; 796 792 } 797 793 794 + #ifdef CONFIG_PM 795 + #define GLK_RX_CTRL1 0x834 796 + #define GLK_TUN_VAL 0x840 797 + #define GLK_PATH_PLL GENMASK(13, 8) 798 + #define GLK_DLY GENMASK(6, 0) 799 + /* Workaround firmware failing to restore the tuning value */ 800 + static void glk_rpm_retune_wa(struct sdhci_pci_chip *chip, bool susp) 801 + { 802 + struct sdhci_pci_slot *slot = chip->slots[0]; 803 + struct intel_host *intel_host = sdhci_pci_priv(slot); 804 + struct sdhci_host *host = slot->host; 805 + u32 glk_rx_ctrl1; 806 + u32 glk_tun_val; 807 + u32 dly; 808 + 809 + if (intel_host->rpm_retune_ok || !mmc_can_retune(host->mmc)) 810 + return; 811 + 812 + glk_rx_ctrl1 = sdhci_readl(host, GLK_RX_CTRL1); 813 + glk_tun_val = sdhci_readl(host, GLK_TUN_VAL); 814 + 815 + if (susp) { 816 + intel_host->glk_rx_ctrl1 = glk_rx_ctrl1; 817 + intel_host->glk_tun_val = glk_tun_val; 818 + return; 819 + } 820 + 821 + if (!intel_host->glk_tun_val) 822 + return; 823 + 824 + if (glk_rx_ctrl1 != intel_host->glk_rx_ctrl1) { 825 + intel_host->rpm_retune_ok = true; 826 + return; 827 + } 828 + 829 + dly = FIELD_PREP(GLK_DLY, FIELD_GET(GLK_PATH_PLL, glk_rx_ctrl1) + 830 + (intel_host->glk_tun_val << 1)); 831 + if (dly == FIELD_GET(GLK_DLY, glk_rx_ctrl1)) 832 + return; 833 + 834 + glk_rx_ctrl1 = (glk_rx_ctrl1 & ~GLK_DLY) | dly; 835 + sdhci_writel(host, glk_rx_ctrl1, GLK_RX_CTRL1); 836 + 837 + intel_host->rpm_retune_ok = true; 838 + chip->rpm_retune = true; 839 + mmc_retune_needed(host->mmc); 840 + pr_info("%s: Requiring re-tune after rpm resume", mmc_hostname(host->mmc)); 841 
+ } 842 + 843 + static void glk_rpm_retune_chk(struct sdhci_pci_chip *chip, bool susp) 844 + { 845 + if (chip->pdev->device == PCI_DEVICE_ID_INTEL_GLK_EMMC && 846 + !chip->rpm_retune) 847 + glk_rpm_retune_wa(chip, susp); 848 + } 849 + 850 + static int glk_runtime_suspend(struct sdhci_pci_chip *chip) 851 + { 852 + glk_rpm_retune_chk(chip, true); 853 + 854 + return sdhci_cqhci_runtime_suspend(chip); 855 + } 856 + 857 + static int glk_runtime_resume(struct sdhci_pci_chip *chip) 858 + { 859 + glk_rpm_retune_chk(chip, false); 860 + 861 + return sdhci_cqhci_runtime_resume(chip); 862 + } 863 + #endif 864 + 798 865 #ifdef CONFIG_ACPI 799 866 static int ni_set_max_freq(struct sdhci_pci_slot *slot) 800 867 { ··· 954 879 .resume = sdhci_cqhci_resume, 955 880 #endif 956 881 #ifdef CONFIG_PM 957 - .runtime_suspend = sdhci_cqhci_runtime_suspend, 958 - .runtime_resume = sdhci_cqhci_runtime_resume, 882 + .runtime_suspend = glk_runtime_suspend, 883 + .runtime_resume = glk_runtime_resume, 959 884 #endif 960 885 .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, 961 886 .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | ··· 1837 1762 device_init_wakeup(&pdev->dev, true); 1838 1763 1839 1764 if (slot->cd_idx >= 0) { 1840 - ret = mmc_gpiod_request_cd(host->mmc, NULL, slot->cd_idx, 1765 + ret = mmc_gpiod_request_cd(host->mmc, "cd", slot->cd_idx, 1841 1766 slot->cd_override_level, 0, NULL); 1767 + if (ret && ret != -EPROBE_DEFER) 1768 + ret = mmc_gpiod_request_cd(host->mmc, NULL, 1769 + slot->cd_idx, 1770 + slot->cd_override_level, 1771 + 0, NULL); 1842 1772 if (ret == -EPROBE_DEFER) 1843 1773 goto remove; 1844 1774
+7 -4
drivers/mtd/nand/raw/atmel/nand-controller.c
··· 2032 2032 int ret; 2033 2033 2034 2034 nand_np = dev->of_node; 2035 - nfc_np = of_find_compatible_node(dev->of_node, NULL, 2036 - "atmel,sama5d3-nfc"); 2035 + nfc_np = of_get_compatible_child(dev->of_node, "atmel,sama5d3-nfc"); 2037 2036 if (!nfc_np) { 2038 2037 dev_err(dev, "Could not find device node for sama5d3-nfc\n"); 2039 2038 return -ENODEV; ··· 2446 2447 } 2447 2448 2448 2449 if (caps->legacy_of_bindings) { 2450 + struct device_node *nfc_node; 2449 2451 u32 ale_offs = 21; 2450 2452 2451 2453 /* 2452 2454 * If we are parsing legacy DT props and the DT contains a 2453 2455 * valid NFC node, forward the request to the sama5 logic. 2454 2456 */ 2455 - if (of_find_compatible_node(pdev->dev.of_node, NULL, 2456 - "atmel,sama5d3-nfc")) 2457 + nfc_node = of_get_compatible_child(pdev->dev.of_node, 2458 + "atmel,sama5d3-nfc"); 2459 + if (nfc_node) { 2457 2460 caps = &atmel_sama5_nand_caps; 2461 + of_node_put(nfc_node); 2462 + } 2458 2463 2459 2464 /* 2460 2465 * Even if the compatible says we are dealing with an
+16 -16
drivers/mtd/nand/raw/qcom_nandc.c
··· 150 150 #define NAND_VERSION_MINOR_SHIFT 16 151 151 152 152 /* NAND OP_CMDs */ 153 - #define PAGE_READ 0x2 154 - #define PAGE_READ_WITH_ECC 0x3 155 - #define PAGE_READ_WITH_ECC_SPARE 0x4 156 - #define PROGRAM_PAGE 0x6 157 - #define PAGE_PROGRAM_WITH_ECC 0x7 158 - #define PROGRAM_PAGE_SPARE 0x9 159 - #define BLOCK_ERASE 0xa 160 - #define FETCH_ID 0xb 161 - #define RESET_DEVICE 0xd 153 + #define OP_PAGE_READ 0x2 154 + #define OP_PAGE_READ_WITH_ECC 0x3 155 + #define OP_PAGE_READ_WITH_ECC_SPARE 0x4 156 + #define OP_PROGRAM_PAGE 0x6 157 + #define OP_PAGE_PROGRAM_WITH_ECC 0x7 158 + #define OP_PROGRAM_PAGE_SPARE 0x9 159 + #define OP_BLOCK_ERASE 0xa 160 + #define OP_FETCH_ID 0xb 161 + #define OP_RESET_DEVICE 0xd 162 162 163 163 /* Default Value for NAND_DEV_CMD_VLD */ 164 164 #define NAND_DEV_CMD_VLD_VAL (READ_START_VLD | WRITE_START_VLD | \ ··· 692 692 693 693 if (read) { 694 694 if (host->use_ecc) 695 - cmd = PAGE_READ_WITH_ECC | PAGE_ACC | LAST_PAGE; 695 + cmd = OP_PAGE_READ_WITH_ECC | PAGE_ACC | LAST_PAGE; 696 696 else 697 - cmd = PAGE_READ | PAGE_ACC | LAST_PAGE; 697 + cmd = OP_PAGE_READ | PAGE_ACC | LAST_PAGE; 698 698 } else { 699 - cmd = PROGRAM_PAGE | PAGE_ACC | LAST_PAGE; 699 + cmd = OP_PROGRAM_PAGE | PAGE_ACC | LAST_PAGE; 700 700 } 701 701 702 702 if (host->use_ecc) { ··· 1170 1170 * in use. 
we configure the controller to perform a raw read of 512 1171 1171 * bytes to read onfi params 1172 1172 */ 1173 - nandc_set_reg(nandc, NAND_FLASH_CMD, PAGE_READ | PAGE_ACC | LAST_PAGE); 1173 + nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ | PAGE_ACC | LAST_PAGE); 1174 1174 nandc_set_reg(nandc, NAND_ADDR0, 0); 1175 1175 nandc_set_reg(nandc, NAND_ADDR1, 0); 1176 1176 nandc_set_reg(nandc, NAND_DEV0_CFG0, 0 << CW_PER_PAGE ··· 1224 1224 struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1225 1225 1226 1226 nandc_set_reg(nandc, NAND_FLASH_CMD, 1227 - BLOCK_ERASE | PAGE_ACC | LAST_PAGE); 1227 + OP_BLOCK_ERASE | PAGE_ACC | LAST_PAGE); 1228 1228 nandc_set_reg(nandc, NAND_ADDR0, page_addr); 1229 1229 nandc_set_reg(nandc, NAND_ADDR1, 0); 1230 1230 nandc_set_reg(nandc, NAND_DEV0_CFG0, ··· 1255 1255 if (column == -1) 1256 1256 return 0; 1257 1257 1258 - nandc_set_reg(nandc, NAND_FLASH_CMD, FETCH_ID); 1258 + nandc_set_reg(nandc, NAND_FLASH_CMD, OP_FETCH_ID); 1259 1259 nandc_set_reg(nandc, NAND_ADDR0, column); 1260 1260 nandc_set_reg(nandc, NAND_ADDR1, 0); 1261 1261 nandc_set_reg(nandc, NAND_FLASH_CHIP_SELECT, ··· 1276 1276 struct nand_chip *chip = &host->chip; 1277 1277 struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1278 1278 1279 - nandc_set_reg(nandc, NAND_FLASH_CMD, RESET_DEVICE); 1279 + nandc_set_reg(nandc, NAND_FLASH_CMD, OP_RESET_DEVICE); 1280 1280 nandc_set_reg(nandc, NAND_EXEC_CMD, 1); 1281 1281 1282 1282 write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
we configure the controller to perform a raw read of 512 1171 1171 * bytes to read onfi params 1172 1172 */ 1173 - nandc_set_reg(nandc, NAND_FLASH_CMD, PAGE_READ | PAGE_ACC | LAST_PAGE); 1173 + nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ | PAGE_ACC | LAST_PAGE); 1174 1174 nandc_set_reg(nandc, NAND_ADDR0, 0); 1175 1175 nandc_set_reg(nandc, NAND_ADDR1, 0); 1176 1176 nandc_set_reg(nandc, NAND_DEV0_CFG0, 0 << CW_PER_PAGE ··· 1224 1224 struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1225 1225 1226 1226 nandc_set_reg(nandc, NAND_FLASH_CMD, 1227 - BLOCK_ERASE | PAGE_ACC | LAST_PAGE); 1227 + OP_BLOCK_ERASE | PAGE_ACC | LAST_PAGE); 1228 1228 nandc_set_reg(nandc, NAND_ADDR0, page_addr); 1229 1229 nandc_set_reg(nandc, NAND_ADDR1, 0); 1230 1230 nandc_set_reg(nandc, NAND_DEV0_CFG0, ··· 1255 1255 if (column == -1) 1256 1256 return 0; 1257 1257 1258 - nandc_set_reg(nandc, NAND_FLASH_CMD, FETCH_ID); 1258 + nandc_set_reg(nandc, NAND_FLASH_CMD, OP_FETCH_ID); 1259 1259 nandc_set_reg(nandc, NAND_ADDR0, column); 1260 1260 nandc_set_reg(nandc, NAND_ADDR1, 0); 1261 1261 nandc_set_reg(nandc, NAND_FLASH_CHIP_SELECT, ··· 1276 1276 struct nand_chip *chip = &host->chip; 1277 1277 struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1278 1278 1279 - nandc_set_reg(nandc, NAND_FLASH_CMD, RESET_DEVICE); 1279 + nandc_set_reg(nandc, NAND_FLASH_CMD, OP_RESET_DEVICE); 1280 1280 nandc_set_reg(nandc, NAND_EXEC_CMD, 1); 1281 1281 1282 1282 write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
+16 -3
drivers/mtd/spi-nor/cadence-quadspi.c
··· 644 644 ndelay(cqspi->wr_delay); 645 645 646 646 while (remaining > 0) { 647 + size_t write_words, mod_bytes; 648 + 647 649 write_bytes = remaining > page_size ? page_size : remaining; 648 - iowrite32_rep(cqspi->ahb_base, txbuf, 649 - DIV_ROUND_UP(write_bytes, 4)); 650 + write_words = write_bytes / 4; 651 + mod_bytes = write_bytes % 4; 652 + /* Write 4 bytes at a time then single bytes. */ 653 + if (write_words) { 654 + iowrite32_rep(cqspi->ahb_base, txbuf, write_words); 655 + txbuf += (write_words * 4); 656 + } 657 + if (mod_bytes) { 658 + unsigned int temp = 0xFFFFFFFF; 659 + 660 + memcpy(&temp, txbuf, mod_bytes); 661 + iowrite32(temp, cqspi->ahb_base); 662 + txbuf += mod_bytes; 663 + } 650 664 651 665 if (!wait_for_completion_timeout(&cqspi->transfer_complete, 652 666 msecs_to_jiffies(CQSPI_TIMEOUT_MS))) { ··· 669 655 goto failwr; 670 656 } 671 657 672 - txbuf += write_bytes; 673 658 remaining -= write_bytes; 674 659 675 660 if (remaining > 0)
+98 -32
drivers/mtd/spi-nor/spi-nor.c
··· 2156 2156 * @nor: pointer to a 'struct spi_nor' 2157 2157 * @addr: offset in the serial flash memory 2158 2158 * @len: number of bytes to read 2159 - * @buf: buffer where the data is copied into 2159 + * @buf: buffer where the data is copied into (dma-safe memory) 2160 2160 * 2161 2161 * Return: 0 on success, -errno otherwise. 2162 2162 */ ··· 2522 2522 } 2523 2523 2524 2524 /** 2525 + * spi_nor_sort_erase_mask() - sort erase mask 2526 + * @map: the erase map of the SPI NOR 2527 + * @erase_mask: the erase type mask to be sorted 2528 + * 2529 + * Replicate the sort done for the map's erase types in BFPT: sort the erase 2530 + * mask in ascending order with the smallest erase type size starting from 2531 + * BIT(0) in the sorted erase mask. 2532 + * 2533 + * Return: sorted erase mask. 2534 + */ 2535 + static u8 spi_nor_sort_erase_mask(struct spi_nor_erase_map *map, u8 erase_mask) 2536 + { 2537 + struct spi_nor_erase_type *erase_type = map->erase_type; 2538 + int i; 2539 + u8 sorted_erase_mask = 0; 2540 + 2541 + if (!erase_mask) 2542 + return 0; 2543 + 2544 + /* Replicate the sort done for the map's erase types. */ 2545 + for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) 2546 + if (erase_type[i].size && erase_mask & BIT(erase_type[i].idx)) 2547 + sorted_erase_mask |= BIT(i); 2548 + 2549 + return sorted_erase_mask; 2550 + } 2551 + 2552 + /** 2525 2553 * spi_nor_regions_sort_erase_types() - sort erase types in each region 2526 2554 * @map: the erase map of the SPI NOR 2527 2555 * ··· 2564 2536 static void spi_nor_regions_sort_erase_types(struct spi_nor_erase_map *map) 2565 2537 { 2566 2538 struct spi_nor_erase_region *region = map->regions; 2567 - struct spi_nor_erase_type *erase_type = map->erase_type; 2568 - int i; 2569 2539 u8 region_erase_mask, sorted_erase_mask; 2570 2540 2571 2541 while (region) { 2572 2542 region_erase_mask = region->offset & SNOR_ERASE_TYPE_MASK; 2573 2543 2574 - /* Replicate the sort done for the map's erase types. 
*/ 2575 - sorted_erase_mask = 0; 2576 - for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) 2577 - if (erase_type[i].size && 2578 - region_erase_mask & BIT(erase_type[i].idx)) 2579 - sorted_erase_mask |= BIT(i); 2544 + sorted_erase_mask = spi_nor_sort_erase_mask(map, 2545 + region_erase_mask); 2580 2546 2581 2547 /* Overwrite erase mask. */ 2582 2548 region->offset = (region->offset & ~SNOR_ERASE_TYPE_MASK) | ··· 2877 2855 * spi_nor_get_map_in_use() - get the configuration map in use 2878 2856 * @nor: pointer to a 'struct spi_nor' 2879 2857 * @smpt: pointer to the sector map parameter table 2858 + * @smpt_len: sector map parameter table length 2859 + * 2860 + * Return: pointer to the map in use, ERR_PTR(-errno) otherwise. 2880 2861 */ 2881 - static const u32 *spi_nor_get_map_in_use(struct spi_nor *nor, const u32 *smpt) 2862 + static const u32 *spi_nor_get_map_in_use(struct spi_nor *nor, const u32 *smpt, 2863 + u8 smpt_len) 2882 2864 { 2883 - const u32 *ret = NULL; 2884 - u32 i, addr; 2865 + const u32 *ret; 2866 + u8 *buf; 2867 + u32 addr; 2885 2868 int err; 2869 + u8 i; 2886 2870 u8 addr_width, read_opcode, read_dummy; 2887 - u8 read_data_mask, data_byte, map_id; 2871 + u8 read_data_mask, map_id; 2872 + 2873 + /* Use a kmalloc'ed bounce buffer to guarantee it is DMA-able. 
*/ 2874 + buf = kmalloc(sizeof(*buf), GFP_KERNEL); 2875 + if (!buf) 2876 + return ERR_PTR(-ENOMEM); 2888 2877 2889 2878 addr_width = nor->addr_width; 2890 2879 read_dummy = nor->read_dummy; 2891 2880 read_opcode = nor->read_opcode; 2892 2881 2893 2882 map_id = 0; 2894 - i = 0; 2895 2883 /* Determine if there are any optional Detection Command Descriptors */ 2896 - while (!(smpt[i] & SMPT_DESC_TYPE_MAP)) { 2884 + for (i = 0; i < smpt_len; i += 2) { 2885 + if (smpt[i] & SMPT_DESC_TYPE_MAP) 2886 + break; 2887 + 2897 2888 read_data_mask = SMPT_CMD_READ_DATA(smpt[i]); 2898 2889 nor->addr_width = spi_nor_smpt_addr_width(nor, smpt[i]); 2899 2890 nor->read_dummy = spi_nor_smpt_read_dummy(nor, smpt[i]); 2900 2891 nor->read_opcode = SMPT_CMD_OPCODE(smpt[i]); 2901 2892 addr = smpt[i + 1]; 2902 2893 2903 - err = spi_nor_read_raw(nor, addr, 1, &data_byte); 2904 - if (err) 2894 + err = spi_nor_read_raw(nor, addr, 1, buf); 2895 + if (err) { 2896 + ret = ERR_PTR(err); 2905 2897 goto out; 2898 + } 2906 2899 2907 2900 /* 2908 2901 * Build an index value that is used to select the Sector Map 2909 2902 * Configuration that is currently in use. 2910 2903 */ 2911 - map_id = map_id << 1 | !!(data_byte & read_data_mask); 2912 - i = i + 2; 2904 + map_id = map_id << 1 | !!(*buf & read_data_mask); 2913 2905 } 2914 2906 2915 - /* Find the matching configuration map */ 2916 - while (SMPT_MAP_ID(smpt[i]) != map_id) { 2907 + /* 2908 + * If command descriptors are provided, they always precede map 2909 + * descriptors in the table. There is no need to start the iteration 2910 + * over smpt array all over again. 2911 + * 2912 + * Find the matching configuration map. 
2913 + */ 2914 + ret = ERR_PTR(-EINVAL); 2915 + while (i < smpt_len) { 2916 + if (SMPT_MAP_ID(smpt[i]) == map_id) { 2917 + ret = smpt + i; 2918 + break; 2919 + } 2920 + 2921 + /* 2922 + * If there are no more configuration map descriptors and no 2923 + * configuration ID matched the configuration identifier, the 2924 + * sector address map is unknown. 2925 + */ 2917 2926 if (smpt[i] & SMPT_DESC_END) 2918 - goto out; 2927 + break; 2928 + 2919 2929 /* increment the table index to the next map */ 2920 2930 i += SMPT_MAP_REGION_COUNT(smpt[i]) + 1; 2921 2931 } 2922 2932 2923 - ret = smpt + i; 2924 2933 /* fall through */ 2925 2934 out: 2935 + kfree(buf); 2926 2936 nor->addr_width = addr_width; 2927 2937 nor->read_dummy = read_dummy; 2928 2938 nor->read_opcode = read_opcode; ··· 3000 2946 u64 offset; 3001 2947 u32 region_count; 3002 2948 int i, j; 3003 - u8 erase_type; 2949 + u8 erase_type, uniform_erase_type; 3004 2950 3005 2951 region_count = SMPT_MAP_REGION_COUNT(*smpt); 3006 2952 /* ··· 3013 2959 return -ENOMEM; 3014 2960 map->regions = region; 3015 2961 3016 - map->uniform_erase_type = 0xff; 2962 + uniform_erase_type = 0xff; 3017 2963 offset = 0; 3018 2964 /* Populate regions. */ 3019 2965 for (i = 0; i < region_count; i++) { ··· 3028 2974 * Save the erase types that are supported in all regions and 3029 2975 * can erase the entire flash memory. 
3030 2976 */ 3031 - map->uniform_erase_type &= erase_type; 2977 + uniform_erase_type &= erase_type; 3032 2978 3033 2979 offset = (region[i].offset & ~SNOR_ERASE_FLAGS_MASK) + 3034 2980 region[i].size; 3035 2981 } 2982 + 2983 + map->uniform_erase_type = spi_nor_sort_erase_mask(map, 2984 + uniform_erase_type); 3036 2985 3037 2986 spi_nor_region_mark_end(&region[i - 1]); 3038 2987 ··· 3077 3020 for (i = 0; i < smpt_header->length; i++) 3078 3021 smpt[i] = le32_to_cpu(smpt[i]); 3079 3022 3080 - sector_map = spi_nor_get_map_in_use(nor, smpt); 3081 - if (!sector_map) { 3082 - ret = -EINVAL; 3023 + sector_map = spi_nor_get_map_in_use(nor, smpt, smpt_header->length); 3024 + if (IS_ERR(sector_map)) { 3025 + ret = PTR_ERR(sector_map); 3083 3026 goto out; 3084 3027 } 3085 3028 ··· 3182 3125 if (err) 3183 3126 goto exit; 3184 3127 3185 - /* Parse other parameter headers. */ 3128 + /* Parse optional parameter tables. */ 3186 3129 for (i = 0; i < header.nph; i++) { 3187 3130 param_header = &param_headers[i]; 3188 3131 ··· 3195 3138 break; 3196 3139 } 3197 3140 3198 - if (err) 3199 - goto exit; 3141 + if (err) { 3142 + dev_warn(dev, "Failed to parse optional parameter table: %04x\n", 3143 + SFDP_PARAM_HEADER_ID(param_header)); 3144 + /* 3145 + * Let's not drop all information we extracted so far 3146 + * if optional table parsers fail. In case of failing, 3147 + * each optional parser is responsible to roll back to 3148 + * the previously known spi_nor data. 3149 + */ 3150 + err = 0; 3151 + } 3200 3152 } 3201 3153 3202 3154 exit:
+35 -13
drivers/net/can/dev.c
··· 477 477 } 478 478 EXPORT_SYMBOL_GPL(can_put_echo_skb); 479 479 480 + struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr) 481 + { 482 + struct can_priv *priv = netdev_priv(dev); 483 + struct sk_buff *skb = priv->echo_skb[idx]; 484 + struct canfd_frame *cf; 485 + 486 + if (idx >= priv->echo_skb_max) { 487 + netdev_err(dev, "%s: BUG! Trying to access can_priv::echo_skb out of bounds (%u/max %u)\n", 488 + __func__, idx, priv->echo_skb_max); 489 + return NULL; 490 + } 491 + 492 + if (!skb) { 493 + netdev_err(dev, "%s: BUG! Trying to echo non existing skb: can_priv::echo_skb[%u]\n", 494 + __func__, idx); 495 + return NULL; 496 + } 497 + 498 + /* Using "struct canfd_frame::len" for the frame 499 + * length is supported on both CAN and CANFD frames. 500 + */ 501 + cf = (struct canfd_frame *)skb->data; 502 + *len_ptr = cf->len; 503 + priv->echo_skb[idx] = NULL; 504 + 505 + return skb; 506 + } 507 + 480 508 /* 481 509 * Get the skb from the stack and loop it back locally 482 510 * ··· 514 486 */ 515 487 unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx) 516 488 { 517 - struct can_priv *priv = netdev_priv(dev); 489 + struct sk_buff *skb; 490 + u8 len; 518 491 519 - BUG_ON(idx >= priv->echo_skb_max); 492 + skb = __can_get_echo_skb(dev, idx, &len); 493 + if (!skb) 494 + return 0; 520 495 521 - if (priv->echo_skb[idx]) { 522 - struct sk_buff *skb = priv->echo_skb[idx]; 523 - struct can_frame *cf = (struct can_frame *)skb->data; 524 - u8 dlc = cf->can_dlc; 496 + netif_rx(skb); 525 497 526 - netif_rx(priv->echo_skb[idx]); 527 - priv->echo_skb[idx] = NULL; 528 - 529 - return dlc; 530 - } 531 - 532 - return 0; 498 + return len; 533 499 } 534 500 EXPORT_SYMBOL_GPL(can_get_echo_skb); 535 501
+60 -48
drivers/net/can/flexcan.c
··· 135 135 136 136 /* FLEXCAN interrupt flag register (IFLAG) bits */ 137 137 /* Errata ERR005829 step7: Reserve first valid MB */ 138 - #define FLEXCAN_TX_MB_RESERVED_OFF_FIFO 8 139 - #define FLEXCAN_TX_MB_OFF_FIFO 9 138 + #define FLEXCAN_TX_MB_RESERVED_OFF_FIFO 8 140 139 #define FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP 0 141 - #define FLEXCAN_TX_MB_OFF_TIMESTAMP 1 142 - #define FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST (FLEXCAN_TX_MB_OFF_TIMESTAMP + 1) 143 - #define FLEXCAN_RX_MB_OFF_TIMESTAMP_LAST 63 144 - #define FLEXCAN_IFLAG_MB(x) BIT(x) 140 + #define FLEXCAN_TX_MB 63 141 + #define FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST (FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP + 1) 142 + #define FLEXCAN_RX_MB_OFF_TIMESTAMP_LAST (FLEXCAN_TX_MB - 1) 143 + #define FLEXCAN_IFLAG_MB(x) BIT(x & 0x1f) 145 144 #define FLEXCAN_IFLAG_RX_FIFO_OVERFLOW BIT(7) 146 145 #define FLEXCAN_IFLAG_RX_FIFO_WARN BIT(6) 147 146 #define FLEXCAN_IFLAG_RX_FIFO_AVAILABLE BIT(5) ··· 258 259 struct can_rx_offload offload; 259 260 260 261 struct flexcan_regs __iomem *regs; 261 - struct flexcan_mb __iomem *tx_mb; 262 262 struct flexcan_mb __iomem *tx_mb_reserved; 263 - u8 tx_mb_idx; 264 263 u32 reg_ctrl_default; 265 264 u32 reg_imask1_default; 266 265 u32 reg_imask2_default; ··· 512 515 static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *dev) 513 516 { 514 517 const struct flexcan_priv *priv = netdev_priv(dev); 518 + struct flexcan_regs __iomem *regs = priv->regs; 515 519 struct can_frame *cf = (struct can_frame *)skb->data; 516 520 u32 can_id; 517 521 u32 data; ··· 535 537 536 538 if (cf->can_dlc > 0) { 537 539 data = be32_to_cpup((__be32 *)&cf->data[0]); 538 - priv->write(data, &priv->tx_mb->data[0]); 540 + priv->write(data, &regs->mb[FLEXCAN_TX_MB].data[0]); 539 541 } 540 542 if (cf->can_dlc > 4) { 541 543 data = be32_to_cpup((__be32 *)&cf->data[4]); 542 - priv->write(data, &priv->tx_mb->data[1]); 544 + priv->write(data, &regs->mb[FLEXCAN_TX_MB].data[1]); 543 545 } 544 546 545 547 
can_put_echo_skb(skb, dev, 0); 546 548 547 - priv->write(can_id, &priv->tx_mb->can_id); 548 - priv->write(ctrl, &priv->tx_mb->can_ctrl); 549 + priv->write(can_id, &regs->mb[FLEXCAN_TX_MB].can_id); 550 + priv->write(ctrl, &regs->mb[FLEXCAN_TX_MB].can_ctrl); 549 551 550 552 /* Errata ERR005829 step8: 551 553 * Write twice INACTIVE(0x8) code to first MB. ··· 561 563 static void flexcan_irq_bus_err(struct net_device *dev, u32 reg_esr) 562 564 { 563 565 struct flexcan_priv *priv = netdev_priv(dev); 566 + struct flexcan_regs __iomem *regs = priv->regs; 564 567 struct sk_buff *skb; 565 568 struct can_frame *cf; 566 569 bool rx_errors = false, tx_errors = false; 570 + u32 timestamp; 571 + 572 + timestamp = priv->read(&regs->timer) << 16; 567 573 568 574 skb = alloc_can_err_skb(dev, &cf); 569 575 if (unlikely(!skb)) ··· 614 612 if (tx_errors) 615 613 dev->stats.tx_errors++; 616 614 617 - can_rx_offload_irq_queue_err_skb(&priv->offload, skb); 615 + can_rx_offload_queue_sorted(&priv->offload, skb, timestamp); 618 616 } 619 617 620 618 static void flexcan_irq_state(struct net_device *dev, u32 reg_esr) 621 619 { 622 620 struct flexcan_priv *priv = netdev_priv(dev); 621 + struct flexcan_regs __iomem *regs = priv->regs; 623 622 struct sk_buff *skb; 624 623 struct can_frame *cf; 625 624 enum can_state new_state, rx_state, tx_state; 626 625 int flt; 627 626 struct can_berr_counter bec; 627 + u32 timestamp; 628 + 629 + timestamp = priv->read(&regs->timer) << 16; 628 630 629 631 flt = reg_esr & FLEXCAN_ESR_FLT_CONF_MASK; 630 632 if (likely(flt == FLEXCAN_ESR_FLT_CONF_ACTIVE)) { ··· 658 652 if (unlikely(new_state == CAN_STATE_BUS_OFF)) 659 653 can_bus_off(dev); 660 654 661 - can_rx_offload_irq_queue_err_skb(&priv->offload, skb); 655 + can_rx_offload_queue_sorted(&priv->offload, skb, timestamp); 662 656 } 663 657 664 658 static inline struct flexcan_priv *rx_offload_to_priv(struct can_rx_offload *offload) ··· 726 720 priv->write(BIT(n - 32), &regs->iflag2); 727 721 } else { 728 722 
priv->write(FLEXCAN_IFLAG_RX_FIFO_AVAILABLE, &regs->iflag1); 729 - priv->read(&regs->timer); 730 723 } 724 + 725 + /* Read the Free Running Timer. It is optional but recommended 726 + * to unlock Mailbox as soon as possible and make it available 727 + * for reception. 728 + */ 729 + priv->read(&regs->timer); 731 730 732 731 return 1; 733 732 } ··· 743 732 struct flexcan_regs __iomem *regs = priv->regs; 744 733 u32 iflag1, iflag2; 745 734 746 - iflag2 = priv->read(&regs->iflag2) & priv->reg_imask2_default; 747 - iflag1 = priv->read(&regs->iflag1) & priv->reg_imask1_default & 748 - ~FLEXCAN_IFLAG_MB(priv->tx_mb_idx); 735 + iflag2 = priv->read(&regs->iflag2) & priv->reg_imask2_default & 736 + ~FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB); 737 + iflag1 = priv->read(&regs->iflag1) & priv->reg_imask1_default; 749 738 750 739 return (u64)iflag2 << 32 | iflag1; 751 740 } ··· 757 746 struct flexcan_priv *priv = netdev_priv(dev); 758 747 struct flexcan_regs __iomem *regs = priv->regs; 759 748 irqreturn_t handled = IRQ_NONE; 760 - u32 reg_iflag1, reg_esr; 749 + u32 reg_iflag2, reg_esr; 761 750 enum can_state last_state = priv->can.state; 762 - 763 - reg_iflag1 = priv->read(&regs->iflag1); 764 751 765 752 /* reception interrupt */ 766 753 if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) { ··· 773 764 break; 774 765 } 775 766 } else { 767 + u32 reg_iflag1; 768 + 769 + reg_iflag1 = priv->read(&regs->iflag1); 776 770 if (reg_iflag1 & FLEXCAN_IFLAG_RX_FIFO_AVAILABLE) { 777 771 handled = IRQ_HANDLED; 778 772 can_rx_offload_irq_offload_fifo(&priv->offload); ··· 791 779 } 792 780 } 793 781 782 + reg_iflag2 = priv->read(&regs->iflag2); 783 + 794 784 /* transmission complete interrupt */ 795 - if (reg_iflag1 & FLEXCAN_IFLAG_MB(priv->tx_mb_idx)) { 785 + if (reg_iflag2 & FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB)) { 786 + u32 reg_ctrl = priv->read(&regs->mb[FLEXCAN_TX_MB].can_ctrl); 787 + 796 788 handled = IRQ_HANDLED; 797 - stats->tx_bytes += can_get_echo_skb(dev, 0); 789 + stats->tx_bytes += 
can_rx_offload_get_echo_skb(&priv->offload, 790 + 0, reg_ctrl << 16); 798 791 stats->tx_packets++; 799 792 can_led_event(dev, CAN_LED_EVENT_TX); 800 793 801 794 /* after sending a RTR frame MB is in RX mode */ 802 795 priv->write(FLEXCAN_MB_CODE_TX_INACTIVE, 803 - &priv->tx_mb->can_ctrl); 804 - priv->write(FLEXCAN_IFLAG_MB(priv->tx_mb_idx), &regs->iflag1); 796 + &regs->mb[FLEXCAN_TX_MB].can_ctrl); 797 + priv->write(FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB), &regs->iflag2); 805 798 netif_wake_queue(dev); 806 799 } 807 800 ··· 948 931 reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff); 949 932 reg_mcr |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT | FLEXCAN_MCR_SUPV | 950 933 FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_SRX_DIS | FLEXCAN_MCR_IRMQ | 951 - FLEXCAN_MCR_IDAM_C; 934 + FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_MAXMB(FLEXCAN_TX_MB); 952 935 953 - if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) { 936 + if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) 954 937 reg_mcr &= ~FLEXCAN_MCR_FEN; 955 - reg_mcr |= FLEXCAN_MCR_MAXMB(priv->offload.mb_last); 956 - } else { 957 - reg_mcr |= FLEXCAN_MCR_FEN | 958 - FLEXCAN_MCR_MAXMB(priv->tx_mb_idx); 959 - } 938 + else 939 + reg_mcr |= FLEXCAN_MCR_FEN; 940 + 960 941 netdev_dbg(dev, "%s: writing mcr=0x%08x", __func__, reg_mcr); 961 942 priv->write(reg_mcr, &regs->mcr); 962 943 ··· 997 982 priv->write(reg_ctrl2, &regs->ctrl2); 998 983 } 999 984 1000 - /* clear and invalidate all mailboxes first */ 1001 - for (i = priv->tx_mb_idx; i < ARRAY_SIZE(regs->mb); i++) { 1002 - priv->write(FLEXCAN_MB_CODE_RX_INACTIVE, 1003 - &regs->mb[i].can_ctrl); 1004 - } 1005 - 1006 985 if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) { 1007 - for (i = priv->offload.mb_first; i <= priv->offload.mb_last; i++) 986 + for (i = priv->offload.mb_first; i <= priv->offload.mb_last; i++) { 1008 987 priv->write(FLEXCAN_MB_CODE_RX_EMPTY, 1009 988 &regs->mb[i].can_ctrl); 989 + } 990 + } else { 991 + /* clear and invalidate unused mailboxes first */ 992 + for (i = 
FLEXCAN_TX_MB_RESERVED_OFF_FIFO; i <= ARRAY_SIZE(regs->mb); i++) { 993 + priv->write(FLEXCAN_MB_CODE_RX_INACTIVE, 994 + &regs->mb[i].can_ctrl); 995 + } 1010 996 } 1011 997 1012 998 /* Errata ERR005829: mark first TX mailbox as INACTIVE */ ··· 1016 1000 1017 1001 /* mark TX mailbox as INACTIVE */ 1018 1002 priv->write(FLEXCAN_MB_CODE_TX_INACTIVE, 1019 - &priv->tx_mb->can_ctrl); 1003 + &regs->mb[FLEXCAN_TX_MB].can_ctrl); 1020 1004 1021 1005 /* acceptance mask/acceptance code (accept everything) */ 1022 1006 priv->write(0x0, &regs->rxgmask); ··· 1371 1355 priv->devtype_data = devtype_data; 1372 1356 priv->reg_xceiver = reg_xceiver; 1373 1357 1374 - if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) { 1375 - priv->tx_mb_idx = FLEXCAN_TX_MB_OFF_TIMESTAMP; 1358 + if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) 1376 1359 priv->tx_mb_reserved = &regs->mb[FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP]; 1377 - } else { 1378 - priv->tx_mb_idx = FLEXCAN_TX_MB_OFF_FIFO; 1360 + else 1379 1361 priv->tx_mb_reserved = &regs->mb[FLEXCAN_TX_MB_RESERVED_OFF_FIFO]; 1380 - } 1381 - priv->tx_mb = &regs->mb[priv->tx_mb_idx]; 1382 1362 1383 - priv->reg_imask1_default = FLEXCAN_IFLAG_MB(priv->tx_mb_idx); 1384 - priv->reg_imask2_default = 0; 1363 + priv->reg_imask1_default = 0; 1364 + priv->reg_imask2_default = FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB); 1385 1365 1386 1366 priv->offload.mailbox_read = flexcan_mailbox_read; 1387 1367
+4 -1
drivers/net/can/rcar/rcar_can.c
··· 24 24 25 25 #define RCAR_CAN_DRV_NAME "rcar_can" 26 26 27 + #define RCAR_SUPPORTED_CLOCKS (BIT(CLKR_CLKP1) | BIT(CLKR_CLKP2) | \ 28 + BIT(CLKR_CLKEXT)) 29 + 27 30 /* Mailbox configuration: 28 31 * mailbox 60 - 63 - Rx FIFO mailboxes 29 32 * mailbox 56 - 59 - Tx FIFO mailboxes ··· 792 789 goto fail_clk; 793 790 } 794 791 795 - if (clock_select >= ARRAY_SIZE(clock_names)) { 792 + if (!(BIT(clock_select) & RCAR_SUPPORTED_CLOCKS)) { 796 793 err = -EINVAL; 797 794 dev_err(&pdev->dev, "invalid CAN clock selected\n"); 798 795 goto fail_clk;
+49 -2
drivers/net/can/rx-offload.c
··· 211 211 } 212 212 EXPORT_SYMBOL_GPL(can_rx_offload_irq_offload_fifo); 213 213 214 - int can_rx_offload_irq_queue_err_skb(struct can_rx_offload *offload, struct sk_buff *skb) 214 + int can_rx_offload_queue_sorted(struct can_rx_offload *offload, 215 + struct sk_buff *skb, u32 timestamp) 216 + { 217 + struct can_rx_offload_cb *cb; 218 + unsigned long flags; 219 + 220 + if (skb_queue_len(&offload->skb_queue) > 221 + offload->skb_queue_len_max) 222 + return -ENOMEM; 223 + 224 + cb = can_rx_offload_get_cb(skb); 225 + cb->timestamp = timestamp; 226 + 227 + spin_lock_irqsave(&offload->skb_queue.lock, flags); 228 + __skb_queue_add_sort(&offload->skb_queue, skb, can_rx_offload_compare); 229 + spin_unlock_irqrestore(&offload->skb_queue.lock, flags); 230 + 231 + can_rx_offload_schedule(offload); 232 + 233 + return 0; 234 + } 235 + EXPORT_SYMBOL_GPL(can_rx_offload_queue_sorted); 236 + 237 + unsigned int can_rx_offload_get_echo_skb(struct can_rx_offload *offload, 238 + unsigned int idx, u32 timestamp) 239 + { 240 + struct net_device *dev = offload->dev; 241 + struct net_device_stats *stats = &dev->stats; 242 + struct sk_buff *skb; 243 + u8 len; 244 + int err; 245 + 246 + skb = __can_get_echo_skb(dev, idx, &len); 247 + if (!skb) 248 + return 0; 249 + 250 + err = can_rx_offload_queue_sorted(offload, skb, timestamp); 251 + if (err) { 252 + stats->rx_errors++; 253 + stats->tx_fifo_errors++; 254 + } 255 + 256 + return len; 257 + } 258 + EXPORT_SYMBOL_GPL(can_rx_offload_get_echo_skb); 259 + 260 + int can_rx_offload_queue_tail(struct can_rx_offload *offload, 261 + struct sk_buff *skb) 215 262 { 216 263 if (skb_queue_len(&offload->skb_queue) > 217 264 offload->skb_queue_len_max) ··· 269 222 270 223 return 0; 271 224 } 272 - EXPORT_SYMBOL_GPL(can_rx_offload_irq_queue_err_skb); 225 + EXPORT_SYMBOL_GPL(can_rx_offload_queue_tail); 273 226 274 227 static int can_rx_offload_init_queue(struct net_device *dev, struct can_rx_offload *offload, unsigned int weight) 275 228 {
+1 -1
drivers/net/can/spi/hi311x.c
··· 760 760 { 761 761 struct hi3110_priv *priv = netdev_priv(net); 762 762 struct spi_device *spi = priv->spi; 763 - unsigned long flags = IRQF_ONESHOT | IRQF_TRIGGER_RISING; 763 + unsigned long flags = IRQF_ONESHOT | IRQF_TRIGGER_HIGH; 764 764 int ret; 765 765 766 766 ret = open_candev(net);
+2 -2
drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
··· 528 528 context = &priv->tx_contexts[i]; 529 529 530 530 context->echo_index = i; 531 - can_put_echo_skb(skb, netdev, context->echo_index); 532 531 ++priv->active_tx_contexts; 533 532 if (priv->active_tx_contexts >= (int)dev->max_tx_urbs) 534 533 netif_stop_queue(netdev); ··· 552 553 dev_kfree_skb(skb); 553 554 spin_lock_irqsave(&priv->tx_contexts_lock, flags); 554 555 555 - can_free_echo_skb(netdev, context->echo_index); 556 556 context->echo_index = dev->max_tx_urbs; 557 557 --priv->active_tx_contexts; 558 558 netif_wake_queue(netdev); ··· 561 563 } 562 564 563 565 context->priv = priv; 566 + 567 + can_put_echo_skb(skb, netdev, context->echo_index); 564 568 565 569 usb_fill_bulk_urb(urb, dev->udev, 566 570 usb_sndbulkpipe(dev->udev,
+5 -5
drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
··· 1019 1019 new_state : CAN_STATE_ERROR_ACTIVE; 1020 1020 1021 1021 can_change_state(netdev, cf, tx_state, rx_state); 1022 + 1023 + if (priv->can.restart_ms && 1024 + old_state >= CAN_STATE_BUS_OFF && 1025 + new_state < CAN_STATE_BUS_OFF) 1026 + cf->can_id |= CAN_ERR_RESTARTED; 1022 1027 } 1023 1028 1024 1029 if (new_state == CAN_STATE_BUS_OFF) { ··· 1033 1028 1034 1029 can_bus_off(netdev); 1035 1030 } 1036 - 1037 - if (priv->can.restart_ms && 1038 - old_state >= CAN_STATE_BUS_OFF && 1039 - new_state < CAN_STATE_BUS_OFF) 1040 - cf->can_id |= CAN_ERR_RESTARTED; 1041 1031 } 1042 1032 1043 1033 if (!skb) {
-7
drivers/net/can/usb/ucan.c
··· 35 35 #include <linux/slab.h> 36 36 #include <linux/usb.h> 37 37 38 - #include <linux/can.h> 39 - #include <linux/can/dev.h> 40 - #include <linux/can/error.h> 41 - 42 38 #define UCAN_DRIVER_NAME "ucan" 43 39 #define UCAN_MAX_RX_URBS 8 44 40 /* the CAN controller needs a while to enable/disable the bus */ ··· 1571 1575 /* disconnect the device */ 1572 1576 static void ucan_disconnect(struct usb_interface *intf) 1573 1577 { 1574 - struct usb_device *udev; 1575 1578 struct ucan_priv *up = usb_get_intfdata(intf); 1576 - 1577 - udev = interface_to_usbdev(intf); 1578 1579 1579 1580 usb_set_intfdata(intf, NULL); 1580 1581
+11 -12
drivers/net/ethernet/amazon/ena/ena_netdev.c
··· 1848 1848 rc = ena_com_dev_reset(adapter->ena_dev, adapter->reset_reason); 1849 1849 if (rc) 1850 1850 dev_err(&adapter->pdev->dev, "Device reset failed\n"); 1851 + /* stop submitting admin commands on a device that was reset */ 1852 + ena_com_set_admin_running_state(adapter->ena_dev, false); 1851 1853 } 1852 1854 1853 1855 ena_destroy_all_io_queues(adapter); ··· 1915 1913 struct ena_adapter *adapter = netdev_priv(netdev); 1916 1914 1917 1915 netif_dbg(adapter, ifdown, netdev, "%s\n", __func__); 1916 + 1917 + if (!test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags)) 1918 + return 0; 1918 1919 1919 1920 if (test_bit(ENA_FLAG_DEV_UP, &adapter->flags)) 1920 1921 ena_down(adapter); ··· 2618 2613 ena_down(adapter); 2619 2614 2620 2615 /* Stop the device from sending AENQ events (in case reset flag is set 2621 - * and device is up, ena_close already reset the device 2622 - * In case the reset flag is set and the device is up, ena_down() 2623 - * already perform the reset, so it can be skipped. 2616 + * and device is up, ena_down() already reset the device. 
2624 2617 */ 2625 2618 if (!(test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags) && dev_up)) 2626 2619 ena_com_dev_reset(adapter->ena_dev, adapter->reset_reason); ··· 2697 2694 ena_com_abort_admin_commands(ena_dev); 2698 2695 ena_com_wait_for_abort_completion(ena_dev); 2699 2696 ena_com_admin_destroy(ena_dev); 2700 - ena_com_mmio_reg_read_request_destroy(ena_dev); 2701 2697 ena_com_dev_reset(ena_dev, ENA_REGS_RESET_DRIVER_INVALID_STATE); 2698 + ena_com_mmio_reg_read_request_destroy(ena_dev); 2702 2699 err: 2703 2700 clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags); 2704 2701 clear_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags); ··· 3455 3452 ena_com_rss_destroy(ena_dev); 3456 3453 err_free_msix: 3457 3454 ena_com_dev_reset(ena_dev, ENA_REGS_RESET_INIT_ERR); 3455 + /* stop submitting admin commands on a device that was reset */ 3456 + ena_com_set_admin_running_state(ena_dev, false); 3458 3457 ena_free_mgmnt_irq(adapter); 3459 3458 ena_disable_msix(adapter); 3460 3459 err_worker_destroy: ··· 3503 3498 3504 3499 cancel_work_sync(&adapter->reset_task); 3505 3500 3506 - unregister_netdev(netdev); 3507 - 3508 - /* If the device is running then we want to make sure the device will be 3509 - * reset to make sure no more events will be issued by the device. 3510 - */ 3511 - if (test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags)) 3512 - set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags); 3513 - 3514 3501 rtnl_lock(); 3515 3502 ena_destroy_device(adapter, true); 3516 3503 rtnl_unlock(); 3504 + 3505 + unregister_netdev(netdev); 3517 3506 3518 3507 free_netdev(netdev); 3519 3508
+1 -1
drivers/net/ethernet/amazon/ena/ena_netdev.h
··· 45 45 46 46 #define DRV_MODULE_VER_MAJOR 2 47 47 #define DRV_MODULE_VER_MINOR 0 48 - #define DRV_MODULE_VER_SUBMINOR 1 48 + #define DRV_MODULE_VER_SUBMINOR 2 49 49 50 50 #define DRV_MODULE_NAME "ena" 51 51 #ifndef DRV_MODULE_VERSION
+3 -1
drivers/net/ethernet/amd/sunlance.c
··· 1419 1419 1420 1420 prop = of_get_property(nd, "tpe-link-test?", NULL); 1421 1421 if (!prop) 1422 - goto no_link_test; 1422 + goto node_put; 1423 1423 1424 1424 if (strcmp(prop, "true")) { 1425 1425 printk(KERN_NOTICE "SunLance: warning: overriding option " ··· 1428 1428 "to ecd@skynet.be\n"); 1429 1429 auxio_set_lte(AUXIO_LTE_ON); 1430 1430 } 1431 + node_put: 1432 + of_node_put(nd); 1431 1433 no_link_test: 1432 1434 lp->auto_select = 1; 1433 1435 lp->tpe = 0;
+7
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 2191 2191 #define PMF_DMAE_C(bp) (BP_PORT(bp) * MAX_DMAE_C_PER_PORT + \ 2192 2192 E1HVN_MAX) 2193 2193 2194 + /* Following is the DMAE channel number allocation for the clients. 2195 + * MFW: OCBB/OCSD implementations use DMAE channels 14/15 respectively. 2196 + * Driver: 0-3 and 8-11 (for PF dmae operations) 2197 + * 4 and 12 (for stats requests) 2198 + */ 2199 + #define BNX2X_FW_DMAE_C 13 /* Channel for FW DMAE operations */ 2200 + 2194 2201 /* PCIE link and speed */ 2195 2202 #define PCICFG_LINK_WIDTH 0x1f00000 2196 2203 #define PCICFG_LINK_WIDTH_SHIFT 20
+1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
··· 6149 6149 rdata->sd_vlan_tag = cpu_to_le16(start_params->sd_vlan_tag); 6150 6150 rdata->path_id = BP_PATH(bp); 6151 6151 rdata->network_cos_mode = start_params->network_cos_mode; 6152 + rdata->dmae_cmd_id = BNX2X_FW_DMAE_C; 6152 6153 6153 6154 rdata->vxlan_dst_port = cpu_to_le16(start_params->vxlan_dst_port); 6154 6155 rdata->geneve_dst_port = cpu_to_le16(start_params->geneve_dst_port);
+68 -2
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 1675 1675 } else { 1676 1676 if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L4_CS_ERR_BITS) { 1677 1677 if (dev->features & NETIF_F_RXCSUM) 1678 - cpr->rx_l4_csum_errors++; 1678 + bnapi->cp_ring.rx_l4_csum_errors++; 1679 1679 } 1680 1680 } 1681 1681 ··· 8714 8714 return rc; 8715 8715 } 8716 8716 8717 + static int bnxt_dbg_hwrm_ring_info_get(struct bnxt *bp, u8 ring_type, 8718 + u32 ring_id, u32 *prod, u32 *cons) 8719 + { 8720 + struct hwrm_dbg_ring_info_get_output *resp = bp->hwrm_cmd_resp_addr; 8721 + struct hwrm_dbg_ring_info_get_input req = {0}; 8722 + int rc; 8723 + 8724 + bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_DBG_RING_INFO_GET, -1, -1); 8725 + req.ring_type = ring_type; 8726 + req.fw_ring_id = cpu_to_le32(ring_id); 8727 + mutex_lock(&bp->hwrm_cmd_lock); 8728 + rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); 8729 + if (!rc) { 8730 + *prod = le32_to_cpu(resp->producer_index); 8731 + *cons = le32_to_cpu(resp->consumer_index); 8732 + } 8733 + mutex_unlock(&bp->hwrm_cmd_lock); 8734 + return rc; 8735 + } 8736 + 8717 8737 static void bnxt_dump_tx_sw_state(struct bnxt_napi *bnapi) 8718 8738 { 8719 8739 struct bnxt_tx_ring_info *txr = bnapi->tx_ring; ··· 8841 8821 bnxt_queue_sp_work(bp); 8842 8822 } 8843 8823 } 8824 + 8825 + if ((bp->flags & BNXT_FLAG_CHIP_P5) && netif_carrier_ok(dev)) { 8826 + set_bit(BNXT_RING_COAL_NOW_SP_EVENT, &bp->sp_event); 8827 + bnxt_queue_sp_work(bp); 8828 + } 8844 8829 bnxt_restart_timer: 8845 8830 mod_timer(&bp->timer, jiffies + bp->current_interval); 8846 8831 } ··· 8874 8849 if (test_bit(BNXT_STATE_OPEN, &bp->state)) 8875 8850 bnxt_reset_task(bp, silent); 8876 8851 bnxt_rtnl_unlock_sp(bp); 8852 + } 8853 + 8854 + static void bnxt_chk_missed_irq(struct bnxt *bp) 8855 + { 8856 + int i; 8857 + 8858 + if (!(bp->flags & BNXT_FLAG_CHIP_P5)) 8859 + return; 8860 + 8861 + for (i = 0; i < bp->cp_nr_rings; i++) { 8862 + struct bnxt_napi *bnapi = bp->bnapi[i]; 8863 + struct bnxt_cp_ring_info *cpr; 8864 + u32 fw_ring_id; 8865 + int 
j; 8866 + 8867 + if (!bnapi) 8868 + continue; 8869 + 8870 + cpr = &bnapi->cp_ring; 8871 + for (j = 0; j < 2; j++) { 8872 + struct bnxt_cp_ring_info *cpr2 = cpr->cp_ring_arr[j]; 8873 + u32 val[2]; 8874 + 8875 + if (!cpr2 || cpr2->has_more_work || 8876 + !bnxt_has_work(bp, cpr2)) 8877 + continue; 8878 + 8879 + if (cpr2->cp_raw_cons != cpr2->last_cp_raw_cons) { 8880 + cpr2->last_cp_raw_cons = cpr2->cp_raw_cons; 8881 + continue; 8882 + } 8883 + fw_ring_id = cpr2->cp_ring_struct.fw_ring_id; 8884 + bnxt_dbg_hwrm_ring_info_get(bp, 8885 + DBG_RING_INFO_GET_REQ_RING_TYPE_L2_CMPL, 8886 + fw_ring_id, &val[0], &val[1]); 8887 + cpr->missed_irqs++; 8888 + } 8889 + } 8877 8890 } 8878 8891 8879 8892 static void bnxt_cfg_ntp_filters(struct bnxt *); ··· 8992 8929 8993 8930 if (test_and_clear_bit(BNXT_FLOW_STATS_SP_EVENT, &bp->sp_event)) 8994 8931 bnxt_tc_flow_stats_work(bp); 8932 + 8933 + if (test_and_clear_bit(BNXT_RING_COAL_NOW_SP_EVENT, &bp->sp_event)) 8934 + bnxt_chk_missed_irq(bp); 8995 8935 8996 8936 /* These functions below will clear BNXT_STATE_IN_SP_TASK. They 8997 8937 * must be the last functions to be called before exiting. ··· 10153 10087 } 10154 10088 10155 10089 bnxt_hwrm_func_qcfg(bp); 10090 + bnxt_hwrm_vnic_qcaps(bp); 10156 10091 bnxt_hwrm_port_led_qcaps(bp); 10157 10092 bnxt_ethtool_init(bp); 10158 10093 bnxt_dcb_init(bp); ··· 10187 10120 VNIC_RSS_CFG_REQ_HASH_TYPE_UDP_IPV6; 10188 10121 } 10189 10122 10190 - bnxt_hwrm_vnic_qcaps(bp); 10191 10123 if (bnxt_rfs_supported(bp)) { 10192 10124 dev->hw_features |= NETIF_F_NTUPLE; 10193 10125 if (bnxt_rfs_capable(bp)) {
+4
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 798 798 u8 had_work_done:1; 799 799 u8 has_more_work:1; 800 800 801 + u32 last_cp_raw_cons; 802 + 801 803 struct bnxt_coal rx_ring_coal; 802 804 u64 rx_packets; 803 805 u64 rx_bytes; ··· 818 816 dma_addr_t hw_stats_map; 819 817 u32 hw_stats_ctx_id; 820 818 u64 rx_l4_csum_errors; 819 + u64 missed_irqs; 821 820 822 821 struct bnxt_ring_struct cp_ring_struct; 823 822 ··· 1530 1527 #define BNXT_LINK_SPEED_CHNG_SP_EVENT 14 1531 1528 #define BNXT_FLOW_STATS_SP_EVENT 15 1532 1529 #define BNXT_UPDATE_PHY_SP_EVENT 16 1530 + #define BNXT_RING_COAL_NOW_SP_EVENT 17 1533 1531 1534 1532 struct bnxt_hw_resc hw_resc; 1535 1533 struct bnxt_pf_info pf;
+6 -3
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 137 137 return rc; 138 138 } 139 139 140 - #define BNXT_NUM_STATS 21 140 + #define BNXT_NUM_STATS 22 141 141 142 142 #define BNXT_RX_STATS_ENTRY(counter) \ 143 143 { BNXT_RX_STATS_OFFSET(counter), __stringify(counter) } ··· 384 384 for (k = 0; k < stat_fields; j++, k++) 385 385 buf[j] = le64_to_cpu(hw_stats[k]); 386 386 buf[j++] = cpr->rx_l4_csum_errors; 387 + buf[j++] = cpr->missed_irqs; 387 388 388 389 bnxt_sw_func_stats[RX_TOTAL_DISCARDS].counter += 389 390 le64_to_cpu(cpr->hw_stats->rx_discard_pkts); ··· 468 467 sprintf(buf, "[%d]: tpa_aborts", i); 469 468 buf += ETH_GSTRING_LEN; 470 469 sprintf(buf, "[%d]: rx_l4_csum_errors", i); 470 + buf += ETH_GSTRING_LEN; 471 + sprintf(buf, "[%d]: missed_irqs", i); 471 472 buf += ETH_GSTRING_LEN; 472 473 } 473 474 for (i = 0; i < BNXT_NUM_SW_FUNC_STATS; i++) { ··· 2945 2942 record->asic_state = 0; 2946 2943 strlcpy(record->system_name, utsname()->nodename, 2947 2944 sizeof(record->system_name)); 2948 - record->year = cpu_to_le16(tm.tm_year); 2949 - record->month = cpu_to_le16(tm.tm_mon); 2945 + record->year = cpu_to_le16(tm.tm_year + 1900); 2946 + record->month = cpu_to_le16(tm.tm_mon + 1); 2950 2947 record->day = cpu_to_le16(tm.tm_mday); 2951 2948 record->hour = cpu_to_le16(tm.tm_hour); 2952 2949 record->minute = cpu_to_le16(tm.tm_min);
+3
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
··· 43 43 if (ulp_id == BNXT_ROCE_ULP) { 44 44 unsigned int max_stat_ctxs; 45 45 46 + if (bp->flags & BNXT_FLAG_CHIP_P5) 47 + return -EOPNOTSUPP; 48 + 46 49 max_stat_ctxs = bnxt_get_max_func_stat_ctxs(bp); 47 50 if (max_stat_ctxs <= BNXT_MIN_ROCE_STAT_CTXS || 48 51 bp->num_stat_ctxs == max_stat_ctxs)
+16 -2
drivers/net/ethernet/broadcom/tg3.c
··· 12422 12422 { 12423 12423 struct tg3 *tp = netdev_priv(dev); 12424 12424 int i, irq_sync = 0, err = 0; 12425 + bool reset_phy = false; 12425 12426 12426 12427 if ((ering->rx_pending > tp->rx_std_ring_mask) || 12427 12428 (ering->rx_jumbo_pending > tp->rx_jmb_ring_mask) || ··· 12454 12453 12455 12454 if (netif_running(dev)) { 12456 12455 tg3_halt(tp, RESET_KIND_SHUTDOWN, 1); 12457 - err = tg3_restart_hw(tp, false); 12456 + /* Reset PHY to avoid PHY lock up */ 12457 + if (tg3_asic_rev(tp) == ASIC_REV_5717 || 12458 + tg3_asic_rev(tp) == ASIC_REV_5719 || 12459 + tg3_asic_rev(tp) == ASIC_REV_5720) 12460 + reset_phy = true; 12461 + 12462 + err = tg3_restart_hw(tp, reset_phy); 12458 12463 if (!err) 12459 12464 tg3_netif_start(tp); 12460 12465 } ··· 12494 12487 { 12495 12488 struct tg3 *tp = netdev_priv(dev); 12496 12489 int err = 0; 12490 + bool reset_phy = false; 12497 12491 12498 12492 if (tp->link_config.autoneg == AUTONEG_ENABLE) 12499 12493 tg3_warn_mgmt_link_flap(tp); ··· 12564 12556 12565 12557 if (netif_running(dev)) { 12566 12558 tg3_halt(tp, RESET_KIND_SHUTDOWN, 1); 12567 - err = tg3_restart_hw(tp, false); 12559 + /* Reset PHY to avoid PHY lock up */ 12560 + if (tg3_asic_rev(tp) == ASIC_REV_5717 || 12561 + tg3_asic_rev(tp) == ASIC_REV_5719 || 12562 + tg3_asic_rev(tp) == ASIC_REV_5720) 12563 + reset_phy = true; 12564 + 12565 + err = tg3_restart_hw(tp, reset_phy); 12568 12566 if (!err) 12569 12567 tg3_netif_start(tp); 12570 12568 }
+7 -2
drivers/net/ethernet/cavium/thunder/nicvf_main.c
··· 1784 1784 bool if_up = netif_running(nic->netdev); 1785 1785 struct bpf_prog *old_prog; 1786 1786 bool bpf_attached = false; 1787 + int ret = 0; 1787 1788 1788 1789 /* For now just support only the usual MTU sized frames */ 1789 1790 if (prog && (dev->mtu > 1500)) { ··· 1818 1817 if (nic->xdp_prog) { 1819 1818 /* Attach BPF program */ 1820 1819 nic->xdp_prog = bpf_prog_add(nic->xdp_prog, nic->rx_queues - 1); 1821 - if (!IS_ERR(nic->xdp_prog)) 1820 + if (!IS_ERR(nic->xdp_prog)) { 1822 1821 bpf_attached = true; 1822 + } else { 1823 + ret = PTR_ERR(nic->xdp_prog); 1824 + nic->xdp_prog = NULL; 1825 + } 1823 1826 } 1824 1827 1825 1828 /* Calculate Tx queues needed for XDP and network stack */ ··· 1835 1830 netif_trans_update(nic->netdev); 1836 1831 } 1837 1832 1838 - return 0; 1833 + return ret; 1839 1834 } 1840 1835 1841 1836 static int nicvf_xdp(struct net_device *netdev, struct netdev_bpf *xdp)
+3 -1
drivers/net/ethernet/cavium/thunder/nicvf_queues.c
··· 585 585 if (!sq->dmem.base) 586 586 return; 587 587 588 - if (sq->tso_hdrs) 588 + if (sq->tso_hdrs) { 589 589 dma_free_coherent(&nic->pdev->dev, 590 590 sq->dmem.q_len * TSO_HEADER_SIZE, 591 591 sq->tso_hdrs, sq->tso_hdrs_phys); 592 + sq->tso_hdrs = NULL; 593 + } 592 594 593 595 /* Free pending skbs in the queue */ 594 596 smp_rmb();
-1
drivers/net/ethernet/chelsio/Kconfig
··· 67 67 config CHELSIO_T4 68 68 tristate "Chelsio Communications T4/T5/T6 Ethernet support" 69 69 depends on PCI && (IPV6 || IPV6=n) 70 - depends on THERMAL || !THERMAL 71 70 select FW_LOADER 72 71 select MDIO 73 72 select ZLIB_DEFLATE
+1 -3
drivers/net/ethernet/chelsio/cxgb4/Makefile
··· 12 12 cxgb4-$(CONFIG_CHELSIO_T4_DCB) += cxgb4_dcb.o 13 13 cxgb4-$(CONFIG_CHELSIO_T4_FCOE) += cxgb4_fcoe.o 14 14 cxgb4-$(CONFIG_DEBUG_FS) += cxgb4_debugfs.o 15 - ifdef CONFIG_THERMAL 16 - cxgb4-objs += cxgb4_thermal.o 17 - endif 15 + cxgb4-$(CONFIG_THERMAL) += cxgb4_thermal.o
+2 -2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 5863 5863 if (!is_t4(adapter->params.chip)) 5864 5864 cxgb4_ptp_init(adapter); 5865 5865 5866 - if (IS_ENABLED(CONFIG_THERMAL) && 5866 + if (IS_REACHABLE(CONFIG_THERMAL) && 5867 5867 !is_t4(adapter->params.chip) && (adapter->flags & FW_OK)) 5868 5868 cxgb4_thermal_init(adapter); 5869 5869 ··· 5932 5932 5933 5933 if (!is_t4(adapter->params.chip)) 5934 5934 cxgb4_ptp_stop(adapter); 5935 - if (IS_ENABLED(CONFIG_THERMAL)) 5935 + if (IS_REACHABLE(CONFIG_THERMAL)) 5936 5936 cxgb4_thermal_remove(adapter); 5937 5937 5938 5938 /* If we allocated filters, free up state associated with any
+1 -1
drivers/net/ethernet/cortina/gemini.c
··· 660 660 661 661 u64_stats_update_begin(&port->tx_stats_syncp); 662 662 port->tx_frag_stats[nfrags]++; 663 - u64_stats_update_end(&port->ir_stats_syncp); 663 + u64_stats_update_end(&port->tx_stats_syncp); 664 664 } 665 665 } 666 666
+3 -4
drivers/net/ethernet/faraday/ftmac100.c
··· 872 872 struct net_device *netdev = dev_id; 873 873 struct ftmac100 *priv = netdev_priv(netdev); 874 874 875 - if (likely(netif_running(netdev))) { 876 - /* Disable interrupts for polling */ 877 - ftmac100_disable_all_int(priv); 875 + /* Disable interrupts for polling */ 876 + ftmac100_disable_all_int(priv); 877 + if (likely(netif_running(netdev))) 878 878 napi_schedule(&priv->napi); 879 - } 880 879 881 880 return IRQ_HANDLED; 882 881 }
+31 -41
drivers/net/ethernet/ibm/ibmvnic.c
··· 485 485 486 486 for (j = 0; j < rx_pool->size; j++) { 487 487 if (rx_pool->rx_buff[j].skb) { 488 - dev_kfree_skb_any(rx_pool->rx_buff[i].skb); 489 - rx_pool->rx_buff[i].skb = NULL; 488 + dev_kfree_skb_any(rx_pool->rx_buff[j].skb); 489 + rx_pool->rx_buff[j].skb = NULL; 490 490 } 491 491 } 492 492 ··· 1103 1103 return 0; 1104 1104 } 1105 1105 1106 - mutex_lock(&adapter->reset_lock); 1107 - 1108 1106 if (adapter->state != VNIC_CLOSED) { 1109 1107 rc = ibmvnic_login(netdev); 1110 - if (rc) { 1111 - mutex_unlock(&adapter->reset_lock); 1108 + if (rc) 1112 1109 return rc; 1113 - } 1114 1110 1115 1111 rc = init_resources(adapter); 1116 1112 if (rc) { 1117 1113 netdev_err(netdev, "failed to initialize resources\n"); 1118 1114 release_resources(adapter); 1119 - mutex_unlock(&adapter->reset_lock); 1120 1115 return rc; 1121 1116 } 1122 1117 } 1123 1118 1124 1119 rc = __ibmvnic_open(netdev); 1125 1120 netif_carrier_on(netdev); 1126 - 1127 - mutex_unlock(&adapter->reset_lock); 1128 1121 1129 1122 return rc; 1130 1123 } ··· 1262 1269 return 0; 1263 1270 } 1264 1271 1265 - mutex_lock(&adapter->reset_lock); 1266 1272 rc = __ibmvnic_close(netdev); 1267 1273 ibmvnic_cleanup(netdev); 1268 - mutex_unlock(&adapter->reset_lock); 1269 1274 1270 1275 return rc; 1271 1276 } ··· 1737 1746 struct ibmvnic_rwi *rwi, u32 reset_state) 1738 1747 { 1739 1748 u64 old_num_rx_queues, old_num_tx_queues; 1749 + u64 old_num_rx_slots, old_num_tx_slots; 1740 1750 struct net_device *netdev = adapter->netdev; 1741 1751 int i, rc; 1742 1752 ··· 1749 1757 1750 1758 old_num_rx_queues = adapter->req_rx_queues; 1751 1759 old_num_tx_queues = adapter->req_tx_queues; 1760 + old_num_rx_slots = adapter->req_rx_add_entries_per_subcrq; 1761 + old_num_tx_slots = adapter->req_tx_entries_per_subcrq; 1752 1762 1753 1763 ibmvnic_cleanup(netdev); 1754 1764 ··· 1813 1819 if (rc) 1814 1820 return rc; 1815 1821 } else if (adapter->req_rx_queues != old_num_rx_queues || 1816 - adapter->req_tx_queues != old_num_tx_queues) { 
1817 - adapter->map_id = 1; 1822 + adapter->req_tx_queues != old_num_tx_queues || 1823 + adapter->req_rx_add_entries_per_subcrq != 1824 + old_num_rx_slots || 1825 + adapter->req_tx_entries_per_subcrq != 1826 + old_num_tx_slots) { 1818 1827 release_rx_pools(adapter); 1819 1828 release_tx_pools(adapter); 1820 - rc = init_rx_pools(netdev); 1821 - if (rc) 1822 - return rc; 1823 - rc = init_tx_pools(netdev); 1829 + release_napi(adapter); 1830 + release_vpd_data(adapter); 1831 + 1832 + rc = init_resources(adapter); 1824 1833 if (rc) 1825 1834 return rc; 1826 1835 1827 - release_napi(adapter); 1828 - rc = init_napi(adapter); 1829 - if (rc) 1830 - return rc; 1831 1836 } else { 1832 1837 rc = reset_tx_pools(adapter); 1833 1838 if (rc) ··· 1910 1917 adapter->state = VNIC_PROBED; 1911 1918 return 0; 1912 1919 } 1913 - /* netif_set_real_num_xx_queues needs to take rtnl lock here 1914 - * unless wait_for_reset is set, in which case the rtnl lock 1915 - * has already been taken before initializing the reset 1916 - */ 1917 - if (!adapter->wait_for_reset) { 1918 - rtnl_lock(); 1919 - rc = init_resources(adapter); 1920 - rtnl_unlock(); 1921 - } else { 1922 - rc = init_resources(adapter); 1923 - } 1920 + 1921 + rc = init_resources(adapter); 1924 1922 if (rc) 1925 1923 return rc; 1926 1924 ··· 1970 1986 struct ibmvnic_rwi *rwi; 1971 1987 struct ibmvnic_adapter *adapter; 1972 1988 struct net_device *netdev; 1989 + bool we_lock_rtnl = false; 1973 1990 u32 reset_state; 1974 1991 int rc = 0; 1975 1992 1976 1993 adapter = container_of(work, struct ibmvnic_adapter, ibmvnic_reset); 1977 1994 netdev = adapter->netdev; 1978 1995 1979 - mutex_lock(&adapter->reset_lock); 1996 + /* netif_set_real_num_xx_queues needs to take rtnl lock here 1997 + * unless wait_for_reset is set, in which case the rtnl lock 1998 + * has already been taken before initializing the reset 1999 + */ 2000 + if (!adapter->wait_for_reset) { 2001 + rtnl_lock(); 2002 + we_lock_rtnl = true; 2003 + } 1980 2004 reset_state = 
adapter->state; 1981 2005 1982 2006 rwi = get_next_rwi(adapter); ··· 2012 2020 if (rc) { 2013 2021 netdev_dbg(adapter->netdev, "Reset failed\n"); 2014 2022 free_all_rwi(adapter); 2015 - mutex_unlock(&adapter->reset_lock); 2016 - return; 2017 2023 } 2018 2024 2019 2025 adapter->resetting = false; 2020 - mutex_unlock(&adapter->reset_lock); 2026 + if (we_lock_rtnl) 2027 + rtnl_unlock(); 2021 2028 } 2022 2029 2023 2030 static int ibmvnic_reset(struct ibmvnic_adapter *adapter, ··· 4759 4768 4760 4769 INIT_WORK(&adapter->ibmvnic_reset, __ibmvnic_reset); 4761 4770 INIT_LIST_HEAD(&adapter->rwi_list); 4762 - mutex_init(&adapter->reset_lock); 4763 4771 mutex_init(&adapter->rwi_lock); 4764 4772 adapter->resetting = false; 4765 4773 ··· 4830 4840 struct ibmvnic_adapter *adapter = netdev_priv(netdev); 4831 4841 4832 4842 adapter->state = VNIC_REMOVING; 4833 - unregister_netdev(netdev); 4834 - mutex_lock(&adapter->reset_lock); 4843 + rtnl_lock(); 4844 + unregister_netdevice(netdev); 4835 4845 4836 4846 release_resources(adapter); 4837 4847 release_sub_crqs(adapter, 1); ··· 4842 4852 4843 4853 adapter->state = VNIC_REMOVED; 4844 4854 4845 - mutex_unlock(&adapter->reset_lock); 4855 + rtnl_unlock(); 4846 4856 device_remove_file(&dev->dev, &dev_attr_failover); 4847 4857 free_netdev(netdev); 4848 4858 dev_set_drvdata(&dev->dev, NULL);
+1 -1
drivers/net/ethernet/ibm/ibmvnic.h
··· 1075 1075 struct tasklet_struct tasklet; 1076 1076 enum vnic_state state; 1077 1077 enum ibmvnic_reset_reason reset_reason; 1078 - struct mutex reset_lock, rwi_lock; 1078 + struct mutex rwi_lock; 1079 1079 struct list_head rwi_list; 1080 1080 struct work_struct ibmvnic_reset; 1081 1081 bool resetting;
+3 -2
drivers/net/ethernet/lantiq_xrx200.c
··· 512 512 err = register_netdev(net_dev); 513 513 if (err) 514 514 goto err_unprepare_clk; 515 - return err; 515 + 516 + return 0; 516 517 517 518 err_unprepare_clk: 518 519 clk_disable_unprepare(priv->clk); ··· 521 520 err_uninit_dma: 522 521 xrx200_hw_cleanup(priv); 523 522 524 - return 0; 523 + return err; 525 524 } 526 525 527 526 static int xrx200_remove(struct platform_device *pdev)
+3 -9
drivers/net/ethernet/marvell/mvneta.c
··· 3343 3343 if (state->interface != PHY_INTERFACE_MODE_NA && 3344 3344 state->interface != PHY_INTERFACE_MODE_QSGMII && 3345 3345 state->interface != PHY_INTERFACE_MODE_SGMII && 3346 - state->interface != PHY_INTERFACE_MODE_2500BASEX && 3347 3346 !phy_interface_mode_is_8023z(state->interface) && 3348 3347 !phy_interface_mode_is_rgmii(state->interface)) { 3349 3348 bitmap_zero(supported, __ETHTOOL_LINK_MODE_MASK_NBITS); ··· 3356 3357 /* Asymmetric pause is unsupported */ 3357 3358 phylink_set(mask, Pause); 3358 3359 3359 - /* We cannot use 1Gbps when using the 2.5G interface. */ 3360 - if (state->interface == PHY_INTERFACE_MODE_2500BASEX) { 3361 - phylink_set(mask, 2500baseT_Full); 3362 - phylink_set(mask, 2500baseX_Full); 3363 - } else { 3364 - phylink_set(mask, 1000baseT_Full); 3365 - phylink_set(mask, 1000baseX_Full); 3366 - } 3360 + /* Half-duplex at speeds higher than 100Mbit is unsupported */ 3361 + phylink_set(mask, 1000baseT_Full); 3362 + phylink_set(mask, 1000baseX_Full); 3367 3363 3368 3364 if (!phy_interface_mode_is_8023z(state->interface)) { 3369 3365 /* 10M and 100M are only supported in non-802.3z mode */
+1 -1
drivers/net/ethernet/mellanox/mlx4/alloc.c
··· 337 337 static u32 __mlx4_alloc_from_zone(struct mlx4_zone_entry *zone, int count, 338 338 int align, u32 skip_mask, u32 *puid) 339 339 { 340 - u32 uid; 340 + u32 uid = 0; 341 341 u32 res; 342 342 struct mlx4_zone_allocator *zone_alloc = zone->allocator; 343 343 struct mlx4_zone_entry *curr_node;
+2 -2
drivers/net/ethernet/mellanox/mlx4/mlx4.h
··· 540 540 struct resource_allocator { 541 541 spinlock_t alloc_lock; /* protect quotas */ 542 542 union { 543 - int res_reserved; 544 - int res_port_rsvd[MLX4_MAX_PORTS]; 543 + unsigned int res_reserved; 544 + unsigned int res_port_rsvd[MLX4_MAX_PORTS]; 545 545 }; 546 546 union { 547 547 int res_free;
+1
drivers/net/ethernet/mellanox/mlx4/mr.c
··· 363 363 container_of((void *)mpt_entry, struct mlx4_cmd_mailbox, 364 364 buf); 365 365 366 + (*mpt_entry)->lkey = 0; 366 367 err = mlx4_SW2HW_MPT(dev, mailbox, key); 367 368 } 368 369
+1
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 569 569 570 570 unsigned long state; 571 571 int ix; 572 + unsigned int hw_mtu; 572 573 573 574 struct net_dim dim; /* Dynamic Interrupt Moderation */ 574 575
+15 -21
drivers/net/ethernet/mellanox/mlx5/core/en/port.c
··· 88 88 89 89 eth_proto_oper = MLX5_GET(ptys_reg, out, eth_proto_oper); 90 90 *speed = mlx5e_port_ptys2speed(eth_proto_oper); 91 - if (!(*speed)) { 92 - mlx5_core_warn(mdev, "cannot get port speed\n"); 91 + if (!(*speed)) 93 92 err = -EINVAL; 94 - } 95 93 96 94 return err; 97 95 } ··· 256 258 case 40000: 257 259 if (!write) 258 260 *fec_policy = MLX5_GET(pplm_reg, pplm, 259 - fec_override_cap_10g_40g); 261 + fec_override_admin_10g_40g); 260 262 else 261 263 MLX5_SET(pplm_reg, pplm, 262 264 fec_override_admin_10g_40g, *fec_policy); ··· 308 310 case 10000: 309 311 case 40000: 310 312 *fec_cap = MLX5_GET(pplm_reg, pplm, 311 - fec_override_admin_10g_40g); 313 + fec_override_cap_10g_40g); 312 314 break; 313 315 case 25000: 314 316 *fec_cap = MLX5_GET(pplm_reg, pplm, ··· 392 394 393 395 int mlx5e_set_fec_mode(struct mlx5_core_dev *dev, u8 fec_policy) 394 396 { 397 + u8 fec_policy_nofec = BIT(MLX5E_FEC_NOFEC); 395 398 bool fec_mode_not_supp_in_speed = false; 396 - u8 no_fec_policy = BIT(MLX5E_FEC_NOFEC); 397 399 u32 out[MLX5_ST_SZ_DW(pplm_reg)] = {}; 398 400 u32 in[MLX5_ST_SZ_DW(pplm_reg)] = {}; 399 401 int sz = MLX5_ST_SZ_BYTES(pplm_reg); 400 - u32 current_fec_speed; 402 + u8 fec_policy_auto = 0; 401 403 u8 fec_caps = 0; 402 404 int err; 403 405 int i; ··· 413 415 if (err) 414 416 return err; 415 417 416 - err = mlx5e_port_linkspeed(dev, &current_fec_speed); 417 - if (err) 418 - return err; 418 + MLX5_SET(pplm_reg, out, local_port, 1); 419 419 420 - memset(in, 0, sz); 421 - MLX5_SET(pplm_reg, in, local_port, 1); 422 - for (i = 0; i < MLX5E_FEC_SUPPORTED_SPEEDS && !!fec_policy; i++) { 420 + for (i = 0; i < MLX5E_FEC_SUPPORTED_SPEEDS; i++) { 423 421 mlx5e_get_fec_cap_field(out, &fec_caps, fec_supported_speeds[i]); 424 - /* policy supported for link speed */ 425 - if (!!(fec_caps & fec_policy)) { 426 - mlx5e_fec_admin_field(in, &fec_policy, 1, 422 + /* policy supported for link speed, or policy is auto */ 423 + if (fec_caps & fec_policy || fec_policy == fec_policy_auto) { 
424 + mlx5e_fec_admin_field(out, &fec_policy, 1, 427 425 fec_supported_speeds[i]); 428 426 } else { 429 - if (fec_supported_speeds[i] == current_fec_speed) 430 - return -EOPNOTSUPP; 431 - mlx5e_fec_admin_field(in, &no_fec_policy, 1, 432 - fec_supported_speeds[i]); 427 + /* turn off FEC if supported. Else, leave it the same */ 428 + if (fec_caps & fec_policy_nofec) 429 + mlx5e_fec_admin_field(out, &fec_policy_nofec, 1, 430 + fec_supported_speeds[i]); 433 431 fec_mode_not_supp_in_speed = true; 434 432 } 435 433 } ··· 435 441 "FEC policy 0x%x is not supported for some speeds", 436 442 fec_policy); 437 443 438 - return mlx5_core_access_reg(dev, in, sz, out, sz, MLX5_REG_PPLM, 0, 1); 444 + return mlx5_core_access_reg(dev, out, sz, out, sz, MLX5_REG_PPLM, 0, 1); 439 445 }
+3 -1
drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
··· 130 130 int err; 131 131 132 132 err = mlx5e_port_linkspeed(priv->mdev, &speed); 133 - if (err) 133 + if (err) { 134 + mlx5_core_warn(priv->mdev, "cannot get port speed\n"); 134 135 return 0; 136 + } 135 137 136 138 xoff = (301 + 216 * priv->dcbx.cable_len / 100) * speed / 1000 + 272 * mtu / 100; 137 139
+1 -2
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
··· 843 843 ethtool_link_ksettings_add_link_mode(link_ksettings, supported, 844 844 Autoneg); 845 845 846 - err = get_fec_supported_advertised(mdev, link_ksettings); 847 - if (err) 846 + if (get_fec_supported_advertised(mdev, link_ksettings)) 848 847 netdev_dbg(netdev, "%s: FEC caps query failed: %d\n", 849 848 __func__, err); 850 849
+31 -6
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 502 502 rq->channel = c; 503 503 rq->ix = c->ix; 504 504 rq->mdev = mdev; 505 + rq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu); 505 506 rq->stats = &c->priv->channel_stats[c->ix].rq; 506 507 507 508 rq->xdp_prog = params->xdp_prog ? bpf_prog_inc(params->xdp_prog) : NULL; ··· 1624 1623 int err; 1625 1624 u32 i; 1626 1625 1626 + err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn_not_used, &irqn); 1627 + if (err) 1628 + return err; 1629 + 1627 1630 err = mlx5_cqwq_create(mdev, &param->wq, param->cqc, &cq->wq, 1628 1631 &cq->wq_ctrl); 1629 1632 if (err) 1630 1633 return err; 1631 - 1632 - mlx5_vector2eqn(mdev, param->eq_ix, &eqn_not_used, &irqn); 1633 1634 1634 1635 mcq->cqe_sz = 64; 1635 1636 mcq->set_ci_db = cq->wq_ctrl.db.db; ··· 1690 1687 int eqn; 1691 1688 int err; 1692 1689 1690 + err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn, &irqn_not_used); 1691 + if (err) 1692 + return err; 1693 + 1693 1694 inlen = MLX5_ST_SZ_BYTES(create_cq_in) + 1694 1695 sizeof(u64) * cq->wq_ctrl.buf.npages; 1695 1696 in = kvzalloc(inlen, GFP_KERNEL); ··· 1706 1699 1707 1700 mlx5_fill_page_frag_array(&cq->wq_ctrl.buf, 1708 1701 (__be64 *)MLX5_ADDR_OF(create_cq_in, in, pas)); 1709 - 1710 - mlx5_vector2eqn(mdev, param->eq_ix, &eqn, &irqn_not_used); 1711 1702 1712 1703 MLX5_SET(cqc, cqc, cq_period_mode, param->cq_period_mode); 1713 1704 MLX5_SET(cqc, cqc, c_eqn, eqn); ··· 1926 1921 int err; 1927 1922 int eqn; 1928 1923 1924 + err = mlx5_vector2eqn(priv->mdev, ix, &eqn, &irq); 1925 + if (err) 1926 + return err; 1927 + 1929 1928 c = kvzalloc_node(sizeof(*c), GFP_KERNEL, cpu_to_node(cpu)); 1930 1929 if (!c) 1931 1930 return -ENOMEM; ··· 1946 1937 c->xdp = !!params->xdp_prog; 1947 1938 c->stats = &priv->channel_stats[ix].ch; 1948 1939 1949 - mlx5_vector2eqn(priv->mdev, ix, &eqn, &irq); 1950 1940 c->irq_desc = irq_to_desc(irq); 1951 1941 1952 1942 netif_napi_add(netdev, &c->napi, mlx5e_napi_poll, 64); ··· 3582 3574 return 0; 3583 3575 } 3584 3576 3577 + #ifdef CONFIG_MLX5_ESWITCH 3585 3578 
static int set_feature_tc_num_filters(struct net_device *netdev, bool enable) 3586 3579 { 3587 3580 struct mlx5e_priv *priv = netdev_priv(netdev); ··· 3595 3586 3596 3587 return 0; 3597 3588 } 3589 + #endif 3598 3590 3599 3591 static int set_feature_rx_all(struct net_device *netdev, bool enable) 3600 3592 { ··· 3694 3684 err |= MLX5E_HANDLE_FEATURE(NETIF_F_LRO, set_feature_lro); 3695 3685 err |= MLX5E_HANDLE_FEATURE(NETIF_F_HW_VLAN_CTAG_FILTER, 3696 3686 set_feature_cvlan_filter); 3687 + #ifdef CONFIG_MLX5_ESWITCH 3697 3688 err |= MLX5E_HANDLE_FEATURE(NETIF_F_HW_TC, set_feature_tc_num_filters); 3689 + #endif 3698 3690 err |= MLX5E_HANDLE_FEATURE(NETIF_F_RXALL, set_feature_rx_all); 3699 3691 err |= MLX5E_HANDLE_FEATURE(NETIF_F_RXFCS, set_feature_rx_fcs); 3700 3692 err |= MLX5E_HANDLE_FEATURE(NETIF_F_HW_VLAN_CTAG_RX, set_feature_rx_vlan); ··· 3767 3755 } 3768 3756 3769 3757 if (params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) { 3758 + bool is_linear = mlx5e_rx_mpwqe_is_linear_skb(priv->mdev, &new_channels.params); 3770 3759 u8 ppw_old = mlx5e_mpwqe_log_pkts_per_wqe(params); 3771 3760 u8 ppw_new = mlx5e_mpwqe_log_pkts_per_wqe(&new_channels.params); 3772 3761 3773 - reset = reset && (ppw_old != ppw_new); 3762 + reset = reset && (is_linear || (ppw_old != ppw_new)); 3774 3763 } 3775 3764 3776 3765 if (!reset) { ··· 4691 4678 FT_CAP(modify_root) && 4692 4679 FT_CAP(identified_miss_table_mode) && 4693 4680 FT_CAP(flow_table_modify)) { 4681 + #ifdef CONFIG_MLX5_ESWITCH 4694 4682 netdev->hw_features |= NETIF_F_HW_TC; 4683 + #endif 4695 4684 #ifdef CONFIG_MLX5_EN_ARFS 4696 4685 netdev->hw_features |= NETIF_F_NTUPLE; 4697 4686 #endif ··· 5019 5004 int mlx5e_attach_netdev(struct mlx5e_priv *priv) 5020 5005 { 5021 5006 const struct mlx5e_profile *profile; 5007 + int max_nch; 5022 5008 int err; 5023 5009 5024 5010 profile = priv->profile; 5025 5011 clear_bit(MLX5E_STATE_DESTROYING, &priv->state); 5012 + 5013 + /* max number of channels may have changed */ 5014 + 
max_nch = mlx5e_get_max_num_channels(priv->mdev); 5015 + if (priv->channels.params.num_channels > max_nch) { 5016 + mlx5_core_warn(priv->mdev, "MLX5E: Reducing number of channels to %d\n", max_nch); 5017 + priv->channels.params.num_channels = max_nch; 5018 + mlx5e_build_default_indir_rqt(priv->channels.params.indirection_rqt, 5019 + MLX5E_INDIR_RQT_SIZE, max_nch); 5020 + } 5026 5021 5027 5022 err = profile->init_tx(priv); 5028 5023 if (err)
+6
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 1104 1104 u32 frag_size; 1105 1105 bool consumed; 1106 1106 1107 + /* Check packet size. Note LRO doesn't use linear SKB */ 1108 + if (unlikely(cqe_bcnt > rq->hw_mtu)) { 1109 + rq->stats->oversize_pkts_sw_drop++; 1110 + return NULL; 1111 + } 1112 + 1107 1113 va = page_address(di->page) + head_offset; 1108 1114 data = va + rx_headroom; 1109 1115 frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt32);
+10 -16
drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
··· 98 98 return 1; 99 99 } 100 100 101 - #ifdef CONFIG_INET 102 - /* loopback test */ 103 - #define MLX5E_TEST_PKT_SIZE (MLX5E_RX_MAX_HEAD - NET_IP_ALIGN) 104 - static const char mlx5e_test_text[ETH_GSTRING_LEN] = "MLX5E SELF TEST"; 105 - #define MLX5E_TEST_MAGIC 0x5AEED15C001ULL 106 - 107 101 struct mlx5ehdr { 108 102 __be32 version; 109 103 __be64 magic; 110 - char text[ETH_GSTRING_LEN]; 111 104 }; 105 + 106 + #ifdef CONFIG_INET 107 + /* loopback test */ 108 + #define MLX5E_TEST_PKT_SIZE (sizeof(struct ethhdr) + sizeof(struct iphdr) +\ 109 + sizeof(struct udphdr) + sizeof(struct mlx5ehdr)) 110 + #define MLX5E_TEST_MAGIC 0x5AEED15C001ULL 112 111 113 112 static struct sk_buff *mlx5e_test_get_udp_skb(struct mlx5e_priv *priv) 114 113 { ··· 116 117 struct ethhdr *ethh; 117 118 struct udphdr *udph; 118 119 struct iphdr *iph; 119 - int datalen, iplen; 120 - 121 - datalen = MLX5E_TEST_PKT_SIZE - 122 - (sizeof(*ethh) + sizeof(*iph) + sizeof(*udph)); 120 + int iplen; 123 121 124 122 skb = netdev_alloc_skb(priv->netdev, MLX5E_TEST_PKT_SIZE); 125 123 if (!skb) { ··· 145 149 /* Fill UDP header */ 146 150 udph->source = htons(9); 147 151 udph->dest = htons(9); /* Discard Protocol */ 148 - udph->len = htons(datalen + sizeof(struct udphdr)); 152 + udph->len = htons(sizeof(struct mlx5ehdr) + sizeof(struct udphdr)); 149 153 udph->check = 0; 150 154 151 155 /* Fill IP header */ ··· 153 157 iph->ttl = 32; 154 158 iph->version = 4; 155 159 iph->protocol = IPPROTO_UDP; 156 - iplen = sizeof(struct iphdr) + sizeof(struct udphdr) + datalen; 160 + iplen = sizeof(struct iphdr) + sizeof(struct udphdr) + 161 + sizeof(struct mlx5ehdr); 157 162 iph->tot_len = htons(iplen); 158 163 iph->frag_off = 0; 159 164 iph->saddr = 0; ··· 167 170 mlxh = skb_put(skb, sizeof(*mlxh)); 168 171 mlxh->version = 0; 169 172 mlxh->magic = cpu_to_be64(MLX5E_TEST_MAGIC); 170 - strlcpy(mlxh->text, mlx5e_test_text, sizeof(mlxh->text)); 171 - datalen -= sizeof(*mlxh); 172 - skb_put_zero(skb, datalen); 173 173 174 174 
skb->csum = 0; 175 175 skb->ip_summed = CHECKSUM_PARTIAL;
+3
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
··· 83 83 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_wqe_err) }, 84 84 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_mpwqe_filler_cqes) }, 85 85 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_mpwqe_filler_strides) }, 86 + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_oversize_pkts_sw_drop) }, 86 87 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_buff_alloc_err) }, 87 88 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cqe_compress_blks) }, 88 89 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cqe_compress_pkts) }, ··· 162 161 s->rx_wqe_err += rq_stats->wqe_err; 163 162 s->rx_mpwqe_filler_cqes += rq_stats->mpwqe_filler_cqes; 164 163 s->rx_mpwqe_filler_strides += rq_stats->mpwqe_filler_strides; 164 + s->rx_oversize_pkts_sw_drop += rq_stats->oversize_pkts_sw_drop; 165 165 s->rx_buff_alloc_err += rq_stats->buff_alloc_err; 166 166 s->rx_cqe_compress_blks += rq_stats->cqe_compress_blks; 167 167 s->rx_cqe_compress_pkts += rq_stats->cqe_compress_pkts; ··· 1191 1189 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, wqe_err) }, 1192 1190 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, mpwqe_filler_cqes) }, 1193 1191 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, mpwqe_filler_strides) }, 1192 + { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, oversize_pkts_sw_drop) }, 1194 1193 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, buff_alloc_err) }, 1195 1194 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cqe_compress_blks) }, 1196 1195 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cqe_compress_pkts) },
+2
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
··· 96 96 u64 rx_wqe_err; 97 97 u64 rx_mpwqe_filler_cqes; 98 98 u64 rx_mpwqe_filler_strides; 99 + u64 rx_oversize_pkts_sw_drop; 99 100 u64 rx_buff_alloc_err; 100 101 u64 rx_cqe_compress_blks; 101 102 u64 rx_cqe_compress_pkts; ··· 194 193 u64 wqe_err; 195 194 u64 mpwqe_filler_cqes; 196 195 u64 mpwqe_filler_strides; 196 + u64 oversize_pkts_sw_drop; 197 197 u64 buff_alloc_err; 198 198 u64 cqe_compress_blks; 199 199 u64 cqe_compress_pkts;
+35 -34
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1447 1447 inner_headers); 1448 1448 } 1449 1449 1450 - if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) { 1451 - struct flow_dissector_key_eth_addrs *key = 1450 + if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) { 1451 + struct flow_dissector_key_basic *key = 1452 1452 skb_flow_dissector_target(f->dissector, 1453 - FLOW_DISSECTOR_KEY_ETH_ADDRS, 1453 + FLOW_DISSECTOR_KEY_BASIC, 1454 1454 f->key); 1455 - struct flow_dissector_key_eth_addrs *mask = 1455 + struct flow_dissector_key_basic *mask = 1456 1456 skb_flow_dissector_target(f->dissector, 1457 - FLOW_DISSECTOR_KEY_ETH_ADDRS, 1457 + FLOW_DISSECTOR_KEY_BASIC, 1458 1458 f->mask); 1459 + MLX5_SET(fte_match_set_lyr_2_4, headers_c, ethertype, 1460 + ntohs(mask->n_proto)); 1461 + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, 1462 + ntohs(key->n_proto)); 1459 1463 1460 - ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c, 1461 - dmac_47_16), 1462 - mask->dst); 1463 - ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, 1464 - dmac_47_16), 1465 - key->dst); 1466 - 1467 - ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c, 1468 - smac_47_16), 1469 - mask->src); 1470 - ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, 1471 - smac_47_16), 1472 - key->src); 1473 - 1474 - if (!is_zero_ether_addr(mask->src) || !is_zero_ether_addr(mask->dst)) 1464 + if (mask->n_proto) 1475 1465 *match_level = MLX5_MATCH_L2; 1476 1466 } 1477 1467 ··· 1495 1505 1496 1506 *match_level = MLX5_MATCH_L2; 1497 1507 } 1498 - } else { 1508 + } else if (*match_level != MLX5_MATCH_NONE) { 1499 1509 MLX5_SET(fte_match_set_lyr_2_4, headers_c, svlan_tag, 1); 1500 1510 MLX5_SET(fte_match_set_lyr_2_4, headers_c, cvlan_tag, 1); 1511 + *match_level = MLX5_MATCH_L2; 1501 1512 } 1502 1513 1503 1514 if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CVLAN)) { ··· 1536 1545 } 1537 1546 } 1538 1547 1539 - if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) { 
1540 - struct flow_dissector_key_basic *key = 1548 + if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) { 1549 + struct flow_dissector_key_eth_addrs *key = 1541 1550 skb_flow_dissector_target(f->dissector, 1542 - FLOW_DISSECTOR_KEY_BASIC, 1551 + FLOW_DISSECTOR_KEY_ETH_ADDRS, 1543 1552 f->key); 1544 - struct flow_dissector_key_basic *mask = 1553 + struct flow_dissector_key_eth_addrs *mask = 1545 1554 skb_flow_dissector_target(f->dissector, 1546 - FLOW_DISSECTOR_KEY_BASIC, 1555 + FLOW_DISSECTOR_KEY_ETH_ADDRS, 1547 1556 f->mask); 1548 - MLX5_SET(fte_match_set_lyr_2_4, headers_c, ethertype, 1549 - ntohs(mask->n_proto)); 1550 - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, 1551 - ntohs(key->n_proto)); 1552 1557 1553 - if (mask->n_proto) 1558 + ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c, 1559 + dmac_47_16), 1560 + mask->dst); 1561 + ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, 1562 + dmac_47_16), 1563 + key->dst); 1564 + 1565 + ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c, 1566 + smac_47_16), 1567 + mask->src); 1568 + ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, 1569 + smac_47_16), 1570 + key->src); 1571 + 1572 + if (!is_zero_ether_addr(mask->src) || !is_zero_ether_addr(mask->dst)) 1554 1573 *match_level = MLX5_MATCH_L2; 1555 1574 } 1556 1575 ··· 1587 1586 1588 1587 /* the HW doesn't need L3 inline to match on frag=no */ 1589 1588 if (!(key->flags & FLOW_DIS_IS_FRAGMENT)) 1590 - *match_level = MLX5_INLINE_MODE_L2; 1589 + *match_level = MLX5_MATCH_L2; 1591 1590 /* *** L2 attributes parsing up to here *** */ 1592 1591 else 1593 - *match_level = MLX5_INLINE_MODE_IP; 1592 + *match_level = MLX5_MATCH_L3; 1594 1593 } 1595 1594 } 1596 1595 ··· 2980 2979 if (!actions_match_supported(priv, exts, parse_attr, flow, extack)) 2981 2980 return -EOPNOTSUPP; 2982 2981 2983 - if (attr->out_count > 1 && !mlx5_esw_has_fwd_fdb(priv->mdev)) { 2982 + if (attr->mirror_count > 0 && 
!mlx5_esw_has_fwd_fdb(priv->mdev)) { 2984 2983 NL_SET_ERR_MSG_MOD(extack, 2985 2984 "current firmware doesn't support split rule for port mirroring"); 2986 2985 netdev_warn_once(priv->netdev, "current firmware doesn't support split rule for port mirroring\n");
+8 -2
drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
··· 83 83 }; 84 84 85 85 static const struct rhashtable_params rhash_sa = { 86 - .key_len = FIELD_SIZEOF(struct mlx5_fpga_ipsec_sa_ctx, hw_sa), 87 - .key_offset = offsetof(struct mlx5_fpga_ipsec_sa_ctx, hw_sa), 86 + /* Keep out "cmd" field from the key as it's 87 + * value is not constant during the lifetime 88 + * of the key object. 89 + */ 90 + .key_len = FIELD_SIZEOF(struct mlx5_fpga_ipsec_sa_ctx, hw_sa) - 91 + FIELD_SIZEOF(struct mlx5_ifc_fpga_ipsec_sa_v1, cmd), 92 + .key_offset = offsetof(struct mlx5_fpga_ipsec_sa_ctx, hw_sa) + 93 + FIELD_SIZEOF(struct mlx5_ifc_fpga_ipsec_sa_v1, cmd), 88 94 .head_offset = offsetof(struct mlx5_fpga_ipsec_sa_ctx, hash), 89 95 .automatic_shrinking = true, 90 96 .min_size = 1,
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
··· 560 560 561 561 netif_carrier_off(epriv->netdev); 562 562 mlx5_fs_remove_rx_underlay_qpn(mdev, ipriv->qp.qpn); 563 - mlx5i_uninit_underlay_qp(epriv); 564 563 mlx5e_deactivate_priv_channels(epriv); 565 564 mlx5e_close_channels(&epriv->channels); 565 + mlx5i_uninit_underlay_qp(epriv); 566 566 unlock: 567 567 mutex_unlock(&epriv->state_lock); 568 568 return 0;
+7 -7
drivers/net/ethernet/qlogic/qed/qed_dcbx.c
··· 191 191 static void 192 192 qed_dcbx_set_params(struct qed_dcbx_results *p_data, 193 193 struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, 194 - bool enable, u8 prio, u8 tc, 194 + bool app_tlv, bool enable, u8 prio, u8 tc, 195 195 enum dcbx_protocol_type type, 196 196 enum qed_pci_personality personality) 197 197 { ··· 210 210 p_data->arr[type].dont_add_vlan0 = true; 211 211 212 212 /* QM reconf data */ 213 - if (p_hwfn->hw_info.personality == personality) 213 + if (app_tlv && p_hwfn->hw_info.personality == personality) 214 214 qed_hw_info_set_offload_tc(&p_hwfn->hw_info, tc); 215 215 216 216 /* Configure dcbx vlan priority in doorbell block for roce EDPM */ ··· 225 225 static void 226 226 qed_dcbx_update_app_info(struct qed_dcbx_results *p_data, 227 227 struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, 228 - bool enable, u8 prio, u8 tc, 228 + bool app_tlv, bool enable, u8 prio, u8 tc, 229 229 enum dcbx_protocol_type type) 230 230 { 231 231 enum qed_pci_personality personality; ··· 240 240 241 241 personality = qed_dcbx_app_update[i].personality; 242 242 243 - qed_dcbx_set_params(p_data, p_hwfn, p_ptt, enable, 243 + qed_dcbx_set_params(p_data, p_hwfn, p_ptt, app_tlv, enable, 244 244 prio, tc, type, personality); 245 245 } 246 246 } ··· 319 319 enable = true; 320 320 } 321 321 322 - qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable, 323 - priority, tc, type); 322 + qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, true, 323 + enable, priority, tc, type); 324 324 } 325 325 } 326 326 ··· 341 341 continue; 342 342 343 343 enable = (type == DCBX_PROTOCOL_ETH) ? false : !!dcbx_version; 344 - qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable, 344 + qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, false, enable, 345 345 priority, tc, type); 346 346 } 347 347
+35 -9
drivers/net/ethernet/qlogic/qed/qed_dev.c
··· 185 185 qed_iscsi_free(p_hwfn); 186 186 qed_ooo_free(p_hwfn); 187 187 } 188 + 189 + if (QED_IS_RDMA_PERSONALITY(p_hwfn)) 190 + qed_rdma_info_free(p_hwfn); 191 + 188 192 qed_iov_free(p_hwfn); 189 193 qed_l2_free(p_hwfn); 190 194 qed_dmae_info_free(p_hwfn); ··· 485 481 struct qed_qm_info *qm_info = &p_hwfn->qm_info; 486 482 487 483 /* Can't have multiple flags set here */ 488 - if (bitmap_weight((unsigned long *)&pq_flags, sizeof(pq_flags)) > 1) 484 + if (bitmap_weight((unsigned long *)&pq_flags, 485 + sizeof(pq_flags) * BITS_PER_BYTE) > 1) { 486 + DP_ERR(p_hwfn, "requested multiple pq flags 0x%x\n", pq_flags); 489 487 goto err; 488 + } 489 + 490 + if (!(qed_get_pq_flags(p_hwfn) & pq_flags)) { 491 + DP_ERR(p_hwfn, "pq flag 0x%x is not set\n", pq_flags); 492 + goto err; 493 + } 490 494 491 495 switch (pq_flags) { 492 496 case PQ_FLAGS_RLS: ··· 518 506 } 519 507 520 508 err: 521 - DP_ERR(p_hwfn, "BAD pq flags %d\n", pq_flags); 522 - return NULL; 509 + return &qm_info->start_pq; 523 510 } 524 511 525 512 /* save pq index in qm info */ ··· 542 531 { 543 532 u8 max_tc = qed_init_qm_get_num_tcs(p_hwfn); 544 533 534 + if (max_tc == 0) { 535 + DP_ERR(p_hwfn, "pq with flag 0x%lx do not exist\n", 536 + PQ_FLAGS_MCOS); 537 + return p_hwfn->qm_info.start_pq; 538 + } 539 + 545 540 if (tc > max_tc) 546 541 DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc); 547 542 548 - return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + tc; 543 + return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + (tc % max_tc); 549 544 } 550 545 551 546 u16 qed_get_cm_pq_idx_vf(struct qed_hwfn *p_hwfn, u16 vf) 552 547 { 553 548 u16 max_vf = qed_init_qm_get_num_vfs(p_hwfn); 554 549 550 + if (max_vf == 0) { 551 + DP_ERR(p_hwfn, "pq with flag 0x%lx do not exist\n", 552 + PQ_FLAGS_VFS); 553 + return p_hwfn->qm_info.start_pq; 554 + } 555 + 555 556 if (vf > max_vf) 556 557 DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf); 557 558 558 - return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + vf; 559 + 
return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + (vf % max_vf); 559 560 } 560 561 561 562 u16 qed_get_cm_pq_idx_ofld_mtc(struct qed_hwfn *p_hwfn, u8 tc) ··· 1100 1077 if (rc) 1101 1078 goto alloc_err; 1102 1079 rc = qed_ooo_alloc(p_hwfn); 1080 + if (rc) 1081 + goto alloc_err; 1082 + } 1083 + 1084 + if (QED_IS_RDMA_PERSONALITY(p_hwfn)) { 1085 + rc = qed_rdma_info_alloc(p_hwfn); 1103 1086 if (rc) 1104 1087 goto alloc_err; 1105 1088 } ··· 2131 2102 if (!p_ptt) 2132 2103 return -EAGAIN; 2133 2104 2134 - /* If roce info is allocated it means roce is initialized and should 2135 - * be enabled in searcher. 2136 - */ 2137 2105 if (p_hwfn->p_rdma_info && 2138 - p_hwfn->b_rdma_enabled_in_prs) 2106 + p_hwfn->p_rdma_info->active && p_hwfn->b_rdma_enabled_in_prs) 2139 2107 qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0x1); 2140 2108 2141 2109 /* Re-open incoming traffic */
+2
drivers/net/ethernet/qlogic/qed/qed_int.c
··· 992 992 */ 993 993 do { 994 994 index = p_sb_attn->sb_index; 995 + /* finish reading index before the loop condition */ 996 + dma_rmb(); 995 997 attn_bits = le32_to_cpu(p_sb_attn->atten_bits); 996 998 attn_acks = le32_to_cpu(p_sb_attn->atten_ack); 997 999 } while (index != p_sb_attn->sb_index);
+1 -1
drivers/net/ethernet/qlogic/qed/qed_main.c
··· 1782 1782 return -EBUSY; 1783 1783 } 1784 1784 rc = qed_mcp_drain(hwfn, ptt); 1785 + qed_ptt_release(hwfn, ptt); 1785 1786 if (rc) 1786 1787 return rc; 1787 - qed_ptt_release(hwfn, ptt); 1788 1788 } 1789 1789 1790 1790 return 0;
+29 -21
drivers/net/ethernet/qlogic/qed/qed_rdma.c
··· 140 140 return FEAT_NUM((struct qed_hwfn *)p_hwfn, QED_PF_L2_QUE) + rel_sb_id; 141 141 } 142 142 143 - static int qed_rdma_alloc(struct qed_hwfn *p_hwfn, 144 - struct qed_ptt *p_ptt, 145 - struct qed_rdma_start_in_params *params) 143 + int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn) 146 144 { 147 145 struct qed_rdma_info *p_rdma_info; 146 + 147 + p_rdma_info = kzalloc(sizeof(*p_rdma_info), GFP_KERNEL); 148 + if (!p_rdma_info) 149 + return -ENOMEM; 150 + 151 + spin_lock_init(&p_rdma_info->lock); 152 + 153 + p_hwfn->p_rdma_info = p_rdma_info; 154 + return 0; 155 + } 156 + 157 + void qed_rdma_info_free(struct qed_hwfn *p_hwfn) 158 + { 159 + kfree(p_hwfn->p_rdma_info); 160 + p_hwfn->p_rdma_info = NULL; 161 + } 162 + 163 + static int qed_rdma_alloc(struct qed_hwfn *p_hwfn) 164 + { 165 + struct qed_rdma_info *p_rdma_info = p_hwfn->p_rdma_info; 148 166 u32 num_cons, num_tasks; 149 167 int rc = -ENOMEM; 150 168 151 169 DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Allocating RDMA\n"); 152 170 153 - /* Allocate a struct with current pf rdma info */ 154 - p_rdma_info = kzalloc(sizeof(*p_rdma_info), GFP_KERNEL); 155 - if (!p_rdma_info) 156 - return rc; 157 - 158 - p_hwfn->p_rdma_info = p_rdma_info; 159 171 if (QED_IS_IWARP_PERSONALITY(p_hwfn)) 160 172 p_rdma_info->proto = PROTOCOLID_IWARP; 161 173 else ··· 195 183 /* Allocate a struct with device params and fill it */ 196 184 p_rdma_info->dev = kzalloc(sizeof(*p_rdma_info->dev), GFP_KERNEL); 197 185 if (!p_rdma_info->dev) 198 - goto free_rdma_info; 186 + return rc; 199 187 200 188 /* Allocate a struct with port params and fill it */ 201 189 p_rdma_info->port = kzalloc(sizeof(*p_rdma_info->port), GFP_KERNEL); ··· 310 298 kfree(p_rdma_info->port); 311 299 free_rdma_dev: 312 300 kfree(p_rdma_info->dev); 313 - free_rdma_info: 314 - kfree(p_rdma_info); 315 301 316 302 return rc; 317 303 } ··· 380 370 381 371 kfree(p_rdma_info->port); 382 372 kfree(p_rdma_info->dev); 383 - 384 - kfree(p_rdma_info); 385 373 } 386 374 387 375 static void 
qed_rdma_free_tid(void *rdma_cxt, u32 itid) ··· 687 679 688 680 DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "RDMA setup\n"); 689 681 690 - spin_lock_init(&p_hwfn->p_rdma_info->lock); 691 - 692 682 qed_rdma_init_devinfo(p_hwfn, params); 693 683 qed_rdma_init_port(p_hwfn); 694 684 qed_rdma_init_events(p_hwfn, params); ··· 733 727 /* Disable RoCE search */ 734 728 qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0); 735 729 p_hwfn->b_rdma_enabled_in_prs = false; 736 - 730 + p_hwfn->p_rdma_info->active = 0; 737 731 qed_wr(p_hwfn, p_ptt, PRS_REG_ROCE_DEST_QP_MAX_PF, 0); 738 732 739 733 ll2_ethertype_en = qed_rd(p_hwfn, p_ptt, PRS_REG_LIGHT_L2_ETHERTYPE_EN); ··· 1242 1236 u8 max_stats_queues; 1243 1237 int rc; 1244 1238 1245 - if (!rdma_cxt || !in_params || !out_params || !p_hwfn->p_rdma_info) { 1239 + if (!rdma_cxt || !in_params || !out_params || 1240 + !p_hwfn->p_rdma_info->active) { 1246 1241 DP_ERR(p_hwfn->cdev, 1247 1242 "qed roce create qp failed due to NULL entry (rdma_cxt=%p, in=%p, out=%p, roce_info=?\n", 1248 1243 rdma_cxt, in_params, out_params); ··· 1809 1802 { 1810 1803 bool result; 1811 1804 1812 - /* if rdma info has not been allocated, naturally there are no qps */ 1813 - if (!p_hwfn->p_rdma_info) 1805 + /* if rdma wasn't activated yet, naturally there are no qps */ 1806 + if (!p_hwfn->p_rdma_info->active) 1814 1807 return false; 1815 1808 1816 1809 spin_lock_bh(&p_hwfn->p_rdma_info->lock); ··· 1856 1849 if (!p_ptt) 1857 1850 goto err; 1858 1851 1859 - rc = qed_rdma_alloc(p_hwfn, p_ptt, params); 1852 + rc = qed_rdma_alloc(p_hwfn); 1860 1853 if (rc) 1861 1854 goto err1; 1862 1855 ··· 1865 1858 goto err2; 1866 1859 1867 1860 qed_ptt_release(p_hwfn, p_ptt); 1861 + p_hwfn->p_rdma_info->active = 1; 1868 1862 1869 1863 return rc; 1870 1864
+5
drivers/net/ethernet/qlogic/qed/qed_rdma.h
··· 102 102 u16 max_queue_zones; 103 103 enum protocol_type proto; 104 104 struct qed_iwarp_info iwarp; 105 + u8 active:1; 105 106 }; 106 107 107 108 struct qed_rdma_qp { ··· 177 176 #if IS_ENABLED(CONFIG_QED_RDMA) 178 177 void qed_rdma_dpm_bar(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt); 179 178 void qed_rdma_dpm_conf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt); 179 + int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn); 180 + void qed_rdma_info_free(struct qed_hwfn *p_hwfn); 180 181 #else 181 182 static inline void qed_rdma_dpm_conf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) {} 182 183 static inline void qed_rdma_dpm_bar(struct qed_hwfn *p_hwfn, 183 184 struct qed_ptt *p_ptt) {} 185 + static inline int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn) {return -EINVAL;} 186 + static inline void qed_rdma_info_free(struct qed_hwfn *p_hwfn) {} 184 187 #endif 185 188 186 189 int
+5 -5
drivers/net/phy/mdio-gpio.c
··· 63 63 * assume the pin serves as pull-up. If direction is 64 64 * output, the default value is high. 65 65 */ 66 - gpiod_set_value(bitbang->mdo, 1); 66 + gpiod_set_value_cansleep(bitbang->mdo, 1); 67 67 return; 68 68 } 69 69 ··· 78 78 struct mdio_gpio_info *bitbang = 79 79 container_of(ctrl, struct mdio_gpio_info, ctrl); 80 80 81 - return gpiod_get_value(bitbang->mdio); 81 + return gpiod_get_value_cansleep(bitbang->mdio); 82 82 } 83 83 84 84 static void mdio_set(struct mdiobb_ctrl *ctrl, int what) ··· 87 87 container_of(ctrl, struct mdio_gpio_info, ctrl); 88 88 89 89 if (bitbang->mdo) 90 - gpiod_set_value(bitbang->mdo, what); 90 + gpiod_set_value_cansleep(bitbang->mdo, what); 91 91 else 92 - gpiod_set_value(bitbang->mdio, what); 92 + gpiod_set_value_cansleep(bitbang->mdio, what); 93 93 } 94 94 95 95 static void mdc_set(struct mdiobb_ctrl *ctrl, int what) ··· 97 97 struct mdio_gpio_info *bitbang = 98 98 container_of(ctrl, struct mdio_gpio_info, ctrl); 99 99 100 - gpiod_set_value(bitbang->mdc, what); 100 + gpiod_set_value_cansleep(bitbang->mdc, what); 101 101 } 102 102 103 103 static const struct mdiobb_ops mdio_gpio_ops = {
+5 -9
drivers/net/phy/mscc.c
··· 810 810 811 811 phydev->mdix_ctrl = ETH_TP_MDI_AUTO; 812 812 mutex_lock(&phydev->lock); 813 - rc = phy_select_page(phydev, MSCC_PHY_PAGE_EXTENDED_2); 814 - if (rc < 0) 815 - goto out_unlock; 816 813 817 - reg_val = phy_read(phydev, MSCC_PHY_RGMII_CNTL); 818 - reg_val &= ~(RGMII_RX_CLK_DELAY_MASK); 819 - reg_val |= (RGMII_RX_CLK_DELAY_1_1_NS << RGMII_RX_CLK_DELAY_POS); 820 - phy_write(phydev, MSCC_PHY_RGMII_CNTL, reg_val); 814 + reg_val = RGMII_RX_CLK_DELAY_1_1_NS << RGMII_RX_CLK_DELAY_POS; 821 815 822 - out_unlock: 823 - rc = phy_restore_page(phydev, rc, rc > 0 ? 0 : rc); 816 + rc = phy_modify_paged(phydev, MSCC_PHY_PAGE_EXTENDED_2, 817 + MSCC_PHY_RGMII_CNTL, RGMII_RX_CLK_DELAY_MASK, 818 + reg_val); 819 + 824 820 mutex_unlock(&phydev->lock); 825 821 826 822 return rc;
-2
drivers/net/team/team.c
··· 985 985 team->en_port_count--; 986 986 team_queue_override_port_del(team, port); 987 987 team_adjust_ops(team); 988 - team_notify_peers(team); 989 - team_mcast_rejoin(team); 990 988 team_lower_state_changed(port); 991 989 } 992 990
+6 -1
drivers/net/tun.c
··· 1536 1536 1537 1537 if (!rx_batched || (!more && skb_queue_empty(queue))) { 1538 1538 local_bh_disable(); 1539 + skb_record_rx_queue(skb, tfile->queue_index); 1539 1540 netif_receive_skb(skb); 1540 1541 local_bh_enable(); 1541 1542 return; ··· 1556 1555 struct sk_buff *nskb; 1557 1556 1558 1557 local_bh_disable(); 1559 - while ((nskb = __skb_dequeue(&process_queue))) 1558 + while ((nskb = __skb_dequeue(&process_queue))) { 1559 + skb_record_rx_queue(nskb, tfile->queue_index); 1560 1560 netif_receive_skb(nskb); 1561 + } 1562 + skb_record_rx_queue(skb, tfile->queue_index); 1561 1563 netif_receive_skb(skb); 1562 1564 local_bh_enable(); 1563 1565 } ··· 2455 2451 if (!rcu_dereference(tun->steering_prog)) 2456 2452 rxhash = __skb_get_hash_symmetric(skb); 2457 2453 2454 + skb_record_rx_queue(skb, tfile->queue_index); 2458 2455 netif_receive_skb(skb); 2459 2456 2460 2457 stats = get_cpu_ptr(tun->pcpu_stats);
+5 -8
drivers/net/virtio_net.c
··· 70 70 VIRTIO_NET_F_GUEST_TSO4, 71 71 VIRTIO_NET_F_GUEST_TSO6, 72 72 VIRTIO_NET_F_GUEST_ECN, 73 - VIRTIO_NET_F_GUEST_UFO 73 + VIRTIO_NET_F_GUEST_UFO, 74 + VIRTIO_NET_F_GUEST_CSUM 74 75 }; 75 76 76 77 struct virtnet_stat_desc { ··· 2335 2334 if (!vi->guest_offloads) 2336 2335 return 0; 2337 2336 2338 - if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM)) 2339 - offloads = 1ULL << VIRTIO_NET_F_GUEST_CSUM; 2340 - 2341 2337 return virtnet_set_guest_offloads(vi, offloads); 2342 2338 } 2343 2339 ··· 2344 2346 2345 2347 if (!vi->guest_offloads) 2346 2348 return 0; 2347 - if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM)) 2348 - offloads |= 1ULL << VIRTIO_NET_F_GUEST_CSUM; 2349 2349 2350 2350 return virtnet_set_guest_offloads(vi, offloads); 2351 2351 } ··· 2361 2365 && (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_TSO4) || 2362 2366 virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_TSO6) || 2363 2367 virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_ECN) || 2364 - virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_UFO))) { 2365 - NL_SET_ERR_MSG_MOD(extack, "Can't set XDP while host is implementing LRO, disable LRO first"); 2368 + virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_UFO) || 2369 + virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM))) { 2370 + NL_SET_ERR_MSG_MOD(extack, "Can't set XDP while host is implementing LRO/CSUM, disable LRO/CSUM first"); 2366 2371 return -EOPNOTSUPP; 2367 2372 } 2368 2373
+1 -1
drivers/net/wireless/ath/ath10k/mac.c
··· 6867 6867 u32 bitmap; 6868 6868 6869 6869 if (drop) { 6870 - if (vif->type == NL80211_IFTYPE_STATION) { 6870 + if (vif && vif->type == NL80211_IFTYPE_STATION) { 6871 6871 bitmap = ~(1 << WMI_MGMT_TID); 6872 6872 list_for_each_entry(arvif, &ar->arvifs, list) { 6873 6873 if (arvif->vdev_type == WMI_VDEV_TYPE_STA)
+1 -2
drivers/net/wireless/ath/ath9k/main.c
··· 1251 1251 struct ath_vif *avp = (void *)vif->drv_priv; 1252 1252 struct ath_node *an = &avp->mcast_node; 1253 1253 1254 + mutex_lock(&sc->mutex); 1254 1255 if (IS_ENABLED(CONFIG_ATH9K_TX99)) { 1255 1256 if (sc->cur_chan->nvifs >= 1) { 1256 1257 mutex_unlock(&sc->mutex); ··· 1259 1258 } 1260 1259 sc->tx99_vif = vif; 1261 1260 } 1262 - 1263 - mutex_lock(&sc->mutex); 1264 1261 1265 1262 ath_dbg(common, CONFIG, "Attach a VIF of type: %d\n", vif->type); 1266 1263 sc->cur_chan->nvifs++;
+2 -1
drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
··· 6005 6005 * for subsequent chanspecs. 6006 6006 */ 6007 6007 channel->flags = IEEE80211_CHAN_NO_HT40 | 6008 - IEEE80211_CHAN_NO_80MHZ; 6008 + IEEE80211_CHAN_NO_80MHZ | 6009 + IEEE80211_CHAN_NO_160MHZ; 6009 6010 ch.bw = BRCMU_CHAN_BW_20; 6010 6011 cfg->d11inf.encchspec(&ch); 6011 6012 chaninfo = ch.chspec;
+3
drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
··· 193 193 } 194 194 break; 195 195 case BRCMU_CHSPEC_D11AC_BW_160: 196 + ch->bw = BRCMU_CHAN_BW_160; 197 + ch->sb = brcmu_maskget16(ch->chspec, BRCMU_CHSPEC_D11AC_SB_MASK, 198 + BRCMU_CHSPEC_D11AC_SB_SHIFT); 196 199 switch (ch->sb) { 197 200 case BRCMU_CHAN_SB_LLL: 198 201 ch->control_ch_num -= CH_70MHZ_APART;
+3 -1
drivers/net/wireless/intel/iwlwifi/fw/acpi.h
··· 6 6 * GPL LICENSE SUMMARY 7 7 * 8 8 * Copyright(c) 2017 Intel Deutschland GmbH 9 + * Copyright(c) 2018 Intel Corporation 9 10 * 10 11 * This program is free software; you can redistribute it and/or modify 11 12 * it under the terms of version 2 of the GNU General Public License as ··· 27 26 * BSD LICENSE 28 27 * 29 28 * Copyright(c) 2017 Intel Deutschland GmbH 29 + * Copyright(c) 2018 Intel Corporation 30 30 * All rights reserved. 31 31 * 32 32 * Redistribution and use in source and binary forms, with or without ··· 83 81 #define ACPI_WRDS_WIFI_DATA_SIZE (ACPI_SAR_TABLE_SIZE + 2) 84 82 #define ACPI_EWRD_WIFI_DATA_SIZE ((ACPI_SAR_PROFILE_NUM - 1) * \ 85 83 ACPI_SAR_TABLE_SIZE + 3) 86 - #define ACPI_WGDS_WIFI_DATA_SIZE 18 84 + #define ACPI_WGDS_WIFI_DATA_SIZE 19 87 85 #define ACPI_WRDD_WIFI_DATA_SIZE 2 88 86 #define ACPI_SPLC_WIFI_DATA_SIZE 2 89 87
+5 -1
drivers/net/wireless/intel/iwlwifi/fw/runtime.h
··· 154 154 const struct iwl_fw_runtime_ops *ops, void *ops_ctx, 155 155 struct dentry *dbgfs_dir); 156 156 157 - void iwl_fw_runtime_exit(struct iwl_fw_runtime *fwrt); 157 + static inline void iwl_fw_runtime_free(struct iwl_fw_runtime *fwrt) 158 + { 159 + kfree(fwrt->dump.d3_debug_data); 160 + fwrt->dump.d3_debug_data = NULL; 161 + } 158 162 159 163 void iwl_fw_runtime_suspend(struct iwl_fw_runtime *fwrt); 160 164
+29 -9
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
··· 893 893 IWL_DEBUG_RADIO(mvm, "Sending GEO_TX_POWER_LIMIT\n"); 894 894 895 895 BUILD_BUG_ON(ACPI_NUM_GEO_PROFILES * ACPI_WGDS_NUM_BANDS * 896 - ACPI_WGDS_TABLE_SIZE != ACPI_WGDS_WIFI_DATA_SIZE); 896 + ACPI_WGDS_TABLE_SIZE + 1 != ACPI_WGDS_WIFI_DATA_SIZE); 897 897 898 898 BUILD_BUG_ON(ACPI_NUM_GEO_PROFILES > IWL_NUM_GEO_PROFILES); 899 899 ··· 928 928 return -ENOENT; 929 929 } 930 930 931 + static int iwl_mvm_sar_get_wgds_table(struct iwl_mvm *mvm) 932 + { 933 + return -ENOENT; 934 + } 935 + 931 936 static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm) 932 937 { 933 938 return 0; ··· 959 954 IWL_DEBUG_RADIO(mvm, 960 955 "WRDS SAR BIOS table invalid or unavailable. (%d)\n", 961 956 ret); 962 - /* if not available, don't fail and don't bother with EWRD */ 963 - return 0; 957 + /* 958 + * If not available, don't fail and don't bother with EWRD. 959 + * Return 1 to tell that we can't use WGDS either. 960 + */ 961 + return 1; 964 962 } 965 963 966 964 ret = iwl_mvm_sar_get_ewrd_table(mvm); ··· 976 968 /* choose profile 1 (WRDS) as default for both chains */ 977 969 ret = iwl_mvm_sar_select_profile(mvm, 1, 1); 978 970 979 - /* if we don't have profile 0 from BIOS, just skip it */ 971 + /* 972 + * If we don't have profile 0 from BIOS, just skip it. This 973 + * means that SAR Geo will not be enabled either, even if we 974 + * have other valid profiles. 975 + */ 980 976 if (ret == -ENOENT) 981 - return 0; 977 + return 1; 982 978 983 979 return ret; 984 980 } ··· 1180 1168 iwl_mvm_unref(mvm, IWL_MVM_REF_UCODE_DOWN); 1181 1169 1182 1170 ret = iwl_mvm_sar_init(mvm); 1183 - if (ret) 1184 - goto error; 1171 + if (ret == 0) { 1172 + ret = iwl_mvm_sar_geo_init(mvm); 1173 + } else if (ret > 0 && !iwl_mvm_sar_get_wgds_table(mvm)) { 1174 + /* 1175 + * If basic SAR is not available, we check for WGDS, 1176 + * which should *not* be available either. If it is 1177 + * available, issue an error, because we can't use SAR 1178 + * Geo without basic SAR. 
1179 + */ 1180 + IWL_ERR(mvm, "BIOS contains WGDS but no WRDS\n"); 1181 + } 1185 1182 1186 - ret = iwl_mvm_sar_geo_init(mvm); 1187 - if (ret) 1183 + if (ret < 0) 1188 1184 goto error; 1189 1185 1190 1186 iwl_mvm_leds_sync(mvm);
+6 -6
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 301 301 goto out; 302 302 } 303 303 304 - if (changed) 305 - *changed = (resp->status == MCC_RESP_NEW_CHAN_PROFILE); 304 + if (changed) { 305 + u32 status = le32_to_cpu(resp->status); 306 + 307 + *changed = (status == MCC_RESP_NEW_CHAN_PROFILE || 308 + status == MCC_RESP_ILLEGAL); 309 + } 306 310 307 311 regd = iwl_parse_nvm_mcc_info(mvm->trans->dev, mvm->cfg, 308 312 __le32_to_cpu(resp->n_channels), ··· 4447 4443 sinfo->signal_avg = mvmsta->avg_energy; 4448 4444 sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG); 4449 4445 } 4450 - 4451 - if (!fw_has_capa(&mvm->fw->ucode_capa, 4452 - IWL_UCODE_TLV_CAPA_RADIO_BEACON_STATS)) 4453 - return; 4454 4446 4455 4447 /* if beacon filtering isn't on mac80211 does it anyway */ 4456 4448 if (!(vif->driver_flags & IEEE80211_VIF_BEACON_FILTER))
+2 -3
drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
··· 539 539 } 540 540 541 541 IWL_DEBUG_LAR(mvm, 542 - "MCC response status: 0x%x. new MCC: 0x%x ('%c%c') change: %d n_chans: %d\n", 543 - status, mcc, mcc >> 8, mcc & 0xff, 544 - !!(status == MCC_RESP_NEW_CHAN_PROFILE), n_channels); 542 + "MCC response status: 0x%x. new MCC: 0x%x ('%c%c') n_chans: %d\n", 543 + status, mcc, mcc >> 8, mcc & 0xff, n_channels); 545 544 546 545 exit: 547 546 iwl_free_resp(&cmd);
+2
drivers/net/wireless/intel/iwlwifi/mvm/ops.c
··· 858 858 iwl_mvm_thermal_exit(mvm); 859 859 out_free: 860 860 iwl_fw_flush_dump(&mvm->fwrt); 861 + iwl_fw_runtime_free(&mvm->fwrt); 861 862 862 863 if (iwlmvm_mod_params.init_dbg) 863 864 return op_mode; ··· 911 910 912 911 iwl_mvm_tof_clean(mvm); 913 912 913 + iwl_fw_runtime_free(&mvm->fwrt); 914 914 mutex_destroy(&mvm->mutex); 915 915 mutex_destroy(&mvm->d0i3_suspend_mutex); 916 916
+6
drivers/net/wireless/mediatek/mt76/Kconfig
··· 1 1 config MT76_CORE 2 2 tristate 3 3 4 + config MT76_LEDS 5 + bool 6 + depends on MT76_CORE 7 + depends on LEDS_CLASS=y || MT76_CORE=LEDS_CLASS 8 + default y 9 + 4 10 config MT76_USB 5 11 tristate 6 12 depends on MT76_CORE
+5 -3
drivers/net/wireless/mediatek/mt76/mac80211.c
··· 345 345 mt76_check_sband(dev, NL80211_BAND_2GHZ); 346 346 mt76_check_sband(dev, NL80211_BAND_5GHZ); 347 347 348 - ret = mt76_led_init(dev); 349 - if (ret) 350 - return ret; 348 + if (IS_ENABLED(CONFIG_MT76_LEDS)) { 349 + ret = mt76_led_init(dev); 350 + if (ret) 351 + return ret; 352 + } 351 353 352 354 return ieee80211_register_hw(hw); 353 355 }
-1
drivers/net/wireless/mediatek/mt76/mt76x02.h
··· 71 71 struct mac_address macaddr_list[8]; 72 72 73 73 struct mutex phy_mutex; 74 - struct mutex mutex; 75 74 76 75 u8 txdone_seq; 77 76 DECLARE_KFIFO_PTR(txstatus_fifo, struct mt76x02_tx_status);
+4 -2
drivers/net/wireless/mediatek/mt76/mt76x2/pci_init.c
··· 507 507 mt76x2_dfs_init_detector(dev); 508 508 509 509 /* init led callbacks */ 510 - dev->mt76.led_cdev.brightness_set = mt76x2_led_set_brightness; 511 - dev->mt76.led_cdev.blink_set = mt76x2_led_set_blink; 510 + if (IS_ENABLED(CONFIG_MT76_LEDS)) { 511 + dev->mt76.led_cdev.brightness_set = mt76x2_led_set_brightness; 512 + dev->mt76.led_cdev.blink_set = mt76x2_led_set_blink; 513 + } 512 514 513 515 ret = mt76_register_device(&dev->mt76, true, mt76x02_rates, 514 516 ARRAY_SIZE(mt76x02_rates));
+2 -2
drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c
··· 272 272 if (val != ~0 && val > 0xffff) 273 273 return -EINVAL; 274 274 275 - mutex_lock(&dev->mutex); 275 + mutex_lock(&dev->mt76.mutex); 276 276 mt76x2_mac_set_tx_protection(dev, val); 277 - mutex_unlock(&dev->mutex); 277 + mutex_unlock(&dev->mt76.mutex); 278 278 279 279 return 0; 280 280 }
+11 -6
drivers/net/wireless/ti/wlcore/sdio.c
··· 285 285 struct resource res[2]; 286 286 mmc_pm_flag_t mmcflags; 287 287 int ret = -ENOMEM; 288 - int irq, wakeirq; 288 + int irq, wakeirq, num_irqs; 289 289 const char *chip_family; 290 290 291 291 /* We are only able to handle the wlan function */ ··· 353 353 irqd_get_trigger_type(irq_get_irq_data(irq)); 354 354 res[0].name = "irq"; 355 355 356 - res[1].start = wakeirq; 357 - res[1].flags = IORESOURCE_IRQ | 358 - irqd_get_trigger_type(irq_get_irq_data(wakeirq)); 359 - res[1].name = "wakeirq"; 360 356 361 - ret = platform_device_add_resources(glue->core, res, ARRAY_SIZE(res)); 357 + if (wakeirq > 0) { 358 + res[1].start = wakeirq; 359 + res[1].flags = IORESOURCE_IRQ | 360 + irqd_get_trigger_type(irq_get_irq_data(wakeirq)); 361 + res[1].name = "wakeirq"; 362 + num_irqs = 2; 363 + } else { 364 + num_irqs = 1; 365 + } 366 + ret = platform_device_add_resources(glue->core, res, num_irqs); 362 367 if (ret) { 363 368 dev_err(glue->dev, "can't add resources\n"); 364 369 goto out_dev_put;
+64 -11
drivers/nvme/host/fc.c
··· 152 152 153 153 bool ioq_live; 154 154 bool assoc_active; 155 + atomic_t err_work_active; 155 156 u64 association_id; 156 157 157 158 struct list_head ctrl_list; /* rport->ctrl_list */ ··· 161 160 struct blk_mq_tag_set tag_set; 162 161 163 162 struct delayed_work connect_work; 163 + struct work_struct err_work; 164 164 165 165 struct kref ref; 166 166 u32 flags; ··· 1533 1531 struct nvme_fc_fcp_op *aen_op = ctrl->aen_ops; 1534 1532 int i; 1535 1533 1534 + /* ensure we've initialized the ops once */ 1535 + if (!(aen_op->flags & FCOP_FLAGS_AEN)) 1536 + return; 1537 + 1536 1538 for (i = 0; i < NVME_NR_AEN_COMMANDS; i++, aen_op++) 1537 1539 __nvme_fc_abort_op(ctrl, aen_op); 1538 1540 } ··· 2055 2049 static void 2056 2050 nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg) 2057 2051 { 2058 - /* only proceed if in LIVE state - e.g. on first error */ 2052 + int active; 2053 + 2054 + /* 2055 + * if an error (io timeout, etc) while (re)connecting, 2056 + * it's an error on creating the new association. 2057 + * Start the error recovery thread if it hasn't already 2058 + * been started. It is expected there could be multiple 2059 + * ios hitting this path before things are cleaned up. 2060 + */ 2061 + if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) { 2062 + active = atomic_xchg(&ctrl->err_work_active, 1); 2063 + if (!active && !schedule_work(&ctrl->err_work)) { 2064 + atomic_set(&ctrl->err_work_active, 0); 2065 + WARN_ON(1); 2066 + } 2067 + return; 2068 + } 2069 + 2070 + /* Otherwise, only proceed if in LIVE state - e.g. on first error */ 2059 2071 if (ctrl->ctrl.state != NVME_CTRL_LIVE) 2060 2072 return; 2061 2073 ··· 2838 2814 { 2839 2815 struct nvme_fc_ctrl *ctrl = to_fc_ctrl(nctrl); 2840 2816 2817 + cancel_work_sync(&ctrl->err_work); 2841 2818 cancel_delayed_work_sync(&ctrl->connect_work); 2842 2819 /* 2843 2820 * kill the association on the link side. 
this will block ··· 2891 2866 } 2892 2867 2893 2868 static void 2869 + __nvme_fc_terminate_io(struct nvme_fc_ctrl *ctrl) 2870 + { 2871 + nvme_stop_keep_alive(&ctrl->ctrl); 2872 + 2873 + /* will block while waiting for io to terminate */ 2874 + nvme_fc_delete_association(ctrl); 2875 + 2876 + if (ctrl->ctrl.state != NVME_CTRL_CONNECTING && 2877 + !nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) 2878 + dev_err(ctrl->ctrl.device, 2879 + "NVME-FC{%d}: error_recovery: Couldn't change state " 2880 + "to CONNECTING\n", ctrl->cnum); 2881 + } 2882 + 2883 + static void 2894 2884 nvme_fc_reset_ctrl_work(struct work_struct *work) 2895 2885 { 2896 2886 struct nvme_fc_ctrl *ctrl = 2897 2887 container_of(work, struct nvme_fc_ctrl, ctrl.reset_work); 2898 2888 int ret; 2899 2889 2890 + __nvme_fc_terminate_io(ctrl); 2891 + 2900 2892 nvme_stop_ctrl(&ctrl->ctrl); 2901 - 2902 - /* will block will waiting for io to terminate */ 2903 - nvme_fc_delete_association(ctrl); 2904 - 2905 - if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) { 2906 - dev_err(ctrl->ctrl.device, 2907 - "NVME-FC{%d}: error_recovery: Couldn't change state " 2908 - "to CONNECTING\n", ctrl->cnum); 2909 - return; 2910 - } 2911 2893 2912 2894 if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE) 2913 2895 ret = nvme_fc_create_association(ctrl); ··· 2927 2895 dev_info(ctrl->ctrl.device, 2928 2896 "NVME-FC{%d}: controller reset complete\n", 2929 2897 ctrl->cnum); 2898 + } 2899 + 2900 + static void 2901 + nvme_fc_connect_err_work(struct work_struct *work) 2902 + { 2903 + struct nvme_fc_ctrl *ctrl = 2904 + container_of(work, struct nvme_fc_ctrl, err_work); 2905 + 2906 + __nvme_fc_terminate_io(ctrl); 2907 + 2908 + atomic_set(&ctrl->err_work_active, 0); 2909 + 2910 + /* 2911 + * Rescheduling the connection after recovering 2912 + * from the io error is left to the reconnect work 2913 + * item, which is what should have stalled waiting on 2914 + * the io that had the error that scheduled this work. 
2915 + */ 2930 2916 } 2931 2917 2932 2918 static const struct nvme_ctrl_ops nvme_fc_ctrl_ops = { ··· 3057 3007 ctrl->cnum = idx; 3058 3008 ctrl->ioq_live = false; 3059 3009 ctrl->assoc_active = false; 3010 + atomic_set(&ctrl->err_work_active, 0); 3060 3011 init_waitqueue_head(&ctrl->ioabort_wait); 3061 3012 3062 3013 get_device(ctrl->dev); ··· 3065 3014 3066 3015 INIT_WORK(&ctrl->ctrl.reset_work, nvme_fc_reset_ctrl_work); 3067 3016 INIT_DELAYED_WORK(&ctrl->connect_work, nvme_fc_connect_ctrl_work); 3017 + INIT_WORK(&ctrl->err_work, nvme_fc_connect_err_work); 3068 3018 spin_lock_init(&ctrl->lock); 3069 3019 3070 3020 /* io queue count */ ··· 3155 3103 fail_ctrl: 3156 3104 nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_DELETING); 3157 3105 cancel_work_sync(&ctrl->ctrl.reset_work); 3106 + cancel_work_sync(&ctrl->err_work); 3158 3107 cancel_delayed_work_sync(&ctrl->connect_work); 3159 3108 3160 3109 ctrl->ctrl.opts = NULL;
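The new err_work_active flag in the fc.c hunk above is a claim-before-schedule guard: atomic_xchg() returns the previous value, so of several racing I/O errors only the first caller sees 0 and schedules the recovery work. A minimal user-space sketch of the same pattern, using C11 atomics as a stand-in for the kernel's atomic_t (all names here are illustrative, not kernel API):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel's atomic_t and work item. */
struct recovery_ctx {
	atomic_int err_work_active;	/* 0 = idle, 1 = recovery claimed */
	int        work_scheduled;	/* counts how often work was queued */
};

/* Returns true if this caller claimed and scheduled the recovery work,
 * false if another caller already holds the claim. Mirrors the
 * atomic_xchg() + schedule_work() shape in nvme_fc_error_recovery(). */
static bool try_start_recovery(struct recovery_ctx *ctx)
{
	/* atomic_exchange returns the previous value: only the first
	 * of several concurrent callers observes 0 and proceeds. */
	if (atomic_exchange(&ctx->err_work_active, 1) != 0)
		return false;
	ctx->work_scheduled++;		/* stand-in for schedule_work() */
	return true;
}

/* The work handler re-arms the guard once recovery is done. */
static void finish_recovery(struct recovery_ctx *ctx)
{
	atomic_store(&ctx->err_work_active, 0);
}
```

finish_recovery() plays the role of the atomic_set(&ctrl->err_work_active, 0) at the end of nvme_fc_connect_err_work(), allowing the next error to claim the work again.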
+6 -4
drivers/nvmem/core.c
··· 44 44 int bytes; 45 45 int bit_offset; 46 46 int nbits; 47 + struct device_node *np; 47 48 struct nvmem_device *nvmem; 48 49 struct list_head node; 49 50 }; ··· 299 298 mutex_lock(&nvmem_mutex); 300 299 list_del(&cell->node); 301 300 mutex_unlock(&nvmem_mutex); 301 + of_node_put(cell->np); 302 302 kfree(cell->name); 303 303 kfree(cell); 304 304 } ··· 532 530 return -ENOMEM; 533 531 534 532 cell->nvmem = nvmem; 533 + cell->np = of_node_get(child); 535 534 cell->offset = be32_to_cpup(addr++); 536 535 cell->bytes = be32_to_cpup(addr); 537 536 cell->name = kasprintf(GFP_KERNEL, "%pOFn", child); ··· 963 960 964 961 #if IS_ENABLED(CONFIG_OF) 965 962 static struct nvmem_cell * 966 - nvmem_find_cell_by_index(struct nvmem_device *nvmem, int index) 963 + nvmem_find_cell_by_node(struct nvmem_device *nvmem, struct device_node *np) 967 964 { 968 965 struct nvmem_cell *cell = NULL; 969 - int i = 0; 970 966 971 967 mutex_lock(&nvmem_mutex); 972 968 list_for_each_entry(cell, &nvmem->cells, node) { 973 - if (index == i++) 969 + if (np == cell->np) 974 970 break; 975 971 } 976 972 mutex_unlock(&nvmem_mutex); ··· 1013 1011 if (IS_ERR(nvmem)) 1014 1012 return ERR_CAST(nvmem); 1015 1013 1016 - cell = nvmem_find_cell_by_index(nvmem, index); 1014 + cell = nvmem_find_cell_by_node(nvmem, cell_np); 1017 1015 if (!cell) { 1018 1016 __nvmem_device_put(nvmem); 1019 1017 return ERR_PTR(-ENOENT);
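The nvmem change above replaces a fragile positional lookup (the nth cell in registration order) with a lookup keyed on the device-tree node each cell was parsed from, which stays correct no matter in what order cells were registered. A toy sketch of that pointer-keyed lookup (the struct and names are invented for illustration):

```c
#include <stddef.h>

/* Miniature of nvmem's cell list: each cell remembers the device-tree
 * node it was parsed from (np is an opaque stand-in for
 * struct device_node *), and lookups match on that pointer. */
struct cell {
	const void  *np;
	const char  *name;
	struct cell *next;
};

/* Pointer-keyed search in the spirit of nvmem_find_cell_by_node():
 * correctness no longer depends on registration order. */
static struct cell *find_cell_by_node(struct cell *head, const void *np)
{
	for (struct cell *c = head; c; c = c->next)
		if (c->np == np)
			return c;
	return NULL;
}
```

Note the matching of_node_get()/of_node_put() pair in the real patch: storing the node pointer in the cell requires holding a reference for the cell's lifetime.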
+4 -1
drivers/opp/ti-opp-supply.c
··· 288 288 int ret; 289 289 290 290 vdd_uv = _get_optimal_vdd_voltage(dev, &opp_data, 291 - new_supply_vbb->u_volt); 291 + new_supply_vdd->u_volt); 292 + 293 + if (new_supply_vdd->u_volt_min < vdd_uv) 294 + new_supply_vdd->u_volt_min = vdd_uv; 292 295 293 296 /* Scaling up? Scale voltage before frequency */ 294 297 if (freq > old_freq) {
-5
drivers/pci/pci-acpi.c
··· 793 793 { 794 794 struct pci_dev *pci_dev = to_pci_dev(dev); 795 795 struct acpi_device *adev = ACPI_COMPANION(dev); 796 - int node; 797 796 798 797 if (!adev) 799 798 return; 800 - 801 - node = acpi_get_node(adev->handle); 802 - if (node != NUMA_NO_NODE) 803 - set_dev_node(dev, node); 804 799 805 800 pci_acpi_optimize_delay(pci_dev, adev->handle); 806 801
+1 -1
drivers/pinctrl/meson/pinctrl-meson-gxbb.c
··· 830 830 831 831 static struct meson_bank meson_gxbb_aobus_banks[] = { 832 832 /* name first last irq pullen pull dir out in */ 833 - BANK("AO", GPIOAO_0, GPIOAO_13, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0), 833 + BANK("AO", GPIOAO_0, GPIOAO_13, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0), 834 834 }; 835 835 836 836 static struct meson_pinctrl_data meson_gxbb_periphs_pinctrl_data = {
+1 -1
drivers/pinctrl/meson/pinctrl-meson-gxl.c
··· 807 807 808 808 static struct meson_bank meson_gxl_aobus_banks[] = { 809 809 /* name first last irq pullen pull dir out in */ 810 - BANK("AO", GPIOAO_0, GPIOAO_9, 0, 9, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0), 810 + BANK("AO", GPIOAO_0, GPIOAO_9, 0, 9, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0), 811 811 }; 812 812 813 813 static struct meson_pinctrl_data meson_gxl_periphs_pinctrl_data = {
+1 -1
drivers/pinctrl/meson/pinctrl-meson.c
··· 192 192 dev_dbg(pc->dev, "pin %u: disable bias\n", pin); 193 193 194 194 meson_calc_reg_and_bit(bank, pin, REG_PULL, &reg, &bit); 195 - ret = regmap_update_bits(pc->reg_pull, reg, 195 + ret = regmap_update_bits(pc->reg_pullen, reg, 196 196 BIT(bit), 0); 197 197 if (ret) 198 198 return ret;
+1 -1
drivers/pinctrl/meson/pinctrl-meson8.c
··· 1053 1053 1054 1054 static struct meson_bank meson8_aobus_banks[] = { 1055 1055 /* name first last irq pullen pull dir out in */ 1056 - BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0), 1056 + BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0), 1057 1057 }; 1058 1058 1059 1059 static struct meson_pinctrl_data meson8_cbus_pinctrl_data = {
+1 -1
drivers/pinctrl/meson/pinctrl-meson8b.c
··· 906 906 907 907 static struct meson_bank meson8b_aobus_banks[] = { 908 908 /* name first lastc irq pullen pull dir out in */ 909 - BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0), 909 + BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0), 910 910 }; 911 911 912 912 static struct meson_pinctrl_data meson8b_cbus_pinctrl_data = {
+3 -1
drivers/rtc/hctosys.c
··· 50 50 tv64.tv_sec = rtc_tm_to_time64(&tm); 51 51 52 52 #if BITS_PER_LONG == 32 53 - if (tv64.tv_sec > INT_MAX) 53 + if (tv64.tv_sec > INT_MAX) { 54 + err = -ERANGE; 54 55 goto err_read; 56 + } 55 57 #endif 56 58 57 59 err = do_settimeofday64(&tv64);
+12 -4
drivers/rtc/rtc-cmos.c
··· 257 257 struct cmos_rtc *cmos = dev_get_drvdata(dev); 258 258 unsigned char rtc_control; 259 259 260 + /* This is not only an rtc_op, it is also called directly */ 260 261 if (!is_valid_irq(cmos->irq)) 261 262 return -EIO; 262 263 ··· 453 452 unsigned char mon, mday, hrs, min, sec, rtc_control; 454 453 int ret; 455 454 455 + /* This is not only an rtc_op, it is also called directly */ 456 456 if (!is_valid_irq(cmos->irq)) 457 457 return -EIO; 458 458 ··· 518 516 struct cmos_rtc *cmos = dev_get_drvdata(dev); 519 517 unsigned long flags; 520 518 521 - if (!is_valid_irq(cmos->irq)) 522 - return -EINVAL; 523 - 524 519 spin_lock_irqsave(&rtc_lock, flags); 525 520 526 521 if (enabled) ··· 576 577 .set_alarm = cmos_set_alarm, 577 578 .proc = cmos_procfs, 578 579 .alarm_irq_enable = cmos_alarm_irq_enable, 580 + }; 581 + 582 + static const struct rtc_class_ops cmos_rtc_ops_no_alarm = { 583 + .read_time = cmos_read_time, 584 + .set_time = cmos_set_time, 585 + .proc = cmos_procfs, 579 586 }; 580 587 581 588 /*----------------------------------------------------------------*/ ··· 860 855 dev_dbg(dev, "IRQ %d is already in use\n", rtc_irq); 861 856 goto cleanup1; 862 857 } 858 + 859 + cmos_rtc.rtc->ops = &cmos_rtc_ops; 860 + } else { 861 + cmos_rtc.rtc->ops = &cmos_rtc_ops_no_alarm; 863 862 } 864 863 865 - cmos_rtc.rtc->ops = &cmos_rtc_ops; 866 864 cmos_rtc.rtc->nvram_old_abi = true; 867 865 retval = rtc_register_device(cmos_rtc.rtc); 868 866 if (retval)
+3
drivers/rtc/rtc-pcf2127.c
··· 303 303 memcpy(buf + 1, val, val_size); 304 304 305 305 ret = i2c_master_send(client, buf, val_size + 1); 306 + 307 + kfree(buf); 308 + 306 309 if (ret != val_size + 1) 307 310 return ret < 0 ? ret : -EIO; 308 311
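The pcf2127 fix above plugs a leak by freeing the scratch buffer before inspecting i2c_master_send()'s return value, so every exit path releases it. A minimal sketch of that shape, with a fake transfer function standing in for i2c_master_send() (the "bus accepts at most N bytes" knob is an assumption for the sketch, not driver behaviour):

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for i2c_master_send(): returns the number of bytes the
 * "bus" accepted, capped at 'accept' to let us simulate short writes. */
static int fake_i2c_send(const char *buf, int len, int accept)
{
	(void)buf;
	return len <= accept ? len : accept;
}

/* Mirrors the fixed write path: build reg + payload in a heap buffer,
 * send it, and free the buffer before checking the result so that no
 * path leaks it. Returns 0 on success, a negative errno otherwise. */
static int write_reg(unsigned char reg, const char *val, int val_size,
		     int bus_accept)
{
	char *buf = malloc(val_size + 1);
	int ret;

	if (!buf)
		return -ENOMEM;
	buf[0] = reg;
	memcpy(buf + 1, val, val_size);

	ret = fake_i2c_send(buf, val_size + 1, bus_accept);

	free(buf);		/* released on every path, as in the fix */

	if (ret != val_size + 1)
		return ret < 0 ? ret : -EIO;
	return 0;
}
```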
+1 -1
drivers/s390/net/ism_drv.c
··· 415 415 break; 416 416 417 417 clear_bit_inv(bit, bv); 418 + ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0; 418 419 barrier(); 419 420 smcd_handle_irq(ism->smcd, bit + ISM_DMB_BIT_OFFSET); 420 - ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0; 421 421 } 422 422 423 423 if (ism->sba->e) {
+1
drivers/scsi/Kconfig
··· 578 578 config SCSI_MYRS 579 579 tristate "Mylex DAC960/DAC1100 PCI RAID Controller (SCSI Interface)" 580 580 depends on PCI 581 + depends on !CPU_BIG_ENDIAN || COMPILE_TEST 581 582 select RAID_ATTRS 582 583 help 583 584 This driver adds support for the Mylex DAC960, AcceleRAID, and
+1 -1
drivers/scsi/NCR5380.c
··· 1198 1198 1199 1199 out: 1200 1200 if (!hostdata->selecting) 1201 - return NULL; 1201 + return false; 1202 1202 hostdata->selecting = NULL; 1203 1203 return ret; 1204 1204 }
-2
drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
··· 904 904 { 905 905 struct hisi_hba *hisi_hba = dq->hisi_hba; 906 906 struct hisi_sas_slot *s, *s1, *s2 = NULL; 907 - struct list_head *dq_list; 908 907 int dlvry_queue = dq->id; 909 908 int wp; 910 909 911 - dq_list = &dq->list; 912 910 list_for_each_entry_safe(s, s1, &dq->list, delivery) { 913 911 if (!s->ready) 914 912 break;
-2
drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
··· 1670 1670 { 1671 1671 struct hisi_hba *hisi_hba = dq->hisi_hba; 1672 1672 struct hisi_sas_slot *s, *s1, *s2 = NULL; 1673 - struct list_head *dq_list; 1674 1673 int dlvry_queue = dq->id; 1675 1674 int wp; 1676 1675 1677 - dq_list = &dq->list; 1678 1676 list_for_each_entry_safe(s, s1, &dq->list, delivery) { 1679 1677 if (!s->ready) 1680 1678 break;
-2
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
··· 886 886 { 887 887 struct hisi_hba *hisi_hba = dq->hisi_hba; 888 888 struct hisi_sas_slot *s, *s1, *s2 = NULL; 889 - struct list_head *dq_list; 890 889 int dlvry_queue = dq->id; 891 890 int wp; 892 891 893 - dq_list = &dq->list; 894 892 list_for_each_entry_safe(s, s1, &dq->list, delivery) { 895 893 if (!s->ready) 896 894 break;
+2
drivers/scsi/lpfc/lpfc_debugfs.c
··· 698 698 rport = lpfc_ndlp_get_nrport(ndlp); 699 699 if (rport) 700 700 nrport = rport->remoteport; 701 + else 702 + nrport = NULL; 701 703 spin_unlock(&phba->hbalock); 702 704 if (!nrport) 703 705 continue;
+2 -1
drivers/scsi/myrb.c
··· 1049 1049 enquiry2->fw.firmware_type = '0'; 1050 1050 enquiry2->fw.turn_id = 0; 1051 1051 } 1052 - sprintf(cb->fw_version, "%d.%02d-%c-%02d", 1052 + snprintf(cb->fw_version, sizeof(cb->fw_version), 1053 + "%d.%02d-%c-%02d", 1053 1054 enquiry2->fw.major_version, 1054 1055 enquiry2->fw.minor_version, 1055 1056 enquiry2->fw.firmware_type,
+8 -5
drivers/scsi/myrs.c
··· 163 163 dma_addr_t ctlr_info_addr; 164 164 union myrs_sgl *sgl; 165 165 unsigned char status; 166 - struct myrs_ctlr_info old; 166 + unsigned short ldev_present, ldev_critical, ldev_offline; 167 167 168 - memcpy(&old, cs->ctlr_info, sizeof(struct myrs_ctlr_info)); 168 + ldev_present = cs->ctlr_info->ldev_present; 169 + ldev_critical = cs->ctlr_info->ldev_critical; 170 + ldev_offline = cs->ctlr_info->ldev_offline; 171 + 169 172 ctlr_info_addr = dma_map_single(&cs->pdev->dev, cs->ctlr_info, 170 173 sizeof(struct myrs_ctlr_info), 171 174 DMA_FROM_DEVICE); ··· 201 198 cs->ctlr_info->rbld_active + 202 199 cs->ctlr_info->exp_active != 0) 203 200 cs->needs_update = true; 204 - if (cs->ctlr_info->ldev_present != old.ldev_present || 205 - cs->ctlr_info->ldev_critical != old.ldev_critical || 206 - cs->ctlr_info->ldev_offline != old.ldev_offline) 201 + if (cs->ctlr_info->ldev_present != ldev_present || 202 + cs->ctlr_info->ldev_critical != ldev_critical || 203 + cs->ctlr_info->ldev_offline != ldev_offline) 207 204 shost_printk(KERN_INFO, cs->host, 208 205 "Logical drive count changes (%d/%d/%d)\n", 209 206 cs->ctlr_info->ldev_critical,
+1
drivers/scsi/qla2xxx/qla_init.c
··· 4763 4763 fcport->loop_id = FC_NO_LOOP_ID; 4764 4764 qla2x00_set_fcport_state(fcport, FCS_UNCONFIGURED); 4765 4765 fcport->supported_classes = FC_COS_UNSPECIFIED; 4766 + fcport->fp_speed = PORT_SPEED_UNKNOWN; 4766 4767 4767 4768 fcport->ct_desc.ct_sns = dma_alloc_coherent(&vha->hw->pdev->dev, 4768 4769 sizeof(struct ct_sns_pkt), &fcport->ct_desc.ct_sns_dma,
+9 -3
drivers/scsi/qla2xxx/qla_os.c
··· 67 67 MODULE_PARM_DESC(ql2xplogiabsentdevice, 68 68 "Option to enable PLOGI to devices that are not present after " 69 69 "a Fabric scan. This is needed for several broken switches. " 70 - "Default is 0 - no PLOGI. 1 - perfom PLOGI."); 70 + "Default is 0 - no PLOGI. 1 - perform PLOGI."); 71 71 72 72 int ql2xloginretrycount = 0; 73 73 module_param(ql2xloginretrycount, int, S_IRUGO); ··· 1749 1749 static void 1750 1750 __qla2x00_abort_all_cmds(struct qla_qpair *qp, int res) 1751 1751 { 1752 - int cnt; 1752 + int cnt, status; 1753 1753 unsigned long flags; 1754 1754 srb_t *sp; 1755 1755 scsi_qla_host_t *vha = qp->vha; ··· 1799 1799 if (!sp_get(sp)) { 1800 1800 spin_unlock_irqrestore 1801 1801 (qp->qp_lock_ptr, flags); 1802 - qla2xxx_eh_abort( 1802 + status = qla2xxx_eh_abort( 1803 1803 GET_CMD_SP(sp)); 1804 1804 spin_lock_irqsave 1805 1805 (qp->qp_lock_ptr, flags); 1806 + /* 1807 + * Get rid of extra reference caused 1808 + * by early exit from qla2xxx_eh_abort 1809 + */ 1810 + if (status == FAST_IO_FAIL) 1811 + atomic_dec(&sp->ref_count); 1806 1812 } 1807 1813 } 1808 1814 sp->done(sp, res);
+8
drivers/scsi/scsi_lib.c
··· 697 697 */ 698 698 scsi_mq_uninit_cmd(cmd); 699 699 700 + /* 701 + * queue is still alive, so grab the ref for preventing it 702 + * from being cleaned up during running queue. 703 + */ 704 + percpu_ref_get(&q->q_usage_counter); 705 + 700 706 __blk_mq_end_request(req, error); 701 707 702 708 if (scsi_target(sdev)->single_lun || ··· 710 704 kblockd_schedule_work(&sdev->requeue_work); 711 705 else 712 706 blk_mq_run_hw_queues(q, true); 707 + 708 + percpu_ref_put(&q->q_usage_counter); 713 709 } else { 714 710 unsigned long flags; 715 711
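The scsi_lib hunk above pins the request queue with percpu_ref_get() across __blk_mq_end_request() and the follow-up queue run, so concurrent queue cleanup cannot release it mid-use. A toy refcount model of that guard (user-space sketch; the kernel's percpu_ref is per-CPU and far more scalable than a single shared counter):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Toy model of the queue lifetime guard: refs > 0 keeps the object
 * alive; the last put triggers cleanup. */
struct queue {
	atomic_int refs;	/* held references */
	bool       dead;	/* set once cleanup actually ran */
};

static void queue_get(struct queue *q)
{
	atomic_fetch_add(&q->refs, 1);
}

static void queue_put(struct queue *q)
{
	/* fetch_sub returns the old value: 1 means we dropped the last
	 * reference and must clean up (free the queue in the kernel). */
	if (atomic_fetch_sub(&q->refs, 1) == 1)
		q->dead = true;
}

/* End-of-request path: pin the queue, do the follow-up work, unpin,
 * mirroring percpu_ref_get()/percpu_ref_put() around the queue run. */
static void end_request(struct queue *q)
{
	queue_get(q);
	/* ... blk_mq_run_hw_queues(q, true) would go here ... */
	queue_put(q);
}
```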
+9
drivers/scsi/ufs/ufs-hisi.c
··· 20 20 #include "unipro.h" 21 21 #include "ufs-hisi.h" 22 22 #include "ufshci.h" 23 + #include "ufs_quirks.h" 23 24 24 25 static int ufs_hisi_check_hibern8(struct ufs_hba *hba) 25 26 { ··· 391 390 392 391 static void ufs_hisi_pwr_change_pre_change(struct ufs_hba *hba) 393 392 { 393 + if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME) { 394 + pr_info("ufs flash device must set VS_DebugSaveConfigTime 0x10\n"); 395 + /* VS_DebugSaveConfigTime */ 396 + ufshcd_dme_set(hba, UIC_ARG_MIB(0xD0A0), 0x10); 397 + /* sync length */ 398 + ufshcd_dme_set(hba, UIC_ARG_MIB(0x1556), 0x48); 399 + } 400 + 394 401 /* update */ 395 402 ufshcd_dme_set(hba, UIC_ARG_MIB(0x15A8), 0x1); 396 403 /* PA_TxSkip */
+6
drivers/scsi/ufs/ufs_quirks.h
··· 131 131 */ 132 132 #define UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME (1 << 8) 133 133 134 + /* 135 + * Some UFS devices require VS_DebugSaveConfigTime to be 0x10, 136 + * and enabling this quirk ensures this. 137 + */ 138 + #define UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME (1 << 9) 139 + 134 140 #endif /* UFS_QUIRKS_H_ */
+2 -7
drivers/scsi/ufs/ufshcd.c
··· 231 231 UFS_FIX(UFS_VENDOR_SKHYNIX, UFS_ANY_MODEL, UFS_DEVICE_NO_VCCQ), 232 232 UFS_FIX(UFS_VENDOR_SKHYNIX, UFS_ANY_MODEL, 233 233 UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME), 234 + UFS_FIX(UFS_VENDOR_SKHYNIX, "hB8aL1" /*H28U62301AMR*/, 235 + UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME), 234 236 235 237 END_FIX 236 238 }; ··· 8101 8099 err = -ENOMEM; 8102 8100 goto out_error; 8103 8101 } 8104 - 8105 - /* 8106 - * Do not use blk-mq at this time because blk-mq does not support 8107 - * runtime pm. 8108 - */ 8109 - host->use_blk_mq = false; 8110 - 8111 8102 hba = shost_priv(host); 8112 8103 hba->host = host; 8113 8104 hba->dev = dev;
-3
drivers/slimbus/qcom-ngd-ctrl.c
··· 777 777 u8 la = txn->la; 778 778 bool usr_msg = false; 779 779 780 - if (txn->mc & SLIM_MSG_CLK_PAUSE_SEQ_FLG) 781 - return -EPROTONOSUPPORT; 782 - 783 780 if (txn->mt == SLIM_MSG_MT_CORE && 784 781 (txn->mc >= SLIM_MSG_MC_BEGIN_RECONFIGURATION && 785 782 txn->mc <= SLIM_MSG_MC_RECONFIGURE_NOW))
-6
drivers/slimbus/slimbus.h
··· 61 61 #define SLIM_MSG_MC_NEXT_REMOVE_CHANNEL 0x58 62 62 #define SLIM_MSG_MC_RECONFIGURE_NOW 0x5F 63 63 64 - /* 65 - * Clock pause flag to indicate that the reconfig message 66 - * corresponds to clock pause sequence 67 - */ 68 - #define SLIM_MSG_CLK_PAUSE_SEQ_FLG (1U << 8) 69 - 70 64 /* Clock pause values per SLIMbus spec */ 71 65 #define SLIM_CLK_FAST 0 72 66 #define SLIM_CLK_CONST_PHASE 1
+1
drivers/staging/media/davinci_vpfe/dm365_ipipeif.c
··· 310 310 ipipeif_write(val, ipipeif_base_addr, IPIPEIF_CFG2); 311 311 break; 312 312 } 313 + /* fall through */ 313 314 314 315 case IPIPEIF_SDRAM_YUV: 315 316 /* Set clock divider */
+1 -1
drivers/staging/media/sunxi/cedrus/cedrus.c
··· 253 253 254 254 static const struct media_device_ops cedrus_m2m_media_ops = { 255 255 .req_validate = cedrus_request_validate, 256 - .req_queue = vb2_m2m_request_queue, 256 + .req_queue = v4l2_m2m_request_queue, 257 257 }; 258 258 259 259 static int cedrus_probe(struct platform_device *pdev)
+2 -2
drivers/target/target_core_transport.c
··· 1778 1778 void transport_generic_request_failure(struct se_cmd *cmd, 1779 1779 sense_reason_t sense_reason) 1780 1780 { 1781 - int ret = 0; 1781 + int ret = 0, post_ret; 1782 1782 1783 1783 pr_debug("-----[ Storage Engine Exception; sense_reason %d\n", 1784 1784 sense_reason); ··· 1790 1790 transport_complete_task_attr(cmd); 1791 1791 1792 1792 if (cmd->transport_complete_callback) 1793 - cmd->transport_complete_callback(cmd, false, NULL); 1793 + cmd->transport_complete_callback(cmd, false, &post_ret); 1794 1794 1795 1795 if (transport_check_aborted_status(cmd, 1)) 1796 1796 return;
+5 -2
drivers/uio/uio.c
··· 961 961 if (ret) 962 962 goto err_uio_dev_add_attributes; 963 963 964 + info->uio_dev = idev; 965 + 964 966 if (info->irq && (info->irq != UIO_IRQ_CUSTOM)) { 965 967 /* 966 968 * Note that we deliberately don't use devm_request_irq ··· 974 972 */ 975 973 ret = request_irq(info->irq, uio_interrupt, 976 974 info->irq_flags, info->name, idev); 977 - if (ret) 975 + if (ret) { 976 + info->uio_dev = NULL; 978 977 goto err_request_irq; 978 + } 979 979 } 980 980 981 - info->uio_dev = idev; 982 981 return 0; 983 982 984 983 err_request_irq:
+3
drivers/usb/class/cdc-acm.c
··· 1696 1696 { USB_DEVICE(0x0572, 0x1328), /* Shiro / Aztech USB MODEM UM-3100 */ 1697 1697 .driver_info = NO_UNION_NORMAL, /* has no union descriptor */ 1698 1698 }, 1699 + { USB_DEVICE(0x0572, 0x1349), /* Hiro (Conexant) USB MODEM H50228 */ 1700 + .driver_info = NO_UNION_NORMAL, /* has no union descriptor */ 1701 + }, 1699 1702 { USB_DEVICE(0x20df, 0x0001), /* Simtec Electronics Entropy Key */ 1700 1703 .driver_info = QUIRK_CONTROL_LINE_STATE, }, 1701 1704 { USB_DEVICE(0x2184, 0x001c) }, /* GW Instek AFG-2225 */
+14 -4
drivers/usb/core/hub.c
··· 2794 2794 int i, status; 2795 2795 u16 portchange, portstatus; 2796 2796 struct usb_port *port_dev = hub->ports[port1 - 1]; 2797 + int reset_recovery_time; 2797 2798 2798 2799 if (!hub_is_superspeed(hub->hdev)) { 2799 2800 if (warm) { ··· 2850 2849 USB_PORT_FEAT_C_BH_PORT_RESET); 2851 2850 usb_clear_port_feature(hub->hdev, port1, 2852 2851 USB_PORT_FEAT_C_PORT_LINK_STATE); 2853 - usb_clear_port_feature(hub->hdev, port1, 2852 + 2853 + if (udev) 2854 + usb_clear_port_feature(hub->hdev, port1, 2854 2855 USB_PORT_FEAT_C_CONNECTION); 2855 2856 2856 2857 /* ··· 2888 2885 2889 2886 done: 2890 2887 if (status == 0) { 2891 - /* TRSTRCY = 10 ms; plus some extra */ 2892 2888 if (port_dev->quirks & USB_PORT_QUIRK_FAST_ENUM) 2893 2889 usleep_range(10000, 12000); 2894 - else 2895 - msleep(10 + 40); 2890 + else { 2891 + /* TRSTRCY = 10 ms; plus some extra */ 2892 + reset_recovery_time = 10 + 40; 2893 + 2894 + /* Hub needs extra delay after resetting its port. */ 2895 + if (hub->hdev->quirks & USB_QUIRK_HUB_SLOW_RESET) 2896 + reset_recovery_time += 100; 2897 + 2898 + msleep(reset_recovery_time); 2899 + } 2896 2900 2897 2901 if (udev) { 2898 2902 struct usb_hcd *hcd = bus_to_hcd(udev->bus);
+14
drivers/usb/core/quirks.c
··· 128 128 case 'n': 129 129 flags |= USB_QUIRK_DELAY_CTRL_MSG; 130 130 break; 131 + case 'o': 132 + flags |= USB_QUIRK_HUB_SLOW_RESET; 133 + break; 131 134 /* Ignore unrecognized flag characters */ 132 135 } 133 136 } ··· 383 380 { USB_DEVICE(0x1a0a, 0x0200), .driver_info = 384 381 USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL }, 385 382 383 + /* Terminus Technology Inc. Hub */ 384 + { USB_DEVICE(0x1a40, 0x0101), .driver_info = USB_QUIRK_HUB_SLOW_RESET }, 385 + 386 386 /* Corsair K70 RGB */ 387 387 { USB_DEVICE(0x1b1c, 0x1b13), .driver_info = USB_QUIRK_DELAY_INIT }, 388 388 ··· 396 390 /* Corsair Strafe RGB */ 397 391 { USB_DEVICE(0x1b1c, 0x1b20), .driver_info = USB_QUIRK_DELAY_INIT | 398 392 USB_QUIRK_DELAY_CTRL_MSG }, 393 + 394 + /* Corsair K70 LUX RGB */ 395 + { USB_DEVICE(0x1b1c, 0x1b33), .driver_info = USB_QUIRK_DELAY_INIT }, 399 396 400 397 /* Corsair K70 LUX */ 401 398 { USB_DEVICE(0x1b1c, 0x1b36), .driver_info = USB_QUIRK_DELAY_INIT }, ··· 419 410 /* Hauppauge HVR-950q */ 420 411 { USB_DEVICE(0x2040, 0x7200), .driver_info = 421 412 USB_QUIRK_CONFIG_INTF_STRINGS }, 413 + 414 + /* Raydium Touchscreen */ 415 + { USB_DEVICE(0x2386, 0x3114), .driver_info = USB_QUIRK_NO_LPM }, 416 + 417 + { USB_DEVICE(0x2386, 0x3119), .driver_info = USB_QUIRK_NO_LPM }, 422 418 423 419 /* DJI CineSSD */ 424 420 { USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
+1
drivers/usb/dwc2/pci.c
··· 120 120 dwc2 = platform_device_alloc("dwc2", PLATFORM_DEVID_AUTO); 121 121 if (!dwc2) { 122 122 dev_err(dev, "couldn't allocate dwc2 device\n"); 123 + ret = -ENOMEM; 123 124 goto err; 124 125 } 125 126
+1
drivers/usb/dwc3/core.c
··· 1499 1499 1500 1500 err5: 1501 1501 dwc3_event_buffers_cleanup(dwc); 1502 + dwc3_ulpi_exit(dwc); 1502 1503 1503 1504 err4: 1504 1505 dwc3_free_scratch_buffers(dwc);
+3 -1
drivers/usb/dwc3/dwc3-pci.c
··· 283 283 static void dwc3_pci_remove(struct pci_dev *pci) 284 284 { 285 285 struct dwc3_pci *dwc = pci_get_drvdata(pci); 286 + struct pci_dev *pdev = dwc->pci; 286 287 287 - gpiod_remove_lookup_table(&platform_bytcr_gpios); 288 + if (pdev->device == PCI_DEVICE_ID_INTEL_BYT) 289 + gpiod_remove_lookup_table(&platform_bytcr_gpios); 288 290 #ifdef CONFIG_PM 289 291 cancel_work_sync(&dwc->wakeup_work); 290 292 #endif
+4 -4
drivers/usb/dwc3/gadget.c
··· 1081 1081 /* Now prepare one extra TRB to align transfer size */ 1082 1082 trb = &dep->trb_pool[dep->trb_enqueue]; 1083 1083 __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, 1084 - maxp - rem, false, 0, 1084 + maxp - rem, false, 1, 1085 1085 req->request.stream_id, 1086 1086 req->request.short_not_ok, 1087 1087 req->request.no_interrupt); ··· 1125 1125 /* Now prepare one extra TRB to align transfer size */ 1126 1126 trb = &dep->trb_pool[dep->trb_enqueue]; 1127 1127 __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, maxp - rem, 1128 - false, 0, req->request.stream_id, 1128 + false, 1, req->request.stream_id, 1129 1129 req->request.short_not_ok, 1130 1130 req->request.no_interrupt); 1131 1131 } else if (req->request.zero && req->request.length && ··· 1141 1141 /* Now prepare one extra TRB to handle ZLP */ 1142 1142 trb = &dep->trb_pool[dep->trb_enqueue]; 1143 1143 __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, 0, 1144 - false, 0, req->request.stream_id, 1144 + false, 1, req->request.stream_id, 1145 1145 req->request.short_not_ok, 1146 1146 req->request.no_interrupt); 1147 1147 } else { ··· 2259 2259 * with one TRB pending in the ring. We need to manually clear HWO bit 2260 2260 * from that TRB. 2261 2261 */ 2262 - if ((req->zero || req->unaligned) && (trb->ctrl & DWC3_TRB_CTRL_HWO)) { 2262 + if ((req->zero || req->unaligned) && !(trb->ctrl & DWC3_TRB_CTRL_CHN)) { 2263 2263 trb->ctrl &= ~DWC3_TRB_CTRL_HWO; 2264 2264 return 1; 2265 2265 }
+8 -18
drivers/usb/gadget/function/f_fs.c
··· 215 215 216 216 struct mm_struct *mm; 217 217 struct work_struct work; 218 - struct work_struct cancellation_work; 219 218 220 219 struct usb_ep *ep; 221 220 struct usb_request *req; ··· 1072 1073 return 0; 1073 1074 } 1074 1075 1075 - static void ffs_aio_cancel_worker(struct work_struct *work) 1076 - { 1077 - struct ffs_io_data *io_data = container_of(work, struct ffs_io_data, 1078 - cancellation_work); 1079 - 1080 - ENTER(); 1081 - 1082 - usb_ep_dequeue(io_data->ep, io_data->req); 1083 - } 1084 - 1085 1076 static int ffs_aio_cancel(struct kiocb *kiocb) 1086 1077 { 1087 1078 struct ffs_io_data *io_data = kiocb->private; 1088 - struct ffs_data *ffs = io_data->ffs; 1079 + struct ffs_epfile *epfile = kiocb->ki_filp->private_data; 1089 1080 int value; 1090 1081 1091 1082 ENTER(); 1092 1083 1093 - if (likely(io_data && io_data->ep && io_data->req)) { 1094 - INIT_WORK(&io_data->cancellation_work, ffs_aio_cancel_worker); 1095 - queue_work(ffs->io_completion_wq, &io_data->cancellation_work); 1096 - value = -EINPROGRESS; 1097 - } else { 1084 + spin_lock_irq(&epfile->ffs->eps_lock); 1085 + 1086 + if (likely(io_data && io_data->ep && io_data->req)) 1087 + value = usb_ep_dequeue(io_data->ep, io_data->req); 1088 + else 1098 1089 value = -EINVAL; 1099 - } 1090 + 1091 + spin_unlock_irq(&epfile->ffs->eps_lock); 1100 1092 1101 1093 return value; 1102 1094 }
+4 -2
drivers/usb/host/xhci-histb.c
··· 325 325 struct xhci_hcd_histb *histb = platform_get_drvdata(dev); 326 326 struct usb_hcd *hcd = histb->hcd; 327 327 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 328 + struct usb_hcd *shared_hcd = xhci->shared_hcd; 328 329 329 330 xhci->xhc_state |= XHCI_STATE_REMOVING; 330 331 331 - usb_remove_hcd(xhci->shared_hcd); 332 + usb_remove_hcd(shared_hcd); 333 + xhci->shared_hcd = NULL; 332 334 device_wakeup_disable(&dev->dev); 333 335 334 336 usb_remove_hcd(hcd); 335 - usb_put_hcd(xhci->shared_hcd); 337 + usb_put_hcd(shared_hcd); 336 338 337 339 xhci_histb_host_disable(histb); 338 340 usb_put_hcd(hcd);
+49 -17
drivers/usb/host/xhci-hub.c
··· 876 876 status |= USB_PORT_STAT_SUSPEND; 877 877 } 878 878 if ((raw_port_status & PORT_PLS_MASK) == XDEV_RESUME && 879 - !DEV_SUPERSPEED_ANY(raw_port_status)) { 879 + !DEV_SUPERSPEED_ANY(raw_port_status) && hcd->speed < HCD_USB3) { 880 880 if ((raw_port_status & PORT_RESET) || 881 881 !(raw_port_status & PORT_PE)) 882 882 return 0xffffffff; ··· 921 921 time_left = wait_for_completion_timeout( 922 922 &bus_state->rexit_done[wIndex], 923 923 msecs_to_jiffies( 924 - XHCI_MAX_REXIT_TIMEOUT)); 924 + XHCI_MAX_REXIT_TIMEOUT_MS)); 925 925 spin_lock_irqsave(&xhci->lock, flags); 926 926 927 927 if (time_left) { ··· 935 935 } else { 936 936 int port_status = readl(port->addr); 937 937 xhci_warn(xhci, "Port resume took longer than %i msec, port status = 0x%x\n", 938 - XHCI_MAX_REXIT_TIMEOUT, 938 + XHCI_MAX_REXIT_TIMEOUT_MS, 939 939 port_status); 940 940 status |= USB_PORT_STAT_SUSPEND; 941 941 clear_bit(wIndex, &bus_state->rexit_ports); ··· 1474 1474 unsigned long flags; 1475 1475 struct xhci_hub *rhub; 1476 1476 struct xhci_port **ports; 1477 + u32 portsc_buf[USB_MAXCHILDREN]; 1478 + bool wake_enabled; 1477 1479 1478 1480 rhub = xhci_get_rhub(hcd); 1479 1481 ports = rhub->ports; 1480 1482 max_ports = rhub->num_ports; 1481 1483 bus_state = &xhci->bus_state[hcd_index(hcd)]; 1484 + wake_enabled = hcd->self.root_hub->do_remote_wakeup; 1482 1485 1483 1486 spin_lock_irqsave(&xhci->lock, flags); 1484 1487 1485 - if (hcd->self.root_hub->do_remote_wakeup) { 1488 + if (wake_enabled) { 1486 1489 if (bus_state->resuming_ports || /* USB2 */ 1487 1490 bus_state->port_remote_wakeup) { /* USB3 */ 1488 1491 spin_unlock_irqrestore(&xhci->lock, flags); ··· 1493 1490 return -EBUSY; 1494 1491 } 1495 1492 } 1496 - 1497 - port_index = max_ports; 1493 + /* 1494 + * Prepare ports for suspend, but don't write anything before all ports 1495 + * are checked and we know bus suspend can proceed 1496 + */ 1498 1497 bus_state->bus_suspended = 0; 1498 + port_index = max_ports; 1499 1499 while 
(port_index--) { 1500 - /* suspend the port if the port is not suspended */ 1501 1500 u32 t1, t2; 1502 - int slot_id; 1503 1501 1504 1502 t1 = readl(ports[port_index]->addr); 1505 1503 t2 = xhci_port_state_to_neutral(t1); 1504 + portsc_buf[port_index] = 0; 1506 1505 1507 - if ((t1 & PORT_PE) && !(t1 & PORT_PLS_MASK)) { 1508 - xhci_dbg(xhci, "port %d not suspended\n", port_index); 1509 - slot_id = xhci_find_slot_id_by_port(hcd, xhci, 1510 - port_index + 1); 1511 - if (slot_id) { 1506 + /* Bail out if a USB3 port has a new device in link training */ 1507 + if ((t1 & PORT_PLS_MASK) == XDEV_POLLING) { 1508 + bus_state->bus_suspended = 0; 1509 + spin_unlock_irqrestore(&xhci->lock, flags); 1510 + xhci_dbg(xhci, "Bus suspend bailout, port in polling\n"); 1511 + return -EBUSY; 1512 + } 1513 + 1514 + /* suspend ports in U0, or bail out for new connect changes */ 1515 + if ((t1 & PORT_PE) && (t1 & PORT_PLS_MASK) == XDEV_U0) { 1516 + if ((t1 & PORT_CSC) && wake_enabled) { 1517 + bus_state->bus_suspended = 0; 1512 1518 spin_unlock_irqrestore(&xhci->lock, flags); 1513 - xhci_stop_device(xhci, slot_id, 1); 1514 - spin_lock_irqsave(&xhci->lock, flags); 1519 + xhci_dbg(xhci, "Bus suspend bailout, port connect change\n"); 1520 + return -EBUSY; 1515 1521 } 1522 + xhci_dbg(xhci, "port %d not suspended\n", port_index); 1516 1523 t2 &= ~PORT_PLS_MASK; 1517 1524 t2 |= PORT_LINK_STROBE | XDEV_U3; 1518 1525 set_bit(port_index, &bus_state->bus_suspended); ··· 1531 1518 * including the USB 3.0 roothub, but only if CONFIG_PM 1532 1519 * is enabled, so also enable remote wake here. 
1533 1520 */ 1534 - if (hcd->self.root_hub->do_remote_wakeup) { 1521 + if (wake_enabled) { 1535 1522 if (t1 & PORT_CONNECT) { 1536 1523 t2 |= PORT_WKOC_E | PORT_WKDISC_E; 1537 1524 t2 &= ~PORT_WKCONN_E; ··· 1551 1538 1552 1539 t1 = xhci_port_state_to_neutral(t1); 1553 1540 if (t1 != t2) 1554 - writel(t2, ports[port_index]->addr); 1541 + portsc_buf[port_index] = t2; 1542 + } 1543 + 1544 + /* write port settings, stopping and suspending ports if needed */ 1545 + port_index = max_ports; 1546 + while (port_index--) { 1547 + if (!portsc_buf[port_index]) 1548 + continue; 1549 + if (test_bit(port_index, &bus_state->bus_suspended)) { 1550 + int slot_id; 1551 + 1552 + slot_id = xhci_find_slot_id_by_port(hcd, xhci, 1553 + port_index + 1); 1554 + if (slot_id) { 1555 + spin_unlock_irqrestore(&xhci->lock, flags); 1556 + xhci_stop_device(xhci, slot_id, 1); 1557 + spin_lock_irqsave(&xhci->lock, flags); 1558 + } 1559 + } 1560 + writel(portsc_buf[port_index], ports[port_index]->addr); 1555 1561 } 1556 1562 hcd->state = HC_STATE_SUSPENDED; 1557 1563 bus_state->next_statechange = jiffies + msecs_to_jiffies(10);
+4 -2
drivers/usb/host/xhci-mtk.c
··· 590 590 struct xhci_hcd_mtk *mtk = platform_get_drvdata(dev); 591 591 struct usb_hcd *hcd = mtk->hcd; 592 592 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 593 + struct usb_hcd *shared_hcd = xhci->shared_hcd; 593 594 594 - usb_remove_hcd(xhci->shared_hcd); 595 + usb_remove_hcd(shared_hcd); 596 + xhci->shared_hcd = NULL; 595 597 device_init_wakeup(&dev->dev, false); 596 598 597 599 usb_remove_hcd(hcd); 598 - usb_put_hcd(xhci->shared_hcd); 600 + usb_put_hcd(shared_hcd); 599 601 usb_put_hcd(hcd); 600 602 xhci_mtk_sch_exit(mtk); 601 603 xhci_mtk_clks_disable(mtk);
+6
drivers/usb/host/xhci-pci.c
··· 248 248 if (pdev->vendor == PCI_VENDOR_ID_TI && pdev->device == 0x8241) 249 249 xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_7; 250 250 251 + if ((pdev->vendor == PCI_VENDOR_ID_BROADCOM || 252 + pdev->vendor == PCI_VENDOR_ID_CAVIUM) && 253 + pdev->device == 0x9026) 254 + xhci->quirks |= XHCI_RESET_PLL_ON_DISCONNECT; 255 + 251 256 if (xhci->quirks & XHCI_RESET_ON_RESUME) 252 257 xhci_dbg_trace(xhci, trace_xhci_dbg_quirks, 253 258 "QUIRK: Resetting on resume"); ··· 385 380 if (xhci->shared_hcd) { 386 381 usb_remove_hcd(xhci->shared_hcd); 387 382 usb_put_hcd(xhci->shared_hcd); 383 + xhci->shared_hcd = NULL; 388 384 } 389 385 390 386 /* Workaround for spurious wakeups at shutdown with HSW */
+4 -2
drivers/usb/host/xhci-plat.c
··· 362 362 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 363 363 struct clk *clk = xhci->clk; 364 364 struct clk *reg_clk = xhci->reg_clk; 365 + struct usb_hcd *shared_hcd = xhci->shared_hcd; 365 366 366 367 xhci->xhc_state |= XHCI_STATE_REMOVING; 367 368 368 - usb_remove_hcd(xhci->shared_hcd); 369 + usb_remove_hcd(shared_hcd); 370 + xhci->shared_hcd = NULL; 369 371 usb_phy_shutdown(hcd->usb_phy); 370 372 371 373 usb_remove_hcd(hcd); 372 - usb_put_hcd(xhci->shared_hcd); 374 + usb_put_hcd(shared_hcd); 373 375 374 376 clk_disable_unprepare(clk); 375 377 clk_disable_unprepare(reg_clk);
+43 -2
drivers/usb/host/xhci-ring.c
··· 1521 1521 usb_wakeup_notification(udev->parent, udev->portnum); 1522 1522 } 1523 1523 1524 + /* 1525 + * Quirk handler for errata seen on Cavium ThunderX2 processor XHCI 1526 + * Controller. 1527 + * As per ThunderX2 errata-129, a USB 2 device may come up as USB 1 1528 + * if a connection to a USB 1 device is followed by another connection 1529 + * to a USB 2 device. 1530 + * 1531 + * Reset the PHY after the USB device is disconnected if device speed 1532 + * is less than HCD_USB3. 1533 + * Retry the reset sequence a maximum of 4 times, checking the PLL lock status. 1534 + * 1535 + */ 1536 + static void xhci_cavium_reset_phy_quirk(struct xhci_hcd *xhci) 1537 + { 1538 + struct usb_hcd *hcd = xhci_to_hcd(xhci); 1539 + u32 pll_lock_check; 1540 + u32 retry_count = 4; 1541 + 1542 + do { 1543 + /* Assert PHY reset */ 1544 + writel(0x6F, hcd->regs + 0x1048); 1545 + udelay(10); 1546 + /* De-assert the PHY reset */ 1547 + writel(0x7F, hcd->regs + 0x1048); 1548 + udelay(200); 1549 + pll_lock_check = readl(hcd->regs + 0x1070); 1550 + } while (!(pll_lock_check & 0x1) && --retry_count); 1551 + } 1552 + 1524 1553 static void handle_port_status(struct xhci_hcd *xhci, 1525 1554 union xhci_trb *event) 1526 1555 { ··· 1581 1552 port = &xhci->hw_ports[port_id - 1]; 1582 1553 if (!port || !port->rhub || port->hcd_portnum == DUPLICATE_ENTRY) { 1583 1554 xhci_warn(xhci, "Event for invalid port %u\n", port_id); 1555 + bogus_port_status = true; 1556 + goto cleanup; 1557 + } 1558 + 1559 + /* We might get interrupts after shared_hcd is removed */ 1560 + if (port->rhub == &xhci->usb3_rhub && xhci->shared_hcd == NULL) { 1561 + xhci_dbg(xhci, "ignore port event for removed USB3 hcd\n"); 1584 1562 bogus_port_status = true; 1585 1563 goto cleanup; 1586 1564 } ··· 1675 1639 * RExit to a disconnect state). If so, let the driver know it's 1676 1640 * out of the RExit state. 
1677 1641 */ 1678 - if (!DEV_SUPERSPEED_ANY(portsc) && 1642 + if (!DEV_SUPERSPEED_ANY(portsc) && hcd->speed < HCD_USB3 && 1679 1643 test_and_clear_bit(hcd_portnum, 1680 1644 &bus_state->rexit_ports)) { 1681 1645 complete(&bus_state->rexit_done[hcd_portnum]); ··· 1683 1647 goto cleanup; 1684 1648 } 1685 1649 1686 - if (hcd->speed < HCD_USB3) 1650 + if (hcd->speed < HCD_USB3) { 1687 1651 xhci_test_and_clear_bit(xhci, port, PORT_PLC); 1652 + if ((xhci->quirks & XHCI_RESET_PLL_ON_DISCONNECT) && 1653 + (portsc & PORT_CSC) && !(portsc & PORT_CONNECT)) 1654 + xhci_cavium_reset_phy_quirk(xhci); 1655 + } 1688 1656 1689 1657 cleanup: 1690 1658 /* Update event ring dequeue pointer before dropping the lock */ ··· 2306 2266 goto cleanup; 2307 2267 case COMP_RING_UNDERRUN: 2308 2268 case COMP_RING_OVERRUN: 2269 + case COMP_STOPPED_LENGTH_INVALID: 2309 2270 goto cleanup; 2310 2271 default: 2311 2272 xhci_err(xhci, "ERROR Transfer event for unknown stream ring slot %u ep %u\n",
+1
drivers/usb/host/xhci-tegra.c
··· 1303 1303 1304 1304 usb_remove_hcd(xhci->shared_hcd); 1305 1305 usb_put_hcd(xhci->shared_hcd); 1306 + xhci->shared_hcd = NULL; 1306 1307 usb_remove_hcd(tegra->hcd); 1307 1308 usb_put_hcd(tegra->hcd); 1308 1309
-2
drivers/usb/host/xhci.c
··· 719 719 720 720 /* Only halt host and free memory after both hcds are removed */ 721 721 if (!usb_hcd_is_primary_hcd(hcd)) { 722 - /* usb core will free this hcd shortly, unset pointer */ 723 - xhci->shared_hcd = NULL; 724 722 mutex_unlock(&xhci->mutex); 725 723 return; 726 724 }
+2 -1
drivers/usb/host/xhci.h
··· 1680 1680 * It can take up to 20 ms to transition from RExit to U0 on the 1681 1681 * Intel Lynx Point LP xHCI host. 1682 1682 */ 1683 - #define XHCI_MAX_REXIT_TIMEOUT (20 * 1000) 1683 + #define XHCI_MAX_REXIT_TIMEOUT_MS 20 1684 1684 1685 1685 static inline unsigned int hcd_index(struct usb_hcd *hcd) 1686 1686 { ··· 1849 1849 #define XHCI_INTEL_USB_ROLE_SW BIT_ULL(31) 1850 1850 #define XHCI_ZERO_64B_REGS BIT_ULL(32) 1851 1851 #define XHCI_DEFAULT_PM_RUNTIME_ALLOW BIT_ULL(33) 1852 + #define XHCI_RESET_PLL_ON_DISCONNECT BIT_ULL(34) 1852 1853 1853 1854 unsigned int num_active_eps; 1854 1855 unsigned int limit_active_eps;
+1
drivers/usb/misc/appledisplay.c
··· 50 50 { APPLEDISPLAY_DEVICE(0x9219) }, 51 51 { APPLEDISPLAY_DEVICE(0x921c) }, 52 52 { APPLEDISPLAY_DEVICE(0x921d) }, 53 + { APPLEDISPLAY_DEVICE(0x9222) }, 53 54 { APPLEDISPLAY_DEVICE(0x9236) }, 54 55 55 56 /* Terminating entry */
+10 -1
fs/afs/rxrpc.c
··· 576 576 { 577 577 signed long rtt2, timeout; 578 578 long ret; 579 + bool stalled = false; 579 580 u64 rtt; 580 581 u32 life, last_life; 581 582 ··· 610 609 611 610 life = rxrpc_kernel_check_life(call->net->socket, call->rxcall); 612 611 if (timeout == 0 && 613 - life == last_life && signal_pending(current)) 612 + life == last_life && signal_pending(current)) { 613 + if (stalled) 614 614 break; 615 + __set_current_state(TASK_RUNNING); 616 + rxrpc_kernel_probe_life(call->net->socket, call->rxcall); 617 + timeout = rtt2; 618 + stalled = true; 619 + continue; 620 + } 615 621 616 622 if (life != last_life) { 617 623 timeout = rtt2; 618 624 last_life = life; 625 + stalled = false; 619 626 } 620 627 621 628 timeout = schedule_timeout(timeout);
+35 -25
fs/dax.c
··· 98 98 return xa_mk_value(flags | (pfn_t_to_pfn(pfn) << DAX_SHIFT)); 99 99 } 100 100 101 - static void *dax_make_page_entry(struct page *page) 102 - { 103 - pfn_t pfn = page_to_pfn_t(page); 104 - return dax_make_entry(pfn, PageHead(page) ? DAX_PMD : 0); 105 - } 106 - 107 101 static bool dax_is_locked(void *entry) 108 102 { 109 103 return xa_to_value(entry) & DAX_LOCKED; ··· 110 116 return 0; 111 117 } 112 118 113 - static int dax_is_pmd_entry(void *entry) 119 + static unsigned long dax_is_pmd_entry(void *entry) 114 120 { 115 121 return xa_to_value(entry) & DAX_PMD; 116 122 } 117 123 118 - static int dax_is_pte_entry(void *entry) 124 + static bool dax_is_pte_entry(void *entry) 119 125 { 120 126 return !(xa_to_value(entry) & DAX_PMD); 121 127 } ··· 216 222 ewait.wait.func = wake_exceptional_entry_func; 217 223 218 224 for (;;) { 219 - entry = xas_load(xas); 220 - if (!entry || xa_is_internal(entry) || 221 - WARN_ON_ONCE(!xa_is_value(entry)) || 225 + entry = xas_find_conflict(xas); 226 + if (!entry || WARN_ON_ONCE(!xa_is_value(entry)) || 222 227 !dax_is_locked(entry)) 223 228 return entry; 224 229 ··· 248 255 { 249 256 void *old; 250 257 258 + BUG_ON(dax_is_locked(entry)); 251 259 xas_reset(xas); 252 260 xas_lock_irq(xas); 253 261 old = xas_store(xas, entry); ··· 346 352 return NULL; 347 353 } 348 354 355 + /* 356 + * dax_lock_mapping_entry - Lock the DAX entry corresponding to a page 357 + * @page: The page whose entry we want to lock 358 + * 359 + * Context: Process context. 360 + * Return: %true if the entry was locked or does not need to be locked. 
361 + */ 349 362 bool dax_lock_mapping_entry(struct page *page) 350 363 { 351 364 XA_STATE(xas, NULL, 0); 352 365 void *entry; 366 + bool locked; 353 367 368 + /* Ensure page->mapping isn't freed while we look at it */ 369 + rcu_read_lock(); 354 370 for (;;) { 355 371 struct address_space *mapping = READ_ONCE(page->mapping); 356 372 373 + locked = false; 357 374 if (!dax_mapping(mapping)) 358 - return false; 375 + break; 359 376 360 377 /* 361 378 * In the device-dax case there's no need to lock, a ··· 375 370 * otherwise we would not have a valid pfn_to_page() 376 371 * translation. 377 372 */ 373 + locked = true; 378 374 if (S_ISCHR(mapping->host->i_mode)) 379 - return true; 375 + break; 380 376 381 377 xas.xa = &mapping->i_pages; 382 378 xas_lock_irq(&xas); ··· 388 382 xas_set(&xas, page->index); 389 383 entry = xas_load(&xas); 390 384 if (dax_is_locked(entry)) { 385 + rcu_read_unlock(); 391 386 entry = get_unlocked_entry(&xas); 392 - /* Did the page move while we slept? */ 393 - if (dax_to_pfn(entry) != page_to_pfn(page)) { 394 - xas_unlock_irq(&xas); 395 - continue; 396 - } 387 + xas_unlock_irq(&xas); 388 + put_unlocked_entry(&xas, entry); 389 + rcu_read_lock(); 390 + continue; 397 391 } 398 392 dax_lock_entry(&xas, entry); 399 393 xas_unlock_irq(&xas); 400 - return true; 394 + break; 401 395 } 396 + rcu_read_unlock(); 397 + return locked; 402 398 } 403 399 404 400 void dax_unlock_mapping_entry(struct page *page) 405 401 { 406 402 struct address_space *mapping = page->mapping; 407 403 XA_STATE(xas, &mapping->i_pages, page->index); 404 + void *entry; 408 405 409 406 if (S_ISCHR(mapping->host->i_mode)) 410 407 return; 411 408 412 - dax_unlock_entry(&xas, dax_make_page_entry(page)); 409 + rcu_read_lock(); 410 + entry = xas_load(&xas); 411 + rcu_read_unlock(); 412 + entry = dax_make_entry(page_to_pfn_t(page), dax_is_pmd_entry(entry)); 413 + dax_unlock_entry(&xas, entry); 413 414 } 414 415 415 416 /* ··· 458 445 retry: 459 446 xas_lock_irq(xas); 460 447 entry = 
get_unlocked_entry(xas); 461 - if (xa_is_internal(entry)) 462 - goto fallback; 463 448 464 449 if (entry) { 465 - if (WARN_ON_ONCE(!xa_is_value(entry))) { 450 + if (!xa_is_value(entry)) { 466 451 xas_set_err(xas, EIO); 467 452 goto out_unlock; 468 453 } ··· 1639 1628 /* Did we race with someone splitting entry or so? */ 1640 1629 if (!entry || 1641 1630 (order == 0 && !dax_is_pte_entry(entry)) || 1642 - (order == PMD_ORDER && (xa_is_internal(entry) || 1643 - !dax_is_pmd_entry(entry)))) { 1631 + (order == PMD_ORDER && !dax_is_pmd_entry(entry))) { 1644 1632 put_unlocked_entry(&xas, entry); 1645 1633 xas_unlock_irq(&xas); 1646 1634 trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
+3 -2
fs/exec.c
··· 62 62 #include <linux/oom.h> 63 63 #include <linux/compat.h> 64 64 #include <linux/vmalloc.h> 65 + #include <linux/freezer.h> 65 66 66 67 #include <linux/uaccess.h> 67 68 #include <asm/mmu_context.h> ··· 1084 1083 while (sig->notify_count) { 1085 1084 __set_current_state(TASK_KILLABLE); 1086 1085 spin_unlock_irq(lock); 1087 - schedule(); 1086 + freezable_schedule(); 1088 1087 if (unlikely(__fatal_signal_pending(tsk))) 1089 1088 goto killed; 1090 1089 spin_lock_irq(lock); ··· 1112 1111 __set_current_state(TASK_KILLABLE); 1113 1112 write_unlock_irq(&tasklist_lock); 1114 1113 cgroup_threadgroup_change_end(tsk); 1115 - schedule(); 1114 + freezable_schedule(); 1116 1115 if (unlikely(__fatal_signal_pending(tsk))) 1117 1116 goto killed; 1118 1117 }
+12 -4
fs/fuse/dev.c
··· 165 165 166 166 static void fuse_drop_waiting(struct fuse_conn *fc) 167 167 { 168 - if (fc->connected) { 169 - atomic_dec(&fc->num_waiting); 170 - } else if (atomic_dec_and_test(&fc->num_waiting)) { 168 + /* 169 + * lockless check of fc->connected is okay, because atomic_dec_and_test() 170 + * provides a memory barrier matched with the one in fuse_wait_aborted() 171 + * to ensure no wake-up is missed. 172 + */ 173 + if (atomic_dec_and_test(&fc->num_waiting) && 174 + !READ_ONCE(fc->connected)) { 171 175 /* wake up aborters */ 172 176 wake_up_all(&fc->blocked_waitq); 173 177 } ··· 1772 1768 req->in.args[1].size = total_len; 1773 1769 1774 1770 err = fuse_request_send_notify_reply(fc, req, outarg->notify_unique); 1775 - if (err) 1771 + if (err) { 1776 1772 fuse_retrieve_end(fc, req); 1773 + fuse_put_request(fc, req); 1774 + } 1777 1775 1778 1776 return err; 1779 1777 } ··· 2225 2219 2226 2220 void fuse_wait_aborted(struct fuse_conn *fc) 2227 2221 { 2222 + /* matches implicit memory barrier in fuse_drop_waiting() */ 2223 + smp_mb(); 2228 2224 wait_event(fc->blocked_waitq, atomic_read(&fc->num_waiting) == 0); 2229 2225 } 2230 2226
+3 -1
fs/fuse/file.c
··· 2924 2924 } 2925 2925 2926 2926 if (io->async) { 2927 + bool blocking = io->blocking; 2928 + 2927 2929 fuse_aio_complete(io, ret < 0 ? ret : 0, -1); 2928 2930 2929 2931 /* we have a non-extending, async request, so return */ 2930 - if (!io->blocking) 2932 + if (!blocking) 2931 2933 return -EIOCBQUEUED; 2932 2934 2933 2935 wait_for_completion(&wait);
+27 -27
fs/gfs2/bmap.c
··· 826 826 ret = gfs2_meta_inode_buffer(ip, &dibh); 827 827 if (ret) 828 828 goto unlock; 829 - iomap->private = dibh; 829 + mp->mp_bh[0] = dibh; 830 830 831 831 if (gfs2_is_stuffed(ip)) { 832 832 if (flags & IOMAP_WRITE) { ··· 863 863 len = lblock_stop - lblock + 1; 864 864 iomap->length = len << inode->i_blkbits; 865 865 866 - get_bh(dibh); 867 - mp->mp_bh[0] = dibh; 868 - 869 866 height = ip->i_height; 870 867 while ((lblock + 1) * sdp->sd_sb.sb_bsize > sdp->sd_heightsize[height]) 871 868 height++; ··· 895 898 iomap->bdev = inode->i_sb->s_bdev; 896 899 unlock: 897 900 up_read(&ip->i_rw_mutex); 898 - if (ret && dibh) 899 - brelse(dibh); 900 901 return ret; 901 902 902 903 do_alloc: ··· 975 980 976 981 static int gfs2_iomap_begin_write(struct inode *inode, loff_t pos, 977 982 loff_t length, unsigned flags, 978 - struct iomap *iomap) 983 + struct iomap *iomap, 984 + struct metapath *mp) 979 985 { 980 - struct metapath mp = { .mp_aheight = 1, }; 981 986 struct gfs2_inode *ip = GFS2_I(inode); 982 987 struct gfs2_sbd *sdp = GFS2_SB(inode); 983 988 unsigned int data_blocks = 0, ind_blocks = 0, rblocks; ··· 991 996 unstuff = gfs2_is_stuffed(ip) && 992 997 pos + length > gfs2_max_stuffed_size(ip); 993 998 994 - ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp); 999 + ret = gfs2_iomap_get(inode, pos, length, flags, iomap, mp); 995 1000 if (ret) 996 - goto out_release; 1001 + goto out_unlock; 997 1002 998 1003 alloc_required = unstuff || iomap->type == IOMAP_HOLE; 999 1004 ··· 1008 1013 1009 1014 ret = gfs2_quota_lock_check(ip, &ap); 1010 1015 if (ret) 1011 - goto out_release; 1016 + goto out_unlock; 1012 1017 1013 1018 ret = gfs2_inplace_reserve(ip, &ap); 1014 1019 if (ret) ··· 1033 1038 ret = gfs2_unstuff_dinode(ip, NULL); 1034 1039 if (ret) 1035 1040 goto out_trans_end; 1036 - release_metapath(&mp); 1037 - brelse(iomap->private); 1038 - iomap->private = NULL; 1041 + release_metapath(mp); 1039 1042 ret = gfs2_iomap_get(inode, iomap->offset, iomap->length, 1040 
- flags, iomap, &mp); 1043 + flags, iomap, mp); 1041 1044 if (ret) 1042 1045 goto out_trans_end; 1043 1046 } 1044 1047 1045 1048 if (iomap->type == IOMAP_HOLE) { 1046 - ret = gfs2_iomap_alloc(inode, iomap, flags, &mp); 1049 + ret = gfs2_iomap_alloc(inode, iomap, flags, mp); 1047 1050 if (ret) { 1048 1051 gfs2_trans_end(sdp); 1049 1052 gfs2_inplace_release(ip); ··· 1049 1056 goto out_qunlock; 1050 1057 } 1051 1058 } 1052 - release_metapath(&mp); 1053 1059 if (!gfs2_is_stuffed(ip) && gfs2_is_jdata(ip)) 1054 1060 iomap->page_done = gfs2_iomap_journaled_page_done; 1055 1061 return 0; ··· 1061 1069 out_qunlock: 1062 1070 if (alloc_required) 1063 1071 gfs2_quota_unlock(ip); 1064 - out_release: 1065 - if (iomap->private) 1066 - brelse(iomap->private); 1067 - release_metapath(&mp); 1072 + out_unlock: 1068 1073 gfs2_write_unlock(inode); 1069 1074 return ret; 1070 1075 } ··· 1077 1088 1078 1089 trace_gfs2_iomap_start(ip, pos, length, flags); 1079 1090 if ((flags & IOMAP_WRITE) && !(flags & IOMAP_DIRECT)) { 1080 - ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap); 1091 + ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap, &mp); 1081 1092 } else { 1082 1093 ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp); 1083 - release_metapath(&mp); 1094 + 1084 1095 /* 1085 1096 * Silently fall back to buffered I/O for stuffed files or if 1086 1097 * we've hit a hole (see gfs2_file_direct_write). ··· 1089 1100 iomap->type != IOMAP_MAPPED) 1090 1101 ret = -ENOTBLK; 1091 1102 } 1103 + if (!ret) { 1104 + get_bh(mp.mp_bh[0]); 1105 + iomap->private = mp.mp_bh[0]; 1106 + } 1107 + release_metapath(&mp); 1092 1108 trace_gfs2_iomap_end(ip, iomap, ret); 1093 1109 return ret; 1094 1110 } ··· 1902 1908 if (ret < 0) 1903 1909 goto out; 1904 1910 1905 - /* issue read-ahead on metadata */ 1906 - if (mp.mp_aheight > 1) { 1907 - for (; ret > 1; ret--) { 1908 - metapointer_range(&mp, mp.mp_aheight - ret, 1911 + /* On the first pass, issue read-ahead on metadata. 
*/ 1912 + if (mp.mp_aheight > 1 && strip_h == ip->i_height - 1) { 1913 + unsigned int height = mp.mp_aheight - 1; 1914 + 1915 + /* No read-ahead for data blocks. */ 1916 + if (mp.mp_aheight - 1 == strip_h) 1917 + height--; 1918 + 1919 + for (; height >= mp.mp_aheight - ret; height--) { 1920 + metapointer_range(&mp, height, 1909 1921 start_list, start_aligned, 1910 1922 end_list, end_aligned, 1911 1923 &start, &end);
+2 -1
fs/gfs2/rgrp.c
··· 733 733 734 734 if (gl) { 735 735 glock_clear_object(gl, rgd); 736 + gfs2_rgrp_brelse(rgd); 736 737 gfs2_glock_put(gl); 737 738 } 738 739 ··· 1175 1174 * @rgd: the struct gfs2_rgrpd describing the RG to read in 1176 1175 * 1177 1176 * Read in all of a Resource Group's header and bitmap blocks. 1178 - * Caller must eventually call gfs2_rgrp_relse() to free the bitmaps. 1177 + * Caller must eventually call gfs2_rgrp_brelse() to free the bitmaps. 1179 1178 * 1180 1179 * Returns: errno 1181 1180 */
+5 -2
fs/inode.c
··· 730 730 return LRU_REMOVED; 731 731 } 732 732 733 - /* recently referenced inodes get one more pass */ 734 - if (inode->i_state & I_REFERENCED) { 733 + /* 734 + * Recently referenced inodes and inodes with many attached pages 735 + * get one more pass. 736 + */ 737 + if (inode->i_state & I_REFERENCED || inode->i_data.nrpages > 1) { 735 738 inode->i_state &= ~I_REFERENCED; 736 739 spin_unlock(&inode->i_lock); 737 740 return LRU_ROTATE;
+41 -12
fs/iomap.c
··· 142 142 iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, 143 143 loff_t *pos, loff_t length, unsigned *offp, unsigned *lenp) 144 144 { 145 + loff_t orig_pos = *pos; 146 + loff_t isize = i_size_read(inode); 145 147 unsigned block_bits = inode->i_blkbits; 146 148 unsigned block_size = (1 << block_bits); 147 149 unsigned poff = offset_in_page(*pos); 148 150 unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length); 149 151 unsigned first = poff >> block_bits; 150 152 unsigned last = (poff + plen - 1) >> block_bits; 151 - unsigned end = offset_in_page(i_size_read(inode)) >> block_bits; 152 153 153 154 /* 154 155 * If the block size is smaller than the page size we need to check the ··· 184 183 * handle both halves separately so that we properly zero data in the 185 184 * page cache for blocks that are entirely outside of i_size. 186 185 */ 187 - if (first <= end && last > end) 188 - plen -= (last - end) * block_size; 186 + if (orig_pos <= isize && orig_pos + length > isize) { 187 + unsigned end = offset_in_page(isize - 1) >> block_bits; 188 + 189 + if (first <= end && last > end) 190 + plen -= (last - end) * block_size; 191 + } 189 192 190 193 *offp = poff; 191 194 *lenp = plen; ··· 1585 1580 struct bio *bio; 1586 1581 bool need_zeroout = false; 1587 1582 bool use_fua = false; 1588 - int nr_pages, ret; 1583 + int nr_pages, ret = 0; 1589 1584 size_t copied = 0; 1590 1585 1591 1586 if ((pos | length | align) & ((1 << blkbits) - 1)) ··· 1601 1596 1602 1597 if (iomap->flags & IOMAP_F_NEW) { 1603 1598 need_zeroout = true; 1604 - } else { 1599 + } else if (iomap->type == IOMAP_MAPPED) { 1605 1600 /* 1606 - * Use a FUA write if we need datasync semantics, this 1607 - * is a pure data IO that doesn't require any metadata 1608 - * updates and the underlying device supports FUA. This 1609 - * allows us to avoid cache flushes on IO completion. 
1601 + * Use a FUA write if we need datasync semantics, this is a pure 1602 + * data IO that doesn't require any metadata updates (including 1603 + * after IO completion such as unwritten extent conversion) and 1604 + * the underlying device supports FUA. This allows us to avoid 1605 + * cache flushes on IO completion. 1610 1606 */ 1611 1607 if (!(iomap->flags & (IOMAP_F_SHARED|IOMAP_F_DIRTY)) && 1612 1608 (dio->flags & IOMAP_DIO_WRITE_FUA) && ··· 1650 1644 1651 1645 ret = bio_iov_iter_get_pages(bio, &iter); 1652 1646 if (unlikely(ret)) { 1647 + /* 1648 + * We have to stop part way through an IO. We must fall 1649 + * through to the sub-block tail zeroing here, otherwise 1650 + * this short IO may expose stale data in the tail of 1651 + * the block we haven't written data to. 1652 + */ 1653 1653 bio_put(bio); 1654 - return copied ? copied : ret; 1654 + goto zero_tail; 1655 1655 } 1656 1656 1657 1657 n = bio->bi_iter.bi_size; ··· 1688 1676 dio->submit.cookie = submit_bio(bio); 1689 1677 } while (nr_pages); 1690 1678 1691 - if (need_zeroout) { 1679 + /* 1680 + * We need to zeroout the tail of a sub-block write if the extent type 1681 + * requires zeroing or the write extends beyond EOF. If we don't zero 1682 + * the block tail in the latter case, we can expose stale data via mmap 1683 + * reads of the EOF block. 1684 + */ 1685 + zero_tail: 1686 + if (need_zeroout || 1687 + ((dio->flags & IOMAP_DIO_WRITE) && pos >= i_size_read(inode))) { 1692 1688 /* zero out from the end of the write to the end of the block */ 1693 1689 pad = pos & (fs_block_size - 1); 1694 1690 if (pad) 1695 1691 iomap_dio_zero(dio, iomap, pos, fs_block_size - pad); 1696 1692 } 1697 - return copied; 1693 + return copied ? copied : ret; 1698 1694 } 1699 1695 1700 1696 static loff_t ··· 1877 1857 dio->wait_for_completion = true; 1878 1858 ret = 0; 1879 1859 } 1860 + 1861 + /* 1862 + * Splicing to pipes can fail on a full pipe. 
We have to 1863 + * swallow this to make it look like a short IO 1864 + * otherwise the higher splice layers will completely 1865 + * mishandle the error and stop moving data. 1866 + */ 1867 + if (ret == -EFAULT) 1868 + ret = 0; 1880 1869 break; 1881 1870 } 1882 1871 pos += ret;
+3 -3
fs/namespace.c
··· 695 695 696 696 hlist_for_each_entry(mp, chain, m_hash) { 697 697 if (mp->m_dentry == dentry) { 698 - /* might be worth a WARN_ON() */ 699 - if (d_unlinked(dentry)) 700 - return ERR_PTR(-ENOENT); 701 698 mp->m_count++; 702 699 return mp; 703 700 } ··· 708 711 int ret; 709 712 710 713 if (d_mountpoint(dentry)) { 714 + /* might be worth a WARN_ON() */ 715 + if (d_unlinked(dentry)) 716 + return ERR_PTR(-ENOENT); 711 717 mountpoint: 712 718 read_seqlock_excl(&mount_lock); 713 719 mp = lookup_mountpoint(dentry);
+13 -13
fs/nfs/callback_proc.c
··· 66 66 out_iput: 67 67 rcu_read_unlock(); 68 68 trace_nfs4_cb_getattr(cps->clp, &args->fh, inode, -ntohl(res->status)); 69 - iput(inode); 69 + nfs_iput_and_deactive(inode); 70 70 out: 71 71 dprintk("%s: exit with status = %d\n", __func__, ntohl(res->status)); 72 72 return res->status; ··· 108 108 } 109 109 trace_nfs4_cb_recall(cps->clp, &args->fh, inode, 110 110 &args->stateid, -ntohl(res)); 111 - iput(inode); 111 + nfs_iput_and_deactive(inode); 112 112 out: 113 113 dprintk("%s: exit with status = %d\n", __func__, ntohl(res)); 114 114 return res; ··· 686 686 { 687 687 struct cb_offloadargs *args = data; 688 688 struct nfs_server *server; 689 - struct nfs4_copy_state *copy; 689 + struct nfs4_copy_state *copy, *tmp_copy; 690 690 bool found = false; 691 + 692 + copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); 693 + if (!copy) 694 + return htonl(NFS4ERR_SERVERFAULT); 691 695 692 696 spin_lock(&cps->clp->cl_lock); 693 697 rcu_read_lock(); 694 698 list_for_each_entry_rcu(server, &cps->clp->cl_superblocks, 695 699 client_link) { 696 - list_for_each_entry(copy, &server->ss_copies, copies) { 700 + list_for_each_entry(tmp_copy, &server->ss_copies, copies) { 697 701 if (memcmp(args->coa_stateid.other, 698 - copy->stateid.other, 702 + tmp_copy->stateid.other, 699 703 sizeof(args->coa_stateid.other))) 700 704 continue; 701 - nfs4_copy_cb_args(copy, args); 702 - complete(&copy->completion); 705 + nfs4_copy_cb_args(tmp_copy, args); 706 + complete(&tmp_copy->completion); 703 707 found = true; 704 708 goto out; 705 709 } ··· 711 707 out: 712 708 rcu_read_unlock(); 713 709 if (!found) { 714 - copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); 715 - if (!copy) { 716 - spin_unlock(&cps->clp->cl_lock); 717 - return htonl(NFS4ERR_SERVERFAULT); 718 - } 719 710 memcpy(&copy->stateid, &args->coa_stateid, NFS4_STATEID_SIZE); 720 711 nfs4_copy_cb_args(copy, args); 721 712 list_add_tail(&copy->copies, &cps->clp->pending_cb_stateids); 722 - } 713 + } else 714 + kfree(copy); 
723 715 spin_unlock(&cps->clp->cl_lock); 724 716 725 717 return 0;
+9 -2
fs/nfs/delegation.c
··· 850 850 const struct nfs_fh *fhandle) 851 851 { 852 852 struct nfs_delegation *delegation; 853 - struct inode *res = NULL; 853 + struct inode *freeme, *res = NULL; 854 854 855 855 list_for_each_entry_rcu(delegation, &server->delegations, super_list) { 856 856 spin_lock(&delegation->lock); 857 857 if (delegation->inode != NULL && 858 858 nfs_compare_fh(fhandle, &NFS_I(delegation->inode)->fh) == 0) { 859 - res = igrab(delegation->inode); 859 + freeme = igrab(delegation->inode); 860 + if (freeme && nfs_sb_active(freeme->i_sb)) 861 + res = freeme; 860 862 spin_unlock(&delegation->lock); 861 863 if (res != NULL) 862 864 return res; 865 + if (freeme) { 866 + rcu_read_unlock(); 867 + iput(freeme); 868 + rcu_read_lock(); 869 + } 863 870 return ERR_PTR(-EAGAIN); 864 871 } 865 872 spin_unlock(&delegation->lock);
+9 -12
fs/nfs/flexfilelayout/flexfilelayout.c
··· 1361 1361 task)) 1362 1362 return; 1363 1363 1364 - if (ff_layout_read_prepare_common(task, hdr)) 1365 - return; 1366 - 1367 - if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context, 1368 - hdr->args.lock_context, FMODE_READ) == -EIO) 1369 - rpc_exit(task, -EIO); /* lost lock, terminate I/O */ 1364 + ff_layout_read_prepare_common(task, hdr); 1370 1365 } 1371 1366 1372 1367 static void ff_layout_read_call_done(struct rpc_task *task, void *data) ··· 1537 1542 task)) 1538 1543 return; 1539 1544 1540 - if (ff_layout_write_prepare_common(task, hdr)) 1541 - return; 1542 - 1543 - if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context, 1544 - hdr->args.lock_context, FMODE_WRITE) == -EIO) 1545 - rpc_exit(task, -EIO); /* lost lock, terminate I/O */ 1545 + ff_layout_write_prepare_common(task, hdr); 1546 1546 } 1547 1547 1548 1548 static void ff_layout_write_call_done(struct rpc_task *task, void *data) ··· 1732 1742 fh = nfs4_ff_layout_select_ds_fh(lseg, idx); 1733 1743 if (fh) 1734 1744 hdr->args.fh = fh; 1745 + 1746 + if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid)) 1747 + goto out_failed; 1748 + 1735 1749 /* 1736 1750 * Note that if we ever decide to split across DSes, 1737 1751 * then we may need to handle dense-like offsets. ··· 1797 1803 fh = nfs4_ff_layout_select_ds_fh(lseg, idx); 1798 1804 if (fh) 1799 1805 hdr->args.fh = fh; 1806 + 1807 + if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid)) 1808 + goto out_failed; 1800 1809 1801 1810 /* 1802 1811 * Note that if we ever decide to split across DSes,
+4
fs/nfs/flexfilelayout/flexfilelayout.h
··· 215 215 unsigned int maxnum); 216 216 struct nfs_fh * 217 217 nfs4_ff_layout_select_ds_fh(struct pnfs_layout_segment *lseg, u32 mirror_idx); 218 + int 219 + nfs4_ff_layout_select_ds_stateid(struct pnfs_layout_segment *lseg, 220 + u32 mirror_idx, 221 + nfs4_stateid *stateid); 218 222 219 223 struct nfs4_pnfs_ds * 220 224 nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg, u32 ds_idx,
+19
fs/nfs/flexfilelayout/flexfilelayoutdev.c
··· 370 370 return fh; 371 371 } 372 372 373 + int 374 + nfs4_ff_layout_select_ds_stateid(struct pnfs_layout_segment *lseg, 375 + u32 mirror_idx, 376 + nfs4_stateid *stateid) 377 + { 378 + struct nfs4_ff_layout_mirror *mirror = FF_LAYOUT_COMP(lseg, mirror_idx); 379 + 380 + if (!ff_layout_mirror_valid(lseg, mirror, false)) { 381 + pr_err_ratelimited("NFS: %s: No data server for mirror offset index %d\n", 382 + __func__, mirror_idx); 383 + goto out; 384 + } 385 + 386 + nfs4_stateid_copy(stateid, &mirror->stateid); 387 + return 1; 388 + out: 389 + return 0; 390 + } 391 + 373 392 /** 374 393 * nfs4_ff_layout_prepare_ds - prepare a DS connection for an RPC call 375 394 * @lseg: the layout segment we're operating on
+10 -9
fs/nfs/nfs42proc.c
··· 137 137 struct file *dst, 138 138 nfs4_stateid *src_stateid) 139 139 { 140 - struct nfs4_copy_state *copy; 140 + struct nfs4_copy_state *copy, *tmp_copy; 141 141 int status = NFS4_OK; 142 142 bool found_pending = false; 143 143 struct nfs_open_context *ctx = nfs_file_open_context(dst); 144 144 145 + copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); 146 + if (!copy) 147 + return -ENOMEM; 148 + 145 149 spin_lock(&server->nfs_client->cl_lock); 146 - list_for_each_entry(copy, &server->nfs_client->pending_cb_stateids, 150 + list_for_each_entry(tmp_copy, &server->nfs_client->pending_cb_stateids, 147 151 copies) { 148 - if (memcmp(&res->write_res.stateid, &copy->stateid, 152 + if (memcmp(&res->write_res.stateid, &tmp_copy->stateid, 149 153 NFS4_STATEID_SIZE)) 150 154 continue; 151 155 found_pending = true; 152 - list_del(&copy->copies); 156 + list_del(&tmp_copy->copies); 153 157 break; 154 158 } 155 159 if (found_pending) { 156 160 spin_unlock(&server->nfs_client->cl_lock); 161 + kfree(copy); 162 + copy = tmp_copy; 157 163 goto out; 158 164 } 159 165 160 - copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); 161 - if (!copy) { 162 - spin_unlock(&server->nfs_client->cl_lock); 163 - return -ENOMEM; 164 - } 165 166 memcpy(&copy->stateid, &res->write_res.stateid, NFS4_STATEID_SIZE); 166 167 init_completion(&copy->completion); 167 168 copy->parent_state = ctx->state;
+2
fs/nfs/nfs4_fs.h
··· 41 41 NFS4CLNT_MOVED, 42 42 NFS4CLNT_LEASE_MOVED, 43 43 NFS4CLNT_DELEGATION_EXPIRED, 44 + NFS4CLNT_RUN_MANAGER, 45 + NFS4CLNT_DELEGRETURN_RUNNING, 44 46 }; 45 47 46 48 #define NFS4_RENEW_TIMEOUT 0x01
+17 -9
fs/nfs/nfs4state.c
··· 1210 1210 struct task_struct *task; 1211 1211 char buf[INET6_ADDRSTRLEN + sizeof("-manager") + 1]; 1212 1212 1213 + set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state); 1213 1214 if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0) 1214 1215 return; 1215 1216 __module_get(THIS_MODULE); ··· 2504 2503 2505 2504 /* Ensure exclusive access to NFSv4 state */ 2506 2505 do { 2506 + clear_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state); 2507 2507 if (test_bit(NFS4CLNT_PURGE_STATE, &clp->cl_state)) { 2508 2508 section = "purge state"; 2509 2509 status = nfs4_purge_lease(clp); ··· 2595 2593 } 2596 2594 2597 2595 nfs4_end_drain_session(clp); 2598 - if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) { 2599 - nfs_client_return_marked_delegations(clp); 2600 - continue; 2596 + nfs4_clear_state_manager_bit(clp); 2597 + 2598 + if (!test_and_set_bit(NFS4CLNT_DELEGRETURN_RUNNING, &clp->cl_state)) { 2599 + if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) { 2600 + nfs_client_return_marked_delegations(clp); 2601 + set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state); 2602 + } 2603 + clear_bit(NFS4CLNT_DELEGRETURN_RUNNING, &clp->cl_state); 2601 2604 } 2602 2605 2603 - nfs4_clear_state_manager_bit(clp); 2604 2606 /* Did we race with an attempt to give us more work? */ 2605 - if (clp->cl_state == 0) 2606 - break; 2607 + if (!test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state)) 2608 + return; 2607 2609 if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0) 2608 - break; 2609 - } while (refcount_read(&clp->cl_count) > 1); 2610 - return; 2610 + return; 2611 + } while (refcount_read(&clp->cl_count) > 1 && !signalled()); 2612 + goto out_drain; 2613 + 2611 2614 out_error: 2612 2615 if (strlen(section)) 2613 2616 section_sep = ": "; ··· 2620 2613 " with error %d\n", section_sep, section, 2621 2614 clp->cl_hostname, -status); 2622 2615 ssleep(1); 2616 + out_drain: 2623 2617 nfs4_end_drain_session(clp); 2624 2618 nfs4_clear_state_manager_bit(clp); 2625 2619 }
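The nfs4state.c change introduces a `NFS4CLNT_RUN_MANAGER` bit that requesters set before trying to become the single running manager, and that the manager clears at the top of each pass — so work requested mid-pass is never lost. A simplified sketch of that "run flag + running flag" scheme using C11 atomics; the flag names and helpers here are illustrative, and real-kernel concerns (signals, refcounts, draining) are omitted:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define FLAG_RUNNING (1u << 0)
#define FLAG_RUN     (1u << 1)

static atomic_uint state;
static int passes;

/* Request work; returns true if the caller should run the worker. */
bool kick_worker(void)
{
    atomic_fetch_or(&state, FLAG_RUN);
    /* Only one thread may become the runner. */
    return !(atomic_fetch_or(&state, FLAG_RUNNING) & FLAG_RUNNING);
}

void run_worker(void)
{
    do {
        atomic_fetch_and(&state, ~FLAG_RUN);    /* consume the request */
        passes++;                               /* ... do one pass ... */
        atomic_fetch_and(&state, ~FLAG_RUNNING);
        /* Did someone request more work while we ran? */
        if (!(atomic_load(&state) & FLAG_RUN))
            return;
        /* Yes: try to become the runner again. */
    } while (!(atomic_fetch_or(&state, FLAG_RUNNING) & FLAG_RUNNING));
}
```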
+3
fs/nfsd/nfs4proc.c
··· 1038 1038 { 1039 1039 __be32 status; 1040 1040 1041 + if (!cstate->save_fh.fh_dentry) 1042 + return nfserr_nofilehandle; 1043 + 1041 1044 status = nfs4_preprocess_stateid_op(rqstp, cstate, &cstate->save_fh, 1042 1045 src_stateid, RD_STATE, src, NULL); 1043 1046 if (status) {
+1 -3
fs/nilfs2/btnode.c
··· 266 266 return; 267 267 268 268 if (nbh == NULL) { /* blocksize == pagesize */ 269 - xa_lock_irq(&btnc->i_pages); 270 - __xa_erase(&btnc->i_pages, newkey); 271 - xa_unlock_irq(&btnc->i_pages); 269 + xa_erase_irq(&btnc->i_pages, newkey); 272 270 unlock_page(ctxt->bh->b_page); 273 271 } else 274 272 brelse(nbh);
+5 -5
fs/notify/fanotify/fanotify.c
··· 115 115 continue; 116 116 mark = iter_info->marks[type]; 117 117 /* 118 - * if the event is for a child and this inode doesn't care about 119 - * events on the child, don't send it! 118 + * If the event is for a child and this mark doesn't care about 119 + * events on a child, don't send it! 120 120 */ 121 - if (type == FSNOTIFY_OBJ_TYPE_INODE && 122 - (event_mask & FS_EVENT_ON_CHILD) && 123 - !(mark->mask & FS_EVENT_ON_CHILD)) 121 + if (event_mask & FS_EVENT_ON_CHILD && 122 + (type != FSNOTIFY_OBJ_TYPE_INODE || 123 + !(mark->mask & FS_EVENT_ON_CHILD))) 124 124 continue; 125 125 126 126 marks_mask |= mark->mask;
+5 -2
fs/notify/fsnotify.c
··· 167 167 parent = dget_parent(dentry); 168 168 p_inode = parent->d_inode; 169 169 170 - if (unlikely(!fsnotify_inode_watches_children(p_inode))) 170 + if (unlikely(!fsnotify_inode_watches_children(p_inode))) { 171 171 __fsnotify_update_child_dentry_flags(p_inode); 172 - else if (p_inode->i_fsnotify_mask & mask) { 172 + } else if (p_inode->i_fsnotify_mask & mask & ALL_FSNOTIFY_EVENTS) { 173 173 struct name_snapshot name; 174 174 175 175 /* we are notifying a parent so come up with the new mask which ··· 339 339 sb = mnt->mnt.mnt_sb; 340 340 mnt_or_sb_mask = mnt->mnt_fsnotify_mask | sb->s_fsnotify_mask; 341 341 } 342 + /* An event "on child" is not intended for a mount/sb mark */ 343 + if (mask & FS_EVENT_ON_CHILD) 344 + mnt_or_sb_mask = 0; 342 345 343 346 /* 344 347 * Optimization: srcu_read_lock() has a memory barrier which can
+10 -2
fs/ocfs2/aops.c
··· 2411 2411 /* this io's submitter should not have unlocked this before we could */ 2412 2412 BUG_ON(!ocfs2_iocb_is_rw_locked(iocb)); 2413 2413 2414 - if (bytes > 0 && private) 2415 - ret = ocfs2_dio_end_io_write(inode, private, offset, bytes); 2414 + if (bytes <= 0) 2415 + mlog_ratelimited(ML_ERROR, "Direct IO failed, bytes = %lld", 2416 + (long long)bytes); 2417 + if (private) { 2418 + if (bytes > 0) 2419 + ret = ocfs2_dio_end_io_write(inode, private, offset, 2420 + bytes); 2421 + else 2422 + ocfs2_dio_free_write_ctx(inode, private); 2423 + } 2416 2424 2417 2425 ocfs2_iocb_clear_rw_locked(iocb); 2418 2426
+9
fs/ocfs2/cluster/masklog.h
··· 178 178 ##__VA_ARGS__); \ 179 179 } while (0) 180 180 181 + #define mlog_ratelimited(mask, fmt, ...) \ 182 + do { \ 183 + static DEFINE_RATELIMIT_STATE(_rs, \ 184 + DEFAULT_RATELIMIT_INTERVAL, \ 185 + DEFAULT_RATELIMIT_BURST); \ 186 + if (__ratelimit(&_rs)) \ 187 + mlog(mask, fmt, ##__VA_ARGS__); \ 188 + } while (0) 189 + 181 190 #define mlog_errno(st) ({ \ 182 191 int _st = (st); \ 183 192 if (_st != -ERESTARTSYS && _st != -EINTR && \
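The `mlog_ratelimited()` macro above gets per-callsite throttling from a `static DEFINE_RATELIMIT_STATE` inside the macro body: each expansion site carries its own state object. A userspace sketch of the same idea, with a much-simplified window/burst policy (illustrative names, not the kernel's `__ratelimit()` implementation):

```c
#include <stdio.h>
#include <time.h>

struct ratelimit {
    time_t window_start;
    int interval;   /* seconds per window */
    int burst;      /* messages allowed per window */
    int printed;
};

int ratelimit_ok(struct ratelimit *rs)
{
    time_t now = time(NULL);

    if (now - rs->window_start >= rs->interval) {
        rs->window_start = now;     /* open a new window */
        rs->printed = 0;
    }
    if (rs->printed >= rs->burst)
        return 0;                   /* throttled */
    rs->printed++;
    return 1;
}

/* One static state per expansion site, as in the kernel macro. */
#define log_ratelimited(fmt, ...)                               \
    do {                                                        \
        static struct ratelimit _rs = { 0, 5, 3, 0 };           \
        if (ratelimit_ok(&_rs))                                 \
            fprintf(stderr, fmt, ##__VA_ARGS__);                \
    } while (0)
```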
+7 -8
fs/read_write.c
··· 2094 2094 off = same->src_offset; 2095 2095 len = same->src_length; 2096 2096 2097 - ret = -EISDIR; 2098 2097 if (S_ISDIR(src->i_mode)) 2099 - goto out; 2098 + return -EISDIR; 2100 2099 2101 - ret = -EINVAL; 2102 2100 if (!S_ISREG(src->i_mode)) 2103 - goto out; 2101 + return -EINVAL; 2102 + 2103 + if (!file->f_op->remap_file_range) 2104 + return -EOPNOTSUPP; 2104 2105 2105 2106 ret = remap_verify_area(file, off, len, false); 2106 2107 if (ret < 0) 2107 - goto out; 2108 + return ret; 2108 2109 ret = 0; 2109 2110 2110 2111 if (off + len > i_size_read(src)) ··· 2148 2147 fdput(dst_fd); 2149 2148 next_loop: 2150 2149 if (fatal_signal_pending(current)) 2151 - goto out; 2150 + break; 2152 2151 } 2153 - 2154 - out: 2155 2152 return ret; 2156 2153 } 2157 2154 EXPORT_SYMBOL(vfs_dedupe_file_range);
+4 -1
fs/xfs/libxfs/xfs_bmap.c
··· 1694 1694 case BMAP_LEFT_FILLING | BMAP_RIGHT_FILLING | BMAP_RIGHT_CONTIG: 1695 1695 /* 1696 1696 * Filling in all of a previously delayed allocation extent. 1697 - * The right neighbor is contiguous, the left is not. 1697 + * The right neighbor is contiguous, the left is not. Take care 1698 + * with delay -> unwritten extent allocation here because the 1699 + * delalloc record we are overwriting is always written. 1698 1700 */ 1699 1701 PREV.br_startblock = new->br_startblock; 1700 1702 PREV.br_blockcount += RIGHT.br_blockcount; 1703 + PREV.br_state = new->br_state; 1701 1704 1702 1705 xfs_iext_next(ifp, &bma->icur); 1703 1706 xfs_iext_remove(bma->ip, &bma->icur, state);
+7 -4
fs/xfs/libxfs/xfs_ialloc_btree.c
··· 538 538 539 539 static xfs_extlen_t 540 540 xfs_inobt_max_size( 541 - struct xfs_mount *mp) 541 + struct xfs_mount *mp, 542 + xfs_agnumber_t agno) 542 543 { 544 + xfs_agblock_t agblocks = xfs_ag_block_count(mp, agno); 545 + 543 546 /* Bail out if we're uninitialized, which can happen in mkfs. */ 544 547 if (mp->m_inobt_mxr[0] == 0) 545 548 return 0; 546 549 547 550 return xfs_btree_calc_size(mp->m_inobt_mnr, 548 - (uint64_t)mp->m_sb.sb_agblocks * mp->m_sb.sb_inopblock / 549 - XFS_INODES_PER_CHUNK); 551 + (uint64_t)agblocks * mp->m_sb.sb_inopblock / 552 + XFS_INODES_PER_CHUNK); 550 553 } 551 554 552 555 static int ··· 597 594 if (error) 598 595 return error; 599 596 600 - *ask += xfs_inobt_max_size(mp); 597 + *ask += xfs_inobt_max_size(mp, agno); 601 598 *used += tree_len; 602 599 return 0; 603 600 }
+2 -8
fs/xfs/xfs_bmap_util.c
··· 1042 1042 goto out_unlock; 1043 1043 } 1044 1044 1045 - static int 1045 + int 1046 1046 xfs_flush_unmap_range( 1047 1047 struct xfs_inode *ip, 1048 1048 xfs_off_t offset, ··· 1195 1195 * Writeback and invalidate cache for the remainder of the file as we're 1196 1196 * about to shift down every extent from offset to EOF. 1197 1197 */ 1198 - error = filemap_write_and_wait_range(VFS_I(ip)->i_mapping, offset, -1); 1199 - if (error) 1200 - return error; 1201 - error = invalidate_inode_pages2_range(VFS_I(ip)->i_mapping, 1202 - offset >> PAGE_SHIFT, -1); 1203 - if (error) 1204 - return error; 1198 + error = xfs_flush_unmap_range(ip, offset, XFS_ISIZE(ip)); 1205 1199 1206 1200 /* 1207 1201 * Clean out anything hanging around in the cow fork now that
+3
fs/xfs/xfs_bmap_util.h
··· 80 80 int whichfork, xfs_extnum_t *nextents, 81 81 xfs_filblks_t *count); 82 82 83 + int xfs_flush_unmap_range(struct xfs_inode *ip, xfs_off_t offset, 84 + xfs_off_t len); 85 + 83 86 #endif /* __XFS_BMAP_UTIL_H__ */
+21 -7
fs/xfs/xfs_buf_item.c
··· 1233 1233 } 1234 1234 1235 1235 /* 1236 - * Requeue a failed buffer for writeback 1236 + * Requeue a failed buffer for writeback. 1237 1237 * 1238 - * Return true if the buffer has been re-queued properly, false otherwise 1238 + * We clear the log item failed state here as well, but we have to be careful 1239 + * about reference counts because the only active reference counts on the buffer 1240 + * may be the failed log items. Hence if we clear the log item failed state 1241 + * before queuing the buffer for IO we can release all active references to 1242 + * the buffer and free it, leading to use after free problems in 1243 + * xfs_buf_delwri_queue. It makes no difference to the buffer or log items which 1244 + * order we process them in - the buffer is locked, and we own the buffer list 1245 + * so nothing on them is going to change while we are performing this action. 1246 + * 1247 + * Hence we can safely queue the buffer for IO before we clear the failed log 1248 + * item state, therefore always having an active reference to the buffer and 1249 + * avoiding the transient zero-reference state that leads to use-after-free. 1250 + * 1251 + * Return true if the buffer was added to the buffer list, false if it was 1252 + * already on the buffer list. 
1239 1253 */ 1240 1254 bool 1241 1255 xfs_buf_resubmit_failed_buffers( ··· 1257 1243 struct list_head *buffer_list) 1258 1244 { 1259 1245 struct xfs_log_item *lip; 1246 + bool ret; 1247 + 1248 + ret = xfs_buf_delwri_queue(bp, buffer_list); 1260 1249 1261 1250 /* 1262 - * Clear XFS_LI_FAILED flag from all items before resubmit 1263 - * 1264 - * XFS_LI_FAILED set/clear is protected by ail_lock, caller this 1251 + * XFS_LI_FAILED set/clear is protected by ail_lock, caller of this 1265 1252 * function already have it acquired 1266 1253 */ 1267 1254 list_for_each_entry(lip, &bp->b_li_list, li_bio_list) 1268 1255 xfs_clear_li_failed(lip); 1269 1256 1270 - /* Add this buffer back to the delayed write list */ 1271 - return xfs_buf_delwri_queue(bp, buffer_list); 1257 + return ret; 1272 1258 }
+1 -1
fs/xfs/xfs_file.c
··· 920 920 } 921 921 922 922 923 - loff_t 923 + STATIC loff_t 924 924 xfs_file_remap_range( 925 925 struct file *file_in, 926 926 loff_t pos_in,
+14 -4
fs/xfs/xfs_reflink.c
··· 296 296 if (error) 297 297 return error; 298 298 299 + xfs_trim_extent(imap, got.br_startoff, got.br_blockcount); 299 300 trace_xfs_reflink_cow_alloc(ip, &got); 300 301 return 0; 301 302 } ··· 1352 1351 if (ret) 1353 1352 goto out_unlock; 1354 1353 1355 - /* Zap any page cache for the destination file's range. */ 1356 - truncate_inode_pages_range(&inode_out->i_data, 1357 - round_down(pos_out, PAGE_SIZE), 1358 - round_up(pos_out + *len, PAGE_SIZE) - 1); 1354 + /* 1355 + * If pos_out > EOF, we may have dirtied blocks between EOF and 1356 + * pos_out. In that case, we need to extend the flush and unmap to cover 1357 + * from EOF to the end of the copy length. 1358 + */ 1359 + if (pos_out > XFS_ISIZE(dest)) { 1360 + loff_t flen = *len + (pos_out - XFS_ISIZE(dest)); 1361 + ret = xfs_flush_unmap_range(dest, XFS_ISIZE(dest), flen); 1362 + } else { 1363 + ret = xfs_flush_unmap_range(dest, pos_out, *len); 1364 + } 1365 + if (ret) 1366 + goto out_unlock; 1359 1367 1360 1368 return 1; 1361 1369 out_unlock:
+4 -1
fs/xfs/xfs_trace.h
··· 280 280 ), 281 281 TP_fast_assign( 282 282 __entry->dev = bp->b_target->bt_dev; 283 - __entry->bno = bp->b_bn; 283 + if (bp->b_bn == XFS_BUF_DADDR_NULL) 284 + __entry->bno = bp->b_maps[0].bm_bn; 285 + else 286 + __entry->bno = bp->b_bn; 284 287 __entry->nblks = bp->b_length; 285 288 __entry->hold = atomic_read(&bp->b_hold); 286 289 __entry->pincount = atomic_read(&bp->b_pin_count);
+1
include/linux/can/dev.h
··· 169 169 170 170 void can_put_echo_skb(struct sk_buff *skb, struct net_device *dev, 171 171 unsigned int idx); 172 + struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr); 172 173 unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx); 173 174 void can_free_echo_skb(struct net_device *dev, unsigned int idx); 174 175
+6 -1
include/linux/can/rx-offload.h
··· 41 41 int can_rx_offload_add_fifo(struct net_device *dev, struct can_rx_offload *offload, unsigned int weight); 42 42 int can_rx_offload_irq_offload_timestamp(struct can_rx_offload *offload, u64 reg); 43 43 int can_rx_offload_irq_offload_fifo(struct can_rx_offload *offload); 44 - int can_rx_offload_irq_queue_err_skb(struct can_rx_offload *offload, struct sk_buff *skb); 44 + int can_rx_offload_queue_sorted(struct can_rx_offload *offload, 45 + struct sk_buff *skb, u32 timestamp); 46 + unsigned int can_rx_offload_get_echo_skb(struct can_rx_offload *offload, 47 + unsigned int idx, u32 timestamp); 48 + int can_rx_offload_queue_tail(struct can_rx_offload *offload, 49 + struct sk_buff *skb); 45 50 void can_rx_offload_reset(struct can_rx_offload *offload); 46 51 void can_rx_offload_del(struct can_rx_offload *offload); 47 52 void can_rx_offload_enable(struct can_rx_offload *offload);
+1 -1
include/linux/dma-direct.h
··· 5 5 #include <linux/dma-mapping.h> 6 6 #include <linux/mem_encrypt.h> 7 7 8 - #define DIRECT_MAPPING_ERROR 0 8 + #define DIRECT_MAPPING_ERROR (~(dma_addr_t)0) 9 9 10 10 #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA 11 11 #include <asm/dma-direct.h>
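The dma-direct hunk changes the mapping-error sentinel from `0` to `~(dma_addr_t)0`: bus address 0 can be a perfectly valid mapping, while all-ones effectively never is. A tiny sketch of the all-ones sentinel idiom, with an illustrative `addr_t`:

```c
#include <stdint.h>

typedef uint64_t addr_t;

/* All-ones: unlike 0, this is not a plausible valid address. */
#define MAPPING_ERROR (~(addr_t)0)

static inline int mapping_failed(addr_t a)
{
    return a == MAPPING_ERROR;
}
```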
+7
include/linux/efi.h
··· 1167 1167 extern void efi_reboot(enum reboot_mode reboot_mode, const char *__unused); 1168 1168 1169 1169 extern bool efi_is_table_address(unsigned long phys_addr); 1170 + 1171 + extern int efi_apply_persistent_mem_reservations(void); 1170 1172 #else 1171 1173 static inline bool efi_enabled(int feature) 1172 1174 { ··· 1186 1184 static inline bool efi_is_table_address(unsigned long phys_addr) 1187 1185 { 1188 1186 return false; 1187 + } 1188 + 1189 + static inline int efi_apply_persistent_mem_reservations(void) 1190 + { 1191 + return 0; 1189 1192 } 1190 1193 #endif 1191 1194
-28
include/linux/hid.h
··· 1139 1139 int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size, 1140 1140 int interrupt); 1141 1141 1142 - 1143 - /** 1144 - * struct hid_scroll_counter - Utility class for processing high-resolution 1145 - * scroll events. 1146 - * @dev: the input device for which events should be reported. 1147 - * @microns_per_hi_res_unit: the amount moved by the user's finger for each 1148 - * high-resolution unit reported by the mouse, in 1149 - * microns. 1150 - * @resolution_multiplier: the wheel's resolution in high-resolution mode as a 1151 - * multiple of its lower resolution. For example, if 1152 - * moving the wheel by one "notch" would result in a 1153 - * value of 1 in low-resolution mode but 8 in 1154 - * high-resolution, the multiplier is 8. 1155 - * @remainder: counts the number of high-resolution units moved since the last 1156 - * low-resolution event (REL_WHEEL or REL_HWHEEL) was sent. Should 1157 - * only be used by class methods. 1158 - */ 1159 - struct hid_scroll_counter { 1160 - struct input_dev *dev; 1161 - int microns_per_hi_res_unit; 1162 - int resolution_multiplier; 1163 - 1164 - int remainder; 1165 - }; 1166 - 1167 - void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter, 1168 - int hi_res_value); 1169 - 1170 1142 /* HID quirks API */ 1171 1143 unsigned long hid_lookup_quirk(const struct hid_device *hdev); 1172 1144 int hid_quirks_init(char **quirks_param, __u16 bus, int count);
+2
include/linux/net_dim.h
··· 406 406 } 407 407 /* fall through */ 408 408 case NET_DIM_START_MEASURE: 409 + net_dim_sample(end_sample.event_ctr, end_sample.pkt_ctr, end_sample.byte_ctr, 410 + &dim->start_sample); 409 411 dim->state = NET_DIM_MEASURE_IN_PROGRESS; 410 412 break; 411 413 case NET_DIM_APPLY_NEW_PROFILE:
+17 -1
include/linux/skbuff.h
··· 1326 1326 } 1327 1327 } 1328 1328 1329 + static inline void skb_zcopy_set_nouarg(struct sk_buff *skb, void *val) 1330 + { 1331 + skb_shinfo(skb)->destructor_arg = (void *)((uintptr_t) val | 0x1UL); 1332 + skb_shinfo(skb)->tx_flags |= SKBTX_ZEROCOPY_FRAG; 1333 + } 1334 + 1335 + static inline bool skb_zcopy_is_nouarg(struct sk_buff *skb) 1336 + { 1337 + return (uintptr_t) skb_shinfo(skb)->destructor_arg & 0x1UL; 1338 + } 1339 + 1340 + static inline void *skb_zcopy_get_nouarg(struct sk_buff *skb) 1341 + { 1342 + return (void *)((uintptr_t) skb_shinfo(skb)->destructor_arg & ~0x1UL); 1343 + } 1344 + 1329 1345 /* Release a reference on a zerocopy structure */ 1330 1346 static inline void skb_zcopy_clear(struct sk_buff *skb, bool zerocopy) 1331 1347 { ··· 1351 1335 if (uarg->callback == sock_zerocopy_callback) { 1352 1336 uarg->zerocopy = uarg->zerocopy && zerocopy; 1353 1337 sock_zerocopy_put(uarg); 1354 - } else { 1338 + } else if (!skb_zcopy_is_nouarg(skb)) { 1355 1339 uarg->callback(uarg, zerocopy); 1356 1340 } 1357 1341
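The new `skb_zcopy_*_nouarg()` helpers above rely on low-bit pointer tagging: because allocated objects are aligned, bit 0 of a valid pointer is always clear and can carry a flag. A self-contained sketch of the technique (helper names are illustrative):

```c
#include <stdint.h>

/* Stash a one-bit flag in the low bit of an aligned pointer. */
static inline void *tag_set(void *p)
{
    return (void *)((uintptr_t)p | 0x1UL);
}

static inline int tag_is_set(void *p)
{
    return (uintptr_t)p & 0x1UL;
}

/* Recover the original pointer by masking the tag bit back off. */
static inline void *tag_clear(void *p)
{
    return (void *)((uintptr_t)p & ~0x1UL);
}
```

This is why `skb_zcopy_clear()` must check `skb_zcopy_is_nouarg()` before dereferencing: a tagged value is not a real `uarg` pointer.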
+1
include/linux/tcp.h
··· 196 196 u32 rcv_tstamp; /* timestamp of last received ACK (for keepalives) */ 197 197 u32 lsndtime; /* timestamp of last sent data packet (for restart window) */ 198 198 u32 last_oow_ack_time; /* timestamp of last out-of-window ACK */ 199 + u32 compressed_ack_rcv_nxt; 199 200 200 201 u32 tsoffset; /* timestamp offset */ 201 202
+3
include/linux/usb/quirks.h
··· 66 66 /* Device needs a pause after every control message. */ 67 67 #define USB_QUIRK_DELAY_CTRL_MSG BIT(13) 68 68 69 + /* Hub needs extra delay after resetting its port. */ 70 + #define USB_QUIRK_HUB_SLOW_RESET BIT(14) 71 + 69 72 #endif /* __LINUX_USB_QUIRKS_H */
+203 -64
include/linux/xarray.h
··· 289 289 void xa_init_flags(struct xarray *, gfp_t flags); 290 290 void *xa_load(struct xarray *, unsigned long index); 291 291 void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t); 292 - void *xa_cmpxchg(struct xarray *, unsigned long index, 293 - void *old, void *entry, gfp_t); 294 - int xa_reserve(struct xarray *, unsigned long index, gfp_t); 292 + void *xa_erase(struct xarray *, unsigned long index); 295 293 void *xa_store_range(struct xarray *, unsigned long first, unsigned long last, 296 294 void *entry, gfp_t); 297 295 bool xa_get_mark(struct xarray *, unsigned long index, xa_mark_t); ··· 339 341 static inline bool xa_marked(const struct xarray *xa, xa_mark_t mark) 340 342 { 341 343 return xa->xa_flags & XA_FLAGS_MARK(mark); 342 - } 343 - 344 - /** 345 - * xa_erase() - Erase this entry from the XArray. 346 - * @xa: XArray. 347 - * @index: Index of entry. 348 - * 349 - * This function is the equivalent of calling xa_store() with %NULL as 350 - * the third argument. The XArray does not need to allocate memory, so 351 - * the user does not need to provide GFP flags. 352 - * 353 - * Context: Process context. Takes and releases the xa_lock. 354 - * Return: The entry which used to be at this index. 355 - */ 356 - static inline void *xa_erase(struct xarray *xa, unsigned long index) 357 - { 358 - return xa_store(xa, index, NULL, 0); 359 - } 360 - 361 - /** 362 - * xa_insert() - Store this entry in the XArray unless another entry is 363 - * already present. 364 - * @xa: XArray. 365 - * @index: Index into array. 366 - * @entry: New entry. 367 - * @gfp: Memory allocation flags. 368 - * 369 - * If you would rather see the existing entry in the array, use xa_cmpxchg(). 370 - * This function is for users who don't care what the entry is, only that 371 - * one is present. 372 - * 373 - * Context: Process context. Takes and releases the xa_lock. 374 - * May sleep if the @gfp flags permit. 375 - * Return: 0 if the store succeeded. 
-EEXIST if another entry was present. 376 - * -ENOMEM if memory could not be allocated. 377 - */ 378 - static inline int xa_insert(struct xarray *xa, unsigned long index, 379 - void *entry, gfp_t gfp) 380 - { 381 - void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp); 382 - if (!curr) 383 - return 0; 384 - if (xa_is_err(curr)) 385 - return xa_err(curr); 386 - return -EEXIST; 387 - } 388 - 389 - /** 390 - * xa_release() - Release a reserved entry. 391 - * @xa: XArray. 392 - * @index: Index of entry. 393 - * 394 - * After calling xa_reserve(), you can call this function to release the 395 - * reservation. If the entry at @index has been stored to, this function 396 - * will do nothing. 397 - */ 398 - static inline void xa_release(struct xarray *xa, unsigned long index) 399 - { 400 - xa_cmpxchg(xa, index, NULL, NULL, 0); 401 344 } 402 345 403 346 /** ··· 394 455 void *__xa_cmpxchg(struct xarray *, unsigned long index, void *old, 395 456 void *entry, gfp_t); 396 457 int __xa_alloc(struct xarray *, u32 *id, u32 max, void *entry, gfp_t); 458 + int __xa_reserve(struct xarray *, unsigned long index, gfp_t); 397 459 void __xa_set_mark(struct xarray *, unsigned long index, xa_mark_t); 398 460 void __xa_clear_mark(struct xarray *, unsigned long index, xa_mark_t); 399 461 ··· 427 487 } 428 488 429 489 /** 490 + * xa_store_bh() - Store this entry in the XArray. 491 + * @xa: XArray. 492 + * @index: Index into array. 493 + * @entry: New entry. 494 + * @gfp: Memory allocation flags. 495 + * 496 + * This function is like calling xa_store() except it disables softirqs 497 + * while holding the array lock. 498 + * 499 + * Context: Any context. Takes and releases the xa_lock while 500 + * disabling softirqs. 501 + * Return: The entry which used to be at this index. 
502 + */ 503 + static inline void *xa_store_bh(struct xarray *xa, unsigned long index, 504 + void *entry, gfp_t gfp) 505 + { 506 + void *curr; 507 + 508 + xa_lock_bh(xa); 509 + curr = __xa_store(xa, index, entry, gfp); 510 + xa_unlock_bh(xa); 511 + 512 + return curr; 513 + } 514 + 515 + /** 516 + * xa_store_irq() - Erase this entry from the XArray. 517 + * @xa: XArray. 518 + * @index: Index into array. 519 + * @entry: New entry. 520 + * @gfp: Memory allocation flags. 521 + * 522 + * This function is like calling xa_store() except it disables interrupts 523 + * while holding the array lock. 524 + * 525 + * Context: Process context. Takes and releases the xa_lock while 526 + * disabling interrupts. 527 + * Return: The entry which used to be at this index. 528 + */ 529 + static inline void *xa_store_irq(struct xarray *xa, unsigned long index, 530 + void *entry, gfp_t gfp) 531 + { 532 + void *curr; 533 + 534 + xa_lock_irq(xa); 535 + curr = __xa_store(xa, index, entry, gfp); 536 + xa_unlock_irq(xa); 537 + 538 + return curr; 539 + } 540 + 541 + /** 430 542 * xa_erase_bh() - Erase this entry from the XArray. 431 543 * @xa: XArray. 432 544 * @index: Index of entry. ··· 487 495 * the third argument. The XArray does not need to allocate memory, so 488 496 * the user does not need to provide GFP flags. 489 497 * 490 - * Context: Process context. Takes and releases the xa_lock while 498 + * Context: Any context. Takes and releases the xa_lock while 491 499 * disabling softirqs. 492 500 * Return: The entry which used to be at this index. 493 501 */ ··· 524 532 xa_unlock_irq(xa); 525 533 526 534 return entry; 535 + } 536 + 537 + /** 538 + * xa_cmpxchg() - Conditionally replace an entry in the XArray. 539 + * @xa: XArray. 540 + * @index: Index into array. 541 + * @old: Old value to test against. 542 + * @entry: New value to place in array. 543 + * @gfp: Memory allocation flags. 544 + * 545 + * If the entry at @index is the same as @old, replace it with @entry. 
546 + * If the return value is equal to @old, then the exchange was successful. 547 + * 548 + * Context: Any context. Takes and releases the xa_lock. May sleep 549 + * if the @gfp flags permit. 550 + * Return: The old value at this index or xa_err() if an error happened. 551 + */ 552 + static inline void *xa_cmpxchg(struct xarray *xa, unsigned long index, 553 + void *old, void *entry, gfp_t gfp) 554 + { 555 + void *curr; 556 + 557 + xa_lock(xa); 558 + curr = __xa_cmpxchg(xa, index, old, entry, gfp); 559 + xa_unlock(xa); 560 + 561 + return curr; 562 + } 563 + 564 + /** 565 + * xa_insert() - Store this entry in the XArray unless another entry is 566 + * already present. 567 + * @xa: XArray. 568 + * @index: Index into array. 569 + * @entry: New entry. 570 + * @gfp: Memory allocation flags. 571 + * 572 + * If you would rather see the existing entry in the array, use xa_cmpxchg(). 573 + * This function is for users who don't care what the entry is, only that 574 + * one is present. 575 + * 576 + * Context: Process context. Takes and releases the xa_lock. 577 + * May sleep if the @gfp flags permit. 578 + * Return: 0 if the store succeeded. -EEXIST if another entry was present. 579 + * -ENOMEM if memory could not be allocated. 580 + */ 581 + static inline int xa_insert(struct xarray *xa, unsigned long index, 582 + void *entry, gfp_t gfp) 583 + { 584 + void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp); 585 + if (!curr) 586 + return 0; 587 + if (xa_is_err(curr)) 588 + return xa_err(curr); 589 + return -EEXIST; 527 590 } 528 591 529 592 /** ··· 622 575 * Updates the @id pointer with the index, then stores the entry at that 623 576 * index. A concurrent lookup will not see an uninitialised @id. 624 577 * 625 - * Context: Process context. Takes and releases the xa_lock while 578 + * Context: Any context. Takes and releases the xa_lock while 626 579 * disabling softirqs. May sleep if the @gfp flags permit. 
627 580 * Return: 0 on success, -ENOMEM if memory allocation fails or -ENOSPC if 628 581 * there is no more space in the XArray. ··· 666 619 xa_unlock_irq(xa); 667 620 668 621 return err; 622 + } 623 + 624 + /** 625 + * xa_reserve() - Reserve this index in the XArray. 626 + * @xa: XArray. 627 + * @index: Index into array. 628 + * @gfp: Memory allocation flags. 629 + * 630 + * Ensures there is somewhere to store an entry at @index in the array. 631 + * If there is already something stored at @index, this function does 632 + * nothing. If there was nothing there, the entry is marked as reserved. 633 + * Loading from a reserved entry returns a %NULL pointer. 634 + * 635 + * If you do not use the entry that you have reserved, call xa_release() 636 + * or xa_erase() to free any unnecessary memory. 637 + * 638 + * Context: Any context. Takes and releases the xa_lock. 639 + * May sleep if the @gfp flags permit. 640 + * Return: 0 if the reservation succeeded or -ENOMEM if it failed. 641 + */ 642 + static inline 643 + int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp) 644 + { 645 + int ret; 646 + 647 + xa_lock(xa); 648 + ret = __xa_reserve(xa, index, gfp); 649 + xa_unlock(xa); 650 + 651 + return ret; 652 + } 653 + 654 + /** 655 + * xa_reserve_bh() - Reserve this index in the XArray. 656 + * @xa: XArray. 657 + * @index: Index into array. 658 + * @gfp: Memory allocation flags. 659 + * 660 + * A softirq-disabling version of xa_reserve(). 661 + * 662 + * Context: Any context. Takes and releases the xa_lock while 663 + * disabling softirqs. 664 + * Return: 0 if the reservation succeeded or -ENOMEM if it failed. 665 + */ 666 + static inline 667 + int xa_reserve_bh(struct xarray *xa, unsigned long index, gfp_t gfp) 668 + { 669 + int ret; 670 + 671 + xa_lock_bh(xa); 672 + ret = __xa_reserve(xa, index, gfp); 673 + xa_unlock_bh(xa); 674 + 675 + return ret; 676 + } 677 + 678 + /** 679 + * xa_reserve_irq() - Reserve this index in the XArray. 680 + * @xa: XArray. 
681 + * @index: Index into array. 682 + * @gfp: Memory allocation flags. 683 + * 684 + * An interrupt-disabling version of xa_reserve(). 685 + * 686 + * Context: Process context. Takes and releases the xa_lock while 687 + * disabling interrupts. 688 + * Return: 0 if the reservation succeeded or -ENOMEM if it failed. 689 + */ 690 + static inline 691 + int xa_reserve_irq(struct xarray *xa, unsigned long index, gfp_t gfp) 692 + { 693 + int ret; 694 + 695 + xa_lock_irq(xa); 696 + ret = __xa_reserve(xa, index, gfp); 697 + xa_unlock_irq(xa); 698 + 699 + return ret; 700 + } 701 + 702 + /** 703 + * xa_release() - Release a reserved entry. 704 + * @xa: XArray. 705 + * @index: Index of entry. 706 + * 707 + * After calling xa_reserve(), you can call this function to release the 708 + * reservation. If the entry at @index has been stored to, this function 709 + * will do nothing. 710 + */ 711 + static inline void xa_release(struct xarray *xa, unsigned long index) 712 + { 713 + xa_cmpxchg(xa, index, NULL, NULL, 0); 669 714 } 670 715 671 716 /* Everything below here is the Advanced API. Proceed with caution. */
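The XArray additions above follow a standard kernel layering: one `__xa_*()` core that assumes the lock is held, plus thin `xa_*()`, `xa_*_bh()` and `xa_*_irq()` wrappers that differ only in how they take `xa_lock`. A pthread-based sketch of that layering with illustrative names (a single `int` slot stands in for the array):

```c
#include <pthread.h>

static pthread_mutex_t xa_lock = PTHREAD_MUTEX_INITIALIZER;
static int slot;    /* stand-in for the array storage */

/* Core operation: caller must already hold xa_lock. */
static int __store(int entry)
{
    int old = slot;

    slot = entry;
    return old;
}

/* Public wrapper: identical logic, handles the locking itself.
 * The kernel variants differ only in lock flavor (_bh, _irq). */
int store(int entry)
{
    int old;

    pthread_mutex_lock(&xa_lock);
    old = __store(entry);
    pthread_mutex_unlock(&xa_lock);
    return old;
}
```

Keeping the logic in one `__`-prefixed core means the three lock-flavor wrappers cannot drift apart.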
+1 -1
include/media/v4l2-mem2mem.h
··· 624 624 625 625 /* v4l2 request helper */ 626 626 627 - void vb2_m2m_request_queue(struct media_request *req); 627 + void v4l2_m2m_request_queue(struct media_request *req); 628 628 629 629 /* v4l2 ioctl helpers */ 630 630
+2 -1
include/net/af_rxrpc.h
··· 77 77 struct sockaddr_rxrpc *, struct key *); 78 78 int rxrpc_kernel_check_call(struct socket *, struct rxrpc_call *, 79 79 enum rxrpc_call_completion *, u32 *); 80 - u32 rxrpc_kernel_check_life(struct socket *, struct rxrpc_call *); 80 + u32 rxrpc_kernel_check_life(const struct socket *, const struct rxrpc_call *); 81 + void rxrpc_kernel_probe_life(struct socket *, struct rxrpc_call *); 81 82 u32 rxrpc_kernel_get_epoch(struct socket *, struct rxrpc_call *); 82 83 bool rxrpc_kernel_get_reply_time(struct socket *, struct rxrpc_call *, 83 84 ktime_t *);
+12
include/net/sctp/sctp.h
··· 608 608 SCTP_DEFAULT_MINSEGMENT)); 609 609 } 610 610 611 + static inline bool sctp_transport_pmtu_check(struct sctp_transport *t) 612 + { 613 + __u32 pmtu = sctp_dst_mtu(t->dst); 614 + 615 + if (t->pathmtu == pmtu) 616 + return true; 617 + 618 + t->pathmtu = pmtu; 619 + 620 + return false; 621 + } 622 + 611 623 #endif /* __net_sctp_h__ */
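`sctp_transport_pmtu_check()` above is a check-and-update-cache helper: it reports whether the cached path MTU was already current, refreshing the cache as a side effect when it was stale. A minimal userspace sketch of the same shape (illustrative struct and names):

```c
#include <stdbool.h>

struct transport {
    unsigned int pathmtu;   /* cached path MTU */
};

/* Return true if the cache was current; update it and return false
 * otherwise, so the caller knows it must react to the change. */
bool pmtu_check(struct transport *t, unsigned int current_mtu)
{
    if (t->pathmtu == current_mtu)
        return true;

    t->pathmtu = current_mtu;
    return false;
}
```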
+4 -4
include/trace/events/kyber.h
··· 31 31 32 32 TP_fast_assign( 33 33 __entry->dev = disk_devt(dev_to_disk(kobj_to_dev(q->kobj.parent))); 34 - strlcpy(__entry->domain, domain, DOMAIN_LEN); 35 - strlcpy(__entry->type, type, DOMAIN_LEN); 34 + strlcpy(__entry->domain, domain, sizeof(__entry->domain)); 35 + strlcpy(__entry->type, type, sizeof(__entry->type)); 36 36 __entry->percentile = percentile; 37 37 __entry->numerator = numerator; 38 38 __entry->denominator = denominator; ··· 60 60 61 61 TP_fast_assign( 62 62 __entry->dev = disk_devt(dev_to_disk(kobj_to_dev(q->kobj.parent))); 63 - strlcpy(__entry->domain, domain, DOMAIN_LEN); 63 + strlcpy(__entry->domain, domain, sizeof(__entry->domain)); 64 64 __entry->depth = depth; 65 65 ), 66 66 ··· 82 82 83 83 TP_fast_assign( 84 84 __entry->dev = disk_devt(dev_to_disk(kobj_to_dev(q->kobj.parent))); 85 - strlcpy(__entry->domain, domain, DOMAIN_LEN); 85 + strlcpy(__entry->domain, domain, sizeof(__entry->domain)); 86 86 ), 87 87 88 88 TP_printk("%d,%d %s", MAJOR(__entry->dev), MINOR(__entry->dev),
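The kyber trace fix replaces the `DOMAIN_LEN` constant with `sizeof(__entry->domain)` so the copy bound always tracks the actual destination field (the old code even used `DOMAIN_LEN` for the differently-sized `type` field). A sketch of the `sizeof(dest)` idiom; since `strlcpy` is not in ISO C, a bounded-copy helper with the same semantics is reimplemented here:

```c
#include <stddef.h>
#include <string.h>

/* strlcpy-style bounded copy: always NUL-terminates (if size > 0)
 * and returns the length it tried to create. */
static size_t bounded_copy(char *dst, const char *src, size_t size)
{
    size_t len = strlen(src);

    if (size) {
        size_t n = len < size - 1 ? len : size - 1;

        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return len;
}

struct rec {
    char domain[16];
};
```

Calling `bounded_copy(r.domain, s, sizeof(r.domain))` rather than naming a separate length constant means the bound cannot silently drift if the struct field is resized — the mistake the trace patch corrects.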
+2
include/trace/events/rxrpc.h
··· 181 181 enum rxrpc_propose_ack_trace { 182 182 rxrpc_propose_ack_client_tx_end, 183 183 rxrpc_propose_ack_input_data, 184 + rxrpc_propose_ack_ping_for_check_life, 184 185 rxrpc_propose_ack_ping_for_keepalive, 185 186 rxrpc_propose_ack_ping_for_lost_ack, 186 187 rxrpc_propose_ack_ping_for_lost_reply, ··· 381 380 #define rxrpc_propose_ack_traces \ 382 381 EM(rxrpc_propose_ack_client_tx_end, "ClTxEnd") \ 383 382 EM(rxrpc_propose_ack_input_data, "DataIn ") \ 383 + EM(rxrpc_propose_ack_ping_for_check_life, "ChkLife") \ 384 384 EM(rxrpc_propose_ack_ping_for_keepalive, "KeepAlv") \ 385 385 EM(rxrpc_propose_ack_ping_for_lost_ack, "LostAck") \ 386 386 EM(rxrpc_propose_ack_ping_for_lost_reply, "LostRpl") \
-10
include/uapi/linux/input-event-codes.h
··· 716 716 * the situation described above. 717 717 */ 718 718 #define REL_RESERVED 0x0a 719 - #define REL_WHEEL_HI_RES 0x0b 720 719 #define REL_MAX 0x0f 721 720 #define REL_CNT (REL_MAX+1) 722 721 ··· 751 752 #define ABS_VOLUME 0x20 752 753 753 754 #define ABS_MISC 0x28 754 - 755 - /* 756 - * 0x2e is reserved and should not be used in input drivers. 757 - * It was used by HID as ABS_MISC+6 and userspace needs to detect if 758 - * the next ABS_* event is correct or is just ABS_MISC + n. 759 - * We define here ABS_RESERVED so userspace can rely on it and detect 760 - * the situation described above. 761 - */ 762 - #define ABS_RESERVED 0x2e 763 755 764 756 #define ABS_MT_SLOT 0x2f /* MT slot being modified */ 765 757 #define ABS_MT_TOUCH_MAJOR 0x30 /* Major axis of touching ellipse */
+5
include/uapi/linux/v4l2-controls.h
··· 50 50 #ifndef __LINUX_V4L2_CONTROLS_H 51 51 #define __LINUX_V4L2_CONTROLS_H 52 52 53 + #include <linux/types.h> 54 + 53 55 /* Control classes */ 54 56 #define V4L2_CTRL_CLASS_USER 0x00980000 /* Old-style 'user' controls */ 55 57 #define V4L2_CTRL_CLASS_MPEG 0x00990000 /* MPEG-compression controls */ ··· 1112 1110 __u8 profile_and_level_indication; 1113 1111 __u8 progressive_sequence; 1114 1112 __u8 chroma_format; 1113 + __u8 pad; 1115 1114 }; 1116 1115 1117 1116 struct v4l2_mpeg2_picture { ··· 1131 1128 __u8 alternate_scan; 1132 1129 __u8 repeat_first_field; 1133 1130 __u8 progressive_frame; 1131 + __u8 pad; 1134 1132 }; 1135 1133 1136 1134 struct v4l2_ctrl_mpeg2_slice_params { ··· 1146 1142 1147 1143 __u8 backward_ref_index; 1148 1144 __u8 forward_ref_index; 1145 + __u8 pad; 1149 1146 }; 1150 1147 1151 1148 struct v4l2_ctrl_mpeg2_quantization {
+2 -2
kernel/debug/kdb/kdb_bt.c
··· 179 179 kdb_printf("no process for cpu %ld\n", cpu); 180 180 return 0; 181 181 } 182 - sprintf(buf, "btt 0x%p\n", KDB_TSK(cpu)); 182 + sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu)); 183 183 kdb_parse(buf); 184 184 return 0; 185 185 } 186 186 kdb_printf("btc: cpu status: "); 187 187 kdb_parse("cpu\n"); 188 188 for_each_online_cpu(cpu) { 189 - sprintf(buf, "btt 0x%p\n", KDB_TSK(cpu)); 189 + sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu)); 190 190 kdb_parse(buf); 191 191 touch_nmi_watchdog(); 192 192 }
+9 -6
kernel/debug/kdb/kdb_io.c
··· 216 216 int count; 217 217 int i; 218 218 int diag, dtab_count; 219 - int key; 219 + int key, buf_size, ret; 220 220 221 221 222 222 diag = kdbgetintenv("DTABCOUNT", &dtab_count); ··· 336 336 else 337 337 p_tmp = tmpbuffer; 338 338 len = strlen(p_tmp); 339 - count = kallsyms_symbol_complete(p_tmp, 340 - sizeof(tmpbuffer) - 341 - (p_tmp - tmpbuffer)); 339 + buf_size = sizeof(tmpbuffer) - (p_tmp - tmpbuffer); 340 + count = kallsyms_symbol_complete(p_tmp, buf_size); 342 341 if (tab == 2 && count > 0) { 343 342 kdb_printf("\n%d symbols are found.", count); 344 343 if (count > dtab_count) { ··· 349 350 } 350 351 kdb_printf("\n"); 351 352 for (i = 0; i < count; i++) { 352 - if (WARN_ON(!kallsyms_symbol_next(p_tmp, i))) 353 + ret = kallsyms_symbol_next(p_tmp, i, buf_size); 354 + if (WARN_ON(!ret)) 353 355 break; 354 - kdb_printf("%s ", p_tmp); 356 + if (ret != -E2BIG) 357 + kdb_printf("%s ", p_tmp); 358 + else 359 + kdb_printf("%s... ", p_tmp); 355 360 *(p_tmp + len) = '\0'; 356 361 } 357 362 if (i >= dtab_count)
+2 -2
kernel/debug/kdb/kdb_keyboard.c
··· 173 173 case KT_LATIN: 174 174 if (isprint(keychar)) 175 175 break; /* printable characters */ 176 - /* drop through */ 176 + /* fall through */ 177 177 case KT_SPEC: 178 178 if (keychar == K_ENTER) 179 179 break; 180 - /* drop through */ 180 + /* fall through */ 181 181 default: 182 182 return -1; /* ignore unprintables */ 183 183 }
+10 -25
kernel/debug/kdb/kdb_main.c
··· 1192 1192 if (reason == KDB_REASON_DEBUG) { 1193 1193 /* special case below */ 1194 1194 } else { 1195 - kdb_printf("\nEntering kdb (current=0x%p, pid %d) ", 1195 + kdb_printf("\nEntering kdb (current=0x%px, pid %d) ", 1196 1196 kdb_current, kdb_current ? kdb_current->pid : 0); 1197 1197 #if defined(CONFIG_SMP) 1198 1198 kdb_printf("on processor %d ", raw_smp_processor_id()); ··· 1208 1208 */ 1209 1209 switch (db_result) { 1210 1210 case KDB_DB_BPT: 1211 - kdb_printf("\nEntering kdb (0x%p, pid %d) ", 1211 + kdb_printf("\nEntering kdb (0x%px, pid %d) ", 1212 1212 kdb_current, kdb_current->pid); 1213 1213 #if defined(CONFIG_SMP) 1214 1214 kdb_printf("on processor %d ", raw_smp_processor_id()); ··· 1493 1493 char cbuf[32]; 1494 1494 char *c = cbuf; 1495 1495 int i; 1496 + int j; 1496 1497 unsigned long word; 1497 1498 1498 1499 memset(cbuf, '\0', sizeof(cbuf)); ··· 1539 1538 wc.word = word; 1540 1539 #define printable_char(c) \ 1541 1540 ({unsigned char __c = c; isascii(__c) && isprint(__c) ? 
__c : '.'; }) 1542 - switch (bytesperword) { 1543 - case 8: 1541 + for (j = 0; j < bytesperword; j++) 1544 1542 *c++ = printable_char(*cp++); 1545 - *c++ = printable_char(*cp++); 1546 - *c++ = printable_char(*cp++); 1547 - *c++ = printable_char(*cp++); 1548 - addr += 4; 1549 - case 4: 1550 - *c++ = printable_char(*cp++); 1551 - *c++ = printable_char(*cp++); 1552 - addr += 2; 1553 - case 2: 1554 - *c++ = printable_char(*cp++); 1555 - addr++; 1556 - case 1: 1557 - *c++ = printable_char(*cp++); 1558 - addr++; 1559 - break; 1560 - } 1543 + addr += bytesperword; 1561 1544 #undef printable_char 1562 1545 } 1563 1546 } ··· 2033 2048 if (mod->state == MODULE_STATE_UNFORMED) 2034 2049 continue; 2035 2050 2036 - kdb_printf("%-20s%8u 0x%p ", mod->name, 2051 + kdb_printf("%-20s%8u 0x%px ", mod->name, 2037 2052 mod->core_layout.size, (void *)mod); 2038 2053 #ifdef CONFIG_MODULE_UNLOAD 2039 2054 kdb_printf("%4d ", module_refcount(mod)); ··· 2044 2059 kdb_printf(" (Loading)"); 2045 2060 else 2046 2061 kdb_printf(" (Live)"); 2047 - kdb_printf(" 0x%p", mod->core_layout.base); 2062 + kdb_printf(" 0x%px", mod->core_layout.base); 2048 2063 2049 2064 #ifdef CONFIG_MODULE_UNLOAD 2050 2065 { ··· 2326 2341 return; 2327 2342 2328 2343 cpu = kdb_process_cpu(p); 2329 - kdb_printf("0x%p %8d %8d %d %4d %c 0x%p %c%s\n", 2344 + kdb_printf("0x%px %8d %8d %d %4d %c 0x%px %c%s\n", 2330 2345 (void *)p, p->pid, p->parent->pid, 2331 2346 kdb_task_has_cpu(p), kdb_process_cpu(p), 2332 2347 kdb_task_state_char(p), ··· 2339 2354 } else { 2340 2355 if (KDB_TSK(cpu) != p) 2341 2356 kdb_printf(" Error: does not match running " 2342 - "process table (0x%p)\n", KDB_TSK(cpu)); 2357 + "process table (0x%px)\n", KDB_TSK(cpu)); 2343 2358 } 2344 2359 } 2345 2360 } ··· 2672 2687 for_each_kdbcmd(kp, i) { 2673 2688 if (kp->cmd_name && (strcmp(kp->cmd_name, cmd) == 0)) { 2674 2689 kdb_printf("Duplicate kdb command registered: " 2675 - "%s, func %p help %s\n", cmd, func, help); 2690 + "%s, func %px help %s\n", cmd, 
func, help); 2676 2691 return 1; 2677 2692 } 2678 2693 }
+1 -1
kernel/debug/kdb/kdb_private.h
··· 83 83 unsigned long sym_start; 84 84 unsigned long sym_end; 85 85 } kdb_symtab_t; 86 - extern int kallsyms_symbol_next(char *prefix_name, int flag); 86 + extern int kallsyms_symbol_next(char *prefix_name, int flag, int buf_size); 87 87 extern int kallsyms_symbol_complete(char *prefix_name, int max_len); 88 88 89 89 /* Exported Symbols for kernel loadable modules to use. */
+14 -14
kernel/debug/kdb/kdb_support.c
··· 40 40 int kdbgetsymval(const char *symname, kdb_symtab_t *symtab) 41 41 { 42 42 if (KDB_DEBUG(AR)) 43 - kdb_printf("kdbgetsymval: symname=%s, symtab=%p\n", symname, 43 + kdb_printf("kdbgetsymval: symname=%s, symtab=%px\n", symname, 44 44 symtab); 45 45 memset(symtab, 0, sizeof(*symtab)); 46 46 symtab->sym_start = kallsyms_lookup_name(symname); ··· 88 88 char *knt1 = NULL; 89 89 90 90 if (KDB_DEBUG(AR)) 91 - kdb_printf("kdbnearsym: addr=0x%lx, symtab=%p\n", addr, symtab); 91 + kdb_printf("kdbnearsym: addr=0x%lx, symtab=%px\n", addr, symtab); 92 92 memset(symtab, 0, sizeof(*symtab)); 93 93 94 94 if (addr < 4096) ··· 149 149 symtab->mod_name = "kernel"; 150 150 if (KDB_DEBUG(AR)) 151 151 kdb_printf("kdbnearsym: returns %d symtab->sym_start=0x%lx, " 152 - "symtab->mod_name=%p, symtab->sym_name=%p (%s)\n", ret, 152 + "symtab->mod_name=%px, symtab->sym_name=%px (%s)\n", ret, 153 153 symtab->sym_start, symtab->mod_name, symtab->sym_name, 154 154 symtab->sym_name); 155 155 ··· 221 221 * Parameters: 222 222 * prefix_name prefix of a symbol name to lookup 223 223 * flag 0 means search from the head, 1 means continue search. 224 + * buf_size maximum length that can be written to prefix_name 225 + * buffer 224 226 * Returns: 225 227 * 1 if a symbol matches the given prefix. 
226 228 * 0 if no string found 227 229 */ 228 - int kallsyms_symbol_next(char *prefix_name, int flag) 230 + int kallsyms_symbol_next(char *prefix_name, int flag, int buf_size) 229 231 { 230 232 int prefix_len = strlen(prefix_name); 231 233 static loff_t pos; ··· 237 235 pos = 0; 238 236 239 237 while ((name = kdb_walk_kallsyms(&pos))) { 240 - if (strncmp(name, prefix_name, prefix_len) == 0) { 241 - strncpy(prefix_name, name, strlen(name)+1); 242 - return 1; 243 - } 238 + if (!strncmp(name, prefix_name, prefix_len)) 239 + return strscpy(prefix_name, name, buf_size); 244 240 } 245 241 return 0; 246 242 } ··· 432 432 *word = w8; 433 433 break; 434 434 } 435 - /* drop through */ 435 + /* fall through */ 436 436 default: 437 437 diag = KDB_BADWIDTH; 438 438 kdb_printf("kdb_getphysword: bad width %ld\n", (long) size); ··· 481 481 *word = w8; 482 482 break; 483 483 } 484 - /* drop through */ 484 + /* fall through */ 485 485 default: 486 486 diag = KDB_BADWIDTH; 487 487 kdb_printf("kdb_getword: bad width %ld\n", (long) size); ··· 525 525 diag = kdb_putarea(addr, w8); 526 526 break; 527 527 } 528 - /* drop through */ 528 + /* fall through */ 529 529 default: 530 530 diag = KDB_BADWIDTH; 531 531 kdb_printf("kdb_putword: bad width %ld\n", (long) size); ··· 887 887 __func__, dah_first); 888 888 if (dah_first) { 889 889 h_used = (struct debug_alloc_header *)debug_alloc_pool; 890 - kdb_printf("%s: h_used %p size %d\n", __func__, h_used, 890 + kdb_printf("%s: h_used %px size %d\n", __func__, h_used, 891 891 h_used->size); 892 892 } 893 893 do { 894 894 h_used = (struct debug_alloc_header *) 895 895 ((char *)h_free + dah_overhead + h_free->size); 896 - kdb_printf("%s: h_used %p size %d caller %p\n", 896 + kdb_printf("%s: h_used %px size %d caller %px\n", 897 897 __func__, h_used, h_used->size, h_used->caller); 898 898 h_free = (struct debug_alloc_header *) 899 899 (debug_alloc_pool + h_free->next); ··· 902 902 ((char *)h_free + dah_overhead + h_free->size); 903 903 if ((char 
*)h_used - debug_alloc_pool != 904 904 sizeof(debug_alloc_pool_aligned)) 905 - kdb_printf("%s: h_used %p size %d caller %p\n", 905 + kdb_printf("%s: h_used %px size %d caller %px\n", 906 906 __func__, h_used, h_used->size, h_used->caller); 907 907 out: 908 908 spin_unlock(&dap_lock);
+2 -1
kernel/dma/swiotlb.c
··· 679 679 } 680 680 681 681 if (!dev_is_dma_coherent(dev) && 682 - (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0) 682 + (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0 && 683 + dev_addr != DIRECT_MAPPING_ERROR) 683 684 arch_sync_dma_for_device(dev, phys, size, dir); 684 685 685 686 return dev_addr;
+48 -14
kernel/sched/fair.c
··· 5674 5674 return target; 5675 5675 } 5676 5676 5677 - static unsigned long cpu_util_wake(int cpu, struct task_struct *p); 5677 + static unsigned long cpu_util_without(int cpu, struct task_struct *p); 5678 5678 5679 - static unsigned long capacity_spare_wake(int cpu, struct task_struct *p) 5679 + static unsigned long capacity_spare_without(int cpu, struct task_struct *p) 5680 5680 { 5681 - return max_t(long, capacity_of(cpu) - cpu_util_wake(cpu, p), 0); 5681 + return max_t(long, capacity_of(cpu) - cpu_util_without(cpu, p), 0); 5682 5682 } 5683 5683 5684 5684 /* ··· 5738 5738 5739 5739 avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs); 5740 5740 5741 - spare_cap = capacity_spare_wake(i, p); 5741 + spare_cap = capacity_spare_without(i, p); 5742 5742 5743 5743 if (spare_cap > max_spare_cap) 5744 5744 max_spare_cap = spare_cap; ··· 5889 5889 return prev_cpu; 5890 5890 5891 5891 /* 5892 - * We need task's util for capacity_spare_wake, sync it up to prev_cpu's 5893 - * last_update_time. 5892 + * We need task's util for capacity_spare_without, sync it up to 5893 + * prev_cpu's last_update_time. 5894 5894 */ 5895 5895 if (!(sd_flag & SD_BALANCE_FORK)) 5896 5896 sync_entity_load_avg(&p->se); ··· 6216 6216 } 6217 6217 6218 6218 /* 6219 - * cpu_util_wake: Compute CPU utilization with any contributions from 6220 - * the waking task p removed. 6219 + * cpu_util_without: compute cpu utilization without any contributions from *p 6220 + * @cpu: the CPU which utilization is requested 6221 + * @p: the task which utilization should be discounted 6222 + * 6223 + * The utilization of a CPU is defined by the utilization of tasks currently 6224 + * enqueued on that CPU as well as tasks which are currently sleeping after an 6225 + * execution on that CPU. 6226 + * 6227 + * This method returns the utilization of the specified CPU by discounting the 6228 + * utilization of the specified task, whenever the task is currently 6229 + * contributing to the CPU utilization. 
6221 6230 */ 6222 - static unsigned long cpu_util_wake(int cpu, struct task_struct *p) 6231 + static unsigned long cpu_util_without(int cpu, struct task_struct *p) 6223 6232 { 6224 6233 struct cfs_rq *cfs_rq; 6225 6234 unsigned int util; ··· 6240 6231 cfs_rq = &cpu_rq(cpu)->cfs; 6241 6232 util = READ_ONCE(cfs_rq->avg.util_avg); 6242 6233 6243 - /* Discount task's blocked util from CPU's util */ 6234 + /* Discount task's util from CPU's util */ 6244 6235 util -= min_t(unsigned int, util, task_util(p)); 6245 6236 6246 6237 /* ··· 6249 6240 * a) if *p is the only task sleeping on this CPU, then: 6250 6241 * cpu_util (== task_util) > util_est (== 0) 6251 6242 * and thus we return: 6252 - * cpu_util_wake = (cpu_util - task_util) = 0 6243 + * cpu_util_without = (cpu_util - task_util) = 0 6253 6244 * 6254 6245 * b) if other tasks are SLEEPING on this CPU, which is now exiting 6255 6246 * IDLE, then: 6256 6247 * cpu_util >= task_util 6257 6248 * cpu_util > util_est (== 0) 6258 6249 * and thus we discount *p's blocked utilization to return: 6259 - * cpu_util_wake = (cpu_util - task_util) >= 0 6250 + * cpu_util_without = (cpu_util - task_util) >= 0 6260 6251 * 6261 6252 * c) if other tasks are RUNNABLE on that CPU and 6262 6253 * util_est > cpu_util ··· 6269 6260 * covered by the following code when estimated utilization is 6270 6261 * enabled. 
6271 6262 */ 6272 - if (sched_feat(UTIL_EST)) 6273 - util = max(util, READ_ONCE(cfs_rq->avg.util_est.enqueued)); 6263 + if (sched_feat(UTIL_EST)) { 6264 + unsigned int estimated = 6265 + READ_ONCE(cfs_rq->avg.util_est.enqueued); 6266 + 6267 + /* 6268 + * Despite the following checks we still have a small window 6269 + * for a possible race, when an execl's select_task_rq_fair() 6270 + * races with LB's detach_task(): 6271 + * 6272 + * detach_task() 6273 + * p->on_rq = TASK_ON_RQ_MIGRATING; 6274 + * ---------------------------------- A 6275 + * deactivate_task() \ 6276 + * dequeue_task() + RaceTime 6277 + * util_est_dequeue() / 6278 + * ---------------------------------- B 6279 + * 6280 + * The additional check on "current == p" it's required to 6281 + * properly fix the execl regression and it helps in further 6282 + * reducing the chances for the above race. 6283 + */ 6284 + if (unlikely(task_on_rq_queued(p) || current == p)) { 6285 + estimated -= min_t(unsigned int, estimated, 6286 + (_task_util_est(p) | UTIL_AVG_UNCHANGED)); 6287 + } 6288 + util = max(util, estimated); 6289 + } 6274 6290 6275 6291 /* 6276 6292 * Utilization (estimated) can exceed the CPU capacity, thus let's
+24 -23
kernel/sched/psi.c
··· 633 633 */ 634 634 void cgroup_move_task(struct task_struct *task, struct css_set *to) 635 635 { 636 - bool move_psi = !psi_disabled; 637 636 unsigned int task_flags = 0; 638 637 struct rq_flags rf; 639 638 struct rq *rq; 640 639 641 - if (move_psi) { 642 - rq = task_rq_lock(task, &rf); 643 - 644 - if (task_on_rq_queued(task)) 645 - task_flags = TSK_RUNNING; 646 - else if (task->in_iowait) 647 - task_flags = TSK_IOWAIT; 648 - 649 - if (task->flags & PF_MEMSTALL) 650 - task_flags |= TSK_MEMSTALL; 651 - 652 - if (task_flags) 653 - psi_task_change(task, task_flags, 0); 640 + if (psi_disabled) { 641 + /* 642 + * Lame to do this here, but the scheduler cannot be locked 643 + * from the outside, so we move cgroups from inside sched/. 644 + */ 645 + rcu_assign_pointer(task->cgroups, to); 646 + return; 654 647 } 655 648 656 - /* 657 - * Lame to do this here, but the scheduler cannot be locked 658 - * from the outside, so we move cgroups from inside sched/. 659 - */ 649 + rq = task_rq_lock(task, &rf); 650 + 651 + if (task_on_rq_queued(task)) 652 + task_flags = TSK_RUNNING; 653 + else if (task->in_iowait) 654 + task_flags = TSK_IOWAIT; 655 + 656 + if (task->flags & PF_MEMSTALL) 657 + task_flags |= TSK_MEMSTALL; 658 + 659 + if (task_flags) 660 + psi_task_change(task, task_flags, 0); 661 + 662 + /* See comment above */ 660 663 rcu_assign_pointer(task->cgroups, to); 661 664 662 - if (move_psi) { 663 - if (task_flags) 664 - psi_task_change(task, 0, task_flags); 665 + if (task_flags) 666 + psi_task_change(task, 0, task_flags); 665 667 666 - task_rq_unlock(rq, task, &rf); 667 - } 668 + task_rq_unlock(rq, task, &rf); 668 669 } 669 670 #endif /* CONFIG_CGROUPS */ 670 671
+1
lib/test_firmware.c
··· 837 837 if (req->fw->size > PAGE_SIZE) { 838 838 pr_err("Testing interface must use PAGE_SIZE firmware for now\n"); 839 839 rc = -EINVAL; 840 + goto out; 840 841 } 841 842 memcpy(buf, req->fw->data, req->fw->size); 842 843
+47 -3
lib/test_xarray.c
··· 208 208 XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_2)); 209 209 210 210 /* We should see two elements in the array */ 211 + rcu_read_lock(); 211 212 xas_for_each(&xas, entry, ULONG_MAX) 212 213 seen++; 214 + rcu_read_unlock(); 213 215 XA_BUG_ON(xa, seen != 2); 214 216 215 217 /* One of which is marked */ 216 218 xas_set(&xas, 0); 217 219 seen = 0; 220 + rcu_read_lock(); 218 221 xas_for_each_marked(&xas, entry, ULONG_MAX, XA_MARK_0) 219 222 seen++; 223 + rcu_read_unlock(); 220 224 XA_BUG_ON(xa, seen != 1); 221 225 } 222 226 XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_0)); ··· 377 373 xa_erase_index(xa, 12345678); 378 374 XA_BUG_ON(xa, !xa_empty(xa)); 379 375 376 + /* And so does xa_insert */ 377 + xa_reserve(xa, 12345678, GFP_KERNEL); 378 + XA_BUG_ON(xa, xa_insert(xa, 12345678, xa_mk_value(12345678), 0) != 0); 379 + xa_erase_index(xa, 12345678); 380 + XA_BUG_ON(xa, !xa_empty(xa)); 381 + 380 382 /* Can iterate through a reserved entry */ 381 383 xa_store_index(xa, 5, GFP_KERNEL); 382 384 xa_reserve(xa, 6, GFP_KERNEL); ··· 446 436 XA_BUG_ON(xa, xa_load(xa, max) != NULL); 447 437 XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL); 448 438 439 + xas_lock(&xas); 449 440 XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(min)) != xa_mk_value(index)); 441 + xas_unlock(&xas); 450 442 XA_BUG_ON(xa, xa_load(xa, min) != xa_mk_value(min)); 451 443 XA_BUG_ON(xa, xa_load(xa, max - 1) != xa_mk_value(min)); 452 444 XA_BUG_ON(xa, xa_load(xa, max) != NULL); ··· 464 452 XA_STATE(xas, xa, index); 465 453 xa_store_order(xa, index, order, xa_mk_value(0), GFP_KERNEL); 466 454 455 + xas_lock(&xas); 467 456 XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(1)) != xa_mk_value(0)); 468 457 XA_BUG_ON(xa, xas.xa_index != index); 469 458 XA_BUG_ON(xa, xas_store(&xas, NULL) != xa_mk_value(1)); 459 + xas_unlock(&xas); 470 460 XA_BUG_ON(xa, !xa_empty(xa)); 471 461 } 472 462 #endif ··· 512 498 rcu_read_unlock(); 513 499 514 500 /* We can erase multiple values with a single store */ 515 - xa_store_order(xa, 0, 63, NULL, 
GFP_KERNEL); 501 + xa_store_order(xa, 0, BITS_PER_LONG - 1, NULL, GFP_KERNEL); 516 502 XA_BUG_ON(xa, !xa_empty(xa)); 517 503 518 504 /* Even when the first slot is empty but the others aren't */ ··· 716 702 } 717 703 } 718 704 719 - static noinline void check_find(struct xarray *xa) 705 + static noinline void check_find_1(struct xarray *xa) 720 706 { 721 707 unsigned long i, j, k; 722 708 ··· 762 748 XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_0)); 763 749 } 764 750 XA_BUG_ON(xa, !xa_empty(xa)); 751 + } 752 + 753 + static noinline void check_find_2(struct xarray *xa) 754 + { 755 + void *entry; 756 + unsigned long i, j, index = 0; 757 + 758 + xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) { 759 + XA_BUG_ON(xa, true); 760 + } 761 + 762 + for (i = 0; i < 1024; i++) { 763 + xa_store_index(xa, index, GFP_KERNEL); 764 + j = 0; 765 + index = 0; 766 + xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) { 767 + XA_BUG_ON(xa, xa_mk_value(index) != entry); 768 + XA_BUG_ON(xa, index != j++); 769 + } 770 + } 771 + 772 + xa_destroy(xa); 773 + } 774 + 775 + static noinline void check_find(struct xarray *xa) 776 + { 777 + check_find_1(xa); 778 + check_find_2(xa); 765 779 check_multi_find(xa); 766 780 check_multi_find_2(xa); 767 781 } ··· 1109 1067 __check_store_range(xa, 4095 + i, 4095 + j); 1110 1068 __check_store_range(xa, 4096 + i, 4096 + j); 1111 1069 __check_store_range(xa, 123456 + i, 123456 + j); 1112 - __check_store_range(xa, UINT_MAX + i, UINT_MAX + j); 1070 + __check_store_range(xa, (1 << 24) + i, (1 << 24) + j); 1113 1071 } 1114 1072 } 1115 1073 } ··· 1188 1146 XA_STATE(xas, xa, 1 << order); 1189 1147 1190 1148 xa_store_order(xa, 0, order, xa, GFP_KERNEL); 1149 + rcu_read_lock(); 1191 1150 xas_load(&xas); 1192 1151 XA_BUG_ON(xa, xas.xa_node->count == 0); 1193 1152 XA_BUG_ON(xa, xas.xa_node->count > (1 << order)); 1194 1153 XA_BUG_ON(xa, xas.xa_node->nr_values != 0); 1154 + rcu_read_unlock(); 1195 1155 1196 1156 xa_store_order(xa, 1 << order, order, xa_mk_value(1 << 
order), 1197 1157 GFP_KERNEL);
+1 -2
lib/ubsan.c
··· 427 427 EXPORT_SYMBOL(__ubsan_handle_shift_out_of_bounds); 428 428 429 429 430 - void __noreturn 431 - __ubsan_handle_builtin_unreachable(struct unreachable_data *data) 430 + void __ubsan_handle_builtin_unreachable(struct unreachable_data *data) 432 431 { 433 432 unsigned long flags; 434 433
+60 -79
lib/xarray.c
··· 610 610 * (see the xa_cmpxchg() implementation for an example). 611 611 * 612 612 * Return: If the slot already existed, returns the contents of this slot. 613 - * If the slot was newly created, returns NULL. If it failed to create the 614 - * slot, returns NULL and indicates the error in @xas. 613 + * If the slot was newly created, returns %NULL. If it failed to create the 614 + * slot, returns %NULL and indicates the error in @xas. 615 615 */ 616 616 static void *xas_create(struct xa_state *xas) 617 617 { ··· 1334 1334 XA_STATE(xas, xa, index); 1335 1335 return xas_result(&xas, xas_store(&xas, NULL)); 1336 1336 } 1337 - EXPORT_SYMBOL_GPL(__xa_erase); 1337 + EXPORT_SYMBOL(__xa_erase); 1338 1338 1339 1339 /** 1340 - * xa_store() - Store this entry in the XArray. 1340 + * xa_erase() - Erase this entry from the XArray. 1341 1341 * @xa: XArray. 1342 - * @index: Index into array. 1343 - * @entry: New entry. 1344 - * @gfp: Memory allocation flags. 1342 + * @index: Index of entry. 1345 1343 * 1346 - * After this function returns, loads from this index will return @entry. 1347 - * Storing into an existing multislot entry updates the entry of every index. 1348 - * The marks associated with @index are unaffected unless @entry is %NULL. 1344 + * This function is the equivalent of calling xa_store() with %NULL as 1345 + * the third argument. The XArray does not need to allocate memory, so 1346 + * the user does not need to provide GFP flags. 1349 1347 * 1350 - * Context: Process context. Takes and releases the xa_lock. May sleep 1351 - * if the @gfp flags permit. 1352 - * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry 1353 - * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation 1354 - * failed. 1348 + * Context: Any context. Takes and releases the xa_lock. 1349 + * Return: The entry which used to be at this index. 
1355 1350 */ 1356 - void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp) 1351 + void *xa_erase(struct xarray *xa, unsigned long index) 1357 1352 { 1358 - XA_STATE(xas, xa, index); 1359 - void *curr; 1353 + void *entry; 1360 1354 1361 - if (WARN_ON_ONCE(xa_is_internal(entry))) 1362 - return XA_ERROR(-EINVAL); 1355 + xa_lock(xa); 1356 + entry = __xa_erase(xa, index); 1357 + xa_unlock(xa); 1363 1358 1364 - do { 1365 - xas_lock(&xas); 1366 - curr = xas_store(&xas, entry); 1367 - if (xa_track_free(xa) && entry) 1368 - xas_clear_mark(&xas, XA_FREE_MARK); 1369 - xas_unlock(&xas); 1370 - } while (xas_nomem(&xas, gfp)); 1371 - 1372 - return xas_result(&xas, curr); 1359 + return entry; 1373 1360 } 1374 - EXPORT_SYMBOL(xa_store); 1361 + EXPORT_SYMBOL(xa_erase); 1375 1362 1376 1363 /** 1377 1364 * __xa_store() - Store this entry in the XArray. ··· 1382 1395 1383 1396 if (WARN_ON_ONCE(xa_is_internal(entry))) 1384 1397 return XA_ERROR(-EINVAL); 1398 + if (xa_track_free(xa) && !entry) 1399 + entry = XA_ZERO_ENTRY; 1385 1400 1386 1401 do { 1387 1402 curr = xas_store(&xas, entry); 1388 - if (xa_track_free(xa) && entry) 1403 + if (xa_track_free(xa)) 1389 1404 xas_clear_mark(&xas, XA_FREE_MARK); 1390 1405 } while (__xas_nomem(&xas, gfp)); 1391 1406 ··· 1396 1407 EXPORT_SYMBOL(__xa_store); 1397 1408 1398 1409 /** 1399 - * xa_cmpxchg() - Conditionally replace an entry in the XArray. 1410 + * xa_store() - Store this entry in the XArray. 1400 1411 * @xa: XArray. 1401 1412 * @index: Index into array. 1402 - * @old: Old value to test against. 1403 - * @entry: New value to place in array. 1413 + * @entry: New entry. 1404 1414 * @gfp: Memory allocation flags. 1405 1415 * 1406 - * If the entry at @index is the same as @old, replace it with @entry. 1407 - * If the return value is equal to @old, then the exchange was successful. 1416 + * After this function returns, loads from this index will return @entry. 
1417 + * Storing into an existing multislot entry updates the entry of every index. 1418 + * The marks associated with @index are unaffected unless @entry is %NULL. 1408 1419 * 1409 - * Context: Process context. Takes and releases the xa_lock. May sleep 1410 - * if the @gfp flags permit. 1411 - * Return: The old value at this index or xa_err() if an error happened. 1420 + * Context: Any context. Takes and releases the xa_lock. 1421 + * May sleep if the @gfp flags permit. 1422 + * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry 1423 + * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation 1424 + * failed. 1412 1425 */ 1413 - void *xa_cmpxchg(struct xarray *xa, unsigned long index, 1414 - void *old, void *entry, gfp_t gfp) 1426 + void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp) 1415 1427 { 1416 - XA_STATE(xas, xa, index); 1417 1428 void *curr; 1418 1429 1419 - if (WARN_ON_ONCE(xa_is_internal(entry))) 1420 - return XA_ERROR(-EINVAL); 1430 + xa_lock(xa); 1431 + curr = __xa_store(xa, index, entry, gfp); 1432 + xa_unlock(xa); 1421 1433 1422 - do { 1423 - xas_lock(&xas); 1424 - curr = xas_load(&xas); 1425 - if (curr == XA_ZERO_ENTRY) 1426 - curr = NULL; 1427 - if (curr == old) { 1428 - xas_store(&xas, entry); 1429 - if (xa_track_free(xa) && entry) 1430 - xas_clear_mark(&xas, XA_FREE_MARK); 1431 - } 1432 - xas_unlock(&xas); 1433 - } while (xas_nomem(&xas, gfp)); 1434 - 1435 - return xas_result(&xas, curr); 1434 + return curr; 1436 1435 } 1437 - EXPORT_SYMBOL(xa_cmpxchg); 1436 + EXPORT_SYMBOL(xa_store); 1438 1437 1439 1438 /** 1440 1439 * __xa_cmpxchg() - Store this entry in the XArray. 
··· 1448 1471 1449 1472 if (WARN_ON_ONCE(xa_is_internal(entry))) 1450 1473 return XA_ERROR(-EINVAL); 1474 + if (xa_track_free(xa) && !entry) 1475 + entry = XA_ZERO_ENTRY; 1451 1476 1452 1477 do { 1453 1478 curr = xas_load(&xas); ··· 1457 1478 curr = NULL; 1458 1479 if (curr == old) { 1459 1480 xas_store(&xas, entry); 1460 - if (xa_track_free(xa) && entry) 1481 + if (xa_track_free(xa)) 1461 1482 xas_clear_mark(&xas, XA_FREE_MARK); 1462 1483 } 1463 1484 } while (__xas_nomem(&xas, gfp)); ··· 1467 1488 EXPORT_SYMBOL(__xa_cmpxchg); 1468 1489 1469 1490 /** 1470 - * xa_reserve() - Reserve this index in the XArray. 1491 + * __xa_reserve() - Reserve this index in the XArray. 1471 1492 * @xa: XArray. 1472 1493 * @index: Index into array. 1473 1494 * @gfp: Memory allocation flags. ··· 1475 1496 * Ensures there is somewhere to store an entry at @index in the array. 1476 1497 * If there is already something stored at @index, this function does 1477 1498 * nothing. If there was nothing there, the entry is marked as reserved. 1478 - * Loads from @index will continue to see a %NULL pointer until a 1479 - * subsequent store to @index. 1499 + * Loading from a reserved entry returns a %NULL pointer. 1480 1500 * 1481 1501 * If you do not use the entry that you have reserved, call xa_release() 1482 1502 * or xa_erase() to free any unnecessary memory. 1483 1503 * 1484 - * Context: Process context. Takes and releases the xa_lock, IRQ or BH safe 1485 - * if specified in XArray flags. May sleep if the @gfp flags permit. 1504 + * Context: Any context. Expects the xa_lock to be held on entry. May 1505 + * release the lock, sleep and reacquire the lock if the @gfp flags permit. 1486 1506 * Return: 0 if the reservation succeeded or -ENOMEM if it failed. 
1487 1507 */ 1488 - int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp) 1508 + int __xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp) 1489 1509 { 1490 1510 XA_STATE(xas, xa, index); 1491 - unsigned int lock_type = xa_lock_type(xa); 1492 1511 void *curr; 1493 1512 1494 1513 do { 1495 - xas_lock_type(&xas, lock_type); 1496 1514 curr = xas_load(&xas); 1497 - if (!curr) 1515 + if (!curr) { 1498 1516 xas_store(&xas, XA_ZERO_ENTRY); 1499 - xas_unlock_type(&xas, lock_type); 1500 - } while (xas_nomem(&xas, gfp)); 1517 + if (xa_track_free(xa)) 1518 + xas_clear_mark(&xas, XA_FREE_MARK); 1519 + } 1520 + } while (__xas_nomem(&xas, gfp)); 1501 1521 1502 1522 return xas_error(&xas); 1503 1523 } 1504 - EXPORT_SYMBOL(xa_reserve); 1524 + EXPORT_SYMBOL(__xa_reserve); 1505 1525 1506 1526 #ifdef CONFIG_XARRAY_MULTI 1507 1527 static void xas_set_range(struct xa_state *xas, unsigned long first, ··· 1565 1587 do { 1566 1588 xas_lock(&xas); 1567 1589 if (entry) { 1568 - unsigned int order = (last == ~0UL) ? 64 : 1569 - ilog2(last + 1); 1590 + unsigned int order = BITS_PER_LONG; 1591 + if (last + 1) 1592 + order = __ffs(last + 1); 1570 1593 xas_set_order(&xas, last, order); 1571 1594 xas_create(&xas); 1572 1595 if (xas_error(&xas)) ··· 1641 1662 * @index: Index of entry. 1642 1663 * @mark: Mark number. 1643 1664 * 1644 - * Attempting to set a mark on a NULL entry does not succeed. 1665 + * Attempting to set a mark on a %NULL entry does not succeed. 1645 1666 * 1646 1667 * Context: Any context. Expects xa_lock to be held on entry. 1647 1668 */ ··· 1653 1674 if (entry) 1654 1675 xas_set_mark(&xas, mark); 1655 1676 } 1656 - EXPORT_SYMBOL_GPL(__xa_set_mark); 1677 + EXPORT_SYMBOL(__xa_set_mark); 1657 1678 1658 1679 /** 1659 1680 * __xa_clear_mark() - Clear this mark on this entry while locked. 
··· 1671 1692 if (entry) 1672 1693 xas_clear_mark(&xas, mark); 1673 1694 } 1674 - EXPORT_SYMBOL_GPL(__xa_clear_mark); 1695 + EXPORT_SYMBOL(__xa_clear_mark); 1675 1696 1676 1697 /** 1677 1698 * xa_get_mark() - Inquire whether this mark is set on this entry. ··· 1711 1732 * @index: Index of entry. 1712 1733 * @mark: Mark number. 1713 1734 * 1714 - * Attempting to set a mark on a NULL entry does not succeed. 1735 + * Attempting to set a mark on a %NULL entry does not succeed. 1715 1736 * 1716 1737 * Context: Process context. Takes and releases the xa_lock. 1717 1738 */ ··· 1808 1829 entry = xas_find_marked(&xas, max, filter); 1809 1830 else 1810 1831 entry = xas_find(&xas, max); 1832 + if (xas.xa_node == XAS_BOUNDS) 1833 + break; 1811 1834 if (xas.xa_shift) { 1812 1835 if (xas.xa_index & ((1UL << xas.xa_shift) - 1)) 1813 1836 continue; ··· 1880 1899 * 1881 1900 * The @filter may be an XArray mark value, in which case entries which are 1882 1901 * marked with that mark will be copied. It may also be %XA_PRESENT, in 1883 - * which case all entries which are not NULL will be copied. 1902 + * which case all entries which are not %NULL will be copied. 1884 1903 * 1885 1904 * The entries returned may not represent a snapshot of the XArray at a 1886 1905 * moment in time. For example, if another thread stores to index 5, then
+8 -2
mm/gup.c
··· 385 385 * @vma: vm_area_struct mapping @address 386 386 * @address: virtual address to look up 387 387 * @flags: flags modifying lookup behaviour 388 - * @page_mask: on output, *page_mask is set according to the size of the page 388 + * @ctx: contains dev_pagemap for %ZONE_DEVICE memory pinning and a 389 + * pointer to output page_mask 389 390 * 390 391 * @flags can have FOLL_ flags set, defined in <linux/mm.h> 391 392 * 392 - * Returns the mapped (struct page *), %NULL if no mapping exists, or 393 + * When getting pages from ZONE_DEVICE memory, the @ctx->pgmap caches 394 + * the device's dev_pagemap metadata to avoid repeating expensive lookups. 395 + * 396 + * On output, the @ctx->page_mask is set according to the size of the page. 397 + * 398 + * Return: the mapped (struct page *), %NULL if no mapping exists, or 393 399 * an error pointer if there is a mapping to something not represented 394 400 * by a page descriptor (see also vm_normal_page()). 395 401 */
+19 -4
mm/hugetlb.c
··· 3233 3233 int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src, 3234 3234 struct vm_area_struct *vma) 3235 3235 { 3236 - pte_t *src_pte, *dst_pte, entry; 3236 + pte_t *src_pte, *dst_pte, entry, dst_entry; 3237 3237 struct page *ptepage; 3238 3238 unsigned long addr; 3239 3239 int cow; ··· 3261 3261 break; 3262 3262 } 3263 3263 3264 - /* If the pagetables are shared don't copy or take references */ 3265 - if (dst_pte == src_pte) 3264 + /* 3265 + * If the pagetables are shared don't copy or take references. 3266 + * dst_pte == src_pte is the common case of src/dest sharing. 3267 + * 3268 + * However, src could have 'unshared' and dst shares with 3269 + * another vma. If dst_pte !none, this implies sharing. 3270 + * Check here before taking page table lock, and once again 3271 + * after taking the lock below. 3272 + */ 3273 + dst_entry = huge_ptep_get(dst_pte); 3274 + if ((dst_pte == src_pte) || !huge_pte_none(dst_entry)) 3266 3275 continue; 3267 3276 3268 3277 dst_ptl = huge_pte_lock(h, dst, dst_pte); 3269 3278 src_ptl = huge_pte_lockptr(h, src, src_pte); 3270 3279 spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING); 3271 3280 entry = huge_ptep_get(src_pte); 3272 - if (huge_pte_none(entry)) { /* skip none entry */ 3281 + dst_entry = huge_ptep_get(dst_pte); 3282 + if (huge_pte_none(entry) || !huge_pte_none(dst_entry)) { 3283 + /* 3284 + * Skip if src entry none. Also, skip in the 3285 + * unlikely case dst entry !none as this implies 3286 + * sharing with another vma. 3287 + */ 3273 3288 ; 3274 3289 } else if (unlikely(is_hugetlb_entry_migration(entry) || 3275 3290 is_hugetlb_entry_hwpoisoned(entry))) {
+1 -1
mm/memblock.c
··· 1179 1179 1180 1180 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP 1181 1181 /* 1182 - * Common iterator interface used to define for_each_mem_range(). 1182 + * Common iterator interface used to define for_each_mem_pfn_range(). 1183 1183 */ 1184 1184 void __init_memblock __next_mem_pfn_range(int *idx, int nid, 1185 1185 unsigned long *out_start_pfn,
+17 -11
mm/page_alloc.c
··· 4061 4061 int reserve_flags; 4062 4062 4063 4063 /* 4064 - * In the slowpath, we sanity check order to avoid ever trying to 4065 - * reclaim >= MAX_ORDER areas which will never succeed. Callers may 4066 - * be using allocators in order of preference for an area that is 4067 - * too large. 4068 - */ 4069 - if (order >= MAX_ORDER) { 4070 - WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN)); 4071 - return NULL; 4072 - } 4073 - 4074 - /* 4075 4064 * We also sanity check to catch abuse of atomic reserves being used by 4076 4065 * callers that are not in atomic context. 4077 4066 */ ··· 4352 4363 unsigned int alloc_flags = ALLOC_WMARK_LOW; 4353 4364 gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */ 4354 4365 struct alloc_context ac = { }; 4366 + 4367 + /* 4368 + * There are several places where we assume that the order value is sane 4369 + * so bail out early if the request is out of bound. 4370 + */ 4371 + if (unlikely(order >= MAX_ORDER)) { 4372 + WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN)); 4373 + return NULL; 4374 + } 4355 4375 4356 4376 gfp_mask &= gfp_allowed_mask; 4357 4377 alloc_mask = gfp_mask; ··· 7785 7787 7786 7788 if (PageReserved(page)) 7787 7789 goto unmovable; 7790 + 7791 + /* 7792 + * If the zone is movable and we have ruled out all reserved 7793 + * pages then it should be reasonably safe to assume the rest 7794 + * is movable. 7795 + */ 7796 + if (zone_idx(zone) == ZONE_MOVABLE) 7797 + continue; 7788 7798 7789 7799 /* 7790 7800 * Hugepages are not in LRU lists, but they're movable.
+1 -3
mm/shmem.c
··· 2563 2563 inode_lock(inode); 2564 2564 /* We're holding i_mutex so we can access i_size directly */ 2565 2565 2566 - if (offset < 0) 2567 - offset = -EINVAL; 2568 - else if (offset >= inode->i_size) 2566 + if (offset < 0 || offset >= inode->i_size) 2569 2567 offset = -ENXIO; 2570 2568 else { 2571 2569 start = offset >> PAGE_SHIFT;
+3 -3
mm/swapfile.c
··· 2813 2813 unsigned int type; 2814 2814 int i; 2815 2815 2816 - p = kzalloc(sizeof(*p), GFP_KERNEL); 2816 + p = kvzalloc(sizeof(*p), GFP_KERNEL); 2817 2817 if (!p) 2818 2818 return ERR_PTR(-ENOMEM); 2819 2819 ··· 2824 2824 } 2825 2825 if (type >= MAX_SWAPFILES) { 2826 2826 spin_unlock(&swap_lock); 2827 - kfree(p); 2827 + kvfree(p); 2828 2828 return ERR_PTR(-EPERM); 2829 2829 } 2830 2830 if (type >= nr_swapfiles) { ··· 2838 2838 smp_wmb(); 2839 2839 nr_swapfiles++; 2840 2840 } else { 2841 - kfree(p); 2841 + kvfree(p); 2842 2842 p = swap_info[type]; 2843 2843 /* 2844 2844 * Do not memset this entry: a racing procfs swap_next()
+4 -3
mm/vmstat.c
··· 1827 1827 1828 1828 /* 1829 1829 * The fast way of checking if there are any vmstat diffs. 1830 - * This works because the diffs are byte sized items. 1831 1830 */ 1832 - if (memchr_inv(p->vm_stat_diff, 0, NR_VM_ZONE_STAT_ITEMS)) 1831 + if (memchr_inv(p->vm_stat_diff, 0, NR_VM_ZONE_STAT_ITEMS * 1832 + sizeof(p->vm_stat_diff[0]))) 1833 1833 return true; 1834 1834 #ifdef CONFIG_NUMA 1835 - if (memchr_inv(p->vm_numa_stat_diff, 0, NR_VM_NUMA_STAT_ITEMS)) 1835 + if (memchr_inv(p->vm_numa_stat_diff, 0, NR_VM_NUMA_STAT_ITEMS * 1836 + sizeof(p->vm_numa_stat_diff[0]))) 1836 1837 return true; 1837 1838 #endif 1838 1839 }
+63 -40
mm/z3fold.c
··· 99 99 #define NCHUNKS ((PAGE_SIZE - ZHDR_SIZE_ALIGNED) >> CHUNK_SHIFT) 100 100 101 101 #define BUDDY_MASK (0x3) 102 + #define BUDDY_SHIFT 2 102 103 103 104 /** 104 105 * struct z3fold_pool - stores metadata for each z3fold pool ··· 146 145 MIDDLE_CHUNK_MAPPED, 147 146 NEEDS_COMPACTING, 148 147 PAGE_STALE, 149 - UNDER_RECLAIM 148 + PAGE_CLAIMED, /* by either reclaim or free */ 150 149 }; 151 150 152 151 /***************** ··· 175 174 clear_bit(MIDDLE_CHUNK_MAPPED, &page->private); 176 175 clear_bit(NEEDS_COMPACTING, &page->private); 177 176 clear_bit(PAGE_STALE, &page->private); 178 - clear_bit(UNDER_RECLAIM, &page->private); 177 + clear_bit(PAGE_CLAIMED, &page->private); 179 178 180 179 spin_lock_init(&zhdr->page_lock); 181 180 kref_init(&zhdr->refcount); ··· 224 223 unsigned long handle; 225 224 226 225 handle = (unsigned long)zhdr; 227 - if (bud != HEADLESS) 228 - handle += (bud + zhdr->first_num) & BUDDY_MASK; 226 + if (bud != HEADLESS) { 227 + handle |= (bud + zhdr->first_num) & BUDDY_MASK; 228 + if (bud == LAST) 229 + handle |= (zhdr->last_chunks << BUDDY_SHIFT); 230 + } 229 231 return handle; 230 232 } 231 233 ··· 236 232 static struct z3fold_header *handle_to_z3fold_header(unsigned long handle) 237 233 { 238 234 return (struct z3fold_header *)(handle & PAGE_MASK); 235 + } 236 + 237 + /* only for LAST bud, returns zero otherwise */ 238 + static unsigned short handle_to_chunks(unsigned long handle) 239 + { 240 + return (handle & ~PAGE_MASK) >> BUDDY_SHIFT; 239 241 } 240 242 241 243 /* ··· 730 720 page = virt_to_page(zhdr); 731 721 732 722 if (test_bit(PAGE_HEADLESS, &page->private)) { 733 - /* HEADLESS page stored */ 734 - bud = HEADLESS; 735 - } else { 736 - z3fold_page_lock(zhdr); 737 - bud = handle_to_buddy(handle); 738 - 739 - switch (bud) { 740 - case FIRST: 741 - zhdr->first_chunks = 0; 742 - break; 743 - case MIDDLE: 744 - zhdr->middle_chunks = 0; 745 - zhdr->start_middle = 0; 746 - break; 747 - case LAST: 748 - zhdr->last_chunks = 0; 749 - break; 
750 - default: 751 - pr_err("%s: unknown bud %d\n", __func__, bud); 752 - WARN_ON(1); 753 - z3fold_page_unlock(zhdr); 754 - return; 723 + /* if a headless page is under reclaim, just leave. 724 + * NB: we use test_and_set_bit for a reason: if the bit 725 + * has not been set before, we release this page 726 + * immediately so we don't care about its value any more. 727 + */ 728 + if (!test_and_set_bit(PAGE_CLAIMED, &page->private)) { 729 + spin_lock(&pool->lock); 730 + list_del(&page->lru); 731 + spin_unlock(&pool->lock); 732 + free_z3fold_page(page); 733 + atomic64_dec(&pool->pages_nr); 755 734 } 735 + return; 756 736 } 757 737 758 - if (bud == HEADLESS) { 759 - spin_lock(&pool->lock); 760 - list_del(&page->lru); 761 - spin_unlock(&pool->lock); 762 - free_z3fold_page(page); 763 - atomic64_dec(&pool->pages_nr); 738 + /* Non-headless case */ 739 + z3fold_page_lock(zhdr); 740 + bud = handle_to_buddy(handle); 741 + 742 + switch (bud) { 743 + case FIRST: 744 + zhdr->first_chunks = 0; 745 + break; 746 + case MIDDLE: 747 + zhdr->middle_chunks = 0; 748 + break; 749 + case LAST: 750 + zhdr->last_chunks = 0; 751 + break; 752 + default: 753 + pr_err("%s: unknown bud %d\n", __func__, bud); 754 + WARN_ON(1); 755 + z3fold_page_unlock(zhdr); 764 756 return; 765 757 } 766 758 ··· 770 758 atomic64_dec(&pool->pages_nr); 771 759 return; 772 760 } 773 - if (test_bit(UNDER_RECLAIM, &page->private)) { 761 + if (test_bit(PAGE_CLAIMED, &page->private)) { 774 762 z3fold_page_unlock(zhdr); 775 763 return; 776 764 } ··· 848 836 } 849 837 list_for_each_prev(pos, &pool->lru) { 850 838 page = list_entry(pos, struct page, lru); 851 - if (test_bit(PAGE_HEADLESS, &page->private)) 852 - /* candidate found */ 853 - break; 839 + 840 + /* this bit could have been set by free, in which case 841 + * we pass over to the next page in the pool. 
842 + */ 843 + if (test_and_set_bit(PAGE_CLAIMED, &page->private)) 844 + continue; 854 845 855 846 zhdr = page_address(page); 856 - if (!z3fold_page_trylock(zhdr)) 847 + if (test_bit(PAGE_HEADLESS, &page->private)) 848 + break; 849 + 850 + if (!z3fold_page_trylock(zhdr)) { 851 + zhdr = NULL; 857 852 continue; /* can't evict at this point */ 853 + } 858 854 kref_get(&zhdr->refcount); 859 855 list_del_init(&zhdr->buddy); 860 856 zhdr->cpu = -1; 861 - set_bit(UNDER_RECLAIM, &page->private); 862 857 break; 863 858 } 859 + 860 + if (!zhdr) 861 + break; 864 862 865 863 list_del_init(&page->lru); 866 864 spin_unlock(&pool->lock); ··· 920 898 if (test_bit(PAGE_HEADLESS, &page->private)) { 921 899 if (ret == 0) { 922 900 free_z3fold_page(page); 901 + atomic64_dec(&pool->pages_nr); 923 902 return 0; 924 903 } 925 904 spin_lock(&pool->lock); ··· 928 905 spin_unlock(&pool->lock); 929 906 } else { 930 907 z3fold_page_lock(zhdr); 931 - clear_bit(UNDER_RECLAIM, &page->private); 908 + clear_bit(PAGE_CLAIMED, &page->private); 932 909 if (kref_put(&zhdr->refcount, 933 910 release_z3fold_page_locked)) { 934 911 atomic64_dec(&pool->pages_nr); ··· 987 964 set_bit(MIDDLE_CHUNK_MAPPED, &page->private); 988 965 break; 989 966 case LAST: 990 - addr += PAGE_SIZE - (zhdr->last_chunks << CHUNK_SHIFT); 967 + addr += PAGE_SIZE - (handle_to_chunks(handle) << CHUNK_SHIFT); 991 968 break; 992 969 default: 993 970 pr_err("unknown buddy id %d\n", buddy);
+4 -2
net/batman-adv/bat_v_elp.c
··· 352 352 */ 353 353 int batadv_v_elp_iface_enable(struct batadv_hard_iface *hard_iface) 354 354 { 355 + static const size_t tvlv_padding = sizeof(__be32); 355 356 struct batadv_elp_packet *elp_packet; 356 357 unsigned char *elp_buff; 357 358 u32 random_seqno; 358 359 size_t size; 359 360 int res = -ENOMEM; 360 361 361 - size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN; 362 + size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN + tvlv_padding; 362 363 hard_iface->bat_v.elp_skb = dev_alloc_skb(size); 363 364 if (!hard_iface->bat_v.elp_skb) 364 365 goto out; 365 366 366 367 skb_reserve(hard_iface->bat_v.elp_skb, ETH_HLEN + NET_IP_ALIGN); 367 - elp_buff = skb_put_zero(hard_iface->bat_v.elp_skb, BATADV_ELP_HLEN); 368 + elp_buff = skb_put_zero(hard_iface->bat_v.elp_skb, 369 + BATADV_ELP_HLEN + tvlv_padding); 368 370 elp_packet = (struct batadv_elp_packet *)elp_buff; 369 371 370 372 elp_packet->packet_type = BATADV_ELP;
+1 -1
net/batman-adv/fragmentation.c
··· 275 275 kfree(entry); 276 276 277 277 packet = (struct batadv_frag_packet *)skb_out->data; 278 - size = ntohs(packet->total_size); 278 + size = ntohs(packet->total_size) + hdr_size; 279 279 280 280 /* Make room for the rest of the fragments. */ 281 281 if (pskb_expand_head(skb_out, 0, size - skb_out->len, GFP_ATOMIC) < 0) {
+7
net/bridge/br_private.h
··· 102 102 struct metadata_dst *tunnel_dst; 103 103 }; 104 104 105 + /* private vlan flags */ 106 + enum { 107 + BR_VLFLAG_PER_PORT_STATS = BIT(0), 108 + }; 109 + 105 110 /** 106 111 * struct net_bridge_vlan - per-vlan entry 107 112 * 108 113 * @vnode: rhashtable member 109 114 * @vid: VLAN id 110 115 * @flags: bridge vlan flags 116 + * @priv_flags: private (in-kernel) bridge vlan flags 111 117 * @stats: per-cpu VLAN statistics 112 118 * @br: if MASTER flag set, this points to a bridge struct 113 119 * @port: if MASTER flag unset, this points to a port struct ··· 133 127 struct rhash_head tnode; 134 128 u16 vid; 135 129 u16 flags; 130 + u16 priv_flags; 136 131 struct br_vlan_stats __percpu *stats; 137 132 union { 138 133 struct net_bridge *br;
+2 -1
net/bridge/br_vlan.c
··· 197 197 v = container_of(rcu, struct net_bridge_vlan, rcu); 198 198 WARN_ON(br_vlan_is_master(v)); 199 199 /* if we had per-port stats configured then free them here */ 200 - if (v->brvlan->stats != v->stats) 200 + if (v->priv_flags & BR_VLFLAG_PER_PORT_STATS) 201 201 free_percpu(v->stats); 202 202 v->stats = NULL; 203 203 kfree(v); ··· 264 264 err = -ENOMEM; 265 265 goto out_filt; 266 266 } 267 + v->priv_flags |= BR_VLFLAG_PER_PORT_STATS; 267 268 } else { 268 269 v->stats = masterv->stats; 269 270 }
+9 -8
net/can/raw.c
··· 745 745 } else 746 746 ifindex = ro->ifindex; 747 747 748 - if (ro->fd_frames) { 749 - if (unlikely(size != CANFD_MTU && size != CAN_MTU)) 750 - return -EINVAL; 751 - } else { 752 - if (unlikely(size != CAN_MTU)) 753 - return -EINVAL; 754 - } 755 - 756 748 dev = dev_get_by_index(sock_net(sk), ifindex); 757 749 if (!dev) 758 750 return -ENXIO; 751 + 752 + err = -EINVAL; 753 + if (ro->fd_frames && dev->mtu == CANFD_MTU) { 754 + if (unlikely(size != CANFD_MTU && size != CAN_MTU)) 755 + goto put_dev; 756 + } else { 757 + if (unlikely(size != CAN_MTU)) 758 + goto put_dev; 759 + } 759 760 760 761 skb = sock_alloc_send_skb(sk, size + sizeof(struct can_skb_priv), 761 762 msg->msg_flags & MSG_DONTWAIT, &err);
+9 -3
net/ceph/messenger.c
··· 580 580 struct bio_vec bvec; 581 581 int ret; 582 582 583 - /* sendpage cannot properly handle pages with page_count == 0, 584 - * we need to fallback to sendmsg if that's the case */ 585 - if (page_count(page) >= 1) 583 + /* 584 + * sendpage cannot properly handle pages with page_count == 0, 585 + * we need to fall back to sendmsg if that's the case. 586 + * 587 + * Same goes for slab pages: skb_can_coalesce() allows 588 + * coalescing neighboring slab objects into a single frag which 589 + * triggers one of hardened usercopy checks. 590 + */ 591 + if (page_count(page) >= 1 && !PageSlab(page)) 586 592 return __ceph_tcp_sendpage(sock, page, offset, size, more); 587 593 588 594 bvec.bv_page = page;
+9 -2
net/core/dev.c
··· 5655 5655 skb->vlan_tci = 0; 5656 5656 skb->dev = napi->dev; 5657 5657 skb->skb_iif = 0; 5658 + 5659 + /* eth_type_trans() assumes pkt_type is PACKET_HOST */ 5660 + skb->pkt_type = PACKET_HOST; 5661 + 5658 5662 skb->encapsulation = 0; 5659 5663 skb_shinfo(skb)->gso_type = 0; 5660 5664 skb->truesize = SKB_TRUESIZE(skb_end_offset(skb)); ··· 5970 5966 if (work_done) 5971 5967 timeout = n->dev->gro_flush_timeout; 5972 5968 5969 + /* When the NAPI instance uses a timeout and keeps postponing 5970 + * it, we need to bound somehow the time packets are kept in 5971 + * the GRO layer 5972 + */ 5973 + napi_gro_flush(n, !!timeout); 5973 5974 if (timeout) 5974 5975 hrtimer_start(&n->timer, ns_to_ktime(timeout), 5975 5976 HRTIMER_MODE_REL_PINNED); 5976 - else 5977 - napi_gro_flush(n, false); 5978 5977 } 5979 5978 if (unlikely(!list_empty(&n->poll_list))) { 5980 5979 /* If n->poll_list is not empty, we need to mask irqs */
+5
net/core/skbuff.c
··· 4854 4854 nf_reset(skb); 4855 4855 nf_reset_trace(skb); 4856 4856 4857 + #ifdef CONFIG_NET_SWITCHDEV 4858 + skb->offload_fwd_mark = 0; 4859 + skb->offload_mr_fwd_mark = 0; 4860 + #endif 4861 + 4857 4862 if (!xnet) 4858 4863 return; 4859 4864
+1 -1
net/ipv4/ip_tunnel_core.c
··· 80 80 81 81 iph->version = 4; 82 82 iph->ihl = sizeof(struct iphdr) >> 2; 83 - iph->frag_off = df; 83 + iph->frag_off = ip_mtu_locked(&rt->dst) ? 0 : df; 84 84 iph->protocol = proto; 85 85 iph->tos = tos; 86 86 iph->daddr = dst;
+13 -2
net/ipv4/tcp_input.c
··· 4268 4268 * If the sack array is full, forget about the last one. 4269 4269 */ 4270 4270 if (this_sack >= TCP_NUM_SACKS) { 4271 - if (tp->compressed_ack) 4271 + if (tp->compressed_ack > TCP_FASTRETRANS_THRESH) 4272 4272 tcp_send_ack(sk); 4273 4273 this_sack--; 4274 4274 tp->rx_opt.num_sacks--; ··· 4363 4363 if (TCP_SKB_CB(from)->has_rxtstamp) { 4364 4364 TCP_SKB_CB(to)->has_rxtstamp = true; 4365 4365 to->tstamp = from->tstamp; 4366 + skb_hwtstamps(to)->hwtstamp = skb_hwtstamps(from)->hwtstamp; 4366 4367 } 4367 4368 4368 4369 return true; ··· 5189 5188 if (!tcp_is_sack(tp) || 5190 5189 tp->compressed_ack >= sock_net(sk)->ipv4.sysctl_tcp_comp_sack_nr) 5191 5190 goto send_now; 5192 - tp->compressed_ack++; 5191 + 5192 + if (tp->compressed_ack_rcv_nxt != tp->rcv_nxt) { 5193 + tp->compressed_ack_rcv_nxt = tp->rcv_nxt; 5194 + if (tp->compressed_ack > TCP_FASTRETRANS_THRESH) 5195 + NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPACKCOMPRESSED, 5196 + tp->compressed_ack - TCP_FASTRETRANS_THRESH); 5197 + tp->compressed_ack = 0; 5198 + } 5199 + 5200 + if (++tp->compressed_ack <= TCP_FASTRETRANS_THRESH) 5201 + goto send_now; 5193 5202 5194 5203 if (hrtimer_is_queued(&tp->compressed_ack_timer)) 5195 5204 return;
+3 -3
net/ipv4/tcp_output.c
··· 180 180 { 181 181 struct tcp_sock *tp = tcp_sk(sk); 182 182 183 - if (unlikely(tp->compressed_ack)) { 183 + if (unlikely(tp->compressed_ack > TCP_FASTRETRANS_THRESH)) { 184 184 NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPACKCOMPRESSED, 185 - tp->compressed_ack); 186 - tp->compressed_ack = 0; 185 + tp->compressed_ack - TCP_FASTRETRANS_THRESH); 186 + tp->compressed_ack = TCP_FASTRETRANS_THRESH; 187 187 if (hrtimer_try_to_cancel(&tp->compressed_ack_timer) == 1) 188 188 __sock_put(sk); 189 189 }
+1 -1
net/ipv4/tcp_timer.c
··· 740 740 741 741 bh_lock_sock(sk); 742 742 if (!sock_owned_by_user(sk)) { 743 - if (tp->compressed_ack) 743 + if (tp->compressed_ack > TCP_FASTRETRANS_THRESH) 744 744 tcp_send_ack(sk); 745 745 } else { 746 746 if (!test_and_set_bit(TCP_DELACK_TIMER_DEFERRED,
+13 -6
net/ipv6/addrconf.c
··· 179 179 static void addrconf_dad_work(struct work_struct *w); 180 180 static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id, 181 181 bool send_na); 182 - static void addrconf_dad_run(struct inet6_dev *idev); 182 + static void addrconf_dad_run(struct inet6_dev *idev, bool restart); 183 183 static void addrconf_rs_timer(struct timer_list *t); 184 184 static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifa); 185 185 static void ipv6_ifa_notify(int event, struct inet6_ifaddr *ifa); ··· 3439 3439 void *ptr) 3440 3440 { 3441 3441 struct net_device *dev = netdev_notifier_info_to_dev(ptr); 3442 + struct netdev_notifier_change_info *change_info; 3442 3443 struct netdev_notifier_changeupper_info *info; 3443 3444 struct inet6_dev *idev = __in6_dev_get(dev); 3444 3445 struct net *net = dev_net(dev); ··· 3514 3513 break; 3515 3514 } 3516 3515 3517 - if (idev) { 3516 + if (!IS_ERR_OR_NULL(idev)) { 3518 3517 if (idev->if_flags & IF_READY) { 3519 3518 /* device is already configured - 3520 3519 * but resend MLD reports, we might ··· 3522 3521 * multicast snooping switches 3523 3522 */ 3524 3523 ipv6_mc_up(idev); 3524 + change_info = ptr; 3525 + if (change_info->flags_changed & IFF_NOARP) 3526 + addrconf_dad_run(idev, true); 3525 3527 rt6_sync_up(dev, RTNH_F_LINKDOWN); 3526 3528 break; 3527 3529 } ··· 3559 3555 3560 3556 if (!IS_ERR_OR_NULL(idev)) { 3561 3557 if (run_pending) 3562 - addrconf_dad_run(idev); 3558 + addrconf_dad_run(idev, false); 3563 3559 3564 3560 /* Device has an address by now */ 3565 3561 rt6_sync_up(dev, RTNH_F_DEAD); ··· 4177 4173 addrconf_verify_rtnl(); 4178 4174 } 4179 4175 4180 - static void addrconf_dad_run(struct inet6_dev *idev) 4176 + static void addrconf_dad_run(struct inet6_dev *idev, bool restart) 4181 4177 { 4182 4178 struct inet6_ifaddr *ifp; 4183 4179 4184 4180 read_lock_bh(&idev->lock); 4185 4181 list_for_each_entry(ifp, &idev->addr_list, if_list) { 4186 4182 spin_lock(&ifp->lock); 4187 - if (ifp->flags & 
IFA_F_TENTATIVE && 4188 - ifp->state == INET6_IFADDR_STATE_DAD) 4183 + if ((ifp->flags & IFA_F_TENTATIVE && 4184 + ifp->state == INET6_IFADDR_STATE_DAD) || restart) { 4185 + if (restart) 4186 + ifp->state = INET6_IFADDR_STATE_PREDAD; 4189 4187 addrconf_dad_kick(ifp); 4188 + } 4190 4189 spin_unlock(&ifp->lock); 4191 4190 } 4192 4191 read_unlock_bh(&idev->lock);
+8 -6
net/ipv6/route.c
··· 2232 2232 if (rt) { 2233 2233 rcu_read_lock(); 2234 2234 if (rt->rt6i_flags & RTF_CACHE) { 2235 - if (dst_hold_safe(&rt->dst)) 2236 - rt6_remove_exception_rt(rt); 2235 + rt6_remove_exception_rt(rt); 2237 2236 } else { 2238 2237 struct fib6_info *from; 2239 2238 struct fib6_node *fn; ··· 2359 2360 2360 2361 void ip6_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, __be32 mtu) 2361 2362 { 2363 + int oif = sk->sk_bound_dev_if; 2362 2364 struct dst_entry *dst; 2363 2365 2364 - ip6_update_pmtu(skb, sock_net(sk), mtu, 2365 - sk->sk_bound_dev_if, sk->sk_mark, sk->sk_uid); 2366 + if (!oif && skb->dev) 2367 + oif = l3mdev_master_ifindex(skb->dev); 2368 + 2369 + ip6_update_pmtu(skb, sock_net(sk), mtu, oif, sk->sk_mark, sk->sk_uid); 2366 2370 2367 2371 dst = __sk_dst_get(sk); 2368 2372 if (!dst || !dst->obsolete || ··· 3216 3214 if (cfg->fc_flags & RTF_GATEWAY && 3217 3215 !ipv6_addr_equal(&cfg->fc_gateway, &rt->rt6i_gateway)) 3218 3216 goto out; 3219 - if (dst_hold_safe(&rt->dst)) 3220 - rc = rt6_remove_exception_rt(rt); 3217 + 3218 + rc = rt6_remove_exception_rt(rt); 3221 3219 out: 3222 3220 return rc; 3223 3221 }
+4 -5
net/l2tp/l2tp_core.c
··· 1490 1490 goto err_sock; 1491 1491 } 1492 1492 1493 - sk = sock->sk; 1494 - 1495 - sock_hold(sk); 1496 - tunnel->sock = sk; 1497 1493 tunnel->l2tp_net = net; 1498 - 1499 1494 pn = l2tp_pernet(net); 1500 1495 1501 1496 spin_lock_bh(&pn->l2tp_tunnel_list_lock); ··· 1504 1509 } 1505 1510 list_add_rcu(&tunnel->list, &pn->l2tp_tunnel_list); 1506 1511 spin_unlock_bh(&pn->l2tp_tunnel_list_lock); 1512 + 1513 + sk = sock->sk; 1514 + sock_hold(sk); 1515 + tunnel->sock = sk; 1507 1516 1508 1517 if (tunnel->encap == L2TP_ENCAPTYPE_UDP) { 1509 1518 struct udp_tunnel_sock_cfg udp_cfg = {
+2 -2
net/packet/af_packet.c
··· 2394 2394 void *ph; 2395 2395 __u32 ts; 2396 2396 2397 - ph = skb_shinfo(skb)->destructor_arg; 2397 + ph = skb_zcopy_get_nouarg(skb); 2398 2398 packet_dec_pending(&po->tx_ring); 2399 2399 2400 2400 ts = __packet_set_timestamp(po, ph, skb); ··· 2461 2461 skb->mark = po->sk.sk_mark; 2462 2462 skb->tstamp = sockc->transmit_time; 2463 2463 sock_tx_timestamp(&po->sk, sockc->tsflags, &skb_shinfo(skb)->tx_flags); 2464 - skb_shinfo(skb)->destructor_arg = ph.raw; 2464 + skb_zcopy_set_nouarg(skb, ph.raw); 2465 2465 2466 2466 skb_reserve(skb, hlen); 2467 2467 skb_reset_network_header(skb);
+23 -4
net/rxrpc/af_rxrpc.c
··· 375 375 * getting ACKs from the server. Returns a number representing the life state 376 376 * which can be compared to that returned by a previous call. 377 377 * 378 - * If this is a client call, ping ACKs will be sent to the server to find out 379 - * whether it's still responsive and whether the call is still alive on the 380 - * server. 378 + * If the life state stalls, rxrpc_kernel_probe_life() should be called and 379 + * then 2RTT waited. 381 380 */ 382 - u32 rxrpc_kernel_check_life(struct socket *sock, struct rxrpc_call *call) 381 + u32 rxrpc_kernel_check_life(const struct socket *sock, 382 + const struct rxrpc_call *call) 383 383 { 384 384 return call->acks_latest; 385 385 } 386 386 EXPORT_SYMBOL(rxrpc_kernel_check_life); 387 + 388 + /** 389 + * rxrpc_kernel_probe_life - Poke the peer to see if it's still alive 390 + * @sock: The socket the call is on 391 + * @call: The call to check 392 + * 393 + * In conjunction with rxrpc_kernel_check_life(), allow a kernel service to 394 + * find out whether a call is still alive by pinging it. This should cause the 395 + * life state to be bumped in about 2*RTT. 396 + * 397 + * The must be called in TASK_RUNNING state on pain of might_sleep() objecting. 398 + */ 399 + void rxrpc_kernel_probe_life(struct socket *sock, struct rxrpc_call *call) 400 + { 401 + rxrpc_propose_ACK(call, RXRPC_ACK_PING, 0, 0, true, false, 402 + rxrpc_propose_ack_ping_for_check_life); 403 + rxrpc_send_ack_packet(call, true, NULL); 404 + } 405 + EXPORT_SYMBOL(rxrpc_kernel_probe_life); 387 406 388 407 /** 389 408 * rxrpc_kernel_get_epoch - Retrieve the epoch value from a call.
+2 -1
net/sched/act_pedit.c
··· 201 201 goto out_release; 202 202 } 203 203 } else { 204 - return err; 204 + ret = err; 205 + goto out_free; 205 206 } 206 207 207 208 p = to_pedit(*a);
+22 -14
net/sched/act_police.c
··· 27 27 u32 tcfp_ewma_rate; 28 28 s64 tcfp_burst; 29 29 u32 tcfp_mtu; 30 - s64 tcfp_toks; 31 - s64 tcfp_ptoks; 32 30 s64 tcfp_mtu_ptoks; 33 - s64 tcfp_t_c; 34 31 struct psched_ratecfg rate; 35 32 bool rate_present; 36 33 struct psched_ratecfg peak; ··· 38 41 struct tcf_police { 39 42 struct tc_action common; 40 43 struct tcf_police_params __rcu *params; 44 + 45 + spinlock_t tcfp_lock ____cacheline_aligned_in_smp; 46 + s64 tcfp_toks; 47 + s64 tcfp_ptoks; 48 + s64 tcfp_t_c; 41 49 }; 42 50 43 51 #define to_police(pc) ((struct tcf_police *)pc) ··· 124 122 return ret; 125 123 } 126 124 ret = ACT_P_CREATED; 125 + spin_lock_init(&(to_police(*a)->tcfp_lock)); 127 126 } else if (!ovr) { 128 127 tcf_idr_release(*a, bind); 129 128 return -EEXIST; ··· 189 186 } 190 187 191 188 new->tcfp_burst = PSCHED_TICKS2NS(parm->burst); 192 - new->tcfp_toks = new->tcfp_burst; 193 - if (new->peak_present) { 189 + if (new->peak_present) 194 190 new->tcfp_mtu_ptoks = (s64)psched_l2t_ns(&new->peak, 195 191 new->tcfp_mtu); 196 - new->tcfp_ptoks = new->tcfp_mtu_ptoks; 197 - } 198 192 199 193 if (tb[TCA_POLICE_AVRATE]) 200 194 new->tcfp_ewma_rate = nla_get_u32(tb[TCA_POLICE_AVRATE]); ··· 207 207 } 208 208 209 209 spin_lock_bh(&police->tcf_lock); 210 - new->tcfp_t_c = ktime_get_ns(); 210 + spin_lock_bh(&police->tcfp_lock); 211 + police->tcfp_t_c = ktime_get_ns(); 212 + police->tcfp_toks = new->tcfp_burst; 213 + if (new->peak_present) 214 + police->tcfp_ptoks = new->tcfp_mtu_ptoks; 215 + spin_unlock_bh(&police->tcfp_lock); 211 216 police->tcf_action = parm->action; 212 217 rcu_swap_protected(police->params, 213 218 new, ··· 262 257 } 263 258 264 259 now = ktime_get_ns(); 265 - toks = min_t(s64, now - p->tcfp_t_c, p->tcfp_burst); 260 + spin_lock_bh(&police->tcfp_lock); 261 + toks = min_t(s64, now - police->tcfp_t_c, p->tcfp_burst); 266 262 if (p->peak_present) { 267 - ptoks = toks + p->tcfp_ptoks; 263 + ptoks = toks + police->tcfp_ptoks; 268 264 if (ptoks > p->tcfp_mtu_ptoks) 269 265 ptoks = 
p->tcfp_mtu_ptoks; 270 266 ptoks -= (s64)psched_l2t_ns(&p->peak, 271 267 qdisc_pkt_len(skb)); 272 268 } 273 - toks += p->tcfp_toks; 269 + toks += police->tcfp_toks; 274 270 if (toks > p->tcfp_burst) 275 271 toks = p->tcfp_burst; 276 272 toks -= (s64)psched_l2t_ns(&p->rate, qdisc_pkt_len(skb)); 277 273 if ((toks|ptoks) >= 0) { 278 - p->tcfp_t_c = now; 279 - p->tcfp_toks = toks; 280 - p->tcfp_ptoks = ptoks; 274 + police->tcfp_t_c = now; 275 + police->tcfp_toks = toks; 276 + police->tcfp_ptoks = ptoks; 277 + spin_unlock_bh(&police->tcfp_lock); 281 278 ret = p->tcfp_result; 282 279 goto inc_drops; 283 280 } 281 + spin_unlock_bh(&police->tcfp_lock); 284 282 } 285 283 286 284 inc_overlimits:
+18 -11
net/sched/sch_fq.c
··· 469 469 goto begin; 470 470 } 471 471 prefetch(&skb->end); 472 - f->credit -= qdisc_pkt_len(skb); 472 + plen = qdisc_pkt_len(skb); 473 + f->credit -= plen; 473 474 474 - if (ktime_to_ns(skb->tstamp) || !q->rate_enable) 475 + if (!q->rate_enable) 475 476 goto out; 476 477 477 478 rate = q->flow_max_rate; 478 - if (skb->sk) 479 - rate = min(skb->sk->sk_pacing_rate, rate); 480 479 481 - if (rate <= q->low_rate_threshold) { 482 - f->credit = 0; 483 - plen = qdisc_pkt_len(skb); 484 - } else { 485 - plen = max(qdisc_pkt_len(skb), q->quantum); 486 - if (f->credit > 0) 487 - goto out; 480 + /* If EDT time was provided for this skb, we need to 481 + * update f->time_next_packet only if this qdisc enforces 482 + * a flow max rate. 483 + */ 484 + if (!skb->tstamp) { 485 + if (skb->sk) 486 + rate = min(skb->sk->sk_pacing_rate, rate); 487 + 488 + if (rate <= q->low_rate_threshold) { 489 + f->credit = 0; 490 + } else { 491 + plen = max(plen, q->quantum); 492 + if (f->credit > 0) 493 + goto out; 494 + } 488 495 } 489 496 if (rate != ~0UL) { 490 497 u64 len = (u64)plen * NSEC_PER_SEC;
+4 -20
net/sctp/output.c
··· 118 118 sctp_transport_route(tp, NULL, sp); 119 119 if (asoc->param_flags & SPP_PMTUD_ENABLE) 120 120 sctp_assoc_sync_pmtu(asoc); 121 + } else if (!sctp_transport_pmtu_check(tp)) { 122 + if (asoc->param_flags & SPP_PMTUD_ENABLE) 123 + sctp_assoc_sync_pmtu(asoc); 121 124 } 122 125 123 126 if (asoc->pmtu_pending) { ··· 399 396 return retval; 400 397 } 401 398 402 - static void sctp_packet_release_owner(struct sk_buff *skb) 403 - { 404 - sk_free(skb->sk); 405 - } 406 - 407 - static void sctp_packet_set_owner_w(struct sk_buff *skb, struct sock *sk) 408 - { 409 - skb_orphan(skb); 410 - skb->sk = sk; 411 - skb->destructor = sctp_packet_release_owner; 412 - 413 - /* 414 - * The data chunks have already been accounted for in sctp_sendmsg(), 415 - * therefore only reserve a single byte to keep socket around until 416 - * the packet has been transmitted. 417 - */ 418 - refcount_inc(&sk->sk_wmem_alloc); 419 - } 420 - 421 399 static void sctp_packet_gso_append(struct sk_buff *head, struct sk_buff *skb) 422 400 { 423 401 if (SCTP_OUTPUT_CB(head)->last == head) ··· 585 601 if (!head) 586 602 goto out; 587 603 skb_reserve(head, packet->overhead + MAX_HEADER); 588 - sctp_packet_set_owner_w(head, sk); 604 + skb_set_owner_w(head, sk); 589 605 590 606 /* set sctp header */ 591 607 sh = skb_push(head, sizeof(struct sctphdr));
+5 -21
net/sctp/socket.c
··· 3940 3940 unsigned int optlen) 3941 3941 { 3942 3942 struct sctp_assoc_value params; 3943 - struct sctp_association *asoc; 3944 - int retval = -EINVAL; 3945 3943 3946 3944 if (optlen != sizeof(params)) 3947 - goto out; 3945 + return -EINVAL; 3948 3946 3949 - if (copy_from_user(&params, optval, optlen)) { 3950 - retval = -EFAULT; 3951 - goto out; 3952 - } 3947 + if (copy_from_user(&params, optval, optlen)) 3948 + return -EFAULT; 3953 3949 3954 - asoc = sctp_id2assoc(sk, params.assoc_id); 3955 - if (asoc) { 3956 - asoc->prsctp_enable = !!params.assoc_value; 3957 - } else if (!params.assoc_id) { 3958 - struct sctp_sock *sp = sctp_sk(sk); 3950 + sctp_sk(sk)->ep->prsctp_enable = !!params.assoc_value; 3959 3951 3960 - sp->ep->prsctp_enable = !!params.assoc_value; 3961 - } else { 3962 - goto out; 3963 - } 3964 - 3965 - retval = 0; 3966 - 3967 - out: 3968 - return retval; 3952 + return 0; 3969 3953 } 3970 3954 3971 3955 static int sctp_setsockopt_default_prinfo(struct sock *sk,
-1
net/sctp/stream.c
··· 535 535 goto out; 536 536 } 537 537 538 - stream->incnt = incnt; 539 538 stream->outcnt = outcnt; 540 539 541 540 asoc->strreset_outstanding = !!out + !!in;
+7 -4
net/smc/af_smc.c
··· 127 127 smc = smc_sk(sk); 128 128 129 129 /* cleanup for a dangling non-blocking connect */ 130 + if (smc->connect_info && sk->sk_state == SMC_INIT) 131 + tcp_abort(smc->clcsock->sk, ECONNABORTED); 130 132 flush_work(&smc->connect_work); 131 133 kfree(smc->connect_info); 132 134 smc->connect_info = NULL; ··· 549 547 550 548 mutex_lock(&smc_create_lgr_pending); 551 549 local_contact = smc_conn_create(smc, false, aclc->hdr.flag, ibdev, 552 - ibport, &aclc->lcl, NULL, 0); 550 + ibport, ntoh24(aclc->qpn), &aclc->lcl, 551 + NULL, 0); 553 552 if (local_contact < 0) { 554 553 if (local_contact == -ENOMEM) 555 554 reason_code = SMC_CLC_DECL_MEM;/* insufficient memory*/ ··· 621 618 int rc = 0; 622 619 623 620 mutex_lock(&smc_create_lgr_pending); 624 - local_contact = smc_conn_create(smc, true, aclc->hdr.flag, NULL, 0, 621 + local_contact = smc_conn_create(smc, true, aclc->hdr.flag, NULL, 0, 0, 625 622 NULL, ismdev, aclc->gid); 626 623 if (local_contact < 0) 627 624 return smc_connect_abort(smc, SMC_CLC_DECL_MEM, 0); ··· 1086 1083 int *local_contact) 1087 1084 { 1088 1085 /* allocate connection / link group */ 1089 - *local_contact = smc_conn_create(new_smc, false, 0, ibdev, ibport, 1086 + *local_contact = smc_conn_create(new_smc, false, 0, ibdev, ibport, 0, 1090 1087 &pclc->lcl, NULL, 0); 1091 1088 if (*local_contact < 0) { 1092 1089 if (*local_contact == -ENOMEM) ··· 1110 1107 struct smc_clc_msg_smcd *pclc_smcd; 1111 1108 1112 1109 pclc_smcd = smc_get_clc_msg_smcd(pclc); 1113 - *local_contact = smc_conn_create(new_smc, true, 0, NULL, 0, NULL, 1110 + *local_contact = smc_conn_create(new_smc, true, 0, NULL, 0, 0, NULL, 1114 1111 ismdev, pclc_smcd->gid); 1115 1112 if (*local_contact < 0) { 1116 1113 if (*local_contact == -ENOMEM)
+15 -11
net/smc/smc_cdc.c
··· 81 81 sizeof(struct smc_cdc_msg) > SMC_WR_BUF_SIZE, 82 82 "must increase SMC_WR_BUF_SIZE to at least sizeof(struct smc_cdc_msg)"); 83 83 BUILD_BUG_ON_MSG( 84 - sizeof(struct smc_cdc_msg) != SMC_WR_TX_SIZE, 84 + offsetofend(struct smc_cdc_msg, reserved) > SMC_WR_TX_SIZE, 85 85 "must adapt SMC_WR_TX_SIZE to sizeof(struct smc_cdc_msg); if not all smc_wr upper layer protocols use the same message size any more, must start to set link->wr_tx_sges[i].length on each individual smc_wr_tx_send()"); 86 86 BUILD_BUG_ON_MSG( 87 87 sizeof(struct smc_cdc_tx_pend) > SMC_WR_TX_PEND_PRIV_SIZE, ··· 177 177 int smcd_cdc_msg_send(struct smc_connection *conn) 178 178 { 179 179 struct smc_sock *smc = container_of(conn, struct smc_sock, conn); 180 + union smc_host_cursor curs; 180 181 struct smcd_cdc_msg cdc; 181 182 int rc, diff; 182 183 183 184 memset(&cdc, 0, sizeof(cdc)); 184 185 cdc.common.type = SMC_CDC_MSG_TYPE; 185 - cdc.prod_wrap = conn->local_tx_ctrl.prod.wrap; 186 - cdc.prod_count = conn->local_tx_ctrl.prod.count; 187 - 188 - cdc.cons_wrap = conn->local_tx_ctrl.cons.wrap; 189 - cdc.cons_count = conn->local_tx_ctrl.cons.count; 190 - cdc.prod_flags = conn->local_tx_ctrl.prod_flags; 191 - cdc.conn_state_flags = conn->local_tx_ctrl.conn_state_flags; 186 + curs.acurs.counter = atomic64_read(&conn->local_tx_ctrl.prod.acurs); 187 + cdc.prod.wrap = curs.wrap; 188 + cdc.prod.count = curs.count; 189 + curs.acurs.counter = atomic64_read(&conn->local_tx_ctrl.cons.acurs); 190 + cdc.cons.wrap = curs.wrap; 191 + cdc.cons.count = curs.count; 192 + cdc.cons.prod_flags = conn->local_tx_ctrl.prod_flags; 193 + cdc.cons.conn_state_flags = conn->local_tx_ctrl.conn_state_flags; 192 194 rc = smcd_tx_ism_write(conn, &cdc, sizeof(cdc), 0, 1); 193 195 if (rc) 194 196 return rc; 195 - smc_curs_copy(&conn->rx_curs_confirmed, &conn->local_tx_ctrl.cons, 196 - conn); 197 + smc_curs_copy(&conn->rx_curs_confirmed, &curs, conn); 197 198 /* Calculate transmitted data and increment free send buffer space */ 
198 199 diff = smc_curs_diff(conn->sndbuf_desc->len, &conn->tx_curs_fin, 199 200 &conn->tx_curs_sent); ··· 332 331 static void smcd_cdc_rx_tsklet(unsigned long data) 333 332 { 334 333 struct smc_connection *conn = (struct smc_connection *)data; 334 + struct smcd_cdc_msg *data_cdc; 335 335 struct smcd_cdc_msg cdc; 336 336 struct smc_sock *smc; 337 337 338 338 if (!conn) 339 339 return; 340 340 341 - memcpy(&cdc, conn->rmb_desc->cpu_addr, sizeof(cdc)); 341 + data_cdc = (struct smcd_cdc_msg *)conn->rmb_desc->cpu_addr; 342 + smcd_curs_copy(&cdc.prod, &data_cdc->prod, conn); 343 + smcd_curs_copy(&cdc.cons, &data_cdc->cons, conn); 342 344 smc = container_of(conn, struct smc_sock, conn); 343 345 smc_cdc_msg_recv(smc, (struct smc_cdc_msg *)&cdc); 344 346 }
+45 -15
net/smc/smc_cdc.h
··· 48 48 struct smc_cdc_producer_flags prod_flags; 49 49 struct smc_cdc_conn_state_flags conn_state_flags; 50 50 u8 reserved[18]; 51 - } __packed; /* format defined in RFC7609 */ 51 + }; 52 + 53 + /* SMC-D cursor format */ 54 + union smcd_cdc_cursor { 55 + struct { 56 + u16 wrap; 57 + u32 count; 58 + struct smc_cdc_producer_flags prod_flags; 59 + struct smc_cdc_conn_state_flags conn_state_flags; 60 + } __packed; 61 + #ifdef KERNEL_HAS_ATOMIC64 62 + atomic64_t acurs; /* for atomic processing */ 63 + #else 64 + u64 acurs; /* for atomic processing */ 65 + #endif 66 + } __aligned(8); 52 67 53 68 /* CDC message for SMC-D */ 54 69 struct smcd_cdc_msg { 55 70 struct smc_wr_rx_hdr common; /* Type = 0xFE */ 56 71 u8 res1[7]; 57 - u16 prod_wrap; 58 - u32 prod_count; 59 - u8 res2[2]; 60 - u16 cons_wrap; 61 - u32 cons_count; 62 - struct smc_cdc_producer_flags prod_flags; 63 - struct smc_cdc_conn_state_flags conn_state_flags; 72 + union smcd_cdc_cursor prod; 73 + union smcd_cdc_cursor cons; 64 74 u8 res3[8]; 65 - } __packed; 75 + } __aligned(8); 66 76 67 77 static inline bool smc_cdc_rxed_any_close(struct smc_connection *conn) 68 78 { ··· 133 123 static inline void smc_curs_copy_net(union smc_cdc_cursor *tgt, 134 124 union smc_cdc_cursor *src, 135 125 struct smc_connection *conn) 126 + { 127 + #ifndef KERNEL_HAS_ATOMIC64 128 + unsigned long flags; 129 + 130 + spin_lock_irqsave(&conn->acurs_lock, flags); 131 + tgt->acurs = src->acurs; 132 + spin_unlock_irqrestore(&conn->acurs_lock, flags); 133 + #else 134 + atomic64_set(&tgt->acurs, atomic64_read(&src->acurs)); 135 + #endif 136 + } 137 + 138 + static inline void smcd_curs_copy(union smcd_cdc_cursor *tgt, 139 + union smcd_cdc_cursor *src, 140 + struct smc_connection *conn) 136 141 { 137 142 #ifndef KERNEL_HAS_ATOMIC64 138 143 unsigned long flags; ··· 247 222 static inline void smcd_cdc_msg_to_host(struct smc_host_cdc_msg *local, 248 223 struct smcd_cdc_msg *peer) 249 224 { 250 - local->prod.wrap = peer->prod_wrap; 251 - local->prod.count = peer->prod_count; 252 - local->cons.wrap = peer->cons_wrap; 253 - local->cons.count = peer->cons_count; 254 - local->prod_flags = peer->prod_flags; 255 - local->conn_state_flags = peer->conn_state_flags; 225 + union smc_host_cursor temp; 226 + 227 + temp.wrap = peer->prod.wrap; 228 + temp.count = peer->prod.count; 229 + atomic64_set(&local->prod.acurs, atomic64_read(&temp.acurs)); 230 + 231 + temp.wrap = peer->cons.wrap; 232 + temp.count = peer->cons.count; 233 + atomic64_set(&local->cons.acurs, atomic64_read(&temp.acurs)); 234 + local->prod_flags = peer->cons.prod_flags; 235 + local->conn_state_flags = peer->cons.conn_state_flags; 256 236 } 257 237 258 238 static inline void smc_cdc_msg_to_host(struct smc_host_cdc_msg *local,
+14 -6
net/smc/smc_core.c
··· 184 184 185 185 if (!lgr->is_smcd && lnk->state != SMC_LNK_INACTIVE) 186 186 smc_llc_link_inactive(lnk); 187 + if (lgr->is_smcd) 188 + smc_ism_signal_shutdown(lgr); 187 189 smc_lgr_free(lgr); 188 190 } 189 191 } ··· 487 485 } 488 486 489 487 /* Called when SMC-D device is terminated or peer is lost */ 490 - void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid) 488 + void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid, unsigned short vlan) 491 489 { 492 490 struct smc_link_group *lgr, *l; 493 491 LIST_HEAD(lgr_free_list); ··· 497 495 list_for_each_entry_safe(lgr, l, &smc_lgr_list.list, list) { 498 496 if (lgr->is_smcd && lgr->smcd == dev && 499 497 (!peer_gid || lgr->peer_gid == peer_gid) && 500 498 !list_empty(&lgr->list)) { 498 + (vlan == VLAN_VID_MASK || lgr->vlan_id == vlan)) { 501 499 __smc_lgr_terminate(lgr); 502 500 list_move(&lgr->list, &lgr_free_list); 503 501 } ··· 508 506 list_for_each_entry_safe(lgr, l, &lgr_free_list, list) { 509 507 list_del_init(&lgr->list); 510 508 cancel_delayed_work_sync(&lgr->free_work); 509 + if (!peer_gid && vlan == VLAN_VID_MASK) /* dev terminated? */ 510 + smc_ism_signal_shutdown(lgr); 511 511 smc_lgr_free(lgr); 512 512 } 513 513 } ··· 563 559 564 560 static bool smcr_lgr_match(struct smc_link_group *lgr, 565 561 struct smc_clc_msg_local *lcl, 566 - enum smc_lgr_role role) 562 + enum smc_lgr_role role, u32 clcqpn) 567 563 { 568 564 return !memcmp(lgr->peer_systemid, lcl->id_for_peer, 569 565 SMC_SYSTEMID_LEN) && ··· 571 567 SMC_GID_SIZE) && 572 568 !memcmp(lgr->lnk[SMC_SINGLE_LINK].peer_mac, lcl->mac, 573 569 sizeof(lcl->mac)) && 574 - lgr->role == role; 570 + lgr->role == role && 571 + (lgr->role == SMC_SERV || 572 + lgr->lnk[SMC_SINGLE_LINK].peer_qpn == clcqpn); 575 573 } 576 574 ··· 584 578 585 579 /* create a new SMC connection (and a new link group if necessary) */ 586 580 int smc_conn_create(struct smc_sock *smc, bool is_smcd, int srv_first_contact, 587 - struct smc_ib_device *smcibdev, u8 ibport, 581 + struct smc_ib_device *smcibdev, u8 ibport, u32 clcqpn, 588 582 struct smc_clc_msg_local *lcl, struct smcd_dev *smcd, 589 583 u64 peer_gid) 590 584 { ··· 609 603 list_for_each_entry(lgr, &smc_lgr_list.list, list) { 610 604 write_lock_bh(&lgr->conns_lock); 611 605 if ((is_smcd ? smcd_lgr_match(lgr, smcd, peer_gid) : 612 606 smcr_lgr_match(lgr, lcl, role, clcqpn)) && 613 607 !lgr->sync_err && 614 608 lgr->vlan_id == vlan_id && 615 609 (role == SMC_CLNT || ··· 1030 1024 smc_llc_link_inactive(lnk); 1031 1025 } 1032 1026 cancel_delayed_work_sync(&lgr->free_work); 1027 + if (lgr->is_smcd) 1028 + smc_ism_signal_shutdown(lgr); 1033 1029 smc_lgr_free(lgr); /* free link group */ 1034 1030 } 1035 1031 }
+3 -2
net/smc/smc_core.h
··· 247 247 void smc_lgr_forget(struct smc_link_group *lgr); 248 248 void smc_lgr_terminate(struct smc_link_group *lgr); 249 249 void smc_port_terminate(struct smc_ib_device *smcibdev, u8 ibport); 250 - void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid); 250 + void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid, 251 + unsigned short vlan); 251 252 int smc_buf_create(struct smc_sock *smc, bool is_smcd); 252 253 int smc_uncompress_bufsize(u8 compressed); 253 254 int smc_rmb_rtoken_handling(struct smc_connection *conn, ··· 263 262 264 263 void smc_conn_free(struct smc_connection *conn); 265 264 int smc_conn_create(struct smc_sock *smc, bool is_smcd, int srv_first_contact, 266 - struct smc_ib_device *smcibdev, u8 ibport, 265 + struct smc_ib_device *smcibdev, u8 ibport, u32 clcqpn, 267 266 struct smc_clc_msg_local *lcl, struct smcd_dev *smcd, 268 267 u64 peer_gid); 269 268 void smcd_conn_free(struct smc_connection *conn);
+32 -11
net/smc/smc_ism.c
··· 187 187 #define ISM_EVENT_REQUEST 0x0001 188 188 #define ISM_EVENT_RESPONSE 0x0002 189 189 #define ISM_EVENT_REQUEST_IR 0x00000001 190 + #define ISM_EVENT_CODE_SHUTDOWN 0x80 190 191 #define ISM_EVENT_CODE_TESTLINK 0x83 192 + 193 + union smcd_sw_event_info { 194 + u64 info; 195 + struct { 196 + u8 uid[SMC_LGR_ID_SIZE]; 197 + unsigned short vlan_id; 198 + u16 code; 199 + }; 200 + }; 191 201 192 202 static void smcd_handle_sw_event(struct smc_ism_event_work *wrk) 193 203 { 194 - union { 195 - u64 info; 196 - struct { 197 - u32 uid; 198 - unsigned short vlanid; 199 - u16 code; 200 - }; 201 - } ev_info; 204 + union smcd_sw_event_info ev_info; 202 205 206 + ev_info.info = wrk->event.info; 203 207 switch (wrk->event.code) { 208 + case ISM_EVENT_CODE_SHUTDOWN: /* Peer shut down DMBs */ 209 + smc_smcd_terminate(wrk->smcd, wrk->event.tok, ev_info.vlan_id); 210 + break; 204 211 case ISM_EVENT_CODE_TESTLINK: /* Activity timer */ 205 - ev_info.info = wrk->event.info; 206 212 if (ev_info.code == ISM_EVENT_REQUEST) { 207 213 ev_info.code = ISM_EVENT_RESPONSE; 208 214 wrk->smcd->ops->signal_event(wrk->smcd, ··· 221 215 } 222 216 } 223 217 218 + int smc_ism_signal_shutdown(struct smc_link_group *lgr) 219 + { 220 + int rc; 221 + union smcd_sw_event_info ev_info; 222 + 223 + memcpy(ev_info.uid, lgr->id, SMC_LGR_ID_SIZE); 224 + ev_info.vlan_id = lgr->vlan_id; 225 + ev_info.code = ISM_EVENT_REQUEST; 226 + rc = lgr->smcd->ops->signal_event(lgr->smcd, lgr->peer_gid, 227 + ISM_EVENT_REQUEST_IR, 228 + ISM_EVENT_CODE_SHUTDOWN, 229 + ev_info.info); 230 + return rc; 231 + } 232 + 224 233 /* worker for SMC-D events */ 225 234 static void smc_ism_event_work(struct work_struct *work) 226 235 { ··· 244 223 245 224 switch (wrk->event.type) { 246 225 case ISM_EVENT_GID: /* GID event, token is peer GID */ 247 - smc_smcd_terminate(wrk->smcd, wrk->event.tok); 226 + smc_smcd_terminate(wrk->smcd, wrk->event.tok, VLAN_VID_MASK); 248 227 break; 249 228 case ISM_EVENT_DMB: 250 229 break; ··· 310 289 spin_unlock(&smcd_dev_list.lock); 311 290 flush_workqueue(smcd->event_wq); 312 291 destroy_workqueue(smcd->event_wq); 313 - smc_smcd_terminate(smcd, 0); 292 + smc_smcd_terminate(smcd, 0, VLAN_VID_MASK); 314 293 315 294 device_del(&smcd->dev); 316 295 }
+1
net/smc/smc_ism.h
··· 45 45 int smc_ism_unregister_dmb(struct smcd_dev *dev, struct smc_buf_desc *dmb_desc); 46 46 int smc_ism_write(struct smcd_dev *dev, const struct smc_ism_position *pos, 47 47 void *data, size_t len); 48 + int smc_ism_signal_shutdown(struct smc_link_group *lgr); 48 49 #endif
+3 -1
net/smc/smc_wr.c
··· 215 215 216 216 pend = container_of(wr_pend_priv, struct smc_wr_tx_pend, priv); 217 217 if (pend->idx < link->wr_tx_cnt) { 218 + u32 idx = pend->idx; 219 + 218 220 /* clear the full struct smc_wr_tx_pend including .priv */ 219 221 memset(&link->wr_tx_pends[pend->idx], 0, 220 222 sizeof(link->wr_tx_pends[pend->idx])); 221 223 memset(&link->wr_tx_bufs[pend->idx], 0, 222 224 sizeof(link->wr_tx_bufs[pend->idx])); 223 - test_and_clear_bit(pend->idx, link->wr_tx_mask); 225 + test_and_clear_bit(idx, link->wr_tx_mask); 224 226 return 1; 225 227 } 226 228
+1 -1
net/socket.c
··· 853 853 struct socket *sock = file->private_data; 854 854 855 855 if (unlikely(!sock->ops->splice_read)) 856 - return -EINVAL; 856 + return generic_file_splice_read(file, ppos, pipe, len, flags); 857 857 858 858 return sock->ops->splice_read(sock, ppos, pipe, len, flags); 859 859 }
+1 -7
net/sunrpc/auth_generic.c
··· 281 281 { 282 282 struct auth_cred *acred = &container_of(cred, struct generic_cred, 283 283 gc_base)->acred; 284 - bool ret; 285 - 286 - get_rpccred(cred); 287 - ret = test_bit(RPC_CRED_KEY_EXPIRE_SOON, &acred->ac_flags); 288 - put_rpccred(cred); 289 - 290 - return ret; 284 + return test_bit(RPC_CRED_KEY_EXPIRE_SOON, &acred->ac_flags); 291 285 } 292 286 293 287 static const struct rpc_credops generic_credops = {
+42 -19
net/sunrpc/auth_gss/auth_gss.c
··· 1239 1239 return &gss_auth->rpc_auth; 1240 1240 } 1241 1241 1242 + static struct gss_cred * 1243 + gss_dup_cred(struct gss_auth *gss_auth, struct gss_cred *gss_cred) 1244 + { 1245 + struct gss_cred *new; 1246 + 1247 + /* Make a copy of the cred so that we can reference count it */ 1248 + new = kzalloc(sizeof(*gss_cred), GFP_NOIO); 1249 + if (new) { 1250 + struct auth_cred acred = { 1251 + .uid = gss_cred->gc_base.cr_uid, 1252 + }; 1253 + struct gss_cl_ctx *ctx = 1254 + rcu_dereference_protected(gss_cred->gc_ctx, 1); 1255 + 1256 + rpcauth_init_cred(&new->gc_base, &acred, 1257 + &gss_auth->rpc_auth, 1258 + &gss_nullops); 1259 + new->gc_base.cr_flags = 1UL << RPCAUTH_CRED_UPTODATE; 1260 + new->gc_service = gss_cred->gc_service; 1261 + new->gc_principal = gss_cred->gc_principal; 1262 + kref_get(&gss_auth->kref); 1263 + rcu_assign_pointer(new->gc_ctx, ctx); 1264 + gss_get_ctx(ctx); 1265 + } 1266 + return new; 1267 + } 1268 + 1242 1269 /* 1243 - * gss_destroying_context will cause the RPCSEC_GSS to send a NULL RPC call 1270 + * gss_send_destroy_context will cause the RPCSEC_GSS to send a NULL RPC call 1244 1271 * to the server with the GSS control procedure field set to 1245 1272 * RPC_GSS_PROC_DESTROY. This should normally cause the server to release 1246 1273 * all RPCSEC_GSS state associated with that context. 
1247 1274 */ 1248 - static int 1249 - gss_destroying_context(struct rpc_cred *cred) 1275 + static void 1276 + gss_send_destroy_context(struct rpc_cred *cred) 1250 1277 { 1251 1278 struct gss_cred *gss_cred = container_of(cred, struct gss_cred, gc_base); 1252 1279 struct gss_auth *gss_auth = container_of(cred->cr_auth, struct gss_auth, rpc_auth); 1253 1280 struct gss_cl_ctx *ctx = rcu_dereference_protected(gss_cred->gc_ctx, 1); 1281 + struct gss_cred *new; 1254 1282 struct rpc_task *task; 1255 1283 1256 - if (test_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags) == 0) 1257 - return 0; 1284 + new = gss_dup_cred(gss_auth, gss_cred); 1285 + if (new) { 1286 + ctx->gc_proc = RPC_GSS_PROC_DESTROY; 1258 1287 1259 - ctx->gc_proc = RPC_GSS_PROC_DESTROY; 1260 - cred->cr_ops = &gss_nullops; 1288 + task = rpc_call_null(gss_auth->client, &new->gc_base, 1289 + RPC_TASK_ASYNC|RPC_TASK_SOFT); 1290 + if (!IS_ERR(task)) 1291 + rpc_put_task(task); 1261 1292 1262 - /* Take a reference to ensure the cred will be destroyed either 1263 - * by the RPC call or by the put_rpccred() below */ 1264 - get_rpccred(cred); 1265 - 1266 - task = rpc_call_null(gss_auth->client, cred, RPC_TASK_ASYNC|RPC_TASK_SOFT); 1267 - if (!IS_ERR(task)) 1268 - rpc_put_task(task); 1269 - 1270 - put_rpccred(cred); 1271 - return 1; 1293 + put_rpccred(&new->gc_base); 1294 + } 1272 1295 } 1273 1296 1274 1297 /* gss_destroy_cred (and gss_free_ctx) are used to clean up after failure ··· 1353 1330 gss_destroy_cred(struct rpc_cred *cred) 1354 1331 { 1355 1332 1356 - if (gss_destroying_context(cred)) 1357 - return; 1333 + if (test_and_clear_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags) != 0) 1334 + gss_send_destroy_context(cred); 1358 1335 gss_destroy_nullcred(cred); 1359 1336 } 1360 1337
+3 -4
net/sunrpc/xdr.c
··· 546 546 static __be32 *xdr_get_next_encode_buffer(struct xdr_stream *xdr, 547 547 size_t nbytes) 548 548 { 549 - static __be32 *p; 549 + __be32 *p; 550 550 int space_left; 551 551 int frag1bytes, frag2bytes; 552 552 ··· 673 673 WARN_ON_ONCE(xdr->iov); 674 674 return; 675 675 } 676 - if (fraglen) { 676 + if (fraglen) 677 677 xdr->end = head->iov_base + head->iov_len; 678 - xdr->page_ptr--; 679 - } 680 678 /* (otherwise assume xdr->end is already set) */ 679 + xdr->page_ptr--; 681 680 head->iov_len = len; 682 681 buf->len = len; 683 682 xdr->p = head->iov_base + head->iov_len;
+10 -9
net/tipc/discover.c
··· 166 166 167 167 /* Apply trial address if we just left trial period */ 168 168 if (!trial && !self) { 169 - tipc_net_finalize(net, tn->trial_addr); 169 + tipc_sched_net_finalize(net, tn->trial_addr); 170 + msg_set_prevnode(buf_msg(d->skb), tn->trial_addr); 170 171 msg_set_type(buf_msg(d->skb), DSC_REQ_MSG); 171 172 } 172 173 ··· 301 300 goto exit; 302 301 } 303 302 304 - /* Trial period over ? */ 305 - if (!time_before(jiffies, tn->addr_trial_end)) { 306 - /* Did we just leave it ? */ 307 - if (!tipc_own_addr(net)) 308 - tipc_net_finalize(net, tn->trial_addr); 309 - 310 - msg_set_type(buf_msg(d->skb), DSC_REQ_MSG); 311 - msg_set_prevnode(buf_msg(d->skb), tipc_own_addr(net)); 303 + /* Did we just leave trial period ? */ 304 + if (!time_before(jiffies, tn->addr_trial_end) && !tipc_own_addr(net)) { 305 + mod_timer(&d->timer, jiffies + TIPC_DISC_INIT); 306 + spin_unlock_bh(&d->lock); 307 + tipc_sched_net_finalize(net, tn->trial_addr); 308 + return; 312 309 } 313 310 314 311 /* Adjust timeout interval according to discovery phase */ ··· 318 319 d->timer_intv = TIPC_DISC_SLOW; 319 320 else if (!d->num_nodes && d->timer_intv > TIPC_DISC_FAST) 320 321 d->timer_intv = TIPC_DISC_FAST; 322 + msg_set_type(buf_msg(d->skb), DSC_REQ_MSG); 323 + msg_set_prevnode(buf_msg(d->skb), tn->trial_addr); 321 324 } 322 325 323 326 mod_timer(&d->timer, jiffies + d->timer_intv);
+37 -8
net/tipc/net.c
··· 104 104 * - A local spin_lock protecting the queue of subscriber events. 105 105 */ 106 106 107 + struct tipc_net_work { 108 + struct work_struct work; 109 + struct net *net; 110 + u32 addr; 111 + }; 112 + 113 + static void tipc_net_finalize(struct net *net, u32 addr); 114 + 107 115 int tipc_net_init(struct net *net, u8 *node_id, u32 addr) 108 116 { 109 117 if (tipc_own_id(net)) { ··· 127 119 return 0; 128 120 } 129 121 130 - void tipc_net_finalize(struct net *net, u32 addr) 122 + static void tipc_net_finalize(struct net *net, u32 addr) 131 123 { 132 124 struct tipc_net *tn = tipc_net(net); 133 125 134 - if (!cmpxchg(&tn->node_addr, 0, addr)) { 135 - tipc_set_node_addr(net, addr); 136 - tipc_named_reinit(net); 137 - tipc_sk_reinit(net); 138 - tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr, 139 - TIPC_CLUSTER_SCOPE, 0, addr); 140 - } 126 + if (cmpxchg(&tn->node_addr, 0, addr)) 127 + return; 128 + tipc_set_node_addr(net, addr); 129 + tipc_named_reinit(net); 130 + tipc_sk_reinit(net); 131 + tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr, 132 + TIPC_CLUSTER_SCOPE, 0, addr); 133 + } 134 + 135 + static void tipc_net_finalize_work(struct work_struct *work) 136 + { 137 + struct tipc_net_work *fwork; 138 + 139 + fwork = container_of(work, struct tipc_net_work, work); 140 + tipc_net_finalize(fwork->net, fwork->addr); 141 + kfree(fwork); 142 + } 143 + 144 + void tipc_sched_net_finalize(struct net *net, u32 addr) 145 + { 146 + struct tipc_net_work *fwork = kzalloc(sizeof(*fwork), GFP_ATOMIC); 147 + 148 + if (!fwork) 149 + return; 150 + INIT_WORK(&fwork->work, tipc_net_finalize_work); 151 + fwork->net = net; 152 + fwork->addr = addr; 153 + schedule_work(&fwork->work); 141 154 } 142 155 143 156 void tipc_net_stop(struct net *net)
+1 -1
net/tipc/net.h
··· 42 42 extern const struct nla_policy tipc_nl_net_policy[]; 43 43 44 44 int tipc_net_init(struct net *net, u8 *node_id, u32 addr); 45 - void tipc_net_finalize(struct net *net, u32 addr); 45 + void tipc_sched_net_finalize(struct net *net, u32 addr); 46 46 void tipc_net_stop(struct net *net); 47 47 int tipc_nl_net_dump(struct sk_buff *skb, struct netlink_callback *cb); 48 48 int tipc_nl_net_set(struct sk_buff *skb, struct genl_info *info);
+11 -4
net/tipc/socket.c
··· 1555 1555 /** 1556 1556 * tipc_sk_anc_data_recv - optionally capture ancillary data for received message 1557 1557 * @m: descriptor for message info 1558 - * @msg: received message header 1558 + * @skb: received message buffer 1559 1559 * @tsk: TIPC port associated with message 1560 1560 * 1561 1561 * Note: Ancillary data is not captured if not requested by receiver. 1562 1562 * 1563 1563 * Returns 0 if successful, otherwise errno 1564 1564 */ 1565 - static int tipc_sk_anc_data_recv(struct msghdr *m, struct tipc_msg *msg, 1565 + static int tipc_sk_anc_data_recv(struct msghdr *m, struct sk_buff *skb, 1566 1566 struct tipc_sock *tsk) 1567 1567 { 1568 + struct tipc_msg *msg; 1568 1569 u32 anc_data[3]; 1569 1570 u32 err; 1570 1571 u32 dest_type; ··· 1574 1573 1575 1574 if (likely(m->msg_controllen == 0)) 1576 1575 return 0; 1576 + msg = buf_msg(skb); 1577 1577 1578 1578 /* Optionally capture errored message object(s) */ 1579 1579 err = msg ? msg_errcode(msg) : 0; ··· 1585 1583 if (res) 1586 1584 return res; 1587 1585 if (anc_data[1]) { 1586 + if (skb_linearize(skb)) 1587 + return -ENOMEM; 1588 + msg = buf_msg(skb); 1588 1589 res = put_cmsg(m, SOL_TIPC, TIPC_RETDATA, anc_data[1], 1589 1590 msg_data(msg)); 1590 1591 if (res) ··· 1749 1744 1750 1745 /* Collect msg meta data, including error code and rejected data */ 1751 1746 tipc_sk_set_orig_addr(m, skb); 1752 - rc = tipc_sk_anc_data_recv(m, hdr, tsk); 1747 + rc = tipc_sk_anc_data_recv(m, skb, tsk); 1753 1748 if (unlikely(rc)) 1754 1749 goto exit; 1750 + hdr = buf_msg(skb); 1755 1751 1756 1752 /* Capture data if non-error msg, otherwise just set return value */ 1757 1753 if (likely(!err)) { ··· 1862 1856 /* Collect msg meta data, incl. error code and rejected data */ 1863 1857 if (!copied) { 1864 1858 tipc_sk_set_orig_addr(m, skb); 1865 - rc = tipc_sk_anc_data_recv(m, hdr, tsk); 1859 + rc = tipc_sk_anc_data_recv(m, skb, tsk); 1866 1860 if (rc) 1867 1861 break; 1862 + hdr = buf_msg(skb); 1868 1863 } 1869 1864 1870 1865 /* Copy data if msg ok, otherwise return error/partial data */
+1 -1
scripts/faddr2line
··· 71 71 72 72 # Try to figure out the source directory prefix so we can remove it from the 73 73 # addr2line output. HACK ALERT: This assumes that start_kernel() is in 74 - # kernel/init.c! This only works for vmlinux. Otherwise it falls back to 74 + # init/main.c! This only works for vmlinux. Otherwise it falls back to 75 75 # printing the absolute path. 76 76 find_dir_prefix() { 77 77 local objfile=$1
-1
scripts/spdxcheck.py
··· 168 168 self.curline = 0 169 169 try: 170 170 for line in fd: 171 - line = line.decode(locale.getpreferredencoding(False), errors='ignore') 172 171 self.curline += 1 173 172 if self.curline > maxlines: 174 173 break
+1
security/integrity/digsig_asymmetric.c
··· 106 106 107 107 pks.pkey_algo = "rsa"; 108 108 pks.hash_algo = hash_algo_name[hdr->hash_algo]; 109 + pks.encoding = "pkcs1"; 109 110 pks.digest = (u8 *)data; 110 111 pks.digest_size = datalen; 111 112 pks.s = hdr->sig;
+3
security/selinux/hooks.c
··· 5318 5318 addr_buf = address; 5319 5319 5320 5320 while (walk_size < addrlen) { 5321 + if (walk_size + sizeof(sa_family_t) > addrlen) 5322 + return -EINVAL; 5323 + 5321 5324 addr = addr_buf; 5322 5325 switch (addr->sa_family) { 5323 5326 case AF_UNSPEC:
+7 -3
security/selinux/ss/mls.c
··· 245 245 char *rangep[2]; 246 246 247 247 if (!pol->mls_enabled) { 248 - if ((def_sid != SECSID_NULL && oldc) || (*scontext) == '\0') 249 - return 0; 250 - return -EINVAL; 248 + /* 249 + * With no MLS, only return -EINVAL if there is a MLS field 250 + * and it did not come from an xattr. 251 + */ 252 + if (oldc && def_sid == SECSID_NULL) 253 + return -EINVAL; 254 + return 0; 251 255 } 252 256 253 257 /*
+3 -3
sound/core/oss/pcm_oss.c
··· 1062 1062 runtime->oss.channels = params_channels(params); 1063 1063 runtime->oss.rate = params_rate(params); 1064 1064 1065 - vfree(runtime->oss.buffer); 1066 - runtime->oss.buffer = vmalloc(runtime->oss.period_bytes); 1065 + kvfree(runtime->oss.buffer); 1066 + runtime->oss.buffer = kvzalloc(runtime->oss.period_bytes, GFP_KERNEL); 1067 1067 if (!runtime->oss.buffer) { 1068 1068 err = -ENOMEM; 1069 1069 goto failure; ··· 2328 2328 { 2329 2329 struct snd_pcm_runtime *runtime; 2330 2330 runtime = substream->runtime; 2331 - vfree(runtime->oss.buffer); 2331 + kvfree(runtime->oss.buffer); 2332 2332 runtime->oss.buffer = NULL; 2333 2333 #ifdef CONFIG_SND_PCM_OSS_PLUGINS 2334 2334 snd_pcm_oss_plugin_clear(substream);
+3 -3
sound/core/oss/pcm_plugin.c
··· 66 66 return -ENXIO; 67 67 size /= 8; 68 68 if (plugin->buf_frames < frames) { 69 - vfree(plugin->buf); 70 - plugin->buf = vmalloc(size); 69 + kvfree(plugin->buf); 70 + plugin->buf = kvzalloc(size, GFP_KERNEL); 71 71 plugin->buf_frames = frames; 72 72 } 73 73 if (!plugin->buf) { ··· 191 191 if (plugin->private_free) 192 192 plugin->private_free(plugin); 193 193 kfree(plugin->buf_channels); 194 - vfree(plugin->buf); 194 + kvfree(plugin->buf); 195 195 kfree(plugin); 196 196 return 0; 197 197 }
+3 -2
sound/pci/hda/patch_ca0132.c
··· 1177 1177 SND_PCI_QUIRK(0x1028, 0x0708, "Alienware 15 R2 2016", QUIRK_ALIENWARE), 1178 1178 SND_PCI_QUIRK(0x1102, 0x0010, "Sound Blaster Z", QUIRK_SBZ), 1179 1179 SND_PCI_QUIRK(0x1102, 0x0023, "Sound Blaster Z", QUIRK_SBZ), 1180 + SND_PCI_QUIRK(0x1102, 0x0033, "Sound Blaster ZxR", QUIRK_SBZ), 1180 1181 SND_PCI_QUIRK(0x1458, 0xA016, "Recon3Di", QUIRK_R3DI), 1181 1182 SND_PCI_QUIRK(0x1458, 0xA026, "Gigabyte G1.Sniper Z97", QUIRK_R3DI), 1182 1183 SND_PCI_QUIRK(0x1458, 0xA036, "Gigabyte GA-Z170X-Gaming 7", QUIRK_R3DI), ··· 8414 8413 8415 8414 snd_hda_power_down(codec); 8416 8415 if (spec->mem_base) 8417 - iounmap(spec->mem_base); 8416 + pci_iounmap(codec->bus->pci, spec->mem_base); 8418 8417 kfree(spec->spec_init_verbs); 8419 8418 kfree(codec->spec); 8420 8419 } ··· 8489 8488 break; 8490 8489 case QUIRK_AE5: 8491 8490 codec_dbg(codec, "%s: QUIRK_AE5 applied.\n", __func__); 8492 - snd_hda_apply_pincfgs(codec, r3di_pincfgs); 8491 + snd_hda_apply_pincfgs(codec, ae5_pincfgs); 8493 8492 break; 8494 8493 } 8495 8494
+1
sound/pci/hda/patch_realtek.c
··· 6481 6481 SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), 6482 6482 SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), 6483 6483 SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC), 6484 + SND_PCI_QUIRK(0x103c, 0x820d, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3), 6484 6485 SND_PCI_QUIRK(0x103c, 0x8256, "HP", ALC221_FIXUP_HP_FRONT_MIC), 6485 6486 SND_PCI_QUIRK(0x103c, 0x827e, "HP x360", ALC295_FIXUP_HP_X360), 6486 6487 SND_PCI_QUIRK(0x103c, 0x82bf, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+6 -6
tools/power/cpupower/Makefile
··· 129 129 WARNINGS += $(call cc-supports,-Wdeclaration-after-statement) 130 130 WARNINGS += -Wshadow 131 131 132 - CFLAGS += -DVERSION=\"$(VERSION)\" -DPACKAGE=\"$(PACKAGE)\" \ 132 + override CFLAGS += -DVERSION=\"$(VERSION)\" -DPACKAGE=\"$(PACKAGE)\" \ 133 133 -DPACKAGE_BUGREPORT=\"$(PACKAGE_BUGREPORT)\" -D_GNU_SOURCE 134 134 135 135 UTIL_OBJS = utils/helpers/amd.o utils/helpers/msr.o \ ··· 156 156 LIB_OBJS = lib/cpufreq.o lib/cpupower.o lib/cpuidle.o 157 157 LIB_OBJS := $(addprefix $(OUTPUT),$(LIB_OBJS)) 158 158 159 - CFLAGS += -pipe 159 + override CFLAGS += -pipe 160 160 161 161 ifeq ($(strip $(NLS)),true) 162 162 INSTALL_NLS += install-gmo 163 163 COMPILE_NLS += create-gmo 164 - CFLAGS += -DNLS 164 + override CFLAGS += -DNLS 165 165 endif 166 166 167 167 ifeq ($(strip $(CPUFREQ_BENCH)),true) ··· 175 175 UTIL_SRC += $(LIB_SRC) 176 176 endif 177 177 178 - CFLAGS += $(WARNINGS) 178 + override CFLAGS += $(WARNINGS) 179 179 180 180 ifeq ($(strip $(V)),false) 181 181 QUIET=@ ··· 188 188 189 189 # if DEBUG is enabled, then we do not strip or optimize 190 190 ifeq ($(strip $(DEBUG)),true) 191 - CFLAGS += -O1 -g -DDEBUG 191 + override CFLAGS += -O1 -g -DDEBUG 192 192 STRIPCMD = /bin/true -Since_we_are_debugging 193 193 else 194 - CFLAGS += $(OPTIMIZATION) -fomit-frame-pointer 194 + override CFLAGS += $(OPTIMIZATION) -fomit-frame-pointer 195 195 STRIPCMD = $(STRIP) -s --remove-section=.note --remove-section=.comment 196 196 endif 197 197
+1 -1
tools/power/cpupower/bench/Makefile
··· 9 9 ifeq ($(strip $(STATIC)),true) 10 10 LIBS = -L../ -L$(OUTPUT) -lm 11 11 OBJS = $(OUTPUT)main.o $(OUTPUT)parse.o $(OUTPUT)system.o $(OUTPUT)benchmark.o \ 12 - $(OUTPUT)../lib/cpufreq.o $(OUTPUT)../lib/sysfs.o 12 + $(OUTPUT)../lib/cpufreq.o $(OUTPUT)../lib/cpupower.o 13 13 else 14 14 LIBS = -L../ -L$(OUTPUT) -lm -lcpupower 15 15 OBJS = $(OUTPUT)main.o $(OUTPUT)parse.o $(OUTPUT)system.o $(OUTPUT)benchmark.o
+2 -2
tools/power/cpupower/debug/x86_64/Makefile
··· 13 13 default: all 14 14 15 15 $(OUTPUT)centrino-decode: ../i386/centrino-decode.c 16 - $(CC) $(CFLAGS) -o $@ $< 16 + $(CC) $(CFLAGS) -o $@ $(LDFLAGS) $< 17 17 18 18 $(OUTPUT)powernow-k8-decode: ../i386/powernow-k8-decode.c 19 - $(CC) $(CFLAGS) -o $@ $< 19 + $(CC) $(CFLAGS) -o $@ $(LDFLAGS) $< 20 20 21 21 all: $(OUTPUT)centrino-decode $(OUTPUT)powernow-k8-decode 22 22
+1 -1
tools/power/cpupower/lib/cpufreq.c
··· 28 28 29 29 snprintf(path, sizeof(path), PATH_TO_CPU "cpu%u/cpufreq/%s", 30 30 cpu, fname); 31 - return sysfs_read_file(path, buf, buflen); 31 + return cpupower_read_sysfs(path, buf, buflen); 32 32 } 33 33 34 34 /* helper function to write a new value to a /sys file */
+1 -1
tools/power/cpupower/lib/cpuidle.c
··· 319 319 320 320 snprintf(path, sizeof(path), PATH_TO_CPU "cpuidle/%s", fname); 321 321 322 - return sysfs_read_file(path, buf, buflen); 322 + return cpupower_read_sysfs(path, buf, buflen); 323 323 } 324 324 325 325
+2 -2
tools/power/cpupower/lib/cpupower.c
··· 15 15 #include "cpupower.h" 16 16 #include "cpupower_intern.h" 17 17 18 - unsigned int sysfs_read_file(const char *path, char *buf, size_t buflen) 18 + unsigned int cpupower_read_sysfs(const char *path, char *buf, size_t buflen) 19 19 { 20 20 int fd; 21 21 ssize_t numread; ··· 95 95 96 96 snprintf(path, sizeof(path), PATH_TO_CPU "cpu%u/topology/%s", 97 97 cpu, fname); 98 - if (sysfs_read_file(path, linebuf, MAX_LINE_LEN) == 0) 98 + if (cpupower_read_sysfs(path, linebuf, MAX_LINE_LEN) == 0) 99 99 return -1; 100 100 *result = strtol(linebuf, &endp, 0); 101 101 if (endp == linebuf || errno == ERANGE)
+1 -1
tools/power/cpupower/lib/cpupower_intern.h
··· 3 3 #define MAX_LINE_LEN 4096 4 4 #define SYSFS_PATH_MAX 255 5 5 6 - unsigned int sysfs_read_file(const char *path, char *buf, size_t buflen); 6 + unsigned int cpupower_read_sysfs(const char *path, char *buf, size_t buflen);
+4 -4
tools/testing/nvdimm/test/nfit.c
··· 140 140 [6] = NFIT_DIMM_HANDLE(1, 0, 0, 0, 1), 141 141 }; 142 142 143 - static unsigned long dimm_fail_cmd_flags[NUM_DCR]; 144 - static int dimm_fail_cmd_code[NUM_DCR]; 143 + static unsigned long dimm_fail_cmd_flags[ARRAY_SIZE(handle)]; 144 + static int dimm_fail_cmd_code[ARRAY_SIZE(handle)]; 145 145 146 146 static const struct nd_intel_smart smart_def = { 147 147 .flags = ND_INTEL_SMART_HEALTH_VALID ··· 205 205 unsigned long deadline; 206 206 spinlock_t lock; 207 207 } ars_state; 208 - struct device *dimm_dev[NUM_DCR]; 208 + struct device *dimm_dev[ARRAY_SIZE(handle)]; 209 209 struct nd_intel_smart *smart; 210 210 struct nd_intel_smart_threshold *smart_threshold; 211 211 struct badrange badrange; ··· 2680 2680 u32 nfit_handle = __to_nfit_memdev(nfit_mem)->device_handle; 2681 2681 int i; 2682 2682 2683 - for (i = 0; i < NUM_DCR; i++) 2683 + for (i = 0; i < ARRAY_SIZE(handle); i++) 2684 2684 if (nfit_handle == handle[i]) 2685 2685 dev_set_drvdata(nfit_test->dimm_dev[i], 2686 2686 nfit_mem);
+18 -3
tools/testing/selftests/powerpc/mm/wild_bctr.c
··· 47 47 return 0; 48 48 } 49 49 50 - #define REG_POISON 0x5a5aUL 51 - #define POISONED_REG(n) ((REG_POISON << 48) | ((n) << 32) | (REG_POISON << 16) | (n)) 50 + #define REG_POISON 0x5a5a 51 + #define POISONED_REG(n) ((((unsigned long)REG_POISON) << 48) | ((n) << 32) | \ 52 + (((unsigned long)REG_POISON) << 16) | (n)) 52 53 53 54 static inline void poison_regs(void) 54 55 { ··· 106 105 } 107 106 } 108 107 108 + #ifdef _CALL_AIXDESC 109 + struct opd { 110 + unsigned long ip; 111 + unsigned long toc; 112 + unsigned long env; 113 + }; 114 + static struct opd bad_opd = { 115 + .ip = BAD_NIP, 116 + }; 117 + #define BAD_FUNC (&bad_opd) 118 + #else 119 + #define BAD_FUNC BAD_NIP 120 + #endif 121 + 109 122 int test_wild_bctr(void) 110 123 { 111 124 int (*func_ptr)(void); ··· 148 133 149 134 poison_regs(); 150 135 151 - func_ptr = (int (*)(void))BAD_NIP; 136 + func_ptr = (int (*)(void))BAD_FUNC; 152 137 func_ptr(); 153 138 154 139 FAIL_IF(1); /* we didn't segv? */
+13 -5
tools/testing/selftests/tc-testing/tdc.py
··· 134 134 (rawout, serr) = proc.communicate() 135 135 136 136 if proc.returncode != 0 and len(serr) > 0: 137 - foutput = serr.decode("utf-8") 137 + foutput = serr.decode("utf-8", errors="ignore") 138 138 else: 139 - foutput = rawout.decode("utf-8") 139 + foutput = rawout.decode("utf-8", errors="ignore") 140 140 141 141 proc.stdout.close() 142 142 proc.stderr.close() ··· 169 169 file=sys.stderr) 170 170 print("\n{} *** Error message: \"{}\"".format(prefix, foutput), 171 171 file=sys.stderr) 172 + print("returncode {}; expected {}".format(proc.returncode, 173 + exit_codes)) 172 174 print("\n{} *** Aborting test run.".format(prefix), file=sys.stderr) 173 175 print("\n\n{} *** stdout ***".format(proc.stdout), file=sys.stderr) 174 176 print("\n\n{} *** stderr ***".format(proc.stderr), file=sys.stderr) ··· 197 195 print('-----> execute stage') 198 196 pm.call_pre_execute() 199 197 (p, procout) = exec_cmd(args, pm, 'execute', tidx["cmdUnderTest"]) 200 - exit_code = p.returncode 198 + if p: 199 + exit_code = p.returncode 200 + else: 201 + exit_code = None 202 + 201 203 pm.call_post_execute() 202 204 203 - if (exit_code != int(tidx["expExitCode"])): 205 + if (exit_code is None or exit_code != int(tidx["expExitCode"])): 204 206 result = False 205 - print("exit:", exit_code, int(tidx["expExitCode"])) 207 + print("exit: {!r}".format(exit_code)) 208 + print("exit: {}".format(int(tidx["expExitCode"]))) 209 + #print("exit: {!r} {}".format(exit_code, int(tidx["expExitCode"]))) 206 210 print(procout) 207 211 else: 208 212 if args.verbose > 0: